Oct  2 03:12:03 np0005465604 kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct  2 03:12:03 np0005465604 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  2 03:12:03 np0005465604 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 03:12:03 np0005465604 kernel: BIOS-provided physical RAM map:
Oct  2 03:12:03 np0005465604 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  2 03:12:03 np0005465604 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  2 03:12:03 np0005465604 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  2 03:12:03 np0005465604 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct  2 03:12:03 np0005465604 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct  2 03:12:03 np0005465604 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  2 03:12:03 np0005465604 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  2 03:12:03 np0005465604 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct  2 03:12:03 np0005465604 kernel: NX (Execute Disable) protection: active
Oct  2 03:12:03 np0005465604 kernel: APIC: Static calls initialized
Oct  2 03:12:03 np0005465604 kernel: SMBIOS 2.8 present.
Oct  2 03:12:03 np0005465604 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct  2 03:12:03 np0005465604 kernel: Hypervisor detected: KVM
Oct  2 03:12:03 np0005465604 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  2 03:12:03 np0005465604 kernel: kvm-clock: using sched offset of 4481226092 cycles
Oct  2 03:12:03 np0005465604 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  2 03:12:03 np0005465604 kernel: tsc: Detected 2799.886 MHz processor
Oct  2 03:12:03 np0005465604 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct  2 03:12:03 np0005465604 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  2 03:12:03 np0005465604 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  2 03:12:03 np0005465604 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct  2 03:12:03 np0005465604 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct  2 03:12:03 np0005465604 kernel: Using GB pages for direct mapping
Oct  2 03:12:03 np0005465604 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  2 03:12:03 np0005465604 kernel: ACPI: Early table checksum verification disabled
Oct  2 03:12:03 np0005465604 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct  2 03:12:03 np0005465604 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 03:12:03 np0005465604 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 03:12:03 np0005465604 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 03:12:03 np0005465604 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct  2 03:12:03 np0005465604 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 03:12:03 np0005465604 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 03:12:03 np0005465604 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct  2 03:12:03 np0005465604 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct  2 03:12:03 np0005465604 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct  2 03:12:03 np0005465604 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct  2 03:12:03 np0005465604 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct  2 03:12:03 np0005465604 kernel: No NUMA configuration found
Oct  2 03:12:03 np0005465604 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct  2 03:12:03 np0005465604 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct  2 03:12:03 np0005465604 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct  2 03:12:03 np0005465604 kernel: Zone ranges:
Oct  2 03:12:03 np0005465604 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  2 03:12:03 np0005465604 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  2 03:12:03 np0005465604 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct  2 03:12:03 np0005465604 kernel:  Device   empty
Oct  2 03:12:03 np0005465604 kernel: Movable zone start for each node
Oct  2 03:12:03 np0005465604 kernel: Early memory node ranges
Oct  2 03:12:03 np0005465604 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  2 03:12:03 np0005465604 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct  2 03:12:03 np0005465604 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct  2 03:12:03 np0005465604 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct  2 03:12:03 np0005465604 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  2 03:12:03 np0005465604 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  2 03:12:03 np0005465604 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  2 03:12:03 np0005465604 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  2 03:12:03 np0005465604 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  2 03:12:03 np0005465604 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  2 03:12:03 np0005465604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  2 03:12:03 np0005465604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  2 03:12:03 np0005465604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  2 03:12:03 np0005465604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  2 03:12:03 np0005465604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  2 03:12:03 np0005465604 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  2 03:12:03 np0005465604 kernel: TSC deadline timer available
Oct  2 03:12:03 np0005465604 kernel: CPU topo: Max. logical packages:   8
Oct  2 03:12:03 np0005465604 kernel: CPU topo: Max. logical dies:       8
Oct  2 03:12:03 np0005465604 kernel: CPU topo: Max. dies per package:   1
Oct  2 03:12:03 np0005465604 kernel: CPU topo: Max. threads per core:   1
Oct  2 03:12:03 np0005465604 kernel: CPU topo: Num. cores per package:     1
Oct  2 03:12:03 np0005465604 kernel: CPU topo: Num. threads per package:   1
Oct  2 03:12:03 np0005465604 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct  2 03:12:03 np0005465604 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  2 03:12:03 np0005465604 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  2 03:12:03 np0005465604 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  2 03:12:03 np0005465604 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  2 03:12:03 np0005465604 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  2 03:12:03 np0005465604 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct  2 03:12:03 np0005465604 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct  2 03:12:03 np0005465604 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  2 03:12:03 np0005465604 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  2 03:12:03 np0005465604 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  2 03:12:03 np0005465604 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct  2 03:12:03 np0005465604 kernel: Booting paravirtualized kernel on KVM
Oct  2 03:12:03 np0005465604 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  2 03:12:03 np0005465604 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct  2 03:12:03 np0005465604 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct  2 03:12:03 np0005465604 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct  2 03:12:03 np0005465604 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 03:12:03 np0005465604 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  2 03:12:03 np0005465604 kernel: random: crng init done
Oct  2 03:12:03 np0005465604 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: Fallback order for Node 0: 0 
Oct  2 03:12:03 np0005465604 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  2 03:12:03 np0005465604 kernel: Policy zone: Normal
Oct  2 03:12:03 np0005465604 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  2 03:12:03 np0005465604 kernel: software IO TLB: area num 8.
Oct  2 03:12:03 np0005465604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct  2 03:12:03 np0005465604 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  2 03:12:03 np0005465604 kernel: ftrace: allocated 193 pages with 3 groups
Oct  2 03:12:03 np0005465604 kernel: Dynamic Preempt: voluntary
Oct  2 03:12:03 np0005465604 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  2 03:12:03 np0005465604 kernel: rcu: 	RCU event tracing is enabled.
Oct  2 03:12:03 np0005465604 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct  2 03:12:03 np0005465604 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  2 03:12:03 np0005465604 kernel: 	Rude variant of Tasks RCU enabled.
Oct  2 03:12:03 np0005465604 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  2 03:12:03 np0005465604 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  2 03:12:03 np0005465604 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct  2 03:12:03 np0005465604 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 03:12:03 np0005465604 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 03:12:03 np0005465604 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 03:12:03 np0005465604 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct  2 03:12:03 np0005465604 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  2 03:12:03 np0005465604 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  2 03:12:03 np0005465604 kernel: Console: colour VGA+ 80x25
Oct  2 03:12:03 np0005465604 kernel: printk: console [ttyS0] enabled
Oct  2 03:12:03 np0005465604 kernel: ACPI: Core revision 20230331
Oct  2 03:12:03 np0005465604 kernel: APIC: Switch to symmetric I/O mode setup
Oct  2 03:12:03 np0005465604 kernel: x2apic enabled
Oct  2 03:12:03 np0005465604 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  2 03:12:03 np0005465604 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  2 03:12:03 np0005465604 kernel: Calibrating delay loop (skipped) preset value.. 5599.77 BogoMIPS (lpj=2799886)
Oct  2 03:12:03 np0005465604 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  2 03:12:03 np0005465604 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  2 03:12:03 np0005465604 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  2 03:12:03 np0005465604 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  2 03:12:03 np0005465604 kernel: Spectre V2 : Mitigation: Retpolines
Oct  2 03:12:03 np0005465604 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  2 03:12:03 np0005465604 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct  2 03:12:03 np0005465604 kernel: RETBleed: Mitigation: untrained return thunk
Oct  2 03:12:03 np0005465604 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  2 03:12:03 np0005465604 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  2 03:12:03 np0005465604 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  2 03:12:03 np0005465604 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  2 03:12:03 np0005465604 kernel: x86/bugs: return thunk changed
Oct  2 03:12:03 np0005465604 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  2 03:12:03 np0005465604 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  2 03:12:03 np0005465604 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  2 03:12:03 np0005465604 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  2 03:12:03 np0005465604 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  2 03:12:03 np0005465604 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct  2 03:12:03 np0005465604 kernel: Freeing SMP alternatives memory: 40K
Oct  2 03:12:03 np0005465604 kernel: pid_max: default: 32768 minimum: 301
Oct  2 03:12:03 np0005465604 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  2 03:12:03 np0005465604 kernel: landlock: Up and running.
Oct  2 03:12:03 np0005465604 kernel: Yama: becoming mindful.
Oct  2 03:12:03 np0005465604 kernel: SELinux:  Initializing.
Oct  2 03:12:03 np0005465604 kernel: LSM support for eBPF active
Oct  2 03:12:03 np0005465604 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct  2 03:12:03 np0005465604 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  2 03:12:03 np0005465604 kernel: ... version:                0
Oct  2 03:12:03 np0005465604 kernel: ... bit width:              48
Oct  2 03:12:03 np0005465604 kernel: ... generic registers:      6
Oct  2 03:12:03 np0005465604 kernel: ... value mask:             0000ffffffffffff
Oct  2 03:12:03 np0005465604 kernel: ... max period:             00007fffffffffff
Oct  2 03:12:03 np0005465604 kernel: ... fixed-purpose events:   0
Oct  2 03:12:03 np0005465604 kernel: ... event mask:             000000000000003f
Oct  2 03:12:03 np0005465604 kernel: signal: max sigframe size: 1776
Oct  2 03:12:03 np0005465604 kernel: rcu: Hierarchical SRCU implementation.
Oct  2 03:12:03 np0005465604 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  2 03:12:03 np0005465604 kernel: smp: Bringing up secondary CPUs ...
Oct  2 03:12:03 np0005465604 kernel: smpboot: x86: Booting SMP configuration:
Oct  2 03:12:03 np0005465604 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct  2 03:12:03 np0005465604 kernel: smp: Brought up 1 node, 8 CPUs
Oct  2 03:12:03 np0005465604 kernel: smpboot: Total of 8 processors activated (44798.17 BogoMIPS)
Oct  2 03:12:03 np0005465604 kernel: node 0 deferred pages initialised in 23ms
Oct  2 03:12:03 np0005465604 kernel: Memory: 7765540K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616504K reserved, 0K cma-reserved)
Oct  2 03:12:03 np0005465604 kernel: devtmpfs: initialized
Oct  2 03:12:03 np0005465604 kernel: x86/mm: Memory block size: 128MB
Oct  2 03:12:03 np0005465604 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  2 03:12:03 np0005465604 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: pinctrl core: initialized pinctrl subsystem
Oct  2 03:12:03 np0005465604 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  2 03:12:03 np0005465604 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  2 03:12:03 np0005465604 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  2 03:12:03 np0005465604 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  2 03:12:03 np0005465604 kernel: audit: initializing netlink subsys (disabled)
Oct  2 03:12:03 np0005465604 kernel: audit: type=2000 audit(1759389121.466:1): state=initialized audit_enabled=0 res=1
Oct  2 03:12:03 np0005465604 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  2 03:12:03 np0005465604 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  2 03:12:03 np0005465604 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  2 03:12:03 np0005465604 kernel: cpuidle: using governor menu
Oct  2 03:12:03 np0005465604 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  2 03:12:03 np0005465604 kernel: PCI: Using configuration type 1 for base access
Oct  2 03:12:03 np0005465604 kernel: PCI: Using configuration type 1 for extended access
Oct  2 03:12:03 np0005465604 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  2 03:12:03 np0005465604 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  2 03:12:03 np0005465604 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  2 03:12:03 np0005465604 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  2 03:12:03 np0005465604 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  2 03:12:03 np0005465604 kernel: Demotion targets for Node 0: null
Oct  2 03:12:03 np0005465604 kernel: cryptd: max_cpu_qlen set to 1000
Oct  2 03:12:03 np0005465604 kernel: ACPI: Added _OSI(Module Device)
Oct  2 03:12:03 np0005465604 kernel: ACPI: Added _OSI(Processor Device)
Oct  2 03:12:03 np0005465604 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  2 03:12:03 np0005465604 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  2 03:12:03 np0005465604 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  2 03:12:03 np0005465604 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  2 03:12:03 np0005465604 kernel: ACPI: Interpreter enabled
Oct  2 03:12:03 np0005465604 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct  2 03:12:03 np0005465604 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  2 03:12:03 np0005465604 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  2 03:12:03 np0005465604 kernel: PCI: Using E820 reservations for host bridge windows
Oct  2 03:12:03 np0005465604 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct  2 03:12:03 np0005465604 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  2 03:12:03 np0005465604 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [3] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [4] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [5] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [6] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [7] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [8] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [9] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [10] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [11] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [12] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [13] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [14] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [15] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [16] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [17] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [18] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [19] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [20] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [21] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [22] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [23] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [24] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [25] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [26] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [27] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [28] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [29] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [30] registered
Oct  2 03:12:03 np0005465604 kernel: acpiphp: Slot [31] registered
Oct  2 03:12:03 np0005465604 kernel: PCI host bridge to bus 0000:00
Oct  2 03:12:03 np0005465604 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  2 03:12:03 np0005465604 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  2 03:12:03 np0005465604 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  2 03:12:03 np0005465604 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  2 03:12:03 np0005465604 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct  2 03:12:03 np0005465604 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct  2 03:12:03 np0005465604 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  2 03:12:03 np0005465604 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  2 03:12:03 np0005465604 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  2 03:12:03 np0005465604 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  2 03:12:03 np0005465604 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  2 03:12:03 np0005465604 kernel: iommu: Default domain type: Translated
Oct  2 03:12:03 np0005465604 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  2 03:12:03 np0005465604 kernel: SCSI subsystem initialized
Oct  2 03:12:03 np0005465604 kernel: ACPI: bus type USB registered
Oct  2 03:12:03 np0005465604 kernel: usbcore: registered new interface driver usbfs
Oct  2 03:12:03 np0005465604 kernel: usbcore: registered new interface driver hub
Oct  2 03:12:03 np0005465604 kernel: usbcore: registered new device driver usb
Oct  2 03:12:03 np0005465604 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  2 03:12:03 np0005465604 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  2 03:12:03 np0005465604 kernel: PTP clock support registered
Oct  2 03:12:03 np0005465604 kernel: EDAC MC: Ver: 3.0.0
Oct  2 03:12:03 np0005465604 kernel: NetLabel: Initializing
Oct  2 03:12:03 np0005465604 kernel: NetLabel:  domain hash size = 128
Oct  2 03:12:03 np0005465604 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  2 03:12:03 np0005465604 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  2 03:12:03 np0005465604 kernel: PCI: Using ACPI for IRQ routing
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  2 03:12:03 np0005465604 kernel: vgaarb: loaded
Oct  2 03:12:03 np0005465604 kernel: clocksource: Switched to clocksource kvm-clock
Oct  2 03:12:03 np0005465604 kernel: VFS: Disk quotas dquot_6.6.0
Oct  2 03:12:03 np0005465604 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  2 03:12:03 np0005465604 kernel: pnp: PnP ACPI init
Oct  2 03:12:03 np0005465604 kernel: pnp: PnP ACPI: found 5 devices
Oct  2 03:12:03 np0005465604 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  2 03:12:03 np0005465604 kernel: NET: Registered PF_INET protocol family
Oct  2 03:12:03 np0005465604 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  2 03:12:03 np0005465604 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  2 03:12:03 np0005465604 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  2 03:12:03 np0005465604 kernel: NET: Registered PF_XDP protocol family
Oct  2 03:12:03 np0005465604 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  2 03:12:03 np0005465604 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  2 03:12:03 np0005465604 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  2 03:12:03 np0005465604 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct  2 03:12:03 np0005465604 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  2 03:12:03 np0005465604 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  2 03:12:03 np0005465604 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 73431 usecs
Oct  2 03:12:03 np0005465604 kernel: PCI: CLS 0 bytes, default 64
Oct  2 03:12:03 np0005465604 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  2 03:12:03 np0005465604 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct  2 03:12:03 np0005465604 kernel: ACPI: bus type thunderbolt registered
Oct  2 03:12:03 np0005465604 kernel: Trying to unpack rootfs image as initramfs...
Oct  2 03:12:03 np0005465604 kernel: Initialise system trusted keyrings
Oct  2 03:12:03 np0005465604 kernel: Key type blacklist registered
Oct  2 03:12:03 np0005465604 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  2 03:12:03 np0005465604 kernel: zbud: loaded
Oct  2 03:12:03 np0005465604 kernel: integrity: Platform Keyring initialized
Oct  2 03:12:03 np0005465604 kernel: integrity: Machine keyring initialized
Oct  2 03:12:03 np0005465604 kernel: Freeing initrd memory: 86104K
Oct  2 03:12:03 np0005465604 kernel: NET: Registered PF_ALG protocol family
Oct  2 03:12:03 np0005465604 kernel: xor: automatically using best checksumming function   avx
Oct  2 03:12:03 np0005465604 kernel: Key type asymmetric registered
Oct  2 03:12:03 np0005465604 kernel: Asymmetric key parser 'x509' registered
Oct  2 03:12:03 np0005465604 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  2 03:12:03 np0005465604 kernel: io scheduler mq-deadline registered
Oct  2 03:12:03 np0005465604 kernel: io scheduler kyber registered
Oct  2 03:12:03 np0005465604 kernel: io scheduler bfq registered
Oct  2 03:12:03 np0005465604 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  2 03:12:03 np0005465604 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  2 03:12:03 np0005465604 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  2 03:12:03 np0005465604 kernel: ACPI: button: Power Button [PWRF]
Oct  2 03:12:03 np0005465604 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct  2 03:12:03 np0005465604 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct  2 03:12:03 np0005465604 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct  2 03:12:03 np0005465604 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  2 03:12:03 np0005465604 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  2 03:12:03 np0005465604 kernel: Non-volatile memory driver v1.3
Oct  2 03:12:03 np0005465604 kernel: rdac: device handler registered
Oct  2 03:12:03 np0005465604 kernel: hp_sw: device handler registered
Oct  2 03:12:03 np0005465604 kernel: emc: device handler registered
Oct  2 03:12:03 np0005465604 kernel: alua: device handler registered
Oct  2 03:12:03 np0005465604 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct  2 03:12:03 np0005465604 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct  2 03:12:03 np0005465604 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct  2 03:12:03 np0005465604 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct  2 03:12:03 np0005465604 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  2 03:12:03 np0005465604 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  2 03:12:03 np0005465604 kernel: usb usb1: Product: UHCI Host Controller
Oct  2 03:12:03 np0005465604 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  2 03:12:03 np0005465604 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct  2 03:12:03 np0005465604 kernel: hub 1-0:1.0: USB hub found
Oct  2 03:12:03 np0005465604 kernel: hub 1-0:1.0: 2 ports detected
Oct  2 03:12:03 np0005465604 kernel: usbcore: registered new interface driver usbserial_generic
Oct  2 03:12:03 np0005465604 kernel: usbserial: USB Serial support registered for generic
Oct  2 03:12:03 np0005465604 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  2 03:12:03 np0005465604 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  2 03:12:03 np0005465604 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  2 03:12:03 np0005465604 kernel: mousedev: PS/2 mouse device common for all mice
Oct  2 03:12:03 np0005465604 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct  2 03:12:03 np0005465604 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  2 03:12:03 np0005465604 kernel: rtc_cmos 00:04: registered as rtc0
Oct  2 03:12:03 np0005465604 kernel: rtc_cmos 00:04: setting system clock to 2025-10-02T07:12:02 UTC (1759389122)
Oct  2 03:12:03 np0005465604 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct  2 03:12:03 np0005465604 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  2 03:12:03 np0005465604 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  2 03:12:03 np0005465604 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  2 03:12:03 np0005465604 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  2 03:12:03 np0005465604 kernel: usbcore: registered new interface driver usbhid
Oct  2 03:12:03 np0005465604 kernel: usbhid: USB HID core driver
Oct  2 03:12:03 np0005465604 kernel: drop_monitor: Initializing network drop monitor service
Oct  2 03:12:03 np0005465604 kernel: Initializing XFRM netlink socket
Oct  2 03:12:03 np0005465604 kernel: NET: Registered PF_INET6 protocol family
Oct  2 03:12:03 np0005465604 kernel: Segment Routing with IPv6
Oct  2 03:12:03 np0005465604 kernel: NET: Registered PF_PACKET protocol family
Oct  2 03:12:03 np0005465604 kernel: mpls_gso: MPLS GSO support
Oct  2 03:12:03 np0005465604 kernel: IPI shorthand broadcast: enabled
Oct  2 03:12:03 np0005465604 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  2 03:12:03 np0005465604 kernel: AES CTR mode by8 optimization enabled
Oct  2 03:12:03 np0005465604 kernel: sched_clock: Marking stable (1205003172, 143160197)->(1470572017, -122408648)
Oct  2 03:12:03 np0005465604 kernel: registered taskstats version 1
Oct  2 03:12:03 np0005465604 kernel: Loading compiled-in X.509 certificates
Oct  2 03:12:03 np0005465604 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  2 03:12:03 np0005465604 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  2 03:12:03 np0005465604 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  2 03:12:03 np0005465604 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  2 03:12:03 np0005465604 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  2 03:12:03 np0005465604 kernel: Demotion targets for Node 0: null
Oct  2 03:12:03 np0005465604 kernel: page_owner is disabled
Oct  2 03:12:03 np0005465604 kernel: Key type .fscrypt registered
Oct  2 03:12:03 np0005465604 kernel: Key type fscrypt-provisioning registered
Oct  2 03:12:03 np0005465604 kernel: Key type big_key registered
Oct  2 03:12:03 np0005465604 kernel: Key type encrypted registered
Oct  2 03:12:03 np0005465604 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  2 03:12:03 np0005465604 kernel: Loading compiled-in module X.509 certificates
Oct  2 03:12:03 np0005465604 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  2 03:12:03 np0005465604 kernel: ima: Allocated hash algorithm: sha256
Oct  2 03:12:03 np0005465604 kernel: ima: No architecture policies found
Oct  2 03:12:03 np0005465604 kernel: evm: Initialising EVM extended attributes:
Oct  2 03:12:03 np0005465604 kernel: evm: security.selinux
Oct  2 03:12:03 np0005465604 kernel: evm: security.SMACK64 (disabled)
Oct  2 03:12:03 np0005465604 kernel: evm: security.SMACK64EXEC (disabled)
Oct  2 03:12:03 np0005465604 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  2 03:12:03 np0005465604 kernel: evm: security.SMACK64MMAP (disabled)
Oct  2 03:12:03 np0005465604 kernel: evm: security.apparmor (disabled)
Oct  2 03:12:03 np0005465604 kernel: evm: security.ima
Oct  2 03:12:03 np0005465604 kernel: evm: security.capability
Oct  2 03:12:03 np0005465604 kernel: evm: HMAC attrs: 0x1
Oct  2 03:12:03 np0005465604 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  2 03:12:03 np0005465604 kernel: Running certificate verification RSA selftest
Oct  2 03:12:03 np0005465604 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  2 03:12:03 np0005465604 kernel: Running certificate verification ECDSA selftest
Oct  2 03:12:03 np0005465604 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  2 03:12:03 np0005465604 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  2 03:12:03 np0005465604 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  2 03:12:03 np0005465604 kernel: usb 1-1: Manufacturer: QEMU
Oct  2 03:12:03 np0005465604 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct  2 03:12:03 np0005465604 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  2 03:12:03 np0005465604 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  2 03:12:03 np0005465604 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct  2 03:12:03 np0005465604 kernel: clk: Disabling unused clocks
Oct  2 03:12:03 np0005465604 kernel: Freeing unused decrypted memory: 2028K
Oct  2 03:12:03 np0005465604 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  2 03:12:03 np0005465604 kernel: Write protecting the kernel read-only data: 30720k
Oct  2 03:12:03 np0005465604 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  2 03:12:03 np0005465604 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  2 03:12:03 np0005465604 kernel: Run /init as init process
Oct  2 03:12:03 np0005465604 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  2 03:12:03 np0005465604 systemd: Detected virtualization kvm.
Oct  2 03:12:03 np0005465604 systemd: Detected architecture x86-64.
Oct  2 03:12:03 np0005465604 systemd: Running in initrd.
Oct  2 03:12:03 np0005465604 systemd: No hostname configured, using default hostname.
Oct  2 03:12:03 np0005465604 systemd: Hostname set to <localhost>.
Oct  2 03:12:03 np0005465604 systemd: Initializing machine ID from VM UUID.
Oct  2 03:12:03 np0005465604 systemd: Queued start job for default target Initrd Default Target.
Oct  2 03:12:03 np0005465604 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  2 03:12:03 np0005465604 systemd: Reached target Local Encrypted Volumes.
Oct  2 03:12:03 np0005465604 systemd: Reached target Initrd /usr File System.
Oct  2 03:12:03 np0005465604 systemd: Reached target Local File Systems.
Oct  2 03:12:03 np0005465604 systemd: Reached target Path Units.
Oct  2 03:12:03 np0005465604 systemd: Reached target Slice Units.
Oct  2 03:12:03 np0005465604 systemd: Reached target Swaps.
Oct  2 03:12:03 np0005465604 systemd: Reached target Timer Units.
Oct  2 03:12:03 np0005465604 systemd: Listening on D-Bus System Message Bus Socket.
Oct  2 03:12:03 np0005465604 systemd: Listening on Journal Socket (/dev/log).
Oct  2 03:12:03 np0005465604 systemd: Listening on Journal Socket.
Oct  2 03:12:03 np0005465604 systemd: Listening on udev Control Socket.
Oct  2 03:12:03 np0005465604 systemd: Listening on udev Kernel Socket.
Oct  2 03:12:03 np0005465604 systemd: Reached target Socket Units.
Oct  2 03:12:03 np0005465604 systemd: Starting Create List of Static Device Nodes...
Oct  2 03:12:03 np0005465604 systemd: Starting Journal Service...
Oct  2 03:12:03 np0005465604 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  2 03:12:03 np0005465604 systemd: Starting Apply Kernel Variables...
Oct  2 03:12:03 np0005465604 systemd: Starting Create System Users...
Oct  2 03:12:03 np0005465604 systemd: Starting Setup Virtual Console...
Oct  2 03:12:03 np0005465604 systemd: Finished Create List of Static Device Nodes.
Oct  2 03:12:03 np0005465604 systemd: Finished Apply Kernel Variables.
Oct  2 03:12:03 np0005465604 systemd-journald[311]: Journal started
Oct  2 03:12:03 np0005465604 systemd-journald[311]: Runtime Journal (/run/log/journal/8d603fb8984b47fa8e454729d5224ba1) is 8.0M, max 153.5M, 145.5M free.
Oct  2 03:12:03 np0005465604 systemd-sysusers[314]: Creating group 'users' with GID 100.
Oct  2 03:12:03 np0005465604 systemd: Started Journal Service.
Oct  2 03:12:03 np0005465604 systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Oct  2 03:12:03 np0005465604 systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  2 03:12:03 np0005465604 systemd[1]: Finished Create System Users.
Oct  2 03:12:03 np0005465604 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  2 03:12:03 np0005465604 systemd[1]: Starting Create Volatile Files and Directories...
Oct  2 03:12:03 np0005465604 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  2 03:12:04 np0005465604 systemd[1]: Finished Create Volatile Files and Directories.
Oct  2 03:12:04 np0005465604 systemd[1]: Finished Setup Virtual Console.
Oct  2 03:12:04 np0005465604 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  2 03:12:04 np0005465604 systemd[1]: Starting dracut cmdline hook...
Oct  2 03:12:04 np0005465604 dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Oct  2 03:12:04 np0005465604 dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 03:12:04 np0005465604 systemd[1]: Finished dracut cmdline hook.
Oct  2 03:12:04 np0005465604 systemd[1]: Starting dracut pre-udev hook...
Oct  2 03:12:04 np0005465604 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  2 03:12:04 np0005465604 kernel: device-mapper: uevent: version 1.0.3
Oct  2 03:12:04 np0005465604 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  2 03:12:04 np0005465604 kernel: RPC: Registered named UNIX socket transport module.
Oct  2 03:12:04 np0005465604 kernel: RPC: Registered udp transport module.
Oct  2 03:12:04 np0005465604 kernel: RPC: Registered tcp transport module.
Oct  2 03:12:04 np0005465604 kernel: RPC: Registered tcp-with-tls transport module.
Oct  2 03:12:04 np0005465604 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  2 03:12:04 np0005465604 rpc.statd[446]: Version 2.5.4 starting
Oct  2 03:12:04 np0005465604 rpc.statd[446]: Initializing NSM state
Oct  2 03:12:04 np0005465604 rpc.idmapd[451]: Setting log level to 0
Oct  2 03:12:04 np0005465604 systemd[1]: Finished dracut pre-udev hook.
Oct  2 03:12:04 np0005465604 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  2 03:12:04 np0005465604 systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Oct  2 03:12:04 np0005465604 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  2 03:12:04 np0005465604 systemd[1]: Starting dracut pre-trigger hook...
Oct  2 03:12:04 np0005465604 systemd[1]: Finished dracut pre-trigger hook.
Oct  2 03:12:04 np0005465604 systemd[1]: Starting Coldplug All udev Devices...
Oct  2 03:12:04 np0005465604 systemd[1]: Created slice Slice /system/modprobe.
Oct  2 03:12:04 np0005465604 systemd[1]: Starting Load Kernel Module configfs...
Oct  2 03:12:04 np0005465604 systemd[1]: Finished Coldplug All udev Devices.
Oct  2 03:12:04 np0005465604 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 03:12:04 np0005465604 systemd[1]: Finished Load Kernel Module configfs.
Oct  2 03:12:04 np0005465604 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  2 03:12:04 np0005465604 systemd[1]: Reached target Network.
Oct  2 03:12:04 np0005465604 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  2 03:12:04 np0005465604 systemd[1]: Starting dracut initqueue hook...
Oct  2 03:12:04 np0005465604 systemd[1]: Mounting Kernel Configuration File System...
Oct  2 03:12:04 np0005465604 systemd[1]: Mounted Kernel Configuration File System.
Oct  2 03:12:04 np0005465604 systemd[1]: Reached target System Initialization.
Oct  2 03:12:04 np0005465604 systemd[1]: Reached target Basic System.
Oct  2 03:12:04 np0005465604 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct  2 03:12:04 np0005465604 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  2 03:12:04 np0005465604 kernel: vda: vda1
Oct  2 03:12:04 np0005465604 kernel: scsi host0: ata_piix
Oct  2 03:12:04 np0005465604 kernel: scsi host1: ata_piix
Oct  2 03:12:04 np0005465604 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct  2 03:12:04 np0005465604 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct  2 03:12:04 np0005465604 systemd-udevd[487]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 03:12:05 np0005465604 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  2 03:12:05 np0005465604 systemd[1]: Reached target Initrd Root Device.
Oct  2 03:12:05 np0005465604 kernel: ata1: found unknown device (class 0)
Oct  2 03:12:05 np0005465604 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  2 03:12:05 np0005465604 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  2 03:12:05 np0005465604 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  2 03:12:05 np0005465604 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  2 03:12:05 np0005465604 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  2 03:12:05 np0005465604 systemd[1]: Finished dracut initqueue hook.
Oct  2 03:12:05 np0005465604 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  2 03:12:05 np0005465604 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  2 03:12:05 np0005465604 systemd[1]: Reached target Remote File Systems.
Oct  2 03:12:05 np0005465604 systemd[1]: Starting dracut pre-mount hook...
Oct  2 03:12:05 np0005465604 systemd[1]: Finished dracut pre-mount hook.
Oct  2 03:12:05 np0005465604 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  2 03:12:05 np0005465604 systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Oct  2 03:12:05 np0005465604 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  2 03:12:05 np0005465604 systemd[1]: Mounting /sysroot...
Oct  2 03:12:05 np0005465604 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  2 03:12:05 np0005465604 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  2 03:12:05 np0005465604 kernel: XFS (vda1): Ending clean mount
Oct  2 03:12:05 np0005465604 systemd[1]: Mounted /sysroot.
Oct  2 03:12:06 np0005465604 systemd[1]: Reached target Initrd Root File System.
Oct  2 03:12:06 np0005465604 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  2 03:12:06 np0005465604 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  2 03:12:06 np0005465604 systemd[1]: Reached target Initrd File Systems.
Oct  2 03:12:06 np0005465604 systemd[1]: Reached target Initrd Default Target.
Oct  2 03:12:06 np0005465604 systemd[1]: Starting dracut mount hook...
Oct  2 03:12:06 np0005465604 systemd[1]: Finished dracut mount hook.
Oct  2 03:12:06 np0005465604 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  2 03:12:06 np0005465604 rpc.idmapd[451]: exiting on signal 15
Oct  2 03:12:06 np0005465604 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  2 03:12:06 np0005465604 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Network.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Timer Units.
Oct  2 03:12:06 np0005465604 systemd[1]: dbus.socket: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  2 03:12:06 np0005465604 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Initrd Default Target.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Basic System.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Initrd Root Device.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Initrd /usr File System.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Path Units.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Remote File Systems.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Slice Units.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Socket Units.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target System Initialization.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Local File Systems.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Swaps.
Oct  2 03:12:06 np0005465604 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped dracut mount hook.
Oct  2 03:12:06 np0005465604 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped dracut pre-mount hook.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  2 03:12:06 np0005465604 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  2 03:12:06 np0005465604 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped dracut initqueue hook.
Oct  2 03:12:06 np0005465604 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped Apply Kernel Variables.
Oct  2 03:12:06 np0005465604 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  2 03:12:06 np0005465604 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped Coldplug All udev Devices.
Oct  2 03:12:06 np0005465604 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped dracut pre-trigger hook.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  2 03:12:06 np0005465604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped Setup Virtual Console.
Oct  2 03:12:06 np0005465604 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  2 03:12:06 np0005465604 systemd[1]: systemd-udevd.service: Consumed 1.001s CPU time.
Oct  2 03:12:06 np0005465604 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Closed udev Control Socket.
Oct  2 03:12:06 np0005465604 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Closed udev Kernel Socket.
Oct  2 03:12:06 np0005465604 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped dracut pre-udev hook.
Oct  2 03:12:06 np0005465604 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped dracut cmdline hook.
Oct  2 03:12:06 np0005465604 systemd[1]: Starting Cleanup udev Database...
Oct  2 03:12:06 np0005465604 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  2 03:12:06 np0005465604 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  2 03:12:06 np0005465604 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Stopped Create System Users.
Oct  2 03:12:06 np0005465604 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  2 03:12:06 np0005465604 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  2 03:12:06 np0005465604 systemd[1]: Finished Cleanup udev Database.
Oct  2 03:12:06 np0005465604 systemd[1]: Reached target Switch Root.
Oct  2 03:12:06 np0005465604 systemd[1]: Starting Switch Root...
Oct  2 03:12:06 np0005465604 systemd[1]: Switching root.
Oct  2 03:12:06 np0005465604 systemd-journald[311]: Journal stopped
Oct  2 03:12:07 np0005465604 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  2 03:12:07 np0005465604 kernel: audit: type=1404 audit(1759389126.579:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  2 03:12:07 np0005465604 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 03:12:07 np0005465604 kernel: SELinux:  policy capability open_perms=1
Oct  2 03:12:07 np0005465604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 03:12:07 np0005465604 kernel: SELinux:  policy capability always_check_network=0
Oct  2 03:12:07 np0005465604 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 03:12:07 np0005465604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 03:12:07 np0005465604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 03:12:07 np0005465604 kernel: audit: type=1403 audit(1759389126.787:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  2 03:12:07 np0005465604 systemd: Successfully loaded SELinux policy in 212.479ms.
Oct  2 03:12:07 np0005465604 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.670ms.
Oct  2 03:12:07 np0005465604 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  2 03:12:07 np0005465604 systemd: Detected virtualization kvm.
Oct  2 03:12:07 np0005465604 systemd: Detected architecture x86-64.
Oct  2 03:12:07 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:12:07 np0005465604 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  2 03:12:07 np0005465604 systemd: Stopped Switch Root.
Oct  2 03:12:07 np0005465604 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  2 03:12:07 np0005465604 systemd: Created slice Slice /system/getty.
Oct  2 03:12:07 np0005465604 systemd: Created slice Slice /system/serial-getty.
Oct  2 03:12:07 np0005465604 systemd: Created slice Slice /system/sshd-keygen.
Oct  2 03:12:07 np0005465604 systemd: Created slice User and Session Slice.
Oct  2 03:12:07 np0005465604 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  2 03:12:07 np0005465604 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  2 03:12:07 np0005465604 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  2 03:12:07 np0005465604 systemd: Reached target Local Encrypted Volumes.
Oct  2 03:12:07 np0005465604 systemd: Stopped target Switch Root.
Oct  2 03:12:07 np0005465604 systemd: Stopped target Initrd File Systems.
Oct  2 03:12:07 np0005465604 systemd: Stopped target Initrd Root File System.
Oct  2 03:12:07 np0005465604 systemd: Reached target Local Integrity Protected Volumes.
Oct  2 03:12:07 np0005465604 systemd: Reached target Path Units.
Oct  2 03:12:07 np0005465604 systemd: Reached target rpc_pipefs.target.
Oct  2 03:12:07 np0005465604 systemd: Reached target Slice Units.
Oct  2 03:12:07 np0005465604 systemd: Reached target Swaps.
Oct  2 03:12:07 np0005465604 systemd: Reached target Local Verity Protected Volumes.
Oct  2 03:12:07 np0005465604 systemd: Listening on RPCbind Server Activation Socket.
Oct  2 03:12:07 np0005465604 systemd: Reached target RPC Port Mapper.
Oct  2 03:12:07 np0005465604 systemd: Listening on Process Core Dump Socket.
Oct  2 03:12:07 np0005465604 systemd: Listening on initctl Compatibility Named Pipe.
Oct  2 03:12:07 np0005465604 systemd: Listening on udev Control Socket.
Oct  2 03:12:07 np0005465604 systemd: Listening on udev Kernel Socket.
Oct  2 03:12:07 np0005465604 systemd: Mounting Huge Pages File System...
Oct  2 03:12:07 np0005465604 systemd: Mounting POSIX Message Queue File System...
Oct  2 03:12:07 np0005465604 systemd: Mounting Kernel Debug File System...
Oct  2 03:12:07 np0005465604 systemd: Mounting Kernel Trace File System...
Oct  2 03:12:07 np0005465604 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  2 03:12:07 np0005465604 systemd: Starting Create List of Static Device Nodes...
Oct  2 03:12:07 np0005465604 systemd: Starting Load Kernel Module configfs...
Oct  2 03:12:07 np0005465604 systemd: Starting Load Kernel Module drm...
Oct  2 03:12:07 np0005465604 systemd: Starting Load Kernel Module efi_pstore...
Oct  2 03:12:07 np0005465604 systemd: Starting Load Kernel Module fuse...
Oct  2 03:12:07 np0005465604 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  2 03:12:07 np0005465604 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  2 03:12:07 np0005465604 systemd: Stopped File System Check on Root Device.
Oct  2 03:12:07 np0005465604 systemd: Stopped Journal Service.
Oct  2 03:12:07 np0005465604 systemd: Starting Journal Service...
Oct  2 03:12:07 np0005465604 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  2 03:12:07 np0005465604 systemd: Starting Generate network units from Kernel command line...
Oct  2 03:12:07 np0005465604 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 03:12:07 np0005465604 systemd: Starting Remount Root and Kernel File Systems...
Oct  2 03:12:07 np0005465604 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  2 03:12:07 np0005465604 systemd: Starting Apply Kernel Variables...
Oct  2 03:12:07 np0005465604 kernel: fuse: init (API version 7.37)
Oct  2 03:12:07 np0005465604 systemd: Starting Coldplug All udev Devices...
Oct  2 03:12:07 np0005465604 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  2 03:12:07 np0005465604 systemd: Mounted Huge Pages File System.
Oct  2 03:12:07 np0005465604 systemd: Mounted POSIX Message Queue File System.
Oct  2 03:12:07 np0005465604 systemd: Mounted Kernel Debug File System.
Oct  2 03:12:07 np0005465604 systemd: Mounted Kernel Trace File System.
Oct  2 03:12:07 np0005465604 systemd: Finished Create List of Static Device Nodes.
Oct  2 03:12:07 np0005465604 systemd: modprobe@configfs.service: Deactivated successfully.
Oct  2 03:12:07 np0005465604 systemd: Finished Load Kernel Module configfs.
Oct  2 03:12:07 np0005465604 systemd: modprobe@efi_pstore.service: Deactivated successfully.
Oct  2 03:12:07 np0005465604 systemd: Finished Load Kernel Module efi_pstore.
Oct  2 03:12:07 np0005465604 systemd-journald[679]: Journal started
Oct  2 03:12:07 np0005465604 systemd-journald[679]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  2 03:12:07 np0005465604 systemd[1]: Queued start job for default target Multi-User System.
Oct  2 03:12:07 np0005465604 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  2 03:12:07 np0005465604 systemd: Started Journal Service.
Oct  2 03:12:07 np0005465604 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Load Kernel Module fuse.
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Generate network units from Kernel command line.
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  2 03:12:07 np0005465604 kernel: ACPI: bus type drm_connector registered
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Apply Kernel Variables.
Oct  2 03:12:07 np0005465604 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Load Kernel Module drm.
Oct  2 03:12:07 np0005465604 systemd[1]: Mounting FUSE Control File System...
Oct  2 03:12:07 np0005465604 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  2 03:12:07 np0005465604 systemd[1]: Starting Rebuild Hardware Database...
Oct  2 03:12:07 np0005465604 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  2 03:12:07 np0005465604 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  2 03:12:07 np0005465604 systemd[1]: Starting Load/Save OS Random Seed...
Oct  2 03:12:07 np0005465604 systemd[1]: Starting Create System Users...
Oct  2 03:12:07 np0005465604 systemd[1]: Mounted FUSE Control File System.
Oct  2 03:12:07 np0005465604 systemd-journald[679]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  2 03:12:07 np0005465604 systemd-journald[679]: Received client request to flush runtime journal.
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Load/Save OS Random Seed.
Oct  2 03:12:07 np0005465604 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Create System Users.
Oct  2 03:12:07 np0005465604 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Coldplug All udev Devices.
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  2 03:12:07 np0005465604 systemd[1]: Reached target Preparation for Local File Systems.
Oct  2 03:12:07 np0005465604 systemd[1]: Reached target Local File Systems.
Oct  2 03:12:07 np0005465604 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  2 03:12:07 np0005465604 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  2 03:12:07 np0005465604 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  2 03:12:07 np0005465604 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  2 03:12:07 np0005465604 systemd[1]: Starting Automatic Boot Loader Update...
Oct  2 03:12:07 np0005465604 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  2 03:12:07 np0005465604 systemd[1]: Starting Create Volatile Files and Directories...
Oct  2 03:12:07 np0005465604 bootctl[696]: Couldn't find EFI system partition, skipping.
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Automatic Boot Loader Update.
Oct  2 03:12:07 np0005465604 systemd[1]: Finished Create Volatile Files and Directories.
Oct  2 03:12:08 np0005465604 systemd[1]: Starting Security Auditing Service...
Oct  2 03:12:08 np0005465604 systemd[1]: Starting RPC Bind...
Oct  2 03:12:08 np0005465604 systemd[1]: Starting Rebuild Journal Catalog...
Oct  2 03:12:08 np0005465604 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  2 03:12:08 np0005465604 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  2 03:12:08 np0005465604 systemd[1]: Finished Rebuild Journal Catalog.
Oct  2 03:12:08 np0005465604 systemd[1]: Started RPC Bind.
Oct  2 03:12:08 np0005465604 augenrules[707]: /sbin/augenrules: No change
Oct  2 03:12:08 np0005465604 augenrules[722]: No rules
Oct  2 03:12:08 np0005465604 augenrules[722]: enabled 1
Oct  2 03:12:08 np0005465604 augenrules[722]: failure 1
Oct  2 03:12:08 np0005465604 augenrules[722]: pid 702
Oct  2 03:12:08 np0005465604 augenrules[722]: rate_limit 0
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog_limit 8192
Oct  2 03:12:08 np0005465604 augenrules[722]: lost 0
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog 3
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog_wait_time 60000
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog_wait_time_actual 0
Oct  2 03:12:08 np0005465604 augenrules[722]: enabled 1
Oct  2 03:12:08 np0005465604 augenrules[722]: failure 1
Oct  2 03:12:08 np0005465604 augenrules[722]: pid 702
Oct  2 03:12:08 np0005465604 augenrules[722]: rate_limit 0
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog_limit 8192
Oct  2 03:12:08 np0005465604 augenrules[722]: lost 0
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog 3
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog_wait_time 60000
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog_wait_time_actual 0
Oct  2 03:12:08 np0005465604 augenrules[722]: enabled 1
Oct  2 03:12:08 np0005465604 augenrules[722]: failure 1
Oct  2 03:12:08 np0005465604 augenrules[722]: pid 702
Oct  2 03:12:08 np0005465604 augenrules[722]: rate_limit 0
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog_limit 8192
Oct  2 03:12:08 np0005465604 augenrules[722]: lost 0
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog 4
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog_wait_time 60000
Oct  2 03:12:08 np0005465604 augenrules[722]: backlog_wait_time_actual 0
Oct  2 03:12:08 np0005465604 systemd[1]: Started Security Auditing Service.
Oct  2 03:12:08 np0005465604 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  2 03:12:08 np0005465604 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  2 03:12:08 np0005465604 systemd[1]: Finished Rebuild Hardware Database.
Oct  2 03:12:08 np0005465604 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  2 03:12:08 np0005465604 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Oct  2 03:12:08 np0005465604 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  2 03:12:08 np0005465604 systemd[1]: Starting Update is Completed...
Oct  2 03:12:08 np0005465604 systemd[1]: Finished Update is Completed.
Oct  2 03:12:08 np0005465604 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  2 03:12:08 np0005465604 systemd[1]: Reached target System Initialization.
Oct  2 03:12:08 np0005465604 systemd[1]: Started dnf makecache --timer.
Oct  2 03:12:08 np0005465604 systemd[1]: Started Daily rotation of log files.
Oct  2 03:12:08 np0005465604 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  2 03:12:08 np0005465604 systemd[1]: Reached target Timer Units.
Oct  2 03:12:08 np0005465604 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  2 03:12:08 np0005465604 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  2 03:12:08 np0005465604 systemd[1]: Reached target Socket Units.
Oct  2 03:12:08 np0005465604 systemd[1]: Starting D-Bus System Message Bus...
Oct  2 03:12:08 np0005465604 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 03:12:08 np0005465604 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  2 03:12:08 np0005465604 systemd[1]: Starting Load Kernel Module configfs...
Oct  2 03:12:08 np0005465604 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 03:12:08 np0005465604 systemd[1]: Finished Load Kernel Module configfs.
Oct  2 03:12:08 np0005465604 systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 03:12:08 np0005465604 systemd[1]: Started D-Bus System Message Bus.
Oct  2 03:12:08 np0005465604 systemd[1]: Reached target Basic System.
Oct  2 03:12:08 np0005465604 dbus-broker-lau[765]: Ready
Oct  2 03:12:08 np0005465604 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  2 03:12:08 np0005465604 systemd[1]: Starting NTP client/server...
Oct  2 03:12:08 np0005465604 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  2 03:12:08 np0005465604 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  2 03:12:08 np0005465604 systemd[1]: Starting IPv4 firewall with iptables...
Oct  2 03:12:08 np0005465604 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct  2 03:12:08 np0005465604 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  2 03:12:08 np0005465604 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  2 03:12:08 np0005465604 systemd[1]: Started irqbalance daemon.
Oct  2 03:12:08 np0005465604 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  2 03:12:08 np0005465604 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 03:12:08 np0005465604 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 03:12:08 np0005465604 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 03:12:08 np0005465604 systemd[1]: Reached target sshd-keygen.target.
Oct  2 03:12:08 np0005465604 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  2 03:12:08 np0005465604 systemd[1]: Reached target User and Group Name Lookups.
Oct  2 03:12:08 np0005465604 systemd[1]: Starting User Login Management...
Oct  2 03:12:08 np0005465604 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  2 03:12:08 np0005465604 chronyd[796]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  2 03:12:08 np0005465604 chronyd[796]: Loaded 0 symmetric keys
Oct  2 03:12:08 np0005465604 chronyd[796]: Using right/UTC timezone to obtain leap second data
Oct  2 03:12:08 np0005465604 chronyd[796]: Loaded seccomp filter (level 2)
Oct  2 03:12:08 np0005465604 systemd[1]: Started NTP client/server.
Oct  2 03:12:08 np0005465604 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  2 03:12:08 np0005465604 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  2 03:12:08 np0005465604 systemd-logind[787]: New seat seat0.
Oct  2 03:12:08 np0005465604 systemd[1]: Started User Login Management.
Oct  2 03:12:08 np0005465604 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  2 03:12:08 np0005465604 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  2 03:12:08 np0005465604 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct  2 03:12:08 np0005465604 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct  2 03:12:08 np0005465604 kernel: Console: switching to colour dummy device 80x25
Oct  2 03:12:08 np0005465604 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  2 03:12:08 np0005465604 kernel: [drm] features: -context_init
Oct  2 03:12:08 np0005465604 kernel: [drm] number of scanouts: 1
Oct  2 03:12:08 np0005465604 kernel: [drm] number of cap sets: 0
Oct  2 03:12:08 np0005465604 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct  2 03:12:08 np0005465604 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  2 03:12:08 np0005465604 kernel: Console: switching to colour frame buffer device 128x48
Oct  2 03:12:08 np0005465604 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  2 03:12:08 np0005465604 kernel: kvm_amd: TSC scaling supported
Oct  2 03:12:08 np0005465604 kernel: kvm_amd: Nested Virtualization enabled
Oct  2 03:12:08 np0005465604 kernel: kvm_amd: Nested Paging enabled
Oct  2 03:12:08 np0005465604 kernel: kvm_amd: LBR virtualization supported
Oct  2 03:12:08 np0005465604 iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Oct  2 03:12:08 np0005465604 systemd[1]: Finished IPv4 firewall with iptables.
Oct  2 03:12:09 np0005465604 cloud-init[838]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 02 Oct 2025 07:12:09 +0000. Up 7.86 seconds.
Oct  2 03:12:09 np0005465604 systemd[1]: run-cloud\x2dinit-tmp-tmptoik6diy.mount: Deactivated successfully.
Oct  2 03:12:09 np0005465604 systemd[1]: Starting Hostname Service...
Oct  2 03:12:09 np0005465604 systemd[1]: Started Hostname Service.
Oct  2 03:12:09 np0005465604 systemd-hostnamed[852]: Hostname set to <np0005465604.novalocal> (static)
Oct  2 03:12:09 np0005465604 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  2 03:12:09 np0005465604 systemd[1]: Reached target Preparation for Network.
Oct  2 03:12:09 np0005465604 systemd[1]: Starting Network Manager...
Oct  2 03:12:09 np0005465604 NetworkManager[856]: <info>  [1759389129.9421] NetworkManager (version 1.54.1-1.el9) is starting... (boot:72247ef6-349c-4c45-9097-d92b9d419ca4)
Oct  2 03:12:09 np0005465604 NetworkManager[856]: <info>  [1759389129.9430] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 03:12:09 np0005465604 NetworkManager[856]: <info>  [1759389129.9646] manager[0x55fd037ef080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 03:12:09 np0005465604 NetworkManager[856]: <info>  [1759389129.9722] hostname: hostname: using hostnamed
Oct  2 03:12:09 np0005465604 NetworkManager[856]: <info>  [1759389129.9722] hostname: static hostname changed from (none) to "np0005465604.novalocal"
Oct  2 03:12:09 np0005465604 NetworkManager[856]: <info>  [1759389129.9730] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 03:12:09 np0005465604 NetworkManager[856]: <info>  [1759389129.9905] manager[0x55fd037ef080]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 03:12:09 np0005465604 NetworkManager[856]: <info>  [1759389129.9908] manager[0x55fd037ef080]: rfkill: WWAN hardware radio set enabled
Oct  2 03:12:09 np0005465604 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0018] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0019] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0020] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0021] manager: Networking is enabled by state file
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0023] settings: Loaded settings plugin: keyfile (internal)
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0111] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0158] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0192] dhcp: init: Using DHCP client 'internal'
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0197] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0222] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0243] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0259] device (lo): Activation: starting connection 'lo' (dbb54e49-2798-40d2-8772-855e895cdbba)
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0278] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0285] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:12:10 np0005465604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0337] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0342] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0350] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 03:12:10 np0005465604 systemd[1]: Started Network Manager.
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0360] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0363] device (eth0): carrier: link connected
Oct  2 03:12:10 np0005465604 systemd[1]: Reached target Network.
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0374] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0386] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0396] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 03:12:10 np0005465604 systemd[1]: Starting Network Manager Wait Online...
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0417] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0418] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0422] manager: NetworkManager state is now CONNECTING
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0423] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0434] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0438] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 03:12:10 np0005465604 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0480] dhcp4 (eth0): state changed new lease, address=38.102.83.142
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0492] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0513] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:12:10 np0005465604 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0661] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0665] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0667] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 03:12:10 np0005465604 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0677] device (lo): Activation: successful, device activated.
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0686] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0693] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0698] device (eth0): Activation: successful, device activated.
Oct  2 03:12:10 np0005465604 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0710] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 03:12:10 np0005465604 systemd[1]: Reached target NFS client services.
Oct  2 03:12:10 np0005465604 NetworkManager[856]: <info>  [1759389130.0715] manager: startup complete
Oct  2 03:12:10 np0005465604 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  2 03:12:10 np0005465604 systemd[1]: Reached target Remote File Systems.
Oct  2 03:12:10 np0005465604 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 03:12:10 np0005465604 systemd[1]: Finished Network Manager Wait Online.
Oct  2 03:12:10 np0005465604 systemd[1]: Starting Cloud-init: Network Stage...
Oct  2 03:12:10 np0005465604 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 02 Oct 2025 07:12:10 +0000. Up 9.01 seconds.
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.142         | 255.255.255.0 | global | fa:16:3e:77:69:25 |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe77:6925/64 |       .       |  link  | fa:16:3e:77:69:25 |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct  2 03:12:10 np0005465604 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 03:12:12 np0005465604 cloud-init[921]: Generating public/private rsa key pair.
Oct  2 03:12:12 np0005465604 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  2 03:12:12 np0005465604 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  2 03:12:12 np0005465604 cloud-init[921]: The key fingerprint is:
Oct  2 03:12:12 np0005465604 cloud-init[921]: SHA256:LlxaMYb5JIF6KFtA+R0PVeItDI1QRR2/HQzSLRktyCE root@np0005465604.novalocal
Oct  2 03:12:12 np0005465604 cloud-init[921]: The key's randomart image is:
Oct  2 03:12:12 np0005465604 cloud-init[921]: +---[RSA 3072]----+
Oct  2 03:12:12 np0005465604 cloud-init[921]: |....o+BE+==o=    |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |..  .++=++o=oo   |
Oct  2 03:12:12 np0005465604 cloud-init[921]: | ..o. B+=. .oo   |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |. +... *.o  o .  |
Oct  2 03:12:12 np0005465604 cloud-init[921]: | + .    S  . .   |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |.    . =         |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |      + .        |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |       .         |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |                 |
Oct  2 03:12:12 np0005465604 cloud-init[921]: +----[SHA256]-----+
Oct  2 03:12:12 np0005465604 cloud-init[921]: Generating public/private ecdsa key pair.
Oct  2 03:12:12 np0005465604 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  2 03:12:12 np0005465604 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  2 03:12:12 np0005465604 cloud-init[921]: The key fingerprint is:
Oct  2 03:12:12 np0005465604 cloud-init[921]: SHA256:voiSy25HQwJDl2pr02Lmu/r/svoom+vaXUJgjeKbMPY root@np0005465604.novalocal
Oct  2 03:12:12 np0005465604 cloud-init[921]: The key's randomart image is:
Oct  2 03:12:12 np0005465604 cloud-init[921]: +---[ECDSA 256]---+
Oct  2 03:12:12 np0005465604 cloud-init[921]: |.. ..            |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |o .+             |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |.o= .            |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |.=...            |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |+ooo.   S        |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |oO+oo  .         |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |=ooE... .        |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |.=+oo+ . .       |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |X@@B*+o .        |
Oct  2 03:12:12 np0005465604 cloud-init[921]: +----[SHA256]-----+
Oct  2 03:12:12 np0005465604 cloud-init[921]: Generating public/private ed25519 key pair.
Oct  2 03:12:12 np0005465604 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  2 03:12:12 np0005465604 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  2 03:12:12 np0005465604 cloud-init[921]: The key fingerprint is:
Oct  2 03:12:12 np0005465604 cloud-init[921]: SHA256:PNTXKJAKhw4lpfENNrI244hTbX9dGCKsBbUG6K/8Msc root@np0005465604.novalocal
Oct  2 03:12:12 np0005465604 cloud-init[921]: The key's randomart image is:
Oct  2 03:12:12 np0005465604 cloud-init[921]: +--[ED25519 256]--+
Oct  2 03:12:12 np0005465604 cloud-init[921]: |  ++O=o o..      |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |  oXo=+o.+ o o   |
Oct  2 03:12:12 np0005465604 cloud-init[921]: | .Bo+=+.. + + .  |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |.=.+oo.o . +     |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |+ ..  . S .      |
Oct  2 03:12:12 np0005465604 cloud-init[921]: | .  .  . .       |
Oct  2 03:12:12 np0005465604 cloud-init[921]: | . o             |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |  = E            |
Oct  2 03:12:12 np0005465604 cloud-init[921]: |   =.            |
Oct  2 03:12:12 np0005465604 cloud-init[921]: +----[SHA256]-----+
Oct  2 03:12:12 np0005465604 systemd[1]: Finished Cloud-init: Network Stage.
Oct  2 03:12:12 np0005465604 systemd[1]: Reached target Cloud-config availability.
Oct  2 03:12:12 np0005465604 systemd[1]: Reached target Network is Online.
Oct  2 03:12:12 np0005465604 systemd[1]: Starting Cloud-init: Config Stage...
Oct  2 03:12:12 np0005465604 systemd[1]: Starting Notify NFS peers of a restart...
Oct  2 03:12:12 np0005465604 systemd[1]: Starting System Logging Service...
Oct  2 03:12:12 np0005465604 systemd[1]: Starting OpenSSH server daemon...
Oct  2 03:12:12 np0005465604 sm-notify[1003]: Version 2.5.4 starting
Oct  2 03:12:12 np0005465604 systemd[1]: Starting Permit User Sessions...
Oct  2 03:12:12 np0005465604 systemd[1]: Started Notify NFS peers of a restart.
Oct  2 03:12:12 np0005465604 systemd[1]: Finished Permit User Sessions.
Oct  2 03:12:12 np0005465604 systemd[1]: Started OpenSSH server daemon.
Oct  2 03:12:12 np0005465604 systemd[1]: Started Command Scheduler.
Oct  2 03:12:12 np0005465604 systemd[1]: Started Getty on tty1.
Oct  2 03:12:12 np0005465604 systemd[1]: Started Serial Getty on ttyS0.
Oct  2 03:12:12 np0005465604 systemd[1]: Reached target Login Prompts.
Oct  2 03:12:12 np0005465604 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Oct  2 03:12:12 np0005465604 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  2 03:12:12 np0005465604 systemd[1]: Started System Logging Service.
Oct  2 03:12:12 np0005465604 systemd[1]: Reached target Multi-User System.
Oct  2 03:12:12 np0005465604 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  2 03:12:12 np0005465604 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  2 03:12:12 np0005465604 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  2 03:12:12 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 03:12:12 np0005465604 cloud-init[1016]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 02 Oct 2025 07:12:12 +0000. Up 11.21 seconds.
Oct  2 03:12:12 np0005465604 systemd[1]: Finished Cloud-init: Config Stage.
Oct  2 03:12:12 np0005465604 systemd[1]: Starting Cloud-init: Final Stage...
Oct  2 03:12:12 np0005465604 cloud-init[1020]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 02 Oct 2025 07:12:12 +0000. Up 11.62 seconds.
Oct  2 03:12:13 np0005465604 cloud-init[1022]: #############################################################
Oct  2 03:12:13 np0005465604 cloud-init[1023]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  2 03:12:13 np0005465604 cloud-init[1025]: 256 SHA256:voiSy25HQwJDl2pr02Lmu/r/svoom+vaXUJgjeKbMPY root@np0005465604.novalocal (ECDSA)
Oct  2 03:12:13 np0005465604 cloud-init[1027]: 256 SHA256:PNTXKJAKhw4lpfENNrI244hTbX9dGCKsBbUG6K/8Msc root@np0005465604.novalocal (ED25519)
Oct  2 03:12:13 np0005465604 cloud-init[1029]: 3072 SHA256:LlxaMYb5JIF6KFtA+R0PVeItDI1QRR2/HQzSLRktyCE root@np0005465604.novalocal (RSA)
Oct  2 03:12:13 np0005465604 cloud-init[1030]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  2 03:12:13 np0005465604 cloud-init[1031]: #############################################################
Oct  2 03:12:13 np0005465604 cloud-init[1020]: Cloud-init v. 24.4-7.el9 finished at Thu, 02 Oct 2025 07:12:13 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.85 seconds
Oct  2 03:12:13 np0005465604 systemd[1]: Finished Cloud-init: Final Stage.
Oct  2 03:12:13 np0005465604 systemd[1]: Reached target Cloud-init target.
Oct  2 03:12:13 np0005465604 systemd[1]: Startup finished in 1.691s (kernel) + 3.536s (initrd) + 6.680s (userspace) = 11.908s.
Oct  2 03:12:14 np0005465604 chronyd[796]: Selected source 167.160.187.179 (2.centos.pool.ntp.org)
Oct  2 03:12:14 np0005465604 chronyd[796]: System clock TAI offset set to 37 seconds
Oct  2 03:12:19 np0005465604 irqbalance[784]: Cannot change IRQ 25 affinity: Operation not permitted
Oct  2 03:12:19 np0005465604 irqbalance[784]: IRQ 25 affinity is now unmanaged
Oct  2 03:12:19 np0005465604 irqbalance[784]: Cannot change IRQ 31 affinity: Operation not permitted
Oct  2 03:12:19 np0005465604 irqbalance[784]: IRQ 31 affinity is now unmanaged
Oct  2 03:12:19 np0005465604 irqbalance[784]: Cannot change IRQ 28 affinity: Operation not permitted
Oct  2 03:12:19 np0005465604 irqbalance[784]: IRQ 28 affinity is now unmanaged
Oct  2 03:12:19 np0005465604 irqbalance[784]: Cannot change IRQ 32 affinity: Operation not permitted
Oct  2 03:12:19 np0005465604 irqbalance[784]: IRQ 32 affinity is now unmanaged
Oct  2 03:12:19 np0005465604 irqbalance[784]: Cannot change IRQ 30 affinity: Operation not permitted
Oct  2 03:12:19 np0005465604 irqbalance[784]: IRQ 30 affinity is now unmanaged
Oct  2 03:12:19 np0005465604 irqbalance[784]: Cannot change IRQ 29 affinity: Operation not permitted
Oct  2 03:12:19 np0005465604 irqbalance[784]: IRQ 29 affinity is now unmanaged
Oct  2 03:12:20 np0005465604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 03:12:28 np0005465604 systemd[1]: Created slice User Slice of UID 1000.
Oct  2 03:12:28 np0005465604 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  2 03:12:28 np0005465604 systemd-logind[787]: New session 1 of user zuul.
Oct  2 03:12:28 np0005465604 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  2 03:12:28 np0005465604 systemd[1]: Starting User Manager for UID 1000...
Oct  2 03:12:28 np0005465604 systemd[1058]: Queued start job for default target Main User Target.
Oct  2 03:12:28 np0005465604 systemd[1058]: Created slice User Application Slice.
Oct  2 03:12:28 np0005465604 systemd[1058]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 03:12:28 np0005465604 systemd[1058]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 03:12:28 np0005465604 systemd[1058]: Reached target Paths.
Oct  2 03:12:28 np0005465604 systemd[1058]: Reached target Timers.
Oct  2 03:12:28 np0005465604 systemd[1058]: Starting D-Bus User Message Bus Socket...
Oct  2 03:12:28 np0005465604 systemd[1058]: Starting Create User's Volatile Files and Directories...
Oct  2 03:12:28 np0005465604 systemd[1058]: Listening on D-Bus User Message Bus Socket.
Oct  2 03:12:28 np0005465604 systemd[1058]: Reached target Sockets.
Oct  2 03:12:28 np0005465604 systemd[1058]: Finished Create User's Volatile Files and Directories.
Oct  2 03:12:28 np0005465604 systemd[1058]: Reached target Basic System.
Oct  2 03:12:28 np0005465604 systemd[1058]: Reached target Main User Target.
Oct  2 03:12:28 np0005465604 systemd[1058]: Startup finished in 169ms.
Oct  2 03:12:28 np0005465604 systemd[1]: Started User Manager for UID 1000.
Oct  2 03:12:28 np0005465604 systemd[1]: Started Session 1 of User zuul.
Oct  2 03:12:28 np0005465604 python3[1140]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:12:31 np0005465604 python3[1168]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:12:38 np0005465604 python3[1226]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:12:38 np0005465604 python3[1266]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  2 03:12:40 np0005465604 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 03:12:40 np0005465604 python3[1294]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgIqpCn7CArBsOtNhGUzdZ+mgyy30Sxh/HQRKMy3q7wRPXkx4J0TtVrIbr7NZkhPKn9QA2EADre5wAcOZpFq0NwWFsaG8N0W9pJ/49nO8f5IfFyfTAmhCOiAloG4EyVSwfP+RI3lZP4o5sRCmxJOw8iBEH31AvVEJonPQX5vVd+OOTJN/pL2alBh+d+lfp2hn2G03TGZ4FNV+fIBvRy1WTCpwDw1dCILArU8jQNzAto9pUR4aF9QJsWrfNqwnxmelpRL0q4LV8m1B2lwqsEDOC33uNdilWAgJK278zyW99Qb09vKAjbATx22+WJ9a9LzVfbeCQL/hjD4t0mCtnaC4k/JZVvJBkj2GmWEsDhpgZdQHUTOZ6ojVPcxkcNQCX46U2tLPdUIa9zT8WafNMvt6O7qEEsondW2Ldh77J7BXhlTp7JUKqzSkXI9N6cqVrh9wH4Y5Ge4rob8eRjJyQrOaPOysnN4wGWp6vYNdFewy5J54NqtW4oW/drjQCnP9zyrU= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:41 np0005465604 python3[1318]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:12:41 np0005465604 python3[1417]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:12:42 np0005465604 python3[1488]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759389161.3205822-207-152937798811117/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=1d578206feaf4852be7697461a1dea75_id_rsa follow=False checksum=58e8f9f884475e674bd67e0c34400daca0920d8b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:12:42 np0005465604 python3[1611]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:12:43 np0005465604 python3[1682]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759389162.3134017-240-112452136651523/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=1d578206feaf4852be7697461a1dea75_id_rsa.pub follow=False checksum=bea3cf052144d8514741c7a3effa3cfb700bcb9a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:12:44 np0005465604 python3[1730]: ansible-ping Invoked with data=pong
Oct  2 03:12:45 np0005465604 python3[1754]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:12:47 np0005465604 python3[1812]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  2 03:12:48 np0005465604 python3[1844]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:12:48 np0005465604 python3[1868]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:12:48 np0005465604 python3[1892]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:12:48 np0005465604 python3[1916]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:12:49 np0005465604 python3[1940]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:12:49 np0005465604 python3[1964]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:12:51 np0005465604 python3[1990]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:12:51 np0005465604 python3[2068]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:12:52 np0005465604 python3[2141]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759389171.4109707-21-253319237449353/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:12:53 np0005465604 python3[2189]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:53 np0005465604 python3[2213]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:53 np0005465604 python3[2237]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:53 np0005465604 python3[2261]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:54 np0005465604 python3[2285]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:54 np0005465604 python3[2309]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:54 np0005465604 python3[2333]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:55 np0005465604 python3[2357]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:55 np0005465604 python3[2381]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:55 np0005465604 python3[2405]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:56 np0005465604 python3[2429]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:56 np0005465604 python3[2453]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:56 np0005465604 python3[2477]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:56 np0005465604 python3[2501]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:57 np0005465604 python3[2525]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:57 np0005465604 python3[2549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:57 np0005465604 python3[2573]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:58 np0005465604 python3[2597]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:58 np0005465604 python3[2621]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:58 np0005465604 python3[2645]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:58 np0005465604 python3[2669]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:59 np0005465604 python3[2693]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:59 np0005465604 python3[2717]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:12:59 np0005465604 python3[2741]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:13:00 np0005465604 python3[2765]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:13:00 np0005465604 python3[2789]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:13:02 np0005465604 python3[2815]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  2 03:13:02 np0005465604 systemd[1]: Starting Time & Date Service...
Oct  2 03:13:02 np0005465604 systemd[1]: Started Time & Date Service.
Oct  2 03:13:02 np0005465604 systemd-timedated[2817]: Changed time zone to 'UTC' (UTC).
Oct  2 03:13:03 np0005465604 python3[2846]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:13:03 np0005465604 python3[2922]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:13:04 np0005465604 python3[2993]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759389183.479741-153-161332379580762/source _original_basename=tmpcsukyc2h follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:13:04 np0005465604 python3[3093]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:13:05 np0005465604 python3[3164]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759389184.3611217-183-182132074885765/source _original_basename=tmpwh7s2vf7 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:13:05 np0005465604 python3[3266]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:13:06 np0005465604 python3[3339]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759389185.5172417-231-15380201203837/source _original_basename=tmpxseuwlhu follow=False checksum=eb31a54ab353993df0881d335bb57aa163860e42 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:13:06 np0005465604 python3[3387]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:13:07 np0005465604 python3[3413]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:13:07 np0005465604 python3[3493]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:13:08 np0005465604 python3[3566]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759389187.2586474-273-123306481782036/source _original_basename=tmpom8v_sq8 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:13:08 np0005465604 python3[3617]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-027f-e581-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:13:09 np0005465604 python3[3645]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-027f-e581-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  2 03:13:10 np0005465604 python3[3674]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:13:27 np0005465604 python3[3700]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:13:33 np0005465604 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 03:14:00 np0005465604 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  2 03:14:00 np0005465604 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct  2 03:14:00 np0005465604 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  2 03:14:00 np0005465604 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  2 03:14:00 np0005465604 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct  2 03:14:00 np0005465604 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct  2 03:14:00 np0005465604 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct  2 03:14:00 np0005465604 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct  2 03:14:00 np0005465604 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct  2 03:14:00 np0005465604 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct  2 03:14:00 np0005465604 NetworkManager[856]: <info>  [1759389240.8062] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 03:14:00 np0005465604 systemd-udevd[3703]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 03:14:00 np0005465604 NetworkManager[856]: <info>  [1759389240.8268] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:14:00 np0005465604 NetworkManager[856]: <info>  [1759389240.8304] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  2 03:14:00 np0005465604 NetworkManager[856]: <info>  [1759389240.8310] device (eth1): carrier: link connected
Oct  2 03:14:00 np0005465604 NetworkManager[856]: <info>  [1759389240.8313] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  2 03:14:00 np0005465604 NetworkManager[856]: <info>  [1759389240.8322] policy: auto-activating connection 'Wired connection 1' (d18a9840-c9c9-34ba-a725-6f9fae88d80f)
Oct  2 03:14:00 np0005465604 NetworkManager[856]: <info>  [1759389240.8329] device (eth1): Activation: starting connection 'Wired connection 1' (d18a9840-c9c9-34ba-a725-6f9fae88d80f)
Oct  2 03:14:00 np0005465604 NetworkManager[856]: <info>  [1759389240.8330] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:14:00 np0005465604 NetworkManager[856]: <info>  [1759389240.8334] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:14:00 np0005465604 NetworkManager[856]: <info>  [1759389240.8341] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:14:00 np0005465604 NetworkManager[856]: <info>  [1759389240.8348] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 03:14:01 np0005465604 python3[3730]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-b7d0-d86b-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:14:12 np0005465604 python3[3810]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:14:12 np0005465604 python3[3883]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759389251.687279-102-76696999389558/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=5430476f0bc79cd249145647d8f696bc4511ca13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:14:13 np0005465604 python3[3933]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 03:14:13 np0005465604 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  2 03:14:13 np0005465604 systemd[1]: Stopped Network Manager Wait Online.
Oct  2 03:14:13 np0005465604 systemd[1]: Stopping Network Manager Wait Online...
Oct  2 03:14:13 np0005465604 systemd[1]: Stopping Network Manager...
Oct  2 03:14:13 np0005465604 NetworkManager[856]: <info>  [1759389253.4258] caught SIGTERM, shutting down normally.
Oct  2 03:14:13 np0005465604 NetworkManager[856]: <info>  [1759389253.4271] dhcp4 (eth0): canceled DHCP transaction
Oct  2 03:14:13 np0005465604 NetworkManager[856]: <info>  [1759389253.4271] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 03:14:13 np0005465604 NetworkManager[856]: <info>  [1759389253.4272] dhcp4 (eth0): state changed no lease
Oct  2 03:14:13 np0005465604 NetworkManager[856]: <info>  [1759389253.4275] manager: NetworkManager state is now CONNECTING
Oct  2 03:14:13 np0005465604 NetworkManager[856]: <info>  [1759389253.4348] dhcp4 (eth1): canceled DHCP transaction
Oct  2 03:14:13 np0005465604 NetworkManager[856]: <info>  [1759389253.4348] dhcp4 (eth1): state changed no lease
Oct  2 03:14:13 np0005465604 NetworkManager[856]: <info>  [1759389253.4401] exiting (success)
Oct  2 03:14:13 np0005465604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 03:14:13 np0005465604 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 03:14:13 np0005465604 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  2 03:14:13 np0005465604 systemd[1]: Stopped Network Manager.
Oct  2 03:14:13 np0005465604 systemd[1]: Starting Network Manager...
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.5245] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:72247ef6-349c-4c45-9097-d92b9d419ca4)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.5249] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.5346] manager[0x564594ade070]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 03:14:13 np0005465604 systemd[1]: Starting Hostname Service...
Oct  2 03:14:13 np0005465604 systemd[1]: Started Hostname Service.
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6562] hostname: hostname: using hostnamed
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6563] hostname: static hostname changed from (none) to "np0005465604.novalocal"
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6571] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6579] manager[0x564594ade070]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6580] manager[0x564594ade070]: rfkill: WWAN hardware radio set enabled
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6631] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6631] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6632] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6633] manager: Networking is enabled by state file
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6636] settings: Loaded settings plugin: keyfile (internal)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6643] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6683] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6699] dhcp: init: Using DHCP client 'internal'
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6704] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6712] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6721] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6734] device (lo): Activation: starting connection 'lo' (dbb54e49-2798-40d2-8772-855e895cdbba)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6746] device (eth0): carrier: link connected
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6754] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6763] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6765] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6778] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6790] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6801] device (eth1): carrier: link connected
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6811] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6821] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (d18a9840-c9c9-34ba-a725-6f9fae88d80f) (indicated)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6822] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6833] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6844] device (eth1): Activation: starting connection 'Wired connection 1' (d18a9840-c9c9-34ba-a725-6f9fae88d80f)
Oct  2 03:14:13 np0005465604 systemd[1]: Started Network Manager.
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6861] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6870] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6875] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6878] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6882] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6886] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6890] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6894] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6900] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6914] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6925] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6951] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6962] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.6994] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.7003] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 03:14:13 np0005465604 systemd[1]: Starting Network Manager Wait Online...
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.7023] device (lo): Activation: successful, device activated.
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.7039] dhcp4 (eth0): state changed new lease, address=38.102.83.142
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.7060] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.7149] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.7181] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.7183] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.7189] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.7195] device (eth0): Activation: successful, device activated.
Oct  2 03:14:13 np0005465604 NetworkManager[3945]: <info>  [1759389253.7204] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 03:14:14 np0005465604 python3[4018]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-b7d0-d86b-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:14:23 np0005465604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 03:14:43 np0005465604 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 03:14:48 np0005465604 systemd[1058]: Starting Mark boot as successful...
Oct  2 03:14:48 np0005465604 systemd[1058]: Finished Mark boot as successful.
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3378] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 03:14:59 np0005465604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 03:14:59 np0005465604 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3711] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3714] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3724] device (eth1): Activation: successful, device activated.
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3731] manager: startup complete
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3733] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <warn>  [1759389299.3738] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3746] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  2 03:14:59 np0005465604 systemd[1]: Finished Network Manager Wait Online.
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3868] dhcp4 (eth1): canceled DHCP transaction
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3869] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3869] dhcp4 (eth1): state changed no lease
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3891] policy: auto-activating connection 'ci-private-network' (aad9a87b-4ca1-5041-97aa-925b7b93062e)
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3897] device (eth1): Activation: starting connection 'ci-private-network' (aad9a87b-4ca1-5041-97aa-925b7b93062e)
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3898] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3901] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3912] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3920] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3955] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3959] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:14:59 np0005465604 NetworkManager[3945]: <info>  [1759389299.3969] device (eth1): Activation: successful, device activated.
Oct  2 03:15:09 np0005465604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 03:15:14 np0005465604 systemd-logind[787]: Session 1 logged out. Waiting for processes to exit.
Oct  2 03:15:18 np0005465604 systemd-logind[787]: New session 3 of user zuul.
Oct  2 03:15:18 np0005465604 systemd[1]: Started Session 3 of User zuul.
Oct  2 03:15:19 np0005465604 python3[4128]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:15:19 np0005465604 python3[4201]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759389318.6629257-267-260723747179240/source _original_basename=tmp88uyvb0q follow=False checksum=5308d68673b2e8408c48de0cb11acf24f2ed5c79 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:15:21 np0005465604 systemd[1]: session-3.scope: Deactivated successfully.
Oct  2 03:15:21 np0005465604 systemd-logind[787]: Session 3 logged out. Waiting for processes to exit.
Oct  2 03:15:21 np0005465604 systemd-logind[787]: Removed session 3.
Oct  2 03:17:48 np0005465604 systemd[1058]: Created slice User Background Tasks Slice.
Oct  2 03:17:48 np0005465604 systemd[1058]: Starting Cleanup of User's Temporary Files and Directories...
Oct  2 03:17:48 np0005465604 systemd[1058]: Finished Cleanup of User's Temporary Files and Directories.
Oct  2 03:20:48 np0005465604 systemd-logind[787]: New session 4 of user zuul.
Oct  2 03:20:48 np0005465604 systemd[1]: Started Session 4 of User zuul.
Oct  2 03:20:49 np0005465604 python3[4260]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-b3a7-4801-000000001ce8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:20:49 np0005465604 python3[4288]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:20:49 np0005465604 python3[4315]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:20:50 np0005465604 python3[4341]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:20:50 np0005465604 python3[4367]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:20:50 np0005465604 python3[4393]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:20:50 np0005465604 python3[4393]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  2 03:20:51 np0005465604 python3[4419]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 03:20:51 np0005465604 systemd[1]: Reloading.
Oct  2 03:20:51 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:20:53 np0005465604 python3[4475]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  2 03:20:53 np0005465604 python3[4501]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:20:54 np0005465604 python3[4529]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:20:54 np0005465604 python3[4557]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:20:54 np0005465604 python3[4585]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:20:55 np0005465604 python3[4612]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-b3a7-4801-000000001cee-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:20:55 np0005465604 python3[4642]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:20:57 np0005465604 systemd[1]: session-4.scope: Deactivated successfully.
Oct  2 03:20:57 np0005465604 systemd[1]: session-4.scope: Consumed 3.934s CPU time.
Oct  2 03:20:57 np0005465604 systemd-logind[787]: Session 4 logged out. Waiting for processes to exit.
Oct  2 03:20:57 np0005465604 systemd-logind[787]: Removed session 4.
Oct  2 03:20:59 np0005465604 systemd-logind[787]: New session 5 of user zuul.
Oct  2 03:20:59 np0005465604 systemd[1]: Started Session 5 of User zuul.
Oct  2 03:20:59 np0005465604 python3[4677]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  2 03:21:14 np0005465604 kernel: SELinux:  Converting 363 SID table entries...
Oct  2 03:21:14 np0005465604 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 03:21:14 np0005465604 kernel: SELinux:  policy capability open_perms=1
Oct  2 03:21:14 np0005465604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 03:21:14 np0005465604 kernel: SELinux:  policy capability always_check_network=0
Oct  2 03:21:14 np0005465604 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 03:21:14 np0005465604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 03:21:14 np0005465604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 03:21:25 np0005465604 kernel: SELinux:  Converting 363 SID table entries...
Oct  2 03:21:25 np0005465604 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 03:21:25 np0005465604 kernel: SELinux:  policy capability open_perms=1
Oct  2 03:21:25 np0005465604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 03:21:25 np0005465604 kernel: SELinux:  policy capability always_check_network=0
Oct  2 03:21:25 np0005465604 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 03:21:25 np0005465604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 03:21:25 np0005465604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 03:21:36 np0005465604 kernel: SELinux:  Converting 363 SID table entries...
Oct  2 03:21:36 np0005465604 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 03:21:36 np0005465604 kernel: SELinux:  policy capability open_perms=1
Oct  2 03:21:36 np0005465604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 03:21:36 np0005465604 kernel: SELinux:  policy capability always_check_network=0
Oct  2 03:21:36 np0005465604 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 03:21:36 np0005465604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 03:21:36 np0005465604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 03:21:37 np0005465604 setsebool[4744]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  2 03:21:37 np0005465604 setsebool[4744]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  2 03:21:48 np0005465604 kernel: SELinux:  Converting 366 SID table entries...
Oct  2 03:21:48 np0005465604 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 03:21:48 np0005465604 kernel: SELinux:  policy capability open_perms=1
Oct  2 03:21:48 np0005465604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 03:21:48 np0005465604 kernel: SELinux:  policy capability always_check_network=0
Oct  2 03:21:48 np0005465604 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 03:21:48 np0005465604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 03:21:48 np0005465604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 03:22:07 np0005465604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  2 03:22:07 np0005465604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 03:22:07 np0005465604 systemd[1]: Starting man-db-cache-update.service...
Oct  2 03:22:07 np0005465604 systemd[1]: Reloading.
Oct  2 03:22:07 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:22:07 np0005465604 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 03:22:08 np0005465604 systemd[1]: Starting PackageKit Daemon...
Oct  2 03:22:08 np0005465604 systemd[1]: Starting Authorization Manager...
Oct  2 03:22:08 np0005465604 polkitd[6178]: Started polkitd version 0.117
Oct  2 03:22:08 np0005465604 systemd[1]: Started Authorization Manager.
Oct  2 03:22:08 np0005465604 systemd[1]: Started PackageKit Daemon.
Oct  2 03:22:10 np0005465604 python3[7936]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-6342-6664-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:22:11 np0005465604 kernel: evm: overlay not supported
Oct  2 03:22:11 np0005465604 systemd[1058]: Starting D-Bus User Message Bus...
Oct  2 03:22:11 np0005465604 dbus-broker-launch[8689]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  2 03:22:11 np0005465604 dbus-broker-launch[8689]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  2 03:22:11 np0005465604 systemd[1058]: Started D-Bus User Message Bus.
Oct  2 03:22:11 np0005465604 dbus-broker-lau[8689]: Ready
Oct  2 03:22:11 np0005465604 systemd[1058]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  2 03:22:11 np0005465604 systemd[1058]: Created slice Slice /user.
Oct  2 03:22:11 np0005465604 systemd[1058]: podman-8602.scope: unit configures an IP firewall, but not running as root.
Oct  2 03:22:11 np0005465604 systemd[1058]: (This warning is only shown for the first unit using IP firewalling.)
Oct  2 03:22:11 np0005465604 systemd[1058]: Started podman-8602.scope.
Oct  2 03:22:11 np0005465604 systemd[1058]: Started podman-pause-05b158ab.scope.
Oct  2 03:22:12 np0005465604 python3[9137]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.203:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.203:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:22:12 np0005465604 systemd-logind[787]: Session 5 logged out. Waiting for processes to exit.
Oct  2 03:22:12 np0005465604 systemd[1]: session-5.scope: Deactivated successfully.
Oct  2 03:22:12 np0005465604 systemd[1]: session-5.scope: Consumed 1min 5.128s CPU time.
Oct  2 03:22:12 np0005465604 systemd-logind[787]: Removed session 5.
Oct  2 03:22:36 np0005465604 systemd-logind[787]: New session 6 of user zuul.
Oct  2 03:22:36 np0005465604 systemd[1]: Started Session 6 of User zuul.
Oct  2 03:22:36 np0005465604 python3[18030]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKnS+0ZF+FosVlAZsJIXMnQoNHJHfDB4AZP75oaeHIDQqA8CXwoMGSz2Uz88za6OzebaqwdlFRHxnou5nXuXMuw= zuul@np0005465603.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:22:37 np0005465604 python3[18188]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKnS+0ZF+FosVlAZsJIXMnQoNHJHfDB4AZP75oaeHIDQqA8CXwoMGSz2Uz88za6OzebaqwdlFRHxnou5nXuXMuw= zuul@np0005465603.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:22:37 np0005465604 python3[18597]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005465604.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  2 03:22:38 np0005465604 python3[18812]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKnS+0ZF+FosVlAZsJIXMnQoNHJHfDB4AZP75oaeHIDQqA8CXwoMGSz2Uz88za6OzebaqwdlFRHxnou5nXuXMuw= zuul@np0005465603.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 03:22:38 np0005465604 python3[19105]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:22:39 np0005465604 python3[19421]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759389758.459802-135-232518679969545/source _original_basename=tmpyyzs1s4_ follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:22:39 np0005465604 irqbalance[784]: Cannot change IRQ 27 affinity: Operation not permitted
Oct  2 03:22:39 np0005465604 irqbalance[784]: IRQ 27 affinity is now unmanaged
Oct  2 03:22:39 np0005465604 python3[19817]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct  2 03:22:39 np0005465604 systemd[1]: Starting Hostname Service...
Oct  2 03:22:39 np0005465604 systemd[1]: Started Hostname Service.
Oct  2 03:22:40 np0005465604 systemd-hostnamed[19953]: Changed pretty hostname to 'compute-0'
Oct  2 03:22:40 np0005465604 systemd-hostnamed[19953]: Hostname set to <compute-0> (static)
Oct  2 03:22:40 np0005465604 NetworkManager[3945]: <info>  [1759389760.0226] hostname: static hostname changed from "np0005465604.novalocal" to "compute-0"
Oct  2 03:22:40 np0005465604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 03:22:40 np0005465604 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 03:22:40 np0005465604 systemd[1]: session-6.scope: Deactivated successfully.
Oct  2 03:22:40 np0005465604 systemd[1]: session-6.scope: Consumed 2.150s CPU time.
Oct  2 03:22:40 np0005465604 systemd-logind[787]: Session 6 logged out. Waiting for processes to exit.
Oct  2 03:22:40 np0005465604 systemd-logind[787]: Removed session 6.
Oct  2 03:22:50 np0005465604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 03:22:57 np0005465604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 03:22:57 np0005465604 systemd[1]: Finished man-db-cache-update.service.
Oct  2 03:22:57 np0005465604 systemd[1]: man-db-cache-update.service: Consumed 1min 1.068s CPU time.
Oct  2 03:22:57 np0005465604 systemd[1]: run-r1ba5b006d8494293a056c299cea77e8f.service: Deactivated successfully.
Oct  2 03:23:10 np0005465604 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 03:26:28 np0005465604 systemd-logind[787]: New session 7 of user zuul.
Oct  2 03:26:28 np0005465604 systemd[1]: Started Session 7 of User zuul.
Oct  2 03:26:28 np0005465604 python3[26705]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:26:30 np0005465604 python3[26821]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:26:30 np0005465604 python3[26894]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759389990.084047-30202-146501533129921/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:26:31 np0005465604 python3[26920]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:26:31 np0005465604 python3[26993]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759389990.084047-30202-146501533129921/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:26:31 np0005465604 python3[27019]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:26:32 np0005465604 python3[27092]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759389990.084047-30202-146501533129921/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:26:32 np0005465604 python3[27118]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:26:32 np0005465604 python3[27191]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759389990.084047-30202-146501533129921/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:26:32 np0005465604 python3[27217]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:26:33 np0005465604 python3[27290]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759389990.084047-30202-146501533129921/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:26:33 np0005465604 python3[27316]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:26:33 np0005465604 python3[27389]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759389990.084047-30202-146501533129921/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:26:34 np0005465604 python3[27415]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:26:34 np0005465604 python3[27488]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759389990.084047-30202-146501533129921/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:26:49 np0005465604 python3[27576]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:27:07 np0005465604 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  2 03:27:07 np0005465604 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  2 03:27:07 np0005465604 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  2 03:27:07 np0005465604 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  2 03:27:14 np0005465604 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 03:31:48 np0005465604 systemd[1]: session-7.scope: Deactivated successfully.
Oct  2 03:31:48 np0005465604 systemd[1]: session-7.scope: Consumed 5.021s CPU time.
Oct  2 03:31:48 np0005465604 systemd-logind[787]: Session 7 logged out. Waiting for processes to exit.
Oct  2 03:31:48 np0005465604 systemd-logind[787]: Removed session 7.
Oct  2 03:37:16 np0005465604 systemd-logind[787]: New session 8 of user zuul.
Oct  2 03:37:16 np0005465604 systemd[1]: Started Session 8 of User zuul.
Oct  2 03:37:17 np0005465604 python3.9[27851]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:37:18 np0005465604 python3.9[28032]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:37:26 np0005465604 systemd[1]: session-8.scope: Deactivated successfully.
Oct  2 03:37:26 np0005465604 systemd[1]: session-8.scope: Consumed 8.008s CPU time.
Oct  2 03:37:26 np0005465604 systemd-logind[787]: Session 8 logged out. Waiting for processes to exit.
Oct  2 03:37:26 np0005465604 systemd-logind[787]: Removed session 8.
Oct  2 03:37:41 np0005465604 systemd-logind[787]: New session 9 of user zuul.
Oct  2 03:37:41 np0005465604 systemd[1]: Started Session 9 of User zuul.
Oct  2 03:37:42 np0005465604 python3.9[28242]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  2 03:37:44 np0005465604 python3.9[28416]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:37:45 np0005465604 python3.9[28568]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:37:46 np0005465604 python3.9[28721]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:37:47 np0005465604 python3.9[28873]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:37:48 np0005465604 python3.9[29025]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:37:48 np0005465604 python3.9[29149]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759390667.6555712-73-208322104197069/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:37:49 np0005465604 python3.9[29301]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:37:50 np0005465604 python3.9[29457]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:37:51 np0005465604 python3.9[29607]: ansible-ansible.builtin.service_facts Invoked
Oct  2 03:37:57 np0005465604 python3.9[29862]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:37:58 np0005465604 python3.9[30012]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:37:59 np0005465604 python3.9[30166]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:38:00 np0005465604 python3.9[30324]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:38:01 np0005465604 python3.9[30408]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:38:44 np0005465604 systemd[1]: Reloading.
Oct  2 03:38:44 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:38:44 np0005465604 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  2 03:38:45 np0005465604 systemd[1]: Reloading.
Oct  2 03:38:45 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:38:45 np0005465604 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  2 03:38:45 np0005465604 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  2 03:38:45 np0005465604 systemd[1]: Reloading.
Oct  2 03:38:45 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:38:45 np0005465604 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  2 03:38:45 np0005465604 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Oct  2 03:38:45 np0005465604 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Oct  2 03:39:47 np0005465604 kernel: SELinux:  Converting 2713 SID table entries...
Oct  2 03:39:47 np0005465604 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 03:39:47 np0005465604 kernel: SELinux:  policy capability open_perms=1
Oct  2 03:39:47 np0005465604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 03:39:47 np0005465604 kernel: SELinux:  policy capability always_check_network=0
Oct  2 03:39:47 np0005465604 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 03:39:47 np0005465604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 03:39:47 np0005465604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 03:39:48 np0005465604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  2 03:39:48 np0005465604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 03:39:48 np0005465604 systemd[1]: Starting man-db-cache-update.service...
Oct  2 03:39:48 np0005465604 systemd[1]: Reloading.
Oct  2 03:39:48 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:39:48 np0005465604 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 03:39:48 np0005465604 systemd[1]: Starting PackageKit Daemon...
Oct  2 03:39:49 np0005465604 systemd[1]: Started PackageKit Daemon.
Oct  2 03:39:49 np0005465604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 03:39:49 np0005465604 systemd[1]: Finished man-db-cache-update.service.
Oct  2 03:39:49 np0005465604 systemd[1]: man-db-cache-update.service: Consumed 1.183s CPU time.
Oct  2 03:39:49 np0005465604 systemd[1]: run-r7cf8591b7adc4584b97842c0a3cc1a1b.service: Deactivated successfully.
Oct  2 03:39:50 np0005465604 python3.9[31929]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:39:52 np0005465604 python3.9[32210]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  2 03:39:53 np0005465604 python3.9[32362]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  2 03:39:55 np0005465604 python3.9[32516]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:39:56 np0005465604 python3.9[32668]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  2 03:39:57 np0005465604 python3.9[32820]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:39:58 np0005465604 python3.9[32972]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:39:59 np0005465604 python3.9[33095]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759390797.9518645-227-39911457990214/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4803740a300e76cc85ac6885ccf0e509da0d5b51 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:40:02 np0005465604 python3.9[33247]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  2 03:40:03 np0005465604 python3.9[33401]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 03:40:03 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 03:40:04 np0005465604 python3.9[33560]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 03:40:05 np0005465604 python3.9[33720]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  2 03:40:06 np0005465604 python3.9[33873]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 03:40:07 np0005465604 python3.9[34031]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  2 03:40:08 np0005465604 python3.9[34183]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:40:10 np0005465604 python3.9[34336]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:40:11 np0005465604 python3.9[34488]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:40:11 np0005465604 python3.9[34611]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759390810.602941-322-24280073684438/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:40:13 np0005465604 python3.9[34763]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 03:40:13 np0005465604 systemd[1]: Starting Load Kernel Modules...
Oct  2 03:40:13 np0005465604 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  2 03:40:13 np0005465604 kernel: Bridge firewalling registered
Oct  2 03:40:13 np0005465604 systemd-modules-load[34767]: Inserted module 'br_netfilter'
Oct  2 03:40:13 np0005465604 systemd[1]: Finished Load Kernel Modules.
Oct  2 03:40:14 np0005465604 python3.9[34922]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:40:14 np0005465604 python3.9[35045]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759390813.5166683-345-14584103941612/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:40:15 np0005465604 python3.9[35197]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:40:18 np0005465604 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Oct  2 03:40:18 np0005465604 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Oct  2 03:40:19 np0005465604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 03:40:19 np0005465604 systemd[1]: Starting man-db-cache-update.service...
Oct  2 03:40:19 np0005465604 systemd[1]: Reloading.
Oct  2 03:40:19 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:40:19 np0005465604 systemd[1]: Starting dnf makecache...
Oct  2 03:40:19 np0005465604 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 03:40:19 np0005465604 dnf[35280]: Failed determining last makecache time.
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-openstack-barbican-42b4c41831408a8e323 124 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 182 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-openstack-cinder-1c00d6490d88e436f26ef 173 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-python-stevedore-c4acc5639fd2329372142 163 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-python-cloudkitty-tests-tempest-3961dc 164 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-os-net-config-28598c2978b9e2207dd19fc4 173 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 193 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-python-designate-tests-tempest-347fdbc 138 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-openstack-glance-1fd12c29b339f30fe823e 178 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 163 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-openstack-manila-3c01b7181572c95dac462 173 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-python-whitebox-neutron-tests-tempest- 184 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-openstack-octavia-ba397f07a7331190208c 187 kB/s | 3.0 kB     00:00
Oct  2 03:40:19 np0005465604 dnf[35280]: delorean-openstack-watcher-c014f81a8647287f6dcc 189 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: delorean-edpm-image-builder-55ba53cf215b14ed95b 179 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 186 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: delorean-openstack-swift-dc98a8463506ac520c469a 193 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: delorean-python-tempestconf-8515371b7cceebd4282 178 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: delorean-openstack-heat-ui-013accbfd179753bc3f0 184 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: CentOS Stream 9 - BaseOS                         70 kB/s | 6.7 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: CentOS Stream 9 - AppStream                      70 kB/s | 6.8 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: CentOS Stream 9 - CRB                            58 kB/s | 6.6 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: CentOS Stream 9 - Extras packages                63 kB/s | 8.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: dlrn-antelope-testing                            91 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: dlrn-antelope-build-deps                         95 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 python3.9[36446]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:40:20 np0005465604 dnf[35280]: centos9-rabbitmq                                 79 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: centos9-storage                                  89 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: centos9-opstools                                 88 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: NFV SIG OpenvSwitch                              87 kB/s | 3.0 kB     00:00
Oct  2 03:40:20 np0005465604 dnf[35280]: repo-setup-centos-appstream                     101 kB/s | 4.4 kB     00:00
Oct  2 03:40:21 np0005465604 dnf[35280]: repo-setup-centos-baseos                        171 kB/s | 3.9 kB     00:00
Oct  2 03:40:21 np0005465604 dnf[35280]: repo-setup-centos-highavailability              173 kB/s | 3.9 kB     00:00
Oct  2 03:40:21 np0005465604 dnf[35280]: repo-setup-centos-powertools                    111 kB/s | 4.3 kB     00:00
Oct  2 03:40:21 np0005465604 dnf[35280]: Extra Packages for Enterprise Linux 9 - x86_64   86 kB/s |  28 kB     00:00
Oct  2 03:40:21 np0005465604 python3.9[37412]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  2 03:40:22 np0005465604 dnf[35280]: Metadata cache created.
Oct  2 03:40:22 np0005465604 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  2 03:40:22 np0005465604 systemd[1]: Finished dnf makecache.
Oct  2 03:40:22 np0005465604 systemd[1]: dnf-makecache.service: Consumed 1.737s CPU time.
Oct  2 03:40:22 np0005465604 python3.9[38238]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:40:23 np0005465604 python3.9[39110]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:40:23 np0005465604 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 03:40:23 np0005465604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 03:40:23 np0005465604 systemd[1]: Finished man-db-cache-update.service.
Oct  2 03:40:23 np0005465604 systemd[1]: man-db-cache-update.service: Consumed 5.449s CPU time.
Oct  2 03:40:23 np0005465604 systemd[1]: run-r5afd1dc3094b4dd98463cb3808b6ea9a.service: Deactivated successfully.
Oct  2 03:40:23 np0005465604 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 03:40:24 np0005465604 python3.9[39777]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:40:24 np0005465604 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  2 03:40:24 np0005465604 systemd[1]: tuned.service: Deactivated successfully.
Oct  2 03:40:24 np0005465604 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  2 03:40:24 np0005465604 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 03:40:25 np0005465604 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 03:40:25 np0005465604 python3.9[39939]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  2 03:40:28 np0005465604 python3.9[40091]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:40:28 np0005465604 systemd[1]: Reloading.
Oct  2 03:40:28 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:40:29 np0005465604 python3.9[40281]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:40:30 np0005465604 systemd[1]: Reloading.
Oct  2 03:40:30 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:40:31 np0005465604 python3.9[40470]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:40:32 np0005465604 python3.9[40623]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:40:32 np0005465604 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  2 03:40:32 np0005465604 python3.9[40776]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:40:35 np0005465604 python3.9[40938]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:40:35 np0005465604 python3.9[41091]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 03:40:35 np0005465604 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  2 03:40:35 np0005465604 systemd[1]: Stopped Apply Kernel Variables.
Oct  2 03:40:35 np0005465604 systemd[1]: Stopping Apply Kernel Variables...
Oct  2 03:40:36 np0005465604 systemd[1]: Starting Apply Kernel Variables...
Oct  2 03:40:36 np0005465604 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  2 03:40:36 np0005465604 systemd[1]: Finished Apply Kernel Variables.
Oct  2 03:40:36 np0005465604 systemd[1]: session-9.scope: Deactivated successfully.
Oct  2 03:40:36 np0005465604 systemd[1]: session-9.scope: Consumed 2min 12.238s CPU time.
Oct  2 03:40:36 np0005465604 systemd-logind[787]: Session 9 logged out. Waiting for processes to exit.
Oct  2 03:40:36 np0005465604 systemd-logind[787]: Removed session 9.
Oct  2 03:40:41 np0005465604 systemd-logind[787]: New session 10 of user zuul.
Oct  2 03:40:41 np0005465604 systemd[1]: Started Session 10 of User zuul.
Oct  2 03:40:43 np0005465604 python3.9[41274]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:40:44 np0005465604 python3.9[41430]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  2 03:40:45 np0005465604 python3.9[41583]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 03:40:46 np0005465604 python3.9[41741]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 03:40:47 np0005465604 python3.9[41901]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:40:48 np0005465604 python3.9[41985]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 03:40:51 np0005465604 python3.9[42148]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:41:02 np0005465604 kernel: SELinux:  Converting 2724 SID table entries...
Oct  2 03:41:02 np0005465604 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 03:41:02 np0005465604 kernel: SELinux:  policy capability open_perms=1
Oct  2 03:41:02 np0005465604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 03:41:02 np0005465604 kernel: SELinux:  policy capability always_check_network=0
Oct  2 03:41:02 np0005465604 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 03:41:02 np0005465604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 03:41:02 np0005465604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 03:41:02 np0005465604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  2 03:41:02 np0005465604 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  2 03:41:03 np0005465604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 03:41:03 np0005465604 systemd[1]: Starting man-db-cache-update.service...
Oct  2 03:41:03 np0005465604 systemd[1]: Reloading.
Oct  2 03:41:03 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:41:03 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:41:03 np0005465604 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 03:41:04 np0005465604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 03:41:04 np0005465604 systemd[1]: Finished man-db-cache-update.service.
Oct  2 03:41:04 np0005465604 systemd[1]: run-r6ac93879b231446bbbd3b48c6850237a.service: Deactivated successfully.
Oct  2 03:41:05 np0005465604 python3.9[43250]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 03:41:05 np0005465604 systemd[1]: Reloading.
Oct  2 03:41:05 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:41:05 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:41:05 np0005465604 systemd[1]: Starting Open vSwitch Database Unit...
Oct  2 03:41:05 np0005465604 chown[43292]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  2 03:41:05 np0005465604 ovs-ctl[43297]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  2 03:41:05 np0005465604 ovs-ctl[43297]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  2 03:41:05 np0005465604 ovs-ctl[43297]: Starting ovsdb-server [  OK  ]
Oct  2 03:41:06 np0005465604 ovs-vsctl[43346]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  2 03:41:06 np0005465604 ovs-vsctl[43366]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"45c6349c-a870-4e27-8117-4ccd02005c80\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  2 03:41:06 np0005465604 ovs-ctl[43297]: Configuring Open vSwitch system IDs [  OK  ]
Oct  2 03:41:06 np0005465604 ovs-ctl[43297]: Enabling remote OVSDB managers [  OK  ]
Oct  2 03:41:06 np0005465604 ovs-vsctl[43372]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  2 03:41:06 np0005465604 systemd[1]: Started Open vSwitch Database Unit.
Oct  2 03:41:06 np0005465604 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  2 03:41:06 np0005465604 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  2 03:41:06 np0005465604 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  2 03:41:06 np0005465604 kernel: openvswitch: Open vSwitch switching datapath
Oct  2 03:41:06 np0005465604 ovs-ctl[43417]: Inserting openvswitch module [  OK  ]
Oct  2 03:41:06 np0005465604 ovs-ctl[43386]: Starting ovs-vswitchd [  OK  ]
Oct  2 03:41:06 np0005465604 ovs-ctl[43386]: Enabling remote OVSDB managers [  OK  ]
Oct  2 03:41:06 np0005465604 ovs-vsctl[43434]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  2 03:41:06 np0005465604 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  2 03:41:06 np0005465604 systemd[1]: Starting Open vSwitch...
Oct  2 03:41:06 np0005465604 systemd[1]: Finished Open vSwitch.
Oct  2 03:41:07 np0005465604 python3.9[43586]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:41:08 np0005465604 python3.9[43738]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  2 03:41:09 np0005465604 kernel: SELinux:  Converting 2738 SID table entries...
Oct  2 03:41:09 np0005465604 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 03:41:09 np0005465604 kernel: SELinux:  policy capability open_perms=1
Oct  2 03:41:09 np0005465604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 03:41:09 np0005465604 kernel: SELinux:  policy capability always_check_network=0
Oct  2 03:41:09 np0005465604 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 03:41:09 np0005465604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 03:41:09 np0005465604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 03:41:10 np0005465604 python3.9[43894]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:41:11 np0005465604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  2 03:41:11 np0005465604 python3.9[44052]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:41:13 np0005465604 python3.9[44205]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:41:15 np0005465604 python3.9[44492]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 03:41:16 np0005465604 python3.9[44642]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:41:17 np0005465604 python3.9[44796]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:41:18 np0005465604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 03:41:18 np0005465604 systemd[1]: Starting man-db-cache-update.service...
Oct  2 03:41:18 np0005465604 systemd[1]: Reloading.
Oct  2 03:41:18 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:41:18 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:41:19 np0005465604 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 03:41:19 np0005465604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 03:41:19 np0005465604 systemd[1]: Finished man-db-cache-update.service.
Oct  2 03:41:19 np0005465604 systemd[1]: run-re97aec98b45240fcafd9c7d3e47c04ed.service: Deactivated successfully.
Oct  2 03:41:20 np0005465604 python3.9[45113]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 03:41:20 np0005465604 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  2 03:41:20 np0005465604 systemd[1]: Stopped Network Manager Wait Online.
Oct  2 03:41:20 np0005465604 systemd[1]: Stopping Network Manager Wait Online...
Oct  2 03:41:20 np0005465604 systemd[1]: Stopping Network Manager...
Oct  2 03:41:20 np0005465604 NetworkManager[3945]: <info>  [1759390880.4386] caught SIGTERM, shutting down normally.
Oct  2 03:41:20 np0005465604 NetworkManager[3945]: <info>  [1759390880.4399] dhcp4 (eth0): canceled DHCP transaction
Oct  2 03:41:20 np0005465604 NetworkManager[3945]: <info>  [1759390880.4400] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 03:41:20 np0005465604 NetworkManager[3945]: <info>  [1759390880.4400] dhcp4 (eth0): state changed no lease
Oct  2 03:41:20 np0005465604 NetworkManager[3945]: <info>  [1759390880.4401] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 03:41:20 np0005465604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 03:41:20 np0005465604 NetworkManager[3945]: <info>  [1759390880.4684] exiting (success)
Oct  2 03:41:20 np0005465604 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 03:41:20 np0005465604 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  2 03:41:20 np0005465604 systemd[1]: Stopped Network Manager.
Oct  2 03:41:20 np0005465604 systemd[1]: NetworkManager.service: Consumed 9.767s CPU time, 4.1M memory peak, read 0B from disk, written 21.0K to disk.
Oct  2 03:41:20 np0005465604 systemd[1]: Starting Network Manager...
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.5598] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:72247ef6-349c-4c45-9097-d92b9d419ca4)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.5602] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.5668] manager[0x55d8dd87d090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 03:41:20 np0005465604 systemd[1]: Starting Hostname Service...
Oct  2 03:41:20 np0005465604 systemd[1]: Started Hostname Service.
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6759] hostname: hostname: using hostnamed
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6760] hostname: static hostname changed from (none) to "compute-0"
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6765] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6770] manager[0x55d8dd87d090]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6770] manager[0x55d8dd87d090]: rfkill: WWAN hardware radio set enabled
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6794] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6804] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6804] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6805] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6805] manager: Networking is enabled by state file
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6808] settings: Loaded settings plugin: keyfile (internal)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6812] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6842] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6855] dhcp: init: Using DHCP client 'internal'
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6858] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6864] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6872] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6881] device (lo): Activation: starting connection 'lo' (dbb54e49-2798-40d2-8772-855e895cdbba)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6888] device (eth0): carrier: link connected
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6894] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6899] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6900] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6906] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6914] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6921] device (eth1): carrier: link connected
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6926] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6931] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (aad9a87b-4ca1-5041-97aa-925b7b93062e) (indicated)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6932] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6938] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6946] device (eth1): Activation: starting connection 'ci-private-network' (aad9a87b-4ca1-5041-97aa-925b7b93062e)
Oct  2 03:41:20 np0005465604 systemd[1]: Started Network Manager.
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6955] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6963] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6985] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6989] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6992] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6995] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.6999] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7003] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7010] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7017] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7020] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7039] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7051] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7058] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7060] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7064] device (lo): Activation: successful, device activated.
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7070] dhcp4 (eth0): state changed new lease, address=38.102.83.142
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7076] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7140] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7153] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7157] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7161] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7164] device (eth1): Activation: successful, device activated.
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7191] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7192] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7196] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7200] device (eth0): Activation: successful, device activated.
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7205] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 03:41:20 np0005465604 NetworkManager[45129]: <info>  [1759390880.7208] manager: startup complete
Oct  2 03:41:20 np0005465604 systemd[1]: Starting Network Manager Wait Online...
Oct  2 03:41:20 np0005465604 systemd[1]: Finished Network Manager Wait Online.
Oct  2 03:41:21 np0005465604 python3.9[45340]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:41:26 np0005465604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 03:41:26 np0005465604 systemd[1]: Starting man-db-cache-update.service...
Oct  2 03:41:26 np0005465604 systemd[1]: Reloading.
Oct  2 03:41:26 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:41:26 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:41:26 np0005465604 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 03:41:27 np0005465604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 03:41:27 np0005465604 systemd[1]: Finished man-db-cache-update.service.
Oct  2 03:41:27 np0005465604 systemd[1]: run-r2efce90517584b669188b614745c0a57.service: Deactivated successfully.
Oct  2 03:41:28 np0005465604 python3.9[45802]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:41:29 np0005465604 python3.9[45954]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:41:30 np0005465604 python3.9[46108]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:41:30 np0005465604 python3.9[46260]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:41:30 np0005465604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 03:41:31 np0005465604 python3.9[46412]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:41:32 np0005465604 python3.9[46564]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:41:32 np0005465604 python3.9[46716]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:41:33 np0005465604 python3.9[46839]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759390892.3730977-229-60996392433461/.source _original_basename=._e_nz379 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:41:34 np0005465604 python3.9[46991]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:41:35 np0005465604 python3.9[47143]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  2 03:41:36 np0005465604 python3.9[47295]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:41:38 np0005465604 python3.9[47722]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  2 03:41:39 np0005465604 irqbalance[784]: Cannot change IRQ 26 affinity: Operation not permitted
Oct  2 03:41:39 np0005465604 irqbalance[784]: IRQ 26 affinity is now unmanaged
Oct  2 03:41:39 np0005465604 ansible-async_wrapper.py[47897]: Invoked with j73868693587 300 /home/zuul/.ansible/tmp/ansible-tmp-1759390898.7834196-295-241647680426962/AnsiballZ_edpm_os_net_config.py _
Oct  2 03:41:39 np0005465604 ansible-async_wrapper.py[47900]: Starting module and watcher
Oct  2 03:41:39 np0005465604 ansible-async_wrapper.py[47900]: Start watching 47901 (300)
Oct  2 03:41:39 np0005465604 ansible-async_wrapper.py[47901]: Start module (47901)
Oct  2 03:41:39 np0005465604 ansible-async_wrapper.py[47897]: Return async_wrapper task started.
Oct  2 03:41:39 np0005465604 python3.9[47902]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  2 03:41:40 np0005465604 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  2 03:41:40 np0005465604 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  2 03:41:40 np0005465604 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  2 03:41:40 np0005465604 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  2 03:41:40 np0005465604 kernel: cfg80211: failed to load regulatory.db
Oct  2 03:41:41 np0005465604 NetworkManager[45129]: <info>  [1759390901.9953] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47903 uid=0 result="success"
Oct  2 03:41:41 np0005465604 NetworkManager[45129]: <info>  [1759390901.9980] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0819] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0825] audit: op="connection-add" uuid="4e254bdc-a780-4010-9132-27492f81a4e7" name="br-ex-br" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0842] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0845] audit: op="connection-add" uuid="699e4cc4-2327-446e-83d3-17d106fa7738" name="br-ex-port" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0863] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0866] audit: op="connection-add" uuid="eeafd0b2-0c8c-4e97-8ee2-ba90f0a3fb99" name="eth1-port" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0885] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0888] audit: op="connection-add" uuid="7e5e5364-822a-4627-b44f-f7c4923724f2" name="vlan20-port" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0905] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0908] audit: op="connection-add" uuid="76622820-4807-4ab7-8858-a2444b3fccbd" name="vlan21-port" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0924] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0927] audit: op="connection-add" uuid="ca2b1904-bbba-451e-9a13-a8308b917188" name="vlan22-port" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0945] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0948] audit: op="connection-add" uuid="808f3658-bd94-4ad5-a0e2-4bb9292c8bed" name="vlan23-port" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0975] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.0996] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1000] audit: op="connection-add" uuid="8bde1b73-d3fe-4cdb-8cba-c88cb62795f5" name="br-ex-if" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1041] audit: op="connection-update" uuid="aad9a87b-4ca1-5041-97aa-925b7b93062e" name="ci-private-network" args="ovs-interface.type,ovs-external-ids.data,connection.port-type,connection.timestamp,connection.master,connection.controller,connection.slave-type,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.routing-rules,ipv4.routes,ipv4.never-default,ipv6.routing-rules,ipv6.addresses,ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv6.routes" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1072] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1075] audit: op="connection-add" uuid="aab22982-4140-4bbc-be83-d4cad0b2bd3c" name="vlan20-if" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1105] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1108] audit: op="connection-add" uuid="8d3f8c3e-11c0-411d-8119-106b5f9323ae" name="vlan21-if" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1136] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1139] audit: op="connection-add" uuid="93fef4e7-e181-4611-88fd-bb2d182d57bd" name="vlan22-if" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1168] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1172] audit: op="connection-add" uuid="52768433-707e-4308-956e-3c1c5685cac4" name="vlan23-if" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1192] audit: op="connection-delete" uuid="d18a9840-c9c9-34ba-a725-6f9fae88d80f" name="Wired connection 1" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1216] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1234] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1243] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (4e254bdc-a780-4010-9132-27492f81a4e7)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1245] audit: op="connection-activate" uuid="4e254bdc-a780-4010-9132-27492f81a4e7" name="br-ex-br" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1250] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1262] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1271] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (699e4cc4-2327-446e-83d3-17d106fa7738)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1276] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1289] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1297] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (eeafd0b2-0c8c-4e97-8ee2-ba90f0a3fb99)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1301] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1313] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1320] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (7e5e5364-822a-4627-b44f-f7c4923724f2)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1324] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1339] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1346] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (76622820-4807-4ab7-8858-a2444b3fccbd)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1350] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1362] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1370] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ca2b1904-bbba-451e-9a13-a8308b917188)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1376] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1389] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1398] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (808f3658-bd94-4ad5-a0e2-4bb9292c8bed)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1400] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1405] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1409] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1421] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1431] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1439] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (8bde1b73-d3fe-4cdb-8cba-c88cb62795f5)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1441] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1447] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1451] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1455] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1459] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1480] device (eth1): disconnecting for new activation request.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1483] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1489] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1493] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1496] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1502] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1511] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1521] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (aab22982-4140-4bbc-be83-d4cad0b2bd3c)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1523] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1529] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1533] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1536] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1542] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1551] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1559] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (8d3f8c3e-11c0-411d-8119-106b5f9323ae)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1562] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1569] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1573] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1577] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1582] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1591] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1600] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (93fef4e7-e181-4611-88fd-bb2d182d57bd)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1602] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1608] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1612] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1616] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1623] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:41:42 np0005465604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1632] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1641] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (52768433-707e-4308-956e-3c1c5685cac4)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1642] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1647] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1651] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1653] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1658] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1682] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.addr-gen-mode" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1685] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1692] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1695] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1705] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1709] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1713] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1717] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1718] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1724] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 kernel: ovs-system: entered promiscuous mode
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1727] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1730] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1732] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1737] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1741] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1744] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1746] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1751] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1756] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 systemd-udevd[47907]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 03:41:42 np0005465604 kernel: Timeout policy base is empty
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1759] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1761] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1767] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1771] dhcp4 (eth0): canceled DHCP transaction
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1772] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1772] dhcp4 (eth0): state changed no lease
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1773] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1785] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1790] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47903 uid=0 result="fail" reason="Device is not activated"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1797] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1831] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1838] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1846] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1852] dhcp4 (eth0): state changed new lease, address=38.102.83.142
Oct  2 03:41:42 np0005465604 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1914] device (eth1): disconnecting for new activation request.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1915] audit: op="connection-activate" uuid="aad9a87b-4ca1-5041-97aa-925b7b93062e" name="ci-private-network" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1915] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.1986] device (eth1): Activation: starting connection 'ci-private-network' (aad9a87b-4ca1-5041-97aa-925b7b93062e)
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2007] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2011] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2017] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2019] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2020] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47903 uid=0 result="success"
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2021] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2022] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2023] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2024] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2025] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2027] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2033] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2039] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2044] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2049] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2053] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2057] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2061] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2066] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2070] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2077] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2083] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2087] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2092] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2097] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2103] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2107] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2156] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2159] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2165] device (eth1): Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 kernel: br-ex: entered promiscuous mode
Oct  2 03:41:42 np0005465604 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  2 03:41:42 np0005465604 kernel: vlan22: entered promiscuous mode
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2316] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  2 03:41:42 np0005465604 systemd-udevd[47908]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2327] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2362] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2367] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2373] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 kernel: vlan21: entered promiscuous mode
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2426] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2435] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2454] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2455] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2459] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2509] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2519] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 kernel: vlan23: entered promiscuous mode
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2569] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2571] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2575] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 kernel: vlan20: entered promiscuous mode
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2656] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2670] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2687] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2688] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2692] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2744] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2759] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2775] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2776] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 03:41:42 np0005465604 NetworkManager[45129]: <info>  [1759390902.2781] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 03:41:43 np0005465604 NetworkManager[45129]: <info>  [1759390903.3899] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47903 uid=0 result="success"
Oct  2 03:41:43 np0005465604 NetworkManager[45129]: <info>  [1759390903.6219] checkpoint[0x55d8dd854950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  2 03:41:43 np0005465604 NetworkManager[45129]: <info>  [1759390903.6222] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47903 uid=0 result="success"
Oct  2 03:41:43 np0005465604 python3.9[48260]: ansible-ansible.legacy.async_status Invoked with jid=j73868693587.47897 mode=status _async_dir=/root/.ansible_async
Oct  2 03:41:43 np0005465604 NetworkManager[45129]: <info>  [1759390903.8812] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47903 uid=0 result="success"
Oct  2 03:41:43 np0005465604 NetworkManager[45129]: <info>  [1759390903.8822] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47903 uid=0 result="success"
Oct  2 03:41:44 np0005465604 NetworkManager[45129]: <info>  [1759390904.0719] audit: op="networking-control" arg="global-dns-configuration" pid=47903 uid=0 result="success"
Oct  2 03:41:44 np0005465604 NetworkManager[45129]: <info>  [1759390904.0844] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct  2 03:41:44 np0005465604 NetworkManager[45129]: <info>  [1759390904.0872] audit: op="networking-control" arg="global-dns-configuration" pid=47903 uid=0 result="success"
Oct  2 03:41:44 np0005465604 NetworkManager[45129]: <info>  [1759390904.0886] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47903 uid=0 result="success"
Oct  2 03:41:44 np0005465604 NetworkManager[45129]: <info>  [1759390904.2143] checkpoint[0x55d8dd854a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  2 03:41:44 np0005465604 NetworkManager[45129]: <info>  [1759390904.2146] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47903 uid=0 result="success"
Oct  2 03:41:44 np0005465604 ansible-async_wrapper.py[47901]: Module complete (47901)
Oct  2 03:41:44 np0005465604 ansible-async_wrapper.py[47900]: Done in kid B.
Oct  2 03:41:47 np0005465604 python3.9[48366]: ansible-ansible.legacy.async_status Invoked with jid=j73868693587.47897 mode=status _async_dir=/root/.ansible_async
Oct  2 03:41:47 np0005465604 python3.9[48466]: ansible-ansible.legacy.async_status Invoked with jid=j73868693587.47897 mode=cleanup _async_dir=/root/.ansible_async
Oct  2 03:41:48 np0005465604 python3.9[48618]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:41:49 np0005465604 python3.9[48741]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759390908.1806734-322-28607203773689/.source.returncode _original_basename=.byp3_epy follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:41:50 np0005465604 python3.9[48893]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:41:50 np0005465604 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 03:41:50 np0005465604 python3.9[49017]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759390909.6052206-338-45516805825754/.source.cfg _original_basename=.gds3ylyo follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:41:51 np0005465604 python3.9[49171]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 03:41:51 np0005465604 systemd[1]: Reloading Network Manager...
Oct  2 03:41:51 np0005465604 NetworkManager[45129]: <info>  [1759390911.6438] audit: op="reload" arg="0" pid=49175 uid=0 result="success"
Oct  2 03:41:51 np0005465604 NetworkManager[45129]: <info>  [1759390911.6447] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  2 03:41:51 np0005465604 systemd[1]: Reloaded Network Manager.
Oct  2 03:41:52 np0005465604 systemd-logind[787]: Session 10 logged out. Waiting for processes to exit.
Oct  2 03:41:52 np0005465604 systemd[1]: session-10.scope: Deactivated successfully.
Oct  2 03:41:52 np0005465604 systemd[1]: session-10.scope: Consumed 50.425s CPU time.
Oct  2 03:41:52 np0005465604 systemd-logind[787]: Removed session 10.
Oct  2 03:41:58 np0005465604 systemd-logind[787]: New session 11 of user zuul.
Oct  2 03:41:58 np0005465604 systemd[1]: Started Session 11 of User zuul.
Oct  2 03:41:59 np0005465604 python3.9[49359]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:42:00 np0005465604 python3.9[49514]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:42:01 np0005465604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 03:42:01 np0005465604 python3.9[49708]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:42:02 np0005465604 systemd[1]: session-11.scope: Deactivated successfully.
Oct  2 03:42:02 np0005465604 systemd[1]: session-11.scope: Consumed 2.608s CPU time.
Oct  2 03:42:02 np0005465604 systemd-logind[787]: Session 11 logged out. Waiting for processes to exit.
Oct  2 03:42:02 np0005465604 systemd-logind[787]: Removed session 11.
Oct  2 03:42:08 np0005465604 systemd-logind[787]: New session 12 of user zuul.
Oct  2 03:42:08 np0005465604 systemd[1]: Started Session 12 of User zuul.
Oct  2 03:42:09 np0005465604 python3.9[49889]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:42:10 np0005465604 python3.9[50044]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:42:11 np0005465604 python3.9[50200]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:42:13 np0005465604 python3.9[50285]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:42:15 np0005465604 python3.9[50438]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:42:16 np0005465604 python3.9[50633]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:42:17 np0005465604 python3.9[50785]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:42:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-compat1236441195-merged.mount: Deactivated successfully.
Oct  2 03:42:17 np0005465604 podman[50786]: 2025-10-02 07:42:17.642135289 +0000 UTC m=+0.152144545 system refresh
Oct  2 03:42:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:42:18 np0005465604 python3.9[50948]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:42:19 np0005465604 python3.9[51071]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759390937.8512976-79-91578209584390/.source.json follow=False _original_basename=podman_network_config.j2 checksum=594d9547d3e5acf722ca8a7f3b66abc1284f30f8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:42:20 np0005465604 python3.9[51223]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:42:21 np0005465604 python3.9[51346]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759390939.8021064-94-5506016039847/.source.conf follow=False _original_basename=registries.conf.j2 checksum=7583035fe00323822d170f910e3cbd96dee33d94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:42:22 np0005465604 python3.9[51498]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:42:22 np0005465604 python3.9[51650]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:42:23 np0005465604 python3.9[51802]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:42:24 np0005465604 python3.9[51954]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:42:25 np0005465604 python3.9[52106]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:42:27 np0005465604 python3.9[52259]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:42:28 np0005465604 python3.9[52413]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:42:28 np0005465604 python3.9[52565]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:42:29 np0005465604 python3.9[52717]: ansible-service_facts Invoked
Oct  2 03:42:29 np0005465604 network[52734]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 03:42:30 np0005465604 network[52735]: 'network-scripts' will be removed from distribution in near future.
Oct  2 03:42:30 np0005465604 network[52736]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 03:42:35 np0005465604 python3.9[53190]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:42:38 np0005465604 python3.9[53343]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  2 03:42:39 np0005465604 python3.9[53495]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:42:40 np0005465604 python3.9[53620]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759390958.8801746-226-2621899063/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:42:40 np0005465604 python3.9[53774]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:42:41 np0005465604 python3.9[53899]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759390960.3087966-241-232152689993220/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:42:43 np0005465604 python3.9[54053]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:42:44 np0005465604 python3.9[54207]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:42:45 np0005465604 python3.9[54291]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:42:46 np0005465604 python3.9[54445]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:42:47 np0005465604 python3.9[54529]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 03:42:47 np0005465604 chronyd[796]: chronyd exiting
Oct  2 03:42:47 np0005465604 systemd[1]: Stopping NTP client/server...
Oct  2 03:42:47 np0005465604 systemd[1]: chronyd.service: Deactivated successfully.
Oct  2 03:42:47 np0005465604 systemd[1]: Stopped NTP client/server.
Oct  2 03:42:47 np0005465604 systemd[1]: Starting NTP client/server...
Oct  2 03:42:47 np0005465604 chronyd[54537]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  2 03:42:47 np0005465604 chronyd[54537]: Frequency 9.248 +/- 0.210 ppm read from /var/lib/chrony/drift
Oct  2 03:42:47 np0005465604 chronyd[54537]: Loaded seccomp filter (level 2)
Oct  2 03:42:47 np0005465604 systemd[1]: Started NTP client/server.
Oct  2 03:42:48 np0005465604 systemd[1]: session-12.scope: Deactivated successfully.
Oct  2 03:42:48 np0005465604 systemd[1]: session-12.scope: Consumed 27.731s CPU time.
Oct  2 03:42:48 np0005465604 systemd-logind[787]: Session 12 logged out. Waiting for processes to exit.
Oct  2 03:42:48 np0005465604 systemd-logind[787]: Removed session 12.
Oct  2 03:42:53 np0005465604 systemd-logind[787]: New session 13 of user zuul.
Oct  2 03:42:53 np0005465604 systemd[1]: Started Session 13 of User zuul.
Oct  2 03:42:54 np0005465604 python3.9[54719]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:42:55 np0005465604 python3.9[54871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:42:56 np0005465604 python3.9[54994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759390974.82365-34-130492593573576/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:42:56 np0005465604 systemd[1]: session-13.scope: Deactivated successfully.
Oct  2 03:42:56 np0005465604 systemd[1]: session-13.scope: Consumed 1.965s CPU time.
Oct  2 03:42:56 np0005465604 systemd-logind[787]: Session 13 logged out. Waiting for processes to exit.
Oct  2 03:42:56 np0005465604 systemd-logind[787]: Removed session 13.
Oct  2 03:43:02 np0005465604 systemd-logind[787]: New session 14 of user zuul.
Oct  2 03:43:02 np0005465604 systemd[1]: Started Session 14 of User zuul.
Oct  2 03:43:03 np0005465604 python3.9[55172]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:43:05 np0005465604 python3.9[55328]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:43:06 np0005465604 python3.9[55503]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:43:06 np0005465604 python3.9[55626]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759390985.3098044-41-183648596843767/.source.json _original_basename=.5x0tj047 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:43:07 np0005465604 python3.9[55778]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:43:08 np0005465604 python3.9[55901]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759390987.1861327-64-180111677014020/.source _original_basename=.2mjiob3z follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:43:09 np0005465604 python3.9[56053]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:43:09 np0005465604 python3.9[56205]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:43:10 np0005465604 python3.9[56328]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759390989.3108299-88-128061246722862/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:43:11 np0005465604 python3.9[56480]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:43:11 np0005465604 python3.9[56603]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759390990.5152855-88-189453790154804/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:43:12 np0005465604 python3.9[56755]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:43:13 np0005465604 python3.9[56907]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:43:13 np0005465604 python3.9[57030]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759390992.5693727-125-238626244122379/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:43:14 np0005465604 python3.9[57182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:43:15 np0005465604 python3.9[57305]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759390993.8604777-140-251846085119997/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:43:16 np0005465604 python3.9[57457]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:43:16 np0005465604 systemd[1]: Reloading.
Oct  2 03:43:16 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:43:16 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:43:16 np0005465604 systemd[1]: Reloading.
Oct  2 03:43:16 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:43:16 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:43:16 np0005465604 systemd[1]: Starting EDPM Container Shutdown...
Oct  2 03:43:16 np0005465604 systemd[1]: Finished EDPM Container Shutdown.
Oct  2 03:43:17 np0005465604 python3.9[57682]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:43:17 np0005465604 python3.9[57805]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759390996.8487418-163-202792430395450/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:43:18 np0005465604 python3.9[57957]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:43:19 np0005465604 python3.9[58080]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759390998.0756702-178-189750319249334/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:43:20 np0005465604 python3.9[58232]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:43:20 np0005465604 systemd[1]: Reloading.
Oct  2 03:43:20 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:43:20 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:43:20 np0005465604 systemd[1]: Reloading.
Oct  2 03:43:20 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:43:20 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:43:20 np0005465604 systemd[1]: Starting Create netns directory...
Oct  2 03:43:20 np0005465604 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 03:43:20 np0005465604 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 03:43:20 np0005465604 systemd[1]: Finished Create netns directory.
Oct  2 03:43:21 np0005465604 python3.9[58458]: ansible-ansible.builtin.service_facts Invoked
Oct  2 03:43:21 np0005465604 network[58475]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 03:43:21 np0005465604 network[58476]: 'network-scripts' will be removed from distribution in near future.
Oct  2 03:43:21 np0005465604 network[58477]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 03:43:29 np0005465604 python3.9[58741]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:43:29 np0005465604 systemd[1]: Reloading.
Oct  2 03:43:29 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:43:29 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:43:30 np0005465604 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  2 03:43:30 np0005465604 iptables.init[58781]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  2 03:43:30 np0005465604 iptables.init[58781]: iptables: Flushing firewall rules: [  OK  ]
Oct  2 03:43:30 np0005465604 systemd[1]: iptables.service: Deactivated successfully.
Oct  2 03:43:30 np0005465604 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  2 03:43:31 np0005465604 python3.9[58977]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:43:32 np0005465604 python3.9[59131]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:43:32 np0005465604 systemd[1]: Reloading.
Oct  2 03:43:32 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:43:32 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:43:32 np0005465604 systemd[1]: Starting Netfilter Tables...
Oct  2 03:43:32 np0005465604 systemd[1]: Finished Netfilter Tables.
Oct  2 03:43:33 np0005465604 python3.9[59323]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:43:34 np0005465604 python3.9[59476]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:43:35 np0005465604 python3.9[59601]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759391014.1001282-247-258131578811737/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:43:35 np0005465604 python3.9[59752]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 03:44:01 np0005465604 systemd[1]: session-14.scope: Deactivated successfully.
Oct  2 03:44:01 np0005465604 systemd[1]: session-14.scope: Consumed 21.778s CPU time.
Oct  2 03:44:01 np0005465604 systemd-logind[787]: Session 14 logged out. Waiting for processes to exit.
Oct  2 03:44:01 np0005465604 systemd-logind[787]: Removed session 14.
Oct  2 03:44:14 np0005465604 systemd-logind[787]: New session 15 of user zuul.
Oct  2 03:44:14 np0005465604 systemd[1]: Started Session 15 of User zuul.
Oct  2 03:44:15 np0005465604 python3.9[59945]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:44:16 np0005465604 python3.9[60101]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:17 np0005465604 python3.9[60276]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:18 np0005465604 python3.9[60354]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.9qmti9su recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:19 np0005465604 python3.9[60506]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:19 np0005465604 python3.9[60584]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.eynyqegi recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:20 np0005465604 python3.9[60736]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:44:20 np0005465604 python3.9[60888]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:21 np0005465604 python3.9[60966]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:44:21 np0005465604 python3.9[61118]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:22 np0005465604 python3.9[61196]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:44:23 np0005465604 python3.9[61348]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:23 np0005465604 python3.9[61500]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:24 np0005465604 python3.9[61578]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:24 np0005465604 python3.9[61730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:25 np0005465604 python3.9[61808]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:26 np0005465604 python3.9[61960]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:44:26 np0005465604 systemd[1]: Reloading.
Oct  2 03:44:26 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:44:26 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:44:27 np0005465604 python3.9[62149]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:28 np0005465604 python3.9[62227]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:29 np0005465604 python3.9[62379]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:29 np0005465604 python3.9[62457]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:30 np0005465604 python3.9[62609]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:44:30 np0005465604 systemd[1]: Reloading.
Oct  2 03:44:30 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:44:30 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:44:30 np0005465604 systemd[1]: Starting Create netns directory...
Oct  2 03:44:30 np0005465604 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 03:44:30 np0005465604 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 03:44:30 np0005465604 systemd[1]: Finished Create netns directory.
Oct  2 03:44:31 np0005465604 python3.9[62801]: ansible-ansible.builtin.service_facts Invoked
Oct  2 03:44:31 np0005465604 network[62818]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 03:44:31 np0005465604 network[62819]: 'network-scripts' will be removed from distribution in near future.
Oct  2 03:44:31 np0005465604 network[62820]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 03:44:35 np0005465604 python3.9[63083]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:36 np0005465604 python3.9[63161]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:37 np0005465604 python3.9[63313]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:38 np0005465604 python3.9[63465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:38 np0005465604 python3.9[63588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391077.4021125-216-38202342227269/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:39 np0005465604 python3.9[63740]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  2 03:44:39 np0005465604 systemd[1]: Starting Time & Date Service...
Oct  2 03:44:39 np0005465604 systemd[1]: Started Time & Date Service.
Oct  2 03:44:40 np0005465604 python3.9[63896]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:41 np0005465604 python3.9[64048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:42 np0005465604 python3.9[64171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759391080.859729-251-156180549720363/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:42 np0005465604 python3.9[64323]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:43 np0005465604 python3.9[64446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759391082.261322-266-85495474954316/.source.yaml _original_basename=.dfdleya5 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:44 np0005465604 python3.9[64598]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:44 np0005465604 python3.9[64721]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391083.8247848-281-19124114582672/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:45 np0005465604 python3.9[64873]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:44:46 np0005465604 python3.9[65026]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:44:47 np0005465604 python3[65179]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 03:44:48 np0005465604 python3.9[65331]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:49 np0005465604 python3.9[65454]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391087.8693101-320-276187993587805/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:49 np0005465604 python3.9[65606]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:50 np0005465604 python3.9[65729]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391089.3584063-335-118384908911588/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:51 np0005465604 python3.9[65881]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:51 np0005465604 python3.9[66004]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391090.765655-350-275708556029278/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:52 np0005465604 python3.9[66156]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:53 np0005465604 python3.9[66279]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391092.1366801-365-85845869427741/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:54 np0005465604 python3.9[66431]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:44:54 np0005465604 python3.9[66554]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391093.5754793-380-103924683502349/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:55 np0005465604 python3.9[66706]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:56 np0005465604 python3.9[66858]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:44:57 np0005465604 python3.9[67017]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:58 np0005465604 python3.9[67170]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:59 np0005465604 python3.9[67322]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:44:59 np0005465604 python3.9[67474]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 03:45:00 np0005465604 python3.9[67627]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 03:45:01 np0005465604 systemd[1]: session-15.scope: Deactivated successfully.
Oct  2 03:45:01 np0005465604 systemd[1]: session-15.scope: Consumed 34.428s CPU time.
Oct  2 03:45:01 np0005465604 systemd-logind[787]: Session 15 logged out. Waiting for processes to exit.
Oct  2 03:45:01 np0005465604 systemd-logind[787]: Removed session 15.
Oct  2 03:45:06 np0005465604 systemd-logind[787]: New session 16 of user zuul.
Oct  2 03:45:06 np0005465604 systemd[1]: Started Session 16 of User zuul.
Oct  2 03:45:07 np0005465604 python3.9[67809]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  2 03:45:08 np0005465604 python3.9[67961]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:45:09 np0005465604 python3.9[68113]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:45:09 np0005465604 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 03:45:10 np0005465604 python3.9[68267]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCl9MNxZqZt1+LKJmjMxSImuTS6g6ITiDsWMUj8ov4OWb0XiA/RoRyk6lMGQ9rFgUrLN6MXX8uLxKxtGuJmiEQeXRrULejipgsEFE4aDvwP6bQDoYw38SRkF0or9xxQhcc8bskwE8r4klucH0p9uuy8JReoQrifyNlC2KCL67xaBQt4FEoIQ7XQqXmtQ3FxjQmunkcmiZtAG+ghmWxo3otG/wqBjqUyP80j50HO7dT2JfzbMGFBt5HloFhlmgDKAcxa6D9JbJpcw/l6MZuVNLEFYXYi/GjLtmUI7wHTyi0TV5vFcJloZQO8wHb1AiLC95UaVi+2zk3dRR1zbVXhSY4WlQWRimbcSIl1gT/AAW54RbG5UojyV18Ju757NEqQpSBA7pY5RCfXtGf3iR2Hi1DxzRzcMsRBQK+Uioj0rQVUmvTkT0hb+uWK8YsFYKlAzxabkgXYncCSFSPvKeBBCW8RHSWZpqcbVpfHq0pi9jLJondPq/jjn1BN3jdb85Ddp5E=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP9Hlh820OxNMxNlYnV8JYdqc96iSWx2Mk80d2pfymKk#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP1+KnRL5H4RCvxofUpjkNQNPAhZAYqmNzEWZQoQw6I0NswG58wMGpEMySsANrAufagAyZm0ug2APmeoNrbxLLk=#012 create=True mode=0644 path=/tmp/ansible.00o3kvfr state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:45:11 np0005465604 python3.9[68419]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.00o3kvfr' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:45:12 np0005465604 python3.9[68573]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.00o3kvfr state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:45:12 np0005465604 systemd[1]: session-16.scope: Deactivated successfully.
Oct  2 03:45:12 np0005465604 systemd[1]: session-16.scope: Consumed 3.618s CPU time.
Oct  2 03:45:12 np0005465604 systemd-logind[787]: Session 16 logged out. Waiting for processes to exit.
Oct  2 03:45:12 np0005465604 systemd-logind[787]: Removed session 16.
Oct  2 03:45:18 np0005465604 systemd-logind[787]: New session 17 of user zuul.
Oct  2 03:45:18 np0005465604 systemd[1]: Started Session 17 of User zuul.
Oct  2 03:45:19 np0005465604 python3.9[68751]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:45:20 np0005465604 python3.9[68907]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  2 03:45:21 np0005465604 python3.9[69061]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 03:45:22 np0005465604 python3.9[69214]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:45:23 np0005465604 python3.9[69367]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:45:24 np0005465604 python3.9[69521]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:45:25 np0005465604 python3.9[69676]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:45:25 np0005465604 systemd[1]: session-17.scope: Deactivated successfully.
Oct  2 03:45:25 np0005465604 systemd[1]: session-17.scope: Consumed 4.354s CPU time.
Oct  2 03:45:25 np0005465604 systemd-logind[787]: Session 17 logged out. Waiting for processes to exit.
Oct  2 03:45:25 np0005465604 systemd-logind[787]: Removed session 17.
Oct  2 03:45:30 np0005465604 systemd-logind[787]: New session 18 of user zuul.
Oct  2 03:45:30 np0005465604 systemd[1]: Started Session 18 of User zuul.
Oct  2 03:45:31 np0005465604 python3.9[69855]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:45:32 np0005465604 python3.9[70011]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:45:33 np0005465604 python3.9[70095]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 03:45:35 np0005465604 python3.9[70246]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:45:36 np0005465604 python3.9[70397]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 03:45:37 np0005465604 python3.9[70547]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:45:37 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 03:45:38 np0005465604 python3.9[70698]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:45:38 np0005465604 systemd[1]: session-18.scope: Deactivated successfully.
Oct  2 03:45:38 np0005465604 systemd[1]: session-18.scope: Consumed 5.687s CPU time.
Oct  2 03:45:38 np0005465604 systemd-logind[787]: Session 18 logged out. Waiting for processes to exit.
Oct  2 03:45:38 np0005465604 systemd-logind[787]: Removed session 18.
Oct  2 03:45:46 np0005465604 systemd-logind[787]: New session 19 of user zuul.
Oct  2 03:45:46 np0005465604 systemd[1]: Started Session 19 of User zuul.
Oct  2 03:45:52 np0005465604 python3[71464]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:45:53 np0005465604 python3[71559]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  2 03:45:55 np0005465604 python3[71586]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:45:55 np0005465604 python3[71612]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:45:55 np0005465604 kernel: loop: module loaded
Oct  2 03:45:55 np0005465604 kernel: loop3: detected capacity change from 0 to 41943040
Oct  2 03:45:55 np0005465604 python3[71647]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:45:56 np0005465604 lvm[71650]: PV /dev/loop3 not used.
Oct  2 03:45:56 np0005465604 lvm[71652]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 03:45:56 np0005465604 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  2 03:45:56 np0005465604 lvm[71659]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct  2 03:45:56 np0005465604 lvm[71662]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 03:45:56 np0005465604 lvm[71662]: VG ceph_vg0 finished
Oct  2 03:45:56 np0005465604 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct  2 03:45:56 np0005465604 python3[71740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:45:57 np0005465604 python3[71813]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759391156.40623-32889-166691755267225/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:45:58 np0005465604 python3[71863]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:45:58 np0005465604 systemd[1]: Reloading.
Oct  2 03:45:58 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:45:58 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:45:58 np0005465604 systemd[1]: Starting Ceph OSD losetup...
Oct  2 03:45:58 np0005465604 bash[71903]: /dev/loop3: [64513]:4329715 (/var/lib/ceph-osd-0.img)
Oct  2 03:45:58 np0005465604 systemd[1]: Finished Ceph OSD losetup.
Oct  2 03:45:58 np0005465604 lvm[71904]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 03:45:58 np0005465604 lvm[71904]: VG ceph_vg0 finished
Oct  2 03:45:58 np0005465604 python3[71930]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  2 03:46:00 np0005465604 python3[71957]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:46:00 np0005465604 python3[71983]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:46:00 np0005465604 kernel: loop4: detected capacity change from 0 to 41943040
Oct  2 03:46:01 np0005465604 python3[72015]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:46:01 np0005465604 lvm[72018]: PV /dev/loop4 not used.
Oct  2 03:46:01 np0005465604 lvm[72020]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  2 03:46:01 np0005465604 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct  2 03:46:01 np0005465604 lvm[72031]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  2 03:46:01 np0005465604 lvm[72031]: VG ceph_vg1 finished
Oct  2 03:46:01 np0005465604 lvm[72028]:  1 logical volume(s) in volume group "ceph_vg1" now active
Oct  2 03:46:01 np0005465604 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct  2 03:46:01 np0005465604 python3[72109]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:46:02 np0005465604 python3[72182]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759391161.4482722-32916-258175987723548/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:46:02 np0005465604 python3[72232]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:46:02 np0005465604 systemd[1]: Reloading.
Oct  2 03:46:02 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:46:02 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:46:02 np0005465604 chronyd[54537]: Selected source 23.133.168.247 (pool.ntp.org)
Oct  2 03:46:03 np0005465604 systemd[1]: Starting Ceph OSD losetup...
Oct  2 03:46:03 np0005465604 bash[72273]: /dev/loop4: [64513]:4655398 (/var/lib/ceph-osd-1.img)
Oct  2 03:46:03 np0005465604 systemd[1]: Finished Ceph OSD losetup.
Oct  2 03:46:03 np0005465604 lvm[72274]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  2 03:46:03 np0005465604 lvm[72274]: VG ceph_vg1 finished
Oct  2 03:46:03 np0005465604 python3[72301]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  2 03:46:04 np0005465604 python3[72328]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:46:05 np0005465604 python3[72354]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:46:05 np0005465604 kernel: loop5: detected capacity change from 0 to 41943040
Oct  2 03:46:05 np0005465604 python3[72386]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:46:05 np0005465604 lvm[72389]: PV /dev/loop5 not used.
Oct  2 03:46:05 np0005465604 lvm[72398]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  2 03:46:06 np0005465604 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct  2 03:46:06 np0005465604 lvm[72400]:  1 logical volume(s) in volume group "ceph_vg2" now active
Oct  2 03:46:06 np0005465604 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct  2 03:46:06 np0005465604 python3[72478]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:46:06 np0005465604 python3[72551]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759391166.199292-32943-272626912573267/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:46:07 np0005465604 python3[72601]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:46:07 np0005465604 systemd[1]: Reloading.
Oct  2 03:46:07 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:46:07 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:46:07 np0005465604 systemd[1]: Starting Ceph OSD losetup...
Oct  2 03:46:07 np0005465604 bash[72642]: /dev/loop5: [64513]:4720768 (/var/lib/ceph-osd-2.img)
Oct  2 03:46:07 np0005465604 systemd[1]: Finished Ceph OSD losetup.
Oct  2 03:46:07 np0005465604 lvm[72643]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  2 03:46:07 np0005465604 lvm[72643]: VG ceph_vg2 finished
Oct  2 03:46:09 np0005465604 python3[72667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:46:11 np0005465604 python3[72760]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  2 03:46:13 np0005465604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 03:46:13 np0005465604 systemd[1]: Starting man-db-cache-update.service...
Oct  2 03:46:13 np0005465604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 03:46:13 np0005465604 systemd[1]: Finished man-db-cache-update.service.
Oct  2 03:46:13 np0005465604 systemd[1]: run-r9c43d38212e14c8c8e92bf682ef2ec63.service: Deactivated successfully.
Oct  2 03:46:13 np0005465604 python3[72877]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:46:14 np0005465604 python3[72905]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:46:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:15 np0005465604 python3[72969]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:46:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:15 np0005465604 python3[72995]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:46:16 np0005465604 python3[73073]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:46:16 np0005465604 python3[73146]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759391176.0599117-33090-98552196580007/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:46:17 np0005465604 python3[73248]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:46:18 np0005465604 python3[73321]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759391177.3157153-33110-136360891472792/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:46:18 np0005465604 python3[73371]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:46:18 np0005465604 python3[73399]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:46:19 np0005465604 python3[73427]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:46:19 np0005465604 python3[73455]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid a52e644f-f702-594c-a648-813e3e0df2b1 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:46:19 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:19 np0005465604 systemd[1]: Created slice User Slice of UID 42477.
Oct  2 03:46:19 np0005465604 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  2 03:46:19 np0005465604 systemd-logind[787]: New session 20 of user ceph-admin.
Oct  2 03:46:19 np0005465604 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  2 03:46:19 np0005465604 systemd[1]: Starting User Manager for UID 42477...
Oct  2 03:46:20 np0005465604 systemd[73475]: Queued start job for default target Main User Target.
Oct  2 03:46:20 np0005465604 systemd[73475]: Created slice User Application Slice.
Oct  2 03:46:20 np0005465604 systemd[73475]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 03:46:20 np0005465604 systemd[73475]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 03:46:20 np0005465604 systemd[73475]: Reached target Paths.
Oct  2 03:46:20 np0005465604 systemd[73475]: Reached target Timers.
Oct  2 03:46:20 np0005465604 systemd[73475]: Starting D-Bus User Message Bus Socket...
Oct  2 03:46:20 np0005465604 systemd[73475]: Starting Create User's Volatile Files and Directories...
Oct  2 03:46:20 np0005465604 systemd[73475]: Finished Create User's Volatile Files and Directories.
Oct  2 03:46:20 np0005465604 systemd[73475]: Listening on D-Bus User Message Bus Socket.
Oct  2 03:46:20 np0005465604 systemd[73475]: Reached target Sockets.
Oct  2 03:46:20 np0005465604 systemd[73475]: Reached target Basic System.
Oct  2 03:46:20 np0005465604 systemd[73475]: Reached target Main User Target.
Oct  2 03:46:20 np0005465604 systemd[73475]: Startup finished in 155ms.
Oct  2 03:46:20 np0005465604 systemd[1]: Started User Manager for UID 42477.
Oct  2 03:46:20 np0005465604 systemd[1]: Started Session 20 of User ceph-admin.
Oct  2 03:46:20 np0005465604 systemd[1]: session-20.scope: Deactivated successfully.
Oct  2 03:46:20 np0005465604 systemd-logind[787]: Session 20 logged out. Waiting for processes to exit.
Oct  2 03:46:20 np0005465604 systemd-logind[787]: Removed session 20.
Oct  2 03:46:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-compat2316574827-merged.mount: Deactivated successfully.
Oct  2 03:46:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-compat2316574827-lower\x2dmapped.mount: Deactivated successfully.
Oct  2 03:46:30 np0005465604 systemd[1]: Stopping User Manager for UID 42477...
Oct  2 03:46:30 np0005465604 systemd[73475]: Activating special unit Exit the Session...
Oct  2 03:46:30 np0005465604 systemd[73475]: Stopped target Main User Target.
Oct  2 03:46:30 np0005465604 systemd[73475]: Stopped target Basic System.
Oct  2 03:46:30 np0005465604 systemd[73475]: Stopped target Paths.
Oct  2 03:46:30 np0005465604 systemd[73475]: Stopped target Sockets.
Oct  2 03:46:30 np0005465604 systemd[73475]: Stopped target Timers.
Oct  2 03:46:30 np0005465604 systemd[73475]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 03:46:30 np0005465604 systemd[73475]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 03:46:30 np0005465604 systemd[73475]: Closed D-Bus User Message Bus Socket.
Oct  2 03:46:30 np0005465604 systemd[73475]: Stopped Create User's Volatile Files and Directories.
Oct  2 03:46:30 np0005465604 systemd[73475]: Removed slice User Application Slice.
Oct  2 03:46:30 np0005465604 systemd[73475]: Reached target Shutdown.
Oct  2 03:46:30 np0005465604 systemd[73475]: Finished Exit the Session.
Oct  2 03:46:30 np0005465604 systemd[73475]: Reached target Exit the Session.
Oct  2 03:46:30 np0005465604 systemd[1]: user@42477.service: Deactivated successfully.
Oct  2 03:46:30 np0005465604 systemd[1]: Stopped User Manager for UID 42477.
Oct  2 03:46:30 np0005465604 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  2 03:46:30 np0005465604 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  2 03:46:30 np0005465604 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  2 03:46:30 np0005465604 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  2 03:46:30 np0005465604 systemd[1]: Removed slice User Slice of UID 42477.
Oct  2 03:46:33 np0005465604 podman[73528]: 2025-10-02 07:46:33.968149013 +0000 UTC m=+13.670685069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:33 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:34 np0005465604 podman[73589]: 2025-10-02 07:46:34.035872689 +0000 UTC m=+0.040625725 container create faab4102d1054a13d12b18453503fd185346c195e08686d5297f08fb373dfc24 (image=quay.io/ceph/ceph:v18, name=youthful_wilson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 03:46:34 np0005465604 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  2 03:46:34 np0005465604 systemd[1]: Started libpod-conmon-faab4102d1054a13d12b18453503fd185346c195e08686d5297f08fb373dfc24.scope.
Oct  2 03:46:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:34 np0005465604 podman[73589]: 2025-10-02 07:46:34.016735634 +0000 UTC m=+0.021488640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:34 np0005465604 podman[73589]: 2025-10-02 07:46:34.135903141 +0000 UTC m=+0.140656157 container init faab4102d1054a13d12b18453503fd185346c195e08686d5297f08fb373dfc24 (image=quay.io/ceph/ceph:v18, name=youthful_wilson, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 03:46:34 np0005465604 podman[73589]: 2025-10-02 07:46:34.148135312 +0000 UTC m=+0.152888308 container start faab4102d1054a13d12b18453503fd185346c195e08686d5297f08fb373dfc24 (image=quay.io/ceph/ceph:v18, name=youthful_wilson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 03:46:34 np0005465604 podman[73589]: 2025-10-02 07:46:34.15131687 +0000 UTC m=+0.156069876 container attach faab4102d1054a13d12b18453503fd185346c195e08686d5297f08fb373dfc24 (image=quay.io/ceph/ceph:v18, name=youthful_wilson, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:34 np0005465604 youthful_wilson[73606]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct  2 03:46:34 np0005465604 systemd[1]: libpod-faab4102d1054a13d12b18453503fd185346c195e08686d5297f08fb373dfc24.scope: Deactivated successfully.
Oct  2 03:46:34 np0005465604 podman[73589]: 2025-10-02 07:46:34.438798504 +0000 UTC m=+0.443551500 container died faab4102d1054a13d12b18453503fd185346c195e08686d5297f08fb373dfc24 (image=quay.io/ceph/ceph:v18, name=youthful_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:46:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-aca4938f528bb6494dae68832284a47e6f8e5d4437f5bcfb80bd3c6e5c11a94a-merged.mount: Deactivated successfully.
Oct  2 03:46:34 np0005465604 podman[73589]: 2025-10-02 07:46:34.485513557 +0000 UTC m=+0.490266553 container remove faab4102d1054a13d12b18453503fd185346c195e08686d5297f08fb373dfc24 (image=quay.io/ceph/ceph:v18, name=youthful_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:34 np0005465604 systemd[1]: libpod-conmon-faab4102d1054a13d12b18453503fd185346c195e08686d5297f08fb373dfc24.scope: Deactivated successfully.
Oct  2 03:46:34 np0005465604 podman[73622]: 2025-10-02 07:46:34.594179628 +0000 UTC m=+0.068761301 container create 3f68d8163995027d348ba28b6c12d1d1ad5b50c57f20d1e86a3dabce78907aa8 (image=quay.io/ceph/ceph:v18, name=silly_dewdney, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:46:34 np0005465604 systemd[1]: Started libpod-conmon-3f68d8163995027d348ba28b6c12d1d1ad5b50c57f20d1e86a3dabce78907aa8.scope.
Oct  2 03:46:34 np0005465604 podman[73622]: 2025-10-02 07:46:34.571069688 +0000 UTC m=+0.045651381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:34 np0005465604 podman[73622]: 2025-10-02 07:46:34.682107132 +0000 UTC m=+0.156688785 container init 3f68d8163995027d348ba28b6c12d1d1ad5b50c57f20d1e86a3dabce78907aa8 (image=quay.io/ceph/ceph:v18, name=silly_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 03:46:34 np0005465604 podman[73622]: 2025-10-02 07:46:34.691315139 +0000 UTC m=+0.165896792 container start 3f68d8163995027d348ba28b6c12d1d1ad5b50c57f20d1e86a3dabce78907aa8 (image=quay.io/ceph/ceph:v18, name=silly_dewdney, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:46:34 np0005465604 podman[73622]: 2025-10-02 07:46:34.695309223 +0000 UTC m=+0.169890906 container attach 3f68d8163995027d348ba28b6c12d1d1ad5b50c57f20d1e86a3dabce78907aa8 (image=quay.io/ceph/ceph:v18, name=silly_dewdney, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 03:46:34 np0005465604 silly_dewdney[73639]: 167 167
Oct  2 03:46:34 np0005465604 systemd[1]: libpod-3f68d8163995027d348ba28b6c12d1d1ad5b50c57f20d1e86a3dabce78907aa8.scope: Deactivated successfully.
Oct  2 03:46:34 np0005465604 podman[73622]: 2025-10-02 07:46:34.697697637 +0000 UTC m=+0.172279290 container died 3f68d8163995027d348ba28b6c12d1d1ad5b50c57f20d1e86a3dabce78907aa8 (image=quay.io/ceph/ceph:v18, name=silly_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:34 np0005465604 podman[73622]: 2025-10-02 07:46:34.737200677 +0000 UTC m=+0.211782360 container remove 3f68d8163995027d348ba28b6c12d1d1ad5b50c57f20d1e86a3dabce78907aa8 (image=quay.io/ceph/ceph:v18, name=silly_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:34 np0005465604 systemd[1]: libpod-conmon-3f68d8163995027d348ba28b6c12d1d1ad5b50c57f20d1e86a3dabce78907aa8.scope: Deactivated successfully.
Oct  2 03:46:34 np0005465604 podman[73656]: 2025-10-02 07:46:34.835801024 +0000 UTC m=+0.066066786 container create 7c7e7068611e6b54df0bb99b2c0051292fad06f8505764561024dc5cb0c696cd (image=quay.io/ceph/ceph:v18, name=eager_nash, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:46:34 np0005465604 systemd[1]: Started libpod-conmon-7c7e7068611e6b54df0bb99b2c0051292fad06f8505764561024dc5cb0c696cd.scope.
Oct  2 03:46:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:34 np0005465604 podman[73656]: 2025-10-02 07:46:34.808231596 +0000 UTC m=+0.038497428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:34 np0005465604 podman[73656]: 2025-10-02 07:46:34.908687711 +0000 UTC m=+0.138953463 container init 7c7e7068611e6b54df0bb99b2c0051292fad06f8505764561024dc5cb0c696cd (image=quay.io/ceph/ceph:v18, name=eager_nash, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 03:46:34 np0005465604 podman[73656]: 2025-10-02 07:46:34.916196564 +0000 UTC m=+0.146462306 container start 7c7e7068611e6b54df0bb99b2c0051292fad06f8505764561024dc5cb0c696cd (image=quay.io/ceph/ceph:v18, name=eager_nash, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 03:46:34 np0005465604 podman[73656]: 2025-10-02 07:46:34.920340244 +0000 UTC m=+0.150605996 container attach 7c7e7068611e6b54df0bb99b2c0051292fad06f8505764561024dc5cb0c696cd (image=quay.io/ceph/ceph:v18, name=eager_nash, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 03:46:34 np0005465604 eager_nash[73673]: AQDaLd5oCUsSORAAvc1dyt02FZVPTJaj+qlCTQ==
Oct  2 03:46:34 np0005465604 systemd[1]: libpod-7c7e7068611e6b54df0bb99b2c0051292fad06f8505764561024dc5cb0c696cd.scope: Deactivated successfully.
Oct  2 03:46:34 np0005465604 podman[73656]: 2025-10-02 07:46:34.963456305 +0000 UTC m=+0.193722077 container died 7c7e7068611e6b54df0bb99b2c0051292fad06f8505764561024dc5cb0c696cd (image=quay.io/ceph/ceph:v18, name=eager_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ed0e081dc20d02655a2f729162539d99429a2428526ad5d863be9e6559158fb8-merged.mount: Deactivated successfully.
Oct  2 03:46:35 np0005465604 podman[73656]: 2025-10-02 07:46:35.010183028 +0000 UTC m=+0.240448800 container remove 7c7e7068611e6b54df0bb99b2c0051292fad06f8505764561024dc5cb0c696cd (image=quay.io/ceph/ceph:v18, name=eager_nash, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 03:46:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:35 np0005465604 systemd[1]: libpod-conmon-7c7e7068611e6b54df0bb99b2c0051292fad06f8505764561024dc5cb0c696cd.scope: Deactivated successfully.
Oct  2 03:46:35 np0005465604 podman[73692]: 2025-10-02 07:46:35.101164769 +0000 UTC m=+0.055964252 container create e2cd7c479e1f60ed422cc7bf1248eb60a8d322839bd47b3b59468d2109a256fa (image=quay.io/ceph/ceph:v18, name=pedantic_jennings, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 03:46:35 np0005465604 systemd[1]: Started libpod-conmon-e2cd7c479e1f60ed422cc7bf1248eb60a8d322839bd47b3b59468d2109a256fa.scope.
Oct  2 03:46:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:35 np0005465604 podman[73692]: 2025-10-02 07:46:35.076559043 +0000 UTC m=+0.031358546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:35 np0005465604 podman[73692]: 2025-10-02 07:46:35.178834835 +0000 UTC m=+0.133634398 container init e2cd7c479e1f60ed422cc7bf1248eb60a8d322839bd47b3b59468d2109a256fa (image=quay.io/ceph/ceph:v18, name=pedantic_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:35 np0005465604 podman[73692]: 2025-10-02 07:46:35.188323271 +0000 UTC m=+0.143122754 container start e2cd7c479e1f60ed422cc7bf1248eb60a8d322839bd47b3b59468d2109a256fa (image=quay.io/ceph/ceph:v18, name=pedantic_jennings, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:46:35 np0005465604 podman[73692]: 2025-10-02 07:46:35.192518861 +0000 UTC m=+0.147318504 container attach e2cd7c479e1f60ed422cc7bf1248eb60a8d322839bd47b3b59468d2109a256fa (image=quay.io/ceph/ceph:v18, name=pedantic_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:46:35 np0005465604 pedantic_jennings[73708]: AQDbLd5oa8xODRAAnI4XQ3Sgz67u3cjZSpNz5A==
Oct  2 03:46:35 np0005465604 systemd[1]: libpod-e2cd7c479e1f60ed422cc7bf1248eb60a8d322839bd47b3b59468d2109a256fa.scope: Deactivated successfully.
Oct  2 03:46:35 np0005465604 podman[73692]: 2025-10-02 07:46:35.229932485 +0000 UTC m=+0.184731978 container died e2cd7c479e1f60ed422cc7bf1248eb60a8d322839bd47b3b59468d2109a256fa (image=quay.io/ceph/ceph:v18, name=pedantic_jennings, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 03:46:35 np0005465604 podman[73692]: 2025-10-02 07:46:35.280172107 +0000 UTC m=+0.234971601 container remove e2cd7c479e1f60ed422cc7bf1248eb60a8d322839bd47b3b59468d2109a256fa (image=quay.io/ceph/ceph:v18, name=pedantic_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:46:35 np0005465604 systemd[1]: libpod-conmon-e2cd7c479e1f60ed422cc7bf1248eb60a8d322839bd47b3b59468d2109a256fa.scope: Deactivated successfully.
Oct  2 03:46:35 np0005465604 podman[73727]: 2025-10-02 07:46:35.381844941 +0000 UTC m=+0.071347831 container create a29d30b89f38fc62b010883b0b544c20f2830d1a4dd4a2537021be5afa9bc3e5 (image=quay.io/ceph/ceph:v18, name=great_faraday, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:35 np0005465604 systemd[1]: Started libpod-conmon-a29d30b89f38fc62b010883b0b544c20f2830d1a4dd4a2537021be5afa9bc3e5.scope.
Oct  2 03:46:35 np0005465604 podman[73727]: 2025-10-02 07:46:35.352081815 +0000 UTC m=+0.041584755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:35 np0005465604 podman[73727]: 2025-10-02 07:46:35.539257417 +0000 UTC m=+0.228760347 container init a29d30b89f38fc62b010883b0b544c20f2830d1a4dd4a2537021be5afa9bc3e5 (image=quay.io/ceph/ceph:v18, name=great_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:46:35 np0005465604 podman[73727]: 2025-10-02 07:46:35.549191676 +0000 UTC m=+0.238694586 container start a29d30b89f38fc62b010883b0b544c20f2830d1a4dd4a2537021be5afa9bc3e5 (image=quay.io/ceph/ceph:v18, name=great_faraday, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 03:46:35 np0005465604 great_faraday[73743]: AQDbLd5o9QXMIhAAWnYT659ynQFqO6PshjXO4Q==
Oct  2 03:46:35 np0005465604 systemd[1]: libpod-a29d30b89f38fc62b010883b0b544c20f2830d1a4dd4a2537021be5afa9bc3e5.scope: Deactivated successfully.
Oct  2 03:46:35 np0005465604 podman[73727]: 2025-10-02 07:46:35.777368625 +0000 UTC m=+0.466871585 container attach a29d30b89f38fc62b010883b0b544c20f2830d1a4dd4a2537021be5afa9bc3e5 (image=quay.io/ceph/ceph:v18, name=great_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 03:46:35 np0005465604 podman[73727]: 2025-10-02 07:46:35.778205861 +0000 UTC m=+0.467708741 container died a29d30b89f38fc62b010883b0b544c20f2830d1a4dd4a2537021be5afa9bc3e5 (image=quay.io/ceph/ceph:v18, name=great_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 03:46:36 np0005465604 podman[73727]: 2025-10-02 07:46:36.079836585 +0000 UTC m=+0.769339465 container remove a29d30b89f38fc62b010883b0b544c20f2830d1a4dd4a2537021be5afa9bc3e5 (image=quay.io/ceph/ceph:v18, name=great_faraday, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:36 np0005465604 systemd[1]: libpod-conmon-a29d30b89f38fc62b010883b0b544c20f2830d1a4dd4a2537021be5afa9bc3e5.scope: Deactivated successfully.
Oct  2 03:46:36 np0005465604 podman[73764]: 2025-10-02 07:46:36.153433384 +0000 UTC m=+0.048090197 container create 48a9c3e31cea36f728e7c7cb9ecfd9becbda3395f917f245ad708e25f27fb956 (image=quay.io/ceph/ceph:v18, name=brave_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:46:36 np0005465604 systemd[1]: Started libpod-conmon-48a9c3e31cea36f728e7c7cb9ecfd9becbda3395f917f245ad708e25f27fb956.scope.
Oct  2 03:46:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c737554a465b8cd8d3a71fb95b66e82fb1d8be7a3fc5e6a944607f4758602f8/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:36 np0005465604 podman[73764]: 2025-10-02 07:46:36.132779241 +0000 UTC m=+0.027436064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:36 np0005465604 podman[73764]: 2025-10-02 07:46:36.23947895 +0000 UTC m=+0.134135773 container init 48a9c3e31cea36f728e7c7cb9ecfd9becbda3395f917f245ad708e25f27fb956 (image=quay.io/ceph/ceph:v18, name=brave_greider, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:46:36 np0005465604 podman[73764]: 2025-10-02 07:46:36.24845259 +0000 UTC m=+0.143109403 container start 48a9c3e31cea36f728e7c7cb9ecfd9becbda3395f917f245ad708e25f27fb956 (image=quay.io/ceph/ceph:v18, name=brave_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 03:46:36 np0005465604 podman[73764]: 2025-10-02 07:46:36.252448774 +0000 UTC m=+0.147105587 container attach 48a9c3e31cea36f728e7c7cb9ecfd9becbda3395f917f245ad708e25f27fb956 (image=quay.io/ceph/ceph:v18, name=brave_greider, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:36 np0005465604 brave_greider[73780]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct  2 03:46:36 np0005465604 brave_greider[73780]: setting min_mon_release = pacific
Oct  2 03:46:36 np0005465604 brave_greider[73780]: /usr/bin/monmaptool: set fsid to a52e644f-f702-594c-a648-813e3e0df2b1
Oct  2 03:46:36 np0005465604 brave_greider[73780]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct  2 03:46:36 np0005465604 systemd[1]: libpod-48a9c3e31cea36f728e7c7cb9ecfd9becbda3395f917f245ad708e25f27fb956.scope: Deactivated successfully.
Oct  2 03:46:36 np0005465604 podman[73764]: 2025-10-02 07:46:36.301824721 +0000 UTC m=+0.196481514 container died 48a9c3e31cea36f728e7c7cb9ecfd9becbda3395f917f245ad708e25f27fb956 (image=quay.io/ceph/ceph:v18, name=brave_greider, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 03:46:36 np0005465604 podman[73764]: 2025-10-02 07:46:36.338927664 +0000 UTC m=+0.233584467 container remove 48a9c3e31cea36f728e7c7cb9ecfd9becbda3395f917f245ad708e25f27fb956 (image=quay.io/ceph/ceph:v18, name=brave_greider, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 03:46:36 np0005465604 systemd[1]: libpod-conmon-48a9c3e31cea36f728e7c7cb9ecfd9becbda3395f917f245ad708e25f27fb956.scope: Deactivated successfully.
Oct  2 03:46:36 np0005465604 podman[73800]: 2025-10-02 07:46:36.411772181 +0000 UTC m=+0.045828697 container create 256619dd3019dbf7c0ad760d37bcd6e1c49ce30e6f772b7ed7af785004e920d7 (image=quay.io/ceph/ceph:v18, name=charming_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 03:46:36 np0005465604 systemd[1]: Started libpod-conmon-256619dd3019dbf7c0ad760d37bcd6e1c49ce30e6f772b7ed7af785004e920d7.scope.
Oct  2 03:46:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b8a6d9f85a3da05eb4fe5f45bb38549075691c15993e77ef7665f88de6b529/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b8a6d9f85a3da05eb4fe5f45bb38549075691c15993e77ef7665f88de6b529/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b8a6d9f85a3da05eb4fe5f45bb38549075691c15993e77ef7665f88de6b529/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b8a6d9f85a3da05eb4fe5f45bb38549075691c15993e77ef7665f88de6b529/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:36 np0005465604 podman[73800]: 2025-10-02 07:46:36.469224287 +0000 UTC m=+0.103280813 container init 256619dd3019dbf7c0ad760d37bcd6e1c49ce30e6f772b7ed7af785004e920d7 (image=quay.io/ceph/ceph:v18, name=charming_wilson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:46:36 np0005465604 podman[73800]: 2025-10-02 07:46:36.475839753 +0000 UTC m=+0.109896299 container start 256619dd3019dbf7c0ad760d37bcd6e1c49ce30e6f772b7ed7af785004e920d7 (image=quay.io/ceph/ceph:v18, name=charming_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 03:46:36 np0005465604 podman[73800]: 2025-10-02 07:46:36.479658062 +0000 UTC m=+0.113714678 container attach 256619dd3019dbf7c0ad760d37bcd6e1c49ce30e6f772b7ed7af785004e920d7 (image=quay.io/ceph/ceph:v18, name=charming_wilson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 03:46:36 np0005465604 podman[73800]: 2025-10-02 07:46:36.397479746 +0000 UTC m=+0.031536292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:36 np0005465604 systemd[1]: libpod-256619dd3019dbf7c0ad760d37bcd6e1c49ce30e6f772b7ed7af785004e920d7.scope: Deactivated successfully.
Oct  2 03:46:36 np0005465604 podman[73843]: 2025-10-02 07:46:36.627147171 +0000 UTC m=+0.036994502 container died 256619dd3019dbf7c0ad760d37bcd6e1c49ce30e6f772b7ed7af785004e920d7 (image=quay.io/ceph/ceph:v18, name=charming_wilson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:46:36 np0005465604 podman[73843]: 2025-10-02 07:46:36.668350283 +0000 UTC m=+0.078197524 container remove 256619dd3019dbf7c0ad760d37bcd6e1c49ce30e6f772b7ed7af785004e920d7 (image=quay.io/ceph/ceph:v18, name=charming_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 03:46:36 np0005465604 systemd[1]: libpod-conmon-256619dd3019dbf7c0ad760d37bcd6e1c49ce30e6f772b7ed7af785004e920d7.scope: Deactivated successfully.
Oct  2 03:46:36 np0005465604 systemd[1]: Reloading.
Oct  2 03:46:36 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:46:36 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:46:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:37 np0005465604 systemd[1]: Reloading.
Oct  2 03:46:37 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:46:37 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:46:37 np0005465604 systemd[1]: Reached target All Ceph clusters and services.
Oct  2 03:46:37 np0005465604 systemd[1]: Reloading.
Oct  2 03:46:37 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:46:37 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:46:37 np0005465604 systemd[1]: Reached target Ceph cluster a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:46:37 np0005465604 systemd[1]: Reloading.
Oct  2 03:46:37 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:46:37 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:46:37 np0005465604 systemd[1]: Reloading.
Oct  2 03:46:37 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:46:37 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:46:38 np0005465604 systemd[1]: Created slice Slice /system/ceph-a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:46:38 np0005465604 systemd[1]: Reached target System Time Set.
Oct  2 03:46:38 np0005465604 systemd[1]: Reached target System Time Synchronized.
Oct  2 03:46:38 np0005465604 systemd[1]: Starting Ceph mon.compute-0 for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:46:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:38 np0005465604 podman[74096]: 2025-10-02 07:46:38.424626098 +0000 UTC m=+0.064215469 container create 516cccb474a35253b5d338a6bed84675fa60fe2f3dc80e17cc03731549033183 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 03:46:38 np0005465604 podman[74096]: 2025-10-02 07:46:38.398869246 +0000 UTC m=+0.038458707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92216093d3d204fd4f9fa66be1f8631e4c185afdf435ed204fec599024f5808/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92216093d3d204fd4f9fa66be1f8631e4c185afdf435ed204fec599024f5808/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92216093d3d204fd4f9fa66be1f8631e4c185afdf435ed204fec599024f5808/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92216093d3d204fd4f9fa66be1f8631e4c185afdf435ed204fec599024f5808/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:38 np0005465604 podman[74096]: 2025-10-02 07:46:38.522937356 +0000 UTC m=+0.162526767 container init 516cccb474a35253b5d338a6bed84675fa60fe2f3dc80e17cc03731549033183 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 03:46:38 np0005465604 podman[74096]: 2025-10-02 07:46:38.531078719 +0000 UTC m=+0.170668110 container start 516cccb474a35253b5d338a6bed84675fa60fe2f3dc80e17cc03731549033183 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 03:46:38 np0005465604 bash[74096]: 516cccb474a35253b5d338a6bed84675fa60fe2f3dc80e17cc03731549033183
Oct  2 03:46:38 np0005465604 systemd[1]: Started Ceph mon.compute-0 for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: pidfile_write: ignore empty --pid-file
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: load: jerasure load: lrc 
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: RocksDB version: 7.9.2
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Git sha 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: DB SUMMARY
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: DB Session ID:  SOY8MTLDIDB7IG75EPAY
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: CURRENT file:  CURRENT
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                         Options.error_if_exists: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                       Options.create_if_missing: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                                     Options.env: 0x56462d9e1c40
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                                Options.info_log: 0x56462f654e80
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                              Options.statistics: (nil)
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                               Options.use_fsync: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                              Options.db_log_dir: 
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                                 Options.wal_dir: 
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                    Options.write_buffer_manager: 0x56462f664b40
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                  Options.unordered_write: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                               Options.row_cache: None
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                              Options.wal_filter: None
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.two_write_queues: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.wal_compression: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.atomic_flush: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.max_background_jobs: 2
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.max_background_compactions: -1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.max_subcompactions: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.max_total_wal_size: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                          Options.max_open_files: -1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:       Options.compaction_readahead_size: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Compression algorithms supported:
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: #011kZSTD supported: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: #011kXpressCompression supported: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: #011kBZip2Compression supported: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: #011kLZ4Compression supported: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: #011kZlibCompression supported: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: #011kSnappyCompression supported: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:           Options.merge_operator: 
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:        Options.compaction_filter: None
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56462f654a80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56462f64d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:        Options.write_buffer_size: 33554432
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:  Options.max_write_buffer_number: 2
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:          Options.compression: NoCompression
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.num_levels: 7
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7b07c9b1-a6c7-45d0-9522-b015946345f4
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391198584873, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391198586908, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "SOY8MTLDIDB7IG75EPAY", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391198587019, "job": 1, "event": "recovery_finished"}
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56462f676e00
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: DB pointer 0x56462f700000
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56462f64d1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid a52e644f-f702-594c-a648-813e3e0df2b1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@-1(???) e0 preinit fsid a52e644f-f702-594c-a648-813e3e0df2b1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(probing) e0 win_standalone_election
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-02T07:46:36.518704Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,os=Linux}
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  2 03:46:38 np0005465604 podman[74117]: 2025-10-02 07:46:38.634736423 +0000 UTC m=+0.063944409 container create 615dc107b99fb5ea00ab944be665261de8f479f6cb6b3325d58e7ba5e4040b09 (image=quay.io/ceph/ceph:v18, name=hardcore_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).mds e1 new map
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: log_channel(cluster) log [DBG] : fsmap 
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mkfs a52e644f-f702-594c-a648-813e3e0df2b1
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  2 03:46:38 np0005465604 ceph-mon[74116]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  2 03:46:38 np0005465604 systemd[1]: Started libpod-conmon-615dc107b99fb5ea00ab944be665261de8f479f6cb6b3325d58e7ba5e4040b09.scope.
Oct  2 03:46:38 np0005465604 podman[74117]: 2025-10-02 07:46:38.609579531 +0000 UTC m=+0.038787537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3d4bf73da81c9074236416b1cbc45d5f461ba8db6333761ca1cd13b08c2404/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3d4bf73da81c9074236416b1cbc45d5f461ba8db6333761ca1cd13b08c2404/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3d4bf73da81c9074236416b1cbc45d5f461ba8db6333761ca1cd13b08c2404/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:38 np0005465604 podman[74117]: 2025-10-02 07:46:38.757048099 +0000 UTC m=+0.186256165 container init 615dc107b99fb5ea00ab944be665261de8f479f6cb6b3325d58e7ba5e4040b09 (image=quay.io/ceph/ceph:v18, name=hardcore_burnell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:38 np0005465604 podman[74117]: 2025-10-02 07:46:38.764439429 +0000 UTC m=+0.193647415 container start 615dc107b99fb5ea00ab944be665261de8f479f6cb6b3325d58e7ba5e4040b09 (image=quay.io/ceph/ceph:v18, name=hardcore_burnell, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:46:38 np0005465604 podman[74117]: 2025-10-02 07:46:38.768713541 +0000 UTC m=+0.197921507 container attach 615dc107b99fb5ea00ab944be665261de8f479f6cb6b3325d58e7ba5e4040b09 (image=quay.io/ceph/ceph:v18, name=hardcore_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:39 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 03:46:39 np0005465604 ceph-mon[74116]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433213063' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:  cluster:
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:    id:     a52e644f-f702-594c-a648-813e3e0df2b1
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:    health: HEALTH_OK
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]: 
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:  services:
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:    mon: 1 daemons, quorum compute-0 (age 0.514399s)
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:    mgr: no daemons active
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:    osd: 0 osds: 0 up, 0 in
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]: 
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:  data:
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:    pools:   0 pools, 0 pgs
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:    objects: 0 objects, 0 B
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:    usage:   0 B used, 0 B / 0 B avail
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]:    pgs:     
Oct  2 03:46:39 np0005465604 hardcore_burnell[74172]: 
Oct  2 03:46:39 np0005465604 podman[74117]: 2025-10-02 07:46:39.173281308 +0000 UTC m=+0.602489304 container died 615dc107b99fb5ea00ab944be665261de8f479f6cb6b3325d58e7ba5e4040b09 (image=quay.io/ceph/ceph:v18, name=hardcore_burnell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:39 np0005465604 systemd[1]: libpod-615dc107b99fb5ea00ab944be665261de8f479f6cb6b3325d58e7ba5e4040b09.scope: Deactivated successfully.
Oct  2 03:46:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-cf3d4bf73da81c9074236416b1cbc45d5f461ba8db6333761ca1cd13b08c2404-merged.mount: Deactivated successfully.
Oct  2 03:46:39 np0005465604 podman[74117]: 2025-10-02 07:46:39.225910465 +0000 UTC m=+0.655118451 container remove 615dc107b99fb5ea00ab944be665261de8f479f6cb6b3325d58e7ba5e4040b09 (image=quay.io/ceph/ceph:v18, name=hardcore_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:39 np0005465604 systemd[1]: libpod-conmon-615dc107b99fb5ea00ab944be665261de8f479f6cb6b3325d58e7ba5e4040b09.scope: Deactivated successfully.
Oct  2 03:46:39 np0005465604 podman[74212]: 2025-10-02 07:46:39.305878872 +0000 UTC m=+0.050801551 container create cca054c0fcc08f7e2b2354a4e676bf7e348fae07b7ed14c6ebf4433fe442c213 (image=quay.io/ceph/ceph:v18, name=eloquent_albattani, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:46:39 np0005465604 systemd[1]: Started libpod-conmon-cca054c0fcc08f7e2b2354a4e676bf7e348fae07b7ed14c6ebf4433fe442c213.scope.
Oct  2 03:46:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64911c79fce71866f2d733812c4f58e4c7c2b2ea9652a9c5769d18c633bdc9f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64911c79fce71866f2d733812c4f58e4c7c2b2ea9652a9c5769d18c633bdc9f1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64911c79fce71866f2d733812c4f58e4c7c2b2ea9652a9c5769d18c633bdc9f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64911c79fce71866f2d733812c4f58e4c7c2b2ea9652a9c5769d18c633bdc9f1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:39 np0005465604 podman[74212]: 2025-10-02 07:46:39.285521549 +0000 UTC m=+0.030444258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:39 np0005465604 podman[74212]: 2025-10-02 07:46:39.386663635 +0000 UTC m=+0.131586364 container init cca054c0fcc08f7e2b2354a4e676bf7e348fae07b7ed14c6ebf4433fe442c213 (image=quay.io/ceph/ceph:v18, name=eloquent_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:46:39 np0005465604 podman[74212]: 2025-10-02 07:46:39.395347186 +0000 UTC m=+0.140269855 container start cca054c0fcc08f7e2b2354a4e676bf7e348fae07b7ed14c6ebf4433fe442c213 (image=quay.io/ceph/ceph:v18, name=eloquent_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Oct  2 03:46:39 np0005465604 podman[74212]: 2025-10-02 07:46:39.398873815 +0000 UTC m=+0.143796494 container attach cca054c0fcc08f7e2b2354a4e676bf7e348fae07b7ed14c6ebf4433fe442c213 (image=quay.io/ceph/ceph:v18, name=eloquent_albattani, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct  2 03:46:39 np0005465604 ceph-mon[74116]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  2 03:46:39 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  2 03:46:39 np0005465604 ceph-mon[74116]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/365615652' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  2 03:46:39 np0005465604 ceph-mon[74116]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/365615652' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  2 03:46:39 np0005465604 eloquent_albattani[74229]: 
Oct  2 03:46:39 np0005465604 eloquent_albattani[74229]: [global]
Oct  2 03:46:39 np0005465604 eloquent_albattani[74229]: #011fsid = a52e644f-f702-594c-a648-813e3e0df2b1
Oct  2 03:46:39 np0005465604 eloquent_albattani[74229]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct  2 03:46:39 np0005465604 eloquent_albattani[74229]: #011osd_crush_chooseleaf_type = 0
Oct  2 03:46:39 np0005465604 systemd[1]: libpod-cca054c0fcc08f7e2b2354a4e676bf7e348fae07b7ed14c6ebf4433fe442c213.scope: Deactivated successfully.
Oct  2 03:46:39 np0005465604 conmon[74229]: conmon cca054c0fcc08f7e2b23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cca054c0fcc08f7e2b2354a4e676bf7e348fae07b7ed14c6ebf4433fe442c213.scope/container/memory.events
Oct  2 03:46:39 np0005465604 podman[74212]: 2025-10-02 07:46:39.802733379 +0000 UTC m=+0.547656088 container died cca054c0fcc08f7e2b2354a4e676bf7e348fae07b7ed14c6ebf4433fe442c213 (image=quay.io/ceph/ceph:v18, name=eloquent_albattani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 03:46:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-64911c79fce71866f2d733812c4f58e4c7c2b2ea9652a9c5769d18c633bdc9f1-merged.mount: Deactivated successfully.
Oct  2 03:46:39 np0005465604 podman[74212]: 2025-10-02 07:46:39.858286338 +0000 UTC m=+0.603209047 container remove cca054c0fcc08f7e2b2354a4e676bf7e348fae07b7ed14c6ebf4433fe442c213 (image=quay.io/ceph/ceph:v18, name=eloquent_albattani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:39 np0005465604 systemd[1]: libpod-conmon-cca054c0fcc08f7e2b2354a4e676bf7e348fae07b7ed14c6ebf4433fe442c213.scope: Deactivated successfully.
Oct  2 03:46:39 np0005465604 podman[74268]: 2025-10-02 07:46:39.942364073 +0000 UTC m=+0.061945808 container create 886a9bbb7b086f1f8b9e9191c1d40f528fdc58a1082448f20c98764c12a5f05b (image=quay.io/ceph/ceph:v18, name=jolly_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 03:46:39 np0005465604 systemd[1]: Started libpod-conmon-886a9bbb7b086f1f8b9e9191c1d40f528fdc58a1082448f20c98764c12a5f05b.scope.
Oct  2 03:46:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:39 np0005465604 podman[74268]: 2025-10-02 07:46:39.906148136 +0000 UTC m=+0.025729931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ebe61531cadfffb9480577fc1f004826b316cf00afb85cb2cc2379509b8fcd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ebe61531cadfffb9480577fc1f004826b316cf00afb85cb2cc2379509b8fcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ebe61531cadfffb9480577fc1f004826b316cf00afb85cb2cc2379509b8fcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ebe61531cadfffb9480577fc1f004826b316cf00afb85cb2cc2379509b8fcd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:40 np0005465604 podman[74268]: 2025-10-02 07:46:40.014705523 +0000 UTC m=+0.134287238 container init 886a9bbb7b086f1f8b9e9191c1d40f528fdc58a1082448f20c98764c12a5f05b (image=quay.io/ceph/ceph:v18, name=jolly_antonelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 03:46:40 np0005465604 podman[74268]: 2025-10-02 07:46:40.023054473 +0000 UTC m=+0.142636168 container start 886a9bbb7b086f1f8b9e9191c1d40f528fdc58a1082448f20c98764c12a5f05b (image=quay.io/ceph/ceph:v18, name=jolly_antonelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 03:46:40 np0005465604 podman[74268]: 2025-10-02 07:46:40.026549691 +0000 UTC m=+0.146131416 container attach 886a9bbb7b086f1f8b9e9191c1d40f528fdc58a1082448f20c98764c12a5f05b (image=quay.io/ceph/ceph:v18, name=jolly_antonelli, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 03:46:40 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:46:40 np0005465604 ceph-mon[74116]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4221317412' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:46:40 np0005465604 systemd[1]: libpod-886a9bbb7b086f1f8b9e9191c1d40f528fdc58a1082448f20c98764c12a5f05b.scope: Deactivated successfully.
Oct  2 03:46:40 np0005465604 podman[74268]: 2025-10-02 07:46:40.40322115 +0000 UTC m=+0.522802855 container died 886a9bbb7b086f1f8b9e9191c1d40f528fdc58a1082448f20c98764c12a5f05b (image=quay.io/ceph/ceph:v18, name=jolly_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:46:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-21ebe61531cadfffb9480577fc1f004826b316cf00afb85cb2cc2379509b8fcd-merged.mount: Deactivated successfully.
Oct  2 03:46:40 np0005465604 podman[74268]: 2025-10-02 07:46:40.444421421 +0000 UTC m=+0.564003116 container remove 886a9bbb7b086f1f8b9e9191c1d40f528fdc58a1082448f20c98764c12a5f05b (image=quay.io/ceph/ceph:v18, name=jolly_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:46:40 np0005465604 systemd[1]: libpod-conmon-886a9bbb7b086f1f8b9e9191c1d40f528fdc58a1082448f20c98764c12a5f05b.scope: Deactivated successfully.
Oct  2 03:46:40 np0005465604 systemd[1]: Stopping Ceph mon.compute-0 for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:46:40 np0005465604 ceph-mon[74116]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  2 03:46:40 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  2 03:46:40 np0005465604 ceph-mon[74116]: mon.compute-0@0(leader) e1 shutdown
Oct  2 03:46:40 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0[74112]: 2025-10-02T07:46:40.655+0000 7f0479907640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  2 03:46:40 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0[74112]: 2025-10-02T07:46:40.655+0000 7f0479907640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  2 03:46:40 np0005465604 ceph-mon[74116]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  2 03:46:40 np0005465604 ceph-mon[74116]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  2 03:46:40 np0005465604 podman[74351]: 2025-10-02 07:46:40.840123031 +0000 UTC m=+0.217571770 container died 516cccb474a35253b5d338a6bed84675fa60fe2f3dc80e17cc03731549033183 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b92216093d3d204fd4f9fa66be1f8631e4c185afdf435ed204fec599024f5808-merged.mount: Deactivated successfully.
Oct  2 03:46:40 np0005465604 podman[74351]: 2025-10-02 07:46:40.882453967 +0000 UTC m=+0.259902696 container remove 516cccb474a35253b5d338a6bed84675fa60fe2f3dc80e17cc03731549033183 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:40 np0005465604 bash[74351]: ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0
Oct  2 03:46:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 03:46:41 np0005465604 systemd[1]: ceph-a52e644f-f702-594c-a648-813e3e0df2b1@mon.compute-0.service: Deactivated successfully.
Oct  2 03:46:41 np0005465604 systemd[1]: Stopped Ceph mon.compute-0 for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:46:41 np0005465604 systemd[1]: ceph-a52e644f-f702-594c-a648-813e3e0df2b1@mon.compute-0.service: Consumed 1.062s CPU time.
Oct  2 03:46:41 np0005465604 systemd[1]: Starting Ceph mon.compute-0 for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:46:41 np0005465604 podman[74457]: 2025-10-02 07:46:41.303436084 +0000 UTC m=+0.048587223 container create 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1703f10f5130556d6e24adaa7121a49b233d9a90f53df0072fd6d5e1ec26bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1703f10f5130556d6e24adaa7121a49b233d9a90f53df0072fd6d5e1ec26bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1703f10f5130556d6e24adaa7121a49b233d9a90f53df0072fd6d5e1ec26bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1703f10f5130556d6e24adaa7121a49b233d9a90f53df0072fd6d5e1ec26bf/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:41 np0005465604 podman[74457]: 2025-10-02 07:46:41.284510465 +0000 UTC m=+0.029661634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:41 np0005465604 podman[74457]: 2025-10-02 07:46:41.397895502 +0000 UTC m=+0.143046721 container init 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:41 np0005465604 podman[74457]: 2025-10-02 07:46:41.409861415 +0000 UTC m=+0.155012574 container start 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:46:41 np0005465604 bash[74457]: 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104
Oct  2 03:46:41 np0005465604 systemd[1]: Started Ceph mon.compute-0 for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: pidfile_write: ignore empty --pid-file
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: load: jerasure load: lrc 
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: RocksDB version: 7.9.2
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Git sha 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: DB SUMMARY
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: DB Session ID:  E5Q3H049J9TEXP7LLR7P
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: CURRENT file:  CURRENT
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 52078 ; 
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                         Options.error_if_exists: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                       Options.create_if_missing: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                                     Options.env: 0x557a6309ac40
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                                Options.info_log: 0x557a653c9040
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                              Options.statistics: (nil)
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                               Options.use_fsync: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                              Options.db_log_dir: 
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                                 Options.wal_dir: 
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                    Options.write_buffer_manager: 0x557a653d8b40
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                  Options.unordered_write: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                               Options.row_cache: None
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                              Options.wal_filter: None
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.two_write_queues: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.wal_compression: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.atomic_flush: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.max_background_jobs: 2
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.max_background_compactions: -1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.max_subcompactions: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.max_total_wal_size: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                          Options.max_open_files: -1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:       Options.compaction_readahead_size: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Compression algorithms supported:
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: #011kZSTD supported: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: #011kXpressCompression supported: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: #011kBZip2Compression supported: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: #011kLZ4Compression supported: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: #011kZlibCompression supported: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: #011kSnappyCompression supported: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:           Options.merge_operator: 
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:        Options.compaction_filter: None
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a653c8c40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557a653c11f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:        Options.write_buffer_size: 33554432
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:  Options.max_write_buffer_number: 2
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:          Options.compression: NoCompression
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.num_levels: 7
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7b07c9b1-a6c7-45d0-9522-b015946345f4
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391201482199, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391201488052, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 51794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 129, "table_properties": {"data_size": 50351, "index_size": 149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2940, "raw_average_key_size": 30, "raw_value_size": 48030, "raw_average_value_size": 500, "num_data_blocks": 7, "num_entries": 96, "num_filter_entries": 96, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391201, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391201488248, "job": 1, "event": "recovery_finished"}
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557a653eae00
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: DB pointer 0x557a65474000
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   52.48 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   52.48 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 2.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 2.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 512.00 MB usage: 1.72 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid a52e644f-f702-594c-a648-813e3e0df2b1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@-1(???) e1 preinit fsid a52e644f-f702-594c-a648-813e3e0df2b1
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@-1(???).mds e1 new map
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : fsmap 
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  2 03:46:41 np0005465604 podman[74478]: 2025-10-02 07:46:41.51196061 +0000 UTC m=+0.055035233 container create 69004e3d170b758e13013328dfbe1f7b6813ff33dba643144a991adb8af25d34 (image=quay.io/ceph/ceph:v18, name=frosty_goodall, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  2 03:46:41 np0005465604 systemd[1]: Started libpod-conmon-69004e3d170b758e13013328dfbe1f7b6813ff33dba643144a991adb8af25d34.scope.
Oct  2 03:46:41 np0005465604 ceph-mon[74477]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  2 03:46:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:41 np0005465604 podman[74478]: 2025-10-02 07:46:41.497441779 +0000 UTC m=+0.040516402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5df2e014dc19f5653ce363caf7d16af9128f11a9220b4f2e5cc83ef7f568bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5df2e014dc19f5653ce363caf7d16af9128f11a9220b4f2e5cc83ef7f568bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a5df2e014dc19f5653ce363caf7d16af9128f11a9220b4f2e5cc83ef7f568bf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:41 np0005465604 podman[74478]: 2025-10-02 07:46:41.611458066 +0000 UTC m=+0.154532739 container init 69004e3d170b758e13013328dfbe1f7b6813ff33dba643144a991adb8af25d34 (image=quay.io/ceph/ceph:v18, name=frosty_goodall, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct  2 03:46:41 np0005465604 podman[74478]: 2025-10-02 07:46:41.619233838 +0000 UTC m=+0.162308471 container start 69004e3d170b758e13013328dfbe1f7b6813ff33dba643144a991adb8af25d34 (image=quay.io/ceph/ceph:v18, name=frosty_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 03:46:41 np0005465604 podman[74478]: 2025-10-02 07:46:41.624566834 +0000 UTC m=+0.167641507 container attach 69004e3d170b758e13013328dfbe1f7b6813ff33dba643144a991adb8af25d34 (image=quay.io/ceph/ceph:v18, name=frosty_goodall, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 03:46:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct  2 03:46:42 np0005465604 systemd[1]: libpod-69004e3d170b758e13013328dfbe1f7b6813ff33dba643144a991adb8af25d34.scope: Deactivated successfully.
Oct  2 03:46:42 np0005465604 podman[74556]: 2025-10-02 07:46:42.056217592 +0000 UTC m=+0.021033276 container died 69004e3d170b758e13013328dfbe1f7b6813ff33dba643144a991adb8af25d34 (image=quay.io/ceph/ceph:v18, name=frosty_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 03:46:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6a5df2e014dc19f5653ce363caf7d16af9128f11a9220b4f2e5cc83ef7f568bf-merged.mount: Deactivated successfully.
Oct  2 03:46:42 np0005465604 podman[74556]: 2025-10-02 07:46:42.093964926 +0000 UTC m=+0.058780620 container remove 69004e3d170b758e13013328dfbe1f7b6813ff33dba643144a991adb8af25d34 (image=quay.io/ceph/ceph:v18, name=frosty_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:42 np0005465604 systemd[1]: libpod-conmon-69004e3d170b758e13013328dfbe1f7b6813ff33dba643144a991adb8af25d34.scope: Deactivated successfully.
Oct  2 03:46:42 np0005465604 podman[74571]: 2025-10-02 07:46:42.161436545 +0000 UTC m=+0.040940405 container create d6d21951d6acb6233a9f8139685c9067bdcf2ebcfdb0ffcfc4236d22c6bfddb9 (image=quay.io/ceph/ceph:v18, name=boring_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:42 np0005465604 systemd[1]: Started libpod-conmon-d6d21951d6acb6233a9f8139685c9067bdcf2ebcfdb0ffcfc4236d22c6bfddb9.scope.
Oct  2 03:46:42 np0005465604 podman[74571]: 2025-10-02 07:46:42.14200145 +0000 UTC m=+0.021505300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8b287984c28012f2e1aff608c966495fe95b40744e703bb9e500c86ab563c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8b287984c28012f2e1aff608c966495fe95b40744e703bb9e500c86ab563c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e8b287984c28012f2e1aff608c966495fe95b40744e703bb9e500c86ab563c5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:42 np0005465604 podman[74571]: 2025-10-02 07:46:42.264795351 +0000 UTC m=+0.144299231 container init d6d21951d6acb6233a9f8139685c9067bdcf2ebcfdb0ffcfc4236d22c6bfddb9 (image=quay.io/ceph/ceph:v18, name=boring_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:42 np0005465604 podman[74571]: 2025-10-02 07:46:42.273565474 +0000 UTC m=+0.153069364 container start d6d21951d6acb6233a9f8139685c9067bdcf2ebcfdb0ffcfc4236d22c6bfddb9 (image=quay.io/ceph/ceph:v18, name=boring_jennings, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 03:46:42 np0005465604 podman[74571]: 2025-10-02 07:46:42.277775994 +0000 UTC m=+0.157279924 container attach d6d21951d6acb6233a9f8139685c9067bdcf2ebcfdb0ffcfc4236d22c6bfddb9 (image=quay.io/ceph/ceph:v18, name=boring_jennings, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:46:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct  2 03:46:42 np0005465604 systemd[1]: libpod-d6d21951d6acb6233a9f8139685c9067bdcf2ebcfdb0ffcfc4236d22c6bfddb9.scope: Deactivated successfully.
Oct  2 03:46:42 np0005465604 podman[74571]: 2025-10-02 07:46:42.721144877 +0000 UTC m=+0.600648737 container died d6d21951d6acb6233a9f8139685c9067bdcf2ebcfdb0ffcfc4236d22c6bfddb9 (image=quay.io/ceph/ceph:v18, name=boring_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:46:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9e8b287984c28012f2e1aff608c966495fe95b40744e703bb9e500c86ab563c5-merged.mount: Deactivated successfully.
Oct  2 03:46:42 np0005465604 podman[74571]: 2025-10-02 07:46:42.77686904 +0000 UTC m=+0.656372890 container remove d6d21951d6acb6233a9f8139685c9067bdcf2ebcfdb0ffcfc4236d22c6bfddb9 (image=quay.io/ceph/ceph:v18, name=boring_jennings, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:46:42 np0005465604 systemd[1]: libpod-conmon-d6d21951d6acb6233a9f8139685c9067bdcf2ebcfdb0ffcfc4236d22c6bfddb9.scope: Deactivated successfully.
Oct  2 03:46:42 np0005465604 systemd[1]: Reloading.
Oct  2 03:46:42 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:46:42 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:46:43 np0005465604 systemd[1]: Reloading.
Oct  2 03:46:43 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:46:43 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:46:43 np0005465604 systemd[1]: Starting Ceph mgr.compute-0.qlmhsi for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:46:43 np0005465604 podman[74754]: 2025-10-02 07:46:43.63962507 +0000 UTC m=+0.045186787 container create 41446870f28deb8c1fe52c71f0b1c172de891e403f17d431a8eee54457053474 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:46:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad20ac098333760208610f68947266cb3a8062c97d1ce38c73be0ff6b8089c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad20ac098333760208610f68947266cb3a8062c97d1ce38c73be0ff6b8089c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad20ac098333760208610f68947266cb3a8062c97d1ce38c73be0ff6b8089c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad20ac098333760208610f68947266cb3a8062c97d1ce38c73be0ff6b8089c5/merged/var/lib/ceph/mgr/ceph-compute-0.qlmhsi supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:43 np0005465604 podman[74754]: 2025-10-02 07:46:43.620379481 +0000 UTC m=+0.025941188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:43 np0005465604 podman[74754]: 2025-10-02 07:46:43.738059361 +0000 UTC m=+0.143621088 container init 41446870f28deb8c1fe52c71f0b1c172de891e403f17d431a8eee54457053474 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:46:43 np0005465604 podman[74754]: 2025-10-02 07:46:43.749233099 +0000 UTC m=+0.154794806 container start 41446870f28deb8c1fe52c71f0b1c172de891e403f17d431a8eee54457053474 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:46:43 np0005465604 bash[74754]: 41446870f28deb8c1fe52c71f0b1c172de891e403f17d431a8eee54457053474
Oct  2 03:46:43 np0005465604 systemd[1]: Started Ceph mgr.compute-0.qlmhsi for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:46:43 np0005465604 ceph-mgr[74774]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 03:46:43 np0005465604 ceph-mgr[74774]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  2 03:46:43 np0005465604 ceph-mgr[74774]: pidfile_write: ignore empty --pid-file
Oct  2 03:46:43 np0005465604 podman[74775]: 2025-10-02 07:46:43.848378124 +0000 UTC m=+0.053492655 container create 002d14f087519d2d6befc7ff9c1f6a9edb6c9cfcd83a3d019e34d7c76b54f892 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 03:46:43 np0005465604 systemd[1]: Started libpod-conmon-002d14f087519d2d6befc7ff9c1f6a9edb6c9cfcd83a3d019e34d7c76b54f892.scope.
Oct  2 03:46:43 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'alerts'
Oct  2 03:46:43 np0005465604 podman[74775]: 2025-10-02 07:46:43.824399948 +0000 UTC m=+0.029514529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f1733615e906f6d251bb297c18a0a14b9ee8d0bd3174c733c2312c0edd1363/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f1733615e906f6d251bb297c18a0a14b9ee8d0bd3174c733c2312c0edd1363/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6f1733615e906f6d251bb297c18a0a14b9ee8d0bd3174c733c2312c0edd1363/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:43 np0005465604 podman[74775]: 2025-10-02 07:46:43.953433192 +0000 UTC m=+0.158547733 container init 002d14f087519d2d6befc7ff9c1f6a9edb6c9cfcd83a3d019e34d7c76b54f892 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:43 np0005465604 podman[74775]: 2025-10-02 07:46:43.965350572 +0000 UTC m=+0.170465103 container start 002d14f087519d2d6befc7ff9c1f6a9edb6c9cfcd83a3d019e34d7c76b54f892 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 03:46:43 np0005465604 podman[74775]: 2025-10-02 07:46:43.969729699 +0000 UTC m=+0.174844260 container attach 002d14f087519d2d6befc7ff9c1f6a9edb6c9cfcd83a3d019e34d7c76b54f892 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct  2 03:46:44 np0005465604 ceph-mgr[74774]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 03:46:44 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'balancer'
Oct  2 03:46:44 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:44.215+0000 7fc4242d4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 03:46:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 03:46:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/927635993' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]: 
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]: {
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "health": {
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "status": "HEALTH_OK",
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "checks": {},
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "mutes": []
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    },
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "election_epoch": 5,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "quorum": [
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        0
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    ],
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "quorum_names": [
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "compute-0"
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    ],
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "quorum_age": 2,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "monmap": {
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "epoch": 1,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "min_mon_release_name": "reef",
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "num_mons": 1
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    },
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "osdmap": {
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "epoch": 1,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "num_osds": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "num_up_osds": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "osd_up_since": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "num_in_osds": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "osd_in_since": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "num_remapped_pgs": 0
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    },
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "pgmap": {
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "pgs_by_state": [],
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "num_pgs": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "num_pools": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "num_objects": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "data_bytes": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "bytes_used": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "bytes_avail": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "bytes_total": 0
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    },
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "fsmap": {
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "epoch": 1,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "by_rank": [],
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "up:standby": 0
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    },
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "mgrmap": {
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "available": false,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "num_standbys": 0,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "modules": [
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:            "iostat",
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:            "nfs",
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:            "restful"
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        ],
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "services": {}
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    },
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "servicemap": {
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "epoch": 1,
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "modified": "2025-10-02T07:46:38.638077+0000",
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:        "services": {}
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    },
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]:    "progress_events": {}
Oct  2 03:46:44 np0005465604 frosty_tharp[74814]: }
Oct  2 03:46:44 np0005465604 systemd[1]: libpod-002d14f087519d2d6befc7ff9c1f6a9edb6c9cfcd83a3d019e34d7c76b54f892.scope: Deactivated successfully.
Oct  2 03:46:44 np0005465604 podman[74775]: 2025-10-02 07:46:44.394184313 +0000 UTC m=+0.599298864 container died 002d14f087519d2d6befc7ff9c1f6a9edb6c9cfcd83a3d019e34d7c76b54f892 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:46:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f6f1733615e906f6d251bb297c18a0a14b9ee8d0bd3174c733c2312c0edd1363-merged.mount: Deactivated successfully.
Oct  2 03:46:44 np0005465604 podman[74775]: 2025-10-02 07:46:44.453175989 +0000 UTC m=+0.658290510 container remove 002d14f087519d2d6befc7ff9c1f6a9edb6c9cfcd83a3d019e34d7c76b54f892 (image=quay.io/ceph/ceph:v18, name=frosty_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:44 np0005465604 systemd[1]: libpod-conmon-002d14f087519d2d6befc7ff9c1f6a9edb6c9cfcd83a3d019e34d7c76b54f892.scope: Deactivated successfully.
Oct  2 03:46:44 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:44.487+0000 7fc4242d4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 03:46:44 np0005465604 ceph-mgr[74774]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 03:46:44 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'cephadm'
Oct  2 03:46:46 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'crash'
Oct  2 03:46:46 np0005465604 podman[74863]: 2025-10-02 07:46:46.550554655 +0000 UTC m=+0.062101273 container create b942f18db58465ea1b4d5d3af20805b29d4c90f45535f69ebf31e5b32ceef125 (image=quay.io/ceph/ceph:v18, name=sharp_curie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:46:46 np0005465604 systemd[1]: Started libpod-conmon-b942f18db58465ea1b4d5d3af20805b29d4c90f45535f69ebf31e5b32ceef125.scope.
Oct  2 03:46:46 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:46.600+0000 7fc4242d4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 03:46:46 np0005465604 ceph-mgr[74774]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 03:46:46 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'dashboard'
Oct  2 03:46:46 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e27e92dc246f4452c4cbc8bb1f1e400d8197a6ca165debb309d133490382a7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e27e92dc246f4452c4cbc8bb1f1e400d8197a6ca165debb309d133490382a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e27e92dc246f4452c4cbc8bb1f1e400d8197a6ca165debb309d133490382a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:46 np0005465604 podman[74863]: 2025-10-02 07:46:46.528820059 +0000 UTC m=+0.040366727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:46 np0005465604 podman[74863]: 2025-10-02 07:46:46.639461991 +0000 UTC m=+0.151008609 container init b942f18db58465ea1b4d5d3af20805b29d4c90f45535f69ebf31e5b32ceef125 (image=quay.io/ceph/ceph:v18, name=sharp_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 03:46:46 np0005465604 podman[74863]: 2025-10-02 07:46:46.644698763 +0000 UTC m=+0.156245411 container start b942f18db58465ea1b4d5d3af20805b29d4c90f45535f69ebf31e5b32ceef125 (image=quay.io/ceph/ceph:v18, name=sharp_curie, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 03:46:46 np0005465604 podman[74863]: 2025-10-02 07:46:46.647839961 +0000 UTC m=+0.159386579 container attach b942f18db58465ea1b4d5d3af20805b29d4c90f45535f69ebf31e5b32ceef125 (image=quay.io/ceph/ceph:v18, name=sharp_curie, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 03:46:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2869817267' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 03:46:47 np0005465604 sharp_curie[74879]: 
Oct  2 03:46:47 np0005465604 sharp_curie[74879]: {
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "health": {
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "status": "HEALTH_OK",
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "checks": {},
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "mutes": []
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    },
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "election_epoch": 5,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "quorum": [
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        0
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    ],
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "quorum_names": [
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "compute-0"
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    ],
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "quorum_age": 5,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "monmap": {
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "epoch": 1,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "min_mon_release_name": "reef",
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "num_mons": 1
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    },
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "osdmap": {
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "epoch": 1,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "num_osds": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "num_up_osds": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "osd_up_since": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "num_in_osds": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "osd_in_since": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "num_remapped_pgs": 0
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    },
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "pgmap": {
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "pgs_by_state": [],
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "num_pgs": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "num_pools": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "num_objects": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "data_bytes": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "bytes_used": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "bytes_avail": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "bytes_total": 0
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    },
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "fsmap": {
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "epoch": 1,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "by_rank": [],
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "up:standby": 0
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    },
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "mgrmap": {
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "available": false,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "num_standbys": 0,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "modules": [
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:            "iostat",
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:            "nfs",
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:            "restful"
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        ],
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "services": {}
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    },
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "servicemap": {
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "epoch": 1,
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "modified": "2025-10-02T07:46:38.638077+0000",
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:        "services": {}
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    },
Oct  2 03:46:47 np0005465604 sharp_curie[74879]:    "progress_events": {}
Oct  2 03:46:47 np0005465604 sharp_curie[74879]: }
Oct  2 03:46:47 np0005465604 systemd[1]: libpod-b942f18db58465ea1b4d5d3af20805b29d4c90f45535f69ebf31e5b32ceef125.scope: Deactivated successfully.
Oct  2 03:46:47 np0005465604 podman[74863]: 2025-10-02 07:46:47.041723534 +0000 UTC m=+0.553270152 container died b942f18db58465ea1b4d5d3af20805b29d4c90f45535f69ebf31e5b32ceef125 (image=quay.io/ceph/ceph:v18, name=sharp_curie, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 03:46:47 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b0e27e92dc246f4452c4cbc8bb1f1e400d8197a6ca165debb309d133490382a7-merged.mount: Deactivated successfully.
Oct  2 03:46:47 np0005465604 podman[74863]: 2025-10-02 07:46:47.080619765 +0000 UTC m=+0.592166383 container remove b942f18db58465ea1b4d5d3af20805b29d4c90f45535f69ebf31e5b32ceef125 (image=quay.io/ceph/ceph:v18, name=sharp_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:46:47 np0005465604 systemd[1]: libpod-conmon-b942f18db58465ea1b4d5d3af20805b29d4c90f45535f69ebf31e5b32ceef125.scope: Deactivated successfully.
Oct  2 03:46:47 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'devicehealth'
Oct  2 03:46:48 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:48.228+0000 7fc4242d4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 03:46:48 np0005465604 ceph-mgr[74774]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 03:46:48 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'diskprediction_local'
Oct  2 03:46:48 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  2 03:46:48 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  2 03:46:48 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]:  from numpy import show_config as show_numpy_config
Oct  2 03:46:48 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:48.747+0000 7fc4242d4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  2 03:46:48 np0005465604 ceph-mgr[74774]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  2 03:46:48 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'influx'
Oct  2 03:46:48 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:48.979+0000 7fc4242d4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  2 03:46:48 np0005465604 ceph-mgr[74774]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  2 03:46:48 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'insights'
Oct  2 03:46:49 np0005465604 podman[74917]: 2025-10-02 07:46:49.159001331 +0000 UTC m=+0.059945597 container create 237993a3baea841b420b73a9b7c49a08009a172dd43efb398dc594c7c67c4531 (image=quay.io/ceph/ceph:v18, name=xenodochial_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 03:46:49 np0005465604 systemd[1]: Started libpod-conmon-237993a3baea841b420b73a9b7c49a08009a172dd43efb398dc594c7c67c4531.scope.
Oct  2 03:46:49 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'iostat'
Oct  2 03:46:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969867e36324f27ea50b2cd5b0b9152592ab497cbaac9b6823d43cfa9a5429a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969867e36324f27ea50b2cd5b0b9152592ab497cbaac9b6823d43cfa9a5429a3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969867e36324f27ea50b2cd5b0b9152592ab497cbaac9b6823d43cfa9a5429a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:49 np0005465604 podman[74917]: 2025-10-02 07:46:49.136909443 +0000 UTC m=+0.037853749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:49 np0005465604 podman[74917]: 2025-10-02 07:46:49.250794726 +0000 UTC m=+0.151739012 container init 237993a3baea841b420b73a9b7c49a08009a172dd43efb398dc594c7c67c4531 (image=quay.io/ceph/ceph:v18, name=xenodochial_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 03:46:49 np0005465604 podman[74917]: 2025-10-02 07:46:49.257158094 +0000 UTC m=+0.158102400 container start 237993a3baea841b420b73a9b7c49a08009a172dd43efb398dc594c7c67c4531 (image=quay.io/ceph/ceph:v18, name=xenodochial_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 03:46:49 np0005465604 podman[74917]: 2025-10-02 07:46:49.260791857 +0000 UTC m=+0.161736143 container attach 237993a3baea841b420b73a9b7c49a08009a172dd43efb398dc594c7c67c4531 (image=quay.io/ceph/ceph:v18, name=xenodochial_chaplygin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:46:49 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:49.496+0000 7fc4242d4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  2 03:46:49 np0005465604 ceph-mgr[74774]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  2 03:46:49 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'k8sevents'
Oct  2 03:46:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 03:46:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3908889597' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]: 
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]: {
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "health": {
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "status": "HEALTH_OK",
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "checks": {},
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "mutes": []
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    },
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "election_epoch": 5,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "quorum": [
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        0
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    ],
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "quorum_names": [
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "compute-0"
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    ],
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "quorum_age": 8,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "monmap": {
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "epoch": 1,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "min_mon_release_name": "reef",
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "num_mons": 1
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    },
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "osdmap": {
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "epoch": 1,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "num_osds": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "num_up_osds": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "osd_up_since": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "num_in_osds": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "osd_in_since": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "num_remapped_pgs": 0
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    },
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "pgmap": {
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "pgs_by_state": [],
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "num_pgs": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "num_pools": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "num_objects": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "data_bytes": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "bytes_used": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "bytes_avail": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "bytes_total": 0
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    },
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "fsmap": {
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "epoch": 1,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "by_rank": [],
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "up:standby": 0
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    },
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "mgrmap": {
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "available": false,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "num_standbys": 0,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "modules": [
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:            "iostat",
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:            "nfs",
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:            "restful"
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        ],
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "services": {}
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    },
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "servicemap": {
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "epoch": 1,
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "modified": "2025-10-02T07:46:38.638077+0000",
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:        "services": {}
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    },
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]:    "progress_events": {}
Oct  2 03:46:49 np0005465604 xenodochial_chaplygin[74933]: }
Oct  2 03:46:49 np0005465604 systemd[1]: libpod-237993a3baea841b420b73a9b7c49a08009a172dd43efb398dc594c7c67c4531.scope: Deactivated successfully.
Oct  2 03:46:49 np0005465604 podman[74917]: 2025-10-02 07:46:49.68070021 +0000 UTC m=+0.581644486 container died 237993a3baea841b420b73a9b7c49a08009a172dd43efb398dc594c7c67c4531 (image=quay.io/ceph/ceph:v18, name=xenodochial_chaplygin, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:46:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-969867e36324f27ea50b2cd5b0b9152592ab497cbaac9b6823d43cfa9a5429a3-merged.mount: Deactivated successfully.
Oct  2 03:46:49 np0005465604 podman[74917]: 2025-10-02 07:46:49.736830016 +0000 UTC m=+0.637774322 container remove 237993a3baea841b420b73a9b7c49a08009a172dd43efb398dc594c7c67c4531 (image=quay.io/ceph/ceph:v18, name=xenodochial_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 03:46:49 np0005465604 systemd[1]: libpod-conmon-237993a3baea841b420b73a9b7c49a08009a172dd43efb398dc594c7c67c4531.scope: Deactivated successfully.
Oct  2 03:46:51 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'localpool'
Oct  2 03:46:51 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'mds_autoscaler'
Oct  2 03:46:51 np0005465604 podman[74971]: 2025-10-02 07:46:51.815009166 +0000 UTC m=+0.045631531 container create 089320a3f68b0db3b238241a3d188a79445900dfa6e1843d386c6ef792c8f7cd (image=quay.io/ceph/ceph:v18, name=fervent_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 03:46:51 np0005465604 systemd[1]: Started libpod-conmon-089320a3f68b0db3b238241a3d188a79445900dfa6e1843d386c6ef792c8f7cd.scope.
Oct  2 03:46:51 np0005465604 podman[74971]: 2025-10-02 07:46:51.795466867 +0000 UTC m=+0.026089222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e7d0ecb88ec6daa2a47c372915bca03ee29f3c5563c109d5661103f7c06b12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e7d0ecb88ec6daa2a47c372915bca03ee29f3c5563c109d5661103f7c06b12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e7d0ecb88ec6daa2a47c372915bca03ee29f3c5563c109d5661103f7c06b12/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:51 np0005465604 podman[74971]: 2025-10-02 07:46:51.932917053 +0000 UTC m=+0.163539408 container init 089320a3f68b0db3b238241a3d188a79445900dfa6e1843d386c6ef792c8f7cd (image=quay.io/ceph/ceph:v18, name=fervent_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 03:46:51 np0005465604 podman[74971]: 2025-10-02 07:46:51.940613753 +0000 UTC m=+0.171236078 container start 089320a3f68b0db3b238241a3d188a79445900dfa6e1843d386c6ef792c8f7cd (image=quay.io/ceph/ceph:v18, name=fervent_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 03:46:51 np0005465604 podman[74971]: 2025-10-02 07:46:51.95437401 +0000 UTC m=+0.184996395 container attach 089320a3f68b0db3b238241a3d188a79445900dfa6e1843d386c6ef792c8f7cd (image=quay.io/ceph/ceph:v18, name=fervent_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 03:46:52 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'mirroring'
Oct  2 03:46:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 03:46:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2381688684' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]: 
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]: {
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "health": {
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "status": "HEALTH_OK",
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "checks": {},
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "mutes": []
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    },
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "election_epoch": 5,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "quorum": [
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        0
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    ],
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "quorum_names": [
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "compute-0"
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    ],
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "quorum_age": 10,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "monmap": {
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "epoch": 1,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "min_mon_release_name": "reef",
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "num_mons": 1
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    },
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "osdmap": {
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "epoch": 1,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "num_osds": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "num_up_osds": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "osd_up_since": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "num_in_osds": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "osd_in_since": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "num_remapped_pgs": 0
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    },
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "pgmap": {
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "pgs_by_state": [],
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "num_pgs": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "num_pools": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "num_objects": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "data_bytes": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "bytes_used": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "bytes_avail": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "bytes_total": 0
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    },
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "fsmap": {
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "epoch": 1,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "by_rank": [],
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "up:standby": 0
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    },
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "mgrmap": {
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "available": false,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "num_standbys": 0,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "modules": [
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:            "iostat",
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:            "nfs",
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:            "restful"
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        ],
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "services": {}
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    },
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "servicemap": {
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "epoch": 1,
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "modified": "2025-10-02T07:46:38.638077+0000",
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:        "services": {}
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    },
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]:    "progress_events": {}
Oct  2 03:46:52 np0005465604 fervent_hodgkin[74988]: }
Oct  2 03:46:52 np0005465604 systemd[1]: libpod-089320a3f68b0db3b238241a3d188a79445900dfa6e1843d386c6ef792c8f7cd.scope: Deactivated successfully.
Oct  2 03:46:52 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'nfs'
Oct  2 03:46:52 np0005465604 podman[75014]: 2025-10-02 07:46:52.416789846 +0000 UTC m=+0.027887438 container died 089320a3f68b0db3b238241a3d188a79445900dfa6e1843d386c6ef792c8f7cd (image=quay.io/ceph/ceph:v18, name=fervent_hodgkin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-59e7d0ecb88ec6daa2a47c372915bca03ee29f3c5563c109d5661103f7c06b12-merged.mount: Deactivated successfully.
Oct  2 03:46:52 np0005465604 podman[75014]: 2025-10-02 07:46:52.455273353 +0000 UTC m=+0.066370955 container remove 089320a3f68b0db3b238241a3d188a79445900dfa6e1843d386c6ef792c8f7cd (image=quay.io/ceph/ceph:v18, name=fervent_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:46:52 np0005465604 systemd[1]: libpod-conmon-089320a3f68b0db3b238241a3d188a79445900dfa6e1843d386c6ef792c8f7cd.scope: Deactivated successfully.
Oct  2 03:46:53 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:53.061+0000 7fc4242d4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  2 03:46:53 np0005465604 ceph-mgr[74774]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  2 03:46:53 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'orchestrator'
Oct  2 03:46:53 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:53.719+0000 7fc4242d4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  2 03:46:53 np0005465604 ceph-mgr[74774]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  2 03:46:53 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'osd_perf_query'
Oct  2 03:46:53 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:53.984+0000 7fc4242d4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  2 03:46:53 np0005465604 ceph-mgr[74774]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  2 03:46:53 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'osd_support'
Oct  2 03:46:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:54.214+0000 7fc4242d4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  2 03:46:54 np0005465604 ceph-mgr[74774]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  2 03:46:54 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'pg_autoscaler'
Oct  2 03:46:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:54.487+0000 7fc4242d4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  2 03:46:54 np0005465604 ceph-mgr[74774]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  2 03:46:54 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'progress'
Oct  2 03:46:54 np0005465604 podman[75029]: 2025-10-02 07:46:54.54108986 +0000 UTC m=+0.053638770 container create 4f03b8f173ff08a37ff3a9e1fa55f0d28f979252ff71961ec90ec46eff1e914b (image=quay.io/ceph/ceph:v18, name=sleepy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:46:54 np0005465604 systemd[1]: Started libpod-conmon-4f03b8f173ff08a37ff3a9e1fa55f0d28f979252ff71961ec90ec46eff1e914b.scope.
Oct  2 03:46:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebc71bf24983e423c2b40639f9045f19459fdf80ad4bd34d4542f91d5c9f5d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebc71bf24983e423c2b40639f9045f19459fdf80ad4bd34d4542f91d5c9f5d5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebc71bf24983e423c2b40639f9045f19459fdf80ad4bd34d4542f91d5c9f5d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:54 np0005465604 podman[75029]: 2025-10-02 07:46:54.520062776 +0000 UTC m=+0.032611716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:54 np0005465604 podman[75029]: 2025-10-02 07:46:54.624618509 +0000 UTC m=+0.137167459 container init 4f03b8f173ff08a37ff3a9e1fa55f0d28f979252ff71961ec90ec46eff1e914b (image=quay.io/ceph/ceph:v18, name=sleepy_matsumoto, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:46:54 np0005465604 podman[75029]: 2025-10-02 07:46:54.631340137 +0000 UTC m=+0.143889047 container start 4f03b8f173ff08a37ff3a9e1fa55f0d28f979252ff71961ec90ec46eff1e914b (image=quay.io/ceph/ceph:v18, name=sleepy_matsumoto, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:46:54 np0005465604 podman[75029]: 2025-10-02 07:46:54.635146106 +0000 UTC m=+0.147695006 container attach 4f03b8f173ff08a37ff3a9e1fa55f0d28f979252ff71961ec90ec46eff1e914b (image=quay.io/ceph/ceph:v18, name=sleepy_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:46:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:54.741+0000 7fc4242d4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  2 03:46:54 np0005465604 ceph-mgr[74774]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  2 03:46:54 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'prometheus'
Oct  2 03:46:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 03:46:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316619771' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]: 
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]: {
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "health": {
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "status": "HEALTH_OK",
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "checks": {},
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "mutes": []
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    },
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "election_epoch": 5,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "quorum": [
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        0
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    ],
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "quorum_names": [
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "compute-0"
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    ],
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "quorum_age": 13,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "monmap": {
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "epoch": 1,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "min_mon_release_name": "reef",
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "num_mons": 1
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    },
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "osdmap": {
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "epoch": 1,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "num_osds": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "num_up_osds": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "osd_up_since": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "num_in_osds": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "osd_in_since": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "num_remapped_pgs": 0
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    },
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "pgmap": {
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "pgs_by_state": [],
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "num_pgs": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "num_pools": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "num_objects": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "data_bytes": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "bytes_used": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "bytes_avail": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "bytes_total": 0
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    },
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "fsmap": {
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "epoch": 1,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "by_rank": [],
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "up:standby": 0
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    },
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "mgrmap": {
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "available": false,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "num_standbys": 0,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "modules": [
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:            "iostat",
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:            "nfs",
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:            "restful"
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        ],
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "services": {}
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    },
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "servicemap": {
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "epoch": 1,
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "modified": "2025-10-02T07:46:38.638077+0000",
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:        "services": {}
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    },
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]:    "progress_events": {}
Oct  2 03:46:55 np0005465604 sleepy_matsumoto[75045]: }
Oct  2 03:46:55 np0005465604 systemd[1]: libpod-4f03b8f173ff08a37ff3a9e1fa55f0d28f979252ff71961ec90ec46eff1e914b.scope: Deactivated successfully.
Oct  2 03:46:55 np0005465604 podman[75029]: 2025-10-02 07:46:55.052488439 +0000 UTC m=+0.565037399 container died 4f03b8f173ff08a37ff3a9e1fa55f0d28f979252ff71961ec90ec46eff1e914b (image=quay.io/ceph/ceph:v18, name=sleepy_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 03:46:55 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0ebc71bf24983e423c2b40639f9045f19459fdf80ad4bd34d4542f91d5c9f5d5-merged.mount: Deactivated successfully.
Oct  2 03:46:55 np0005465604 podman[75029]: 2025-10-02 07:46:55.115909332 +0000 UTC m=+0.628458252 container remove 4f03b8f173ff08a37ff3a9e1fa55f0d28f979252ff71961ec90ec46eff1e914b (image=quay.io/ceph/ceph:v18, name=sleepy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 03:46:55 np0005465604 systemd[1]: libpod-conmon-4f03b8f173ff08a37ff3a9e1fa55f0d28f979252ff71961ec90ec46eff1e914b.scope: Deactivated successfully.
Oct  2 03:46:55 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:55.722+0000 7fc4242d4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  2 03:46:55 np0005465604 ceph-mgr[74774]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  2 03:46:55 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'rbd_support'
Oct  2 03:46:56 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:56.006+0000 7fc4242d4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  2 03:46:56 np0005465604 ceph-mgr[74774]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  2 03:46:56 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'restful'
Oct  2 03:46:56 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'rgw'
Oct  2 03:46:57 np0005465604 podman[75083]: 2025-10-02 07:46:57.184972188 +0000 UTC m=+0.045936840 container create c8d9809071d29c1c48c9dc60cf6e915515b00e313d7a7981a51dbe4d02234461 (image=quay.io/ceph/ceph:v18, name=competent_agnesi, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 03:46:57 np0005465604 systemd[1]: Started libpod-conmon-c8d9809071d29c1c48c9dc60cf6e915515b00e313d7a7981a51dbe4d02234461.scope.
Oct  2 03:46:57 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7d2745ea1c2652c74b698b3343ea9c17d17e7c61085cba56442e24a0f05536/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7d2745ea1c2652c74b698b3343ea9c17d17e7c61085cba56442e24a0f05536/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc7d2745ea1c2652c74b698b3343ea9c17d17e7c61085cba56442e24a0f05536/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:57 np0005465604 podman[75083]: 2025-10-02 07:46:57.250214158 +0000 UTC m=+0.111178850 container init c8d9809071d29c1c48c9dc60cf6e915515b00e313d7a7981a51dbe4d02234461 (image=quay.io/ceph/ceph:v18, name=competent_agnesi, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 03:46:57 np0005465604 podman[75083]: 2025-10-02 07:46:57.254840311 +0000 UTC m=+0.115804923 container start c8d9809071d29c1c48c9dc60cf6e915515b00e313d7a7981a51dbe4d02234461 (image=quay.io/ceph/ceph:v18, name=competent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:46:57 np0005465604 podman[75083]: 2025-10-02 07:46:57.257781043 +0000 UTC m=+0.118745745 container attach c8d9809071d29c1c48c9dc60cf6e915515b00e313d7a7981a51dbe4d02234461 (image=quay.io/ceph/ceph:v18, name=competent_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:46:57 np0005465604 podman[75083]: 2025-10-02 07:46:57.162394426 +0000 UTC m=+0.023359058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:57 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:57.370+0000 7fc4242d4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  2 03:46:57 np0005465604 ceph-mgr[74774]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  2 03:46:57 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'rook'
Oct  2 03:46:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 03:46:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033769983' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]: 
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]: {
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "health": {
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "status": "HEALTH_OK",
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "checks": {},
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "mutes": []
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    },
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "election_epoch": 5,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "quorum": [
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        0
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    ],
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "quorum_names": [
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "compute-0"
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    ],
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "quorum_age": 16,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "monmap": {
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "epoch": 1,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "min_mon_release_name": "reef",
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "num_mons": 1
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    },
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "osdmap": {
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "epoch": 1,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "num_osds": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "num_up_osds": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "osd_up_since": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "num_in_osds": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "osd_in_since": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "num_remapped_pgs": 0
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    },
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "pgmap": {
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "pgs_by_state": [],
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "num_pgs": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "num_pools": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "num_objects": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "data_bytes": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "bytes_used": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "bytes_avail": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "bytes_total": 0
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    },
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "fsmap": {
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "epoch": 1,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "by_rank": [],
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "up:standby": 0
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    },
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "mgrmap": {
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "available": false,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "num_standbys": 0,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "modules": [
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:            "iostat",
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:            "nfs",
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:            "restful"
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        ],
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "services": {}
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    },
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "servicemap": {
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "epoch": 1,
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "modified": "2025-10-02T07:46:38.638077+0000",
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:        "services": {}
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    },
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]:    "progress_events": {}
Oct  2 03:46:57 np0005465604 competent_agnesi[75100]: }
Oct  2 03:46:57 np0005465604 systemd[1]: libpod-c8d9809071d29c1c48c9dc60cf6e915515b00e313d7a7981a51dbe4d02234461.scope: Deactivated successfully.
Oct  2 03:46:57 np0005465604 podman[75083]: 2025-10-02 07:46:57.643591355 +0000 UTC m=+0.504556007 container died c8d9809071d29c1c48c9dc60cf6e915515b00e313d7a7981a51dbe4d02234461 (image=quay.io/ceph/ceph:v18, name=competent_agnesi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-cc7d2745ea1c2652c74b698b3343ea9c17d17e7c61085cba56442e24a0f05536-merged.mount: Deactivated successfully.
Oct  2 03:46:57 np0005465604 podman[75083]: 2025-10-02 07:46:57.692552128 +0000 UTC m=+0.553516740 container remove c8d9809071d29c1c48c9dc60cf6e915515b00e313d7a7981a51dbe4d02234461 (image=quay.io/ceph/ceph:v18, name=competent_agnesi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 03:46:57 np0005465604 systemd[1]: libpod-conmon-c8d9809071d29c1c48c9dc60cf6e915515b00e313d7a7981a51dbe4d02234461.scope: Deactivated successfully.
Oct  2 03:46:59 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:59.385+0000 7fc4242d4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  2 03:46:59 np0005465604 ceph-mgr[74774]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  2 03:46:59 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'selftest'
Oct  2 03:46:59 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:59.626+0000 7fc4242d4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  2 03:46:59 np0005465604 ceph-mgr[74774]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  2 03:46:59 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'snap_schedule'
Oct  2 03:46:59 np0005465604 podman[75137]: 2025-10-02 07:46:59.772713269 +0000 UTC m=+0.055390984 container create 71d7703eef4faa429dab2c3503432c3589d921d2a017993a47dea128273dde22 (image=quay.io/ceph/ceph:v18, name=serene_burnell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:46:59 np0005465604 systemd[1]: Started libpod-conmon-71d7703eef4faa429dab2c3503432c3589d921d2a017993a47dea128273dde22.scope.
Oct  2 03:46:59 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:46:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f546ce4d4f2a5f880d69b1f5f8464b8a26f8af1d2abeaf018dc2f9e770b534e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f546ce4d4f2a5f880d69b1f5f8464b8a26f8af1d2abeaf018dc2f9e770b534e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f546ce4d4f2a5f880d69b1f5f8464b8a26f8af1d2abeaf018dc2f9e770b534e3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:46:59 np0005465604 podman[75137]: 2025-10-02 07:46:59.743559562 +0000 UTC m=+0.026237317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:46:59 np0005465604 podman[75137]: 2025-10-02 07:46:59.848143446 +0000 UTC m=+0.130821211 container init 71d7703eef4faa429dab2c3503432c3589d921d2a017993a47dea128273dde22 (image=quay.io/ceph/ceph:v18, name=serene_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 03:46:59 np0005465604 podman[75137]: 2025-10-02 07:46:59.854156953 +0000 UTC m=+0.136834668 container start 71d7703eef4faa429dab2c3503432c3589d921d2a017993a47dea128273dde22 (image=quay.io/ceph/ceph:v18, name=serene_burnell, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:46:59 np0005465604 podman[75137]: 2025-10-02 07:46:59.857397073 +0000 UTC m=+0.140074818 container attach 71d7703eef4faa429dab2c3503432c3589d921d2a017993a47dea128273dde22 (image=quay.io/ceph/ceph:v18, name=serene_burnell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:46:59 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:46:59.867+0000 7fc4242d4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  2 03:46:59 np0005465604 ceph-mgr[74774]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  2 03:46:59 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'stats'
Oct  2 03:47:00 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'status'
Oct  2 03:47:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 03:47:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859185238' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 03:47:00 np0005465604 serene_burnell[75153]: 
Oct  2 03:47:00 np0005465604 serene_burnell[75153]: {
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "health": {
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "status": "HEALTH_OK",
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "checks": {},
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "mutes": []
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    },
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "election_epoch": 5,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "quorum": [
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        0
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    ],
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "quorum_names": [
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "compute-0"
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    ],
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "quorum_age": 18,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "monmap": {
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "epoch": 1,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "min_mon_release_name": "reef",
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "num_mons": 1
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    },
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "osdmap": {
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "epoch": 1,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "num_osds": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "num_up_osds": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "osd_up_since": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "num_in_osds": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "osd_in_since": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "num_remapped_pgs": 0
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    },
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "pgmap": {
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "pgs_by_state": [],
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "num_pgs": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "num_pools": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "num_objects": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "data_bytes": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "bytes_used": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "bytes_avail": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "bytes_total": 0
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    },
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "fsmap": {
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "epoch": 1,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "by_rank": [],
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "up:standby": 0
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    },
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "mgrmap": {
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "available": false,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "num_standbys": 0,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "modules": [
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:            "iostat",
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:            "nfs",
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:            "restful"
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        ],
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "services": {}
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    },
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "servicemap": {
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "epoch": 1,
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "modified": "2025-10-02T07:46:38.638077+0000",
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:        "services": {}
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    },
Oct  2 03:47:00 np0005465604 serene_burnell[75153]:    "progress_events": {}
Oct  2 03:47:00 np0005465604 serene_burnell[75153]: }
Oct  2 03:47:00 np0005465604 systemd[1]: libpod-71d7703eef4faa429dab2c3503432c3589d921d2a017993a47dea128273dde22.scope: Deactivated successfully.
Oct  2 03:47:00 np0005465604 podman[75137]: 2025-10-02 07:47:00.252808304 +0000 UTC m=+0.535486039 container died 71d7703eef4faa429dab2c3503432c3589d921d2a017993a47dea128273dde22 (image=quay.io/ceph/ceph:v18, name=serene_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f546ce4d4f2a5f880d69b1f5f8464b8a26f8af1d2abeaf018dc2f9e770b534e3-merged.mount: Deactivated successfully.
Oct  2 03:47:00 np0005465604 podman[75137]: 2025-10-02 07:47:00.289761123 +0000 UTC m=+0.572438838 container remove 71d7703eef4faa429dab2c3503432c3589d921d2a017993a47dea128273dde22 (image=quay.io/ceph/ceph:v18, name=serene_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 03:47:00 np0005465604 systemd[1]: libpod-conmon-71d7703eef4faa429dab2c3503432c3589d921d2a017993a47dea128273dde22.scope: Deactivated successfully.
Oct  2 03:47:00 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:00.380+0000 7fc4242d4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  2 03:47:00 np0005465604 ceph-mgr[74774]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  2 03:47:00 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'telegraf'
Oct  2 03:47:00 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:00.614+0000 7fc4242d4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  2 03:47:00 np0005465604 ceph-mgr[74774]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  2 03:47:00 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'telemetry'
Oct  2 03:47:01 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:01.270+0000 7fc4242d4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  2 03:47:01 np0005465604 ceph-mgr[74774]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  2 03:47:01 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'test_orchestrator'
Oct  2 03:47:01 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:01.925+0000 7fc4242d4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  2 03:47:01 np0005465604 ceph-mgr[74774]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  2 03:47:01 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'volumes'
Oct  2 03:47:02 np0005465604 podman[75192]: 2025-10-02 07:47:02.374369073 +0000 UTC m=+0.055425605 container create 89dcb37ec6f3e5df68ef7734d866127279a5aa8afe6d876303cc6fc6dac7f36f (image=quay.io/ceph/ceph:v18, name=goofy_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 03:47:02 np0005465604 systemd[1]: Started libpod-conmon-89dcb37ec6f3e5df68ef7734d866127279a5aa8afe6d876303cc6fc6dac7f36f.scope.
Oct  2 03:47:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:02 np0005465604 podman[75192]: 2025-10-02 07:47:02.344024739 +0000 UTC m=+0.025081261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc07d56fd6650bae41114c9c3fda3391d4e9a570fbafd5df9580c68fe878bc60/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc07d56fd6650bae41114c9c3fda3391d4e9a570fbafd5df9580c68fe878bc60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc07d56fd6650bae41114c9c3fda3391d4e9a570fbafd5df9580c68fe878bc60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:02 np0005465604 podman[75192]: 2025-10-02 07:47:02.476017926 +0000 UTC m=+0.157074518 container init 89dcb37ec6f3e5df68ef7734d866127279a5aa8afe6d876303cc6fc6dac7f36f (image=quay.io/ceph/ceph:v18, name=goofy_buck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 03:47:02 np0005465604 podman[75192]: 2025-10-02 07:47:02.487507942 +0000 UTC m=+0.168564474 container start 89dcb37ec6f3e5df68ef7734d866127279a5aa8afe6d876303cc6fc6dac7f36f (image=quay.io/ceph/ceph:v18, name=goofy_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 03:47:02 np0005465604 podman[75192]: 2025-10-02 07:47:02.491601011 +0000 UTC m=+0.172657603 container attach 89dcb37ec6f3e5df68ef7734d866127279a5aa8afe6d876303cc6fc6dac7f36f (image=quay.io/ceph/ceph:v18, name=goofy_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 03:47:02 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:02.617+0000 7fc4242d4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'zabbix'
Oct  2 03:47:02 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:02.841+0000 7fc4242d4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: ms_deliver_dispatch: unhandled message 0x56371f2bd1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qlmhsi
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr handle_mgr_map Activating!
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr handle_mgr_map I am now activating
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.qlmhsi(active, starting, since 0.0139666s)
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e1 all = 1
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qlmhsi", "id": "compute-0.qlmhsi"} v 0) v1
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qlmhsi", "id": "compute-0.qlmhsi"}]: dispatch
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: balancer
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: crash
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [balancer INFO root] Starting
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Manager daemon compute-0.qlmhsi is now available
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: devicehealth
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:47:02
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [balancer INFO root] No pools available
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: iostat
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Starting
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: nfs
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: orchestrator
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: pg_autoscaler
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: progress
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [progress INFO root] Loading...
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [progress INFO root] No stored events to load
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [progress INFO root] Loaded [] historic events
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [progress INFO root] Loaded OSDMap, ready.
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] recovery thread starting
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] starting setup
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: rbd_support
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: restful
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2517820874' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: status
Oct  2 03:47:02 np0005465604 goofy_buck[75208]: 
Oct  2 03:47:02 np0005465604 goofy_buck[75208]: {
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "health": {
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "status": "HEALTH_OK",
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "checks": {},
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "mutes": []
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    },
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "election_epoch": 5,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "quorum": [
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        0
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    ],
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "quorum_names": [
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "compute-0"
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [restful INFO root] server_addr: :: server_port: 8003
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    ],
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "quorum_age": 21,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "monmap": {
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "epoch": 1,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "min_mon_release_name": "reef",
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "num_mons": 1
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    },
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "osdmap": {
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "epoch": 1,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "num_osds": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "num_up_osds": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "osd_up_since": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "num_in_osds": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "osd_in_since": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "num_remapped_pgs": 0
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    },
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "pgmap": {
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "pgs_by_state": [],
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "num_pgs": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "num_pools": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "num_objects": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "data_bytes": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "bytes_used": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "bytes_avail": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "bytes_total": 0
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    },
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "fsmap": {
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "epoch": 1,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "by_rank": [],
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "up:standby": 0
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    },
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "mgrmap": {
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "available": false,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "num_standbys": 0,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "modules": [
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:            "iostat",
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:            "nfs",
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:            "restful"
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        ],
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "services": {}
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    },
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "servicemap": {
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "epoch": 1,
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "modified": "2025-10-02T07:46:38.638077+0000",
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:        "services": {}
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    },
Oct  2 03:47:02 np0005465604 goofy_buck[75208]:    "progress_events": {}
Oct  2 03:47:02 np0005465604 goofy_buck[75208]: }
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: telemetry
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/mirror_snapshot_schedule"} v 0) v1
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/mirror_snapshot_schedule"}]: dispatch
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] PerfHandler: starting
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [restful WARNING root] server not running: no certificate configured
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TaskHandler: starting
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/trash_purge_schedule"} v 0) v1
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/trash_purge_schedule"}]: dispatch
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] setup complete
Oct  2 03:47:02 np0005465604 systemd[1]: libpod-89dcb37ec6f3e5df68ef7734d866127279a5aa8afe6d876303cc6fc6dac7f36f.scope: Deactivated successfully.
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct  2 03:47:02 np0005465604 podman[75192]: 2025-10-02 07:47:02.907115007 +0000 UTC m=+0.588171529 container died 89dcb37ec6f3e5df68ef7734d866127279a5aa8afe6d876303cc6fc6dac7f36f (image=quay.io/ceph/ceph:v18, name=goofy_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: Activating manager daemon compute-0.qlmhsi
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: Manager daemon compute-0.qlmhsi is now available
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/mirror_snapshot_schedule"}]: dispatch
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/trash_purge_schedule"}]: dispatch
Oct  2 03:47:02 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: volumes
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct  2 03:47:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bc07d56fd6650bae41114c9c3fda3391d4e9a570fbafd5df9580c68fe878bc60-merged.mount: Deactivated successfully.
Oct  2 03:47:02 np0005465604 podman[75192]: 2025-10-02 07:47:02.950268708 +0000 UTC m=+0.631325200 container remove 89dcb37ec6f3e5df68ef7734d866127279a5aa8afe6d876303cc6fc6dac7f36f (image=quay.io/ceph/ceph:v18, name=goofy_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:47:02 np0005465604 systemd[1]: libpod-conmon-89dcb37ec6f3e5df68ef7734d866127279a5aa8afe6d876303cc6fc6dac7f36f.scope: Deactivated successfully.
Oct  2 03:47:03 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.qlmhsi(active, since 1.02656s)
Oct  2 03:47:03 np0005465604 ceph-mon[74477]: from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:03 np0005465604 ceph-mon[74477]: from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:03 np0005465604 ceph-mon[74477]: from='mgr.14102 192.168.122.100:0/1322250187' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:04 np0005465604 ceph-mgr[74774]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  2 03:47:04 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.qlmhsi(active, since 2s)
Oct  2 03:47:05 np0005465604 podman[75326]: 2025-10-02 07:47:05.042858717 +0000 UTC m=+0.062750043 container create 4c4f784dd42cc9b00a50e7080de4e7aba575607674a9a251a3365ca9b96dcb66 (image=quay.io/ceph/ceph:v18, name=friendly_hoover, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 03:47:05 np0005465604 systemd[1]: Started libpod-conmon-4c4f784dd42cc9b00a50e7080de4e7aba575607674a9a251a3365ca9b96dcb66.scope.
Oct  2 03:47:05 np0005465604 podman[75326]: 2025-10-02 07:47:05.019131229 +0000 UTC m=+0.039022635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b0fe0d07818e9ab72be1d7e559db5c87b43f2c46fa2524c0bb7b67b9c1534f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b0fe0d07818e9ab72be1d7e559db5c87b43f2c46fa2524c0bb7b67b9c1534f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b0fe0d07818e9ab72be1d7e559db5c87b43f2c46fa2524c0bb7b67b9c1534f2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:05 np0005465604 podman[75326]: 2025-10-02 07:47:05.139263356 +0000 UTC m=+0.159154702 container init 4c4f784dd42cc9b00a50e7080de4e7aba575607674a9a251a3365ca9b96dcb66 (image=quay.io/ceph/ceph:v18, name=friendly_hoover, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:05 np0005465604 podman[75326]: 2025-10-02 07:47:05.152187668 +0000 UTC m=+0.172079024 container start 4c4f784dd42cc9b00a50e7080de4e7aba575607674a9a251a3365ca9b96dcb66 (image=quay.io/ceph/ceph:v18, name=friendly_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct  2 03:47:05 np0005465604 podman[75326]: 2025-10-02 07:47:05.156906335 +0000 UTC m=+0.176797711 container attach 4c4f784dd42cc9b00a50e7080de4e7aba575607674a9a251a3365ca9b96dcb66 (image=quay.io/ceph/ceph:v18, name=friendly_hoover, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 03:47:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 03:47:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246472216' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]: 
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]: {
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "health": {
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "status": "HEALTH_OK",
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "checks": {},
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "mutes": []
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    },
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "election_epoch": 5,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "quorum": [
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        0
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    ],
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "quorum_names": [
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "compute-0"
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    ],
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "quorum_age": 24,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "monmap": {
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "epoch": 1,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "min_mon_release_name": "reef",
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "num_mons": 1
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    },
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "osdmap": {
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "epoch": 1,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "num_osds": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "num_up_osds": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "osd_up_since": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "num_in_osds": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "osd_in_since": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "num_remapped_pgs": 0
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    },
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "pgmap": {
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "pgs_by_state": [],
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "num_pgs": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "num_pools": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "num_objects": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "data_bytes": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "bytes_used": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "bytes_avail": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "bytes_total": 0
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    },
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "fsmap": {
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "epoch": 1,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "by_rank": [],
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "up:standby": 0
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    },
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "mgrmap": {
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "available": true,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "num_standbys": 0,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "modules": [
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:            "iostat",
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:            "nfs",
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:            "restful"
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        ],
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "services": {}
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    },
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "servicemap": {
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "epoch": 1,
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "modified": "2025-10-02T07:46:38.638077+0000",
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:        "services": {}
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    },
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]:    "progress_events": {}
Oct  2 03:47:05 np0005465604 friendly_hoover[75342]: }
Oct  2 03:47:05 np0005465604 systemd[1]: libpod-4c4f784dd42cc9b00a50e7080de4e7aba575607674a9a251a3365ca9b96dcb66.scope: Deactivated successfully.
Oct  2 03:47:05 np0005465604 conmon[75342]: conmon 4c4f784dd42cc9b00a50 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c4f784dd42cc9b00a50e7080de4e7aba575607674a9a251a3365ca9b96dcb66.scope/container/memory.events
Oct  2 03:47:05 np0005465604 podman[75326]: 2025-10-02 07:47:05.742673648 +0000 UTC m=+0.762564984 container died 4c4f784dd42cc9b00a50e7080de4e7aba575607674a9a251a3365ca9b96dcb66 (image=quay.io/ceph/ceph:v18, name=friendly_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:47:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6b0fe0d07818e9ab72be1d7e559db5c87b43f2c46fa2524c0bb7b67b9c1534f2-merged.mount: Deactivated successfully.
Oct  2 03:47:05 np0005465604 podman[75326]: 2025-10-02 07:47:05.784485148 +0000 UTC m=+0.804376464 container remove 4c4f784dd42cc9b00a50e7080de4e7aba575607674a9a251a3365ca9b96dcb66 (image=quay.io/ceph/ceph:v18, name=friendly_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:47:05 np0005465604 systemd[1]: libpod-conmon-4c4f784dd42cc9b00a50e7080de4e7aba575607674a9a251a3365ca9b96dcb66.scope: Deactivated successfully.
Oct  2 03:47:05 np0005465604 podman[75378]: 2025-10-02 07:47:05.852408811 +0000 UTC m=+0.048891242 container create 6be0eff5ebc18b4490c47d9b4a9469a1a6ef7793abf308983a36364125e75071 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 03:47:05 np0005465604 systemd[1]: Started libpod-conmon-6be0eff5ebc18b4490c47d9b4a9469a1a6ef7793abf308983a36364125e75071.scope.
Oct  2 03:47:05 np0005465604 podman[75378]: 2025-10-02 07:47:05.826727552 +0000 UTC m=+0.023210043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56e1b792571aae3621c51e8d0a8885d184a0170537de75add94b33eaa75215b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56e1b792571aae3621c51e8d0a8885d184a0170537de75add94b33eaa75215b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56e1b792571aae3621c51e8d0a8885d184a0170537de75add94b33eaa75215b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56e1b792571aae3621c51e8d0a8885d184a0170537de75add94b33eaa75215b/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:05 np0005465604 podman[75378]: 2025-10-02 07:47:05.967401858 +0000 UTC m=+0.163884309 container init 6be0eff5ebc18b4490c47d9b4a9469a1a6ef7793abf308983a36364125e75071 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:05 np0005465604 podman[75378]: 2025-10-02 07:47:05.976814021 +0000 UTC m=+0.173296452 container start 6be0eff5ebc18b4490c47d9b4a9469a1a6ef7793abf308983a36364125e75071 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:05 np0005465604 podman[75378]: 2025-10-02 07:47:05.981953042 +0000 UTC m=+0.178435483 container attach 6be0eff5ebc18b4490c47d9b4a9469a1a6ef7793abf308983a36364125e75071 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  2 03:47:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1378819296' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  2 03:47:06 np0005465604 systemd[1]: libpod-6be0eff5ebc18b4490c47d9b4a9469a1a6ef7793abf308983a36364125e75071.scope: Deactivated successfully.
Oct  2 03:47:06 np0005465604 podman[75378]: 2025-10-02 07:47:06.520456664 +0000 UTC m=+0.716939055 container died 6be0eff5ebc18b4490c47d9b4a9469a1a6ef7793abf308983a36364125e75071 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct  2 03:47:06 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d56e1b792571aae3621c51e8d0a8885d184a0170537de75add94b33eaa75215b-merged.mount: Deactivated successfully.
Oct  2 03:47:06 np0005465604 podman[75378]: 2025-10-02 07:47:06.559136637 +0000 UTC m=+0.755619068 container remove 6be0eff5ebc18b4490c47d9b4a9469a1a6ef7793abf308983a36364125e75071 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:06 np0005465604 systemd[1]: libpod-conmon-6be0eff5ebc18b4490c47d9b4a9469a1a6ef7793abf308983a36364125e75071.scope: Deactivated successfully.
Oct  2 03:47:06 np0005465604 podman[75432]: 2025-10-02 07:47:06.643177771 +0000 UTC m=+0.055716864 container create dcbff35b1ef99ddfe58f60b62854095a3cd221b7ccb752aefbd83ec9e3daec3a (image=quay.io/ceph/ceph:v18, name=vibrant_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:06 np0005465604 systemd[1]: Started libpod-conmon-dcbff35b1ef99ddfe58f60b62854095a3cd221b7ccb752aefbd83ec9e3daec3a.scope.
Oct  2 03:47:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771cf58bc1aea5d4263274ddc5a8e6021227c0a07aa88b6eefc90b3ee4bae1a8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771cf58bc1aea5d4263274ddc5a8e6021227c0a07aa88b6eefc90b3ee4bae1a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771cf58bc1aea5d4263274ddc5a8e6021227c0a07aa88b6eefc90b3ee4bae1a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:06 np0005465604 podman[75432]: 2025-10-02 07:47:06.712532219 +0000 UTC m=+0.125071302 container init dcbff35b1ef99ddfe58f60b62854095a3cd221b7ccb752aefbd83ec9e3daec3a (image=quay.io/ceph/ceph:v18, name=vibrant_albattani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:06 np0005465604 podman[75432]: 2025-10-02 07:47:06.716793681 +0000 UTC m=+0.129332744 container start dcbff35b1ef99ddfe58f60b62854095a3cd221b7ccb752aefbd83ec9e3daec3a (image=quay.io/ceph/ceph:v18, name=vibrant_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:06 np0005465604 podman[75432]: 2025-10-02 07:47:06.623456408 +0000 UTC m=+0.035995521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:06 np0005465604 podman[75432]: 2025-10-02 07:47:06.853649399 +0000 UTC m=+0.266188472 container attach dcbff35b1ef99ddfe58f60b62854095a3cd221b7ccb752aefbd83ec9e3daec3a (image=quay.io/ceph/ceph:v18, name=vibrant_albattani, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:06 np0005465604 ceph-mgr[74774]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  2 03:47:07 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1378819296' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  2 03:47:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct  2 03:47:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2127854443' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  2 03:47:08 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/2127854443' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  2 03:47:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2127854443' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  1: '-n'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  2: 'mgr.compute-0.qlmhsi'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  3: '-f'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  4: '--setuser'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  5: 'ceph'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  6: '--setgroup'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  7: 'ceph'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  8: '--default-log-to-file=false'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  9: '--default-log-to-journald=true'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr respawn  exe_path /proc/self/exe
Oct  2 03:47:08 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.qlmhsi(active, since 5s)
Oct  2 03:47:08 np0005465604 systemd[1]: libpod-dcbff35b1ef99ddfe58f60b62854095a3cd221b7ccb752aefbd83ec9e3daec3a.scope: Deactivated successfully.
Oct  2 03:47:08 np0005465604 podman[75432]: 2025-10-02 07:47:08.155785987 +0000 UTC m=+1.568325070 container died dcbff35b1ef99ddfe58f60b62854095a3cd221b7ccb752aefbd83ec9e3daec3a (image=quay.io/ceph/ceph:v18, name=vibrant_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-771cf58bc1aea5d4263274ddc5a8e6021227c0a07aa88b6eefc90b3ee4bae1a8-merged.mount: Deactivated successfully.
Oct  2 03:47:08 np0005465604 podman[75432]: 2025-10-02 07:47:08.201094392 +0000 UTC m=+1.613633455 container remove dcbff35b1ef99ddfe58f60b62854095a3cd221b7ccb752aefbd83ec9e3daec3a (image=quay.io/ceph/ceph:v18, name=vibrant_albattani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:08 np0005465604 systemd[1]: libpod-conmon-dcbff35b1ef99ddfe58f60b62854095a3cd221b7ccb752aefbd83ec9e3daec3a.scope: Deactivated successfully.
Oct  2 03:47:08 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: ignoring --setuser ceph since I am not root
Oct  2 03:47:08 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: ignoring --setgroup ceph since I am not root
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: pidfile_write: ignore empty --pid-file
Oct  2 03:47:08 np0005465604 podman[75486]: 2025-10-02 07:47:08.264981702 +0000 UTC m=+0.044036807 container create afd5eaa741323e8aa1578773d584c6d72063619c297937c0fdd5fceeeaa0a320 (image=quay.io/ceph/ceph:v18, name=sharp_galois, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 03:47:08 np0005465604 systemd[1]: Started libpod-conmon-afd5eaa741323e8aa1578773d584c6d72063619c297937c0fdd5fceeeaa0a320.scope.
Oct  2 03:47:08 np0005465604 podman[75486]: 2025-10-02 07:47:08.243671375 +0000 UTC m=+0.022726530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:08 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f57dd7752067fff2182f87912279663998936073f259e884ca7fa10e7a267b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f57dd7752067fff2182f87912279663998936073f259e884ca7fa10e7a267b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f57dd7752067fff2182f87912279663998936073f259e884ca7fa10e7a267b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:08 np0005465604 podman[75486]: 2025-10-02 07:47:08.359696835 +0000 UTC m=+0.138751990 container init afd5eaa741323e8aa1578773d584c6d72063619c297937c0fdd5fceeeaa0a320 (image=quay.io/ceph/ceph:v18, name=sharp_galois, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct  2 03:47:08 np0005465604 podman[75486]: 2025-10-02 07:47:08.369691564 +0000 UTC m=+0.148746699 container start afd5eaa741323e8aa1578773d584c6d72063619c297937c0fdd5fceeeaa0a320 (image=quay.io/ceph/ceph:v18, name=sharp_galois, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 03:47:08 np0005465604 podman[75486]: 2025-10-02 07:47:08.373695114 +0000 UTC m=+0.152750239 container attach afd5eaa741323e8aa1578773d584c6d72063619c297937c0fdd5fceeeaa0a320 (image=quay.io/ceph/ceph:v18, name=sharp_galois, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'alerts'
Oct  2 03:47:08 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:08.724+0000 7f684b432140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'balancer'
Oct  2 03:47:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  2 03:47:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4069264499' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  2 03:47:08 np0005465604 sharp_galois[75526]: {
Oct  2 03:47:08 np0005465604 sharp_galois[75526]:    "epoch": 5,
Oct  2 03:47:08 np0005465604 sharp_galois[75526]:    "available": true,
Oct  2 03:47:08 np0005465604 sharp_galois[75526]:    "active_name": "compute-0.qlmhsi",
Oct  2 03:47:08 np0005465604 sharp_galois[75526]:    "num_standby": 0
Oct  2 03:47:08 np0005465604 sharp_galois[75526]: }
Oct  2 03:47:08 np0005465604 systemd[1]: libpod-afd5eaa741323e8aa1578773d584c6d72063619c297937c0fdd5fceeeaa0a320.scope: Deactivated successfully.
Oct  2 03:47:08 np0005465604 podman[75486]: 2025-10-02 07:47:08.936340278 +0000 UTC m=+0.715395413 container died afd5eaa741323e8aa1578773d584c6d72063619c297937c0fdd5fceeeaa0a320 (image=quay.io/ceph/ceph:v18, name=sharp_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-25f57dd7752067fff2182f87912279663998936073f259e884ca7fa10e7a267b-merged.mount: Deactivated successfully.
Oct  2 03:47:08 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:08.982+0000 7f684b432140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 03:47:08 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'cephadm'
Oct  2 03:47:08 np0005465604 podman[75486]: 2025-10-02 07:47:08.995183628 +0000 UTC m=+0.774238763 container remove afd5eaa741323e8aa1578773d584c6d72063619c297937c0fdd5fceeeaa0a320 (image=quay.io/ceph/ceph:v18, name=sharp_galois, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:47:09 np0005465604 systemd[1]: libpod-conmon-afd5eaa741323e8aa1578773d584c6d72063619c297937c0fdd5fceeeaa0a320.scope: Deactivated successfully.
Oct  2 03:47:09 np0005465604 podman[75565]: 2025-10-02 07:47:09.077026065 +0000 UTC m=+0.059723087 container create 167c93227d90e6d306b2457172f9c87a21267bd3d8ed1875e498763b2541149c (image=quay.io/ceph/ceph:v18, name=nervous_williams, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 03:47:09 np0005465604 systemd[1]: Started libpod-conmon-167c93227d90e6d306b2457172f9c87a21267bd3d8ed1875e498763b2541149c.scope.
Oct  2 03:47:09 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/2127854443' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  2 03:47:09 np0005465604 podman[75565]: 2025-10-02 07:47:09.045191303 +0000 UTC m=+0.027888385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4220e32a9800f7c0fa84af669c19275f18b9f840269d97f017f00d24ec854c19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4220e32a9800f7c0fa84af669c19275f18b9f840269d97f017f00d24ec854c19/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4220e32a9800f7c0fa84af669c19275f18b9f840269d97f017f00d24ec854c19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:09 np0005465604 podman[75565]: 2025-10-02 07:47:09.188340154 +0000 UTC m=+0.171037206 container init 167c93227d90e6d306b2457172f9c87a21267bd3d8ed1875e498763b2541149c (image=quay.io/ceph/ceph:v18, name=nervous_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 03:47:09 np0005465604 podman[75565]: 2025-10-02 07:47:09.195947472 +0000 UTC m=+0.178644484 container start 167c93227d90e6d306b2457172f9c87a21267bd3d8ed1875e498763b2541149c (image=quay.io/ceph/ceph:v18, name=nervous_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:09 np0005465604 podman[75565]: 2025-10-02 07:47:09.200167288 +0000 UTC m=+0.182864300 container attach 167c93227d90e6d306b2457172f9c87a21267bd3d8ed1875e498763b2541149c (image=quay.io/ceph/ceph:v18, name=nervous_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:10 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'crash'
Oct  2 03:47:11 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:11.176+0000 7f684b432140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 03:47:11 np0005465604 ceph-mgr[74774]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 03:47:11 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'dashboard'
Oct  2 03:47:12 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'devicehealth'
Oct  2 03:47:12 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:12.811+0000 7f684b432140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 03:47:12 np0005465604 ceph-mgr[74774]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 03:47:12 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'diskprediction_local'
Oct  2 03:47:13 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  2 03:47:13 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  2 03:47:13 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]:  from numpy import show_config as show_numpy_config
Oct  2 03:47:13 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:13.316+0000 7f684b432140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  2 03:47:13 np0005465604 ceph-mgr[74774]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  2 03:47:13 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'influx'
Oct  2 03:47:13 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:13.556+0000 7f684b432140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  2 03:47:13 np0005465604 ceph-mgr[74774]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  2 03:47:13 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'insights'
Oct  2 03:47:13 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'iostat'
Oct  2 03:47:14 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:14.026+0000 7f684b432140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  2 03:47:14 np0005465604 ceph-mgr[74774]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  2 03:47:14 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'k8sevents'
Oct  2 03:47:15 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'localpool'
Oct  2 03:47:15 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'mds_autoscaler'
Oct  2 03:47:16 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'mirroring'
Oct  2 03:47:16 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'nfs'
Oct  2 03:47:17 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:17.623+0000 7f684b432140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  2 03:47:17 np0005465604 ceph-mgr[74774]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  2 03:47:17 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'orchestrator'
Oct  2 03:47:18 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:18.283+0000 7f684b432140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  2 03:47:18 np0005465604 ceph-mgr[74774]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  2 03:47:18 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'osd_perf_query'
Oct  2 03:47:18 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:18.552+0000 7f684b432140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  2 03:47:18 np0005465604 ceph-mgr[74774]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  2 03:47:18 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'osd_support'
Oct  2 03:47:18 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:18.799+0000 7f684b432140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  2 03:47:18 np0005465604 ceph-mgr[74774]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  2 03:47:18 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'pg_autoscaler'
Oct  2 03:47:19 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:19.082+0000 7f684b432140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  2 03:47:19 np0005465604 ceph-mgr[74774]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  2 03:47:19 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'progress'
Oct  2 03:47:19 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:19.318+0000 7f684b432140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  2 03:47:19 np0005465604 ceph-mgr[74774]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  2 03:47:19 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'prometheus'
Oct  2 03:47:20 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:20.365+0000 7f684b432140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  2 03:47:20 np0005465604 ceph-mgr[74774]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  2 03:47:20 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'rbd_support'
Oct  2 03:47:20 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:20.668+0000 7f684b432140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  2 03:47:20 np0005465604 ceph-mgr[74774]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  2 03:47:20 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'restful'
Oct  2 03:47:21 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'rgw'
Oct  2 03:47:22 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:22.111+0000 7f684b432140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  2 03:47:22 np0005465604 ceph-mgr[74774]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  2 03:47:22 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'rook'
Oct  2 03:47:24 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:24.144+0000 7f684b432140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  2 03:47:24 np0005465604 ceph-mgr[74774]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  2 03:47:24 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'selftest'
Oct  2 03:47:24 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:24.374+0000 7f684b432140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  2 03:47:24 np0005465604 ceph-mgr[74774]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  2 03:47:24 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'snap_schedule'
Oct  2 03:47:24 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:24.621+0000 7f684b432140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  2 03:47:24 np0005465604 ceph-mgr[74774]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  2 03:47:24 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'stats'
Oct  2 03:47:24 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'status'
Oct  2 03:47:25 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:25.176+0000 7f684b432140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  2 03:47:25 np0005465604 ceph-mgr[74774]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  2 03:47:25 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'telegraf'
Oct  2 03:47:25 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:25.425+0000 7f684b432140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  2 03:47:25 np0005465604 ceph-mgr[74774]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  2 03:47:25 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'telemetry'
Oct  2 03:47:26 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:26.065+0000 7f684b432140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  2 03:47:26 np0005465604 ceph-mgr[74774]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  2 03:47:26 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'test_orchestrator'
Oct  2 03:47:26 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:26.782+0000 7f684b432140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  2 03:47:26 np0005465604 ceph-mgr[74774]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  2 03:47:26 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'volumes'
Oct  2 03:47:27 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:27.514+0000 7f684b432140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr[py] Loading python module 'zabbix'
Oct  2 03:47:27 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T07:47:27.755+0000 7f684b432140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qlmhsi restarted
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qlmhsi
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: ms_deliver_dispatch: unhandled message 0x557f237651e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr handle_mgr_map Activating!
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr handle_mgr_map I am now activating
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.qlmhsi(active, starting, since 0.016134s)
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qlmhsi", "id": "compute-0.qlmhsi"} v 0) v1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qlmhsi", "id": "compute-0.qlmhsi"}]: dispatch
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e1 all = 1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: balancer
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Manager daemon compute-0.qlmhsi is now available
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Starting
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:47:27
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] No pools available
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: Active manager daemon compute-0.qlmhsi restarted
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: Activating manager daemon compute-0.qlmhsi
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: Manager daemon compute-0.qlmhsi is now available
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: cephadm
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: crash
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: devicehealth
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Starting
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: iostat
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: nfs
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: orchestrator
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: pg_autoscaler
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: progress
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [progress INFO root] Loading...
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [progress INFO root] No stored events to load
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [progress INFO root] Loaded [] historic events
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [progress INFO root] Loaded OSDMap, ready.
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] recovery thread starting
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] starting setup
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: rbd_support
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: restful
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [restful INFO root] server_addr: :: server_port: 8003
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: status
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [restful WARNING root] server not running: no certificate configured
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/mirror_snapshot_schedule"} v 0) v1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/mirror_snapshot_schedule"}]: dispatch
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: telemetry
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] PerfHandler: starting
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TaskHandler: starting
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/trash_purge_schedule"} v 0) v1
Oct  2 03:47:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/trash_purge_schedule"}]: dispatch
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] setup complete
Oct  2 03:47:27 np0005465604 ceph-mgr[74774]: mgr load Constructed class from module: volumes
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.qlmhsi(active, since 1.0259s)
Oct  2 03:47:28 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct  2 03:47:28 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct  2 03:47:28 np0005465604 nervous_williams[75581]: {
Oct  2 03:47:28 np0005465604 nervous_williams[75581]:    "mgrmap_epoch": 7,
Oct  2 03:47:28 np0005465604 nervous_williams[75581]:    "initialized": true
Oct  2 03:47:28 np0005465604 nervous_williams[75581]: }
Oct  2 03:47:28 np0005465604 systemd[1]: libpod-167c93227d90e6d306b2457172f9c87a21267bd3d8ed1875e498763b2541149c.scope: Deactivated successfully.
Oct  2 03:47:28 np0005465604 podman[75565]: 2025-10-02 07:47:28.826682856 +0000 UTC m=+19.809379878 container died 167c93227d90e6d306b2457172f9c87a21267bd3d8ed1875e498763b2541149c (image=quay.io/ceph/ceph:v18, name=nervous_williams, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: Found migration_current of "None". Setting to last migration.
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/mirror_snapshot_schedule"}]: dispatch
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qlmhsi/trash_purge_schedule"}]: dispatch
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4220e32a9800f7c0fa84af669c19275f18b9f840269d97f017f00d24ec854c19-merged.mount: Deactivated successfully.
Oct  2 03:47:28 np0005465604 podman[75565]: 2025-10-02 07:47:28.866849937 +0000 UTC m=+19.849546929 container remove 167c93227d90e6d306b2457172f9c87a21267bd3d8ed1875e498763b2541149c (image=quay.io/ceph/ceph:v18, name=nervous_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 03:47:28 np0005465604 systemd[1]: libpod-conmon-167c93227d90e6d306b2457172f9c87a21267bd3d8ed1875e498763b2541149c.scope: Deactivated successfully.
Oct  2 03:47:28 np0005465604 podman[75741]: 2025-10-02 07:47:28.931841071 +0000 UTC m=+0.042046488 container create 74d85943e68793ba216e46177ad3557736fc82c0ee5c3ab0549ba544e961a43f (image=quay.io/ceph/ceph:v18, name=mystifying_brattain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 03:47:28 np0005465604 systemd[1]: Started libpod-conmon-74d85943e68793ba216e46177ad3557736fc82c0ee5c3ab0549ba544e961a43f.scope.
Oct  2 03:47:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f170c37f8c67ef1814e06936cbe19f19a050586c417dcf83d027d0f945634640/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f170c37f8c67ef1814e06936cbe19f19a050586c417dcf83d027d0f945634640/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f170c37f8c67ef1814e06936cbe19f19a050586c417dcf83d027d0f945634640/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:29 np0005465604 podman[75741]: 2025-10-02 07:47:28.914345378 +0000 UTC m=+0.024550815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:29 np0005465604 podman[75741]: 2025-10-02 07:47:29.019393659 +0000 UTC m=+0.129599096 container init 74d85943e68793ba216e46177ad3557736fc82c0ee5c3ab0549ba544e961a43f (image=quay.io/ceph/ceph:v18, name=mystifying_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 03:47:29 np0005465604 podman[75741]: 2025-10-02 07:47:29.025617645 +0000 UTC m=+0.135823072 container start 74d85943e68793ba216e46177ad3557736fc82c0ee5c3ab0549ba544e961a43f (image=quay.io/ceph/ceph:v18, name=mystifying_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 03:47:29 np0005465604 podman[75741]: 2025-10-02 07:47:29.028673376 +0000 UTC m=+0.138878843 container attach 74d85943e68793ba216e46177ad3557736fc82c0ee5c3ab0549ba544e961a43f (image=quay.io/ceph/ceph:v18, name=mystifying_brattain, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct  2 03:47:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  2 03:47:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  2 03:47:29 np0005465604 systemd[1]: libpod-74d85943e68793ba216e46177ad3557736fc82c0ee5c3ab0549ba544e961a43f.scope: Deactivated successfully.
Oct  2 03:47:29 np0005465604 conmon[75757]: conmon 74d85943e68793ba216e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74d85943e68793ba216e46177ad3557736fc82c0ee5c3ab0549ba544e961a43f.scope/container/memory.events
Oct  2 03:47:29 np0005465604 podman[75741]: 2025-10-02 07:47:29.579318152 +0000 UTC m=+0.689523569 container died 74d85943e68793ba216e46177ad3557736fc82c0ee5c3ab0549ba544e961a43f (image=quay.io/ceph/ceph:v18, name=mystifying_brattain, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 03:47:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f170c37f8c67ef1814e06936cbe19f19a050586c417dcf83d027d0f945634640-merged.mount: Deactivated successfully.
Oct  2 03:47:29 np0005465604 podman[75741]: 2025-10-02 07:47:29.614456073 +0000 UTC m=+0.724661470 container remove 74d85943e68793ba216e46177ad3557736fc82c0ee5c3ab0549ba544e961a43f (image=quay.io/ceph/ceph:v18, name=mystifying_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:29 np0005465604 systemd[1]: libpod-conmon-74d85943e68793ba216e46177ad3557736fc82c0ee5c3ab0549ba544e961a43f.scope: Deactivated successfully.
Oct  2 03:47:29 np0005465604 podman[75794]: 2025-10-02 07:47:29.684277621 +0000 UTC m=+0.050337477 container create c340b7b4082f716ef070f4814c09583cf3083f4a36ef19540c61425df579c260 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 03:47:29 np0005465604 systemd[1]: Started libpod-conmon-c340b7b4082f716ef070f4814c09583cf3083f4a36ef19540c61425df579c260.scope.
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: [cephadm INFO cherrypy.error] [02/Oct/2025:07:47:29] ENGINE Bus STARTING
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : [02/Oct/2025:07:47:29] ENGINE Bus STARTING
Oct  2 03:47:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9d7154e22900567c1854147b7a8e1993b98ca3948ead617cef701d95cfc66b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9d7154e22900567c1854147b7a8e1993b98ca3948ead617cef701d95cfc66b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9d7154e22900567c1854147b7a8e1993b98ca3948ead617cef701d95cfc66b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:29 np0005465604 podman[75794]: 2025-10-02 07:47:29.666006265 +0000 UTC m=+0.032066121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:29 np0005465604 podman[75794]: 2025-10-02 07:47:29.763292784 +0000 UTC m=+0.129352720 container init c340b7b4082f716ef070f4814c09583cf3083f4a36ef19540c61425df579c260 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 03:47:29 np0005465604 podman[75794]: 2025-10-02 07:47:29.775288062 +0000 UTC m=+0.141347908 container start c340b7b4082f716ef070f4814c09583cf3083f4a36ef19540c61425df579c260 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  2 03:47:29 np0005465604 podman[75794]: 2025-10-02 07:47:29.778656303 +0000 UTC m=+0.144716249 container attach c340b7b4082f716ef070f4814c09583cf3083f4a36ef19540c61425df579c260 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: [cephadm INFO cherrypy.error] [02/Oct/2025:07:47:29] ENGINE Serving on http://192.168.122.100:8765
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : [02/Oct/2025:07:47:29] ENGINE Serving on http://192.168.122.100:8765
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: [cephadm INFO cherrypy.error] [02/Oct/2025:07:47:29] ENGINE Serving on https://192.168.122.100:7150
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : [02/Oct/2025:07:47:29] ENGINE Serving on https://192.168.122.100:7150
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: [cephadm INFO cherrypy.error] [02/Oct/2025:07:47:29] ENGINE Bus STARTED
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : [02/Oct/2025:07:47:29] ENGINE Bus STARTED
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: [cephadm INFO cherrypy.error] [02/Oct/2025:07:47:29] ENGINE Client ('192.168.122.100', 43372) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  2 03:47:29 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : [02/Oct/2025:07:47:29] ENGINE Client ('192.168.122.100', 43372) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  2 03:47:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  2 03:47:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Set ssh ssh_user
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Set ssh ssh_config
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct  2 03:47:30 np0005465604 hopeful_murdock[75810]: ssh user set to ceph-admin. sudo will be used
Oct  2 03:47:30 np0005465604 systemd[1]: libpod-c340b7b4082f716ef070f4814c09583cf3083f4a36ef19540c61425df579c260.scope: Deactivated successfully.
Oct  2 03:47:30 np0005465604 podman[75794]: 2025-10-02 07:47:30.294093445 +0000 UTC m=+0.660153291 container died c340b7b4082f716ef070f4814c09583cf3083f4a36ef19540c61425df579c260 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 03:47:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7d9d7154e22900567c1854147b7a8e1993b98ca3948ead617cef701d95cfc66b-merged.mount: Deactivated successfully.
Oct  2 03:47:30 np0005465604 podman[75794]: 2025-10-02 07:47:30.335882145 +0000 UTC m=+0.701942001 container remove c340b7b4082f716ef070f4814c09583cf3083f4a36ef19540c61425df579c260 (image=quay.io/ceph/ceph:v18, name=hopeful_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:30 np0005465604 systemd[1]: libpod-conmon-c340b7b4082f716ef070f4814c09583cf3083f4a36ef19540c61425df579c260.scope: Deactivated successfully.
Oct  2 03:47:30 np0005465604 podman[75872]: 2025-10-02 07:47:30.408674732 +0000 UTC m=+0.050959825 container create c2c0c1d57123663772c83241fc93b8d11445189d67df4ef62df5d5fe735edebb (image=quay.io/ceph/ceph:v18, name=silly_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:47:30 np0005465604 systemd[1]: Started libpod-conmon-c2c0c1d57123663772c83241fc93b8d11445189d67df4ef62df5d5fe735edebb.scope.
Oct  2 03:47:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8096410745ebf554ead553b21865ef94b036c76ca4ccbb30e3c3b41e7d4f9431/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8096410745ebf554ead553b21865ef94b036c76ca4ccbb30e3c3b41e7d4f9431/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8096410745ebf554ead553b21865ef94b036c76ca4ccbb30e3c3b41e7d4f9431/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:30 np0005465604 podman[75872]: 2025-10-02 07:47:30.387065015 +0000 UTC m=+0.029350098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8096410745ebf554ead553b21865ef94b036c76ca4ccbb30e3c3b41e7d4f9431/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8096410745ebf554ead553b21865ef94b036c76ca4ccbb30e3c3b41e7d4f9431/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:30 np0005465604 podman[75872]: 2025-10-02 07:47:30.495184789 +0000 UTC m=+0.137469862 container init c2c0c1d57123663772c83241fc93b8d11445189d67df4ef62df5d5fe735edebb (image=quay.io/ceph/ceph:v18, name=silly_hypatia, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 03:47:30 np0005465604 podman[75872]: 2025-10-02 07:47:30.505559 +0000 UTC m=+0.147844053 container start c2c0c1d57123663772c83241fc93b8d11445189d67df4ef62df5d5fe735edebb (image=quay.io/ceph/ceph:v18, name=silly_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 03:47:30 np0005465604 podman[75872]: 2025-10-02 07:47:30.50894738 +0000 UTC m=+0.151232473 container attach c2c0c1d57123663772c83241fc93b8d11445189d67df4ef62df5d5fe735edebb (image=quay.io/ceph/ceph:v18, name=silly_hypatia, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.qlmhsi(active, since 2s)
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: [02/Oct/2025:07:47:29] ENGINE Bus STARTING
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: [02/Oct/2025:07:47:29] ENGINE Serving on http://192.168.122.100:8765
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: [02/Oct/2025:07:47:29] ENGINE Serving on https://192.168.122.100:7150
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: [02/Oct/2025:07:47:29] ENGINE Bus STARTED
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: [02/Oct/2025:07:47:29] ENGINE Client ('192.168.122.100', 43372) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: Set ssh ssh_user
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct  2 03:47:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Set ssh ssh_identity_key
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Set ssh private key
Oct  2 03:47:30 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Set ssh private key
Oct  2 03:47:31 np0005465604 systemd[1]: libpod-c2c0c1d57123663772c83241fc93b8d11445189d67df4ef62df5d5fe735edebb.scope: Deactivated successfully.
Oct  2 03:47:31 np0005465604 podman[75872]: 2025-10-02 07:47:31.012966592 +0000 UTC m=+0.655251645 container died c2c0c1d57123663772c83241fc93b8d11445189d67df4ef62df5d5fe735edebb (image=quay.io/ceph/ceph:v18, name=silly_hypatia, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8096410745ebf554ead553b21865ef94b036c76ca4ccbb30e3c3b41e7d4f9431-merged.mount: Deactivated successfully.
Oct  2 03:47:31 np0005465604 podman[75872]: 2025-10-02 07:47:31.05472078 +0000 UTC m=+0.697005833 container remove c2c0c1d57123663772c83241fc93b8d11445189d67df4ef62df5d5fe735edebb (image=quay.io/ceph/ceph:v18, name=silly_hypatia, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 03:47:31 np0005465604 systemd[1]: libpod-conmon-c2c0c1d57123663772c83241fc93b8d11445189d67df4ef62df5d5fe735edebb.scope: Deactivated successfully.
Oct  2 03:47:31 np0005465604 podman[75925]: 2025-10-02 07:47:31.125566549 +0000 UTC m=+0.047505692 container create fd20cc98812dfd56ccf91f08e0f01a7ee923c04f37ade4193d901a0171c37a0f (image=quay.io/ceph/ceph:v18, name=objective_antonelli, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 03:47:31 np0005465604 systemd[1]: Started libpod-conmon-fd20cc98812dfd56ccf91f08e0f01a7ee923c04f37ade4193d901a0171c37a0f.scope.
Oct  2 03:47:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55dbc1a622214b0361b0b39f84cfcd0d3f0f0ea7dd449d3faf247cb8f35ddaad/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55dbc1a622214b0361b0b39f84cfcd0d3f0f0ea7dd449d3faf247cb8f35ddaad/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55dbc1a622214b0361b0b39f84cfcd0d3f0f0ea7dd449d3faf247cb8f35ddaad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55dbc1a622214b0361b0b39f84cfcd0d3f0f0ea7dd449d3faf247cb8f35ddaad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55dbc1a622214b0361b0b39f84cfcd0d3f0f0ea7dd449d3faf247cb8f35ddaad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:31 np0005465604 podman[75925]: 2025-10-02 07:47:31.185815841 +0000 UTC m=+0.107755004 container init fd20cc98812dfd56ccf91f08e0f01a7ee923c04f37ade4193d901a0171c37a0f (image=quay.io/ceph/ceph:v18, name=objective_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:31 np0005465604 podman[75925]: 2025-10-02 07:47:31.191454479 +0000 UTC m=+0.113393642 container start fd20cc98812dfd56ccf91f08e0f01a7ee923c04f37ade4193d901a0171c37a0f (image=quay.io/ceph/ceph:v18, name=objective_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:47:31 np0005465604 podman[75925]: 2025-10-02 07:47:31.195726078 +0000 UTC m=+0.117665211 container attach fd20cc98812dfd56ccf91f08e0f01a7ee923c04f37ade4193d901a0171c37a0f (image=quay.io/ceph/ceph:v18, name=objective_antonelli, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 03:47:31 np0005465604 podman[75925]: 2025-10-02 07:47:31.110119917 +0000 UTC m=+0.032059080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019925487 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:47:31 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct  2 03:47:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:31 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct  2 03:47:31 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct  2 03:47:31 np0005465604 systemd[1]: libpod-fd20cc98812dfd56ccf91f08e0f01a7ee923c04f37ade4193d901a0171c37a0f.scope: Deactivated successfully.
Oct  2 03:47:31 np0005465604 podman[75925]: 2025-10-02 07:47:31.700629815 +0000 UTC m=+0.622568968 container died fd20cc98812dfd56ccf91f08e0f01a7ee923c04f37ade4193d901a0171c37a0f (image=quay.io/ceph/ceph:v18, name=objective_antonelli, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-55dbc1a622214b0361b0b39f84cfcd0d3f0f0ea7dd449d3faf247cb8f35ddaad-merged.mount: Deactivated successfully.
Oct  2 03:47:31 np0005465604 podman[75925]: 2025-10-02 07:47:31.740393104 +0000 UTC m=+0.662332247 container remove fd20cc98812dfd56ccf91f08e0f01a7ee923c04f37ade4193d901a0171c37a0f (image=quay.io/ceph/ceph:v18, name=objective_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 03:47:31 np0005465604 systemd[1]: libpod-conmon-fd20cc98812dfd56ccf91f08e0f01a7ee923c04f37ade4193d901a0171c37a0f.scope: Deactivated successfully.
Oct  2 03:47:31 np0005465604 ceph-mgr[74774]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  2 03:47:31 np0005465604 podman[75977]: 2025-10-02 07:47:31.837467157 +0000 UTC m=+0.067529291 container create ffd57a0639d26ccdb0c044b9b97c0f817fa24ea5ad2a8d9825592db6169373c7 (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 03:47:31 np0005465604 systemd[1]: Started libpod-conmon-ffd57a0639d26ccdb0c044b9b97c0f817fa24ea5ad2a8d9825592db6169373c7.scope.
Oct  2 03:47:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8f1e9063c8431d9e0ab2a579a5e5de4d330b92f9c2f87086d89da2a66396d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8f1e9063c8431d9e0ab2a579a5e5de4d330b92f9c2f87086d89da2a66396d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8f1e9063c8431d9e0ab2a579a5e5de4d330b92f9c2f87086d89da2a66396d1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:31 np0005465604 podman[75977]: 2025-10-02 07:47:31.80847074 +0000 UTC m=+0.038532934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:31 np0005465604 podman[75977]: 2025-10-02 07:47:31.91817132 +0000 UTC m=+0.148233454 container init ffd57a0639d26ccdb0c044b9b97c0f817fa24ea5ad2a8d9825592db6169373c7 (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct  2 03:47:31 np0005465604 podman[75977]: 2025-10-02 07:47:31.928050865 +0000 UTC m=+0.158112989 container start ffd57a0639d26ccdb0c044b9b97c0f817fa24ea5ad2a8d9825592db6169373c7 (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 03:47:31 np0005465604 podman[75977]: 2025-10-02 07:47:31.931826099 +0000 UTC m=+0.161888283 container attach ffd57a0639d26ccdb0c044b9b97c0f817fa24ea5ad2a8d9825592db6169373c7 (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 03:47:31 np0005465604 ceph-mon[74477]: Set ssh ssh_config
Oct  2 03:47:31 np0005465604 ceph-mon[74477]: ssh user set to ceph-admin. sudo will be used
Oct  2 03:47:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:31 np0005465604 ceph-mon[74477]: Set ssh ssh_identity_key
Oct  2 03:47:31 np0005465604 ceph-mon[74477]: Set ssh private key
Oct  2 03:47:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:32 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:32 np0005465604 suspicious_khorana[75993]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDzS47Wd238XmWUyW4PNNR0vJ/WuaYk9QGm4cmLap5WC7/cK2R+Hq8KUHTF3ej8DTR3gRWIhbZrewf1Shn8j4p5daqo5xbZWXtiBdi/2kF06cdWNpl6/NyKX07CUpQATaYHF6nTCrQOZJGsgVNK++mGn/ff/fYNC8lO081KjqIRw2yMTkMOOhn+ATgMWDUPk2PFxGgMIh7YXquFdRBf0aBECmbEr4DNFhKA3AOCVj/GLDNC0dm6pu9AgF48by9Iva4mhcIeDV/isbXm014fhjSQO6CAUBzFzhY2t12oLfy27pRZiJRfV9/oY7o2i9ncDx92UPj7zNJX1Qo9m8t75us1AnknTjEIC9/FRD7wO46ECaNTUqSRkxYY+2lDHtQUCpYm2eQM2cHS+oHEXJl0ff/qQEHEtTE8tqqOj0k2sLw04unITSq3QrgTIdiXLXKs8mNiWejV4U7BINAABjk182dUokpBuOjrYF55OFaz9onIGUvsxzE1xnaI7nk11lQ56M8= zuul@controller
Oct  2 03:47:32 np0005465604 systemd[1]: libpod-ffd57a0639d26ccdb0c044b9b97c0f817fa24ea5ad2a8d9825592db6169373c7.scope: Deactivated successfully.
Oct  2 03:47:32 np0005465604 podman[75977]: 2025-10-02 07:47:32.466877368 +0000 UTC m=+0.696939542 container died ffd57a0639d26ccdb0c044b9b97c0f817fa24ea5ad2a8d9825592db6169373c7 (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 03:47:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1d8f1e9063c8431d9e0ab2a579a5e5de4d330b92f9c2f87086d89da2a66396d1-merged.mount: Deactivated successfully.
Oct  2 03:47:32 np0005465604 podman[75977]: 2025-10-02 07:47:32.508512353 +0000 UTC m=+0.738574487 container remove ffd57a0639d26ccdb0c044b9b97c0f817fa24ea5ad2a8d9825592db6169373c7 (image=quay.io/ceph/ceph:v18, name=suspicious_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 03:47:32 np0005465604 systemd[1]: libpod-conmon-ffd57a0639d26ccdb0c044b9b97c0f817fa24ea5ad2a8d9825592db6169373c7.scope: Deactivated successfully.
Oct  2 03:47:32 np0005465604 podman[76031]: 2025-10-02 07:47:32.569760384 +0000 UTC m=+0.042077439 container create d13e414f994d8e948513a425c7e5dd79371c01abe6ea25c6440c942734fb0f79 (image=quay.io/ceph/ceph:v18, name=stupefied_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:32 np0005465604 systemd[1]: Started libpod-conmon-d13e414f994d8e948513a425c7e5dd79371c01abe6ea25c6440c942734fb0f79.scope.
Oct  2 03:47:32 np0005465604 podman[76031]: 2025-10-02 07:47:32.548949052 +0000 UTC m=+0.021266167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c045fc7172011ccd7969262823ba72d68311ee98444135141bfbbffff0ae6aca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c045fc7172011ccd7969262823ba72d68311ee98444135141bfbbffff0ae6aca/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c045fc7172011ccd7969262823ba72d68311ee98444135141bfbbffff0ae6aca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:32 np0005465604 podman[76031]: 2025-10-02 07:47:32.682102234 +0000 UTC m=+0.154419359 container init d13e414f994d8e948513a425c7e5dd79371c01abe6ea25c6440c942734fb0f79 (image=quay.io/ceph/ceph:v18, name=stupefied_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 03:47:32 np0005465604 podman[76031]: 2025-10-02 07:47:32.687540446 +0000 UTC m=+0.159857501 container start d13e414f994d8e948513a425c7e5dd79371c01abe6ea25c6440c942734fb0f79 (image=quay.io/ceph/ceph:v18, name=stupefied_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:32 np0005465604 podman[76031]: 2025-10-02 07:47:32.690903837 +0000 UTC m=+0.163220932 container attach d13e414f994d8e948513a425c7e5dd79371c01abe6ea25c6440c942734fb0f79 (image=quay.io/ceph/ceph:v18, name=stupefied_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:33 np0005465604 ceph-mon[74477]: Set ssh ssh_identity_pub
Oct  2 03:47:33 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:33 np0005465604 systemd[1]: Created slice User Slice of UID 42477.
Oct  2 03:47:33 np0005465604 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  2 03:47:33 np0005465604 systemd-logind[787]: New session 22 of user ceph-admin.
Oct  2 03:47:33 np0005465604 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  2 03:47:33 np0005465604 systemd[1]: Starting User Manager for UID 42477...
Oct  2 03:47:33 np0005465604 systemd[76077]: Queued start job for default target Main User Target.
Oct  2 03:47:33 np0005465604 systemd[76077]: Created slice User Application Slice.
Oct  2 03:47:33 np0005465604 systemd[76077]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 03:47:33 np0005465604 systemd[76077]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 03:47:33 np0005465604 systemd[76077]: Reached target Paths.
Oct  2 03:47:33 np0005465604 systemd[76077]: Reached target Timers.
Oct  2 03:47:33 np0005465604 systemd[76077]: Starting D-Bus User Message Bus Socket...
Oct  2 03:47:33 np0005465604 systemd[76077]: Starting Create User's Volatile Files and Directories...
Oct  2 03:47:33 np0005465604 systemd[76077]: Listening on D-Bus User Message Bus Socket.
Oct  2 03:47:33 np0005465604 systemd-logind[787]: New session 24 of user ceph-admin.
Oct  2 03:47:33 np0005465604 systemd[76077]: Finished Create User's Volatile Files and Directories.
Oct  2 03:47:33 np0005465604 systemd[76077]: Reached target Sockets.
Oct  2 03:47:33 np0005465604 systemd[76077]: Reached target Basic System.
Oct  2 03:47:33 np0005465604 systemd[76077]: Reached target Main User Target.
Oct  2 03:47:33 np0005465604 systemd[76077]: Startup finished in 128ms.
Oct  2 03:47:33 np0005465604 systemd[1]: Started User Manager for UID 42477.
Oct  2 03:47:33 np0005465604 systemd[1]: Started Session 22 of User ceph-admin.
Oct  2 03:47:33 np0005465604 systemd[1]: Started Session 24 of User ceph-admin.
Oct  2 03:47:33 np0005465604 ceph-mgr[74774]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  2 03:47:33 np0005465604 systemd-logind[787]: New session 25 of user ceph-admin.
Oct  2 03:47:34 np0005465604 systemd[1]: Started Session 25 of User ceph-admin.
Oct  2 03:47:34 np0005465604 systemd-logind[787]: New session 26 of user ceph-admin.
Oct  2 03:47:34 np0005465604 systemd[1]: Started Session 26 of User ceph-admin.
Oct  2 03:47:34 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct  2 03:47:34 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct  2 03:47:34 np0005465604 systemd-logind[787]: New session 27 of user ceph-admin.
Oct  2 03:47:34 np0005465604 systemd[1]: Started Session 27 of User ceph-admin.
Oct  2 03:47:35 np0005465604 systemd-logind[787]: New session 28 of user ceph-admin.
Oct  2 03:47:35 np0005465604 systemd[1]: Started Session 28 of User ceph-admin.
Oct  2 03:47:35 np0005465604 ceph-mon[74477]: Deploying cephadm binary to compute-0
Oct  2 03:47:35 np0005465604 systemd-logind[787]: New session 29 of user ceph-admin.
Oct  2 03:47:35 np0005465604 systemd[1]: Started Session 29 of User ceph-admin.
Oct  2 03:47:35 np0005465604 ceph-mgr[74774]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  2 03:47:36 np0005465604 systemd-logind[787]: New session 30 of user ceph-admin.
Oct  2 03:47:36 np0005465604 systemd[1]: Started Session 30 of User ceph-admin.
Oct  2 03:47:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053098 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:47:36 np0005465604 systemd-logind[787]: New session 31 of user ceph-admin.
Oct  2 03:47:36 np0005465604 systemd[1]: Started Session 31 of User ceph-admin.
Oct  2 03:47:37 np0005465604 systemd-logind[787]: New session 32 of user ceph-admin.
Oct  2 03:47:37 np0005465604 systemd[1]: Started Session 32 of User ceph-admin.
Oct  2 03:47:37 np0005465604 systemd-logind[787]: New session 33 of user ceph-admin.
Oct  2 03:47:37 np0005465604 systemd[1]: Started Session 33 of User ceph-admin.
Oct  2 03:47:37 np0005465604 ceph-mgr[74774]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  2 03:47:38 np0005465604 systemd-logind[787]: New session 34 of user ceph-admin.
Oct  2 03:47:38 np0005465604 systemd[1]: Started Session 34 of User ceph-admin.
Oct  2 03:47:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  2 03:47:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:38 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Added host compute-0
Oct  2 03:47:38 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  2 03:47:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  2 03:47:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  2 03:47:38 np0005465604 stupefied_dubinsky[76047]: Added host 'compute-0' with addr '192.168.122.100'
Oct  2 03:47:38 np0005465604 systemd[1]: libpod-d13e414f994d8e948513a425c7e5dd79371c01abe6ea25c6440c942734fb0f79.scope: Deactivated successfully.
Oct  2 03:47:38 np0005465604 podman[76031]: 2025-10-02 07:47:38.653973049 +0000 UTC m=+6.126290144 container died d13e414f994d8e948513a425c7e5dd79371c01abe6ea25c6440c942734fb0f79 (image=quay.io/ceph/ceph:v18, name=stupefied_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 03:47:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c045fc7172011ccd7969262823ba72d68311ee98444135141bfbbffff0ae6aca-merged.mount: Deactivated successfully.
Oct  2 03:47:38 np0005465604 podman[76031]: 2025-10-02 07:47:38.715414286 +0000 UTC m=+6.187731361 container remove d13e414f994d8e948513a425c7e5dd79371c01abe6ea25c6440c942734fb0f79 (image=quay.io/ceph/ceph:v18, name=stupefied_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:38 np0005465604 systemd[1]: libpod-conmon-d13e414f994d8e948513a425c7e5dd79371c01abe6ea25c6440c942734fb0f79.scope: Deactivated successfully.
Oct  2 03:47:38 np0005465604 podman[76723]: 2025-10-02 07:47:38.795915844 +0000 UTC m=+0.055447870 container create afdb0912170d9a51d3e88ece3883bccd3ca6db7f4421c6fcb8c63e1b97a282d3 (image=quay.io/ceph/ceph:v18, name=infallible_elgamal, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 03:47:38 np0005465604 systemd[1]: Started libpod-conmon-afdb0912170d9a51d3e88ece3883bccd3ca6db7f4421c6fcb8c63e1b97a282d3.scope.
Oct  2 03:47:38 np0005465604 podman[76723]: 2025-10-02 07:47:38.769876475 +0000 UTC m=+0.029408551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe69fdacfe54253353002eac8ffdfe00bfa0b70c64ad7fd1244aec07c40d8ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe69fdacfe54253353002eac8ffdfe00bfa0b70c64ad7fd1244aec07c40d8ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe69fdacfe54253353002eac8ffdfe00bfa0b70c64ad7fd1244aec07c40d8ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:38 np0005465604 podman[76723]: 2025-10-02 07:47:38.892403299 +0000 UTC m=+0.151935315 container init afdb0912170d9a51d3e88ece3883bccd3ca6db7f4421c6fcb8c63e1b97a282d3 (image=quay.io/ceph/ceph:v18, name=infallible_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 03:47:38 np0005465604 podman[76723]: 2025-10-02 07:47:38.900231353 +0000 UTC m=+0.159763379 container start afdb0912170d9a51d3e88ece3883bccd3ca6db7f4421c6fcb8c63e1b97a282d3 (image=quay.io/ceph/ceph:v18, name=infallible_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 03:47:38 np0005465604 podman[76723]: 2025-10-02 07:47:38.904298215 +0000 UTC m=+0.163830341 container attach afdb0912170d9a51d3e88ece3883bccd3ca6db7f4421c6fcb8c63e1b97a282d3 (image=quay.io/ceph/ceph:v18, name=infallible_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:39 np0005465604 podman[76843]: 2025-10-02 07:47:39.21585494 +0000 UTC m=+0.043546462 container create 4d14ad0a9b0a3b34650a73961bb184ddcb1305df911ec845c653befd863c51c6 (image=quay.io/ceph/ceph:v18, name=jovial_mcnulty, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 03:47:39 np0005465604 systemd[1]: Started libpod-conmon-4d14ad0a9b0a3b34650a73961bb184ddcb1305df911ec845c653befd863c51c6.scope.
Oct  2 03:47:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:39 np0005465604 podman[76843]: 2025-10-02 07:47:39.194532403 +0000 UTC m=+0.022223955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:39 np0005465604 podman[76843]: 2025-10-02 07:47:39.297114621 +0000 UTC m=+0.124806193 container init 4d14ad0a9b0a3b34650a73961bb184ddcb1305df911ec845c653befd863c51c6 (image=quay.io/ceph/ceph:v18, name=jovial_mcnulty, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 03:47:39 np0005465604 podman[76843]: 2025-10-02 07:47:39.304303706 +0000 UTC m=+0.131995228 container start 4d14ad0a9b0a3b34650a73961bb184ddcb1305df911ec845c653befd863c51c6 (image=quay.io/ceph/ceph:v18, name=jovial_mcnulty, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 03:47:39 np0005465604 podman[76843]: 2025-10-02 07:47:39.307349077 +0000 UTC m=+0.135040639 container attach 4d14ad0a9b0a3b34650a73961bb184ddcb1305df911ec845c653befd863c51c6 (image=quay.io/ceph/ceph:v18, name=jovial_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:39 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:39 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct  2 03:47:39 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct  2 03:47:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  2 03:47:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:39 np0005465604 infallible_elgamal[76782]: Scheduled mon update...
Oct  2 03:47:39 np0005465604 systemd[1]: libpod-afdb0912170d9a51d3e88ece3883bccd3ca6db7f4421c6fcb8c63e1b97a282d3.scope: Deactivated successfully.
Oct  2 03:47:39 np0005465604 podman[76723]: 2025-10-02 07:47:39.430729846 +0000 UTC m=+0.690261902 container died afdb0912170d9a51d3e88ece3883bccd3ca6db7f4421c6fcb8c63e1b97a282d3 (image=quay.io/ceph/ceph:v18, name=infallible_elgamal, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-cbe69fdacfe54253353002eac8ffdfe00bfa0b70c64ad7fd1244aec07c40d8ed-merged.mount: Deactivated successfully.
Oct  2 03:47:39 np0005465604 podman[76723]: 2025-10-02 07:47:39.476868626 +0000 UTC m=+0.736400652 container remove afdb0912170d9a51d3e88ece3883bccd3ca6db7f4421c6fcb8c63e1b97a282d3 (image=quay.io/ceph/ceph:v18, name=infallible_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:39 np0005465604 systemd[1]: libpod-conmon-afdb0912170d9a51d3e88ece3883bccd3ca6db7f4421c6fcb8c63e1b97a282d3.scope: Deactivated successfully.
Oct  2 03:47:39 np0005465604 podman[76898]: 2025-10-02 07:47:39.552860768 +0000 UTC m=+0.048333556 container create 76131dde938d489a070e022710d37f306d4e58c9c0932a4c6a8cfb30113aee6b (image=quay.io/ceph/ceph:v18, name=epic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 03:47:39 np0005465604 systemd[1]: Started libpod-conmon-76131dde938d489a070e022710d37f306d4e58c9c0932a4c6a8cfb30113aee6b.scope.
Oct  2 03:47:39 np0005465604 jovial_mcnulty[76878]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct  2 03:47:39 np0005465604 podman[76898]: 2025-10-02 07:47:39.530716866 +0000 UTC m=+0.026189664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:39 np0005465604 systemd[1]: libpod-4d14ad0a9b0a3b34650a73961bb184ddcb1305df911ec845c653befd863c51c6.scope: Deactivated successfully.
Oct  2 03:47:39 np0005465604 podman[76843]: 2025-10-02 07:47:39.627904263 +0000 UTC m=+0.455595785 container died 4d14ad0a9b0a3b34650a73961bb184ddcb1305df911ec845c653befd863c51c6 (image=quay.io/ceph/ceph:v18, name=jovial_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:39 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:39 np0005465604 ceph-mon[74477]: Added host compute-0
Oct  2 03:47:39 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683fb27faca44b49c0f64a5e9449e27712146a95365c224f08e89e2fc9d12618/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683fb27faca44b49c0f64a5e9449e27712146a95365c224f08e89e2fc9d12618/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683fb27faca44b49c0f64a5e9449e27712146a95365c224f08e89e2fc9d12618/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:39 np0005465604 podman[76898]: 2025-10-02 07:47:39.649551399 +0000 UTC m=+0.145024187 container init 76131dde938d489a070e022710d37f306d4e58c9c0932a4c6a8cfb30113aee6b (image=quay.io/ceph/ceph:v18, name=epic_einstein, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:47:39 np0005465604 podman[76898]: 2025-10-02 07:47:39.656283891 +0000 UTC m=+0.151756679 container start 76131dde938d489a070e022710d37f306d4e58c9c0932a4c6a8cfb30113aee6b (image=quay.io/ceph/ceph:v18, name=epic_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:39 np0005465604 podman[76898]: 2025-10-02 07:47:39.659637051 +0000 UTC m=+0.155109839 container attach 76131dde938d489a070e022710d37f306d4e58c9c0932a4c6a8cfb30113aee6b (image=quay.io/ceph/ceph:v18, name=epic_einstein, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-877a442b0d34af229755eb2d3bc8239fbd7828014dfbf4e5fa5fb8c3bfebe16b-merged.mount: Deactivated successfully.
Oct  2 03:47:39 np0005465604 podman[76843]: 2025-10-02 07:47:39.687837864 +0000 UTC m=+0.515529396 container remove 4d14ad0a9b0a3b34650a73961bb184ddcb1305df911ec845c653befd863c51c6 (image=quay.io/ceph/ceph:v18, name=jovial_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:39 np0005465604 systemd[1]: libpod-conmon-4d14ad0a9b0a3b34650a73961bb184ddcb1305df911ec845c653befd863c51c6.scope: Deactivated successfully.
Oct  2 03:47:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct  2 03:47:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:39 np0005465604 ceph-mgr[74774]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  2 03:47:40 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:40 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct  2 03:47:40 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct  2 03:47:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  2 03:47:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:40 np0005465604 epic_einstein[76915]: Scheduled mgr update...
Oct  2 03:47:40 np0005465604 systemd[1]: libpod-76131dde938d489a070e022710d37f306d4e58c9c0932a4c6a8cfb30113aee6b.scope: Deactivated successfully.
Oct  2 03:47:40 np0005465604 podman[76898]: 2025-10-02 07:47:40.190533327 +0000 UTC m=+0.686006105 container died 76131dde938d489a070e022710d37f306d4e58c9c0932a4c6a8cfb30113aee6b (image=quay.io/ceph/ceph:v18, name=epic_einstein, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 03:47:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-683fb27faca44b49c0f64a5e9449e27712146a95365c224f08e89e2fc9d12618-merged.mount: Deactivated successfully.
Oct  2 03:47:40 np0005465604 podman[76898]: 2025-10-02 07:47:40.235352707 +0000 UTC m=+0.730825485 container remove 76131dde938d489a070e022710d37f306d4e58c9c0932a4c6a8cfb30113aee6b (image=quay.io/ceph/ceph:v18, name=epic_einstein, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:40 np0005465604 systemd[1]: libpod-conmon-76131dde938d489a070e022710d37f306d4e58c9c0932a4c6a8cfb30113aee6b.scope: Deactivated successfully.
Oct  2 03:47:40 np0005465604 podman[77082]: 2025-10-02 07:47:40.298549057 +0000 UTC m=+0.042850283 container create 7fe1ca795c996d2a018dd96fd020351ff0897aa79fcdcfecbea15ece5f498019 (image=quay.io/ceph/ceph:v18, name=compassionate_lehmann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 03:47:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:47:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:40 np0005465604 systemd[1]: Started libpod-conmon-7fe1ca795c996d2a018dd96fd020351ff0897aa79fcdcfecbea15ece5f498019.scope.
Oct  2 03:47:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1ff2edd8705a5beb02fc9b81ee7bd87bc1e954a791c2c93da7c59e0a54cd4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1ff2edd8705a5beb02fc9b81ee7bd87bc1e954a791c2c93da7c59e0a54cd4c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1ff2edd8705a5beb02fc9b81ee7bd87bc1e954a791c2c93da7c59e0a54cd4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:40 np0005465604 podman[77082]: 2025-10-02 07:47:40.280431255 +0000 UTC m=+0.024732511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:40 np0005465604 podman[77082]: 2025-10-02 07:47:40.384480526 +0000 UTC m=+0.128781742 container init 7fe1ca795c996d2a018dd96fd020351ff0897aa79fcdcfecbea15ece5f498019 (image=quay.io/ceph/ceph:v18, name=compassionate_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:40 np0005465604 podman[77082]: 2025-10-02 07:47:40.393478295 +0000 UTC m=+0.137779501 container start 7fe1ca795c996d2a018dd96fd020351ff0897aa79fcdcfecbea15ece5f498019 (image=quay.io/ceph/ceph:v18, name=compassionate_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:40 np0005465604 podman[77082]: 2025-10-02 07:47:40.396389463 +0000 UTC m=+0.140690669 container attach 7fe1ca795c996d2a018dd96fd020351ff0897aa79fcdcfecbea15ece5f498019 (image=quay.io/ceph/ceph:v18, name=compassionate_lehmann, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:40 np0005465604 ceph-mon[74477]: Saving service mon spec with placement count:5
Oct  2 03:47:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:40 np0005465604 ceph-mon[74477]: Saving service mgr spec with placement count:2
Oct  2 03:47:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:40 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:40 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Saving service crash spec with placement *
Oct  2 03:47:40 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct  2 03:47:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  2 03:47:40 np0005465604 podman[77293]: 2025-10-02 07:47:40.985160118 +0000 UTC m=+0.054584523 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:40 np0005465604 compassionate_lehmann[77123]: Scheduled crash update...
Oct  2 03:47:41 np0005465604 systemd[1]: libpod-7fe1ca795c996d2a018dd96fd020351ff0897aa79fcdcfecbea15ece5f498019.scope: Deactivated successfully.
Oct  2 03:47:41 np0005465604 podman[77082]: 2025-10-02 07:47:41.009363472 +0000 UTC m=+0.753664678 container died 7fe1ca795c996d2a018dd96fd020351ff0897aa79fcdcfecbea15ece5f498019 (image=quay.io/ceph/ceph:v18, name=compassionate_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5b1ff2edd8705a5beb02fc9b81ee7bd87bc1e954a791c2c93da7c59e0a54cd4c-merged.mount: Deactivated successfully.
Oct  2 03:47:41 np0005465604 podman[77082]: 2025-10-02 07:47:41.045643247 +0000 UTC m=+0.789944453 container remove 7fe1ca795c996d2a018dd96fd020351ff0897aa79fcdcfecbea15ece5f498019 (image=quay.io/ceph/ceph:v18, name=compassionate_lehmann, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:47:41 np0005465604 systemd[1]: libpod-conmon-7fe1ca795c996d2a018dd96fd020351ff0897aa79fcdcfecbea15ece5f498019.scope: Deactivated successfully.
Oct  2 03:47:41 np0005465604 podman[77328]: 2025-10-02 07:47:41.108624491 +0000 UTC m=+0.042495222 container create 3a9276e65e4d6280141e4dc2af133d6c8cc520dc226504e4449568342f98749e (image=quay.io/ceph/ceph:v18, name=lucid_swirles, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:41 np0005465604 systemd[1]: Started libpod-conmon-3a9276e65e4d6280141e4dc2af133d6c8cc520dc226504e4449568342f98749e.scope.
Oct  2 03:47:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aba520369b93c696b94c41a780e0b86a0649c71da9542d63845c084c0328ad0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aba520369b93c696b94c41a780e0b86a0649c71da9542d63845c084c0328ad0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aba520369b93c696b94c41a780e0b86a0649c71da9542d63845c084c0328ad0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:41 np0005465604 podman[77328]: 2025-10-02 07:47:41.167931394 +0000 UTC m=+0.101802135 container init 3a9276e65e4d6280141e4dc2af133d6c8cc520dc226504e4449568342f98749e (image=quay.io/ceph/ceph:v18, name=lucid_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 03:47:41 np0005465604 podman[77328]: 2025-10-02 07:47:41.173779489 +0000 UTC m=+0.107650210 container start 3a9276e65e4d6280141e4dc2af133d6c8cc520dc226504e4449568342f98749e (image=quay.io/ceph/ceph:v18, name=lucid_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 03:47:41 np0005465604 podman[77328]: 2025-10-02 07:47:41.176526751 +0000 UTC m=+0.110397472 container attach 3a9276e65e4d6280141e4dc2af133d6c8cc520dc226504e4449568342f98749e (image=quay.io/ceph/ceph:v18, name=lucid_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 03:47:41 np0005465604 podman[77328]: 2025-10-02 07:47:41.085608812 +0000 UTC m=+0.019479553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:41 np0005465604 podman[77293]: 2025-10-02 07:47:41.269095719 +0000 UTC m=+0.338520114 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:47:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:47:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct  2 03:47:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/982899978' entity='client.admin' 
Oct  2 03:47:41 np0005465604 systemd[1]: libpod-3a9276e65e4d6280141e4dc2af133d6c8cc520dc226504e4449568342f98749e.scope: Deactivated successfully.
Oct  2 03:47:41 np0005465604 podman[77328]: 2025-10-02 07:47:41.709836908 +0000 UTC m=+0.643707669 container died 3a9276e65e4d6280141e4dc2af133d6c8cc520dc226504e4449568342f98749e (image=quay.io/ceph/ceph:v18, name=lucid_swirles, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1aba520369b93c696b94c41a780e0b86a0649c71da9542d63845c084c0328ad0-merged.mount: Deactivated successfully.
Oct  2 03:47:41 np0005465604 podman[77328]: 2025-10-02 07:47:41.757939966 +0000 UTC m=+0.691810717 container remove 3a9276e65e4d6280141e4dc2af133d6c8cc520dc226504e4449568342f98749e (image=quay.io/ceph/ceph:v18, name=lucid_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:41 np0005465604 systemd[1]: libpod-conmon-3a9276e65e4d6280141e4dc2af133d6c8cc520dc226504e4449568342f98749e.scope: Deactivated successfully.
Oct  2 03:47:41 np0005465604 ceph-mgr[74774]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  2 03:47:41 np0005465604 podman[77514]: 2025-10-02 07:47:41.846740822 +0000 UTC m=+0.055132510 container create cbe12f4a13430789365111a53fb5de0340bd7893e5a621f9fbaa4d2497481623 (image=quay.io/ceph/ceph:v18, name=unruffled_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:41 np0005465604 systemd[1]: Started libpod-conmon-cbe12f4a13430789365111a53fb5de0340bd7893e5a621f9fbaa4d2497481623.scope.
Oct  2 03:47:41 np0005465604 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77542 (sysctl)
Oct  2 03:47:41 np0005465604 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  2 03:47:41 np0005465604 podman[77514]: 2025-10-02 07:47:41.821331902 +0000 UTC m=+0.029723620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:41 np0005465604 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  2 03:47:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/342f1363fb255b1affb33e2187add13c904267e08964cc0515316d0bd6c2fad7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/342f1363fb255b1affb33e2187add13c904267e08964cc0515316d0bd6c2fad7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/342f1363fb255b1affb33e2187add13c904267e08964cc0515316d0bd6c2fad7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:41 np0005465604 podman[77514]: 2025-10-02 07:47:41.94533491 +0000 UTC m=+0.153726618 container init cbe12f4a13430789365111a53fb5de0340bd7893e5a621f9fbaa4d2497481623 (image=quay.io/ceph/ceph:v18, name=unruffled_goldwasser, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 03:47:41 np0005465604 podman[77514]: 2025-10-02 07:47:41.953288198 +0000 UTC m=+0.161679886 container start cbe12f4a13430789365111a53fb5de0340bd7893e5a621f9fbaa4d2497481623 (image=quay.io/ceph/ceph:v18, name=unruffled_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:41 np0005465604 podman[77514]: 2025-10-02 07:47:41.957115922 +0000 UTC m=+0.165507620 container attach cbe12f4a13430789365111a53fb5de0340bd7893e5a621f9fbaa4d2497481623 (image=quay.io/ceph/ceph:v18, name=unruffled_goldwasser, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:41 np0005465604 ceph-mon[74477]: Saving service crash spec with placement *
Oct  2 03:47:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:41 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/982899978' entity='client.admin' 
Oct  2 03:47:42 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct  2 03:47:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:42 np0005465604 systemd[1]: libpod-cbe12f4a13430789365111a53fb5de0340bd7893e5a621f9fbaa4d2497481623.scope: Deactivated successfully.
Oct  2 03:47:42 np0005465604 conmon[77544]: conmon cbe12f4a134307893651 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cbe12f4a13430789365111a53fb5de0340bd7893e5a621f9fbaa4d2497481623.scope/container/memory.events
Oct  2 03:47:42 np0005465604 podman[77514]: 2025-10-02 07:47:42.511593413 +0000 UTC m=+0.719985091 container died cbe12f4a13430789365111a53fb5de0340bd7893e5a621f9fbaa4d2497481623 (image=quay.io/ceph/ceph:v18, name=unruffled_goldwasser, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct  2 03:47:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-342f1363fb255b1affb33e2187add13c904267e08964cc0515316d0bd6c2fad7-merged.mount: Deactivated successfully.
Oct  2 03:47:42 np0005465604 podman[77514]: 2025-10-02 07:47:42.557360472 +0000 UTC m=+0.765752150 container remove cbe12f4a13430789365111a53fb5de0340bd7893e5a621f9fbaa4d2497481623 (image=quay.io/ceph/ceph:v18, name=unruffled_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:42 np0005465604 systemd[1]: libpod-conmon-cbe12f4a13430789365111a53fb5de0340bd7893e5a621f9fbaa4d2497481623.scope: Deactivated successfully.
Oct  2 03:47:42 np0005465604 podman[77700]: 2025-10-02 07:47:42.613357515 +0000 UTC m=+0.037639156 container create c360db62c6f5575bb86378e6b4d131734bf067745edf56bbf258fdcfffa4520c (image=quay.io/ceph/ceph:v18, name=clever_wozniak, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 03:47:42 np0005465604 systemd[1]: Started libpod-conmon-c360db62c6f5575bb86378e6b4d131734bf067745edf56bbf258fdcfffa4520c.scope.
Oct  2 03:47:42 np0005465604 podman[77700]: 2025-10-02 07:47:42.596533412 +0000 UTC m=+0.020815033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cf508fa4086fd24047618fbcc0c60a8700caeb1154136ef83f802776430c14/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cf508fa4086fd24047618fbcc0c60a8700caeb1154136ef83f802776430c14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cf508fa4086fd24047618fbcc0c60a8700caeb1154136ef83f802776430c14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:42 np0005465604 podman[77700]: 2025-10-02 07:47:42.753363573 +0000 UTC m=+0.177645224 container init c360db62c6f5575bb86378e6b4d131734bf067745edf56bbf258fdcfffa4520c (image=quay.io/ceph/ceph:v18, name=clever_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:42 np0005465604 podman[77700]: 2025-10-02 07:47:42.758824886 +0000 UTC m=+0.183106487 container start c360db62c6f5575bb86378e6b4d131734bf067745edf56bbf258fdcfffa4520c (image=quay.io/ceph/ceph:v18, name=clever_wozniak, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:42 np0005465604 podman[77700]: 2025-10-02 07:47:42.762288629 +0000 UTC m=+0.186570310 container attach c360db62c6f5575bb86378e6b4d131734bf067745edf56bbf258fdcfffa4520c (image=quay.io/ceph/ceph:v18, name=clever_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 03:47:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:47:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:43 np0005465604 podman[77899]: 2025-10-02 07:47:43.30974142 +0000 UTC m=+0.034115321 container create 33aaf84e37f5254f7692f0df0e0dd89265adb48e8d6291abbec7e0acd9fa8a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:47:43 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  2 03:47:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:43 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Added label _admin to host compute-0
Oct  2 03:47:43 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct  2 03:47:43 np0005465604 clever_wozniak[77718]: Added label _admin to host compute-0
Oct  2 03:47:43 np0005465604 systemd[1]: Started libpod-conmon-33aaf84e37f5254f7692f0df0e0dd89265adb48e8d6291abbec7e0acd9fa8a2a.scope.
Oct  2 03:47:43 np0005465604 systemd[1]: libpod-c360db62c6f5575bb86378e6b4d131734bf067745edf56bbf258fdcfffa4520c.scope: Deactivated successfully.
Oct  2 03:47:43 np0005465604 podman[77700]: 2025-10-02 07:47:43.353139467 +0000 UTC m=+0.777421068 container died c360db62c6f5575bb86378e6b4d131734bf067745edf56bbf258fdcfffa4520c (image=quay.io/ceph/ceph:v18, name=clever_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 03:47:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:43 np0005465604 podman[77899]: 2025-10-02 07:47:43.376149686 +0000 UTC m=+0.100523597 container init 33aaf84e37f5254f7692f0df0e0dd89265adb48e8d6291abbec7e0acd9fa8a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 03:47:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d4cf508fa4086fd24047618fbcc0c60a8700caeb1154136ef83f802776430c14-merged.mount: Deactivated successfully.
Oct  2 03:47:43 np0005465604 podman[77899]: 2025-10-02 07:47:43.390030051 +0000 UTC m=+0.114403952 container start 33aaf84e37f5254f7692f0df0e0dd89265adb48e8d6291abbec7e0acd9fa8a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elbakyan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:43 np0005465604 podman[77899]: 2025-10-02 07:47:43.294997529 +0000 UTC m=+0.019371450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:47:43 np0005465604 hardcore_elbakyan[77917]: 167 167
Oct  2 03:47:43 np0005465604 podman[77899]: 2025-10-02 07:47:43.394921267 +0000 UTC m=+0.119295168 container attach 33aaf84e37f5254f7692f0df0e0dd89265adb48e8d6291abbec7e0acd9fa8a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elbakyan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:43 np0005465604 podman[77700]: 2025-10-02 07:47:43.40306753 +0000 UTC m=+0.827349131 container remove c360db62c6f5575bb86378e6b4d131734bf067745edf56bbf258fdcfffa4520c (image=quay.io/ceph/ceph:v18, name=clever_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 03:47:43 np0005465604 systemd[1]: libpod-33aaf84e37f5254f7692f0df0e0dd89265adb48e8d6291abbec7e0acd9fa8a2a.scope: Deactivated successfully.
Oct  2 03:47:43 np0005465604 podman[77899]: 2025-10-02 07:47:43.409479872 +0000 UTC m=+0.133853773 container died 33aaf84e37f5254f7692f0df0e0dd89265adb48e8d6291abbec7e0acd9fa8a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elbakyan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 03:47:43 np0005465604 systemd[1]: libpod-conmon-c360db62c6f5575bb86378e6b4d131734bf067745edf56bbf258fdcfffa4520c.scope: Deactivated successfully.
Oct  2 03:47:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e0eae4c3ebab59d0c59de2157e6d68a83a20b1f017b0957257c8e034473c2ff7-merged.mount: Deactivated successfully.
Oct  2 03:47:43 np0005465604 podman[77899]: 2025-10-02 07:47:43.457326643 +0000 UTC m=+0.181700544 container remove 33aaf84e37f5254f7692f0df0e0dd89265adb48e8d6291abbec7e0acd9fa8a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 03:47:43 np0005465604 systemd[1]: libpod-conmon-33aaf84e37f5254f7692f0df0e0dd89265adb48e8d6291abbec7e0acd9fa8a2a.scope: Deactivated successfully.
Oct  2 03:47:43 np0005465604 podman[77937]: 2025-10-02 07:47:43.483448834 +0000 UTC m=+0.061563342 container create eeb74189d6a31786d53c68d6709f89eed0bd7832ed512efe5e136b6fac447bb2 (image=quay.io/ceph/ceph:v18, name=frosty_raman, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:43 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:43 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:43 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:43 np0005465604 systemd[1]: Started libpod-conmon-eeb74189d6a31786d53c68d6709f89eed0bd7832ed512efe5e136b6fac447bb2.scope.
Oct  2 03:47:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1c097637c97a52bf1556515f1da2666c94847306b7741a41e4cca31bc32ed0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1c097637c97a52bf1556515f1da2666c94847306b7741a41e4cca31bc32ed0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1c097637c97a52bf1556515f1da2666c94847306b7741a41e4cca31bc32ed0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:43 np0005465604 podman[77937]: 2025-10-02 07:47:43.545419357 +0000 UTC m=+0.123533935 container init eeb74189d6a31786d53c68d6709f89eed0bd7832ed512efe5e136b6fac447bb2 (image=quay.io/ceph/ceph:v18, name=frosty_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 03:47:43 np0005465604 podman[77937]: 2025-10-02 07:47:43.550832339 +0000 UTC m=+0.128946837 container start eeb74189d6a31786d53c68d6709f89eed0bd7832ed512efe5e136b6fac447bb2 (image=quay.io/ceph/ceph:v18, name=frosty_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 03:47:43 np0005465604 podman[77937]: 2025-10-02 07:47:43.553914001 +0000 UTC m=+0.132028589 container attach eeb74189d6a31786d53c68d6709f89eed0bd7832ed512efe5e136b6fac447bb2 (image=quay.io/ceph/ceph:v18, name=frosty_raman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 03:47:43 np0005465604 podman[77937]: 2025-10-02 07:47:43.467028383 +0000 UTC m=+0.045142901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:43 np0005465604 ceph-mgr[74774]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  2 03:47:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct  2 03:47:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/596763612' entity='client.admin' 
Oct  2 03:47:44 np0005465604 systemd[1]: libpod-eeb74189d6a31786d53c68d6709f89eed0bd7832ed512efe5e136b6fac447bb2.scope: Deactivated successfully.
Oct  2 03:47:44 np0005465604 podman[77937]: 2025-10-02 07:47:44.155917103 +0000 UTC m=+0.734031631 container died eeb74189d6a31786d53c68d6709f89eed0bd7832ed512efe5e136b6fac447bb2 (image=quay.io/ceph/ceph:v18, name=frosty_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ab1c097637c97a52bf1556515f1da2666c94847306b7741a41e4cca31bc32ed0-merged.mount: Deactivated successfully.
Oct  2 03:47:44 np0005465604 podman[77937]: 2025-10-02 07:47:44.20363723 +0000 UTC m=+0.781751728 container remove eeb74189d6a31786d53c68d6709f89eed0bd7832ed512efe5e136b6fac447bb2 (image=quay.io/ceph/ceph:v18, name=frosty_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:44 np0005465604 systemd[1]: libpod-conmon-eeb74189d6a31786d53c68d6709f89eed0bd7832ed512efe5e136b6fac447bb2.scope: Deactivated successfully.
Oct  2 03:47:44 np0005465604 podman[78004]: 2025-10-02 07:47:44.279325292 +0000 UTC m=+0.053443218 container create b40b952a76651dda9a3fa2d7174087f8202d6db475fb7862d65623bad7cfb795 (image=quay.io/ceph/ceph:v18, name=distracted_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:44 np0005465604 systemd[1]: Started libpod-conmon-b40b952a76651dda9a3fa2d7174087f8202d6db475fb7862d65623bad7cfb795.scope.
Oct  2 03:47:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7380c9bd1ef0f5a78b01a97b7be8f9e558e1faaf89729114ccbeab75df9cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7380c9bd1ef0f5a78b01a97b7be8f9e558e1faaf89729114ccbeab75df9cb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7380c9bd1ef0f5a78b01a97b7be8f9e558e1faaf89729114ccbeab75df9cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:44 np0005465604 podman[78004]: 2025-10-02 07:47:44.339821382 +0000 UTC m=+0.113939348 container init b40b952a76651dda9a3fa2d7174087f8202d6db475fb7862d65623bad7cfb795 (image=quay.io/ceph/ceph:v18, name=distracted_wing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 03:47:44 np0005465604 podman[78004]: 2025-10-02 07:47:44.345367168 +0000 UTC m=+0.119485114 container start b40b952a76651dda9a3fa2d7174087f8202d6db475fb7862d65623bad7cfb795 (image=quay.io/ceph/ceph:v18, name=distracted_wing, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:44 np0005465604 podman[78004]: 2025-10-02 07:47:44.25481755 +0000 UTC m=+0.028935556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:44 np0005465604 podman[78004]: 2025-10-02 07:47:44.348275415 +0000 UTC m=+0.122393361 container attach b40b952a76651dda9a3fa2d7174087f8202d6db475fb7862d65623bad7cfb795 (image=quay.io/ceph/ceph:v18, name=distracted_wing, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct  2 03:47:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1217163385' entity='client.admin' 
Oct  2 03:47:44 np0005465604 distracted_wing[78021]: set mgr/dashboard/cluster/status
Oct  2 03:47:44 np0005465604 systemd[1]: libpod-b40b952a76651dda9a3fa2d7174087f8202d6db475fb7862d65623bad7cfb795.scope: Deactivated successfully.
Oct  2 03:47:44 np0005465604 conmon[78021]: conmon b40b952a76651dda9a3f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b40b952a76651dda9a3fa2d7174087f8202d6db475fb7862d65623bad7cfb795.scope/container/memory.events
Oct  2 03:47:44 np0005465604 podman[78004]: 2025-10-02 07:47:44.970411448 +0000 UTC m=+0.744529404 container died b40b952a76651dda9a3fa2d7174087f8202d6db475fb7862d65623bad7cfb795 (image=quay.io/ceph/ceph:v18, name=distracted_wing, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 03:47:45 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e0a7380c9bd1ef0f5a78b01a97b7be8f9e558e1faaf89729114ccbeab75df9cb-merged.mount: Deactivated successfully.
Oct  2 03:47:45 np0005465604 podman[78004]: 2025-10-02 07:47:45.025543087 +0000 UTC m=+0.799661053 container remove b40b952a76651dda9a3fa2d7174087f8202d6db475fb7862d65623bad7cfb795 (image=quay.io/ceph/ceph:v18, name=distracted_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct  2 03:47:45 np0005465604 systemd[1]: libpod-conmon-b40b952a76651dda9a3fa2d7174087f8202d6db475fb7862d65623bad7cfb795.scope: Deactivated successfully.
Oct  2 03:47:45 np0005465604 ceph-mon[74477]: Added label _admin to host compute-0
Oct  2 03:47:45 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/596763612' entity='client.admin' 
Oct  2 03:47:45 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1217163385' entity='client.admin' 
Oct  2 03:47:45 np0005465604 podman[78068]: 2025-10-02 07:47:45.243905106 +0000 UTC m=+0.053203062 container create 6bdfdc5250c84b3054d533ff5ccad57deacea5a13b8f5fc9e7aab6e7606936b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 03:47:45 np0005465604 systemd[1]: Started libpod-conmon-6bdfdc5250c84b3054d533ff5ccad57deacea5a13b8f5fc9e7aab6e7606936b6.scope.
Oct  2 03:47:45 np0005465604 podman[78068]: 2025-10-02 07:47:45.218549028 +0000 UTC m=+0.027847044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:47:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a5ae5aa6d2666178360ca5b1a03ef275c9dda68ce495dd6525ce85cc4f117f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a5ae5aa6d2666178360ca5b1a03ef275c9dda68ce495dd6525ce85cc4f117f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a5ae5aa6d2666178360ca5b1a03ef275c9dda68ce495dd6525ce85cc4f117f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a5ae5aa6d2666178360ca5b1a03ef275c9dda68ce495dd6525ce85cc4f117f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:45 np0005465604 podman[78068]: 2025-10-02 07:47:45.346797603 +0000 UTC m=+0.156095619 container init 6bdfdc5250c84b3054d533ff5ccad57deacea5a13b8f5fc9e7aab6e7606936b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilbur, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:45 np0005465604 podman[78068]: 2025-10-02 07:47:45.358177544 +0000 UTC m=+0.167475490 container start 6bdfdc5250c84b3054d533ff5ccad57deacea5a13b8f5fc9e7aab6e7606936b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:45 np0005465604 podman[78068]: 2025-10-02 07:47:45.362276486 +0000 UTC m=+0.171574442 container attach 6bdfdc5250c84b3054d533ff5ccad57deacea5a13b8f5fc9e7aab6e7606936b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:45 np0005465604 python3[78114]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:47:45 np0005465604 podman[78115]: 2025-10-02 07:47:45.704742316 +0000 UTC m=+0.055246822 container create 982d031b96a4f94c11f4579b51249850e4b9a395b040cbced4b024b842caf58f (image=quay.io/ceph/ceph:v18, name=quirky_shirley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:45 np0005465604 systemd[1]: Started libpod-conmon-982d031b96a4f94c11f4579b51249850e4b9a395b040cbced4b024b842caf58f.scope.
Oct  2 03:47:45 np0005465604 podman[78115]: 2025-10-02 07:47:45.679169372 +0000 UTC m=+0.029673878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:45 np0005465604 ceph-mgr[74774]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  2 03:47:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019bc087ffa1366c9539ec1e7866e922a17bf432ca0487b6f8dda59213dd9b9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019bc087ffa1366c9539ec1e7866e922a17bf432ca0487b6f8dda59213dd9b9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:45 np0005465604 podman[78115]: 2025-10-02 07:47:45.809389956 +0000 UTC m=+0.159894442 container init 982d031b96a4f94c11f4579b51249850e4b9a395b040cbced4b024b842caf58f (image=quay.io/ceph/ceph:v18, name=quirky_shirley, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:45 np0005465604 podman[78115]: 2025-10-02 07:47:45.818309873 +0000 UTC m=+0.168814379 container start 982d031b96a4f94c11f4579b51249850e4b9a395b040cbced4b024b842caf58f (image=quay.io/ceph/ceph:v18, name=quirky_shirley, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:45 np0005465604 podman[78115]: 2025-10-02 07:47:45.822709604 +0000 UTC m=+0.173214110 container attach 982d031b96a4f94c11f4579b51249850e4b9a395b040cbced4b024b842caf58f (image=quay.io/ceph/ceph:v18, name=quirky_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4098285131' entity='client.admin' 
Oct  2 03:47:46 np0005465604 systemd[1]: libpod-982d031b96a4f94c11f4579b51249850e4b9a395b040cbced4b024b842caf58f.scope: Deactivated successfully.
Oct  2 03:47:46 np0005465604 podman[78171]: 2025-10-02 07:47:46.427458758 +0000 UTC m=+0.031410030 container died 982d031b96a4f94c11f4579b51249850e4b9a395b040cbced4b024b842caf58f (image=quay.io/ceph/ceph:v18, name=quirky_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-019bc087ffa1366c9539ec1e7866e922a17bf432ca0487b6f8dda59213dd9b9e-merged.mount: Deactivated successfully.
Oct  2 03:47:46 np0005465604 podman[78171]: 2025-10-02 07:47:46.471196025 +0000 UTC m=+0.075147247 container remove 982d031b96a4f94c11f4579b51249850e4b9a395b040cbced4b024b842caf58f (image=quay.io/ceph/ceph:v18, name=quirky_shirley, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:46 np0005465604 systemd[1]: libpod-conmon-982d031b96a4f94c11f4579b51249850e4b9a395b040cbced4b024b842caf58f.scope: Deactivated successfully.
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]: [
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:    {
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:        "available": false,
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:        "ceph_device": false,
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:        "lsm_data": {},
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:        "lvs": [],
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:        "path": "/dev/sr0",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:        "rejected_reasons": [
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "Insufficient space (<5GB)",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "Has a FileSystem"
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:        ],
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:        "sys_api": {
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "actuators": null,
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "device_nodes": "sr0",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "devname": "sr0",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "human_readable_size": "482.00 KB",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "id_bus": "ata",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "model": "QEMU DVD-ROM",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "nr_requests": "2",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "parent": "/dev/sr0",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "partitions": {},
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "path": "/dev/sr0",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "removable": "1",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "rev": "2.5+",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "ro": "0",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "rotational": "0",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "sas_address": "",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "sas_device_handle": "",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "scheduler_mode": "mq-deadline",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "sectors": 0,
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "sectorsize": "2048",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "size": 493568.0,
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "support_discard": "2048",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "type": "disk",
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:            "vendor": "QEMU"
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:        }
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]:    }
Oct  2 03:47:46 np0005465604 sleepy_wilbur[78084]: ]
Oct  2 03:47:46 np0005465604 systemd[1]: libpod-6bdfdc5250c84b3054d533ff5ccad57deacea5a13b8f5fc9e7aab6e7606936b6.scope: Deactivated successfully.
Oct  2 03:47:46 np0005465604 podman[78068]: 2025-10-02 07:47:46.810191342 +0000 UTC m=+1.619489258 container died 6bdfdc5250c84b3054d533ff5ccad57deacea5a13b8f5fc9e7aab6e7606936b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:46 np0005465604 systemd[1]: libpod-6bdfdc5250c84b3054d533ff5ccad57deacea5a13b8f5fc9e7aab6e7606936b6.scope: Consumed 1.433s CPU time.
Oct  2 03:47:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-91a5ae5aa6d2666178360ca5b1a03ef275c9dda68ce495dd6525ce85cc4f117f-merged.mount: Deactivated successfully.
Oct  2 03:47:46 np0005465604 podman[78068]: 2025-10-02 07:47:46.86026083 +0000 UTC m=+1.669558746 container remove 6bdfdc5250c84b3054d533ff5ccad57deacea5a13b8f5fc9e7aab6e7606936b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 03:47:46 np0005465604 systemd[1]: libpod-conmon-6bdfdc5250c84b3054d533ff5ccad57deacea5a13b8f5fc9e7aab6e7606936b6.scope: Deactivated successfully.
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:47:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:47:46 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  2 03:47:46 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  2 03:47:47 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/4098285131' entity='client.admin' 
Oct  2 03:47:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 03:47:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:47:47 np0005465604 ceph-mon[74477]: Updating compute-0:/etc/ceph/ceph.conf
Oct  2 03:47:47 np0005465604 ansible-async_wrapper.py[80252]: Invoked with j570937248486 30 /home/zuul/.ansible/tmp/ansible-tmp-1759391266.8991654-33155-246869962443162/AnsiballZ_command.py _
Oct  2 03:47:47 np0005465604 ansible-async_wrapper.py[80308]: Starting module and watcher
Oct  2 03:47:47 np0005465604 ansible-async_wrapper.py[80308]: Start watching 80311 (30)
Oct  2 03:47:47 np0005465604 ansible-async_wrapper.py[80311]: Start module (80311)
Oct  2 03:47:47 np0005465604 ansible-async_wrapper.py[80252]: Return async_wrapper task started.
Oct  2 03:47:47 np0005465604 ceph-mgr[74774]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct  2 03:47:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:47:47 np0005465604 ceph-mon[74477]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  2 03:47:47 np0005465604 python3[80315]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:47:47 np0005465604 podman[80379]: 2025-10-02 07:47:47.931645497 +0000 UTC m=+0.071085837 container create ed1b37696f88755ad6e7362c3a84ae2956fc57d11355f109051017d9841bc681 (image=quay.io/ceph/ceph:v18, name=blissful_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 03:47:47 np0005465604 systemd[1]: Started libpod-conmon-ed1b37696f88755ad6e7362c3a84ae2956fc57d11355f109051017d9841bc681.scope.
Oct  2 03:47:47 np0005465604 podman[80379]: 2025-10-02 07:47:47.906178695 +0000 UTC m=+0.045619095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c82ce7528583e16c55fb47fe0cc5b021de3a9ac885699f9e9e67ebbe865b094/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c82ce7528583e16c55fb47fe0cc5b021de3a9ac885699f9e9e67ebbe865b094/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:48 np0005465604 podman[80379]: 2025-10-02 07:47:48.027573375 +0000 UTC m=+0.167013735 container init ed1b37696f88755ad6e7362c3a84ae2956fc57d11355f109051017d9841bc681 (image=quay.io/ceph/ceph:v18, name=blissful_margulis, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 03:47:48 np0005465604 podman[80379]: 2025-10-02 07:47:48.041855843 +0000 UTC m=+0.181296183 container start ed1b37696f88755ad6e7362c3a84ae2956fc57d11355f109051017d9841bc681 (image=quay.io/ceph/ceph:v18, name=blissful_margulis, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:47:48 np0005465604 podman[80379]: 2025-10-02 07:47:48.045495252 +0000 UTC m=+0.184935632 container attach ed1b37696f88755ad6e7362c3a84ae2956fc57d11355f109051017d9841bc681 (image=quay.io/ceph/ceph:v18, name=blissful_margulis, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 03:47:48 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/config/ceph.conf
Oct  2 03:47:48 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/config/ceph.conf
Oct  2 03:47:48 np0005465604 ceph-mon[74477]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  2 03:47:48 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  2 03:47:48 np0005465604 blissful_margulis[80433]: 
Oct  2 03:47:48 np0005465604 blissful_margulis[80433]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  2 03:47:48 np0005465604 systemd[1]: libpod-ed1b37696f88755ad6e7362c3a84ae2956fc57d11355f109051017d9841bc681.scope: Deactivated successfully.
Oct  2 03:47:48 np0005465604 podman[80379]: 2025-10-02 07:47:48.626398472 +0000 UTC m=+0.765838852 container died ed1b37696f88755ad6e7362c3a84ae2956fc57d11355f109051017d9841bc681 (image=quay.io/ceph/ceph:v18, name=blissful_margulis, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct  2 03:47:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5c82ce7528583e16c55fb47fe0cc5b021de3a9ac885699f9e9e67ebbe865b094-merged.mount: Deactivated successfully.
Oct  2 03:47:48 np0005465604 podman[80379]: 2025-10-02 07:47:48.672730377 +0000 UTC m=+0.812170727 container remove ed1b37696f88755ad6e7362c3a84ae2956fc57d11355f109051017d9841bc681 (image=quay.io/ceph/ceph:v18, name=blissful_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:48 np0005465604 systemd[1]: libpod-conmon-ed1b37696f88755ad6e7362c3a84ae2956fc57d11355f109051017d9841bc681.scope: Deactivated successfully.
Oct  2 03:47:48 np0005465604 ansible-async_wrapper.py[80311]: Module complete (80311)
Oct  2 03:47:49 np0005465604 python3[80857]: ansible-ansible.legacy.async_status Invoked with jid=j570937248486.80252 mode=status _async_dir=/root/.ansible_async
Oct  2 03:47:49 np0005465604 ceph-mon[74477]: Updating compute-0:/var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/config/ceph.conf
Oct  2 03:47:49 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  2 03:47:49 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  2 03:47:49 np0005465604 python3[81037]: ansible-ansible.legacy.async_status Invoked with jid=j570937248486.80252 mode=cleanup _async_dir=/root/.ansible_async
Oct  2 03:47:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:47:50 np0005465604 python3[81278]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:47:50 np0005465604 ceph-mon[74477]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  2 03:47:50 np0005465604 python3[81496]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:47:50 np0005465604 podman[81534]: 2025-10-02 07:47:50.691287098 +0000 UTC m=+0.075668854 container create b09118f5e928cf31eee2be0543b106102f5ce99ce9603a8b4d52612e7c0ccf00 (image=quay.io/ceph/ceph:v18, name=boring_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct  2 03:47:50 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/config/ceph.client.admin.keyring
Oct  2 03:47:50 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/config/ceph.client.admin.keyring
Oct  2 03:47:50 np0005465604 systemd[1]: Started libpod-conmon-b09118f5e928cf31eee2be0543b106102f5ce99ce9603a8b4d52612e7c0ccf00.scope.
Oct  2 03:47:50 np0005465604 podman[81534]: 2025-10-02 07:47:50.660956251 +0000 UTC m=+0.045338067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcfbfda84c20c89aed1e49820bfb79c80cceab9570dc97c881aab228fc282f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcfbfda84c20c89aed1e49820bfb79c80cceab9570dc97c881aab228fc282f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcfbfda84c20c89aed1e49820bfb79c80cceab9570dc97c881aab228fc282f5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:50 np0005465604 podman[81534]: 2025-10-02 07:47:50.805892995 +0000 UTC m=+0.190274771 container init b09118f5e928cf31eee2be0543b106102f5ce99ce9603a8b4d52612e7c0ccf00 (image=quay.io/ceph/ceph:v18, name=boring_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:50 np0005465604 podman[81534]: 2025-10-02 07:47:50.813580835 +0000 UTC m=+0.197962551 container start b09118f5e928cf31eee2be0543b106102f5ce99ce9603a8b4d52612e7c0ccf00 (image=quay.io/ceph/ceph:v18, name=boring_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:50 np0005465604 podman[81534]: 2025-10-02 07:47:50.816551894 +0000 UTC m=+0.200933630 container attach b09118f5e928cf31eee2be0543b106102f5ce99ce9603a8b4d52612e7c0ccf00 (image=quay.io/ceph/ceph:v18, name=boring_allen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: Updating compute-0:/var/lib/ceph/a52e644f-f702-594c-a648-813e3e0df2b1/config/ceph.client.admin.keyring
Oct  2 03:47:51 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  2 03:47:51 np0005465604 boring_allen[81592]: 
Oct  2 03:47:51 np0005465604 boring_allen[81592]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  2 03:47:51 np0005465604 systemd[1]: libpod-b09118f5e928cf31eee2be0543b106102f5ce99ce9603a8b4d52612e7c0ccf00.scope: Deactivated successfully.
Oct  2 03:47:51 np0005465604 podman[81534]: 2025-10-02 07:47:51.41736894 +0000 UTC m=+0.801750666 container died b09118f5e928cf31eee2be0543b106102f5ce99ce9603a8b4d52612e7c0ccf00 (image=quay.io/ceph/ceph:v18, name=boring_allen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:47:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ebcfbfda84c20c89aed1e49820bfb79c80cceab9570dc97c881aab228fc282f5-merged.mount: Deactivated successfully.
Oct  2 03:47:51 np0005465604 podman[81534]: 2025-10-02 07:47:51.455932893 +0000 UTC m=+0.840314639 container remove b09118f5e928cf31eee2be0543b106102f5ce99ce9603a8b4d52612e7c0ccf00 (image=quay.io/ceph/ceph:v18, name=boring_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct  2 03:47:51 np0005465604 systemd[1]: libpod-conmon-b09118f5e928cf31eee2be0543b106102f5ce99ce9603a8b4d52612e7c0ccf00.scope: Deactivated successfully.
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:47:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:51 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev 93b59808-1afe-4dab-814c-3d891cd54dd8 (Updating crash deployment (+1 -> 1))
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:47:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:47:51 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct  2 03:47:51 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct  2 03:47:51 np0005465604 python3[82033]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:47:52 np0005465604 podman[82065]: 2025-10-02 07:47:52.072051737 +0000 UTC m=+0.064164760 container create c332d6b875ea94279fddf09bc9d3f7fa7f9b5dd184c77d1f7ad4952a1198650c (image=quay.io/ceph/ceph:v18, name=condescending_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 03:47:52 np0005465604 systemd[1]: Started libpod-conmon-c332d6b875ea94279fddf09bc9d3f7fa7f9b5dd184c77d1f7ad4952a1198650c.scope.
Oct  2 03:47:52 np0005465604 podman[82065]: 2025-10-02 07:47:52.039441382 +0000 UTC m=+0.031554455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e1196e2ed7eb7a57bc8df5813cee638141cdd82901a990131588f8fb0c27cb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e1196e2ed7eb7a57bc8df5813cee638141cdd82901a990131588f8fb0c27cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e1196e2ed7eb7a57bc8df5813cee638141cdd82901a990131588f8fb0c27cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:52 np0005465604 podman[82065]: 2025-10-02 07:47:52.15910803 +0000 UTC m=+0.151221093 container init c332d6b875ea94279fddf09bc9d3f7fa7f9b5dd184c77d1f7ad4952a1198650c (image=quay.io/ceph/ceph:v18, name=condescending_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 03:47:52 np0005465604 podman[82065]: 2025-10-02 07:47:52.167332706 +0000 UTC m=+0.159445729 container start c332d6b875ea94279fddf09bc9d3f7fa7f9b5dd184c77d1f7ad4952a1198650c (image=quay.io/ceph/ceph:v18, name=condescending_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 03:47:52 np0005465604 podman[82065]: 2025-10-02 07:47:52.171708537 +0000 UTC m=+0.163821550 container attach c332d6b875ea94279fddf09bc9d3f7fa7f9b5dd184c77d1f7ad4952a1198650c (image=quay.io/ceph/ceph:v18, name=condescending_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 03:47:52 np0005465604 ansible-async_wrapper.py[80308]: Done in kid B.
Oct  2 03:47:52 np0005465604 podman[82237]: 2025-10-02 07:47:52.673332377 +0000 UTC m=+0.056285585 container create 861eb223978d25822a3656b03ef9ef2997741909b9a9bb30e06dcc46154352d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct  2 03:47:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1814308330' entity='client.admin' 
Oct  2 03:47:52 np0005465604 systemd[1]: Started libpod-conmon-861eb223978d25822a3656b03ef9ef2997741909b9a9bb30e06dcc46154352d5.scope.
Oct  2 03:47:52 np0005465604 podman[82237]: 2025-10-02 07:47:52.642209716 +0000 UTC m=+0.025162974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:47:52 np0005465604 podman[82065]: 2025-10-02 07:47:52.738274819 +0000 UTC m=+0.730387832 container died c332d6b875ea94279fddf09bc9d3f7fa7f9b5dd184c77d1f7ad4952a1198650c (image=quay.io/ceph/ceph:v18, name=condescending_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 03:47:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:52 np0005465604 systemd[1]: libpod-c332d6b875ea94279fddf09bc9d3f7fa7f9b5dd184c77d1f7ad4952a1198650c.scope: Deactivated successfully.
Oct  2 03:47:52 np0005465604 podman[82237]: 2025-10-02 07:47:52.79151948 +0000 UTC m=+0.174472688 container init 861eb223978d25822a3656b03ef9ef2997741909b9a9bb30e06dcc46154352d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wiles, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:52 np0005465604 podman[82237]: 2025-10-02 07:47:52.80286779 +0000 UTC m=+0.185820968 container start 861eb223978d25822a3656b03ef9ef2997741909b9a9bb30e06dcc46154352d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wiles, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b7e1196e2ed7eb7a57bc8df5813cee638141cdd82901a990131588f8fb0c27cb-merged.mount: Deactivated successfully.
Oct  2 03:47:52 np0005465604 infallible_wiles[82256]: 167 167
Oct  2 03:47:52 np0005465604 systemd[1]: libpod-861eb223978d25822a3656b03ef9ef2997741909b9a9bb30e06dcc46154352d5.scope: Deactivated successfully.
Oct  2 03:47:52 np0005465604 podman[82237]: 2025-10-02 07:47:52.814150218 +0000 UTC m=+0.197103396 container attach 861eb223978d25822a3656b03ef9ef2997741909b9a9bb30e06dcc46154352d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 03:47:52 np0005465604 podman[82237]: 2025-10-02 07:47:52.814531109 +0000 UTC m=+0.197484287 container died 861eb223978d25822a3656b03ef9ef2997741909b9a9bb30e06dcc46154352d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wiles, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  2 03:47:52 np0005465604 podman[82065]: 2025-10-02 07:47:52.830489806 +0000 UTC m=+0.822602779 container remove c332d6b875ea94279fddf09bc9d3f7fa7f9b5dd184c77d1f7ad4952a1198650c (image=quay.io/ceph/ceph:v18, name=condescending_rhodes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 03:47:52 np0005465604 systemd[1]: libpod-conmon-c332d6b875ea94279fddf09bc9d3f7fa7f9b5dd184c77d1f7ad4952a1198650c.scope: Deactivated successfully.
Oct  2 03:47:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1947baf10b94b5b01a4a246eba6099948dd5066d032f0fb938f2036d33124cfa-merged.mount: Deactivated successfully.
Oct  2 03:47:52 np0005465604 podman[82237]: 2025-10-02 07:47:52.864077831 +0000 UTC m=+0.247030999 container remove 861eb223978d25822a3656b03ef9ef2997741909b9a9bb30e06dcc46154352d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 03:47:52 np0005465604 systemd[1]: libpod-conmon-861eb223978d25822a3656b03ef9ef2997741909b9a9bb30e06dcc46154352d5.scope: Deactivated successfully.
Oct  2 03:47:52 np0005465604 systemd[1]: Reloading.
Oct  2 03:47:52 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:52 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:52 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:52 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 03:47:52 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  2 03:47:52 np0005465604 ceph-mon[74477]: Deploying daemon crash.compute-0 on compute-0
Oct  2 03:47:52 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1814308330' entity='client.admin' 
Oct  2 03:47:53 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:47:53 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:47:53 np0005465604 systemd[1]: Reloading.
Oct  2 03:47:53 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:47:53 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:47:53 np0005465604 python3[82352]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:47:53 np0005465604 podman[82390]: 2025-10-02 07:47:53.455459754 +0000 UTC m=+0.038148541 container create 0b2b5e3d92feb89f63c236adc3f568625a1fde02b0729c7384175d009f0be2f8 (image=quay.io/ceph/ceph:v18, name=thirsty_torvalds, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 03:47:53 np0005465604 podman[82390]: 2025-10-02 07:47:53.43960162 +0000 UTC m=+0.022290397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:53 np0005465604 systemd[1]: Started libpod-conmon-0b2b5e3d92feb89f63c236adc3f568625a1fde02b0729c7384175d009f0be2f8.scope.
Oct  2 03:47:53 np0005465604 systemd[1]: Starting Ceph crash.compute-0 for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:47:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3e08e2b7f6e69ec1a9c8ad4d8668a7703543b6d1a86a61aacaddf1cd3910e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3e08e2b7f6e69ec1a9c8ad4d8668a7703543b6d1a86a61aacaddf1cd3910e8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3e08e2b7f6e69ec1a9c8ad4d8668a7703543b6d1a86a61aacaddf1cd3910e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:53 np0005465604 podman[82390]: 2025-10-02 07:47:53.56867004 +0000 UTC m=+0.151358807 container init 0b2b5e3d92feb89f63c236adc3f568625a1fde02b0729c7384175d009f0be2f8 (image=quay.io/ceph/ceph:v18, name=thirsty_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 03:47:53 np0005465604 podman[82390]: 2025-10-02 07:47:53.574601377 +0000 UTC m=+0.157290144 container start 0b2b5e3d92feb89f63c236adc3f568625a1fde02b0729c7384175d009f0be2f8 (image=quay.io/ceph/ceph:v18, name=thirsty_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:53 np0005465604 podman[82390]: 2025-10-02 07:47:53.577819753 +0000 UTC m=+0.160508520 container attach 0b2b5e3d92feb89f63c236adc3f568625a1fde02b0729c7384175d009f0be2f8 (image=quay.io/ceph/ceph:v18, name=thirsty_torvalds, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:53 np0005465604 podman[82460]: 2025-10-02 07:47:53.778168045 +0000 UTC m=+0.054268414 container create 031c46800cbf304e1b98142a8d1cb93edd0742afd6eb29d3220b18c37c78172b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 03:47:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:47:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2304504cdadca5625d322fbefcfc5b013c1b3c16a53cdace4e578207326a376f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:53 np0005465604 podman[82460]: 2025-10-02 07:47:53.748877829 +0000 UTC m=+0.024978248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:47:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2304504cdadca5625d322fbefcfc5b013c1b3c16a53cdace4e578207326a376f/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2304504cdadca5625d322fbefcfc5b013c1b3c16a53cdace4e578207326a376f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2304504cdadca5625d322fbefcfc5b013c1b3c16a53cdace4e578207326a376f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:53 np0005465604 podman[82460]: 2025-10-02 07:47:53.87367507 +0000 UTC m=+0.149775439 container init 031c46800cbf304e1b98142a8d1cb93edd0742afd6eb29d3220b18c37c78172b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:53 np0005465604 podman[82460]: 2025-10-02 07:47:53.877986379 +0000 UTC m=+0.154086728 container start 031c46800cbf304e1b98142a8d1cb93edd0742afd6eb29d3220b18c37c78172b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 03:47:53 np0005465604 bash[82460]: 031c46800cbf304e1b98142a8d1cb93edd0742afd6eb29d3220b18c37c78172b
Oct  2 03:47:53 np0005465604 systemd[1]: Started Ceph crash.compute-0 for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:53 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev 93b59808-1afe-4dab-814c-3d891cd54dd8 (Updating crash deployment (+1 -> 1))
Oct  2 03:47:53 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event 93b59808-1afe-4dab-814c-3d891cd54dd8 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:53 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 3224d3b6-206f-4fa9-96fc-0df8757292e2 does not exist
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:53 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev f206c7b3-8fec-4581-9963-16e035e07f9c (Updating mgr deployment (+1 -> 2))
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.csaqxv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.csaqxv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.csaqxv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:47:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:47:53 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.csaqxv on compute-0
Oct  2 03:47:53 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.csaqxv on compute-0
Oct  2 03:47:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0[82475]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  2 03:47:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct  2 03:47:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2433725605' entity='client.admin' 
Oct  2 03:47:54 np0005465604 systemd[1]: libpod-0b2b5e3d92feb89f63c236adc3f568625a1fde02b0729c7384175d009f0be2f8.scope: Deactivated successfully.
Oct  2 03:47:54 np0005465604 podman[82390]: 2025-10-02 07:47:54.141314563 +0000 UTC m=+0.724003370 container died 0b2b5e3d92feb89f63c236adc3f568625a1fde02b0729c7384175d009f0be2f8 (image=quay.io/ceph/ceph:v18, name=thirsty_torvalds, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 03:47:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0c3e08e2b7f6e69ec1a9c8ad4d8668a7703543b6d1a86a61aacaddf1cd3910e8-merged.mount: Deactivated successfully.
Oct  2 03:47:54 np0005465604 podman[82390]: 2025-10-02 07:47:54.204857064 +0000 UTC m=+0.787545871 container remove 0b2b5e3d92feb89f63c236adc3f568625a1fde02b0729c7384175d009f0be2f8 (image=quay.io/ceph/ceph:v18, name=thirsty_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:54 np0005465604 systemd[1]: libpod-conmon-0b2b5e3d92feb89f63c236adc3f568625a1fde02b0729c7384175d009f0be2f8.scope: Deactivated successfully.
Oct  2 03:47:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0[82475]: 2025-10-02T07:47:54.276+0000 7fdcb995e640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  2 03:47:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0[82475]: 2025-10-02T07:47:54.276+0000 7fdcb995e640 -1 AuthRegistry(0x7fdcb4066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  2 03:47:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0[82475]: 2025-10-02T07:47:54.278+0000 7fdcb995e640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  2 03:47:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0[82475]: 2025-10-02T07:47:54.278+0000 7fdcb995e640 -1 AuthRegistry(0x7fdcb995d000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  2 03:47:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0[82475]: 2025-10-02T07:47:54.279+0000 7fdcb2ffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  2 03:47:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0[82475]: 2025-10-02T07:47:54.279+0000 7fdcb995e640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  2 03:47:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0[82475]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  2 03:47:54 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-crash-compute-0[82475]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  2 03:47:54 np0005465604 podman[82690]: 2025-10-02 07:47:54.605829934 +0000 UTC m=+0.047877533 container create db2a00fd3ff86663bac706e95fb471616b9c8adbea0cabc1ef0ead283c007852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 03:47:54 np0005465604 python3[82677]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:47:54 np0005465604 systemd[1]: Started libpod-conmon-db2a00fd3ff86663bac706e95fb471616b9c8adbea0cabc1ef0ead283c007852.scope.
Oct  2 03:47:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:54 np0005465604 podman[82690]: 2025-10-02 07:47:54.584338192 +0000 UTC m=+0.026385781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:47:54 np0005465604 podman[82690]: 2025-10-02 07:47:54.690336971 +0000 UTC m=+0.132384560 container init db2a00fd3ff86663bac706e95fb471616b9c8adbea0cabc1ef0ead283c007852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:54 np0005465604 podman[82690]: 2025-10-02 07:47:54.695923198 +0000 UTC m=+0.137970757 container start db2a00fd3ff86663bac706e95fb471616b9c8adbea0cabc1ef0ead283c007852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct  2 03:47:54 np0005465604 podman[82690]: 2025-10-02 07:47:54.699104173 +0000 UTC m=+0.141151742 container attach db2a00fd3ff86663bac706e95fb471616b9c8adbea0cabc1ef0ead283c007852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct  2 03:47:54 np0005465604 mystifying_rosalind[82707]: 167 167
Oct  2 03:47:54 np0005465604 systemd[1]: libpod-db2a00fd3ff86663bac706e95fb471616b9c8adbea0cabc1ef0ead283c007852.scope: Deactivated successfully.
Oct  2 03:47:54 np0005465604 podman[82706]: 2025-10-02 07:47:54.72205302 +0000 UTC m=+0.060765459 container create 50f72953402c83f240353878f9a434b1a321a7a63b5e7eec5953c38945c8e32c (image=quay.io/ceph/ceph:v18, name=vigorous_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:54 np0005465604 podman[82724]: 2025-10-02 07:47:54.75086856 +0000 UTC m=+0.035070469 container died db2a00fd3ff86663bac706e95fb471616b9c8adbea0cabc1ef0ead283c007852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 03:47:54 np0005465604 systemd[1]: Started libpod-conmon-50f72953402c83f240353878f9a434b1a321a7a63b5e7eec5953c38945c8e32c.scope.
Oct  2 03:47:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d30ae4dcf80382f5c665ebe566077a2d3a7d080f9810001e5c10a48011a1ef3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d30ae4dcf80382f5c665ebe566077a2d3a7d080f9810001e5c10a48011a1ef3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d30ae4dcf80382f5c665ebe566077a2d3a7d080f9810001e5c10a48011a1ef3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-449bb2de4e45727e762ed51dbd51a932018c94a236fe4578e78b7fd2a04c28d7-merged.mount: Deactivated successfully.
Oct  2 03:47:54 np0005465604 podman[82706]: 2025-10-02 07:47:54.697922758 +0000 UTC m=+0.036635217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:54 np0005465604 podman[82706]: 2025-10-02 07:47:54.800039891 +0000 UTC m=+0.138752390 container init 50f72953402c83f240353878f9a434b1a321a7a63b5e7eec5953c38945c8e32c (image=quay.io/ceph/ceph:v18, name=vigorous_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 03:47:54 np0005465604 podman[82706]: 2025-10-02 07:47:54.808588427 +0000 UTC m=+0.147300866 container start 50f72953402c83f240353878f9a434b1a321a7a63b5e7eec5953c38945c8e32c (image=quay.io/ceph/ceph:v18, name=vigorous_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:54 np0005465604 podman[82724]: 2025-10-02 07:47:54.809148643 +0000 UTC m=+0.093350552 container remove db2a00fd3ff86663bac706e95fb471616b9c8adbea0cabc1ef0ead283c007852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_rosalind, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 03:47:54 np0005465604 podman[82706]: 2025-10-02 07:47:54.812919437 +0000 UTC m=+0.151631876 container attach 50f72953402c83f240353878f9a434b1a321a7a63b5e7eec5953c38945c8e32c (image=quay.io/ceph/ceph:v18, name=vigorous_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:54 np0005465604 systemd[1]: libpod-conmon-db2a00fd3ff86663bac706e95fb471616b9c8adbea0cabc1ef0ead283c007852.scope: Deactivated successfully.
Oct  2 03:47:54 np0005465604 systemd[1]: Reloading.
Oct  2 03:47:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.csaqxv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 03:47:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.csaqxv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  2 03:47:54 np0005465604 ceph-mon[74477]: Deploying daemon mgr.compute-0.csaqxv on compute-0
Oct  2 03:47:54 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/2433725605' entity='client.admin' 
Oct  2 03:47:54 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:47:54 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:47:55 np0005465604 systemd[1]: Reloading.
Oct  2 03:47:55 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:47:55 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1452266908' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  2 03:47:55 np0005465604 systemd[1]: Starting Ceph mgr.compute-0.csaqxv for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:47:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:47:55 np0005465604 podman[82890]: 2025-10-02 07:47:55.79590625 +0000 UTC m=+0.061975694 container create 15c6b63cbfa13452ddcd5625cb5dc7226a176775331ecd6b145bbf1f036cec94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-csaqxv, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:47:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c258ed46d679224767ae0ab4f058171de65c94f0cc4e541b9813bdb67b9c861d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c258ed46d679224767ae0ab4f058171de65c94f0cc4e541b9813bdb67b9c861d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c258ed46d679224767ae0ab4f058171de65c94f0cc4e541b9813bdb67b9c861d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c258ed46d679224767ae0ab4f058171de65c94f0cc4e541b9813bdb67b9c861d/merged/var/lib/ceph/mgr/ceph-compute-0.csaqxv supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:55 np0005465604 podman[82890]: 2025-10-02 07:47:55.774286194 +0000 UTC m=+0.040355658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:47:55 np0005465604 podman[82890]: 2025-10-02 07:47:55.888138999 +0000 UTC m=+0.154208473 container init 15c6b63cbfa13452ddcd5625cb5dc7226a176775331ecd6b145bbf1f036cec94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-csaqxv, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:55 np0005465604 podman[82890]: 2025-10-02 07:47:55.900737365 +0000 UTC m=+0.166806809 container start 15c6b63cbfa13452ddcd5625cb5dc7226a176775331ecd6b145bbf1f036cec94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-csaqxv, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 03:47:55 np0005465604 bash[82890]: 15c6b63cbfa13452ddcd5625cb5dc7226a176775331ecd6b145bbf1f036cec94
Oct  2 03:47:55 np0005465604 systemd[1]: Started Ceph mgr.compute-0.csaqxv for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1452266908' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1452266908' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct  2 03:47:55 np0005465604 vigorous_morse[82738]: set require_min_compat_client to mimic
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:47:55 np0005465604 ceph-mgr[82910]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 03:47:55 np0005465604 ceph-mgr[82910]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  2 03:47:55 np0005465604 ceph-mgr[82910]: pidfile_write: ignore empty --pid-file
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:55 np0005465604 systemd[1]: libpod-50f72953402c83f240353878f9a434b1a321a7a63b5e7eec5953c38945c8e32c.scope: Deactivated successfully.
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  2 03:47:55 np0005465604 podman[82706]: 2025-10-02 07:47:55.979674646 +0000 UTC m=+1.318387075 container died 50f72953402c83f240353878f9a434b1a321a7a63b5e7eec5953c38945c8e32c (image=quay.io/ceph/ceph:v18, name=vigorous_morse, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:55 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev f206c7b3-8fec-4581-9963-16e035e07f9c (Updating mgr deployment (+1 -> 2))
Oct  2 03:47:55 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event f206c7b3-8fec-4581-9963-16e035e07f9c (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  2 03:47:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3d30ae4dcf80382f5c665ebe566077a2d3a7d080f9810001e5c10a48011a1ef3-merged.mount: Deactivated successfully.
Oct  2 03:47:56 np0005465604 podman[82706]: 2025-10-02 07:47:56.032068012 +0000 UTC m=+1.370780431 container remove 50f72953402c83f240353878f9a434b1a321a7a63b5e7eec5953c38945c8e32c (image=quay.io/ceph/ceph:v18, name=vigorous_morse, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 03:47:56 np0005465604 systemd[1]: libpod-conmon-50f72953402c83f240353878f9a434b1a321a7a63b5e7eec5953c38945c8e32c.scope: Deactivated successfully.
Oct  2 03:47:56 np0005465604 ceph-mgr[82910]: mgr[py] Loading python module 'alerts'
Oct  2 03:47:56 np0005465604 ceph-mgr[82910]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 03:47:56 np0005465604 ceph-mgr[82910]: mgr[py] Loading python module 'balancer'
Oct  2 03:47:56 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-csaqxv[82906]: 2025-10-02T07:47:56.376+0000 7fc8c9649140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 03:47:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:47:56 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-csaqxv[82906]: 2025-10-02T07:47:56.626+0000 7fc8c9649140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 03:47:56 np0005465604 ceph-mgr[82910]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 03:47:56 np0005465604 ceph-mgr[82910]: mgr[py] Loading python module 'cephadm'
Oct  2 03:47:56 np0005465604 python3[83135]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:47:56 np0005465604 podman[83173]: 2025-10-02 07:47:56.870459642 +0000 UTC m=+0.036362348 container create 711d2e18d3305ca79b58d6da898d21529446c895ff9952e9f973869f15779245 (image=quay.io/ceph/ceph:v18, name=peaceful_moore, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 03:47:56 np0005465604 systemd[1]: Started libpod-conmon-711d2e18d3305ca79b58d6da898d21529446c895ff9952e9f973869f15779245.scope.
Oct  2 03:47:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21001541ed87ebc8f5a013ecc2226932c92d403e3caf86e8910b78bd01ad605/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21001541ed87ebc8f5a013ecc2226932c92d403e3caf86e8910b78bd01ad605/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b21001541ed87ebc8f5a013ecc2226932c92d403e3caf86e8910b78bd01ad605/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:56 np0005465604 podman[83173]: 2025-10-02 07:47:56.854155055 +0000 UTC m=+0.020057781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:56 np0005465604 podman[83173]: 2025-10-02 07:47:56.958269848 +0000 UTC m=+0.124172594 container init 711d2e18d3305ca79b58d6da898d21529446c895ff9952e9f973869f15779245 (image=quay.io/ceph/ceph:v18, name=peaceful_moore, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:56 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1452266908' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  2 03:47:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:56 np0005465604 podman[83173]: 2025-10-02 07:47:56.964927368 +0000 UTC m=+0.130830084 container start 711d2e18d3305ca79b58d6da898d21529446c895ff9952e9f973869f15779245 (image=quay.io/ceph/ceph:v18, name=peaceful_moore, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct  2 03:47:56 np0005465604 podman[83208]: 2025-10-02 07:47:56.974889075 +0000 UTC m=+0.059719276 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:56 np0005465604 podman[83173]: 2025-10-02 07:47:56.968871985 +0000 UTC m=+0.134774701 container attach 711d2e18d3305ca79b58d6da898d21529446c895ff9952e9f973869f15779245 (image=quay.io/ceph/ceph:v18, name=peaceful_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:57 np0005465604 podman[83208]: 2025-10-02 07:47:57.071623098 +0000 UTC m=+0.156453319 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ee63b312-1042-4d90-ba97-c7289d44b6c0 does not exist
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ecafbc5e-987e-4424-b507-75e5f30708de does not exist
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 095683c4-9406-4fe5-8736-491ea56be595 does not exist
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [progress INFO root] Writing back 2 completed events
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  2 03:47:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Added host compute-0
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 peaceful_moore[83210]: Added host 'compute-0' with addr '192.168.122.100'
Oct  2 03:47:58 np0005465604 peaceful_moore[83210]: Scheduled mon update...
Oct  2 03:47:58 np0005465604 peaceful_moore[83210]: Scheduled mgr update...
Oct  2 03:47:58 np0005465604 peaceful_moore[83210]: Scheduled osd.default_drive_group update...
Oct  2 03:47:58 np0005465604 systemd[1]: libpod-711d2e18d3305ca79b58d6da898d21529446c895ff9952e9f973869f15779245.scope: Deactivated successfully.
Oct  2 03:47:58 np0005465604 podman[83173]: 2025-10-02 07:47:58.086200146 +0000 UTC m=+1.252102892 container died 711d2e18d3305ca79b58d6da898d21529446c895ff9952e9f973869f15779245 (image=quay.io/ceph/ceph:v18, name=peaceful_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 03:47:58 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b21001541ed87ebc8f5a013ecc2226932c92d403e3caf86e8910b78bd01ad605-merged.mount: Deactivated successfully.
Oct  2 03:47:58 np0005465604 podman[83173]: 2025-10-02 07:47:58.139401957 +0000 UTC m=+1.305304653 container remove 711d2e18d3305ca79b58d6da898d21529446c895ff9952e9f973869f15779245 (image=quay.io/ceph/ceph:v18, name=peaceful_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:58 np0005465604 systemd[1]: libpod-conmon-711d2e18d3305ca79b58d6da898d21529446c895ff9952e9f973869f15779245.scope: Deactivated successfully.
Oct  2 03:47:58 np0005465604 podman[83626]: 2025-10-02 07:47:58.169064595 +0000 UTC m=+0.038674188 container create c085a7c5523c014e31afe81f150fafbb4d3a19cc95f8a1d076d72d936b4bab61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 03:47:58 np0005465604 systemd[1]: Started libpod-conmon-c085a7c5523c014e31afe81f150fafbb4d3a19cc95f8a1d076d72d936b4bab61.scope.
Oct  2 03:47:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:58 np0005465604 podman[83626]: 2025-10-02 07:47:58.245562362 +0000 UTC m=+0.115172015 container init c085a7c5523c014e31afe81f150fafbb4d3a19cc95f8a1d076d72d936b4bab61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:58 np0005465604 podman[83626]: 2025-10-02 07:47:58.152230711 +0000 UTC m=+0.021840314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:47:58 np0005465604 podman[83626]: 2025-10-02 07:47:58.253281793 +0000 UTC m=+0.122891386 container start c085a7c5523c014e31afe81f150fafbb4d3a19cc95f8a1d076d72d936b4bab61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:58 np0005465604 podman[83626]: 2025-10-02 07:47:58.256301193 +0000 UTC m=+0.125910906 container attach c085a7c5523c014e31afe81f150fafbb4d3a19cc95f8a1d076d72d936b4bab61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:47:58 np0005465604 silly_jepsen[83642]: 167 167
Oct  2 03:47:58 np0005465604 systemd[1]: libpod-c085a7c5523c014e31afe81f150fafbb4d3a19cc95f8a1d076d72d936b4bab61.scope: Deactivated successfully.
Oct  2 03:47:58 np0005465604 conmon[83642]: conmon c085a7c5523c014e31af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c085a7c5523c014e31afe81f150fafbb4d3a19cc95f8a1d076d72d936b4bab61.scope/container/memory.events
Oct  2 03:47:58 np0005465604 podman[83626]: 2025-10-02 07:47:58.258025735 +0000 UTC m=+0.127635328 container died c085a7c5523c014e31afe81f150fafbb4d3a19cc95f8a1d076d72d936b4bab61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:47:58 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fb8bb556fa77e82cf23ab105632fc1ac7ec1d0c374234f826215a4f3a4d1c18e-merged.mount: Deactivated successfully.
Oct  2 03:47:58 np0005465604 podman[83626]: 2025-10-02 07:47:58.291234418 +0000 UTC m=+0.160844011 container remove c085a7c5523c014e31afe81f150fafbb4d3a19cc95f8a1d076d72d936b4bab61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 03:47:58 np0005465604 systemd[1]: libpod-conmon-c085a7c5523c014e31afe81f150fafbb4d3a19cc95f8a1d076d72d936b4bab61.scope: Deactivated successfully.
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.qlmhsi (unknown last config time)...
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.qlmhsi (unknown last config time)...
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.qlmhsi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qlmhsi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.qlmhsi on compute-0
Oct  2 03:47:58 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.qlmhsi on compute-0
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: Added host compute-0
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: Saving service mon spec with placement compute-0
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: Saving service mgr spec with placement compute-0
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: Marking host: compute-0 for OSDSpec preview refresh.
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: Saving service osd.default_drive_group spec with placement compute-0
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qlmhsi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 03:47:58 np0005465604 ceph-mgr[82910]: mgr[py] Loading python module 'crash'
Oct  2 03:47:58 np0005465604 python3[83736]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:47:58 np0005465604 podman[83788]: 2025-10-02 07:47:58.755109819 +0000 UTC m=+0.046380468 container create 720a613ceb54fa25d876eea45ddef00ec2945061542e80df3971df7c98af6632 (image=quay.io/ceph/ceph:v18, name=trusting_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:58 np0005465604 systemd[1]: Started libpod-conmon-720a613ceb54fa25d876eea45ddef00ec2945061542e80df3971df7c98af6632.scope.
Oct  2 03:47:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ca55585e2020f89669a1b95a8b56e1c6c4c0fec96058395fdae096bf662b64/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ca55585e2020f89669a1b95a8b56e1c6c4c0fec96058395fdae096bf662b64/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9ca55585e2020f89669a1b95a8b56e1c6c4c0fec96058395fdae096bf662b64/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:47:58 np0005465604 podman[83788]: 2025-10-02 07:47:58.731895245 +0000 UTC m=+0.023165934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:47:58 np0005465604 podman[83788]: 2025-10-02 07:47:58.839898625 +0000 UTC m=+0.131169244 container init 720a613ceb54fa25d876eea45ddef00ec2945061542e80df3971df7c98af6632 (image=quay.io/ceph/ceph:v18, name=trusting_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:47:58 np0005465604 podman[83788]: 2025-10-02 07:47:58.84843635 +0000 UTC m=+0.139706959 container start 720a613ceb54fa25d876eea45ddef00ec2945061542e80df3971df7c98af6632 (image=quay.io/ceph/ceph:v18, name=trusting_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 03:47:58 np0005465604 podman[83788]: 2025-10-02 07:47:58.851854262 +0000 UTC m=+0.143124871 container attach 720a613ceb54fa25d876eea45ddef00ec2945061542e80df3971df7c98af6632 (image=quay.io/ceph/ceph:v18, name=trusting_kalam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 03:47:58 np0005465604 ceph-mgr[82910]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 03:47:58 np0005465604 ceph-mgr[82910]: mgr[py] Loading python module 'dashboard'
Oct  2 03:47:58 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-csaqxv[82906]: 2025-10-02T07:47:58.892+0000 7fc8c9649140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 03:47:58 np0005465604 podman[83822]: 2025-10-02 07:47:58.899692453 +0000 UTC m=+0.045010648 container create 7868f4125e489dac814a8aba6dac228fc612bc74bea8d66fafffecffdfa65f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 03:47:58 np0005465604 systemd[1]: Started libpod-conmon-7868f4125e489dac814a8aba6dac228fc612bc74bea8d66fafffecffdfa65f8f.scope.
Oct  2 03:47:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:47:58 np0005465604 podman[83822]: 2025-10-02 07:47:58.97650844 +0000 UTC m=+0.121826705 container init 7868f4125e489dac814a8aba6dac228fc612bc74bea8d66fafffecffdfa65f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 03:47:58 np0005465604 podman[83822]: 2025-10-02 07:47:58.883841928 +0000 UTC m=+0.029160143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:47:58 np0005465604 podman[83822]: 2025-10-02 07:47:58.983372775 +0000 UTC m=+0.128691000 container start 7868f4125e489dac814a8aba6dac228fc612bc74bea8d66fafffecffdfa65f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:47:58 np0005465604 podman[83822]: 2025-10-02 07:47:58.986749106 +0000 UTC m=+0.132067391 container attach 7868f4125e489dac814a8aba6dac228fc612bc74bea8d66fafffecffdfa65f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:47:58 np0005465604 admiring_elgamal[83839]: 167 167
Oct  2 03:47:58 np0005465604 systemd[1]: libpod-7868f4125e489dac814a8aba6dac228fc612bc74bea8d66fafffecffdfa65f8f.scope: Deactivated successfully.
Oct  2 03:47:58 np0005465604 podman[83822]: 2025-10-02 07:47:58.988504738 +0000 UTC m=+0.133822943 container died 7868f4125e489dac814a8aba6dac228fc612bc74bea8d66fafffecffdfa65f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 03:47:59 np0005465604 podman[83822]: 2025-10-02 07:47:59.03303605 +0000 UTC m=+0.178354285 container remove 7868f4125e489dac814a8aba6dac228fc612bc74bea8d66fafffecffdfa65f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 03:47:59 np0005465604 systemd[1]: libpod-conmon-7868f4125e489dac814a8aba6dac228fc612bc74bea8d66fafffecffdfa65f8f.scope: Deactivated successfully.
Oct  2 03:47:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:47:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:47:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:47:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay-092190fa813190dc384965eff040bc18c97d6a4fea4fc5284fcef4cb7004d2a8-merged.mount: Deactivated successfully.
Oct  2 03:47:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  2 03:47:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2879769527' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  2 03:47:59 np0005465604 trusting_kalam[83815]: 
Oct  2 03:47:59 np0005465604 trusting_kalam[83815]: {"fsid":"a52e644f-f702-594c-a648-813e3e0df2b1","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":77,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-02T07:46:38.638077+0000","services":{}},"progress_events":{}}
Oct  2 03:47:59 np0005465604 systemd[1]: libpod-720a613ceb54fa25d876eea45ddef00ec2945061542e80df3971df7c98af6632.scope: Deactivated successfully.
Oct  2 03:47:59 np0005465604 podman[83788]: 2025-10-02 07:47:59.475615004 +0000 UTC m=+0.766885623 container died 720a613ceb54fa25d876eea45ddef00ec2945061542e80df3971df7c98af6632 (image=quay.io/ceph/ceph:v18, name=trusting_kalam, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 03:47:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a9ca55585e2020f89669a1b95a8b56e1c6c4c0fec96058395fdae096bf662b64-merged.mount: Deactivated successfully.
Oct  2 03:47:59 np0005465604 podman[83788]: 2025-10-02 07:47:59.523818436 +0000 UTC m=+0.815089065 container remove 720a613ceb54fa25d876eea45ddef00ec2945061542e80df3971df7c98af6632 (image=quay.io/ceph/ceph:v18, name=trusting_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 03:47:59 np0005465604 systemd[1]: libpod-conmon-720a613ceb54fa25d876eea45ddef00ec2945061542e80df3971df7c98af6632.scope: Deactivated successfully.
Oct  2 03:47:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:47:59 np0005465604 podman[84061]: 2025-10-02 07:47:59.823873568 +0000 UTC m=+0.056082288 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:47:59 np0005465604 podman[84061]: 2025-10-02 07:47:59.927059673 +0000 UTC m=+0.159268373 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: Reconfiguring mgr.compute-0.qlmhsi (unknown last config time)...
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: Reconfiguring daemon mgr.compute-0.qlmhsi on compute-0
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 60cc5c82-abc4-4930-a396-7c81470d2a2e does not exist
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  2 03:48:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:00 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev c275b19a-8f08-451c-bf1c-76abcb5a1e12 (Updating mgr deployment (-1 -> 1))
Oct  2 03:48:00 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.csaqxv from compute-0 -- ports [8765]
Oct  2 03:48:00 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.csaqxv from compute-0 -- ports [8765]
Oct  2 03:48:00 np0005465604 ceph-mgr[82910]: mgr[py] Loading python module 'devicehealth'
Oct  2 03:48:00 np0005465604 ceph-mgr[82910]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 03:48:00 np0005465604 ceph-mgr[82910]: mgr[py] Loading python module 'diskprediction_local'
Oct  2 03:48:00 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-csaqxv[82906]: 2025-10-02T07:48:00.565+0000 7fc8c9649140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 03:48:00 np0005465604 systemd[1]: Stopping Ceph mgr.compute-0.csaqxv for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:48:00 np0005465604 podman[84318]: 2025-10-02 07:48:00.971572677 +0000 UTC m=+0.086136076 container died 15c6b63cbfa13452ddcd5625cb5dc7226a176775331ecd6b145bbf1f036cec94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-csaqxv, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c258ed46d679224767ae0ab4f058171de65c94f0cc4e541b9813bdb67b9c861d-merged.mount: Deactivated successfully.
Oct  2 03:48:01 np0005465604 podman[84318]: 2025-10-02 07:48:01.020365766 +0000 UTC m=+0.134929145 container remove 15c6b63cbfa13452ddcd5625cb5dc7226a176775331ecd6b145bbf1f036cec94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-csaqxv, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 03:48:01 np0005465604 bash[84318]: ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-csaqxv
Oct  2 03:48:01 np0005465604 systemd[1]: ceph-a52e644f-f702-594c-a648-813e3e0df2b1@mgr.compute-0.csaqxv.service: Main process exited, code=exited, status=143/n/a
Oct  2 03:48:01 np0005465604 systemd[1]: ceph-a52e644f-f702-594c-a648-813e3e0df2b1@mgr.compute-0.csaqxv.service: Failed with result 'exit-code'.
Oct  2 03:48:01 np0005465604 systemd[1]: Stopped Ceph mgr.compute-0.csaqxv for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:48:01 np0005465604 systemd[1]: ceph-a52e644f-f702-594c-a648-813e3e0df2b1@mgr.compute-0.csaqxv.service: Consumed 6.004s CPU time.
Oct  2 03:48:01 np0005465604 systemd[1]: Reloading.
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: Removing daemon mgr.compute-0.csaqxv from compute-0 -- ports [8765]
Oct  2 03:48:01 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:48:01 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:48:01 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.csaqxv
Oct  2 03:48:01 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.csaqxv
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.csaqxv"} v 0) v1
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.csaqxv"}]: dispatch
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.csaqxv"}]': finished
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:01 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev c275b19a-8f08-451c-bf1c-76abcb5a1e12 (Updating mgr deployment (-1 -> 1))
Oct  2 03:48:01 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event c275b19a-8f08-451c-bf1c-76abcb5a1e12 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:01 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 34de07a7-738b-498f-8e2a-86f62a2f5e35 does not exist
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:48:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:02 np0005465604 podman[84554]: 2025-10-02 07:48:02.197160176 +0000 UTC m=+0.068806378 container create 697956eaf80ebcb5980ceb56cadd017dbf527b33d81d43766b33dd4d5f7835c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 03:48:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.csaqxv"}]: dispatch
Oct  2 03:48:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.csaqxv"}]': finished
Oct  2 03:48:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:48:02 np0005465604 systemd[1]: Started libpod-conmon-697956eaf80ebcb5980ceb56cadd017dbf527b33d81d43766b33dd4d5f7835c9.scope.
Oct  2 03:48:02 np0005465604 podman[84554]: 2025-10-02 07:48:02.168920871 +0000 UTC m=+0.040567133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:02 np0005465604 podman[84554]: 2025-10-02 07:48:02.304087293 +0000 UTC m=+0.175733545 container init 697956eaf80ebcb5980ceb56cadd017dbf527b33d81d43766b33dd4d5f7835c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 03:48:02 np0005465604 podman[84554]: 2025-10-02 07:48:02.317709601 +0000 UTC m=+0.189355813 container start 697956eaf80ebcb5980ceb56cadd017dbf527b33d81d43766b33dd4d5f7835c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 03:48:02 np0005465604 podman[84554]: 2025-10-02 07:48:02.321783722 +0000 UTC m=+0.193429974 container attach 697956eaf80ebcb5980ceb56cadd017dbf527b33d81d43766b33dd4d5f7835c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 03:48:02 np0005465604 gracious_chatelet[84571]: 167 167
Oct  2 03:48:02 np0005465604 systemd[1]: libpod-697956eaf80ebcb5980ceb56cadd017dbf527b33d81d43766b33dd4d5f7835c9.scope: Deactivated successfully.
Oct  2 03:48:02 np0005465604 podman[84554]: 2025-10-02 07:48:02.329833953 +0000 UTC m=+0.201480165 container died 697956eaf80ebcb5980ceb56cadd017dbf527b33d81d43766b33dd4d5f7835c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 03:48:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a44f00445e704b74a7a4889c74b325ec1076c81675042ac6ce3bb24a00f1c456-merged.mount: Deactivated successfully.
Oct  2 03:48:02 np0005465604 podman[84554]: 2025-10-02 07:48:02.381374784 +0000 UTC m=+0.253020956 container remove 697956eaf80ebcb5980ceb56cadd017dbf527b33d81d43766b33dd4d5f7835c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:02 np0005465604 systemd[1]: libpod-conmon-697956eaf80ebcb5980ceb56cadd017dbf527b33d81d43766b33dd4d5f7835c9.scope: Deactivated successfully.
Oct  2 03:48:02 np0005465604 podman[84595]: 2025-10-02 07:48:02.633439242 +0000 UTC m=+0.078089836 container create 42ab34aa1f4c19fb6ff69365829191a76b645e788f8f93041fe491766aab9d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 03:48:02 np0005465604 systemd[1]: Started libpod-conmon-42ab34aa1f4c19fb6ff69365829191a76b645e788f8f93041fe491766aab9d16.scope.
Oct  2 03:48:02 np0005465604 podman[84595]: 2025-10-02 07:48:02.60361427 +0000 UTC m=+0.048264954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796e90f20a8a509bdf7e9d6145c64fdc5ae3f8af62be7e55a51c2ec61ad73360/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796e90f20a8a509bdf7e9d6145c64fdc5ae3f8af62be7e55a51c2ec61ad73360/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796e90f20a8a509bdf7e9d6145c64fdc5ae3f8af62be7e55a51c2ec61ad73360/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796e90f20a8a509bdf7e9d6145c64fdc5ae3f8af62be7e55a51c2ec61ad73360/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796e90f20a8a509bdf7e9d6145c64fdc5ae3f8af62be7e55a51c2ec61ad73360/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:02 np0005465604 podman[84595]: 2025-10-02 07:48:02.734983179 +0000 UTC m=+0.179633843 container init 42ab34aa1f4c19fb6ff69365829191a76b645e788f8f93041fe491766aab9d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:02 np0005465604 podman[84595]: 2025-10-02 07:48:02.744990317 +0000 UTC m=+0.189640941 container start 42ab34aa1f4c19fb6ff69365829191a76b645e788f8f93041fe491766aab9d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:02 np0005465604 podman[84595]: 2025-10-02 07:48:02.74944936 +0000 UTC m=+0.194100034 container attach 42ab34aa1f4c19fb6ff69365829191a76b645e788f8f93041fe491766aab9d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meninsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:02 np0005465604 ceph-mgr[74774]: [progress INFO root] Writing back 3 completed events
Oct  2 03:48:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  2 03:48:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:03 np0005465604 ceph-mon[74477]: Removing key for mgr.compute-0.csaqxv
Oct  2 03:48:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:03 np0005465604 sweet_meninsky[84611]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:48:03 np0005465604 sweet_meninsky[84611]: --> relative data size: 1.0
Oct  2 03:48:03 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 03:48:03 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f14c1e76-9e34-46aa-9e3c-f0bae5378cc0
Oct  2 03:48:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0"} v 0) v1
Oct  2 03:48:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2960116074' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0"}]: dispatch
Oct  2 03:48:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct  2 03:48:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  2 03:48:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2960116074' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0"}]': finished
Oct  2 03:48:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct  2 03:48:04 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct  2 03:48:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  2 03:48:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  2 03:48:04 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  2 03:48:04 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 03:48:04 np0005465604 lvm[84673]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 03:48:04 np0005465604 lvm[84673]: VG ceph_vg0 finished
Oct  2 03:48:04 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct  2 03:48:04 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  2 03:48:04 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 03:48:04 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:04 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct  2 03:48:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  2 03:48:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767252467' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  2 03:48:04 np0005465604 sweet_meninsky[84611]: stderr: got monmap epoch 1
Oct  2 03:48:04 np0005465604 sweet_meninsky[84611]: --> Creating keyring file for osd.0
Oct  2 03:48:04 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct  2 03:48:05 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct  2 03:48:05 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid f14c1e76-9e34-46aa-9e3c-f0bae5378cc0 --setuser ceph --setgroup ceph
Oct  2 03:48:05 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/2960116074' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0"}]: dispatch
Oct  2 03:48:05 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/2960116074' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0"}]': finished
Oct  2 03:48:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:06 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  2 03:48:06 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  2 03:48:06 np0005465604 ceph-mon[74477]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  2 03:48:06 np0005465604 ceph-mon[74477]: Cluster is now healthy
Oct  2 03:48:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:05.051+0000 7f043e1c9740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:05.051+0000 7f043e1c9740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:05.051+0000 7f043e1c9740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:05.052+0000 7f043e1c9740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 03:48:07 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8ecdfa53-c8d8-401c-b12f-ba8d091f39fe
Oct  2 03:48:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe"} v 0) v1
Oct  2 03:48:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3567269093' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe"}]: dispatch
Oct  2 03:48:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct  2 03:48:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  2 03:48:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3567269093' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe"}]': finished
Oct  2 03:48:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct  2 03:48:07 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct  2 03:48:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  2 03:48:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  2 03:48:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:07 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  2 03:48:07 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  2 03:48:08 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 03:48:08 np0005465604 lvm[85624]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  2 03:48:08 np0005465604 lvm[85624]: VG ceph_vg1 finished
Oct  2 03:48:08 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct  2 03:48:08 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct  2 03:48:08 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  2 03:48:08 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:08 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct  2 03:48:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  2 03:48:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856755351' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  2 03:48:08 np0005465604 sweet_meninsky[84611]: stderr: got monmap epoch 1
Oct  2 03:48:08 np0005465604 sweet_meninsky[84611]: --> Creating keyring file for osd.1
Oct  2 03:48:08 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct  2 03:48:08 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct  2 03:48:08 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 8ecdfa53-c8d8-401c-b12f-ba8d091f39fe --setuser ceph --setgroup ceph
Oct  2 03:48:08 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3567269093' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe"}]: dispatch
Oct  2 03:48:08 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3567269093' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe"}]': finished
Oct  2 03:48:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:08.568+0000 7f372dea1740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:08.568+0000 7f372dea1740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:08.568+0000 7f372dea1740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:08.568+0000 7f372dea1740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: --> ceph-volume lvm activate successful for osd ID: 1
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 03:48:10 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8e617210-aec3-4316-bc5c-58c501c21dd7
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7"} v 0) v1
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1212815527' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7"}]: dispatch
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1212815527' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7"}]': finished
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:11 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:11 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  2 03:48:11 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:11 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 03:48:11 np0005465604 lvm[86575]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  2 03:48:11 np0005465604 lvm[86575]: VG ceph_vg2 finished
Oct  2 03:48:11 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct  2 03:48:11 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct  2 03:48:11 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  2 03:48:11 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:11 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1243980339' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  2 03:48:11 np0005465604 sweet_meninsky[84611]: stderr: got monmap epoch 1
Oct  2 03:48:11 np0005465604 sweet_meninsky[84611]: --> Creating keyring file for osd.2
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1212815527' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7"}]: dispatch
Oct  2 03:48:11 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1212815527' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7"}]': finished
Oct  2 03:48:11 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct  2 03:48:11 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct  2 03:48:11 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 8e617210-aec3-4316-bc5c-58c501c21dd7 --setuser ceph --setgroup ceph
Oct  2 03:48:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:11.949+0000 7fd7bb2cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:11.949+0000 7fd7bb2cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:11.949+0000 7fd7bb2cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: stderr: 2025-10-02T07:48:11.950+0000 7fd7bb2cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: --> ceph-volume lvm activate successful for osd ID: 2
Oct  2 03:48:14 np0005465604 sweet_meninsky[84611]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct  2 03:48:14 np0005465604 systemd[1]: libpod-42ab34aa1f4c19fb6ff69365829191a76b645e788f8f93041fe491766aab9d16.scope: Deactivated successfully.
Oct  2 03:48:14 np0005465604 systemd[1]: libpod-42ab34aa1f4c19fb6ff69365829191a76b645e788f8f93041fe491766aab9d16.scope: Consumed 6.068s CPU time.
Oct  2 03:48:14 np0005465604 podman[84595]: 2025-10-02 07:48:14.294604204 +0000 UTC m=+11.739254798 container died 42ab34aa1f4c19fb6ff69365829191a76b645e788f8f93041fe491766aab9d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-796e90f20a8a509bdf7e9d6145c64fdc5ae3f8af62be7e55a51c2ec61ad73360-merged.mount: Deactivated successfully.
Oct  2 03:48:14 np0005465604 podman[84595]: 2025-10-02 07:48:14.352701145 +0000 UTC m=+11.797351729 container remove 42ab34aa1f4c19fb6ff69365829191a76b645e788f8f93041fe491766aab9d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 03:48:14 np0005465604 systemd[1]: libpod-conmon-42ab34aa1f4c19fb6ff69365829191a76b645e788f8f93041fe491766aab9d16.scope: Deactivated successfully.
Oct  2 03:48:14 np0005465604 podman[87648]: 2025-10-02 07:48:14.944332883 +0000 UTC m=+0.041493471 container create e8591af27051cc61040e5f53c2cb025208b265973db3d040d304fd5e488a6dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 03:48:14 np0005465604 systemd[1]: Started libpod-conmon-e8591af27051cc61040e5f53c2cb025208b265973db3d040d304fd5e488a6dbb.scope.
Oct  2 03:48:15 np0005465604 podman[87648]: 2025-10-02 07:48:14.925242695 +0000 UTC m=+0.022403273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:15 np0005465604 podman[87648]: 2025-10-02 07:48:15.048620241 +0000 UTC m=+0.145780869 container init e8591af27051cc61040e5f53c2cb025208b265973db3d040d304fd5e488a6dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 03:48:15 np0005465604 podman[87648]: 2025-10-02 07:48:15.055544738 +0000 UTC m=+0.152705316 container start e8591af27051cc61040e5f53c2cb025208b265973db3d040d304fd5e488a6dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 03:48:15 np0005465604 podman[87648]: 2025-10-02 07:48:15.059472081 +0000 UTC m=+0.156632709 container attach e8591af27051cc61040e5f53c2cb025208b265973db3d040d304fd5e488a6dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 03:48:15 np0005465604 recursing_pare[87664]: 167 167
Oct  2 03:48:15 np0005465604 systemd[1]: libpod-e8591af27051cc61040e5f53c2cb025208b265973db3d040d304fd5e488a6dbb.scope: Deactivated successfully.
Oct  2 03:48:15 np0005465604 conmon[87664]: conmon e8591af27051cc61040e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8591af27051cc61040e5f53c2cb025208b265973db3d040d304fd5e488a6dbb.scope/container/memory.events
Oct  2 03:48:15 np0005465604 podman[87648]: 2025-10-02 07:48:15.061893627 +0000 UTC m=+0.159054195 container died e8591af27051cc61040e5f53c2cb025208b265973db3d040d304fd5e488a6dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b935ea1d0617994abf0da917f365115bb025c322af0354cc010b4751f7491967-merged.mount: Deactivated successfully.
Oct  2 03:48:15 np0005465604 podman[87648]: 2025-10-02 07:48:15.096238653 +0000 UTC m=+0.193399211 container remove e8591af27051cc61040e5f53c2cb025208b265973db3d040d304fd5e488a6dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pare, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:15 np0005465604 systemd[1]: libpod-conmon-e8591af27051cc61040e5f53c2cb025208b265973db3d040d304fd5e488a6dbb.scope: Deactivated successfully.
Oct  2 03:48:15 np0005465604 podman[87688]: 2025-10-02 07:48:15.3080507 +0000 UTC m=+0.064688198 container create 6b8c9b3041b3393c26f22de1ab95d75c0ed3e6935ad0142bd8b59b09035ebd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 03:48:15 np0005465604 systemd[1]: Started libpod-conmon-6b8c9b3041b3393c26f22de1ab95d75c0ed3e6935ad0142bd8b59b09035ebd51.scope.
Oct  2 03:48:15 np0005465604 podman[87688]: 2025-10-02 07:48:15.285866155 +0000 UTC m=+0.042503663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9370f717cf5048a57d46839783b8c97548b2f69dbb2c221bfdf11e3edfba351f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9370f717cf5048a57d46839783b8c97548b2f69dbb2c221bfdf11e3edfba351f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9370f717cf5048a57d46839783b8c97548b2f69dbb2c221bfdf11e3edfba351f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9370f717cf5048a57d46839783b8c97548b2f69dbb2c221bfdf11e3edfba351f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:15 np0005465604 podman[87688]: 2025-10-02 07:48:15.418649626 +0000 UTC m=+0.175287184 container init 6b8c9b3041b3393c26f22de1ab95d75c0ed3e6935ad0142bd8b59b09035ebd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:15 np0005465604 podman[87688]: 2025-10-02 07:48:15.430537529 +0000 UTC m=+0.187174997 container start 6b8c9b3041b3393c26f22de1ab95d75c0ed3e6935ad0142bd8b59b09035ebd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 03:48:15 np0005465604 podman[87688]: 2025-10-02 07:48:15.434526653 +0000 UTC m=+0.191164201 container attach 6b8c9b3041b3393c26f22de1ab95d75c0ed3e6935ad0142bd8b59b09035ebd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]: {
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:    "0": [
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:        {
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "devices": [
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "/dev/loop3"
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            ],
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_name": "ceph_lv0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_size": "21470642176",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "name": "ceph_lv0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "tags": {
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.cluster_name": "ceph",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.crush_device_class": "",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.encrypted": "0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.osd_id": "0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.type": "block",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.vdo": "0"
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            },
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "type": "block",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "vg_name": "ceph_vg0"
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:        }
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:    ],
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:    "1": [
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:        {
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "devices": [
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "/dev/loop4"
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            ],
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_name": "ceph_lv1",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_size": "21470642176",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "name": "ceph_lv1",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "tags": {
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.cluster_name": "ceph",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.crush_device_class": "",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.encrypted": "0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.osd_id": "1",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.type": "block",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.vdo": "0"
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            },
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "type": "block",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "vg_name": "ceph_vg1"
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:        }
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:    ],
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:    "2": [
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:        {
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "devices": [
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "/dev/loop5"
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            ],
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_name": "ceph_lv2",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_size": "21470642176",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "name": "ceph_lv2",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "tags": {
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.cluster_name": "ceph",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.crush_device_class": "",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.encrypted": "0",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.osd_id": "2",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.type": "block",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:                "ceph.vdo": "0"
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            },
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "type": "block",
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:            "vg_name": "ceph_vg2"
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:        }
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]:    ]
Oct  2 03:48:16 np0005465604 epic_leavitt[87704]: }
Oct  2 03:48:16 np0005465604 systemd[1]: libpod-6b8c9b3041b3393c26f22de1ab95d75c0ed3e6935ad0142bd8b59b09035ebd51.scope: Deactivated successfully.
Oct  2 03:48:16 np0005465604 podman[87688]: 2025-10-02 07:48:16.234418587 +0000 UTC m=+0.991056055 container died 6b8c9b3041b3393c26f22de1ab95d75c0ed3e6935ad0142bd8b59b09035ebd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9370f717cf5048a57d46839783b8c97548b2f69dbb2c221bfdf11e3edfba351f-merged.mount: Deactivated successfully.
Oct  2 03:48:16 np0005465604 podman[87688]: 2025-10-02 07:48:16.31590079 +0000 UTC m=+1.072538258 container remove 6b8c9b3041b3393c26f22de1ab95d75c0ed3e6935ad0142bd8b59b09035ebd51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 03:48:16 np0005465604 systemd[1]: libpod-conmon-6b8c9b3041b3393c26f22de1ab95d75c0ed3e6935ad0142bd8b59b09035ebd51.scope: Deactivated successfully.
Oct  2 03:48:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct  2 03:48:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  2 03:48:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:48:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:48:16 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct  2 03:48:16 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct  2 03:48:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:16 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  2 03:48:16 np0005465604 podman[87866]: 2025-10-02 07:48:16.915037414 +0000 UTC m=+0.035159053 container create 9ae263c2875b3c11b853c43a89b304ba03421b98dbcf11260aa828e743abadd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:16 np0005465604 systemd[1]: Started libpod-conmon-9ae263c2875b3c11b853c43a89b304ba03421b98dbcf11260aa828e743abadd6.scope.
Oct  2 03:48:16 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:16 np0005465604 podman[87866]: 2025-10-02 07:48:16.985396948 +0000 UTC m=+0.105518577 container init 9ae263c2875b3c11b853c43a89b304ba03421b98dbcf11260aa828e743abadd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 03:48:16 np0005465604 podman[87866]: 2025-10-02 07:48:16.991249812 +0000 UTC m=+0.111371431 container start 9ae263c2875b3c11b853c43a89b304ba03421b98dbcf11260aa828e743abadd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 03:48:16 np0005465604 podman[87866]: 2025-10-02 07:48:16.898526196 +0000 UTC m=+0.018647835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:16 np0005465604 podman[87866]: 2025-10-02 07:48:16.994651268 +0000 UTC m=+0.114772887 container attach 9ae263c2875b3c11b853c43a89b304ba03421b98dbcf11260aa828e743abadd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 03:48:16 np0005465604 relaxed_pasteur[87882]: 167 167
Oct  2 03:48:16 np0005465604 systemd[1]: libpod-9ae263c2875b3c11b853c43a89b304ba03421b98dbcf11260aa828e743abadd6.scope: Deactivated successfully.
Oct  2 03:48:16 np0005465604 podman[87866]: 2025-10-02 07:48:16.99662944 +0000 UTC m=+0.116751069 container died 9ae263c2875b3c11b853c43a89b304ba03421b98dbcf11260aa828e743abadd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:48:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d946c0edf2e927789e7934e11bbd5c1c121a3aac0072b82f8b62882b2ef53b72-merged.mount: Deactivated successfully.
Oct  2 03:48:17 np0005465604 podman[87866]: 2025-10-02 07:48:17.027777486 +0000 UTC m=+0.147899105 container remove 9ae263c2875b3c11b853c43a89b304ba03421b98dbcf11260aa828e743abadd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:17 np0005465604 systemd[1]: libpod-conmon-9ae263c2875b3c11b853c43a89b304ba03421b98dbcf11260aa828e743abadd6.scope: Deactivated successfully.
Oct  2 03:48:17 np0005465604 podman[87914]: 2025-10-02 07:48:17.25194413 +0000 UTC m=+0.034603185 container create be70b307770d232d11b53c76da96b88a75010504af612ea43fde69c0c021ab67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:17 np0005465604 systemd[1]: Started libpod-conmon-be70b307770d232d11b53c76da96b88a75010504af612ea43fde69c0c021ab67.scope.
Oct  2 03:48:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2498673104ace8dbea25303f2bb8e7b86694f8e43fa7197dfb02d852bcbf06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2498673104ace8dbea25303f2bb8e7b86694f8e43fa7197dfb02d852bcbf06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2498673104ace8dbea25303f2bb8e7b86694f8e43fa7197dfb02d852bcbf06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2498673104ace8dbea25303f2bb8e7b86694f8e43fa7197dfb02d852bcbf06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2498673104ace8dbea25303f2bb8e7b86694f8e43fa7197dfb02d852bcbf06/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:17 np0005465604 podman[87914]: 2025-10-02 07:48:17.328669984 +0000 UTC m=+0.111329059 container init be70b307770d232d11b53c76da96b88a75010504af612ea43fde69c0c021ab67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate-test, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:17 np0005465604 podman[87914]: 2025-10-02 07:48:17.237337392 +0000 UTC m=+0.019996467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:17 np0005465604 podman[87914]: 2025-10-02 07:48:17.336969445 +0000 UTC m=+0.119628500 container start be70b307770d232d11b53c76da96b88a75010504af612ea43fde69c0c021ab67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:17 np0005465604 podman[87914]: 2025-10-02 07:48:17.339396821 +0000 UTC m=+0.122055876 container attach be70b307770d232d11b53c76da96b88a75010504af612ea43fde69c0c021ab67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:17 np0005465604 ceph-mon[74477]: Deploying daemon osd.0 on compute-0
Oct  2 03:48:17 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate-test[87930]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  2 03:48:17 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate-test[87930]:                            [--no-systemd] [--no-tmpfs]
Oct  2 03:48:17 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate-test[87930]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  2 03:48:17 np0005465604 systemd[1]: libpod-be70b307770d232d11b53c76da96b88a75010504af612ea43fde69c0c021ab67.scope: Deactivated successfully.
Oct  2 03:48:17 np0005465604 podman[87914]: 2025-10-02 07:48:17.986878808 +0000 UTC m=+0.769537853 container died be70b307770d232d11b53c76da96b88a75010504af612ea43fde69c0c021ab67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 03:48:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7d2498673104ace8dbea25303f2bb8e7b86694f8e43fa7197dfb02d852bcbf06-merged.mount: Deactivated successfully.
Oct  2 03:48:18 np0005465604 podman[87914]: 2025-10-02 07:48:18.053466125 +0000 UTC m=+0.836125180 container remove be70b307770d232d11b53c76da96b88a75010504af612ea43fde69c0c021ab67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate-test, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 03:48:18 np0005465604 systemd[1]: libpod-conmon-be70b307770d232d11b53c76da96b88a75010504af612ea43fde69c0c021ab67.scope: Deactivated successfully.
Oct  2 03:48:18 np0005465604 systemd[1]: Reloading.
Oct  2 03:48:18 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:48:18 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:48:18 np0005465604 systemd[1]: Reloading.
Oct  2 03:48:18 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:48:18 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:48:18 np0005465604 systemd[1]: Starting Ceph osd.0 for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:48:19 np0005465604 podman[88088]: 2025-10-02 07:48:19.229187966 +0000 UTC m=+0.043594458 container create 93781f34c09f83559e2dda2288215ac8964bf011a9991662e57821cebdb21faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:19 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b03b4fc80caaf4d8d138e394a7f18685fac61c3bb9b42e835c3830f68f6dca2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b03b4fc80caaf4d8d138e394a7f18685fac61c3bb9b42e835c3830f68f6dca2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b03b4fc80caaf4d8d138e394a7f18685fac61c3bb9b42e835c3830f68f6dca2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b03b4fc80caaf4d8d138e394a7f18685fac61c3bb9b42e835c3830f68f6dca2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b03b4fc80caaf4d8d138e394a7f18685fac61c3bb9b42e835c3830f68f6dca2/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:19 np0005465604 podman[88088]: 2025-10-02 07:48:19.293345226 +0000 UTC m=+0.107751678 container init 93781f34c09f83559e2dda2288215ac8964bf011a9991662e57821cebdb21faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:19 np0005465604 podman[88088]: 2025-10-02 07:48:19.300003144 +0000 UTC m=+0.114409616 container start 93781f34c09f83559e2dda2288215ac8964bf011a9991662e57821cebdb21faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 03:48:19 np0005465604 podman[88088]: 2025-10-02 07:48:19.303904757 +0000 UTC m=+0.118311229 container attach 93781f34c09f83559e2dda2288215ac8964bf011a9991662e57821cebdb21faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:19 np0005465604 podman[88088]: 2025-10-02 07:48:19.212499422 +0000 UTC m=+0.026905874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:20 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate[88104]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 03:48:20 np0005465604 bash[88088]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 03:48:20 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate[88104]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 03:48:20 np0005465604 bash[88088]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 03:48:20 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate[88104]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 03:48:20 np0005465604 bash[88088]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 03:48:20 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate[88104]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 03:48:20 np0005465604 bash[88088]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 03:48:20 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate[88104]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:20 np0005465604 bash[88088]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:20 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate[88104]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 03:48:20 np0005465604 bash[88088]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 03:48:20 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate[88104]: --> ceph-volume raw activate successful for osd ID: 0
Oct  2 03:48:20 np0005465604 bash[88088]: --> ceph-volume raw activate successful for osd ID: 0
Oct  2 03:48:20 np0005465604 systemd[1]: libpod-93781f34c09f83559e2dda2288215ac8964bf011a9991662e57821cebdb21faa.scope: Deactivated successfully.
Oct  2 03:48:20 np0005465604 podman[88088]: 2025-10-02 07:48:20.38878022 +0000 UTC m=+1.203186672 container died 93781f34c09f83559e2dda2288215ac8964bf011a9991662e57821cebdb21faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 03:48:20 np0005465604 systemd[1]: libpod-93781f34c09f83559e2dda2288215ac8964bf011a9991662e57821cebdb21faa.scope: Consumed 1.104s CPU time.
Oct  2 03:48:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5b03b4fc80caaf4d8d138e394a7f18685fac61c3bb9b42e835c3830f68f6dca2-merged.mount: Deactivated successfully.
Oct  2 03:48:20 np0005465604 podman[88088]: 2025-10-02 07:48:20.43854224 +0000 UTC m=+1.252948692 container remove 93781f34c09f83559e2dda2288215ac8964bf011a9991662e57821cebdb21faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0-activate, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 03:48:20 np0005465604 podman[88297]: 2025-10-02 07:48:20.609550948 +0000 UTC m=+0.031023503 container create 205e9ff0271aab02a750eeebd020752477a7b5ab8f586929ad971c7c74e989dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ac36148c76ce7f7ef6ddb40c9bb2673cbcf35941458a5b0d0f2eec397fb276/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ac36148c76ce7f7ef6ddb40c9bb2673cbcf35941458a5b0d0f2eec397fb276/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ac36148c76ce7f7ef6ddb40c9bb2673cbcf35941458a5b0d0f2eec397fb276/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ac36148c76ce7f7ef6ddb40c9bb2673cbcf35941458a5b0d0f2eec397fb276/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ac36148c76ce7f7ef6ddb40c9bb2673cbcf35941458a5b0d0f2eec397fb276/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:20 np0005465604 podman[88297]: 2025-10-02 07:48:20.661246397 +0000 UTC m=+0.082718952 container init 205e9ff0271aab02a750eeebd020752477a7b5ab8f586929ad971c7c74e989dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:20 np0005465604 podman[88297]: 2025-10-02 07:48:20.666475591 +0000 UTC m=+0.087948156 container start 205e9ff0271aab02a750eeebd020752477a7b5ab8f586929ad971c7c74e989dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:20 np0005465604 bash[88297]: 205e9ff0271aab02a750eeebd020752477a7b5ab8f586929ad971c7c74e989dc
Oct  2 03:48:20 np0005465604 podman[88297]: 2025-10-02 07:48:20.596003743 +0000 UTC m=+0.017476318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:20 np0005465604 systemd[1]: Started Ceph osd.0 for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: pidfile_write: ignore empty --pid-file
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: bdev(0x555e7d265800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: bdev(0x555e7d265800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: bdev(0x555e7d265800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: bdev(0x555e7e09d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: bdev(0x555e7e09d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: bdev(0x555e7e09d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: bdev(0x555e7e09d000 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 03:48:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct  2 03:48:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  2 03:48:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:48:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:48:20 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct  2 03:48:20 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct  2 03:48:20 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:20 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:20 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  2 03:48:20 np0005465604 ceph-osd[88314]: bdev(0x555e7d265800 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: load: jerasure load: lrc 
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e09dc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e09dc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e09dc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e09dc00 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 03:48:21 np0005465604 podman[88469]: 2025-10-02 07:48:21.263639113 +0000 UTC m=+0.051835566 container create bb955fe1830e57ae47b01c0c11bb30000c2086291a609de30a677b337afc87a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:21 np0005465604 systemd[1]: Started libpod-conmon-bb955fe1830e57ae47b01c0c11bb30000c2086291a609de30a677b337afc87a4.scope.
Oct  2 03:48:21 np0005465604 podman[88469]: 2025-10-02 07:48:21.240098955 +0000 UTC m=+0.028295458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:21 np0005465604 podman[88469]: 2025-10-02 07:48:21.350942799 +0000 UTC m=+0.139139232 container init bb955fe1830e57ae47b01c0c11bb30000c2086291a609de30a677b337afc87a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct  2 03:48:21 np0005465604 podman[88469]: 2025-10-02 07:48:21.357771852 +0000 UTC m=+0.145968265 container start bb955fe1830e57ae47b01c0c11bb30000c2086291a609de30a677b337afc87a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_brahmagupta, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 03:48:21 np0005465604 podman[88469]: 2025-10-02 07:48:21.360486288 +0000 UTC m=+0.148682731 container attach bb955fe1830e57ae47b01c0c11bb30000c2086291a609de30a677b337afc87a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 03:48:21 np0005465604 goofy_brahmagupta[88490]: 167 167
Oct  2 03:48:21 np0005465604 systemd[1]: libpod-bb955fe1830e57ae47b01c0c11bb30000c2086291a609de30a677b337afc87a4.scope: Deactivated successfully.
Oct  2 03:48:21 np0005465604 podman[88469]: 2025-10-02 07:48:21.363824763 +0000 UTC m=+0.152021236 container died bb955fe1830e57ae47b01c0c11bb30000c2086291a609de30a677b337afc87a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 03:48:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-29a15166ff507440484c5986abf1ff668c40d832a010e45d03cd6991477b51b1-merged.mount: Deactivated successfully.
Oct  2 03:48:21 np0005465604 podman[88469]: 2025-10-02 07:48:21.407992156 +0000 UTC m=+0.196188599 container remove bb955fe1830e57ae47b01c0c11bb30000c2086291a609de30a677b337afc87a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 03:48:21 np0005465604 systemd[1]: libpod-conmon-bb955fe1830e57ae47b01c0c11bb30000c2086291a609de30a677b337afc87a4.scope: Deactivated successfully.
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e09dc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e09dc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e09dc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e09dc00 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 03:48:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:21 np0005465604 podman[88527]: 2025-10-02 07:48:21.631025835 +0000 UTC m=+0.034806622 container create 562c9b3c6afb38dade807c20f5afbe0d16055358c0507e9497c02ec6c66a32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:21 np0005465604 systemd[1]: Started libpod-conmon-562c9b3c6afb38dade807c20f5afbe0d16055358c0507e9497c02ec6c66a32d0.scope.
Oct  2 03:48:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20714fe0e1bd770f4fd4576b07de41fcdd0bde4607b240c7441b2d222e12bc44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20714fe0e1bd770f4fd4576b07de41fcdd0bde4607b240c7441b2d222e12bc44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20714fe0e1bd770f4fd4576b07de41fcdd0bde4607b240c7441b2d222e12bc44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20714fe0e1bd770f4fd4576b07de41fcdd0bde4607b240c7441b2d222e12bc44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20714fe0e1bd770f4fd4576b07de41fcdd0bde4607b240c7441b2d222e12bc44/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:21 np0005465604 podman[88527]: 2025-10-02 07:48:21.687220866 +0000 UTC m=+0.091001663 container init 562c9b3c6afb38dade807c20f5afbe0d16055358c0507e9497c02ec6c66a32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate-test, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:21 np0005465604 podman[88527]: 2025-10-02 07:48:21.695504935 +0000 UTC m=+0.099285722 container start 562c9b3c6afb38dade807c20f5afbe0d16055358c0507e9497c02ec6c66a32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 03:48:21 np0005465604 podman[88527]: 2025-10-02 07:48:21.698932503 +0000 UTC m=+0.102713320 container attach 562c9b3c6afb38dade807c20f5afbe0d16055358c0507e9497c02ec6c66a32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 03:48:21 np0005465604 podman[88527]: 2025-10-02 07:48:21.617865793 +0000 UTC m=+0.021646590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e09dc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e09dc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e09dc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e288400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e288400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e288400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluefs mount
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluefs mount shared_bdev_used = 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: RocksDB version: 7.9.2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Git sha 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: DB SUMMARY
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: DB Session ID:  HDF6S4CH7UBGSVF5OZ7V
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: CURRENT file:  CURRENT
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                         Options.error_if_exists: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.create_if_missing: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                                     Options.env: 0x555e7e0efd50
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                                Options.info_log: 0x555e7d2f09c0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                              Options.statistics: (nil)
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.use_fsync: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                              Options.db_log_dir: 
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.write_buffer_manager: 0x555e7e202460
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.unordered_write: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.row_cache: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                              Options.wal_filter: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.two_write_queues: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.wal_compression: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.atomic_flush: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.max_background_jobs: 4
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.max_background_compactions: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.max_subcompactions: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.max_open_files: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Compression algorithms supported:
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: #011kZSTD supported: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: #011kXpressCompression supported: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: #011kBZip2Compression supported: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: #011kLZ4Compression supported: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: #011kZlibCompression supported: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: #011kSnappyCompression supported: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f1020)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555e7d2d8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f1020)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555e7d2d8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f1020)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555e7d2d8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f1020)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555e7d2d8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f1020)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555e7d2d8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f1020)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555e7d2d8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f1020)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555e7d2d8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f1040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555e7d2d8430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f1040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555e7d2d8430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f1040)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555e7d2d8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b266b3c3-3507-4d11-8789-b2e93d5894a5
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391301793983, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391301794188, "job": 1, "event": "recovery_finished"}
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: freelist init
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: freelist _read_cfg
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bluefs umount
Oct  2 03:48:21 np0005465604 ceph-osd[88314]: bdev(0x555e7e288400 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 03:48:21 np0005465604 ceph-mon[74477]: Deploying daemon osd.1 on compute-0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: bdev(0x555e7e288400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: bdev(0x555e7e288400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: bdev(0x555e7e288400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: bluefs mount
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: bluefs mount shared_bdev_used = 4718592
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: RocksDB version: 7.9.2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Git sha 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: DB SUMMARY
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: DB Session ID:  HDF6S4CH7UBGSVF5OZ7U
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: CURRENT file:  CURRENT
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                         Options.error_if_exists: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.create_if_missing: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                                     Options.env: 0x555e7e2b0460
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                                Options.info_log: 0x555e7d2f0780
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                              Options.statistics: (nil)
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.use_fsync: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                              Options.db_log_dir: 
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.write_buffer_manager: 0x555e7e202460
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.unordered_write: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.row_cache: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                              Options.wal_filter: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.two_write_queues: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.wal_compression: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.atomic_flush: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.max_background_jobs: 4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.max_background_compactions: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.max_subcompactions: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.max_open_files: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Compression algorithms supported:
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: #011kZSTD supported: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: #011kXpressCompression supported: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: #011kBZip2Compression supported: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: #011kLZ4Compression supported: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: #011kZlibCompression supported: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: #011kSnappyCompression supported: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: DMutex implementation: pthread_mutex_t
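The `#011` and `#012` sequences in these lines are rsyslog's octal escaping of control characters in the daemon's output (`#011` is a tab, `#012` a newline); the `table_factory options:` records below are multi-line dumps flattened into one line by this escaping. A minimal sketch for decoding them back (assuming the standard `#NNN` three-digit octal convention; a literal `#NNN` in log text would be decoded too):

```python
import re

def decode_syslog_escapes(line: str) -> str:
    """Expand rsyslog #NNN octal escapes (e.g. #011 -> tab, #012 -> newline)."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

print(decode_syslog_escapes("rocksdb: #011kZSTD supported: 0"))
```

Piping the journal through such a filter restores the original indented layout of the RocksDB option dump.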
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2edf40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555e7d2d8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
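The level geometry implied by the logged options can be sketched from `max_bytes_for_level_base` (1 GiB), `max_bytes_for_level_multiplier` (8), and `num_levels` (7). Since `level_compaction_dynamic_level_bytes` is 0 here, target sizes grow statically from L1 downward; actual level sizes at runtime will of course deviate:

```python
# Target capacities implied by the logged options (all values from the log):
# L1 = max_bytes_for_level_base, each deeper level scaled by the multiplier.
max_bytes_for_level_base = 1073741824   # 1 GiB
max_bytes_for_level_multiplier = 8.0
num_levels = 7

targets = []
size = max_bytes_for_level_base
for level in range(1, num_levels):
    targets.append(size)
    print(f"L{level}: {size / 2**30:g} GiB")
    size = int(size * max_bytes_for_level_multiplier)
# Prints L1: 1 GiB through L6: 32768 GiB.
```

This is only the static target schedule; it illustrates why the 8x multiplier keeps the tree shallow for the small metadata workloads these OSD column families hold.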
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2edf40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555e7d2d8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2edf40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555e7d2d8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2edf40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555e7d2d8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2edf40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555e7d2d8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2edf40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555e7d2d8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2edf40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555e7d2d8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f0500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x555e7d2d8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f0500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555e7d2d8430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555e7d2f0500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x555e7d2d8430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b266b3c3-3507-4d11-8789-b2e93d5894a5
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391302082229, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391302089481, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391302, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b266b3c3-3507-4d11-8789-b2e93d5894a5", "db_session_id": "HDF6S4CH7UBGSVF5OZ7U", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391302093032, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391302, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b266b3c3-3507-4d11-8789-b2e93d5894a5", "db_session_id": "HDF6S4CH7UBGSVF5OZ7U", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391302096396, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391302, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b266b3c3-3507-4d11-8789-b2e93d5894a5", "db_session_id": "HDF6S4CH7UBGSVF5OZ7U", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391302098239, "job": 1, "event": "recovery_finished"}
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x555e7e2e0000
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: DB pointer 0x555e7d313a00
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: _get_class not permitted to load lua
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: _get_class not permitted to load sdk
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: _get_class not permitted to load test_remote_reads
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: osd.0 0 load_pgs
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: osd.0 0 load_pgs opened 0 pgs
Oct  2 03:48:22 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0[88310]: 2025-10-02T07:48:22.137+0000 7f6c74213740 -1 osd.0 0 log_to_monitors true
Oct  2 03:48:22 np0005465604 ceph-osd[88314]: osd.0 0 log_to_monitors true
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1358321688,v1:192.168.122.100:6803/1358321688]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  2 03:48:22 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate-test[88545]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  2 03:48:22 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate-test[88545]:                            [--no-systemd] [--no-tmpfs]
Oct  2 03:48:22 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate-test[88545]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  2 03:48:22 np0005465604 systemd[1]: libpod-562c9b3c6afb38dade807c20f5afbe0d16055358c0507e9497c02ec6c66a32d0.scope: Deactivated successfully.
Oct  2 03:48:22 np0005465604 podman[88527]: 2025-10-02 07:48:22.348206517 +0000 UTC m=+0.751987294 container died 562c9b3c6afb38dade807c20f5afbe0d16055358c0507e9497c02ec6c66a32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate-test, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 03:48:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-20714fe0e1bd770f4fd4576b07de41fcdd0bde4607b240c7441b2d222e12bc44-merged.mount: Deactivated successfully.
Oct  2 03:48:22 np0005465604 podman[88527]: 2025-10-02 07:48:22.399004509 +0000 UTC m=+0.802785296 container remove 562c9b3c6afb38dade807c20f5afbe0d16055358c0507e9497c02ec6c66a32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:22 np0005465604 systemd[1]: libpod-conmon-562c9b3c6afb38dade807c20f5afbe0d16055358c0507e9497c02ec6c66a32d0.scope: Deactivated successfully.
Oct  2 03:48:22 np0005465604 systemd[1]: Reloading.
Oct  2 03:48:22 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:48:22 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: from='osd.0 [v2:192.168.122.100:6802/1358321688,v1:192.168.122.100:6803/1358321688]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1358321688,v1:192.168.122.100:6803/1358321688]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1358321688,v1:192.168.122.100:6803/1358321688]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  2 03:48:22 np0005465604 systemd[1]: Reloading.
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:22 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  2 03:48:22 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  2 03:48:22 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:22 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:48:22 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:48:23 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  2 03:48:23 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  2 03:48:23 np0005465604 systemd[1]: Starting Ceph osd.1 for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:48:23 np0005465604 podman[89115]: 2025-10-02 07:48:23.409825271 +0000 UTC m=+0.053801876 container create 4548fc5f1094687d4b6344e06492e8f1681df75d69b1283fb31a20deae63fe36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 03:48:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8e3deeb9264a7758a62db821c0820211bff3b8065d197386533008695960a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8e3deeb9264a7758a62db821c0820211bff3b8065d197386533008695960a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8e3deeb9264a7758a62db821c0820211bff3b8065d197386533008695960a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8e3deeb9264a7758a62db821c0820211bff3b8065d197386533008695960a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8e3deeb9264a7758a62db821c0820211bff3b8065d197386533008695960a5/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:23 np0005465604 podman[89115]: 2025-10-02 07:48:23.391017182 +0000 UTC m=+0.034993827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:23 np0005465604 podman[89115]: 2025-10-02 07:48:23.487040622 +0000 UTC m=+0.131017267 container init 4548fc5f1094687d4b6344e06492e8f1681df75d69b1283fb31a20deae63fe36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 03:48:23 np0005465604 podman[89115]: 2025-10-02 07:48:23.494513105 +0000 UTC m=+0.138489720 container start 4548fc5f1094687d4b6344e06492e8f1681df75d69b1283fb31a20deae63fe36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:23 np0005465604 podman[89115]: 2025-10-02 07:48:23.498094538 +0000 UTC m=+0.142071193 container attach 4548fc5f1094687d4b6344e06492e8f1681df75d69b1283fb31a20deae63fe36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1358321688,v1:192.168.122.100:6803/1358321688]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct  2 03:48:23 np0005465604 ceph-osd[88314]: osd.0 0 done with init, starting boot process
Oct  2 03:48:23 np0005465604 ceph-osd[88314]: osd.0 0 start_boot
Oct  2 03:48:23 np0005465604 ceph-osd[88314]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  2 03:48:23 np0005465604 ceph-osd[88314]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  2 03:48:23 np0005465604 ceph-osd[88314]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  2 03:48:23 np0005465604 ceph-osd[88314]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  2 03:48:23 np0005465604 ceph-osd[88314]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:23 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  2 03:48:23 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  2 03:48:23 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: from='osd.0 [v2:192.168.122.100:6802/1358321688,v1:192.168.122.100:6803/1358321688]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: from='osd.0 [v2:192.168.122.100:6802/1358321688,v1:192.168.122.100:6803/1358321688]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  2 03:48:23 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1358321688; not ready for session (expect reconnect)
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  2 03:48:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  2 03:48:23 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  2 03:48:24 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate[89131]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  2 03:48:24 np0005465604 bash[89115]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  2 03:48:24 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate[89131]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct  2 03:48:24 np0005465604 bash[89115]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct  2 03:48:24 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate[89131]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct  2 03:48:24 np0005465604 bash[89115]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct  2 03:48:24 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate[89131]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  2 03:48:24 np0005465604 bash[89115]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  2 03:48:24 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate[89131]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:24 np0005465604 bash[89115]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:24 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate[89131]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  2 03:48:24 np0005465604 bash[89115]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  2 03:48:24 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate[89131]: --> ceph-volume raw activate successful for osd ID: 1
Oct  2 03:48:24 np0005465604 bash[89115]: --> ceph-volume raw activate successful for osd ID: 1
Oct  2 03:48:24 np0005465604 systemd[1]: libpod-4548fc5f1094687d4b6344e06492e8f1681df75d69b1283fb31a20deae63fe36.scope: Deactivated successfully.
Oct  2 03:48:24 np0005465604 systemd[1]: libpod-4548fc5f1094687d4b6344e06492e8f1681df75d69b1283fb31a20deae63fe36.scope: Consumed 1.084s CPU time.
Oct  2 03:48:24 np0005465604 podman[89115]: 2025-10-02 07:48:24.572886375 +0000 UTC m=+1.216862980 container died 4548fc5f1094687d4b6344e06492e8f1681df75d69b1283fb31a20deae63fe36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 03:48:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-cd8e3deeb9264a7758a62db821c0820211bff3b8065d197386533008695960a5-merged.mount: Deactivated successfully.
Oct  2 03:48:24 np0005465604 podman[89115]: 2025-10-02 07:48:24.667365966 +0000 UTC m=+1.311342581 container remove 4548fc5f1094687d4b6344e06492e8f1681df75d69b1283fb31a20deae63fe36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1-activate, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:24 np0005465604 podman[89302]: 2025-10-02 07:48:24.902219484 +0000 UTC m=+0.045734463 container create 05e9e96261906ae8a75e5c188329eb80b13700efb0f3f3e9dba092e08d21be5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 03:48:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1207ea47e0c68c3a588400806ed60a497a8121dbd2c4321a1a6e8d3b46662d9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1207ea47e0c68c3a588400806ed60a497a8121dbd2c4321a1a6e8d3b46662d9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1207ea47e0c68c3a588400806ed60a497a8121dbd2c4321a1a6e8d3b46662d9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1207ea47e0c68c3a588400806ed60a497a8121dbd2c4321a1a6e8d3b46662d9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1207ea47e0c68c3a588400806ed60a497a8121dbd2c4321a1a6e8d3b46662d9a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:24 np0005465604 podman[89302]: 2025-10-02 07:48:24.957220128 +0000 UTC m=+0.100735127 container init 05e9e96261906ae8a75e5c188329eb80b13700efb0f3f3e9dba092e08d21be5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:48:24 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1358321688; not ready for session (expect reconnect)
Oct  2 03:48:24 np0005465604 podman[89302]: 2025-10-02 07:48:24.964321341 +0000 UTC m=+0.107836320 container start 05e9e96261906ae8a75e5c188329eb80b13700efb0f3f3e9dba092e08d21be5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:24 np0005465604 bash[89302]: 05e9e96261906ae8a75e5c188329eb80b13700efb0f3f3e9dba092e08d21be5d
Oct  2 03:48:24 np0005465604 podman[89302]: 2025-10-02 07:48:24.880844935 +0000 UTC m=+0.024359934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  2 03:48:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  2 03:48:24 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  2 03:48:24 np0005465604 ceph-mon[74477]: from='osd.0 [v2:192.168.122.100:6802/1358321688,v1:192.168.122.100:6803/1358321688]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  2 03:48:24 np0005465604 systemd[1]: Started Ceph osd.1 for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: pidfile_write: ignore empty --pid-file
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9df87800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9df87800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9df87800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9edbf800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9edbf800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9edbf800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9edbf800 /var/lib/ceph/osd/ceph-1/block) close
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:48:25 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct  2 03:48:25 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9df87800 /var/lib/ceph/osd/ceph-1/block) close
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: load: jerasure load: lrc 
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9ee40c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9ee40c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9ee40c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9ee40c00 /var/lib/ceph/osd/ceph-1/block) close
Oct  2 03:48:25 np0005465604 podman[89480]: 2025-10-02 07:48:25.653028741 +0000 UTC m=+0.046180379 container create 9cd9ba01440081b5e80f0c86e11c854463913c05c65865125b6fd4c020782862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:25 np0005465604 systemd[1]: Started libpod-conmon-9cd9ba01440081b5e80f0c86e11c854463913c05c65865125b6fd4c020782862.scope.
Oct  2 03:48:25 np0005465604 podman[89480]: 2025-10-02 07:48:25.631578638 +0000 UTC m=+0.024730296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:25 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:25 np0005465604 podman[89480]: 2025-10-02 07:48:25.760810498 +0000 UTC m=+0.153962226 container init 9cd9ba01440081b5e80f0c86e11c854463913c05c65865125b6fd4c020782862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 03:48:25 np0005465604 podman[89480]: 2025-10-02 07:48:25.768957083 +0000 UTC m=+0.162108721 container start 9cd9ba01440081b5e80f0c86e11c854463913c05c65865125b6fd4c020782862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 03:48:25 np0005465604 podman[89480]: 2025-10-02 07:48:25.773908638 +0000 UTC m=+0.167060356 container attach 9cd9ba01440081b5e80f0c86e11c854463913c05c65865125b6fd4c020782862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 03:48:25 np0005465604 naughty_lewin[89496]: 167 167
Oct  2 03:48:25 np0005465604 systemd[1]: libpod-9cd9ba01440081b5e80f0c86e11c854463913c05c65865125b6fd4c020782862.scope: Deactivated successfully.
Oct  2 03:48:25 np0005465604 conmon[89496]: conmon 9cd9ba01440081b5e80f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9cd9ba01440081b5e80f0c86e11c854463913c05c65865125b6fd4c020782862.scope/container/memory.events
Oct  2 03:48:25 np0005465604 podman[89480]: 2025-10-02 07:48:25.779782852 +0000 UTC m=+0.172934520 container died 9cd9ba01440081b5e80f0c86e11c854463913c05c65865125b6fd4c020782862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9ee40c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9ee40c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9ee40c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:25 np0005465604 ceph-osd[89321]: bdev(0x562d9ee40c00 /var/lib/ceph/osd/ceph-1/block) close
Oct  2 03:48:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8475942ace81d41b4260691f96fe446f0b17eaf5b6631c8516196728a3e06982-merged.mount: Deactivated successfully.
Oct  2 03:48:25 np0005465604 podman[89480]: 2025-10-02 07:48:25.877624928 +0000 UTC m=+0.270776556 container remove 9cd9ba01440081b5e80f0c86e11c854463913c05c65865125b6fd4c020782862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 03:48:25 np0005465604 systemd[1]: libpod-conmon-9cd9ba01440081b5e80f0c86e11c854463913c05c65865125b6fd4c020782862.scope: Deactivated successfully.
Oct  2 03:48:25 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1358321688; not ready for session (expect reconnect)
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  2 03:48:25 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  2 03:48:25 np0005465604 ceph-mon[74477]: Deploying daemon osd.2 on compute-0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bdev(0x562d9ee40c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bdev(0x562d9ee40c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bdev(0x562d9ee40c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bdev(0x562d9ee41400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bdev(0x562d9ee41400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bdev(0x562d9ee41400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluefs mount
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluefs mount shared_bdev_used = 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: RocksDB version: 7.9.2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Git sha 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: DB SUMMARY
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: DB Session ID:  VQBM53EH8LKV0VWC313B
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: CURRENT file:  CURRENT
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                         Options.error_if_exists: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.create_if_missing: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                                     Options.env: 0x562d9ee11c70
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                                Options.info_log: 0x562d9e00e8a0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                              Options.statistics: (nil)
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.use_fsync: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                              Options.db_log_dir: 
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.write_buffer_manager: 0x562d9ef24460
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.unordered_write: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.row_cache: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                              Options.wal_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.two_write_queues: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.wal_compression: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.atomic_flush: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.max_background_jobs: 4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.max_background_compactions: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.max_subcompactions: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.max_open_files: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Compression algorithms supported:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kZSTD supported: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kXpressCompression supported: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kBZip2Compression supported: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kLZ4Compression supported: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kZlibCompression supported: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kSnappyCompression supported: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9e00e2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562d9dffb1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9e00e2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562d9dffb1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9e00e2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562d9dffb1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9e00e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562d9dffb1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9e00e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562d9dffb1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9e00e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562d9dffb1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9e00e2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562d9dffb1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9e00e240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562d9dffb090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9e00e240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562d9dffb090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9e00e240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562d9dffb090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2f05cb72-b549-4f47-b9c4-56c714b92797
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391306124191, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391306124429, "job": 1, "event": "recovery_finished"}
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: freelist init
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: freelist _read_cfg
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluefs umount
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bdev(0x562d9ee41400 /var/lib/ceph/osd/ceph-1/block) close
Oct  2 03:48:26 np0005465604 podman[89729]: 2025-10-02 07:48:26.196467869 +0000 UTC m=+0.049077659 container create ed1b9b62e43d2d9526e5b8bc86b0a520b71201d38323eea0d921744b5a0bed7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate-test, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  2 03:48:26 np0005465604 systemd[1]: Started libpod-conmon-ed1b9b62e43d2d9526e5b8bc86b0a520b71201d38323eea0d921744b5a0bed7a.scope.
Oct  2 03:48:26 np0005465604 podman[89729]: 2025-10-02 07:48:26.17609008 +0000 UTC m=+0.028699890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd4d0d74700e3a870020cf99dda00cca5f3acb97f133a27e8c5bf84e34bd51e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd4d0d74700e3a870020cf99dda00cca5f3acb97f133a27e8c5bf84e34bd51e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd4d0d74700e3a870020cf99dda00cca5f3acb97f133a27e8c5bf84e34bd51e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd4d0d74700e3a870020cf99dda00cca5f3acb97f133a27e8c5bf84e34bd51e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd4d0d74700e3a870020cf99dda00cca5f3acb97f133a27e8c5bf84e34bd51e/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:26 np0005465604 podman[89729]: 2025-10-02 07:48:26.305517426 +0000 UTC m=+0.158127246 container init ed1b9b62e43d2d9526e5b8bc86b0a520b71201d38323eea0d921744b5a0bed7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:26 np0005465604 ceph-osd[88314]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 39.361 iops: 10076.517 elapsed_sec: 0.298
Oct  2 03:48:26 np0005465604 ceph-osd[88314]: log_channel(cluster) log [WRN] : OSD bench result of 10076.517074 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 03:48:26 np0005465604 ceph-osd[88314]: osd.0 0 waiting for initial osdmap
Oct  2 03:48:26 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0[88310]: 2025-10-02T07:48:26.310+0000 7f6c70193640 -1 osd.0 0 waiting for initial osdmap
Oct  2 03:48:26 np0005465604 podman[89729]: 2025-10-02 07:48:26.319923867 +0000 UTC m=+0.172533657 container start ed1b9b62e43d2d9526e5b8bc86b0a520b71201d38323eea0d921744b5a0bed7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:26 np0005465604 ceph-osd[88314]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct  2 03:48:26 np0005465604 ceph-osd[88314]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct  2 03:48:26 np0005465604 ceph-osd[88314]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct  2 03:48:26 np0005465604 ceph-osd[88314]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct  2 03:48:26 np0005465604 podman[89729]: 2025-10-02 07:48:26.32319691 +0000 UTC m=+0.175806780 container attach ed1b9b62e43d2d9526e5b8bc86b0a520b71201d38323eea0d921744b5a0bed7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:26 np0005465604 ceph-osd[88314]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 03:48:26 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-0[88310]: 2025-10-02T07:48:26.333+0000 7f6c6b7bb640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 03:48:26 np0005465604 ceph-osd[88314]: osd.0 8 set_numa_affinity not setting numa affinity
Oct  2 03:48:26 np0005465604 ceph-osd[88314]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bdev(0x562d9ee41400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bdev(0x562d9ee41400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bdev(0x562d9ee41400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluefs mount
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluefs mount shared_bdev_used = 4718592
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: RocksDB version: 7.9.2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Git sha 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: DB SUMMARY
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: DB Session ID:  VQBM53EH8LKV0VWC313A
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: CURRENT file:  CURRENT
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                         Options.error_if_exists: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.create_if_missing: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                                     Options.env: 0x562d9efccf50
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                                Options.info_log: 0x562d9ee0d8a0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                              Options.statistics: (nil)
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.use_fsync: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                              Options.db_log_dir: 
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.write_buffer_manager: 0x562d9ef246e0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.unordered_write: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.row_cache: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                              Options.wal_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.two_write_queues: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.wal_compression: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.atomic_flush: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.max_background_jobs: 4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.max_background_compactions: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.max_subcompactions: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.max_open_files: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Compression algorithms supported:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kZSTD supported: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kXpressCompression supported: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kBZip2Compression supported: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kLZ4Compression supported: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kZlibCompression supported: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: 	kSnappyCompression supported: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9ee0d4e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562d9dffb4b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9ee0d4e0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562d9dffb4b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9ee0d4e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562d9dffb4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9ee0d4e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562d9dffb4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9ee0d4e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562d9dffb4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9ee0d4e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562d9dffb4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9ee0d4e0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562d9dffb4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9ee0d500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562d9dffb350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9ee0d500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562d9dffb350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562d9ee0d500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562d9dffb350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2f05cb72-b549-4f47-b9c4-56c714b92797
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391306410803, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391306416306, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391306, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f05cb72-b549-4f47-b9c4-56c714b92797", "db_session_id": "VQBM53EH8LKV0VWC313A", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391306420836, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391306, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f05cb72-b549-4f47-b9c4-56c714b92797", "db_session_id": "VQBM53EH8LKV0VWC313A", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391306425348, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391306, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f05cb72-b549-4f47-b9c4-56c714b92797", "db_session_id": "VQBM53EH8LKV0VWC313A", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391306428808, "job": 1, "event": "recovery_finished"}
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562d9efd9c00
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: DB pointer 0x562d9ef03a00
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 
0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 460.80 MB usag
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: _get_class not permitted to load lua
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: _get_class not permitted to load sdk
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: _get_class not permitted to load test_remote_reads
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: osd.1 0 load_pgs
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: osd.1 0 load_pgs opened 0 pgs
Oct  2 03:48:26 np0005465604 ceph-osd[89321]: osd.1 0 log_to_monitors true
Oct  2 03:48:26 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1[89317]: 2025-10-02T07:48:26.468+0000 7f1b07154740 -1 osd.1 0 log_to_monitors true
Oct  2 03:48:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct  2 03:48:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4104882188,v1:192.168.122.100:6807/4104882188]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  2 03:48:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:26 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate-test[89745]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  2 03:48:26 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate-test[89745]:                            [--no-systemd] [--no-tmpfs]
Oct  2 03:48:26 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate-test[89745]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  2 03:48:26 np0005465604 systemd[1]: libpod-ed1b9b62e43d2d9526e5b8bc86b0a520b71201d38323eea0d921744b5a0bed7a.scope: Deactivated successfully.
Oct  2 03:48:26 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1358321688; not ready for session (expect reconnect)
Oct  2 03:48:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  2 03:48:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  2 03:48:26 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  2 03:48:26 np0005465604 podman[89967]: 2025-10-02 07:48:26.990467168 +0000 UTC m=+0.034970577 container died ed1b9b62e43d2d9526e5b8bc86b0a520b71201d38323eea0d921744b5a0bed7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fbd4d0d74700e3a870020cf99dda00cca5f3acb97f133a27e8c5bf84e34bd51e-merged.mount: Deactivated successfully.
Oct  2 03:48:27 np0005465604 podman[89967]: 2025-10-02 07:48:27.039491465 +0000 UTC m=+0.083994854 container remove ed1b9b62e43d2d9526e5b8bc86b0a520b71201d38323eea0d921744b5a0bed7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:27 np0005465604 systemd[1]: libpod-conmon-ed1b9b62e43d2d9526e5b8bc86b0a520b71201d38323eea0d921744b5a0bed7a.scope: Deactivated successfully.
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: from='osd.1 [v2:192.168.122.100:6806/4104882188,v1:192.168.122.100:6807/4104882188]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4104882188,v1:192.168.122.100:6807/4104882188]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1358321688,v1:192.168.122.100:6803/1358321688] boot
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4104882188,v1:192.168.122.100:6807/4104882188]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:27 np0005465604 ceph-osd[88314]: osd.0 9 state: booting -> active
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:27 np0005465604 systemd[1]: Reloading.
Oct  2 03:48:27 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  2 03:48:27 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  2 03:48:27 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:48:27 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:48:27 np0005465604 systemd[1]: Reloading.
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:48:27
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] No pools available
Oct  2 03:48:27 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:48:27 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] creating mgr pool
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct  2 03:48:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:48:28 np0005465604 systemd[1]: Starting Ceph osd.2 for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/4104882188,v1:192.168.122.100:6807/4104882188]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct  2 03:48:28 np0005465604 ceph-osd[89321]: osd.1 0 done with init, starting boot process
Oct  2 03:48:28 np0005465604 ceph-osd[89321]: osd.1 0 start_boot
Oct  2 03:48:28 np0005465604 ceph-osd[89321]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  2 03:48:28 np0005465604 ceph-osd[89321]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  2 03:48:28 np0005465604 ceph-osd[89321]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  2 03:48:28 np0005465604 ceph-osd[89321]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  2 03:48:28 np0005465604 ceph-osd[89321]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct  2 03:48:28 np0005465604 ceph-osd[88314]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  2 03:48:28 np0005465604 ceph-osd[88314]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct  2 03:48:28 np0005465604 ceph-osd[88314]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  2 03:48:28 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  2 03:48:28 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:28 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4104882188; not ready for session (expect reconnect)
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:28 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: OSD bench result of 10076.517074 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: from='osd.1 [v2:192.168.122.100:6806/4104882188,v1:192.168.122.100:6807/4104882188]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: osd.0 [v2:192.168.122.100:6802/1358321688,v1:192.168.122.100:6803/1358321688] boot
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: from='osd.1 [v2:192.168.122.100:6806/4104882188,v1:192.168.122.100:6807/4104882188]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  2 03:48:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  2 03:48:28 np0005465604 podman[90130]: 2025-10-02 07:48:28.232372692 +0000 UTC m=+0.059480954 container create 8a94a4dc6e2c35d4f8b74619e5198f486d661fdfd3d78951feddbee826baa61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 03:48:28 np0005465604 podman[90130]: 2025-10-02 07:48:28.197420667 +0000 UTC m=+0.024528939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9df5aa681bbf14ad7bf85e9ca81eea3ac28b407f9617aaab1357da7d69149bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9df5aa681bbf14ad7bf85e9ca81eea3ac28b407f9617aaab1357da7d69149bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9df5aa681bbf14ad7bf85e9ca81eea3ac28b407f9617aaab1357da7d69149bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9df5aa681bbf14ad7bf85e9ca81eea3ac28b407f9617aaab1357da7d69149bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9df5aa681bbf14ad7bf85e9ca81eea3ac28b407f9617aaab1357da7d69149bb/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:28 np0005465604 podman[90130]: 2025-10-02 07:48:28.326397769 +0000 UTC m=+0.153506061 container init 8a94a4dc6e2c35d4f8b74619e5198f486d661fdfd3d78951feddbee826baa61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 03:48:28 np0005465604 podman[90130]: 2025-10-02 07:48:28.335517174 +0000 UTC m=+0.162625436 container start 8a94a4dc6e2c35d4f8b74619e5198f486d661fdfd3d78951feddbee826baa61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 03:48:28 np0005465604 podman[90130]: 2025-10-02 07:48:28.340442018 +0000 UTC m=+0.167550270 container attach 8a94a4dc6e2c35d4f8b74619e5198f486d661fdfd3d78951feddbee826baa61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:29 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  2 03:48:29 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:29 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4104882188; not ready for session (expect reconnect)
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:29 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: from='osd.1 [v2:192.168.122.100:6806/4104882188,v1:192.168.122.100:6807/4104882188]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  2 03:48:29 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate[90145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 03:48:29 np0005465604 bash[90130]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 03:48:29 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate[90145]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct  2 03:48:29 np0005465604 bash[90130]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct  2 03:48:29 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate[90145]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct  2 03:48:29 np0005465604 bash[90130]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct  2 03:48:29 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate[90145]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  2 03:48:29 np0005465604 bash[90130]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  2 03:48:29 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate[90145]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:29 np0005465604 bash[90130]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:29 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate[90145]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 03:48:29 np0005465604 bash[90130]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  2 03:48:29 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate[90145]: --> ceph-volume raw activate successful for osd ID: 2
Oct  2 03:48:29 np0005465604 bash[90130]: --> ceph-volume raw activate successful for osd ID: 2
Oct  2 03:48:29 np0005465604 systemd[1]: libpod-8a94a4dc6e2c35d4f8b74619e5198f486d661fdfd3d78951feddbee826baa61f.scope: Deactivated successfully.
Oct  2 03:48:29 np0005465604 systemd[1]: libpod-8a94a4dc6e2c35d4f8b74619e5198f486d661fdfd3d78951feddbee826baa61f.scope: Consumed 1.194s CPU time.
Oct  2 03:48:29 np0005465604 podman[90130]: 2025-10-02 07:48:29.512528605 +0000 UTC m=+1.339636867 container died 8a94a4dc6e2c35d4f8b74619e5198f486d661fdfd3d78951feddbee826baa61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f9df5aa681bbf14ad7bf85e9ca81eea3ac28b407f9617aaab1357da7d69149bb-merged.mount: Deactivated successfully.
Oct  2 03:48:29 np0005465604 podman[90130]: 2025-10-02 07:48:29.599640204 +0000 UTC m=+1.426748456 container remove 8a94a4dc6e2c35d4f8b74619e5198f486d661fdfd3d78951feddbee826baa61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct  2 03:48:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  2 03:48:29 np0005465604 python3[90331]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:29 np0005465604 podman[90352]: 2025-10-02 07:48:29.824016615 +0000 UTC m=+0.048876023 container create 2a85f2dba8e5b24e7f53d13b1f3376c45ca88b6ca469d7f233f034e156738d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3016db3a888bf2597aaf7b07bd87944f6fe38f7e7ebaa41f080eb15783690140/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3016db3a888bf2597aaf7b07bd87944f6fe38f7e7ebaa41f080eb15783690140/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3016db3a888bf2597aaf7b07bd87944f6fe38f7e7ebaa41f080eb15783690140/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3016db3a888bf2597aaf7b07bd87944f6fe38f7e7ebaa41f080eb15783690140/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3016db3a888bf2597aaf7b07bd87944f6fe38f7e7ebaa41f080eb15783690140/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:29 np0005465604 podman[90352]: 2025-10-02 07:48:29.883502239 +0000 UTC m=+0.108361647 container init 2a85f2dba8e5b24e7f53d13b1f3376c45ca88b6ca469d7f233f034e156738d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:29 np0005465604 podman[90352]: 2025-10-02 07:48:29.89215195 +0000 UTC m=+0.117011348 container start 2a85f2dba8e5b24e7f53d13b1f3376c45ca88b6ca469d7f233f034e156738d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 03:48:29 np0005465604 podman[90352]: 2025-10-02 07:48:29.799243389 +0000 UTC m=+0.024102837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:29 np0005465604 podman[90366]: 2025-10-02 07:48:29.903131674 +0000 UTC m=+0.064933675 container create 5c31d52503cba816728f2c58dbd74c7df3a30572d2074e62efa3823f55de29cc (image=quay.io/ceph/ceph:v18, name=keen_wing, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 03:48:29 np0005465604 bash[90352]: 2a85f2dba8e5b24e7f53d13b1f3376c45ca88b6ca469d7f233f034e156738d35
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: pidfile_write: ignore empty --pid-file
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: bdev(0x562c60f85800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: bdev(0x562c60f85800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: bdev(0x562c60f85800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: bdev(0x562c61dbd800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: bdev(0x562c61dbd800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: bdev(0x562c61dbd800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  2 03:48:29 np0005465604 ceph-osd[90385]: bdev(0x562c61dbd800 /var/lib/ceph/osd/ceph-2/block) close
Oct  2 03:48:29 np0005465604 systemd[1]: Started Ceph osd.2 for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:48:29 np0005465604 systemd[1]: Started libpod-conmon-5c31d52503cba816728f2c58dbd74c7df3a30572d2074e62efa3823f55de29cc.scope.
Oct  2 03:48:29 np0005465604 podman[90366]: 2025-10-02 07:48:29.860850419 +0000 UTC m=+0.022652480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eed23a81babf59b7dca2ab15d1e85078f9e2b713ec19d5e87a0e2f49d797b9c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eed23a81babf59b7dca2ab15d1e85078f9e2b713ec19d5e87a0e2f49d797b9c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eed23a81babf59b7dca2ab15d1e85078f9e2b713ec19d5e87a0e2f49d797b9c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:30 np0005465604 podman[90366]: 2025-10-02 07:48:30.005074498 +0000 UTC m=+0.166876529 container init 5c31d52503cba816728f2c58dbd74c7df3a30572d2074e62efa3823f55de29cc (image=quay.io/ceph/ceph:v18, name=keen_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:30 np0005465604 podman[90366]: 2025-10-02 07:48:30.01248144 +0000 UTC m=+0.174283451 container start 5c31d52503cba816728f2c58dbd74c7df3a30572d2074e62efa3823f55de29cc (image=quay.io/ceph/ceph:v18, name=keen_wing, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 03:48:30 np0005465604 podman[90366]: 2025-10-02 07:48:30.02013268 +0000 UTC m=+0.181934711 container attach 5c31d52503cba816728f2c58dbd74c7df3a30572d2074e62efa3823f55de29cc (image=quay.io/ceph/ceph:v18, name=keen_wing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:30 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4104882188; not ready for session (expect reconnect)
Oct  2 03:48:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:30 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  2 03:48:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: bdev(0x562c60f85800 /var/lib/ceph/osd/ceph-2/block) close
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: load: jerasure load: lrc 
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: bdev(0x562c6114ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: bdev(0x562c6114ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: bdev(0x562c6114ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: bdev(0x562c6114ec00 /var/lib/ceph/osd/ceph-2/block) close
Oct  2 03:48:30 np0005465604 ceph-osd[89321]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.325 iops: 8531.177 elapsed_sec: 0.352
Oct  2 03:48:30 np0005465604 ceph-osd[89321]: log_channel(cluster) log [WRN] : OSD bench result of 8531.177064 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 03:48:30 np0005465604 ceph-osd[89321]: osd.1 0 waiting for initial osdmap
Oct  2 03:48:30 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1[89317]: 2025-10-02T07:48:30.481+0000 7f1b030d4640 -1 osd.1 0 waiting for initial osdmap
Oct  2 03:48:30 np0005465604 ceph-osd[89321]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  2 03:48:30 np0005465604 ceph-osd[89321]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  2 03:48:30 np0005465604 ceph-osd[89321]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  2 03:48:30 np0005465604 ceph-osd[89321]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Oct  2 03:48:30 np0005465604 ceph-osd[89321]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 03:48:30 np0005465604 ceph-osd[89321]: osd.1 11 set_numa_affinity not setting numa affinity
Oct  2 03:48:30 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-1[89317]: 2025-10-02T07:48:30.510+0000 7f1afe6fc640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 03:48:30 np0005465604 ceph-osd[89321]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct  2 03:48:30 np0005465604 podman[90571]: 2025-10-02 07:48:30.540096302 +0000 UTC m=+0.036458173 container create 25bd3e9d6ab6fceb25feabcd16ac51f5627bcc18ec4a1cf7f7650151c126c4a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:30 np0005465604 systemd[1]: Started libpod-conmon-25bd3e9d6ab6fceb25feabcd16ac51f5627bcc18ec4a1cf7f7650151c126c4a6.scope.
Oct  2 03:48:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:30 np0005465604 podman[90571]: 2025-10-02 07:48:30.607776593 +0000 UTC m=+0.104138474 container init 25bd3e9d6ab6fceb25feabcd16ac51f5627bcc18ec4a1cf7f7650151c126c4a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 03:48:30 np0005465604 podman[90571]: 2025-10-02 07:48:30.615715322 +0000 UTC m=+0.112077203 container start 25bd3e9d6ab6fceb25feabcd16ac51f5627bcc18ec4a1cf7f7650151c126c4a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:30 np0005465604 amazing_bartik[90588]: 167 167
Oct  2 03:48:30 np0005465604 systemd[1]: libpod-25bd3e9d6ab6fceb25feabcd16ac51f5627bcc18ec4a1cf7f7650151c126c4a6.scope: Deactivated successfully.
Oct  2 03:48:30 np0005465604 podman[90571]: 2025-10-02 07:48:30.522823511 +0000 UTC m=+0.019185402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:30 np0005465604 podman[90571]: 2025-10-02 07:48:30.620955527 +0000 UTC m=+0.117317428 container attach 25bd3e9d6ab6fceb25feabcd16ac51f5627bcc18ec4a1cf7f7650151c126c4a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 03:48:30 np0005465604 podman[90571]: 2025-10-02 07:48:30.622004469 +0000 UTC m=+0.118366360 container died 25bd3e9d6ab6fceb25feabcd16ac51f5627bcc18ec4a1cf7f7650151c126c4a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  2 03:48:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/122382903' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  2 03:48:30 np0005465604 keen_wing[90400]: 
Oct  2 03:48:30 np0005465604 keen_wing[90400]: {"fsid":"a52e644f-f702-594c-a648-813e3e0df2b1","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":109,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":11,"num_osds":3,"num_up_osds":1,"osd_up_since":1759391307,"num_in_osds":3,"osd_in_since":1759391291,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":446984192,"bytes_avail":21023657984,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-02T07:48:29.787187+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct  2 03:48:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2ab485c5fe563a75372a9ea75bf96d439128eb75dedd4df461b9a0df97653242-merged.mount: Deactivated successfully.
Oct  2 03:48:30 np0005465604 systemd[1]: libpod-5c31d52503cba816728f2c58dbd74c7df3a30572d2074e62efa3823f55de29cc.scope: Deactivated successfully.
Oct  2 03:48:30 np0005465604 podman[90571]: 2025-10-02 07:48:30.656948834 +0000 UTC m=+0.153310705 container remove 25bd3e9d6ab6fceb25feabcd16ac51f5627bcc18ec4a1cf7f7650151c126c4a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:30 np0005465604 podman[90366]: 2025-10-02 07:48:30.664872542 +0000 UTC m=+0.826674573 container died 5c31d52503cba816728f2c58dbd74c7df3a30572d2074e62efa3823f55de29cc (image=quay.io/ceph/ceph:v18, name=keen_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5eed23a81babf59b7dca2ab15d1e85078f9e2b713ec19d5e87a0e2f49d797b9c-merged.mount: Deactivated successfully.
Oct  2 03:48:30 np0005465604 systemd[1]: libpod-conmon-25bd3e9d6ab6fceb25feabcd16ac51f5627bcc18ec4a1cf7f7650151c126c4a6.scope: Deactivated successfully.
Oct  2 03:48:30 np0005465604 podman[90366]: 2025-10-02 07:48:30.708299883 +0000 UTC m=+0.870101894 container remove 5c31d52503cba816728f2c58dbd74c7df3a30572d2074e62efa3823f55de29cc (image=quay.io/ceph/ceph:v18, name=keen_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 03:48:30 np0005465604 systemd[1]: libpod-conmon-5c31d52503cba816728f2c58dbd74c7df3a30572d2074e62efa3823f55de29cc.scope: Deactivated successfully.
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: bdev(0x562c6114ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: bdev(0x562c6114ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: bdev(0x562c6114ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:30 np0005465604 ceph-osd[90385]: bdev(0x562c6114ec00 /var/lib/ceph/osd/ceph-2/block) close
Oct  2 03:48:30 np0005465604 podman[90630]: 2025-10-02 07:48:30.820338334 +0000 UTC m=+0.047113487 container create b79936894feac3d371458c4ee18b314d70224c59b24b9fb6302449611a78401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wright, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:30 np0005465604 systemd[1]: Started libpod-conmon-b79936894feac3d371458c4ee18b314d70224c59b24b9fb6302449611a78401a.scope.
Oct  2 03:48:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7d5d6db36dc7bc08817984d53055c74a6f229cc265bc770cacb00b0b7f39a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7d5d6db36dc7bc08817984d53055c74a6f229cc265bc770cacb00b0b7f39a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7d5d6db36dc7bc08817984d53055c74a6f229cc265bc770cacb00b0b7f39a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7d5d6db36dc7bc08817984d53055c74a6f229cc265bc770cacb00b0b7f39a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:30 np0005465604 podman[90630]: 2025-10-02 07:48:30.800088989 +0000 UTC m=+0.026864142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:30 np0005465604 podman[90630]: 2025-10-02 07:48:30.909031863 +0000 UTC m=+0.135807026 container init b79936894feac3d371458c4ee18b314d70224c59b24b9fb6302449611a78401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wright, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:48:30 np0005465604 podman[90630]: 2025-10-02 07:48:30.916864489 +0000 UTC m=+0.143639642 container start b79936894feac3d371458c4ee18b314d70224c59b24b9fb6302449611a78401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 03:48:30 np0005465604 podman[90630]: 2025-10-02 07:48:30.920170152 +0000 UTC m=+0.146945385 container attach b79936894feac3d371458c4ee18b314d70224c59b24b9fb6302449611a78401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bdev(0x562c6114ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bdev(0x562c6114ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bdev(0x562c6114ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bdev(0x562c6114f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bdev(0x562c6114f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bdev(0x562c6114f400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluefs mount
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluefs mount shared_bdev_used = 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: RocksDB version: 7.9.2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Git sha 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: DB SUMMARY
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: DB Session ID:  G8P1VMAQ6AOURDSMKDQU
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: CURRENT file:  CURRENT
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                         Options.error_if_exists: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.create_if_missing: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                                     Options.env: 0x562c61e0fd50
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                                Options.info_log: 0x562c61010d00
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                              Options.statistics: (nil)
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.use_fsync: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                              Options.db_log_dir: 
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.write_buffer_manager: 0x562c61f20460
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.unordered_write: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.row_cache: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                              Options.wal_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.two_write_queues: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.wal_compression: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.atomic_flush: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.max_background_jobs: 4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.max_background_compactions: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.max_subcompactions: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.max_open_files: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Compression algorithms supported:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: #011kZSTD supported: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: #011kXpressCompression supported: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: #011kBZip2Compression supported: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: #011kLZ4Compression supported: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: #011kZlibCompression supported: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: #011kSnappyCompression supported: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011320)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011320)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011320)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9b80451c-f259-4f76-ae49-1841728217b9
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391311080037, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391311080348, "job": 1, "event": "recovery_finished"}
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: freelist init
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: freelist _read_cfg
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluefs umount
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bdev(0x562c6114f400 /var/lib/ceph/osd/ceph-2/block) close
Oct  2 03:48:31 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/4104882188; not ready for session (expect reconnect)
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:31 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/4104882188,v1:192.168.122.100:6807/4104882188] boot
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Oct  2 03:48:31 np0005465604 ceph-osd[89321]: osd.1 12 state: booting -> active
Oct  2 03:48:31 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:31 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:31 np0005465604 python3[90710]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:31 np0005465604 podman[90872]: 2025-10-02 07:48:31.278566082 +0000 UTC m=+0.051220736 container create e9fd637c61b1ef24d1fe20f769915ff8918f4ba1b2f93979c28f8fabedff6751 (image=quay.io/ceph/ceph:v18, name=sad_dirac, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bdev(0x562c6114f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bdev(0x562c6114f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bdev(0x562c6114f400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluefs mount
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluefs mount shared_bdev_used = 4718592
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: RocksDB version: 7.9.2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Git sha 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: DB SUMMARY
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: DB Session ID:  G8P1VMAQ6AOURDSMKDQV
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: CURRENT file:  CURRENT
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                         Options.error_if_exists: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.create_if_missing: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                                     Options.env: 0x562c61fd0b60
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                                Options.info_log: 0x562c61010a80
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                              Options.statistics: (nil)
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.use_fsync: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                              Options.db_log_dir: 
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.write_buffer_manager: 0x562c61f20460
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.unordered_write: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.row_cache: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                              Options.wal_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.two_write_queues: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.wal_compression: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.atomic_flush: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.max_background_jobs: 4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.max_background_compactions: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.max_subcompactions: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.max_open_files: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Compression algorithms supported:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: 	kZSTD supported: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: 	kXpressCompression supported: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: 	kBZip2Compression supported: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: 	kLZ4Compression supported: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: 	kZlibCompression supported: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: 	kSnappyCompression supported: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61010e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562c60ff8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61010e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562c60ff8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61010e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562c60ff8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61010e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562c60ff8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61010e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562c60ff8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61010e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61010e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011440)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011440)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:           Options.merge_operator: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562c61011440)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562c60ff8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.compression: LZ4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.num_levels: 7
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.bloom_locality: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                               Options.ttl: 2592000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                       Options.enable_blob_files: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                           Options.min_blob_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 03:48:31 np0005465604 systemd[1]: Started libpod-conmon-e9fd637c61b1ef24d1fe20f769915ff8918f4ba1b2f93979c28f8fabedff6751.scope.
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 9b80451c-f259-4f76-ae49-1841728217b9
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391311342521, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391311348692, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391311, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b80451c-f259-4f76-ae49-1841728217b9", "db_session_id": "G8P1VMAQ6AOURDSMKDQV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:48:31 np0005465604 podman[90872]: 2025-10-02 07:48:31.255133837 +0000 UTC m=+0.027788511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391311353021, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391311, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b80451c-f259-4f76-ae49-1841728217b9", "db_session_id": "G8P1VMAQ6AOURDSMKDQV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391311356269, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391311, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "9b80451c-f259-4f76-ae49-1841728217b9", "db_session_id": "G8P1VMAQ6AOURDSMKDQV", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391311357822, "job": 1, "event": "recovery_finished"}
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  2 03:48:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2689bc85702f7e187ec3d215d3132d6b6fd2e73bc1e2c7de900c10f87cd8775/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2689bc85702f7e187ec3d215d3132d6b6fd2e73bc1e2c7de900c10f87cd8775/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:31 np0005465604 podman[90872]: 2025-10-02 07:48:31.37742845 +0000 UTC m=+0.150083104 container init e9fd637c61b1ef24d1fe20f769915ff8918f4ba1b2f93979c28f8fabedff6751 (image=quay.io/ceph/ceph:v18, name=sad_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:31 np0005465604 podman[90872]: 2025-10-02 07:48:31.384213233 +0000 UTC m=+0.156867897 container start e9fd637c61b1ef24d1fe20f769915ff8918f4ba1b2f93979c28f8fabedff6751 (image=quay.io/ceph/ceph:v18, name=sad_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 03:48:31 np0005465604 podman[90872]: 2025-10-02 07:48:31.387993541 +0000 UTC m=+0.160648215 container attach e9fd637c61b1ef24d1fe20f769915ff8918f4ba1b2f93979c28f8fabedff6751 (image=quay.io/ceph/ceph:v18, name=sad_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562c61fdc000
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: DB pointer 0x562c61f11a00
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 460.80 MB usage: 0
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: _get_class not permitted to load lua
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: _get_class not permitted to load sdk
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: _get_class not permitted to load test_remote_reads
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: osd.2 0 load_pgs
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: osd.2 0 load_pgs opened 0 pgs
Oct  2 03:48:31 np0005465604 ceph-osd[90385]: osd.2 0 log_to_monitors true
Oct  2 03:48:31 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2[90381]: 2025-10-02T07:48:31.400+0000 7f74b7402740 -1 osd.2 0 log_to_monitors true
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2198773173,v1:192.168.122.100:6811/2198773173]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]: {
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "osd_id": 2,
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "type": "bluestore"
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:    },
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "osd_id": 1,
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "type": "bluestore"
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:    },
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "osd_id": 0,
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:        "type": "bluestore"
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]:    }
Oct  2 03:48:31 np0005465604 mystifying_wright[90647]: }
Oct  2 03:48:31 np0005465604 systemd[1]: libpod-b79936894feac3d371458c4ee18b314d70224c59b24b9fb6302449611a78401a.scope: Deactivated successfully.
Oct  2 03:48:31 np0005465604 podman[90630]: 2025-10-02 07:48:31.90656172 +0000 UTC m=+1.133336903 container died b79936894feac3d371458c4ee18b314d70224c59b24b9fb6302449611a78401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  2 03:48:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/997898873' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3e7d5d6db36dc7bc08817984d53055c74a6f229cc265bc770cacb00b0b7f39a4-merged.mount: Deactivated successfully.
Oct  2 03:48:31 np0005465604 podman[90630]: 2025-10-02 07:48:31.967340624 +0000 UTC m=+1.194115767 container remove b79936894feac3d371458c4ee18b314d70224c59b24b9fb6302449611a78401a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:31 np0005465604 systemd[1]: libpod-conmon-b79936894feac3d371458c4ee18b314d70224c59b24b9fb6302449611a78401a.scope: Deactivated successfully.
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2198773173,v1:192.168.122.100:6811/2198773173]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/997898873' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct  2 03:48:32 np0005465604 sad_dirac[91026]: pool 'vms' created
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2198773173,v1:192.168.122.100:6811/2198773173]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:32 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:48:32 np0005465604 systemd[1]: libpod-e9fd637c61b1ef24d1fe20f769915ff8918f4ba1b2f93979c28f8fabedff6751.scope: Deactivated successfully.
Oct  2 03:48:32 np0005465604 podman[90872]: 2025-10-02 07:48:32.16126012 +0000 UTC m=+0.933914774 container died e9fd637c61b1ef24d1fe20f769915ff8918f4ba1b2f93979c28f8fabedff6751 (image=quay.io/ceph/ceph:v18, name=sad_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: OSD bench result of 8531.177064 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: osd.1 [v2:192.168.122.100:6806/4104882188,v1:192.168.122.100:6807/4104882188] boot
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: from='osd.2 [v2:192.168.122.100:6810/2198773173,v1:192.168.122.100:6811/2198773173]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/997898873' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: from='osd.2 [v2:192.168.122.100:6810/2198773173,v1:192.168.122.100:6811/2198773173]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/997898873' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: from='osd.2 [v2:192.168.122.100:6810/2198773173,v1:192.168.122.100:6811/2198773173]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  2 03:48:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b2689bc85702f7e187ec3d215d3132d6b6fd2e73bc1e2c7de900c10f87cd8775-merged.mount: Deactivated successfully.
Oct  2 03:48:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] creating main.db for devicehealth
Oct  2 03:48:32 np0005465604 podman[90872]: 2025-10-02 07:48:32.201530212 +0000 UTC m=+0.974184866 container remove e9fd637c61b1ef24d1fe20f769915ff8918f4ba1b2f93979c28f8fabedff6751 (image=quay.io/ceph/ceph:v18, name=sad_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:48:32 np0005465604 systemd[1]: libpod-conmon-e9fd637c61b1ef24d1fe20f769915ff8918f4ba1b2f93979c28f8fabedff6751.scope: Deactivated successfully.
Oct  2 03:48:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 03:48:32 np0005465604 ceph-mgr[74774]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  2 03:48:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  2 03:48:32 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  2 03:48:32 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  2 03:48:32 np0005465604 python3[91348]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:32 np0005465604 podman[91373]: 2025-10-02 07:48:32.5506091 +0000 UTC m=+0.043786733 container create c7d30409ad67fca102f7f0542cb9d31c69f26fd18db46565b2b3a6fdbf6547bc (image=quay.io/ceph/ceph:v18, name=sharp_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:32 np0005465604 systemd[1]: Started libpod-conmon-c7d30409ad67fca102f7f0542cb9d31c69f26fd18db46565b2b3a6fdbf6547bc.scope.
Oct  2 03:48:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e4b69c58937ca17b2949af7feb4d525ca9450600e8eda0342157b53e946d1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e4b69c58937ca17b2949af7feb4d525ca9450600e8eda0342157b53e946d1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:32 np0005465604 podman[91373]: 2025-10-02 07:48:32.536648973 +0000 UTC m=+0.029826616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:32 np0005465604 podman[91373]: 2025-10-02 07:48:32.635591664 +0000 UTC m=+0.128769357 container init c7d30409ad67fca102f7f0542cb9d31c69f26fd18db46565b2b3a6fdbf6547bc (image=quay.io/ceph/ceph:v18, name=sharp_saha, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 03:48:32 np0005465604 podman[91373]: 2025-10-02 07:48:32.642128629 +0000 UTC m=+0.135306272 container start c7d30409ad67fca102f7f0542cb9d31c69f26fd18db46565b2b3a6fdbf6547bc (image=quay.io/ceph/ceph:v18, name=sharp_saha, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 03:48:32 np0005465604 podman[91373]: 2025-10-02 07:48:32.645966869 +0000 UTC m=+0.139144522 container attach c7d30409ad67fca102f7f0542cb9d31c69f26fd18db46565b2b3a6fdbf6547bc (image=quay.io/ceph/ceph:v18, name=sharp_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 03:48:32 np0005465604 podman[91464]: 2025-10-02 07:48:32.91411115 +0000 UTC m=+0.067640260 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 03:48:33 np0005465604 podman[91464]: 2025-10-02 07:48:33.019809523 +0000 UTC m=+0.173338543 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:33 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2198773173,v1:192.168.122.100:6811/2198773173]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct  2 03:48:33 np0005465604 ceph-osd[90385]: osd.2 0 done with init, starting boot process
Oct  2 03:48:33 np0005465604 ceph-osd[90385]: osd.2 0 start_boot
Oct  2 03:48:33 np0005465604 ceph-osd[90385]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  2 03:48:33 np0005465604 ceph-osd[90385]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  2 03:48:33 np0005465604 ceph-osd[90385]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  2 03:48:33 np0005465604 ceph-osd[90385]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  2 03:48:33 np0005465604 ceph-osd[90385]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1096215035' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:33 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2198773173; not ready for session (expect reconnect)
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:33 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [] r=-1 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:48:33 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [] r=-1 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:33 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: from='osd.2 [v2:192.168.122.100:6810/2198773173,v1:192.168.122.100:6811/2198773173]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1096215035' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v38: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  2 03:48:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct  2 03:48:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1096215035' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Oct  2 03:48:34 np0005465604 sharp_saha[91407]: pool 'volumes' created
Oct  2 03:48:34 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2198773173; not ready for session (expect reconnect)
Oct  2 03:48:34 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Oct  2 03:48:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:34 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:34 np0005465604 podman[91746]: 2025-10-02 07:48:34.177316373 +0000 UTC m=+0.053661293 container create 55add4065ba3397767c46c7f935da718fe96b906f0f5e1d3f986410f27ed442b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mayer, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Oct  2 03:48:34 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:48:34 np0005465604 systemd[1]: libpod-c7d30409ad67fca102f7f0542cb9d31c69f26fd18db46565b2b3a6fdbf6547bc.scope: Deactivated successfully.
Oct  2 03:48:34 np0005465604 podman[91373]: 2025-10-02 07:48:34.208800309 +0000 UTC m=+1.701977952 container died c7d30409ad67fca102f7f0542cb9d31c69f26fd18db46565b2b3a6fdbf6547bc (image=quay.io/ceph/ceph:v18, name=sharp_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 03:48:34 np0005465604 systemd[1]: Started libpod-conmon-55add4065ba3397767c46c7f935da718fe96b906f0f5e1d3f986410f27ed442b.scope.
Oct  2 03:48:34 np0005465604 podman[91746]: 2025-10-02 07:48:34.149897123 +0000 UTC m=+0.026242033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c6e4b69c58937ca17b2949af7feb4d525ca9450600e8eda0342157b53e946d1b-merged.mount: Deactivated successfully.
Oct  2 03:48:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:34 np0005465604 podman[91746]: 2025-10-02 07:48:34.282072925 +0000 UTC m=+0.158417855 container init 55add4065ba3397767c46c7f935da718fe96b906f0f5e1d3f986410f27ed442b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 03:48:34 np0005465604 podman[91746]: 2025-10-02 07:48:34.294106762 +0000 UTC m=+0.170451672 container start 55add4065ba3397767c46c7f935da718fe96b906f0f5e1d3f986410f27ed442b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:34 np0005465604 tender_mayer[91775]: 167 167
Oct  2 03:48:34 np0005465604 systemd[1]: libpod-55add4065ba3397767c46c7f935da718fe96b906f0f5e1d3f986410f27ed442b.scope: Deactivated successfully.
Oct  2 03:48:34 np0005465604 podman[91373]: 2025-10-02 07:48:34.298588843 +0000 UTC m=+1.791766466 container remove c7d30409ad67fca102f7f0542cb9d31c69f26fd18db46565b2b3a6fdbf6547bc (image=quay.io/ceph/ceph:v18, name=sharp_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:34 np0005465604 podman[91746]: 2025-10-02 07:48:34.30679166 +0000 UTC m=+0.183136570 container attach 55add4065ba3397767c46c7f935da718fe96b906f0f5e1d3f986410f27ed442b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mayer, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 03:48:34 np0005465604 systemd[1]: libpod-conmon-c7d30409ad67fca102f7f0542cb9d31c69f26fd18db46565b2b3a6fdbf6547bc.scope: Deactivated successfully.
Oct  2 03:48:34 np0005465604 podman[91746]: 2025-10-02 07:48:34.307029617 +0000 UTC m=+0.183374527 container died 55add4065ba3397767c46c7f935da718fe96b906f0f5e1d3f986410f27ed442b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7768125c8320d9f123e482a44d802ac7fcfc71c515e69982447df883e414da73-merged.mount: Deactivated successfully.
Oct  2 03:48:34 np0005465604 podman[91746]: 2025-10-02 07:48:34.387376095 +0000 UTC m=+0.263721045 container remove 55add4065ba3397767c46c7f935da718fe96b906f0f5e1d3f986410f27ed442b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mayer, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:34 np0005465604 systemd[1]: libpod-conmon-55add4065ba3397767c46c7f935da718fe96b906f0f5e1d3f986410f27ed442b.scope: Deactivated successfully.
Oct  2 03:48:34 np0005465604 python3[91820]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:34 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:34 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:34 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1096215035' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:34 np0005465604 ceph-mon[74477]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 03:48:34 np0005465604 podman[91826]: 2025-10-02 07:48:34.609130553 +0000 UTC m=+0.066430553 container create 6df9ce81fa6f26f260708fdb2134303b1a23b990e472505041a93dc0da863829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_agnesi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:34 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qlmhsi(active, since 66s)
Oct  2 03:48:34 np0005465604 podman[91826]: 2025-10-02 07:48:34.583411557 +0000 UTC m=+0.040711587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:34 np0005465604 systemd[1]: Started libpod-conmon-6df9ce81fa6f26f260708fdb2134303b1a23b990e472505041a93dc0da863829.scope.
Oct  2 03:48:34 np0005465604 podman[91840]: 2025-10-02 07:48:34.681663336 +0000 UTC m=+0.072214524 container create 1b6ce2049b19cd5c67c358d8c2e9bfcaac6d93213e1d85e58a610676ff4110c5 (image=quay.io/ceph/ceph:v18, name=sad_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9b267c2f8209653ecaff85bb050dfeca2b0246f39db211b15f1a97a5b3be3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9b267c2f8209653ecaff85bb050dfeca2b0246f39db211b15f1a97a5b3be3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9b267c2f8209653ecaff85bb050dfeca2b0246f39db211b15f1a97a5b3be3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9b267c2f8209653ecaff85bb050dfeca2b0246f39db211b15f1a97a5b3be3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:34 np0005465604 systemd[1]: Started libpod-conmon-1b6ce2049b19cd5c67c358d8c2e9bfcaac6d93213e1d85e58a610676ff4110c5.scope.
Oct  2 03:48:34 np0005465604 podman[91826]: 2025-10-02 07:48:34.727419479 +0000 UTC m=+0.184719509 container init 6df9ce81fa6f26f260708fdb2134303b1a23b990e472505041a93dc0da863829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:34 np0005465604 podman[91826]: 2025-10-02 07:48:34.742417359 +0000 UTC m=+0.199717359 container start 6df9ce81fa6f26f260708fdb2134303b1a23b990e472505041a93dc0da863829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:34 np0005465604 podman[91840]: 2025-10-02 07:48:34.652188812 +0000 UTC m=+0.042740000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:34 np0005465604 podman[91826]: 2025-10-02 07:48:34.752061112 +0000 UTC m=+0.209361102 container attach 6df9ce81fa6f26f260708fdb2134303b1a23b990e472505041a93dc0da863829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_agnesi, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 03:48:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21581a9a379cb15e89a3c6d617153a19d0364085b105e996dfad9ff1b18e67d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21581a9a379cb15e89a3c6d617153a19d0364085b105e996dfad9ff1b18e67d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:34 np0005465604 podman[91840]: 2025-10-02 07:48:34.777255461 +0000 UTC m=+0.167806649 container init 1b6ce2049b19cd5c67c358d8c2e9bfcaac6d93213e1d85e58a610676ff4110c5 (image=quay.io/ceph/ceph:v18, name=sad_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:34 np0005465604 podman[91840]: 2025-10-02 07:48:34.785657875 +0000 UTC m=+0.176209053 container start 1b6ce2049b19cd5c67c358d8c2e9bfcaac6d93213e1d85e58a610676ff4110c5 (image=quay.io/ceph/ceph:v18, name=sad_lehmann, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 03:48:34 np0005465604 podman[91840]: 2025-10-02 07:48:34.797829036 +0000 UTC m=+0.188380224 container attach 1b6ce2049b19cd5c67c358d8c2e9bfcaac6d93213e1d85e58a610676ff4110c5 (image=quay.io/ceph/ceph:v18, name=sad_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 03:48:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct  2 03:48:35 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2198773173; not ready for session (expect reconnect)
Oct  2 03:48:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:35 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Oct  2 03:48:35 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Oct  2 03:48:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:35 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:35 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:48:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  2 03:48:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4246752954' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:35 np0005465604 ceph-mon[74477]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 03:48:35 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/4246752954' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:35 np0005465604 ceph-osd[90385]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.944 iops: 8689.738 elapsed_sec: 0.345
Oct  2 03:48:35 np0005465604 ceph-osd[90385]: log_channel(cluster) log [WRN] : OSD bench result of 8689.738165 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 03:48:35 np0005465604 ceph-osd[90385]: osd.2 0 waiting for initial osdmap
Oct  2 03:48:35 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2[90381]: 2025-10-02T07:48:35.732+0000 7f74b3382640 -1 osd.2 0 waiting for initial osdmap
Oct  2 03:48:35 np0005465604 ceph-osd[90385]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  2 03:48:35 np0005465604 ceph-osd[90385]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  2 03:48:35 np0005465604 ceph-osd[90385]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  2 03:48:35 np0005465604 ceph-osd[90385]: osd.2 16 check_osdmap_features require_osd_release unknown -> reef
Oct  2 03:48:35 np0005465604 ceph-osd[90385]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 03:48:35 np0005465604 ceph-osd[90385]: osd.2 16 set_numa_affinity not setting numa affinity
Oct  2 03:48:35 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-osd-2[90381]: 2025-10-02T07:48:35.756+0000 7f74ae9aa640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 03:48:35 np0005465604 ceph-osd[90385]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct  2 03:48:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v41: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]: [
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:    {
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:        "available": false,
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:        "ceph_device": false,
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:        "lsm_data": {},
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:        "lvs": [],
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:        "path": "/dev/sr0",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:        "rejected_reasons": [
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "Has a FileSystem",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "Insufficient space (<5GB)"
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:        ],
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:        "sys_api": {
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "actuators": null,
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "device_nodes": "sr0",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "devname": "sr0",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "human_readable_size": "482.00 KB",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "id_bus": "ata",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "model": "QEMU DVD-ROM",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "nr_requests": "2",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "parent": "/dev/sr0",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "partitions": {},
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "path": "/dev/sr0",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "removable": "1",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "rev": "2.5+",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "ro": "0",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "rotational": "0",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "sas_address": "",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "sas_device_handle": "",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "scheduler_mode": "mq-deadline",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "sectors": 0,
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "sectorsize": "2048",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "size": 493568.0,
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "support_discard": "2048",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "type": "disk",
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:            "vendor": "QEMU"
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:        }
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]:    }
Oct  2 03:48:36 np0005465604 gracious_agnesi[91855]: ]
Oct  2 03:48:36 np0005465604 systemd[1]: libpod-6df9ce81fa6f26f260708fdb2134303b1a23b990e472505041a93dc0da863829.scope: Deactivated successfully.
Oct  2 03:48:36 np0005465604 podman[91826]: 2025-10-02 07:48:36.122436381 +0000 UTC m=+1.579736371 container died 6df9ce81fa6f26f260708fdb2134303b1a23b990e472505041a93dc0da863829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 03:48:36 np0005465604 systemd[1]: libpod-6df9ce81fa6f26f260708fdb2134303b1a23b990e472505041a93dc0da863829.scope: Consumed 1.394s CPU time.
Oct  2 03:48:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-eb9b267c2f8209653ecaff85bb050dfeca2b0246f39db211b15f1a97a5b3be3c-merged.mount: Deactivated successfully.
Oct  2 03:48:36 np0005465604 ceph-mgr[74774]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2198773173; not ready for session (expect reconnect)
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mgr[74774]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4246752954' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Oct  2 03:48:36 np0005465604 sad_lehmann[91860]: pool 'backups' created
Oct  2 03:48:36 np0005465604 podman[91826]: 2025-10-02 07:48:36.189374099 +0000 UTC m=+1.646674089 container remove 6df9ce81fa6f26f260708fdb2134303b1a23b990e472505041a93dc0da863829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_agnesi, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2198773173,v1:192.168.122.100:6811/2198773173] boot
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:48:36 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=-1 lpr=17 pi=[13,17)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:48:36 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=-1 lpr=17 pi=[13,17)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:48:36 np0005465604 ceph-osd[90385]: osd.2 17 state: booting -> active
Oct  2 03:48:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[13,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:48:36 np0005465604 systemd[1]: libpod-conmon-6df9ce81fa6f26f260708fdb2134303b1a23b990e472505041a93dc0da863829.scope: Deactivated successfully.
Oct  2 03:48:36 np0005465604 systemd[1]: libpod-1b6ce2049b19cd5c67c358d8c2e9bfcaac6d93213e1d85e58a610676ff4110c5.scope: Deactivated successfully.
Oct  2 03:48:36 np0005465604 podman[91840]: 2025-10-02 07:48:36.210594313 +0000 UTC m=+1.601145491 container died 1b6ce2049b19cd5c67c358d8c2e9bfcaac6d93213e1d85e58a610676ff4110c5 (image=quay.io/ceph/ceph:v18, name=sad_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a21581a9a379cb15e89a3c6d617153a19d0364085b105e996dfad9ff1b18e67d-merged.mount: Deactivated successfully.
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43640k
Oct  2 03:48:36 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43640k
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mgr[74774]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44687633: error parsing value: Value '44687633' is below minimum 939524096
Oct  2 03:48:36 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44687633: error parsing value: Value '44687633' is below minimum 939524096
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:48:36 np0005465604 podman[91840]: 2025-10-02 07:48:36.254644834 +0000 UTC m=+1.645196012 container remove 1b6ce2049b19cd5c67c358d8c2e9bfcaac6d93213e1d85e58a610676ff4110c5 (image=quay.io/ceph/ceph:v18, name=sad_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c9c771f4-9447-4810-bc3c-1caa09dc4c26 does not exist
Oct  2 03:48:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 79770109-fa34-4577-9e93-c137d6a745fb does not exist
Oct  2 03:48:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a46e1184-9009-4901-80c1-5035eae0c6a2 does not exist
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:48:36 np0005465604 systemd[1]: libpod-conmon-1b6ce2049b19cd5c67c358d8c2e9bfcaac6d93213e1d85e58a610676ff4110c5.scope: Deactivated successfully.
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:36 np0005465604 python3[93633]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: OSD bench result of 8689.738165 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/4246752954' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: osd.2 [v2:192.168.122.100:6810/2198773173,v1:192.168.122.100:6811/2198773173] boot
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: Adjusting osd_memory_target on compute-0 to 43640k
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: Unable to set osd_memory_target on compute-0 to 44687633: error parsing value: Value '44687633' is below minimum 939524096
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:48:36 np0005465604 podman[93661]: 2025-10-02 07:48:36.656235007 +0000 UTC m=+0.057535393 container create ea309d2f2e5dbc9ded004a1650881d1cff0cbd1a2c0960e907369f56058268b9 (image=quay.io/ceph/ceph:v18, name=jolly_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 03:48:36 np0005465604 systemd[1]: Started libpod-conmon-ea309d2f2e5dbc9ded004a1650881d1cff0cbd1a2c0960e907369f56058268b9.scope.
Oct  2 03:48:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ec2d837fe9686051325e195a07305da465b86996b8ea6a6057928316c495b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ec2d837fe9686051325e195a07305da465b86996b8ea6a6057928316c495b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:36 np0005465604 podman[93661]: 2025-10-02 07:48:36.724778355 +0000 UTC m=+0.126078761 container init ea309d2f2e5dbc9ded004a1650881d1cff0cbd1a2c0960e907369f56058268b9 (image=quay.io/ceph/ceph:v18, name=jolly_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct  2 03:48:36 np0005465604 podman[93661]: 2025-10-02 07:48:36.638316236 +0000 UTC m=+0.039616642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:36 np0005465604 podman[93661]: 2025-10-02 07:48:36.736003337 +0000 UTC m=+0.137303723 container start ea309d2f2e5dbc9ded004a1650881d1cff0cbd1a2c0960e907369f56058268b9 (image=quay.io/ceph/ceph:v18, name=jolly_turing, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:36 np0005465604 podman[93661]: 2025-10-02 07:48:36.739283179 +0000 UTC m=+0.140583635 container attach ea309d2f2e5dbc9ded004a1650881d1cff0cbd1a2c0960e907369f56058268b9 (image=quay.io/ceph/ceph:v18, name=jolly_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:36 np0005465604 podman[93720]: 2025-10-02 07:48:36.85258493 +0000 UTC m=+0.045724844 container create e00db59efb6cbf16699eebbf12cd4dc2e1e7a90d03025f0ab96693f29d389de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:36 np0005465604 systemd[1]: Started libpod-conmon-e00db59efb6cbf16699eebbf12cd4dc2e1e7a90d03025f0ab96693f29d389de1.scope.
Oct  2 03:48:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:36 np0005465604 podman[93720]: 2025-10-02 07:48:36.9186529 +0000 UTC m=+0.111792824 container init e00db59efb6cbf16699eebbf12cd4dc2e1e7a90d03025f0ab96693f29d389de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 03:48:36 np0005465604 podman[93720]: 2025-10-02 07:48:36.827911256 +0000 UTC m=+0.021051210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:36 np0005465604 podman[93720]: 2025-10-02 07:48:36.925054251 +0000 UTC m=+0.118194205 container start e00db59efb6cbf16699eebbf12cd4dc2e1e7a90d03025f0ab96693f29d389de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:36 np0005465604 stoic_wing[93737]: 167 167
Oct  2 03:48:36 np0005465604 systemd[1]: libpod-e00db59efb6cbf16699eebbf12cd4dc2e1e7a90d03025f0ab96693f29d389de1.scope: Deactivated successfully.
Oct  2 03:48:36 np0005465604 podman[93720]: 2025-10-02 07:48:36.931346118 +0000 UTC m=+0.124486132 container attach e00db59efb6cbf16699eebbf12cd4dc2e1e7a90d03025f0ab96693f29d389de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 03:48:36 np0005465604 podman[93720]: 2025-10-02 07:48:36.931805362 +0000 UTC m=+0.124945316 container died e00db59efb6cbf16699eebbf12cd4dc2e1e7a90d03025f0ab96693f29d389de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 03:48:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-cd484eb2d4cfafdf77611a485ebab550a2e861df9c0f843aac7733dcc025084a-merged.mount: Deactivated successfully.
Oct  2 03:48:36 np0005465604 podman[93720]: 2025-10-02 07:48:36.977824294 +0000 UTC m=+0.170964208 container remove e00db59efb6cbf16699eebbf12cd4dc2e1e7a90d03025f0ab96693f29d389de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:36 np0005465604 systemd[1]: libpod-conmon-e00db59efb6cbf16699eebbf12cd4dc2e1e7a90d03025f0ab96693f29d389de1.scope: Deactivated successfully.
Oct  2 03:48:37 np0005465604 podman[93780]: 2025-10-02 07:48:37.144358422 +0000 UTC m=+0.048311025 container create 697ea7970a2ac79ab3fb4cdd0934552258aee9f5be69d5edd6d5e5d5ce9fef86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct  2 03:48:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct  2 03:48:37 np0005465604 systemd[1]: Started libpod-conmon-697ea7970a2ac79ab3fb4cdd0934552258aee9f5be69d5edd6d5e5d5ce9fef86.scope.
Oct  2 03:48:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct  2 03:48:37 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct  2 03:48:37 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[13,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:48:37 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:48:37 np0005465604 podman[93780]: 2025-10-02 07:48:37.124050586 +0000 UTC m=+0.028003229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8b9d6afebb965fdeb82527a30e87075c3b1dbabe59d653610aeb73c1e34533/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8b9d6afebb965fdeb82527a30e87075c3b1dbabe59d653610aeb73c1e34533/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8b9d6afebb965fdeb82527a30e87075c3b1dbabe59d653610aeb73c1e34533/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8b9d6afebb965fdeb82527a30e87075c3b1dbabe59d653610aeb73c1e34533/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8b9d6afebb965fdeb82527a30e87075c3b1dbabe59d653610aeb73c1e34533/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:37 np0005465604 podman[93780]: 2025-10-02 07:48:37.244112168 +0000 UTC m=+0.148064811 container init 697ea7970a2ac79ab3fb4cdd0934552258aee9f5be69d5edd6d5e5d5ce9fef86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 03:48:37 np0005465604 podman[93780]: 2025-10-02 07:48:37.25313945 +0000 UTC m=+0.157092093 container start 697ea7970a2ac79ab3fb4cdd0934552258aee9f5be69d5edd6d5e5d5ce9fef86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_borg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:37 np0005465604 podman[93780]: 2025-10-02 07:48:37.256999302 +0000 UTC m=+0.160951945 container attach 697ea7970a2ac79ab3fb4cdd0934552258aee9f5be69d5edd6d5e5d5ce9fef86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 03:48:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  2 03:48:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3597851401' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v44: 4 pgs: 1 creating+peering, 2 active+clean, 1 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct  2 03:48:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3597851401' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct  2 03:48:38 np0005465604 jolly_turing[93703]: pool 'images' created
Oct  2 03:48:38 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct  2 03:48:38 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:48:38 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3597851401' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:38 np0005465604 systemd[1]: libpod-ea309d2f2e5dbc9ded004a1650881d1cff0cbd1a2c0960e907369f56058268b9.scope: Deactivated successfully.
Oct  2 03:48:38 np0005465604 conmon[93703]: conmon ea309d2f2e5dbc9ded00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea309d2f2e5dbc9ded004a1650881d1cff0cbd1a2c0960e907369f56058268b9.scope/container/memory.events
Oct  2 03:48:38 np0005465604 podman[93661]: 2025-10-02 07:48:38.222506745 +0000 UTC m=+1.623807131 container died ea309d2f2e5dbc9ded004a1650881d1cff0cbd1a2c0960e907369f56058268b9 (image=quay.io/ceph/ceph:v18, name=jolly_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 03:48:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d1ec2d837fe9686051325e195a07305da465b86996b8ea6a6057928316c495b6-merged.mount: Deactivated successfully.
Oct  2 03:48:38 np0005465604 podman[93661]: 2025-10-02 07:48:38.265383409 +0000 UTC m=+1.666683795 container remove ea309d2f2e5dbc9ded004a1650881d1cff0cbd1a2c0960e907369f56058268b9 (image=quay.io/ceph/ceph:v18, name=jolly_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:38 np0005465604 systemd[1]: libpod-conmon-ea309d2f2e5dbc9ded004a1650881d1cff0cbd1a2c0960e907369f56058268b9.scope: Deactivated successfully.
Oct  2 03:48:38 np0005465604 vigilant_borg[93797]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:48:38 np0005465604 vigilant_borg[93797]: --> relative data size: 1.0
Oct  2 03:48:38 np0005465604 vigilant_borg[93797]: --> All data devices are unavailable
Oct  2 03:48:38 np0005465604 systemd[1]: libpod-697ea7970a2ac79ab3fb4cdd0934552258aee9f5be69d5edd6d5e5d5ce9fef86.scope: Deactivated successfully.
Oct  2 03:48:38 np0005465604 systemd[1]: libpod-697ea7970a2ac79ab3fb4cdd0934552258aee9f5be69d5edd6d5e5d5ce9fef86.scope: Consumed 1.030s CPU time.
Oct  2 03:48:38 np0005465604 conmon[93797]: conmon 697ea7970a2ac79ab3fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-697ea7970a2ac79ab3fb4cdd0934552258aee9f5be69d5edd6d5e5d5ce9fef86.scope/container/memory.events
Oct  2 03:48:38 np0005465604 podman[93780]: 2025-10-02 07:48:38.333644227 +0000 UTC m=+1.237596870 container died 697ea7970a2ac79ab3fb4cdd0934552258aee9f5be69d5edd6d5e5d5ce9fef86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 03:48:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay-df8b9d6afebb965fdeb82527a30e87075c3b1dbabe59d653610aeb73c1e34533-merged.mount: Deactivated successfully.
Oct  2 03:48:38 np0005465604 podman[93780]: 2025-10-02 07:48:38.394862586 +0000 UTC m=+1.298815199 container remove 697ea7970a2ac79ab3fb4cdd0934552258aee9f5be69d5edd6d5e5d5ce9fef86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 03:48:38 np0005465604 systemd[1]: libpod-conmon-697ea7970a2ac79ab3fb4cdd0934552258aee9f5be69d5edd6d5e5d5ce9fef86.scope: Deactivated successfully.
Oct  2 03:48:38 np0005465604 python3[93887]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:38 np0005465604 podman[93954]: 2025-10-02 07:48:38.680847617 +0000 UTC m=+0.056772421 container create b11b0195e5b83bc4c9b2cab7cb6e6edca44ae57d7da80f234423265d11d2e3e8 (image=quay.io/ceph/ceph:v18, name=musing_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 03:48:38 np0005465604 systemd[1]: Started libpod-conmon-b11b0195e5b83bc4c9b2cab7cb6e6edca44ae57d7da80f234423265d11d2e3e8.scope.
Oct  2 03:48:38 np0005465604 podman[93954]: 2025-10-02 07:48:38.649503805 +0000 UTC m=+0.025428659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5a69d3890adc0e5e3b085be38e5a8a8cc3fd9c9033e47c19545ac955ebaaf4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5a69d3890adc0e5e3b085be38e5a8a8cc3fd9c9033e47c19545ac955ebaaf4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:38 np0005465604 podman[93954]: 2025-10-02 07:48:38.771020283 +0000 UTC m=+0.146945057 container init b11b0195e5b83bc4c9b2cab7cb6e6edca44ae57d7da80f234423265d11d2e3e8 (image=quay.io/ceph/ceph:v18, name=musing_ride, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 03:48:38 np0005465604 podman[93954]: 2025-10-02 07:48:38.777403132 +0000 UTC m=+0.153327936 container start b11b0195e5b83bc4c9b2cab7cb6e6edca44ae57d7da80f234423265d11d2e3e8 (image=quay.io/ceph/ceph:v18, name=musing_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 03:48:38 np0005465604 podman[93954]: 2025-10-02 07:48:38.78116738 +0000 UTC m=+0.157092254 container attach b11b0195e5b83bc4c9b2cab7cb6e6edca44ae57d7da80f234423265d11d2e3e8 (image=quay.io/ceph/ceph:v18, name=musing_ride, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:38 np0005465604 podman[94036]: 2025-10-02 07:48:38.969158221 +0000 UTC m=+0.035155333 container create da405ce6a00d39d068008aa7af06585aafa09993b9a7261009b60cfdc4ac9743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 03:48:38 np0005465604 systemd[1]: Started libpod-conmon-da405ce6a00d39d068008aa7af06585aafa09993b9a7261009b60cfdc4ac9743.scope.
Oct  2 03:48:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:39 np0005465604 podman[94036]: 2025-10-02 07:48:39.042454117 +0000 UTC m=+0.108451249 container init da405ce6a00d39d068008aa7af06585aafa09993b9a7261009b60cfdc4ac9743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 03:48:39 np0005465604 podman[94036]: 2025-10-02 07:48:38.952365254 +0000 UTC m=+0.018362366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:39 np0005465604 podman[94036]: 2025-10-02 07:48:39.052288596 +0000 UTC m=+0.118285708 container start da405ce6a00d39d068008aa7af06585aafa09993b9a7261009b60cfdc4ac9743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kalam, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:39 np0005465604 podman[94036]: 2025-10-02 07:48:39.055949651 +0000 UTC m=+0.121946793 container attach da405ce6a00d39d068008aa7af06585aafa09993b9a7261009b60cfdc4ac9743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kalam, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:39 np0005465604 cool_kalam[94052]: 167 167
Oct  2 03:48:39 np0005465604 systemd[1]: libpod-da405ce6a00d39d068008aa7af06585aafa09993b9a7261009b60cfdc4ac9743.scope: Deactivated successfully.
Oct  2 03:48:39 np0005465604 podman[94036]: 2025-10-02 07:48:39.058724168 +0000 UTC m=+0.124721290 container died da405ce6a00d39d068008aa7af06585aafa09993b9a7261009b60cfdc4ac9743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kalam, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-46fa80f01d620eedcf526de7c05721355fcafdddbe7d5eb7330a4e77859c7ba8-merged.mount: Deactivated successfully.
Oct  2 03:48:39 np0005465604 podman[94036]: 2025-10-02 07:48:39.101661682 +0000 UTC m=+0.167658824 container remove da405ce6a00d39d068008aa7af06585aafa09993b9a7261009b60cfdc4ac9743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:39 np0005465604 systemd[1]: libpod-conmon-da405ce6a00d39d068008aa7af06585aafa09993b9a7261009b60cfdc4ac9743.scope: Deactivated successfully.
Oct  2 03:48:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct  2 03:48:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct  2 03:48:39 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct  2 03:48:39 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3597851401' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:39 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:48:39 np0005465604 podman[94095]: 2025-10-02 07:48:39.302257148 +0000 UTC m=+0.051009660 container create be7a5b4c1165252efe00513a089b7600af42100320abe0e10dd53127c7e98134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cohen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:39 np0005465604 systemd[1]: Started libpod-conmon-be7a5b4c1165252efe00513a089b7600af42100320abe0e10dd53127c7e98134.scope.
Oct  2 03:48:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  2 03:48:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3536807752' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d7ab1c54e212e39662365ed089014b08535e145df3be6511509bcf77304144/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d7ab1c54e212e39662365ed089014b08535e145df3be6511509bcf77304144/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d7ab1c54e212e39662365ed089014b08535e145df3be6511509bcf77304144/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:39 np0005465604 podman[94095]: 2025-10-02 07:48:39.277033748 +0000 UTC m=+0.025786290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d7ab1c54e212e39662365ed089014b08535e145df3be6511509bcf77304144/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:39 np0005465604 podman[94095]: 2025-10-02 07:48:39.386653163 +0000 UTC m=+0.135405685 container init be7a5b4c1165252efe00513a089b7600af42100320abe0e10dd53127c7e98134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:39 np0005465604 podman[94095]: 2025-10-02 07:48:39.394836789 +0000 UTC m=+0.143589271 container start be7a5b4c1165252efe00513a089b7600af42100320abe0e10dd53127c7e98134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:39 np0005465604 podman[94095]: 2025-10-02 07:48:39.399129363 +0000 UTC m=+0.147881895 container attach be7a5b4c1165252efe00513a089b7600af42100320abe0e10dd53127c7e98134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cohen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v47: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]: {
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:    "0": [
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:        {
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "devices": [
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "/dev/loop3"
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            ],
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_name": "ceph_lv0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_size": "21470642176",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "name": "ceph_lv0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "tags": {
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.cluster_name": "ceph",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.crush_device_class": "",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.encrypted": "0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.osd_id": "0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.type": "block",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.vdo": "0"
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            },
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "type": "block",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "vg_name": "ceph_vg0"
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:        }
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:    ],
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:    "1": [
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:        {
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "devices": [
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "/dev/loop4"
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            ],
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_name": "ceph_lv1",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_size": "21470642176",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "name": "ceph_lv1",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "tags": {
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.cluster_name": "ceph",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.crush_device_class": "",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.encrypted": "0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.osd_id": "1",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.type": "block",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.vdo": "0"
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            },
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "type": "block",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "vg_name": "ceph_vg1"
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:        }
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:    ],
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:    "2": [
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:        {
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "devices": [
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "/dev/loop5"
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            ],
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_name": "ceph_lv2",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_size": "21470642176",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "name": "ceph_lv2",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "tags": {
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.cluster_name": "ceph",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.crush_device_class": "",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.encrypted": "0",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.osd_id": "2",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.type": "block",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:                "ceph.vdo": "0"
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            },
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "type": "block",
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:            "vg_name": "ceph_vg2"
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:        }
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]:    ]
Oct  2 03:48:40 np0005465604 agitated_cohen[94111]: }
Oct  2 03:48:40 np0005465604 systemd[1]: libpod-be7a5b4c1165252efe00513a089b7600af42100320abe0e10dd53127c7e98134.scope: Deactivated successfully.
Oct  2 03:48:40 np0005465604 ceph-mon[74477]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 03:48:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct  2 03:48:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3536807752' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct  2 03:48:40 np0005465604 musing_ride[93991]: pool 'cephfs.cephfs.meta' created
Oct  2 03:48:40 np0005465604 podman[94123]: 2025-10-02 07:48:40.234402056 +0000 UTC m=+0.038195928 container died be7a5b4c1165252efe00513a089b7600af42100320abe0e10dd53127c7e98134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cohen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:40 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3536807752' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:40 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct  2 03:48:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c2d7ab1c54e212e39662365ed089014b08535e145df3be6511509bcf77304144-merged.mount: Deactivated successfully.
Oct  2 03:48:40 np0005465604 systemd[1]: libpod-b11b0195e5b83bc4c9b2cab7cb6e6edca44ae57d7da80f234423265d11d2e3e8.scope: Deactivated successfully.
Oct  2 03:48:40 np0005465604 podman[94123]: 2025-10-02 07:48:40.294341464 +0000 UTC m=+0.098135266 container remove be7a5b4c1165252efe00513a089b7600af42100320abe0e10dd53127c7e98134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cohen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 03:48:40 np0005465604 systemd[1]: libpod-conmon-be7a5b4c1165252efe00513a089b7600af42100320abe0e10dd53127c7e98134.scope: Deactivated successfully.
Oct  2 03:48:40 np0005465604 podman[94139]: 2025-10-02 07:48:40.305998199 +0000 UTC m=+0.031473447 container died b11b0195e5b83bc4c9b2cab7cb6e6edca44ae57d7da80f234423265d11d2e3e8 (image=quay.io/ceph/ceph:v18, name=musing_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8b5a69d3890adc0e5e3b085be38e5a8a8cc3fd9c9033e47c19545ac955ebaaf4-merged.mount: Deactivated successfully.
Oct  2 03:48:40 np0005465604 podman[94139]: 2025-10-02 07:48:40.346939312 +0000 UTC m=+0.072414530 container remove b11b0195e5b83bc4c9b2cab7cb6e6edca44ae57d7da80f234423265d11d2e3e8 (image=quay.io/ceph/ceph:v18, name=musing_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 03:48:40 np0005465604 systemd[1]: libpod-conmon-b11b0195e5b83bc4c9b2cab7cb6e6edca44ae57d7da80f234423265d11d2e3e8.scope: Deactivated successfully.
Oct  2 03:48:40 np0005465604 python3[94250]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:40 np0005465604 podman[94278]: 2025-10-02 07:48:40.756868297 +0000 UTC m=+0.066792064 container create 3d4c01b21f1700183da3c29c65585d52e4edaf52da556be04cd099aebdf80b25 (image=quay.io/ceph/ceph:v18, name=confident_lumiere, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:40 np0005465604 systemd[1]: Started libpod-conmon-3d4c01b21f1700183da3c29c65585d52e4edaf52da556be04cd099aebdf80b25.scope.
Oct  2 03:48:40 np0005465604 podman[94278]: 2025-10-02 07:48:40.733430423 +0000 UTC m=+0.043354250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca3d31862615b32c3ac8cf68dd0778f9be17735b65d57829710e7a842348301/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca3d31862615b32c3ac8cf68dd0778f9be17735b65d57829710e7a842348301/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:40 np0005465604 podman[94278]: 2025-10-02 07:48:40.856324254 +0000 UTC m=+0.166248011 container init 3d4c01b21f1700183da3c29c65585d52e4edaf52da556be04cd099aebdf80b25 (image=quay.io/ceph/ceph:v18, name=confident_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 03:48:40 np0005465604 podman[94278]: 2025-10-02 07:48:40.866912495 +0000 UTC m=+0.176836252 container start 3d4c01b21f1700183da3c29c65585d52e4edaf52da556be04cd099aebdf80b25 (image=quay.io/ceph/ceph:v18, name=confident_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 03:48:40 np0005465604 podman[94278]: 2025-10-02 07:48:40.870333262 +0000 UTC m=+0.180257029 container attach 3d4c01b21f1700183da3c29c65585d52e4edaf52da556be04cd099aebdf80b25 (image=quay.io/ceph/ceph:v18, name=confident_lumiere, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 03:48:41 np0005465604 podman[94337]: 2025-10-02 07:48:41.000880033 +0000 UTC m=+0.041496491 container create 938fad356e981e58ab08b6fe4197e0bdd5384a04d50453c856dfd9c2532df228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_payne, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 03:48:41 np0005465604 systemd[1]: Started libpod-conmon-938fad356e981e58ab08b6fe4197e0bdd5384a04d50453c856dfd9c2532df228.scope.
Oct  2 03:48:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:41 np0005465604 podman[94337]: 2025-10-02 07:48:41.044651105 +0000 UTC m=+0.085267593 container init 938fad356e981e58ab08b6fe4197e0bdd5384a04d50453c856dfd9c2532df228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:41 np0005465604 podman[94337]: 2025-10-02 07:48:41.049898269 +0000 UTC m=+0.090514707 container start 938fad356e981e58ab08b6fe4197e0bdd5384a04d50453c856dfd9c2532df228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:41 np0005465604 silly_payne[94353]: 167 167
Oct  2 03:48:41 np0005465604 podman[94337]: 2025-10-02 07:48:41.053145911 +0000 UTC m=+0.093762399 container attach 938fad356e981e58ab08b6fe4197e0bdd5384a04d50453c856dfd9c2532df228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 03:48:41 np0005465604 systemd[1]: libpod-938fad356e981e58ab08b6fe4197e0bdd5384a04d50453c856dfd9c2532df228.scope: Deactivated successfully.
Oct  2 03:48:41 np0005465604 podman[94337]: 2025-10-02 07:48:40.982742995 +0000 UTC m=+0.023359453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:41 np0005465604 podman[94358]: 2025-10-02 07:48:41.089911283 +0000 UTC m=+0.023184748 container died 938fad356e981e58ab08b6fe4197e0bdd5384a04d50453c856dfd9c2532df228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:41 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:48:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-13f1ef07f0437441fef7dd7e3a89b2d7d0cd85c02c9e88fff32dc60acf173e6d-merged.mount: Deactivated successfully.
Oct  2 03:48:41 np0005465604 podman[94358]: 2025-10-02 07:48:41.125775997 +0000 UTC m=+0.059049442 container remove 938fad356e981e58ab08b6fe4197e0bdd5384a04d50453c856dfd9c2532df228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_payne, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:41 np0005465604 systemd[1]: libpod-conmon-938fad356e981e58ab08b6fe4197e0bdd5384a04d50453c856dfd9c2532df228.scope: Deactivated successfully.
Oct  2 03:48:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct  2 03:48:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct  2 03:48:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct  2 03:48:41 np0005465604 ceph-mon[74477]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 03:48:41 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3536807752' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:41 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:48:41 np0005465604 podman[94399]: 2025-10-02 07:48:41.274998662 +0000 UTC m=+0.043330838 container create bdf85cb003457aaa056796c57c5e9e99e430622d430f0860c81cda2e5df6e869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct  2 03:48:41 np0005465604 systemd[1]: Started libpod-conmon-bdf85cb003457aaa056796c57c5e9e99e430622d430f0860c81cda2e5df6e869.scope.
Oct  2 03:48:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d752ec75e15ccee39d390c4cdbde2657b413ab943f0c5850734643f104053d53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:41 np0005465604 podman[94399]: 2025-10-02 07:48:41.252983533 +0000 UTC m=+0.021315749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d752ec75e15ccee39d390c4cdbde2657b413ab943f0c5850734643f104053d53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d752ec75e15ccee39d390c4cdbde2657b413ab943f0c5850734643f104053d53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d752ec75e15ccee39d390c4cdbde2657b413ab943f0c5850734643f104053d53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:41 np0005465604 podman[94399]: 2025-10-02 07:48:41.374526021 +0000 UTC m=+0.142858207 container init bdf85cb003457aaa056796c57c5e9e99e430622d430f0860c81cda2e5df6e869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  2 03:48:41 np0005465604 podman[94399]: 2025-10-02 07:48:41.380990614 +0000 UTC m=+0.149322810 container start bdf85cb003457aaa056796c57c5e9e99e430622d430f0860c81cda2e5df6e869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 03:48:41 np0005465604 podman[94399]: 2025-10-02 07:48:41.384441622 +0000 UTC m=+0.152773788 container attach bdf85cb003457aaa056796c57c5e9e99e430622d430f0860c81cda2e5df6e869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  2 03:48:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3675602487' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v50: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct  2 03:48:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3675602487' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct  2 03:48:42 np0005465604 confident_lumiere[94316]: pool 'cephfs.cephfs.data' created
Oct  2 03:48:42 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct  2 03:48:42 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3675602487' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 03:48:42 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:48:42 np0005465604 systemd[1]: libpod-3d4c01b21f1700183da3c29c65585d52e4edaf52da556be04cd099aebdf80b25.scope: Deactivated successfully.
Oct  2 03:48:42 np0005465604 podman[94278]: 2025-10-02 07:48:42.288235222 +0000 UTC m=+1.598158979 container died 3d4c01b21f1700183da3c29c65585d52e4edaf52da556be04cd099aebdf80b25 (image=quay.io/ceph/ceph:v18, name=confident_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6ca3d31862615b32c3ac8cf68dd0778f9be17735b65d57829710e7a842348301-merged.mount: Deactivated successfully.
Oct  2 03:48:42 np0005465604 podman[94278]: 2025-10-02 07:48:42.338364442 +0000 UTC m=+1.648288199 container remove 3d4c01b21f1700183da3c29c65585d52e4edaf52da556be04cd099aebdf80b25 (image=quay.io/ceph/ceph:v18, name=confident_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]: {
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "osd_id": 2,
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "type": "bluestore"
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:    },
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "osd_id": 1,
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "type": "bluestore"
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:    },
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "osd_id": 0,
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:        "type": "bluestore"
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]:    }
Oct  2 03:48:42 np0005465604 quirky_ptolemy[94416]: }
Oct  2 03:48:42 np0005465604 systemd[1]: libpod-conmon-3d4c01b21f1700183da3c29c65585d52e4edaf52da556be04cd099aebdf80b25.scope: Deactivated successfully.
Oct  2 03:48:42 np0005465604 systemd[1]: libpod-bdf85cb003457aaa056796c57c5e9e99e430622d430f0860c81cda2e5df6e869.scope: Deactivated successfully.
Oct  2 03:48:42 np0005465604 systemd[1]: libpod-bdf85cb003457aaa056796c57c5e9e99e430622d430f0860c81cda2e5df6e869.scope: Consumed 1.000s CPU time.
Oct  2 03:48:42 np0005465604 podman[94399]: 2025-10-02 07:48:42.385733916 +0000 UTC m=+1.154066133 container died bdf85cb003457aaa056796c57c5e9e99e430622d430f0860c81cda2e5df6e869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d752ec75e15ccee39d390c4cdbde2657b413ab943f0c5850734643f104053d53-merged.mount: Deactivated successfully.
Oct  2 03:48:42 np0005465604 podman[94399]: 2025-10-02 07:48:42.445098386 +0000 UTC m=+1.213430562 container remove bdf85cb003457aaa056796c57c5e9e99e430622d430f0860c81cda2e5df6e869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:42 np0005465604 systemd[1]: libpod-conmon-bdf85cb003457aaa056796c57c5e9e99e430622d430f0860c81cda2e5df6e869.scope: Deactivated successfully.
Oct  2 03:48:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:42 np0005465604 python3[94528]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:42 np0005465604 podman[94581]: 2025-10-02 07:48:42.785696899 +0000 UTC m=+0.037149935 container create 17a92c9aff33cdffe0c7f1eb92f9962a25e303d0f38107d72e60582f9632450f (image=quay.io/ceph/ceph:v18, name=ecstatic_villani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:42 np0005465604 systemd[1]: Started libpod-conmon-17a92c9aff33cdffe0c7f1eb92f9962a25e303d0f38107d72e60582f9632450f.scope.
Oct  2 03:48:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:42 np0005465604 podman[94581]: 2025-10-02 07:48:42.770353108 +0000 UTC m=+0.021806154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e793ab309b2ae70eb25c57ac6ce4102ac25fa77c06accf2c12df8fff74a0733/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e793ab309b2ae70eb25c57ac6ce4102ac25fa77c06accf2c12df8fff74a0733/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:42 np0005465604 podman[94581]: 2025-10-02 07:48:42.880451388 +0000 UTC m=+0.131904454 container init 17a92c9aff33cdffe0c7f1eb92f9962a25e303d0f38107d72e60582f9632450f (image=quay.io/ceph/ceph:v18, name=ecstatic_villani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 03:48:42 np0005465604 podman[94581]: 2025-10-02 07:48:42.88881167 +0000 UTC m=+0.140264706 container start 17a92c9aff33cdffe0c7f1eb92f9962a25e303d0f38107d72e60582f9632450f (image=quay.io/ceph/ceph:v18, name=ecstatic_villani, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:42 np0005465604 podman[94581]: 2025-10-02 07:48:42.892409113 +0000 UTC m=+0.143862199 container attach 17a92c9aff33cdffe0c7f1eb92f9962a25e303d0f38107d72e60582f9632450f (image=quay.io/ceph/ceph:v18, name=ecstatic_villani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3675602487' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct  2 03:48:43 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2565558450' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  2 03:48:43 np0005465604 podman[94765]: 2025-10-02 07:48:43.430165343 +0000 UTC m=+0.074283069 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:43 np0005465604 podman[94765]: 2025-10-02 07:48:43.520330858 +0000 UTC m=+0.164448624 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v53: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/2565558450' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2565558450' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct  2 03:48:44 np0005465604 ecstatic_villani[94632]: enabled application 'rbd' on pool 'vms'
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct  2 03:48:44 np0005465604 systemd[1]: libpod-17a92c9aff33cdffe0c7f1eb92f9962a25e303d0f38107d72e60582f9632450f.scope: Deactivated successfully.
Oct  2 03:48:44 np0005465604 podman[94581]: 2025-10-02 07:48:44.326500708 +0000 UTC m=+1.577953764 container died 17a92c9aff33cdffe0c7f1eb92f9962a25e303d0f38107d72e60582f9632450f (image=quay.io/ceph/ceph:v18, name=ecstatic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6e793ab309b2ae70eb25c57ac6ce4102ac25fa77c06accf2c12df8fff74a0733-merged.mount: Deactivated successfully.
Oct  2 03:48:44 np0005465604 podman[94581]: 2025-10-02 07:48:44.368898608 +0000 UTC m=+1.620351634 container remove 17a92c9aff33cdffe0c7f1eb92f9962a25e303d0f38107d72e60582f9632450f (image=quay.io/ceph/ceph:v18, name=ecstatic_villani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 03:48:44 np0005465604 systemd[1]: libpod-conmon-17a92c9aff33cdffe0c7f1eb92f9962a25e303d0f38107d72e60582f9632450f.scope: Deactivated successfully.
Oct  2 03:48:44 np0005465604 python3[95046]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:48:44 np0005465604 podman[95058]: 2025-10-02 07:48:44.756264525 +0000 UTC m=+0.053188098 container create 3bf3fb4de95c59c9a232b971e826e74bce58170129f648268380dc9f7e65b42b (image=quay.io/ceph/ceph:v18, name=angry_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ea350d39-9c82-43a5-b3b2-283a7726ef64 does not exist
Oct  2 03:48:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev cc012ac5-243b-41b7-bafb-cb243bc598d0 does not exist
Oct  2 03:48:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6f9efbab-17e3-4fb1-b0bd-bb1396837e05 does not exist
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:48:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:48:44 np0005465604 systemd[1]: Started libpod-conmon-3bf3fb4de95c59c9a232b971e826e74bce58170129f648268380dc9f7e65b42b.scope.
Oct  2 03:48:44 np0005465604 podman[95058]: 2025-10-02 07:48:44.727885135 +0000 UTC m=+0.024808808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09942fc8b7e8a4a20e7d9accd3201438828214be8686e9a90f93cfed1cfc2c3e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09942fc8b7e8a4a20e7d9accd3201438828214be8686e9a90f93cfed1cfc2c3e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:44 np0005465604 podman[95058]: 2025-10-02 07:48:44.849237308 +0000 UTC m=+0.146160901 container init 3bf3fb4de95c59c9a232b971e826e74bce58170129f648268380dc9f7e65b42b (image=quay.io/ceph/ceph:v18, name=angry_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct  2 03:48:44 np0005465604 podman[95058]: 2025-10-02 07:48:44.858060185 +0000 UTC m=+0.154983748 container start 3bf3fb4de95c59c9a232b971e826e74bce58170129f648268380dc9f7e65b42b (image=quay.io/ceph/ceph:v18, name=angry_bell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 03:48:44 np0005465604 podman[95058]: 2025-10-02 07:48:44.861629897 +0000 UTC m=+0.158553470 container attach 3bf3fb4de95c59c9a232b971e826e74bce58170129f648268380dc9f7e65b42b (image=quay.io/ceph/ceph:v18, name=angry_bell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 03:48:45 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/2565558450' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  2 03:48:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:48:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:48:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct  2 03:48:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/359839438' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  2 03:48:45 np0005465604 podman[95236]: 2025-10-02 07:48:45.432800674 +0000 UTC m=+0.058820674 container create f5b6400ef96ed5711b7437702fa892f723f319fe3551920b5b673dc7d514db1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:45 np0005465604 systemd[1]: Started libpod-conmon-f5b6400ef96ed5711b7437702fa892f723f319fe3551920b5b673dc7d514db1d.scope.
Oct  2 03:48:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:45 np0005465604 podman[95236]: 2025-10-02 07:48:45.403528496 +0000 UTC m=+0.029548486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:45 np0005465604 podman[95236]: 2025-10-02 07:48:45.497718998 +0000 UTC m=+0.123739038 container init f5b6400ef96ed5711b7437702fa892f723f319fe3551920b5b673dc7d514db1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_torvalds, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 03:48:45 np0005465604 podman[95236]: 2025-10-02 07:48:45.503915912 +0000 UTC m=+0.129935912 container start f5b6400ef96ed5711b7437702fa892f723f319fe3551920b5b673dc7d514db1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_torvalds, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 03:48:45 np0005465604 awesome_torvalds[95253]: 167 167
Oct  2 03:48:45 np0005465604 systemd[1]: libpod-f5b6400ef96ed5711b7437702fa892f723f319fe3551920b5b673dc7d514db1d.scope: Deactivated successfully.
Oct  2 03:48:45 np0005465604 podman[95236]: 2025-10-02 07:48:45.50894464 +0000 UTC m=+0.134964640 container attach f5b6400ef96ed5711b7437702fa892f723f319fe3551920b5b673dc7d514db1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 03:48:45 np0005465604 podman[95236]: 2025-10-02 07:48:45.509391534 +0000 UTC m=+0.135411574 container died f5b6400ef96ed5711b7437702fa892f723f319fe3551920b5b673dc7d514db1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:45 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3802cec903c34bfb124d5ba048f328689997898f76498457089c9fc5ab9f781e-merged.mount: Deactivated successfully.
Oct  2 03:48:45 np0005465604 podman[95236]: 2025-10-02 07:48:45.552744712 +0000 UTC m=+0.178764712 container remove f5b6400ef96ed5711b7437702fa892f723f319fe3551920b5b673dc7d514db1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_torvalds, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:45 np0005465604 systemd[1]: libpod-conmon-f5b6400ef96ed5711b7437702fa892f723f319fe3551920b5b673dc7d514db1d.scope: Deactivated successfully.
Oct  2 03:48:45 np0005465604 podman[95278]: 2025-10-02 07:48:45.744519841 +0000 UTC m=+0.038464027 container create faa7a20afb88686e1207862e0e7f943e43dae0152d102c67317b60c17d6735b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:45 np0005465604 systemd[1]: Started libpod-conmon-faa7a20afb88686e1207862e0e7f943e43dae0152d102c67317b60c17d6735b1.scope.
Oct  2 03:48:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c533895e86d13c17f4c404ce6f5b10f5cce4979b0b19ad9bbb4301fdea7f9642/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c533895e86d13c17f4c404ce6f5b10f5cce4979b0b19ad9bbb4301fdea7f9642/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c533895e86d13c17f4c404ce6f5b10f5cce4979b0b19ad9bbb4301fdea7f9642/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c533895e86d13c17f4c404ce6f5b10f5cce4979b0b19ad9bbb4301fdea7f9642/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c533895e86d13c17f4c404ce6f5b10f5cce4979b0b19ad9bbb4301fdea7f9642/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:45 np0005465604 podman[95278]: 2025-10-02 07:48:45.726329181 +0000 UTC m=+0.020273357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:45 np0005465604 podman[95278]: 2025-10-02 07:48:45.830925059 +0000 UTC m=+0.124869225 container init faa7a20afb88686e1207862e0e7f943e43dae0152d102c67317b60c17d6735b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 03:48:45 np0005465604 podman[95278]: 2025-10-02 07:48:45.842903164 +0000 UTC m=+0.136847350 container start faa7a20afb88686e1207862e0e7f943e43dae0152d102c67317b60c17d6735b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 03:48:45 np0005465604 podman[95278]: 2025-10-02 07:48:45.846691302 +0000 UTC m=+0.140635478 container attach faa7a20afb88686e1207862e0e7f943e43dae0152d102c67317b60c17d6735b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:46 np0005465604 ceph-mon[74477]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 03:48:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct  2 03:48:46 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/359839438' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  2 03:48:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/359839438' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  2 03:48:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct  2 03:48:46 np0005465604 angry_bell[95079]: enabled application 'rbd' on pool 'volumes'
Oct  2 03:48:46 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct  2 03:48:46 np0005465604 systemd[1]: libpod-3bf3fb4de95c59c9a232b971e826e74bce58170129f648268380dc9f7e65b42b.scope: Deactivated successfully.
Oct  2 03:48:46 np0005465604 podman[95058]: 2025-10-02 07:48:46.34318842 +0000 UTC m=+1.640112023 container died 3bf3fb4de95c59c9a232b971e826e74bce58170129f648268380dc9f7e65b42b (image=quay.io/ceph/ceph:v18, name=angry_bell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 03:48:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-09942fc8b7e8a4a20e7d9accd3201438828214be8686e9a90f93cfed1cfc2c3e-merged.mount: Deactivated successfully.
Oct  2 03:48:46 np0005465604 podman[95058]: 2025-10-02 07:48:46.385730743 +0000 UTC m=+1.682654306 container remove 3bf3fb4de95c59c9a232b971e826e74bce58170129f648268380dc9f7e65b42b (image=quay.io/ceph/ceph:v18, name=angry_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:46 np0005465604 systemd[1]: libpod-conmon-3bf3fb4de95c59c9a232b971e826e74bce58170129f648268380dc9f7e65b42b.scope: Deactivated successfully.
Oct  2 03:48:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:46 np0005465604 python3[95339]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:46 np0005465604 podman[95350]: 2025-10-02 07:48:46.808972625 +0000 UTC m=+0.050476973 container create 660e630c50f9814105ba2113b5b19eefc37a7acc76e27d005cd662b63a29e36a (image=quay.io/ceph/ceph:v18, name=bold_lovelace, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 03:48:46 np0005465604 systemd[1]: Started libpod-conmon-660e630c50f9814105ba2113b5b19eefc37a7acc76e27d005cd662b63a29e36a.scope.
Oct  2 03:48:46 np0005465604 podman[95350]: 2025-10-02 07:48:46.781827275 +0000 UTC m=+0.023331703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:46 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5384218ce379bc8e2d3c9e2de215fb4ad6d0f55fe10139c2de12ee15a5813ef3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5384218ce379bc8e2d3c9e2de215fb4ad6d0f55fe10139c2de12ee15a5813ef3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:46 np0005465604 podman[95350]: 2025-10-02 07:48:46.916546456 +0000 UTC m=+0.158050814 container init 660e630c50f9814105ba2113b5b19eefc37a7acc76e27d005cd662b63a29e36a (image=quay.io/ceph/ceph:v18, name=bold_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:46 np0005465604 podman[95350]: 2025-10-02 07:48:46.925656011 +0000 UTC m=+0.167160379 container start 660e630c50f9814105ba2113b5b19eefc37a7acc76e27d005cd662b63a29e36a (image=quay.io/ceph/ceph:v18, name=bold_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 03:48:46 np0005465604 podman[95350]: 2025-10-02 07:48:46.92850956 +0000 UTC m=+0.170013938 container attach 660e630c50f9814105ba2113b5b19eefc37a7acc76e27d005cd662b63a29e36a (image=quay.io/ceph/ceph:v18, name=bold_lovelace, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 03:48:46 np0005465604 trusting_edison[95294]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:48:46 np0005465604 trusting_edison[95294]: --> relative data size: 1.0
Oct  2 03:48:46 np0005465604 trusting_edison[95294]: --> All data devices are unavailable
Oct  2 03:48:47 np0005465604 systemd[1]: libpod-faa7a20afb88686e1207862e0e7f943e43dae0152d102c67317b60c17d6735b1.scope: Deactivated successfully.
Oct  2 03:48:47 np0005465604 podman[95278]: 2025-10-02 07:48:47.011223852 +0000 UTC m=+1.305168038 container died faa7a20afb88686e1207862e0e7f943e43dae0152d102c67317b60c17d6735b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:47 np0005465604 systemd[1]: libpod-faa7a20afb88686e1207862e0e7f943e43dae0152d102c67317b60c17d6735b1.scope: Consumed 1.073s CPU time.
Oct  2 03:48:47 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c533895e86d13c17f4c404ce6f5b10f5cce4979b0b19ad9bbb4301fdea7f9642-merged.mount: Deactivated successfully.
Oct  2 03:48:47 np0005465604 podman[95278]: 2025-10-02 07:48:47.072394729 +0000 UTC m=+1.366338885 container remove faa7a20afb88686e1207862e0e7f943e43dae0152d102c67317b60c17d6735b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_edison, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 03:48:47 np0005465604 systemd[1]: libpod-conmon-faa7a20afb88686e1207862e0e7f943e43dae0152d102c67317b60c17d6735b1.scope: Deactivated successfully.
Oct  2 03:48:47 np0005465604 ceph-mon[74477]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 03:48:47 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/359839438' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  2 03:48:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct  2 03:48:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3880862465' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  2 03:48:47 np0005465604 podman[95553]: 2025-10-02 07:48:47.691257421 +0000 UTC m=+0.033674887 container create 47758205e53f9fe706b8214c99390474ef09a585205c35c0bdc9ce327ecdd643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 03:48:47 np0005465604 systemd[1]: Started libpod-conmon-47758205e53f9fe706b8214c99390474ef09a585205c35c0bdc9ce327ecdd643.scope.
Oct  2 03:48:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:47 np0005465604 podman[95553]: 2025-10-02 07:48:47.752338624 +0000 UTC m=+0.094756110 container init 47758205e53f9fe706b8214c99390474ef09a585205c35c0bdc9ce327ecdd643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_archimedes, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:47 np0005465604 podman[95553]: 2025-10-02 07:48:47.75730376 +0000 UTC m=+0.099721216 container start 47758205e53f9fe706b8214c99390474ef09a585205c35c0bdc9ce327ecdd643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 03:48:47 np0005465604 podman[95553]: 2025-10-02 07:48:47.75985068 +0000 UTC m=+0.102268176 container attach 47758205e53f9fe706b8214c99390474ef09a585205c35c0bdc9ce327ecdd643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:47 np0005465604 distracted_archimedes[95569]: 167 167
Oct  2 03:48:47 np0005465604 podman[95553]: 2025-10-02 07:48:47.760654024 +0000 UTC m=+0.103071490 container died 47758205e53f9fe706b8214c99390474ef09a585205c35c0bdc9ce327ecdd643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:47 np0005465604 systemd[1]: libpod-47758205e53f9fe706b8214c99390474ef09a585205c35c0bdc9ce327ecdd643.scope: Deactivated successfully.
Oct  2 03:48:47 np0005465604 podman[95553]: 2025-10-02 07:48:47.677496749 +0000 UTC m=+0.019914235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:47 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a0e00f39a06f096a73bcd2d4b02f474b08396da9caf90c8d574e5da0fe732aa0-merged.mount: Deactivated successfully.
Oct  2 03:48:47 np0005465604 podman[95553]: 2025-10-02 07:48:47.790152549 +0000 UTC m=+0.132570015 container remove 47758205e53f9fe706b8214c99390474ef09a585205c35c0bdc9ce327ecdd643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 03:48:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:47 np0005465604 systemd[1]: libpod-conmon-47758205e53f9fe706b8214c99390474ef09a585205c35c0bdc9ce327ecdd643.scope: Deactivated successfully.
Oct  2 03:48:47 np0005465604 podman[95591]: 2025-10-02 07:48:47.926427 +0000 UTC m=+0.047343915 container create bd90c207c00678941c03c3d780b89301d8d6dcedf56ddc95b625cf1f0c28f584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 03:48:47 np0005465604 systemd[1]: Started libpod-conmon-bd90c207c00678941c03c3d780b89301d8d6dcedf56ddc95b625cf1f0c28f584.scope.
Oct  2 03:48:47 np0005465604 podman[95591]: 2025-10-02 07:48:47.907908549 +0000 UTC m=+0.028825434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaceb9bae90bc6d938311665df7d1437e9db35ac33a83a696c64cbfc43b10e1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaceb9bae90bc6d938311665df7d1437e9db35ac33a83a696c64cbfc43b10e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaceb9bae90bc6d938311665df7d1437e9db35ac33a83a696c64cbfc43b10e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaceb9bae90bc6d938311665df7d1437e9db35ac33a83a696c64cbfc43b10e1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:48 np0005465604 podman[95591]: 2025-10-02 07:48:48.028681023 +0000 UTC m=+0.149597988 container init bd90c207c00678941c03c3d780b89301d8d6dcedf56ddc95b625cf1f0c28f584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bardeen, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:48 np0005465604 podman[95591]: 2025-10-02 07:48:48.045528901 +0000 UTC m=+0.166445776 container start bd90c207c00678941c03c3d780b89301d8d6dcedf56ddc95b625cf1f0c28f584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  2 03:48:48 np0005465604 podman[95591]: 2025-10-02 07:48:48.049139504 +0000 UTC m=+0.170056419 container attach bd90c207c00678941c03c3d780b89301d8d6dcedf56ddc95b625cf1f0c28f584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bardeen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 03:48:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct  2 03:48:48 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3880862465' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  2 03:48:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3880862465' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  2 03:48:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct  2 03:48:48 np0005465604 bold_lovelace[95370]: enabled application 'rbd' on pool 'backups'
Oct  2 03:48:48 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct  2 03:48:48 np0005465604 systemd[1]: libpod-660e630c50f9814105ba2113b5b19eefc37a7acc76e27d005cd662b63a29e36a.scope: Deactivated successfully.
Oct  2 03:48:48 np0005465604 podman[95350]: 2025-10-02 07:48:48.3576022 +0000 UTC m=+1.599106578 container died 660e630c50f9814105ba2113b5b19eefc37a7acc76e27d005cd662b63a29e36a (image=quay.io/ceph/ceph:v18, name=bold_lovelace, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5384218ce379bc8e2d3c9e2de215fb4ad6d0f55fe10139c2de12ee15a5813ef3-merged.mount: Deactivated successfully.
Oct  2 03:48:48 np0005465604 podman[95350]: 2025-10-02 07:48:48.420144189 +0000 UTC m=+1.661648567 container remove 660e630c50f9814105ba2113b5b19eefc37a7acc76e27d005cd662b63a29e36a (image=quay.io/ceph/ceph:v18, name=bold_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:48 np0005465604 systemd[1]: libpod-conmon-660e630c50f9814105ba2113b5b19eefc37a7acc76e27d005cd662b63a29e36a.scope: Deactivated successfully.
Oct  2 03:48:48 np0005465604 python3[95650]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]: {
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:    "0": [
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:        {
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "devices": [
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "/dev/loop3"
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            ],
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_name": "ceph_lv0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_size": "21470642176",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "name": "ceph_lv0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "tags": {
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.cluster_name": "ceph",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.crush_device_class": "",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.encrypted": "0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.osd_id": "0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.type": "block",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.vdo": "0"
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            },
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "type": "block",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "vg_name": "ceph_vg0"
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:        }
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:    ],
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:    "1": [
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:        {
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "devices": [
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "/dev/loop4"
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            ],
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_name": "ceph_lv1",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_size": "21470642176",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "name": "ceph_lv1",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "tags": {
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.cluster_name": "ceph",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.crush_device_class": "",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.encrypted": "0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.osd_id": "1",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.type": "block",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.vdo": "0"
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            },
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "type": "block",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "vg_name": "ceph_vg1"
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:        }
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:    ],
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:    "2": [
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:        {
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "devices": [
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "/dev/loop5"
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            ],
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_name": "ceph_lv2",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_size": "21470642176",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "name": "ceph_lv2",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "tags": {
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.cluster_name": "ceph",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.crush_device_class": "",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.encrypted": "0",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.osd_id": "2",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.type": "block",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:                "ceph.vdo": "0"
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            },
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "type": "block",
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:            "vg_name": "ceph_vg2"
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:        }
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]:    ]
Oct  2 03:48:48 np0005465604 gifted_bardeen[95607]: }
Oct  2 03:48:48 np0005465604 systemd[1]: libpod-bd90c207c00678941c03c3d780b89301d8d6dcedf56ddc95b625cf1f0c28f584.scope: Deactivated successfully.
Oct  2 03:48:48 np0005465604 podman[95591]: 2025-10-02 07:48:48.776449574 +0000 UTC m=+0.897366479 container died bd90c207c00678941c03c3d780b89301d8d6dcedf56ddc95b625cf1f0c28f584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 03:48:48 np0005465604 podman[95654]: 2025-10-02 07:48:48.775846075 +0000 UTC m=+0.051590187 container create 6e1fd17fd3b7919a668087555e504781912d2ed2b4d9418d09406f5571006453 (image=quay.io/ceph/ceph:v18, name=wonderful_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-aaceb9bae90bc6d938311665df7d1437e9db35ac33a83a696c64cbfc43b10e1c-merged.mount: Deactivated successfully.
Oct  2 03:48:48 np0005465604 systemd[1]: Started libpod-conmon-6e1fd17fd3b7919a668087555e504781912d2ed2b4d9418d09406f5571006453.scope.
Oct  2 03:48:48 np0005465604 podman[95654]: 2025-10-02 07:48:48.754852107 +0000 UTC m=+0.030596189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:48 np0005465604 podman[95591]: 2025-10-02 07:48:48.850906867 +0000 UTC m=+0.971823742 container remove bd90c207c00678941c03c3d780b89301d8d6dcedf56ddc95b625cf1f0c28f584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:48 np0005465604 systemd[1]: libpod-conmon-bd90c207c00678941c03c3d780b89301d8d6dcedf56ddc95b625cf1f0c28f584.scope: Deactivated successfully.
Oct  2 03:48:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c303ba7ff23ece16c3de1bba221e67dacc3274fd14b10be4ea4a7771c201c1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c303ba7ff23ece16c3de1bba221e67dacc3274fd14b10be4ea4a7771c201c1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:48 np0005465604 podman[95654]: 2025-10-02 07:48:48.892591463 +0000 UTC m=+0.168335565 container init 6e1fd17fd3b7919a668087555e504781912d2ed2b4d9418d09406f5571006453 (image=quay.io/ceph/ceph:v18, name=wonderful_raman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 03:48:48 np0005465604 podman[95654]: 2025-10-02 07:48:48.898281762 +0000 UTC m=+0.174025844 container start 6e1fd17fd3b7919a668087555e504781912d2ed2b4d9418d09406f5571006453 (image=quay.io/ceph/ceph:v18, name=wonderful_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 03:48:48 np0005465604 podman[95654]: 2025-10-02 07:48:48.902219355 +0000 UTC m=+0.177963437 container attach 6e1fd17fd3b7919a668087555e504781912d2ed2b4d9418d09406f5571006453 (image=quay.io/ceph/ceph:v18, name=wonderful_raman, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:49 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3880862465' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  2 03:48:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct  2 03:48:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/782196792' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  2 03:48:49 np0005465604 podman[95845]: 2025-10-02 07:48:49.463050918 +0000 UTC m=+0.054112616 container create 1b52a58dbcab9475144bac1467530820de3a78317ca8ca4ba8d1ca1ade7b8f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cartwright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 03:48:49 np0005465604 systemd[1]: Started libpod-conmon-1b52a58dbcab9475144bac1467530820de3a78317ca8ca4ba8d1ca1ade7b8f40.scope.
Oct  2 03:48:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:49 np0005465604 podman[95845]: 2025-10-02 07:48:49.527656763 +0000 UTC m=+0.118718471 container init 1b52a58dbcab9475144bac1467530820de3a78317ca8ca4ba8d1ca1ade7b8f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 03:48:49 np0005465604 podman[95845]: 2025-10-02 07:48:49.438537 +0000 UTC m=+0.029598738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:49 np0005465604 podman[95845]: 2025-10-02 07:48:49.533989671 +0000 UTC m=+0.125051359 container start 1b52a58dbcab9475144bac1467530820de3a78317ca8ca4ba8d1ca1ade7b8f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:49 np0005465604 loving_cartwright[95862]: 167 167
Oct  2 03:48:49 np0005465604 systemd[1]: libpod-1b52a58dbcab9475144bac1467530820de3a78317ca8ca4ba8d1ca1ade7b8f40.scope: Deactivated successfully.
Oct  2 03:48:49 np0005465604 podman[95845]: 2025-10-02 07:48:49.539115262 +0000 UTC m=+0.130176980 container attach 1b52a58dbcab9475144bac1467530820de3a78317ca8ca4ba8d1ca1ade7b8f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cartwright, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:49 np0005465604 podman[95845]: 2025-10-02 07:48:49.539504544 +0000 UTC m=+0.130566232 container died 1b52a58dbcab9475144bac1467530820de3a78317ca8ca4ba8d1ca1ade7b8f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 03:48:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-49dc15ac6cf4fd95f81fd997bd8034ba8ab54ee7f7ea763977ad23a68d7d7175-merged.mount: Deactivated successfully.
Oct  2 03:48:49 np0005465604 podman[95845]: 2025-10-02 07:48:49.572523258 +0000 UTC m=+0.163584946 container remove 1b52a58dbcab9475144bac1467530820de3a78317ca8ca4ba8d1ca1ade7b8f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cartwright, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:49 np0005465604 systemd[1]: libpod-conmon-1b52a58dbcab9475144bac1467530820de3a78317ca8ca4ba8d1ca1ade7b8f40.scope: Deactivated successfully.
Oct  2 03:48:49 np0005465604 podman[95885]: 2025-10-02 07:48:49.762823411 +0000 UTC m=+0.040225991 container create c9fa0dab2956c1c468956dd1f3a3d7fcab4d17ed84911f6dc7b18b2c5a6a9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 03:48:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:49 np0005465604 systemd[1]: Started libpod-conmon-c9fa0dab2956c1c468956dd1f3a3d7fcab4d17ed84911f6dc7b18b2c5a6a9f26.scope.
Oct  2 03:48:49 np0005465604 podman[95885]: 2025-10-02 07:48:49.744498298 +0000 UTC m=+0.021900848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:48:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ac32e71ebaa8790508a2870c802b498089ad47b9340f59a7b2e7d62b7c0a1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ac32e71ebaa8790508a2870c802b498089ad47b9340f59a7b2e7d62b7c0a1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ac32e71ebaa8790508a2870c802b498089ad47b9340f59a7b2e7d62b7c0a1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ac32e71ebaa8790508a2870c802b498089ad47b9340f59a7b2e7d62b7c0a1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:49 np0005465604 podman[95885]: 2025-10-02 07:48:49.866401898 +0000 UTC m=+0.143804548 container init c9fa0dab2956c1c468956dd1f3a3d7fcab4d17ed84911f6dc7b18b2c5a6a9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:49 np0005465604 podman[95885]: 2025-10-02 07:48:49.871826737 +0000 UTC m=+0.149229277 container start c9fa0dab2956c1c468956dd1f3a3d7fcab4d17ed84911f6dc7b18b2c5a6a9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 03:48:49 np0005465604 podman[95885]: 2025-10-02 07:48:49.875126311 +0000 UTC m=+0.152528891 container attach c9fa0dab2956c1c468956dd1f3a3d7fcab4d17ed84911f6dc7b18b2c5a6a9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 03:48:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct  2 03:48:50 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/782196792' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  2 03:48:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/782196792' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  2 03:48:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct  2 03:48:50 np0005465604 wonderful_raman[95682]: enabled application 'rbd' on pool 'images'
Oct  2 03:48:50 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct  2 03:48:50 np0005465604 systemd[1]: libpod-6e1fd17fd3b7919a668087555e504781912d2ed2b4d9418d09406f5571006453.scope: Deactivated successfully.
Oct  2 03:48:50 np0005465604 podman[95654]: 2025-10-02 07:48:50.378640427 +0000 UTC m=+1.654384549 container died 6e1fd17fd3b7919a668087555e504781912d2ed2b4d9418d09406f5571006453 (image=quay.io/ceph/ceph:v18, name=wonderful_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 03:48:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-08c303ba7ff23ece16c3de1bba221e67dacc3274fd14b10be4ea4a7771c201c1-merged.mount: Deactivated successfully.
Oct  2 03:48:50 np0005465604 podman[95654]: 2025-10-02 07:48:50.442090516 +0000 UTC m=+1.717834598 container remove 6e1fd17fd3b7919a668087555e504781912d2ed2b4d9418d09406f5571006453 (image=quay.io/ceph/ceph:v18, name=wonderful_raman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 03:48:50 np0005465604 systemd[1]: libpod-conmon-6e1fd17fd3b7919a668087555e504781912d2ed2b4d9418d09406f5571006453.scope: Deactivated successfully.
Oct  2 03:48:50 np0005465604 python3[95951]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:50 np0005465604 podman[95967]: 2025-10-02 07:48:50.794240791 +0000 UTC m=+0.052042333 container create 06d45ad1eb105b69001e936efc58b59dfe9bfac79a3a816df76792582e1fb09a (image=quay.io/ceph/ceph:v18, name=sweet_darwin, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]: {
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "osd_id": 2,
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "type": "bluestore"
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:    },
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "osd_id": 1,
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "type": "bluestore"
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:    },
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "osd_id": 0,
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:        "type": "bluestore"
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]:    }
Oct  2 03:48:50 np0005465604 optimistic_galileo[95901]: }
Oct  2 03:48:50 np0005465604 podman[95885]: 2025-10-02 07:48:50.83540251 +0000 UTC m=+1.112805040 container died c9fa0dab2956c1c468956dd1f3a3d7fcab4d17ed84911f6dc7b18b2c5a6a9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 03:48:50 np0005465604 systemd[1]: Started libpod-conmon-06d45ad1eb105b69001e936efc58b59dfe9bfac79a3a816df76792582e1fb09a.scope.
Oct  2 03:48:50 np0005465604 systemd[1]: libpod-c9fa0dab2956c1c468956dd1f3a3d7fcab4d17ed84911f6dc7b18b2c5a6a9f26.scope: Deactivated successfully.
Oct  2 03:48:50 np0005465604 podman[95967]: 2025-10-02 07:48:50.766543573 +0000 UTC m=+0.024345205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c5ac32e71ebaa8790508a2870c802b498089ad47b9340f59a7b2e7d62b7c0a1a-merged.mount: Deactivated successfully.
Oct  2 03:48:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cd759816e78e055b1d08323dfd3e845500554e4e2d3590c119f3add2b0d025/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30cd759816e78e055b1d08323dfd3e845500554e4e2d3590c119f3add2b0d025/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:50 np0005465604 podman[95967]: 2025-10-02 07:48:50.885015385 +0000 UTC m=+0.142816947 container init 06d45ad1eb105b69001e936efc58b59dfe9bfac79a3a816df76792582e1fb09a (image=quay.io/ceph/ceph:v18, name=sweet_darwin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 03:48:50 np0005465604 podman[95967]: 2025-10-02 07:48:50.890801015 +0000 UTC m=+0.148602557 container start 06d45ad1eb105b69001e936efc58b59dfe9bfac79a3a816df76792582e1fb09a (image=quay.io/ceph/ceph:v18, name=sweet_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 03:48:50 np0005465604 podman[95885]: 2025-10-02 07:48:50.892904391 +0000 UTC m=+1.170306931 container remove c9fa0dab2956c1c468956dd1f3a3d7fcab4d17ed84911f6dc7b18b2c5a6a9f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_galileo, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:50 np0005465604 podman[95967]: 2025-10-02 07:48:50.898939981 +0000 UTC m=+0.156741523 container attach 06d45ad1eb105b69001e936efc58b59dfe9bfac79a3a816df76792582e1fb09a (image=quay.io/ceph/ceph:v18, name=sweet_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 03:48:50 np0005465604 systemd[1]: libpod-conmon-c9fa0dab2956c1c468956dd1f3a3d7fcab4d17ed84911f6dc7b18b2c5a6a9f26.scope: Deactivated successfully.
Oct  2 03:48:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:51 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/782196792' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  2 03:48:51 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:51 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct  2 03:48:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/928827189' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  2 03:48:51 np0005465604 ceph-mon[74477]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 03:48:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct  2 03:48:52 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/928827189' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  2 03:48:52 np0005465604 ceph-mon[74477]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 03:48:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/928827189' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  2 03:48:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct  2 03:48:52 np0005465604 sweet_darwin[95995]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct  2 03:48:52 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct  2 03:48:52 np0005465604 systemd[1]: libpod-06d45ad1eb105b69001e936efc58b59dfe9bfac79a3a816df76792582e1fb09a.scope: Deactivated successfully.
Oct  2 03:48:52 np0005465604 podman[95967]: 2025-10-02 07:48:52.384850121 +0000 UTC m=+1.642651663 container died 06d45ad1eb105b69001e936efc58b59dfe9bfac79a3a816df76792582e1fb09a (image=quay.io/ceph/ceph:v18, name=sweet_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 03:48:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-30cd759816e78e055b1d08323dfd3e845500554e4e2d3590c119f3add2b0d025-merged.mount: Deactivated successfully.
Oct  2 03:48:52 np0005465604 podman[95967]: 2025-10-02 07:48:52.441385063 +0000 UTC m=+1.699186615 container remove 06d45ad1eb105b69001e936efc58b59dfe9bfac79a3a816df76792582e1fb09a (image=quay.io/ceph/ceph:v18, name=sweet_darwin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 03:48:52 np0005465604 systemd[1]: libpod-conmon-06d45ad1eb105b69001e936efc58b59dfe9bfac79a3a816df76792582e1fb09a.scope: Deactivated successfully.
Oct  2 03:48:52 np0005465604 python3[96117]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:52 np0005465604 podman[96118]: 2025-10-02 07:48:52.829292817 +0000 UTC m=+0.048947435 container create 52dac0c467d8b1b29744aca17e5b49fe390e018a7bb552ba8e36b40086625bdd (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 03:48:52 np0005465604 systemd[1]: Started libpod-conmon-52dac0c467d8b1b29744aca17e5b49fe390e018a7bb552ba8e36b40086625bdd.scope.
Oct  2 03:48:52 np0005465604 podman[96118]: 2025-10-02 07:48:52.813788241 +0000 UTC m=+0.033442879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc07fa7310ccaf8e891badf9bce5e0c7cfed6cb5d8663326fdcf2cc7ecfce282/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc07fa7310ccaf8e891badf9bce5e0c7cfed6cb5d8663326fdcf2cc7ecfce282/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:52 np0005465604 podman[96118]: 2025-10-02 07:48:52.937654813 +0000 UTC m=+0.157309431 container init 52dac0c467d8b1b29744aca17e5b49fe390e018a7bb552ba8e36b40086625bdd (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 03:48:52 np0005465604 podman[96118]: 2025-10-02 07:48:52.947197992 +0000 UTC m=+0.166852620 container start 52dac0c467d8b1b29744aca17e5b49fe390e018a7bb552ba8e36b40086625bdd (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:48:52 np0005465604 podman[96118]: 2025-10-02 07:48:52.950488365 +0000 UTC m=+0.170143003 container attach 52dac0c467d8b1b29744aca17e5b49fe390e018a7bb552ba8e36b40086625bdd (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 03:48:53 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/928827189' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  2 03:48:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct  2 03:48:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2164236954' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  2 03:48:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct  2 03:48:54 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/2164236954' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  2 03:48:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2164236954' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  2 03:48:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct  2 03:48:54 np0005465604 heuristic_shaw[96133]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct  2 03:48:54 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct  2 03:48:54 np0005465604 systemd[1]: libpod-52dac0c467d8b1b29744aca17e5b49fe390e018a7bb552ba8e36b40086625bdd.scope: Deactivated successfully.
Oct  2 03:48:54 np0005465604 podman[96118]: 2025-10-02 07:48:54.406535979 +0000 UTC m=+1.626190607 container died 52dac0c467d8b1b29744aca17e5b49fe390e018a7bb552ba8e36b40086625bdd (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fc07fa7310ccaf8e891badf9bce5e0c7cfed6cb5d8663326fdcf2cc7ecfce282-merged.mount: Deactivated successfully.
Oct  2 03:48:54 np0005465604 podman[96118]: 2025-10-02 07:48:54.456037389 +0000 UTC m=+1.675692007 container remove 52dac0c467d8b1b29744aca17e5b49fe390e018a7bb552ba8e36b40086625bdd (image=quay.io/ceph/ceph:v18, name=heuristic_shaw, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:54 np0005465604 systemd[1]: libpod-conmon-52dac0c467d8b1b29744aca17e5b49fe390e018a7bb552ba8e36b40086625bdd.scope: Deactivated successfully.
Oct  2 03:48:55 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/2164236954' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  2 03:48:55 np0005465604 python3[96247]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:48:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:55 np0005465604 python3[96318]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759391335.1395495-33273-64174612004852/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:48:56 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 03:48:56 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  2 03:48:56 np0005465604 python3[96420]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:48:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:48:56 np0005465604 python3[96495]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759391336.1592062-33287-103603703362783/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=81bb9c37cb7dbd5cf615733fae51495a22c6bf2c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:48:57 np0005465604 python3[96545]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:57 np0005465604 podman[96546]: 2025-10-02 07:48:57.274478183 +0000 UTC m=+0.053124466 container create 172328acc255eb84f5661aef3a2037cee2308021b69d2869309b9d902d6b7e62 (image=quay.io/ceph/ceph:v18, name=adoring_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:48:57 np0005465604 systemd[1]: Started libpod-conmon-172328acc255eb84f5661aef3a2037cee2308021b69d2869309b9d902d6b7e62.scope.
Oct  2 03:48:57 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2c5bfae11b5cff768a889e49974c989327dd5c570c7b311adb0f793458f800/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2c5bfae11b5cff768a889e49974c989327dd5c570c7b311adb0f793458f800/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2c5bfae11b5cff768a889e49974c989327dd5c570c7b311adb0f793458f800/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:57 np0005465604 podman[96546]: 2025-10-02 07:48:57.249308884 +0000 UTC m=+0.027955237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:57 np0005465604 podman[96546]: 2025-10-02 07:48:57.348922815 +0000 UTC m=+0.127569188 container init 172328acc255eb84f5661aef3a2037cee2308021b69d2869309b9d902d6b7e62 (image=quay.io/ceph/ceph:v18, name=adoring_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:57 np0005465604 podman[96546]: 2025-10-02 07:48:57.354869501 +0000 UTC m=+0.133515814 container start 172328acc255eb84f5661aef3a2037cee2308021b69d2869309b9d902d6b7e62 (image=quay.io/ceph/ceph:v18, name=adoring_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 03:48:57 np0005465604 podman[96546]: 2025-10-02 07:48:57.358851266 +0000 UTC m=+0.137497589 container attach 172328acc255eb84f5661aef3a2037cee2308021b69d2869309b9d902d6b7e62 (image=quay.io/ceph/ceph:v18, name=adoring_tharp, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:57 np0005465604 ceph-mon[74477]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 03:48:57 np0005465604 ceph-mon[74477]: Cluster is now healthy
Oct  2 03:48:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:48:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  2 03:48:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/736646471' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  2 03:48:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/736646471' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  2 03:48:57 np0005465604 adoring_tharp[96561]: 
Oct  2 03:48:57 np0005465604 adoring_tharp[96561]: [global]
Oct  2 03:48:57 np0005465604 adoring_tharp[96561]: 	fsid = a52e644f-f702-594c-a648-813e3e0df2b1
Oct  2 03:48:57 np0005465604 adoring_tharp[96561]: 	mon_host = 192.168.122.100
Oct  2 03:48:57 np0005465604 systemd[1]: libpod-172328acc255eb84f5661aef3a2037cee2308021b69d2869309b9d902d6b7e62.scope: Deactivated successfully.
Oct  2 03:48:57 np0005465604 conmon[96561]: conmon 172328acc255eb84f566 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-172328acc255eb84f5661aef3a2037cee2308021b69d2869309b9d902d6b7e62.scope/container/memory.events
Oct  2 03:48:57 np0005465604 podman[96546]: 2025-10-02 07:48:57.927167664 +0000 UTC m=+0.705813987 container died 172328acc255eb84f5661aef3a2037cee2308021b69d2869309b9d902d6b7e62 (image=quay.io/ceph/ceph:v18, name=adoring_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:48:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3f2c5bfae11b5cff768a889e49974c989327dd5c570c7b311adb0f793458f800-merged.mount: Deactivated successfully.
Oct  2 03:48:57 np0005465604 podman[96546]: 2025-10-02 07:48:57.97300255 +0000 UTC m=+0.751648833 container remove 172328acc255eb84f5661aef3a2037cee2308021b69d2869309b9d902d6b7e62 (image=quay.io/ceph/ceph:v18, name=adoring_tharp, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 03:48:57 np0005465604 systemd[1]: libpod-conmon-172328acc255eb84f5661aef3a2037cee2308021b69d2869309b9d902d6b7e62.scope: Deactivated successfully.
Oct  2 03:48:58 np0005465604 python3[96724]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:58 np0005465604 podman[96739]: 2025-10-02 07:48:58.404694037 +0000 UTC m=+0.047737207 container create 9572c1988d3db328173ac23d32042b77e412a79e8bab5ecc8184210bdf45c2a3 (image=quay.io/ceph/ceph:v18, name=sharp_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:48:58 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/736646471' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  2 03:48:58 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/736646471' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  2 03:48:58 np0005465604 systemd[1]: Started libpod-conmon-9572c1988d3db328173ac23d32042b77e412a79e8bab5ecc8184210bdf45c2a3.scope.
Oct  2 03:48:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84243f22e2db4c85b21ac3b03bcd351adb2f986d3d81ab9b1b71bcfd6857e8e1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84243f22e2db4c85b21ac3b03bcd351adb2f986d3d81ab9b1b71bcfd6857e8e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84243f22e2db4c85b21ac3b03bcd351adb2f986d3d81ab9b1b71bcfd6857e8e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:58 np0005465604 podman[96739]: 2025-10-02 07:48:58.474088841 +0000 UTC m=+0.117132031 container init 9572c1988d3db328173ac23d32042b77e412a79e8bab5ecc8184210bdf45c2a3 (image=quay.io/ceph/ceph:v18, name=sharp_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:58 np0005465604 podman[96739]: 2025-10-02 07:48:58.479954765 +0000 UTC m=+0.122997935 container start 9572c1988d3db328173ac23d32042b77e412a79e8bab5ecc8184210bdf45c2a3 (image=quay.io/ceph/ceph:v18, name=sharp_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 03:48:58 np0005465604 podman[96739]: 2025-10-02 07:48:58.386987112 +0000 UTC m=+0.030030292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:58 np0005465604 podman[96739]: 2025-10-02 07:48:58.482863866 +0000 UTC m=+0.125907126 container attach 9572c1988d3db328173ac23d32042b77e412a79e8bab5ecc8184210bdf45c2a3 (image=quay.io/ceph/ceph:v18, name=sharp_kapitsa, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:58 np0005465604 podman[96813]: 2025-10-02 07:48:58.705879554 +0000 UTC m=+0.058271636 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:58 np0005465604 podman[96813]: 2025-10-02 07:48:58.826127332 +0000 UTC m=+0.178519374 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1183411301' entity='client.admin' 
Oct  2 03:48:59 np0005465604 sharp_kapitsa[96778]: set ssl_option
Oct  2 03:48:59 np0005465604 systemd[1]: libpod-9572c1988d3db328173ac23d32042b77e412a79e8bab5ecc8184210bdf45c2a3.scope: Deactivated successfully.
Oct  2 03:48:59 np0005465604 podman[96739]: 2025-10-02 07:48:59.125626817 +0000 UTC m=+0.768669997 container died 9572c1988d3db328173ac23d32042b77e412a79e8bab5ecc8184210bdf45c2a3 (image=quay.io/ceph/ceph:v18, name=sharp_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:48:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay-84243f22e2db4c85b21ac3b03bcd351adb2f986d3d81ab9b1b71bcfd6857e8e1-merged.mount: Deactivated successfully.
Oct  2 03:48:59 np0005465604 podman[96739]: 2025-10-02 07:48:59.166407655 +0000 UTC m=+0.809450825 container remove 9572c1988d3db328173ac23d32042b77e412a79e8bab5ecc8184210bdf45c2a3 (image=quay.io/ceph/ceph:v18, name=sharp_kapitsa, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:48:59 np0005465604 systemd[1]: libpod-conmon-9572c1988d3db328173ac23d32042b77e412a79e8bab5ecc8184210bdf45c2a3.scope: Deactivated successfully.
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:48:59 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 25952476-d958-4a0a-837a-c1eb6deaf5f8 does not exist
Oct  2 03:48:59 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev fbec4657-6a6b-4922-96a0-c62e6b59c9d6 does not exist
Oct  2 03:48:59 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8585863a-64db-4661-9443-8b0da82f56b4 does not exist
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:48:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:48:59 np0005465604 python3[97010]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:48:59 np0005465604 podman[97072]: 2025-10-02 07:48:59.585542288 +0000 UTC m=+0.041753820 container create 84eab0eb9b2db903f3e799af52d13abaafeb2abb6c321c9b4fc882a0ed57d6e1 (image=quay.io/ceph/ceph:v18, name=competent_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 03:48:59 np0005465604 systemd[1]: Started libpod-conmon-84eab0eb9b2db903f3e799af52d13abaafeb2abb6c321c9b4fc882a0ed57d6e1.scope.
Oct  2 03:48:59 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bdd8c800b4d2867d49f4a798df213cd9dcdc92319f9f876f65f49a38c065533/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bdd8c800b4d2867d49f4a798df213cd9dcdc92319f9f876f65f49a38c065533/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bdd8c800b4d2867d49f4a798df213cd9dcdc92319f9f876f65f49a38c065533/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:48:59 np0005465604 podman[97072]: 2025-10-02 07:48:59.645869308 +0000 UTC m=+0.102080850 container init 84eab0eb9b2db903f3e799af52d13abaafeb2abb6c321c9b4fc882a0ed57d6e1 (image=quay.io/ceph/ceph:v18, name=competent_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:48:59 np0005465604 podman[97072]: 2025-10-02 07:48:59.651732491 +0000 UTC m=+0.107944043 container start 84eab0eb9b2db903f3e799af52d13abaafeb2abb6c321c9b4fc882a0ed57d6e1 (image=quay.io/ceph/ceph:v18, name=competent_yonath, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 03:48:59 np0005465604 podman[97072]: 2025-10-02 07:48:59.655277022 +0000 UTC m=+0.111488604 container attach 84eab0eb9b2db903f3e799af52d13abaafeb2abb6c321c9b4fc882a0ed57d6e1 (image=quay.io/ceph/ceph:v18, name=competent_yonath, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 03:48:59 np0005465604 podman[97072]: 2025-10-02 07:48:59.567667588 +0000 UTC m=+0.023879160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:48:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:48:59 np0005465604 podman[97157]: 2025-10-02 07:48:59.933972026 +0000 UTC m=+0.045790186 container create 0d22dc2ab49874ac32ae042c01a144b562992f964a043e2a738726c4a2e3ddf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:48:59 np0005465604 systemd[1]: Started libpod-conmon-0d22dc2ab49874ac32ae042c01a144b562992f964a043e2a738726c4a2e3ddf4.scope.
Oct  2 03:48:59 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:48:59 np0005465604 podman[97157]: 2025-10-02 07:48:59.994851373 +0000 UTC m=+0.106669483 container init 0d22dc2ab49874ac32ae042c01a144b562992f964a043e2a738726c4a2e3ddf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct  2 03:49:00 np0005465604 podman[97157]: 2025-10-02 07:49:00.001329256 +0000 UTC m=+0.113147366 container start 0d22dc2ab49874ac32ae042c01a144b562992f964a043e2a738726c4a2e3ddf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 03:49:00 np0005465604 intelligent_montalcini[97191]: 167 167
Oct  2 03:49:00 np0005465604 systemd[1]: libpod-0d22dc2ab49874ac32ae042c01a144b562992f964a043e2a738726c4a2e3ddf4.scope: Deactivated successfully.
Oct  2 03:49:00 np0005465604 podman[97157]: 2025-10-02 07:49:00.004197336 +0000 UTC m=+0.116015446 container attach 0d22dc2ab49874ac32ae042c01a144b562992f964a043e2a738726c4a2e3ddf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 03:49:00 np0005465604 podman[97157]: 2025-10-02 07:49:00.006663483 +0000 UTC m=+0.118481593 container died 0d22dc2ab49874ac32ae042c01a144b562992f964a043e2a738726c4a2e3ddf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:00 np0005465604 podman[97157]: 2025-10-02 07:48:59.915276179 +0000 UTC m=+0.027094309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c3869c7ce875b332bbcf48ee310f089d50ccc3b3738982016dc5ed649e98591e-merged.mount: Deactivated successfully.
Oct  2 03:49:00 np0005465604 podman[97157]: 2025-10-02 07:49:00.047430671 +0000 UTC m=+0.159248821 container remove 0d22dc2ab49874ac32ae042c01a144b562992f964a043e2a738726c4a2e3ddf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 03:49:00 np0005465604 systemd[1]: libpod-conmon-0d22dc2ab49874ac32ae042c01a144b562992f964a043e2a738726c4a2e3ddf4.scope: Deactivated successfully.
Oct  2 03:49:00 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1183411301' entity='client.admin' 
Oct  2 03:49:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:49:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:49:00 np0005465604 podman[97216]: 2025-10-02 07:49:00.177142515 +0000 UTC m=+0.032474528 container create 84daef646d6553c224c76f679b4e4bc869385c5136ee3225ff92f9c89d3a8256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 03:49:00 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:49:00 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct  2 03:49:00 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct  2 03:49:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  2 03:49:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:00 np0005465604 competent_yonath[97113]: Scheduled rgw.rgw update...
Oct  2 03:49:00 np0005465604 podman[97072]: 2025-10-02 07:49:00.216119146 +0000 UTC m=+0.672330698 container died 84eab0eb9b2db903f3e799af52d13abaafeb2abb6c321c9b4fc882a0ed57d6e1 (image=quay.io/ceph/ceph:v18, name=competent_yonath, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:00 np0005465604 systemd[1]: Started libpod-conmon-84daef646d6553c224c76f679b4e4bc869385c5136ee3225ff92f9c89d3a8256.scope.
Oct  2 03:49:00 np0005465604 systemd[1]: libpod-84eab0eb9b2db903f3e799af52d13abaafeb2abb6c321c9b4fc882a0ed57d6e1.scope: Deactivated successfully.
Oct  2 03:49:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fd115e2b2f7fde298199b59a11179dd9ef1ad38ec7f012d37c64f1f8eef697/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fd115e2b2f7fde298199b59a11179dd9ef1ad38ec7f012d37c64f1f8eef697/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0bdd8c800b4d2867d49f4a798df213cd9dcdc92319f9f876f65f49a38c065533-merged.mount: Deactivated successfully.
Oct  2 03:49:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fd115e2b2f7fde298199b59a11179dd9ef1ad38ec7f012d37c64f1f8eef697/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fd115e2b2f7fde298199b59a11179dd9ef1ad38ec7f012d37c64f1f8eef697/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81fd115e2b2f7fde298199b59a11179dd9ef1ad38ec7f012d37c64f1f8eef697/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:00 np0005465604 podman[97216]: 2025-10-02 07:49:00.163111035 +0000 UTC m=+0.018443058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:00 np0005465604 podman[97216]: 2025-10-02 07:49:00.263827561 +0000 UTC m=+0.119159594 container init 84daef646d6553c224c76f679b4e4bc869385c5136ee3225ff92f9c89d3a8256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackburn, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:00 np0005465604 podman[97216]: 2025-10-02 07:49:00.270114949 +0000 UTC m=+0.125446962 container start 84daef646d6553c224c76f679b4e4bc869385c5136ee3225ff92f9c89d3a8256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackburn, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:00 np0005465604 podman[97216]: 2025-10-02 07:49:00.272922656 +0000 UTC m=+0.128254689 container attach 84daef646d6553c224c76f679b4e4bc869385c5136ee3225ff92f9c89d3a8256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:00 np0005465604 podman[97072]: 2025-10-02 07:49:00.291009083 +0000 UTC m=+0.747220615 container remove 84eab0eb9b2db903f3e799af52d13abaafeb2abb6c321c9b4fc882a0ed57d6e1 (image=quay.io/ceph/ceph:v18, name=competent_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 03:49:00 np0005465604 systemd[1]: libpod-conmon-84eab0eb9b2db903f3e799af52d13abaafeb2abb6c321c9b4fc882a0ed57d6e1.scope: Deactivated successfully.
Oct  2 03:49:01 np0005465604 ceph-mon[74477]: Saving service rgw.rgw spec with placement compute-0
Oct  2 03:49:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:01 np0005465604 serene_blackburn[97236]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:49:01 np0005465604 serene_blackburn[97236]: --> relative data size: 1.0
Oct  2 03:49:01 np0005465604 serene_blackburn[97236]: --> All data devices are unavailable
Oct  2 03:49:01 np0005465604 systemd[1]: libpod-84daef646d6553c224c76f679b4e4bc869385c5136ee3225ff92f9c89d3a8256.scope: Deactivated successfully.
Oct  2 03:49:01 np0005465604 podman[97216]: 2025-10-02 07:49:01.275003506 +0000 UTC m=+1.130335549 container died 84daef646d6553c224c76f679b4e4bc869385c5136ee3225ff92f9c89d3a8256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 03:49:01 np0005465604 python3[97349]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:49:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay-81fd115e2b2f7fde298199b59a11179dd9ef1ad38ec7f012d37c64f1f8eef697-merged.mount: Deactivated successfully.
Oct  2 03:49:01 np0005465604 podman[97216]: 2025-10-02 07:49:01.324864718 +0000 UTC m=+1.180196721 container remove 84daef646d6553c224c76f679b4e4bc869385c5136ee3225ff92f9c89d3a8256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackburn, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 03:49:01 np0005465604 systemd[1]: libpod-conmon-84daef646d6553c224c76f679b4e4bc869385c5136ee3225ff92f9c89d3a8256.scope: Deactivated successfully.
Oct  2 03:49:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:01 np0005465604 python3[97484]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759391340.9982126-33330-229927925069848/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:49:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:01 np0005465604 podman[97600]: 2025-10-02 07:49:01.922330919 +0000 UTC m=+0.046683884 container create e34f8086c6e682b98d8b4ce0853482ce78046567aebf691d01e82d3bb4132634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:01 np0005465604 systemd[1]: Started libpod-conmon-e34f8086c6e682b98d8b4ce0853482ce78046567aebf691d01e82d3bb4132634.scope.
Oct  2 03:49:01 np0005465604 podman[97600]: 2025-10-02 07:49:01.89904894 +0000 UTC m=+0.023401905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:02 np0005465604 podman[97600]: 2025-10-02 07:49:02.02130023 +0000 UTC m=+0.145653256 container init e34f8086c6e682b98d8b4ce0853482ce78046567aebf691d01e82d3bb4132634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sammet, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:02 np0005465604 podman[97600]: 2025-10-02 07:49:02.032799941 +0000 UTC m=+0.157152916 container start e34f8086c6e682b98d8b4ce0853482ce78046567aebf691d01e82d3bb4132634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:02 np0005465604 podman[97600]: 2025-10-02 07:49:02.036968721 +0000 UTC m=+0.161321686 container attach e34f8086c6e682b98d8b4ce0853482ce78046567aebf691d01e82d3bb4132634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sammet, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:02 np0005465604 serene_sammet[97641]: 167 167
Oct  2 03:49:02 np0005465604 systemd[1]: libpod-e34f8086c6e682b98d8b4ce0853482ce78046567aebf691d01e82d3bb4132634.scope: Deactivated successfully.
Oct  2 03:49:02 np0005465604 podman[97647]: 2025-10-02 07:49:02.102252477 +0000 UTC m=+0.043286617 container died e34f8086c6e682b98d8b4ce0853482ce78046567aebf691d01e82d3bb4132634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 03:49:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a19ff6c1fdb500458fa12d749463a72ecfd684d18aeb352c31919d6a6bfe65a6-merged.mount: Deactivated successfully.
Oct  2 03:49:02 np0005465604 podman[97647]: 2025-10-02 07:49:02.137584303 +0000 UTC m=+0.078618393 container remove e34f8086c6e682b98d8b4ce0853482ce78046567aebf691d01e82d3bb4132634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:02 np0005465604 systemd[1]: libpod-conmon-e34f8086c6e682b98d8b4ce0853482ce78046567aebf691d01e82d3bb4132634.scope: Deactivated successfully.
Oct  2 03:49:02 np0005465604 python3[97643]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:02 np0005465604 podman[97662]: 2025-10-02 07:49:02.22711722 +0000 UTC m=+0.039242881 container create f37f75c42d6a7ae035cc47ff44e01aedcada1879c7c3a2dfea93c2fb9de69233 (image=quay.io/ceph/ceph:v18, name=nice_bohr, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:02 np0005465604 systemd[1]: Started libpod-conmon-f37f75c42d6a7ae035cc47ff44e01aedcada1879c7c3a2dfea93c2fb9de69233.scope.
Oct  2 03:49:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4bb729d5514356034d9e4671c65c1cfee445d3e0e34faadbc4a7b2dd82c473f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4bb729d5514356034d9e4671c65c1cfee445d3e0e34faadbc4a7b2dd82c473f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4bb729d5514356034d9e4671c65c1cfee445d3e0e34faadbc4a7b2dd82c473f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:02 np0005465604 podman[97662]: 2025-10-02 07:49:02.303220694 +0000 UTC m=+0.115346355 container init f37f75c42d6a7ae035cc47ff44e01aedcada1879c7c3a2dfea93c2fb9de69233 (image=quay.io/ceph/ceph:v18, name=nice_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 03:49:02 np0005465604 podman[97662]: 2025-10-02 07:49:02.208739333 +0000 UTC m=+0.020864954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:02 np0005465604 podman[97662]: 2025-10-02 07:49:02.309510711 +0000 UTC m=+0.121636352 container start f37f75c42d6a7ae035cc47ff44e01aedcada1879c7c3a2dfea93c2fb9de69233 (image=quay.io/ceph/ceph:v18, name=nice_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:02 np0005465604 podman[97662]: 2025-10-02 07:49:02.312850286 +0000 UTC m=+0.124975917 container attach f37f75c42d6a7ae035cc47ff44e01aedcada1879c7c3a2dfea93c2fb9de69233 (image=quay.io/ceph/ceph:v18, name=nice_bohr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 03:49:02 np0005465604 podman[97682]: 2025-10-02 07:49:02.318053729 +0000 UTC m=+0.043893587 container create 73b2ba65209ff8efd2c44cdca5c5bc002df47b16f507df967aba7259d987c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 03:49:02 np0005465604 systemd[1]: Started libpod-conmon-73b2ba65209ff8efd2c44cdca5c5bc002df47b16f507df967aba7259d987c958.scope.
Oct  2 03:49:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e846ddc5c103c5fd8d71e487d9b437f433309acfaca8f88949156df08aa1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e846ddc5c103c5fd8d71e487d9b437f433309acfaca8f88949156df08aa1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e846ddc5c103c5fd8d71e487d9b437f433309acfaca8f88949156df08aa1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e846ddc5c103c5fd8d71e487d9b437f433309acfaca8f88949156df08aa1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:02 np0005465604 podman[97682]: 2025-10-02 07:49:02.398076686 +0000 UTC m=+0.123916574 container init 73b2ba65209ff8efd2c44cdca5c5bc002df47b16f507df967aba7259d987c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 03:49:02 np0005465604 podman[97682]: 2025-10-02 07:49:02.303322757 +0000 UTC m=+0.029162635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:02 np0005465604 podman[97682]: 2025-10-02 07:49:02.405013503 +0000 UTC m=+0.130853361 container start 73b2ba65209ff8efd2c44cdca5c5bc002df47b16f507df967aba7259d987c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:02 np0005465604 podman[97682]: 2025-10-02 07:49:02.408195214 +0000 UTC m=+0.134035112 container attach 73b2ba65209ff8efd2c44cdca5c5bc002df47b16f507df967aba7259d987c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:02 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:49:02 np0005465604 ceph-mgr[74774]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  2 03:49:02 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0[74473]: 2025-10-02T07:49:02.875+0000 7f1cf1015640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e2 new map
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e2 print_map
e2
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-10-02T07:49:02.875980+0000
modified	2025-10-02T07:49:02.876016+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	
up	{}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct  2 03:49:02 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct  2 03:49:02 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  2 03:49:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:02 np0005465604 ceph-mgr[74774]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  2 03:49:02 np0005465604 systemd[1]: libpod-f37f75c42d6a7ae035cc47ff44e01aedcada1879c7c3a2dfea93c2fb9de69233.scope: Deactivated successfully.
Oct  2 03:49:02 np0005465604 podman[97662]: 2025-10-02 07:49:02.914670833 +0000 UTC m=+0.726796464 container died f37f75c42d6a7ae035cc47ff44e01aedcada1879c7c3a2dfea93c2fb9de69233 (image=quay.io/ceph/ceph:v18, name=nice_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a4bb729d5514356034d9e4671c65c1cfee445d3e0e34faadbc4a7b2dd82c473f-merged.mount: Deactivated successfully.
Oct  2 03:49:02 np0005465604 podman[97662]: 2025-10-02 07:49:02.962923405 +0000 UTC m=+0.775049036 container remove f37f75c42d6a7ae035cc47ff44e01aedcada1879c7c3a2dfea93c2fb9de69233 (image=quay.io/ceph/ceph:v18, name=nice_bohr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 03:49:02 np0005465604 systemd[1]: libpod-conmon-f37f75c42d6a7ae035cc47ff44e01aedcada1879c7c3a2dfea93c2fb9de69233.scope: Deactivated successfully.
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]: {
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:    "0": [
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:        {
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "devices": [
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "/dev/loop3"
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            ],
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_name": "ceph_lv0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_size": "21470642176",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "name": "ceph_lv0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "tags": {
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.crush_device_class": "",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.encrypted": "0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.osd_id": "0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.type": "block",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.vdo": "0"
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            },
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "type": "block",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "vg_name": "ceph_vg0"
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:        }
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:    ],
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:    "1": [
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:        {
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "devices": [
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "/dev/loop4"
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            ],
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_name": "ceph_lv1",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_size": "21470642176",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "name": "ceph_lv1",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "tags": {
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.crush_device_class": "",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.encrypted": "0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.osd_id": "1",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.type": "block",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.vdo": "0"
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            },
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "type": "block",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "vg_name": "ceph_vg1"
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:        }
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:    ],
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:    "2": [
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:        {
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "devices": [
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "/dev/loop5"
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            ],
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_name": "ceph_lv2",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_size": "21470642176",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "name": "ceph_lv2",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "tags": {
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.crush_device_class": "",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.encrypted": "0",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.osd_id": "2",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.type": "block",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:                "ceph.vdo": "0"
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            },
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "type": "block",
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:            "vg_name": "ceph_vg2"
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:        }
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]:    ]
Oct  2 03:49:03 np0005465604 friendly_beaver[97703]: }
Oct  2 03:49:03 np0005465604 systemd[1]: libpod-73b2ba65209ff8efd2c44cdca5c5bc002df47b16f507df967aba7259d987c958.scope: Deactivated successfully.
Oct  2 03:49:03 np0005465604 podman[97682]: 2025-10-02 07:49:03.178076677 +0000 UTC m=+0.903916545 container died 73b2ba65209ff8efd2c44cdca5c5bc002df47b16f507df967aba7259d987c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 03:49:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1e4e846ddc5c103c5fd8d71e487d9b437f433309acfaca8f88949156df08aa1c-merged.mount: Deactivated successfully.
Oct  2 03:49:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  2 03:49:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  2 03:49:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  2 03:49:03 np0005465604 ceph-mon[74477]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  2 03:49:03 np0005465604 ceph-mon[74477]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  2 03:49:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  2 03:49:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:03 np0005465604 podman[97682]: 2025-10-02 07:49:03.228563508 +0000 UTC m=+0.954403366 container remove 73b2ba65209ff8efd2c44cdca5c5bc002df47b16f507df967aba7259d987c958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_beaver, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:03 np0005465604 systemd[1]: libpod-conmon-73b2ba65209ff8efd2c44cdca5c5bc002df47b16f507df967aba7259d987c958.scope: Deactivated successfully.
Oct  2 03:49:03 np0005465604 python3[97770]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:03 np0005465604 podman[97783]: 2025-10-02 07:49:03.321581753 +0000 UTC m=+0.041620624 container create d3bf3390b641f0ee33cae1e253e73e282fd8f60c9cf21e98189ae57130c00018 (image=quay.io/ceph/ceph:v18, name=peaceful_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 03:49:03 np0005465604 systemd[1]: Started libpod-conmon-d3bf3390b641f0ee33cae1e253e73e282fd8f60c9cf21e98189ae57130c00018.scope.
Oct  2 03:49:03 np0005465604 podman[97783]: 2025-10-02 07:49:03.300651578 +0000 UTC m=+0.020690459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7746309f162471d12a0d60d69112d6a3b583c7170fe0b67196d93350f5f43921/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7746309f162471d12a0d60d69112d6a3b583c7170fe0b67196d93350f5f43921/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7746309f162471d12a0d60d69112d6a3b583c7170fe0b67196d93350f5f43921/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:03 np0005465604 podman[97783]: 2025-10-02 07:49:03.431981583 +0000 UTC m=+0.152020494 container init d3bf3390b641f0ee33cae1e253e73e282fd8f60c9cf21e98189ae57130c00018 (image=quay.io/ceph/ceph:v18, name=peaceful_bhaskara, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 03:49:03 np0005465604 podman[97783]: 2025-10-02 07:49:03.441492191 +0000 UTC m=+0.161531072 container start d3bf3390b641f0ee33cae1e253e73e282fd8f60c9cf21e98189ae57130c00018 (image=quay.io/ceph/ceph:v18, name=peaceful_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 03:49:03 np0005465604 podman[97783]: 2025-10-02 07:49:03.445070063 +0000 UTC m=+0.165108964 container attach d3bf3390b641f0ee33cae1e253e73e282fd8f60c9cf21e98189ae57130c00018 (image=quay.io/ceph/ceph:v18, name=peaceful_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  2 03:49:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:03 np0005465604 podman[97961]: 2025-10-02 07:49:03.835784565 +0000 UTC m=+0.041545463 container create 1788e70e9eb61ca76a7839365fa8aa787a88c961b865c86d76fb4ca00ae56915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 03:49:03 np0005465604 systemd[1]: Started libpod-conmon-1788e70e9eb61ca76a7839365fa8aa787a88c961b865c86d76fb4ca00ae56915.scope.
Oct  2 03:49:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:03 np0005465604 podman[97961]: 2025-10-02 07:49:03.889110836 +0000 UTC m=+0.094871834 container init 1788e70e9eb61ca76a7839365fa8aa787a88c961b865c86d76fb4ca00ae56915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_booth, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 03:49:03 np0005465604 podman[97961]: 2025-10-02 07:49:03.894760403 +0000 UTC m=+0.100521291 container start 1788e70e9eb61ca76a7839365fa8aa787a88c961b865c86d76fb4ca00ae56915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_booth, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:49:03 np0005465604 podman[97961]: 2025-10-02 07:49:03.897312103 +0000 UTC m=+0.103073031 container attach 1788e70e9eb61ca76a7839365fa8aa787a88c961b865c86d76fb4ca00ae56915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_booth, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 03:49:03 np0005465604 focused_booth[97977]: 167 167
Oct  2 03:49:03 np0005465604 systemd[1]: libpod-1788e70e9eb61ca76a7839365fa8aa787a88c961b865c86d76fb4ca00ae56915.scope: Deactivated successfully.
Oct  2 03:49:03 np0005465604 podman[97961]: 2025-10-02 07:49:03.899375858 +0000 UTC m=+0.105136786 container died 1788e70e9eb61ca76a7839365fa8aa787a88c961b865c86d76fb4ca00ae56915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 03:49:03 np0005465604 podman[97961]: 2025-10-02 07:49:03.81870606 +0000 UTC m=+0.024466998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-06d8bae9a0bcdf6df84b0971fca4b5724f5b99e8e04969ba1b70167ecd8c8dd3-merged.mount: Deactivated successfully.
Oct  2 03:49:03 np0005465604 podman[97961]: 2025-10-02 07:49:03.939066301 +0000 UTC m=+0.144827209 container remove 1788e70e9eb61ca76a7839365fa8aa787a88c961b865c86d76fb4ca00ae56915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_booth, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:49:03 np0005465604 systemd[1]: libpod-conmon-1788e70e9eb61ca76a7839365fa8aa787a88c961b865c86d76fb4ca00ae56915.scope: Deactivated successfully.
Oct  2 03:49:03 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 03:49:03 np0005465604 ceph-mgr[74774]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct  2 03:49:03 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct  2 03:49:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  2 03:49:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:03 np0005465604 peaceful_bhaskara[97836]: Scheduled mds.cephfs update...
Oct  2 03:49:04 np0005465604 systemd[1]: libpod-d3bf3390b641f0ee33cae1e253e73e282fd8f60c9cf21e98189ae57130c00018.scope: Deactivated successfully.
Oct  2 03:49:04 np0005465604 podman[97998]: 2025-10-02 07:49:04.048427389 +0000 UTC m=+0.021802465 container died d3bf3390b641f0ee33cae1e253e73e282fd8f60c9cf21e98189ae57130c00018 (image=quay.io/ceph/ceph:v18, name=peaceful_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7746309f162471d12a0d60d69112d6a3b583c7170fe0b67196d93350f5f43921-merged.mount: Deactivated successfully.
Oct  2 03:49:04 np0005465604 podman[97998]: 2025-10-02 07:49:04.089089423 +0000 UTC m=+0.062464489 container remove d3bf3390b641f0ee33cae1e253e73e282fd8f60c9cf21e98189ae57130c00018 (image=quay.io/ceph/ceph:v18, name=peaceful_bhaskara, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 03:49:04 np0005465604 podman[98014]: 2025-10-02 07:49:04.092249411 +0000 UTC m=+0.038685003 container create cd37ffd63ba607a239452d2549ee78b165ec729c77ed92e18cd189b2491239a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:04 np0005465604 systemd[1]: libpod-conmon-d3bf3390b641f0ee33cae1e253e73e282fd8f60c9cf21e98189ae57130c00018.scope: Deactivated successfully.
Oct  2 03:49:04 np0005465604 systemd[1]: Started libpod-conmon-cd37ffd63ba607a239452d2549ee78b165ec729c77ed92e18cd189b2491239a6.scope.
Oct  2 03:49:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ec66dd0961d33b4bfef607dc45886083b5a4b2998325698835509c1e2fbabd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ec66dd0961d33b4bfef607dc45886083b5a4b2998325698835509c1e2fbabd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ec66dd0961d33b4bfef607dc45886083b5a4b2998325698835509c1e2fbabd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ec66dd0961d33b4bfef607dc45886083b5a4b2998325698835509c1e2fbabd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:04 np0005465604 podman[98014]: 2025-10-02 07:49:04.158624281 +0000 UTC m=+0.105059893 container init cd37ffd63ba607a239452d2549ee78b165ec729c77ed92e18cd189b2491239a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:04 np0005465604 podman[98014]: 2025-10-02 07:49:04.164114734 +0000 UTC m=+0.110550326 container start cd37ffd63ba607a239452d2549ee78b165ec729c77ed92e18cd189b2491239a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 03:49:04 np0005465604 podman[98014]: 2025-10-02 07:49:04.166910181 +0000 UTC m=+0.113345773 container attach cd37ffd63ba607a239452d2549ee78b165ec729c77ed92e18cd189b2491239a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:04 np0005465604 podman[98014]: 2025-10-02 07:49:04.077171649 +0000 UTC m=+0.023607261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:04 np0005465604 ceph-mon[74477]: Saving service mds.cephfs spec with placement compute-0
Oct  2 03:49:04 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:04 np0005465604 python3[98116]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]: {
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "osd_id": 2,
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "type": "bluestore"
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:    },
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "osd_id": 1,
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "type": "bluestore"
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:    },
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "osd_id": 0,
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:        "type": "bluestore"
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]:    }
Oct  2 03:49:05 np0005465604 infallible_haslett[98034]: }
Oct  2 03:49:05 np0005465604 systemd[1]: libpod-cd37ffd63ba607a239452d2549ee78b165ec729c77ed92e18cd189b2491239a6.scope: Deactivated successfully.
Oct  2 03:49:05 np0005465604 podman[98014]: 2025-10-02 07:49:05.077010829 +0000 UTC m=+1.023446431 container died cd37ffd63ba607a239452d2549ee78b165ec729c77ed92e18cd189b2491239a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  2 03:49:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-45ec66dd0961d33b4bfef607dc45886083b5a4b2998325698835509c1e2fbabd-merged.mount: Deactivated successfully.
Oct  2 03:49:05 np0005465604 podman[98014]: 2025-10-02 07:49:05.134141358 +0000 UTC m=+1.080576950 container remove cd37ffd63ba607a239452d2549ee78b165ec729c77ed92e18cd189b2491239a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:05 np0005465604 systemd[1]: libpod-conmon-cd37ffd63ba607a239452d2549ee78b165ec729c77ed92e18cd189b2491239a6.scope: Deactivated successfully.
Oct  2 03:49:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:49:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:49:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:05 np0005465604 ceph-mon[74477]: Saving service mds.cephfs spec with placement compute-0
Oct  2 03:49:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:05 np0005465604 python3[98229]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759391344.5296137-33361-230392058605490/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=2c67f9fe26159ea7074e4af19292a1efe57f71fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:49:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:05 np0005465604 python3[98429]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:05 np0005465604 podman[98466]: 2025-10-02 07:49:05.870335306 +0000 UTC m=+0.057459051 container create 5445149eb29526611f12c35a344bd23f756d303a556f77cc1f4c30fb68caea82 (image=quay.io/ceph/ceph:v18, name=elegant_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 03:49:05 np0005465604 systemd[1]: Started libpod-conmon-5445149eb29526611f12c35a344bd23f756d303a556f77cc1f4c30fb68caea82.scope.
Oct  2 03:49:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80bc46159cc74b133304bb93f8b61242c755b963a6c1d8767cf8766d34c76fd1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80bc46159cc74b133304bb93f8b61242c755b963a6c1d8767cf8766d34c76fd1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:05 np0005465604 podman[98466]: 2025-10-02 07:49:05.845192378 +0000 UTC m=+0.032316183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:05 np0005465604 podman[98466]: 2025-10-02 07:49:05.943549331 +0000 UTC m=+0.130673066 container init 5445149eb29526611f12c35a344bd23f756d303a556f77cc1f4c30fb68caea82 (image=quay.io/ceph/ceph:v18, name=elegant_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 03:49:05 np0005465604 podman[98466]: 2025-10-02 07:49:05.95183434 +0000 UTC m=+0.138958055 container start 5445149eb29526611f12c35a344bd23f756d303a556f77cc1f4c30fb68caea82 (image=quay.io/ceph/ceph:v18, name=elegant_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 03:49:05 np0005465604 podman[98466]: 2025-10-02 07:49:05.954809123 +0000 UTC m=+0.141932858 container attach 5445149eb29526611f12c35a344bd23f756d303a556f77cc1f4c30fb68caea82 (image=quay.io/ceph/ceph:v18, name=elegant_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 03:49:06 np0005465604 podman[98514]: 2025-10-02 07:49:06.015784304 +0000 UTC m=+0.049728939 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 03:49:06 np0005465604 podman[98514]: 2025-10-02 07:49:06.098045232 +0000 UTC m=+0.131989837 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2946469668' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2946469668' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  2 03:49:06 np0005465604 podman[98466]: 2025-10-02 07:49:06.548576429 +0000 UTC m=+0.735700154 container died 5445149eb29526611f12c35a344bd23f756d303a556f77cc1f4c30fb68caea82 (image=quay.io/ceph/ceph:v18, name=elegant_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:06 np0005465604 systemd[1]: libpod-5445149eb29526611f12c35a344bd23f756d303a556f77cc1f4c30fb68caea82.scope: Deactivated successfully.
Oct  2 03:49:06 np0005465604 systemd[1]: var-lib-containers-storage-overlay-80bc46159cc74b133304bb93f8b61242c755b963a6c1d8767cf8766d34c76fd1-merged.mount: Deactivated successfully.
Oct  2 03:49:06 np0005465604 podman[98466]: 2025-10-02 07:49:06.600727912 +0000 UTC m=+0.787851648 container remove 5445149eb29526611f12c35a344bd23f756d303a556f77cc1f4c30fb68caea82 (image=quay.io/ceph/ceph:v18, name=elegant_blackwell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:06 np0005465604 systemd[1]: libpod-conmon-5445149eb29526611f12c35a344bd23f756d303a556f77cc1f4c30fb68caea82.scope: Deactivated successfully.
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9c95324a-a9c3-41c9-b1cc-7e97d3347275 does not exist
Oct  2 03:49:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e41408d5-5a8d-4e38-929e-99176242cb2c does not exist
Oct  2 03:49:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 792da833-7d80-4e18-943c-f34b6b83cb4b does not exist
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:49:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:49:07 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/2946469668' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  2 03:49:07 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/2946469668' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  2 03:49:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:49:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:49:07 np0005465604 podman[98836]: 2025-10-02 07:49:07.288376729 +0000 UTC m=+0.054295142 container create ccbf1b315d63e47efb6d0ad7e0662cc43f68e7144a478fb4dc7bfa8d3adaccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 03:49:07 np0005465604 python3[98823]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:07 np0005465604 systemd[1]: Started libpod-conmon-ccbf1b315d63e47efb6d0ad7e0662cc43f68e7144a478fb4dc7bfa8d3adaccac.scope.
Oct  2 03:49:07 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:07 np0005465604 podman[98836]: 2025-10-02 07:49:07.261862778 +0000 UTC m=+0.027781251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:07 np0005465604 podman[98836]: 2025-10-02 07:49:07.366339642 +0000 UTC m=+0.132258065 container init ccbf1b315d63e47efb6d0ad7e0662cc43f68e7144a478fb4dc7bfa8d3adaccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 03:49:07 np0005465604 podman[98853]: 2025-10-02 07:49:07.371837694 +0000 UTC m=+0.049180021 container create 93467dbf9fd31315a2aa419dd12fed353a11c68e5caa1835b7a7bbcd884b2aec (image=quay.io/ceph/ceph:v18, name=unruffled_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:07 np0005465604 podman[98836]: 2025-10-02 07:49:07.373870588 +0000 UTC m=+0.139789011 container start ccbf1b315d63e47efb6d0ad7e0662cc43f68e7144a478fb4dc7bfa8d3adaccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:07 np0005465604 distracted_colden[98860]: 167 167
Oct  2 03:49:07 np0005465604 systemd[1]: libpod-ccbf1b315d63e47efb6d0ad7e0662cc43f68e7144a478fb4dc7bfa8d3adaccac.scope: Deactivated successfully.
Oct  2 03:49:07 np0005465604 podman[98836]: 2025-10-02 07:49:07.383259622 +0000 UTC m=+0.149178095 container attach ccbf1b315d63e47efb6d0ad7e0662cc43f68e7144a478fb4dc7bfa8d3adaccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 03:49:07 np0005465604 podman[98836]: 2025-10-02 07:49:07.383698546 +0000 UTC m=+0.149616999 container died ccbf1b315d63e47efb6d0ad7e0662cc43f68e7144a478fb4dc7bfa8d3adaccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 03:49:07 np0005465604 systemd[1]: var-lib-containers-storage-overlay-40332556d833ac7b8c4d7a5f3e5d83b0f335efddb672296a8e1afe8def6a9649-merged.mount: Deactivated successfully.
Oct  2 03:49:07 np0005465604 podman[98836]: 2025-10-02 07:49:07.421876873 +0000 UTC m=+0.187795256 container remove ccbf1b315d63e47efb6d0ad7e0662cc43f68e7144a478fb4dc7bfa8d3adaccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 03:49:07 np0005465604 systemd[1]: Started libpod-conmon-93467dbf9fd31315a2aa419dd12fed353a11c68e5caa1835b7a7bbcd884b2aec.scope.
Oct  2 03:49:07 np0005465604 podman[98853]: 2025-10-02 07:49:07.347576094 +0000 UTC m=+0.024918501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:07 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:07 np0005465604 systemd[1]: libpod-conmon-ccbf1b315d63e47efb6d0ad7e0662cc43f68e7144a478fb4dc7bfa8d3adaccac.scope: Deactivated successfully.
Oct  2 03:49:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc3185193bc9ab28c40fd63d1be23b0af42d35b4cb613d42ec7f89e16422ad9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc3185193bc9ab28c40fd63d1be23b0af42d35b4cb613d42ec7f89e16422ad9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:07 np0005465604 podman[98853]: 2025-10-02 07:49:07.46490777 +0000 UTC m=+0.142250147 container init 93467dbf9fd31315a2aa419dd12fed353a11c68e5caa1835b7a7bbcd884b2aec (image=quay.io/ceph/ceph:v18, name=unruffled_payne, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 03:49:07 np0005465604 podman[98853]: 2025-10-02 07:49:07.470674621 +0000 UTC m=+0.148016948 container start 93467dbf9fd31315a2aa419dd12fed353a11c68e5caa1835b7a7bbcd884b2aec (image=quay.io/ceph/ceph:v18, name=unruffled_payne, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 03:49:07 np0005465604 podman[98853]: 2025-10-02 07:49:07.474873843 +0000 UTC m=+0.152216210 container attach 93467dbf9fd31315a2aa419dd12fed353a11c68e5caa1835b7a7bbcd884b2aec (image=quay.io/ceph/ceph:v18, name=unruffled_payne, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:07 np0005465604 podman[98895]: 2025-10-02 07:49:07.586979826 +0000 UTC m=+0.037101263 container create 7e1b11ace40c597d5b7ab30aa1c218600498a17a6a8052423071c3e00d2f6bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 03:49:07 np0005465604 systemd[1]: Started libpod-conmon-7e1b11ace40c597d5b7ab30aa1c218600498a17a6a8052423071c3e00d2f6bcc.scope.
Oct  2 03:49:07 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05de0c88370846215a3f80ac0e131ca049c60be29b329d105edd4a8f3b010a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05de0c88370846215a3f80ac0e131ca049c60be29b329d105edd4a8f3b010a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05de0c88370846215a3f80ac0e131ca049c60be29b329d105edd4a8f3b010a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05de0c88370846215a3f80ac0e131ca049c60be29b329d105edd4a8f3b010a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05de0c88370846215a3f80ac0e131ca049c60be29b329d105edd4a8f3b010a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:07 np0005465604 podman[98895]: 2025-10-02 07:49:07.570560911 +0000 UTC m=+0.020682378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:07 np0005465604 podman[98895]: 2025-10-02 07:49:07.680139765 +0000 UTC m=+0.130261222 container init 7e1b11ace40c597d5b7ab30aa1c218600498a17a6a8052423071c3e00d2f6bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:07 np0005465604 podman[98895]: 2025-10-02 07:49:07.687275139 +0000 UTC m=+0.137396576 container start 7e1b11ace40c597d5b7ab30aa1c218600498a17a6a8052423071c3e00d2f6bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 03:49:07 np0005465604 podman[98895]: 2025-10-02 07:49:07.690557881 +0000 UTC m=+0.140679318 container attach 7e1b11ace40c597d5b7ab30aa1c218600498a17a6a8052423071c3e00d2f6bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 03:49:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  2 03:49:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1503215996' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  2 03:49:08 np0005465604 unruffled_payne[98884]: 
Oct  2 03:49:08 np0005465604 unruffled_payne[98884]: {"fsid":"a52e644f-f702-594c-a648-813e3e0df2b1","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":146,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1759391316,"num_in_osds":3,"osd_in_since":1759391291,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83804160,"bytes_avail":64328122368,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-02T07:48:29.787187+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct  2 03:49:08 np0005465604 systemd[1]: libpod-93467dbf9fd31315a2aa419dd12fed353a11c68e5caa1835b7a7bbcd884b2aec.scope: Deactivated successfully.
Oct  2 03:49:08 np0005465604 podman[98853]: 2025-10-02 07:49:08.090253786 +0000 UTC m=+0.767596153 container died 93467dbf9fd31315a2aa419dd12fed353a11c68e5caa1835b7a7bbcd884b2aec (image=quay.io/ceph/ceph:v18, name=unruffled_payne, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ecc3185193bc9ab28c40fd63d1be23b0af42d35b4cb613d42ec7f89e16422ad9-merged.mount: Deactivated successfully.
Oct  2 03:49:08 np0005465604 podman[98853]: 2025-10-02 07:49:08.129873397 +0000 UTC m=+0.807215724 container remove 93467dbf9fd31315a2aa419dd12fed353a11c68e5caa1835b7a7bbcd884b2aec (image=quay.io/ceph/ceph:v18, name=unruffled_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 03:49:08 np0005465604 systemd[1]: libpod-conmon-93467dbf9fd31315a2aa419dd12fed353a11c68e5caa1835b7a7bbcd884b2aec.scope: Deactivated successfully.
Oct  2 03:49:08 np0005465604 python3[98977]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:08 np0005465604 podman[98984]: 2025-10-02 07:49:08.492472549 +0000 UTC m=+0.040572242 container create 5e0eb75a84180a1880200d484dc1d510df7fbac0204133ddb13fde880f9b8f5f (image=quay.io/ceph/ceph:v18, name=pedantic_meninsky, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 03:49:08 np0005465604 systemd[1]: Started libpod-conmon-5e0eb75a84180a1880200d484dc1d510df7fbac0204133ddb13fde880f9b8f5f.scope.
Oct  2 03:49:08 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7d202c3ef7e8159c1fe49e41184fb5fe0dca4752799188870f339dd7d13e632/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7d202c3ef7e8159c1fe49e41184fb5fe0dca4752799188870f339dd7d13e632/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:08 np0005465604 podman[98984]: 2025-10-02 07:49:08.476002912 +0000 UTC m=+0.024102655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:08 np0005465604 podman[98984]: 2025-10-02 07:49:08.582770078 +0000 UTC m=+0.130869801 container init 5e0eb75a84180a1880200d484dc1d510df7fbac0204133ddb13fde880f9b8f5f (image=quay.io/ceph/ceph:v18, name=pedantic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:08 np0005465604 podman[98984]: 2025-10-02 07:49:08.592117931 +0000 UTC m=+0.140217624 container start 5e0eb75a84180a1880200d484dc1d510df7fbac0204133ddb13fde880f9b8f5f (image=quay.io/ceph/ceph:v18, name=pedantic_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct  2 03:49:08 np0005465604 podman[98984]: 2025-10-02 07:49:08.595160636 +0000 UTC m=+0.143260379 container attach 5e0eb75a84180a1880200d484dc1d510df7fbac0204133ddb13fde880f9b8f5f (image=quay.io/ceph/ceph:v18, name=pedantic_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 03:49:08 np0005465604 elegant_spence[98912]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:49:08 np0005465604 elegant_spence[98912]: --> relative data size: 1.0
Oct  2 03:49:08 np0005465604 elegant_spence[98912]: --> All data devices are unavailable
Oct  2 03:49:08 np0005465604 systemd[1]: libpod-7e1b11ace40c597d5b7ab30aa1c218600498a17a6a8052423071c3e00d2f6bcc.scope: Deactivated successfully.
Oct  2 03:49:08 np0005465604 podman[98895]: 2025-10-02 07:49:08.82249005 +0000 UTC m=+1.272611527 container died 7e1b11ace40c597d5b7ab30aa1c218600498a17a6a8052423071c3e00d2f6bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:08 np0005465604 systemd[1]: libpod-7e1b11ace40c597d5b7ab30aa1c218600498a17a6a8052423071c3e00d2f6bcc.scope: Consumed 1.044s CPU time.
Oct  2 03:49:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b05de0c88370846215a3f80ac0e131ca049c60be29b329d105edd4a8f3b010a7-merged.mount: Deactivated successfully.
Oct  2 03:49:08 np0005465604 podman[98895]: 2025-10-02 07:49:08.882656105 +0000 UTC m=+1.332777562 container remove 7e1b11ace40c597d5b7ab30aa1c218600498a17a6a8052423071c3e00d2f6bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_spence, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:08 np0005465604 systemd[1]: libpod-conmon-7e1b11ace40c597d5b7ab30aa1c218600498a17a6a8052423071c3e00d2f6bcc.scope: Deactivated successfully.
Oct  2 03:49:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 03:49:09 np0005465604 pedantic_meninsky[99004]: 
Oct  2 03:49:09 np0005465604 pedantic_meninsky[99004]: {"epoch":1,"fsid":"a52e644f-f702-594c-a648-813e3e0df2b1","modified":"2025-10-02T07:46:36.297112Z","created":"2025-10-02T07:46:36.297112Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct  2 03:49:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139204715' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 03:49:09 np0005465604 pedantic_meninsky[99004]: dumped monmap epoch 1
Oct  2 03:49:09 np0005465604 systemd[1]: libpod-5e0eb75a84180a1880200d484dc1d510df7fbac0204133ddb13fde880f9b8f5f.scope: Deactivated successfully.
Oct  2 03:49:09 np0005465604 podman[98984]: 2025-10-02 07:49:09.188070764 +0000 UTC m=+0.736170457 container died 5e0eb75a84180a1880200d484dc1d510df7fbac0204133ddb13fde880f9b8f5f (image=quay.io/ceph/ceph:v18, name=pedantic_meninsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a7d202c3ef7e8159c1fe49e41184fb5fe0dca4752799188870f339dd7d13e632-merged.mount: Deactivated successfully.
Oct  2 03:49:09 np0005465604 podman[98984]: 2025-10-02 07:49:09.239223398 +0000 UTC m=+0.787323091 container remove 5e0eb75a84180a1880200d484dc1d510df7fbac0204133ddb13fde880f9b8f5f (image=quay.io/ceph/ceph:v18, name=pedantic_meninsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 03:49:09 np0005465604 systemd[1]: libpod-conmon-5e0eb75a84180a1880200d484dc1d510df7fbac0204133ddb13fde880f9b8f5f.scope: Deactivated successfully.
Oct  2 03:49:09 np0005465604 podman[99205]: 2025-10-02 07:49:09.424701419 +0000 UTC m=+0.034271455 container create 6627326bbbc3557794b71d02c0d2153de48aa33100b8aaab02af644f569d9f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:09 np0005465604 systemd[1]: Started libpod-conmon-6627326bbbc3557794b71d02c0d2153de48aa33100b8aaab02af644f569d9f51.scope.
Oct  2 03:49:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:09 np0005465604 podman[99205]: 2025-10-02 07:49:09.411232448 +0000 UTC m=+0.020802514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:09 np0005465604 podman[99205]: 2025-10-02 07:49:09.510217668 +0000 UTC m=+0.119787774 container init 6627326bbbc3557794b71d02c0d2153de48aa33100b8aaab02af644f569d9f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:09 np0005465604 podman[99205]: 2025-10-02 07:49:09.516564148 +0000 UTC m=+0.126134194 container start 6627326bbbc3557794b71d02c0d2153de48aa33100b8aaab02af644f569d9f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 03:49:09 np0005465604 podman[99205]: 2025-10-02 07:49:09.519766508 +0000 UTC m=+0.129336584 container attach 6627326bbbc3557794b71d02c0d2153de48aa33100b8aaab02af644f569d9f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:09 np0005465604 vibrant_vaughan[99221]: 167 167
Oct  2 03:49:09 np0005465604 systemd[1]: libpod-6627326bbbc3557794b71d02c0d2153de48aa33100b8aaab02af644f569d9f51.scope: Deactivated successfully.
Oct  2 03:49:09 np0005465604 podman[99205]: 2025-10-02 07:49:09.522034439 +0000 UTC m=+0.131604495 container died 6627326bbbc3557794b71d02c0d2153de48aa33100b8aaab02af644f569d9f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 03:49:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-540efc20871308debde98b79dca44a6728dea789d5bf9ac3dea90980bd86bc65-merged.mount: Deactivated successfully.
Oct  2 03:49:09 np0005465604 podman[99205]: 2025-10-02 07:49:09.562810007 +0000 UTC m=+0.172380053 container remove 6627326bbbc3557794b71d02c0d2153de48aa33100b8aaab02af644f569d9f51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:09 np0005465604 systemd[1]: libpod-conmon-6627326bbbc3557794b71d02c0d2153de48aa33100b8aaab02af644f569d9f51.scope: Deactivated successfully.
Oct  2 03:49:09 np0005465604 podman[99268]: 2025-10-02 07:49:09.711246447 +0000 UTC m=+0.047712715 container create 0a5a16537dfb4f7d51c994ce6fd28864508db8ff90743327797df5e813cf5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:09 np0005465604 systemd[1]: Started libpod-conmon-0a5a16537dfb4f7d51c994ce6fd28864508db8ff90743327797df5e813cf5420.scope.
Oct  2 03:49:09 np0005465604 podman[99268]: 2025-10-02 07:49:09.684523001 +0000 UTC m=+0.020989379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:09 np0005465604 python3[99280]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a46112308ec482555e2621fda9aa54130856b07cdd1e15b6ffaf188f493c683/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a46112308ec482555e2621fda9aa54130856b07cdd1e15b6ffaf188f493c683/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a46112308ec482555e2621fda9aa54130856b07cdd1e15b6ffaf188f493c683/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a46112308ec482555e2621fda9aa54130856b07cdd1e15b6ffaf188f493c683/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:09 np0005465604 podman[99268]: 2025-10-02 07:49:09.823856276 +0000 UTC m=+0.160322564 container init 0a5a16537dfb4f7d51c994ce6fd28864508db8ff90743327797df5e813cf5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 03:49:09 np0005465604 podman[99268]: 2025-10-02 07:49:09.830085511 +0000 UTC m=+0.166551779 container start 0a5a16537dfb4f7d51c994ce6fd28864508db8ff90743327797df5e813cf5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 03:49:09 np0005465604 podman[99268]: 2025-10-02 07:49:09.83479703 +0000 UTC m=+0.171263308 container attach 0a5a16537dfb4f7d51c994ce6fd28864508db8ff90743327797df5e813cf5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 03:49:09 np0005465604 podman[99290]: 2025-10-02 07:49:09.855644123 +0000 UTC m=+0.040434359 container create 1f338860c6099dba926025595ec153b80c08dec3cfa4116c9926ef25ec48c873 (image=quay.io/ceph/ceph:v18, name=fervent_cohen, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 03:49:09 np0005465604 systemd[1]: Started libpod-conmon-1f338860c6099dba926025595ec153b80c08dec3cfa4116c9926ef25ec48c873.scope.
Oct  2 03:49:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0a6e8d92eab68df5d9a293a45105678afe7df57eb18db0d3407fdfc966bbc3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0a6e8d92eab68df5d9a293a45105678afe7df57eb18db0d3407fdfc966bbc3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:09 np0005465604 podman[99290]: 2025-10-02 07:49:09.920316699 +0000 UTC m=+0.105106945 container init 1f338860c6099dba926025595ec153b80c08dec3cfa4116c9926ef25ec48c873 (image=quay.io/ceph/ceph:v18, name=fervent_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:09 np0005465604 podman[99290]: 2025-10-02 07:49:09.925819781 +0000 UTC m=+0.110610017 container start 1f338860c6099dba926025595ec153b80c08dec3cfa4116c9926ef25ec48c873 (image=quay.io/ceph/ceph:v18, name=fervent_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:09 np0005465604 podman[99290]: 2025-10-02 07:49:09.840820208 +0000 UTC m=+0.025610464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:09 np0005465604 podman[99290]: 2025-10-02 07:49:09.936985542 +0000 UTC m=+0.121775798 container attach 1f338860c6099dba926025595ec153b80c08dec3cfa4116c9926ef25ec48c873 (image=quay.io/ceph/ceph:v18, name=fervent_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct  2 03:49:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1127736894' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  2 03:49:10 np0005465604 fervent_cohen[99308]: [client.openstack]
Oct  2 03:49:10 np0005465604 fervent_cohen[99308]: 	key = AQDCLd5oAAAAABAACaE4tiNlOr67uYgseuuj2w==
Oct  2 03:49:10 np0005465604 fervent_cohen[99308]: 	caps mgr = "allow *"
Oct  2 03:49:10 np0005465604 fervent_cohen[99308]: 	caps mon = "profile rbd"
Oct  2 03:49:10 np0005465604 fervent_cohen[99308]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct  2 03:49:10 np0005465604 systemd[1]: libpod-1f338860c6099dba926025595ec153b80c08dec3cfa4116c9926ef25ec48c873.scope: Deactivated successfully.
Oct  2 03:49:10 np0005465604 podman[99290]: 2025-10-02 07:49:10.517585054 +0000 UTC m=+0.702375290 container died 1f338860c6099dba926025595ec153b80c08dec3cfa4116c9926ef25ec48c873 (image=quay.io/ceph/ceph:v18, name=fervent_cohen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:10 np0005465604 systemd[1]: var-lib-containers-storage-overlay-df0a6e8d92eab68df5d9a293a45105678afe7df57eb18db0d3407fdfc966bbc3-merged.mount: Deactivated successfully.
Oct  2 03:49:10 np0005465604 podman[99290]: 2025-10-02 07:49:10.558077082 +0000 UTC m=+0.742867308 container remove 1f338860c6099dba926025595ec153b80c08dec3cfa4116c9926ef25ec48c873 (image=quay.io/ceph/ceph:v18, name=fervent_cohen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:10 np0005465604 systemd[1]: libpod-conmon-1f338860c6099dba926025595ec153b80c08dec3cfa4116c9926ef25ec48c873.scope: Deactivated successfully.
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]: {
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:    "0": [
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:        {
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "devices": [
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "/dev/loop3"
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            ],
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_name": "ceph_lv0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_size": "21470642176",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "name": "ceph_lv0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "tags": {
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.crush_device_class": "",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.encrypted": "0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.osd_id": "0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.type": "block",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.vdo": "0"
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            },
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "type": "block",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "vg_name": "ceph_vg0"
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:        }
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:    ],
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:    "1": [
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:        {
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "devices": [
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "/dev/loop4"
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            ],
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_name": "ceph_lv1",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_size": "21470642176",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "name": "ceph_lv1",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "tags": {
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.crush_device_class": "",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.encrypted": "0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.osd_id": "1",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.type": "block",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.vdo": "0"
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            },
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "type": "block",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "vg_name": "ceph_vg1"
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:        }
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:    ],
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:    "2": [
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:        {
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "devices": [
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "/dev/loop5"
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            ],
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_name": "ceph_lv2",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_size": "21470642176",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "name": "ceph_lv2",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "tags": {
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.crush_device_class": "",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.encrypted": "0",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.osd_id": "2",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.type": "block",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:                "ceph.vdo": "0"
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            },
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "type": "block",
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:            "vg_name": "ceph_vg2"
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:        }
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]:    ]
Oct  2 03:49:10 np0005465604 silly_goldwasser[99287]: }
Oct  2 03:49:10 np0005465604 systemd[1]: libpod-0a5a16537dfb4f7d51c994ce6fd28864508db8ff90743327797df5e813cf5420.scope: Deactivated successfully.
Oct  2 03:49:10 np0005465604 podman[99268]: 2025-10-02 07:49:10.62693499 +0000 UTC m=+0.963401308 container died 0a5a16537dfb4f7d51c994ce6fd28864508db8ff90743327797df5e813cf5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldwasser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:10 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1a46112308ec482555e2621fda9aa54130856b07cdd1e15b6ffaf188f493c683-merged.mount: Deactivated successfully.
Oct  2 03:49:10 np0005465604 podman[99268]: 2025-10-02 07:49:10.671226578 +0000 UTC m=+1.007692846 container remove 0a5a16537dfb4f7d51c994ce6fd28864508db8ff90743327797df5e813cf5420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 03:49:10 np0005465604 systemd[1]: libpod-conmon-0a5a16537dfb4f7d51c994ce6fd28864508db8ff90743327797df5e813cf5420.scope: Deactivated successfully.
Oct  2 03:49:11 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1127736894' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  2 03:49:11 np0005465604 podman[99503]: 2025-10-02 07:49:11.268359248 +0000 UTC m=+0.033599853 container create 35877e1f0f57fd07e1502f90a531edd0520ca91384928ab6c96be9edb401050d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 03:49:11 np0005465604 systemd[1]: Started libpod-conmon-35877e1f0f57fd07e1502f90a531edd0520ca91384928ab6c96be9edb401050d.scope.
Oct  2 03:49:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:11 np0005465604 podman[99503]: 2025-10-02 07:49:11.346870819 +0000 UTC m=+0.112111424 container init 35877e1f0f57fd07e1502f90a531edd0520ca91384928ab6c96be9edb401050d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 03:49:11 np0005465604 podman[99503]: 2025-10-02 07:49:11.254300918 +0000 UTC m=+0.019541523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:11 np0005465604 podman[99503]: 2025-10-02 07:49:11.353283629 +0000 UTC m=+0.118524214 container start 35877e1f0f57fd07e1502f90a531edd0520ca91384928ab6c96be9edb401050d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_engelbart, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 03:49:11 np0005465604 condescending_engelbart[99519]: 167 167
Oct  2 03:49:11 np0005465604 systemd[1]: libpod-35877e1f0f57fd07e1502f90a531edd0520ca91384928ab6c96be9edb401050d.scope: Deactivated successfully.
Oct  2 03:49:11 np0005465604 podman[99503]: 2025-10-02 07:49:11.357240933 +0000 UTC m=+0.122481518 container attach 35877e1f0f57fd07e1502f90a531edd0520ca91384928ab6c96be9edb401050d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:11 np0005465604 podman[99503]: 2025-10-02 07:49:11.357571335 +0000 UTC m=+0.122811920 container died 35877e1f0f57fd07e1502f90a531edd0520ca91384928ab6c96be9edb401050d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_engelbart, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 03:49:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2c4e48a39889e6fe04f22c45e2c241abe9dc6827d32e421566c5ee13249c3dd4-merged.mount: Deactivated successfully.
Oct  2 03:49:11 np0005465604 podman[99503]: 2025-10-02 07:49:11.390385822 +0000 UTC m=+0.155626407 container remove 35877e1f0f57fd07e1502f90a531edd0520ca91384928ab6c96be9edb401050d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_engelbart, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct  2 03:49:11 np0005465604 systemd[1]: libpod-conmon-35877e1f0f57fd07e1502f90a531edd0520ca91384928ab6c96be9edb401050d.scope: Deactivated successfully.
Oct  2 03:49:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:11 np0005465604 podman[99543]: 2025-10-02 07:49:11.536117398 +0000 UTC m=+0.041098668 container create a3491081abc39ab5236a1bd8323d7578428f788bd8195b1f2055142a368f0531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_vaughan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 03:49:11 np0005465604 systemd[1]: Started libpod-conmon-a3491081abc39ab5236a1bd8323d7578428f788bd8195b1f2055142a368f0531.scope.
Oct  2 03:49:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0d148c4e75cb00671bdc48b071eba2af8ad349775031633cf600cffb5882d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0d148c4e75cb00671bdc48b071eba2af8ad349775031633cf600cffb5882d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0d148c4e75cb00671bdc48b071eba2af8ad349775031633cf600cffb5882d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a0d148c4e75cb00671bdc48b071eba2af8ad349775031633cf600cffb5882d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:11 np0005465604 podman[99543]: 2025-10-02 07:49:11.611273873 +0000 UTC m=+0.116255163 container init a3491081abc39ab5236a1bd8323d7578428f788bd8195b1f2055142a368f0531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_vaughan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 03:49:11 np0005465604 podman[99543]: 2025-10-02 07:49:11.516638158 +0000 UTC m=+0.021619438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:11 np0005465604 podman[99543]: 2025-10-02 07:49:11.617184889 +0000 UTC m=+0.122166169 container start a3491081abc39ab5236a1bd8323d7578428f788bd8195b1f2055142a368f0531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_vaughan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:11 np0005465604 podman[99543]: 2025-10-02 07:49:11.620767771 +0000 UTC m=+0.125749061 container attach a3491081abc39ab5236a1bd8323d7578428f788bd8195b1f2055142a368f0531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 03:49:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:12 np0005465604 ansible-async_wrapper.py[99714]: Invoked with j201622710160 30 /home/zuul/.ansible/tmp/ansible-tmp-1759391351.5846753-33433-145933841673207/AnsiballZ_command.py _
Oct  2 03:49:12 np0005465604 ansible-async_wrapper.py[99717]: Starting module and watcher
Oct  2 03:49:12 np0005465604 ansible-async_wrapper.py[99717]: Start watching 99718 (30)
Oct  2 03:49:12 np0005465604 ansible-async_wrapper.py[99718]: Start module (99718)
Oct  2 03:49:12 np0005465604 ansible-async_wrapper.py[99714]: Return async_wrapper task started.
Oct  2 03:49:12 np0005465604 python3[99719]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:12 np0005465604 podman[99720]: 2025-10-02 07:49:12.284478778 +0000 UTC m=+0.048286754 container create 3bf6487c3c702d9d94b3009bdfb808ad74f4cfb81085a23916501741be2a638a (image=quay.io/ceph/ceph:v18, name=friendly_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 03:49:12 np0005465604 systemd[1]: Started libpod-conmon-3bf6487c3c702d9d94b3009bdfb808ad74f4cfb81085a23916501741be2a638a.scope.
Oct  2 03:49:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afce3cccd4170964d37bd56ea322af1a72be3b31bcd4a59f81d668530a334e7c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afce3cccd4170964d37bd56ea322af1a72be3b31bcd4a59f81d668530a334e7c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:12 np0005465604 podman[99720]: 2025-10-02 07:49:12.26669548 +0000 UTC m=+0.030503446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:12 np0005465604 podman[99720]: 2025-10-02 07:49:12.373458476 +0000 UTC m=+0.137266452 container init 3bf6487c3c702d9d94b3009bdfb808ad74f4cfb81085a23916501741be2a638a (image=quay.io/ceph/ceph:v18, name=friendly_blackburn, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:12 np0005465604 podman[99720]: 2025-10-02 07:49:12.380653511 +0000 UTC m=+0.144461467 container start 3bf6487c3c702d9d94b3009bdfb808ad74f4cfb81085a23916501741be2a638a (image=quay.io/ceph/ceph:v18, name=friendly_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 03:49:12 np0005465604 podman[99720]: 2025-10-02 07:49:12.384078758 +0000 UTC m=+0.147886705 container attach 3bf6487c3c702d9d94b3009bdfb808ad74f4cfb81085a23916501741be2a638a (image=quay.io/ceph/ceph:v18, name=friendly_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]: {
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "osd_id": 2,
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "type": "bluestore"
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:    },
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "osd_id": 1,
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "type": "bluestore"
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:    },
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "osd_id": 0,
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:        "type": "bluestore"
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]:    }
Oct  2 03:49:12 np0005465604 hungry_vaughan[99583]: }
Oct  2 03:49:12 np0005465604 systemd[1]: libpod-a3491081abc39ab5236a1bd8323d7578428f788bd8195b1f2055142a368f0531.scope: Deactivated successfully.
Oct  2 03:49:12 np0005465604 podman[99767]: 2025-10-02 07:49:12.602951127 +0000 UTC m=+0.027173882 container died a3491081abc39ab5236a1bd8323d7578428f788bd8195b1f2055142a368f0531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 03:49:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a5a0d148c4e75cb00671bdc48b071eba2af8ad349775031633cf600cffb5882d-merged.mount: Deactivated successfully.
Oct  2 03:49:12 np0005465604 podman[99767]: 2025-10-02 07:49:12.646326396 +0000 UTC m=+0.070549131 container remove a3491081abc39ab5236a1bd8323d7578428f788bd8195b1f2055142a368f0531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:12 np0005465604 systemd[1]: libpod-conmon-a3491081abc39ab5236a1bd8323d7578428f788bd8195b1f2055142a368f0531.scope: Deactivated successfully.
Oct  2 03:49:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:49:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:49:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:12 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev ef5ee7ec-e46e-4ccb-9aa0-6895128a6ac8 (Updating rgw.rgw deployment (+1 -> 1))
Oct  2 03:49:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ulqmgx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct  2 03:49:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ulqmgx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  2 03:49:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ulqmgx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  2 03:49:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct  2 03:49:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:49:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:49:12 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.ulqmgx on compute-0
Oct  2 03:49:12 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.ulqmgx on compute-0
Oct  2 03:49:12 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  2 03:49:12 np0005465604 friendly_blackburn[99746]: 
Oct  2 03:49:12 np0005465604 friendly_blackburn[99746]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  2 03:49:12 np0005465604 systemd[1]: libpod-3bf6487c3c702d9d94b3009bdfb808ad74f4cfb81085a23916501741be2a638a.scope: Deactivated successfully.
Oct  2 03:49:12 np0005465604 podman[99720]: 2025-10-02 07:49:12.944104327 +0000 UTC m=+0.707912263 container died 3bf6487c3c702d9d94b3009bdfb808ad74f4cfb81085a23916501741be2a638a (image=quay.io/ceph/ceph:v18, name=friendly_blackburn, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:49:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-afce3cccd4170964d37bd56ea322af1a72be3b31bcd4a59f81d668530a334e7c-merged.mount: Deactivated successfully.
Oct  2 03:49:12 np0005465604 podman[99720]: 2025-10-02 07:49:12.987790576 +0000 UTC m=+0.751598532 container remove 3bf6487c3c702d9d94b3009bdfb808ad74f4cfb81085a23916501741be2a638a (image=quay.io/ceph/ceph:v18, name=friendly_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:12 np0005465604 systemd[1]: libpod-conmon-3bf6487c3c702d9d94b3009bdfb808ad74f4cfb81085a23916501741be2a638a.scope: Deactivated successfully.
Oct  2 03:49:13 np0005465604 ansible-async_wrapper.py[99718]: Module complete (99718)
Oct  2 03:49:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ulqmgx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  2 03:49:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ulqmgx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  2 03:49:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:13 np0005465604 podman[100004]: 2025-10-02 07:49:13.348524219 +0000 UTC m=+0.064367158 container create f424172a446b6eeed76b1da30c0b0862068d1d5e32a430769c6c81e585218871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:13 np0005465604 systemd[1]: Started libpod-conmon-f424172a446b6eeed76b1da30c0b0862068d1d5e32a430769c6c81e585218871.scope.
Oct  2 03:49:13 np0005465604 python3[100003]: ansible-ansible.legacy.async_status Invoked with jid=j201622710160.99714 mode=status _async_dir=/root/.ansible_async
Oct  2 03:49:13 np0005465604 podman[100004]: 2025-10-02 07:49:13.325183878 +0000 UTC m=+0.041026847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:13 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:13 np0005465604 podman[100004]: 2025-10-02 07:49:13.454104447 +0000 UTC m=+0.169947476 container init f424172a446b6eeed76b1da30c0b0862068d1d5e32a430769c6c81e585218871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 03:49:13 np0005465604 podman[100004]: 2025-10-02 07:49:13.462217961 +0000 UTC m=+0.178060900 container start f424172a446b6eeed76b1da30c0b0862068d1d5e32a430769c6c81e585218871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 03:49:13 np0005465604 podman[100004]: 2025-10-02 07:49:13.46567529 +0000 UTC m=+0.181518319 container attach f424172a446b6eeed76b1da30c0b0862068d1d5e32a430769c6c81e585218871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:13 np0005465604 beautiful_villani[100020]: 167 167
Oct  2 03:49:13 np0005465604 systemd[1]: libpod-f424172a446b6eeed76b1da30c0b0862068d1d5e32a430769c6c81e585218871.scope: Deactivated successfully.
Oct  2 03:49:13 np0005465604 podman[100004]: 2025-10-02 07:49:13.469631883 +0000 UTC m=+0.185474832 container died f424172a446b6eeed76b1da30c0b0862068d1d5e32a430769c6c81e585218871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:13 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d0935b09b1ddcad01d0782d1ef1a0fb0393b595878fecfb200b78cb4558747a3-merged.mount: Deactivated successfully.
Oct  2 03:49:13 np0005465604 podman[100004]: 2025-10-02 07:49:13.526136884 +0000 UTC m=+0.241979813 container remove f424172a446b6eeed76b1da30c0b0862068d1d5e32a430769c6c81e585218871 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 03:49:13 np0005465604 systemd[1]: libpod-conmon-f424172a446b6eeed76b1da30c0b0862068d1d5e32a430769c6c81e585218871.scope: Deactivated successfully.
Oct  2 03:49:13 np0005465604 systemd[1]: Reloading.
Oct  2 03:49:13 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:49:13 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:49:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:13 np0005465604 systemd[1]: Reloading.
Oct  2 03:49:13 np0005465604 python3[100125]: ansible-ansible.legacy.async_status Invoked with jid=j201622710160.99714 mode=cleanup _async_dir=/root/.ansible_async
Oct  2 03:49:13 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:49:14 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:49:14 np0005465604 systemd[1]: Starting Ceph rgw.rgw.compute-0.ulqmgx for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: Deploying daemon rgw.rgw.compute-0.ulqmgx on compute-0
Oct  2 03:49:14 np0005465604 podman[100236]: 2025-10-02 07:49:14.464323611 +0000 UTC m=+0.049479601 container create 72cef90808890e3f422ea3e7e08b1984c9e5bd52befa334bcda9a123aecc2749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-rgw-rgw-compute-0-ulqmgx, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1512dd3e91db1de94ebb2cf381501afe15f404588431240e7f2ea40bbc28c78b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1512dd3e91db1de94ebb2cf381501afe15f404588431240e7f2ea40bbc28c78b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1512dd3e91db1de94ebb2cf381501afe15f404588431240e7f2ea40bbc28c78b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1512dd3e91db1de94ebb2cf381501afe15f404588431240e7f2ea40bbc28c78b/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.ulqmgx supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:14 np0005465604 podman[100236]: 2025-10-02 07:49:14.535974106 +0000 UTC m=+0.121130126 container init 72cef90808890e3f422ea3e7e08b1984c9e5bd52befa334bcda9a123aecc2749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-rgw-rgw-compute-0-ulqmgx, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 03:49:14 np0005465604 podman[100236]: 2025-10-02 07:49:14.542770969 +0000 UTC m=+0.127926959 container start 72cef90808890e3f422ea3e7e08b1984c9e5bd52befa334bcda9a123aecc2749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-rgw-rgw-compute-0-ulqmgx, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 03:49:14 np0005465604 podman[100236]: 2025-10-02 07:49:14.447262547 +0000 UTC m=+0.032418577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:14 np0005465604 bash[100236]: 72cef90808890e3f422ea3e7e08b1984c9e5bd52befa334bcda9a123aecc2749
Oct  2 03:49:14 np0005465604 systemd[1]: Started Ceph rgw.rgw.compute-0.ulqmgx for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  2 03:49:14 np0005465604 radosgw[100259]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  2 03:49:14 np0005465604 radosgw[100259]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct  2 03:49:14 np0005465604 radosgw[100259]: framework: beast
Oct  2 03:49:14 np0005465604 radosgw[100259]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct  2 03:49:14 np0005465604 radosgw[100259]: init_numa not setting numa affinity
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:14 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev ef5ee7ec-e46e-4ccb-9aa0-6895128a6ac8 (Updating rgw.rgw deployment (+1 -> 1))
Oct  2 03:49:14 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event ef5ee7ec-e46e-4ccb-9aa0-6895128a6ac8 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Oct  2 03:49:14 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct  2 03:49:14 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:14 np0005465604 python3[100246]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:14 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev f58ad5b9-1a78-41ee-aa6c-652f18e32020 (Updating mds.cephfs deployment (+1 -> 1))
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjmqka", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjmqka", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjmqka", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:49:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:49:14 np0005465604 ceph-mgr[74774]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.mjmqka on compute-0
Oct  2 03:49:14 np0005465604 ceph-mgr[74774]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.mjmqka on compute-0
Oct  2 03:49:14 np0005465604 podman[100321]: 2025-10-02 07:49:14.724910277 +0000 UTC m=+0.057393730 container create e5e401e004480e33bc71e6650b30216a5b12b52b4af52070de7596374ef00313 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:14 np0005465604 systemd[1]: Started libpod-conmon-e5e401e004480e33bc71e6650b30216a5b12b52b4af52070de7596374ef00313.scope.
Oct  2 03:49:14 np0005465604 podman[100321]: 2025-10-02 07:49:14.700095499 +0000 UTC m=+0.032578952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298eab94bf2881407c32ecf69ce81f346f645c040c01a5d604276744adf6e9f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/298eab94bf2881407c32ecf69ce81f346f645c040c01a5d604276744adf6e9f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:14 np0005465604 podman[100321]: 2025-10-02 07:49:14.851506903 +0000 UTC m=+0.183990376 container init e5e401e004480e33bc71e6650b30216a5b12b52b4af52070de7596374ef00313 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:49:14 np0005465604 podman[100321]: 2025-10-02 07:49:14.859444152 +0000 UTC m=+0.191927585 container start e5e401e004480e33bc71e6650b30216a5b12b52b4af52070de7596374ef00313 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:14 np0005465604 podman[100321]: 2025-10-02 07:49:14.863419647 +0000 UTC m=+0.195903070 container attach e5e401e004480e33bc71e6650b30216a5b12b52b4af52070de7596374ef00313 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjmqka", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjmqka", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  2 03:49:15 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  2 03:49:15 np0005465604 busy_mcclintock[100382]: 
Oct  2 03:49:15 np0005465604 busy_mcclintock[100382]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  2 03:49:15 np0005465604 systemd[1]: libpod-e5e401e004480e33bc71e6650b30216a5b12b52b4af52070de7596374ef00313.scope: Deactivated successfully.
Oct  2 03:49:15 np0005465604 podman[100500]: 2025-10-02 07:49:15.454057064 +0000 UTC m=+0.073420472 container create 830cfdffe2ed56215bf218e9f5d950914c10dbbea548ce77ebb76d6ea04c44b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 03:49:15 np0005465604 podman[100321]: 2025-10-02 07:49:15.459219726 +0000 UTC m=+0.791703149 container died e5e401e004480e33bc71e6650b30216a5b12b52b4af52070de7596374ef00313 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:15 np0005465604 systemd[1]: Started libpod-conmon-830cfdffe2ed56215bf218e9f5d950914c10dbbea548ce77ebb76d6ea04c44b4.scope.
Oct  2 03:49:15 np0005465604 podman[100500]: 2025-10-02 07:49:15.421904947 +0000 UTC m=+0.041268405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay-298eab94bf2881407c32ecf69ce81f346f645c040c01a5d604276744adf6e9f3-merged.mount: Deactivated successfully.
Oct  2 03:49:15 np0005465604 podman[100321]: 2025-10-02 07:49:15.541196494 +0000 UTC m=+0.873679937 container remove e5e401e004480e33bc71e6650b30216a5b12b52b4af52070de7596374ef00313 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:15 np0005465604 systemd[1]: libpod-conmon-e5e401e004480e33bc71e6650b30216a5b12b52b4af52070de7596374ef00313.scope: Deactivated successfully.
Oct  2 03:49:15 np0005465604 podman[100500]: 2025-10-02 07:49:15.575517809 +0000 UTC m=+0.194881217 container init 830cfdffe2ed56215bf218e9f5d950914c10dbbea548ce77ebb76d6ea04c44b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:15 np0005465604 podman[100500]: 2025-10-02 07:49:15.584072097 +0000 UTC m=+0.203435475 container start 830cfdffe2ed56215bf218e9f5d950914c10dbbea548ce77ebb76d6ea04c44b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 03:49:15 np0005465604 podman[100500]: 2025-10-02 07:49:15.588568749 +0000 UTC m=+0.207932227 container attach 830cfdffe2ed56215bf218e9f5d950914c10dbbea548ce77ebb76d6ea04c44b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 03:49:15 np0005465604 unruffled_ardinghelli[100529]: 167 167
Oct  2 03:49:15 np0005465604 systemd[1]: libpod-830cfdffe2ed56215bf218e9f5d950914c10dbbea548ce77ebb76d6ea04c44b4.scope: Deactivated successfully.
Oct  2 03:49:15 np0005465604 podman[100500]: 2025-10-02 07:49:15.590640243 +0000 UTC m=+0.210003601 container died 830cfdffe2ed56215bf218e9f5d950914c10dbbea548ce77ebb76d6ea04c44b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 03:49:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay-39474ff7ab1739cde03d583c7596292af30c40b561906fe77f55ab4089e48cf2-merged.mount: Deactivated successfully.
Oct  2 03:49:15 np0005465604 podman[100500]: 2025-10-02 07:49:15.637895365 +0000 UTC m=+0.257258743 container remove 830cfdffe2ed56215bf218e9f5d950914c10dbbea548ce77ebb76d6ea04c44b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ardinghelli, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct  2 03:49:15 np0005465604 systemd[1]: libpod-conmon-830cfdffe2ed56215bf218e9f5d950914c10dbbea548ce77ebb76d6ea04c44b4.scope: Deactivated successfully.
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct  2 03:49:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  2 03:49:15 np0005465604 systemd[1]: Reloading.
Oct  2 03:49:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v77: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:15 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:49:15 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:49:16 np0005465604 systemd[1]: Reloading.
Oct  2 03:49:16 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:49:16 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: Saving service rgw.rgw spec with placement compute-0
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: Deploying daemon mds.cephfs.compute-0.mjmqka on compute-0
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  2 03:49:16 np0005465604 systemd[1]: Starting Ceph mds.cephfs.compute-0.mjmqka for a52e644f-f702-594c-a648-813e3e0df2b1...
Oct  2 03:49:16 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:16 np0005465604 python3[100651]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:16 np0005465604 podman[100678]: 2025-10-02 07:49:16.537609565 +0000 UTC m=+0.040803848 container create ae238751a9124823cd63e1b0541f0844f43221b723275f2cf8981483b7d86a5f (image=quay.io/ceph/ceph:v18, name=hardcore_hoover, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:16 np0005465604 systemd[1]: Started libpod-conmon-ae238751a9124823cd63e1b0541f0844f43221b723275f2cf8981483b7d86a5f.scope.
Oct  2 03:49:16 np0005465604 podman[100678]: 2025-10-02 07:49:16.522316936 +0000 UTC m=+0.025511249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:16 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27bb573754ffce2dc6e69669dd8eddcf0000369016e8930a3ed11af09a4dd5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27bb573754ffce2dc6e69669dd8eddcf0000369016e8930a3ed11af09a4dd5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct  2 03:49:16 np0005465604 podman[100678]: 2025-10-02 07:49:16.656102936 +0000 UTC m=+0.159297279 container init ae238751a9124823cd63e1b0541f0844f43221b723275f2cf8981483b7d86a5f (image=quay.io/ceph/ceph:v18, name=hardcore_hoover, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct  2 03:49:16 np0005465604 podman[100678]: 2025-10-02 07:49:16.66759252 +0000 UTC m=+0.170786813 container start ae238751a9124823cd63e1b0541f0844f43221b723275f2cf8981483b7d86a5f (image=quay.io/ceph/ceph:v18, name=hardcore_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 03:49:16 np0005465604 podman[100678]: 2025-10-02 07:49:16.680771203 +0000 UTC m=+0.183965516 container attach ae238751a9124823cd63e1b0541f0844f43221b723275f2cf8981483b7d86a5f (image=quay.io/ceph/ceph:v18, name=hardcore_hoover, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:16 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:16 np0005465604 podman[100717]: 2025-10-02 07:49:16.722200498 +0000 UTC m=+0.084561069 container create 3db8945b129283b6926f5bf0420e29e92939535236ffb2a1d02e5887a9891021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mds-cephfs-compute-0-mjmqka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:49:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a50b33e1bfd5abc264945c5010e178b6b3ddbe69ed72d8589f4f82a791ca0d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a50b33e1bfd5abc264945c5010e178b6b3ddbe69ed72d8589f4f82a791ca0d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a50b33e1bfd5abc264945c5010e178b6b3ddbe69ed72d8589f4f82a791ca0d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a50b33e1bfd5abc264945c5010e178b6b3ddbe69ed72d8589f4f82a791ca0d9/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.mjmqka supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:16 np0005465604 podman[100717]: 2025-10-02 07:49:16.787504474 +0000 UTC m=+0.149865065 container init 3db8945b129283b6926f5bf0420e29e92939535236ffb2a1d02e5887a9891021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mds-cephfs-compute-0-mjmqka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 03:49:16 np0005465604 podman[100717]: 2025-10-02 07:49:16.699680983 +0000 UTC m=+0.062041564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:16 np0005465604 podman[100717]: 2025-10-02 07:49:16.796165711 +0000 UTC m=+0.158526272 container start 3db8945b129283b6926f5bf0420e29e92939535236ffb2a1d02e5887a9891021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mds-cephfs-compute-0-mjmqka, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 03:49:16 np0005465604 bash[100717]: 3db8945b129283b6926f5bf0420e29e92939535236ffb2a1d02e5887a9891021
Oct  2 03:49:16 np0005465604 systemd[1]: Started Ceph mds.cephfs.compute-0.mjmqka for a52e644f-f702-594c-a648-813e3e0df2b1.
Oct  2 03:49:16 np0005465604 ceph-mds[100739]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 03:49:16 np0005465604 ceph-mds[100739]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct  2 03:49:16 np0005465604 ceph-mds[100739]: main not setting numa affinity
Oct  2 03:49:16 np0005465604 ceph-mds[100739]: pidfile_write: ignore empty --pid-file
Oct  2 03:49:16 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mds-cephfs-compute-0-mjmqka[100735]: starting mds.cephfs.compute-0.mjmqka at 
Oct  2 03:49:16 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka Updating MDS map to version 2 from mon.0
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:16 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev f58ad5b9-1a78-41ee-aa6c-652f18e32020 (Updating mds.cephfs deployment (+1 -> 1))
Oct  2 03:49:16 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event f58ad5b9-1a78-41ee-aa6c-652f18e32020 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  2 03:49:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:17 np0005465604 ansible-async_wrapper.py[99717]: Done in kid B.
Oct  2 03:49:17 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  2 03:49:17 np0005465604 hardcore_hoover[100709]: 
Oct  2 03:49:17 np0005465604 hardcore_hoover[100709]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct  2 03:49:17 np0005465604 systemd[1]: libpod-ae238751a9124823cd63e1b0541f0844f43221b723275f2cf8981483b7d86a5f.scope: Deactivated successfully.
Oct  2 03:49:17 np0005465604 conmon[100709]: conmon ae238751a9124823cd63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae238751a9124823cd63e1b0541f0844f43221b723275f2cf8981483b7d86a5f.scope/container/memory.events
Oct  2 03:49:17 np0005465604 podman[100678]: 2025-10-02 07:49:17.202959608 +0000 UTC m=+0.706153921 container died ae238751a9124823cd63e1b0541f0844f43221b723275f2cf8981483b7d86a5f (image=quay.io/ceph/ceph:v18, name=hardcore_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 03:49:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c27bb573754ffce2dc6e69669dd8eddcf0000369016e8930a3ed11af09a4dd5b-merged.mount: Deactivated successfully.
Oct  2 03:49:17 np0005465604 podman[100678]: 2025-10-02 07:49:17.243674607 +0000 UTC m=+0.746868900 container remove ae238751a9124823cd63e1b0541f0844f43221b723275f2cf8981483b7d86a5f (image=quay.io/ceph/ceph:v18, name=hardcore_hoover, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 03:49:17 np0005465604 systemd[1]: libpod-conmon-ae238751a9124823cd63e1b0541f0844f43221b723275f2cf8981483b7d86a5f.scope: Deactivated successfully.
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:17 np0005465604 podman[101014]: 2025-10-02 07:49:17.633733209 +0000 UTC m=+0.051825033 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e3 new map
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T07:49:02.875980+0000#012modified#0112025-10-02T07:49:02.876016+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.mjmqka{-1:14265} state up:standby seq 1 addr [v2:192.168.122.100:6814/943007464,v1:192.168.122.100:6815/943007464] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka Updating MDS map to version 3 from mon.0
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka Monitors have assigned me to become a standby.
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/943007464,v1:192.168.122.100:6815/943007464] up:boot
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/943007464,v1:192.168.122.100:6815/943007464] as mds.0
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.mjmqka assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.mjmqka"} v 0) v1
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.mjmqka"}]: dispatch
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e3 all = 0
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e4 new map
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e4 print_map
e4
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	4
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-10-02T07:49:02.875980+0000
modified	2025-10-02T07:49:17.687993+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=14265}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
[mds.cephfs.compute-0.mjmqka{0:14265} state up:creating seq 1 addr [v2:192.168.122.100:6814/943007464,v1:192.168.122.100:6815/943007464] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka Updating MDS map to version 4 from mon.0
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.4 handle_mds_map i am now mds.0.4
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x1
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x100
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x600
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x601
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x602
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x603
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x604
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x605
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x606
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x607
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x608
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.cache creating system inode with ino:0x609
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.mjmqka=up:creating}
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 03:49:17 np0005465604 ceph-mds[100739]: mds.0.4 creating_done
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.mjmqka is now active in filesystem cephfs as rank 0
Oct  2 03:49:17 np0005465604 podman[101014]: 2025-10-02 07:49:17.74369355 +0000 UTC m=+0.161785324 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v80: 9 pgs: 1 unknown, 8 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s
Oct  2 03:49:17 np0005465604 ceph-mgr[74774]: [progress INFO root] Writing back 5 completed events
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  2 03:49:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:17 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:18 np0005465604 python3[101140]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:18 np0005465604 podman[101175]: 2025-10-02 07:49:18.27100374 +0000 UTC m=+0.041404605 container create 435386f5639b2ecb7aa5c55e56c102f7c8b5b72f78d1ccca5b8d14c6f873bb52 (image=quay.io/ceph/ceph:v18, name=great_hoover, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:18 np0005465604 systemd[1]: Started libpod-conmon-435386f5639b2ecb7aa5c55e56c102f7c8b5b72f78d1ccca5b8d14c6f873bb52.scope.
Oct  2 03:49:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364c8b33741235703e9b4edb2aba7649afc0d4b1d5d7aae996c81a3143e0004/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6364c8b33741235703e9b4edb2aba7649afc0d4b1d5d7aae996c81a3143e0004/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:18 np0005465604 podman[101175]: 2025-10-02 07:49:18.337109963 +0000 UTC m=+0.107510808 container init 435386f5639b2ecb7aa5c55e56c102f7c8b5b72f78d1ccca5b8d14c6f873bb52 (image=quay.io/ceph/ceph:v18, name=great_hoover, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:18 np0005465604 podman[101175]: 2025-10-02 07:49:18.342765817 +0000 UTC m=+0.113166642 container start 435386f5639b2ecb7aa5c55e56c102f7c8b5b72f78d1ccca5b8d14c6f873bb52 (image=quay.io/ceph/ceph:v18, name=great_hoover, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 03:49:18 np0005465604 podman[101175]: 2025-10-02 07:49:18.345315105 +0000 UTC m=+0.115715960 container attach 435386f5639b2ecb7aa5c55e56c102f7c8b5b72f78d1ccca5b8d14c6f873bb52 (image=quay.io/ceph/ceph:v18, name=great_hoover, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 03:49:18 np0005465604 podman[101175]: 2025-10-02 07:49:18.248534927 +0000 UTC m=+0.018935772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:18 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a109a457-3579-4eca-ac99-e6796867b552 does not exist
Oct  2 03:49:18 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ac8186df-48f7-463d-b733-9c1e8f68e027 does not exist
Oct  2 03:49:18 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d4673ee7-d09a-4ef7-983d-b8a04885c110 does not exist
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: daemon mds.cephfs.compute-0.mjmqka assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: daemon mds.cephfs.compute-0.mjmqka is now active in filesystem cephfs as rank 0
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e5 new map
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).mds e5 print_map
e5
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-10-02T07:49:02.875980+0000
modified	2025-10-02T07:49:18.699606+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=14265}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
[mds.cephfs.compute-0.mjmqka{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/943007464,v1:192.168.122.100:6815/943007464] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 03:49:18 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka Updating MDS map to version 5 from mon.0
Oct  2 03:49:18 np0005465604 ceph-mds[100739]: mds.0.4 handle_mds_map i am now mds.0.4
Oct  2 03:49:18 np0005465604 ceph-mds[100739]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct  2 03:49:18 np0005465604 ceph-mds[100739]: mds.0.4 recovery_done -- successful recovery!
Oct  2 03:49:18 np0005465604 ceph-mds[100739]: mds.0.4 active_start
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/943007464,v1:192.168.122.100:6815/943007464] up:active
Oct  2 03:49:18 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.mjmqka=up:active}
Oct  2 03:49:18 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:18 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  2 03:49:18 np0005465604 great_hoover[101208]: 
Oct  2 03:49:18 np0005465604 great_hoover[101208]: [{"container_id": "031c46800cbf", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.49%", "created": "2025-10-02T07:47:53.890304Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-02T07:47:53.941474Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T07:49:18.412615Z", "memory_usage": 11660165, "ports": [], "service_name": "crash", "started": "2025-10-02T07:47:53.756842Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a52e644f-f702-594c-a648-813e3e0df2b1@crash.compute-0", "version": "18.2.7"}, {"container_id": "3db8945b1292", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "9.97%", "created": "2025-10-02T07:49:16.815530Z", "daemon_id": "cephfs.compute-0.mjmqka", "daemon_name": "mds.cephfs.compute-0.mjmqka", "daemon_type": "mds", "events": ["2025-10-02T07:49:16.865573Z daemon:mds.cephfs.compute-0.mjmqka [INFO] \"Deployed mds.cephfs.compute-0.mjmqka on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T07:49:18.412925Z", "memory_usage": 15120465, "ports": [], "service_name": "mds.cephfs", "started": "2025-10-02T07:49:16.706562Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a52e644f-f702-594c-a648-813e3e0df2b1@mds.cephfs.compute-0.mjmqka", "version": "18.2.7"}, {"container_id": "41446870f28d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "30.15%", "created": "2025-10-02T07:46:43.766008Z", "daemon_id": "compute-0.qlmhsi", "daemon_name": "mgr.compute-0.qlmhsi", "daemon_type": "mgr", "events": ["2025-10-02T07:47:59.097651Z daemon:mgr.compute-0.qlmhsi [INFO] \"Reconfigured mgr.compute-0.qlmhsi on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T07:49:18.412553Z", "memory_usage": 548614963, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-02T07:46:43.624912Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a52e644f-f702-594c-a648-813e3e0df2b1@mgr.compute-0.qlmhsi", "version": "18.2.7"}, {"container_id": "6c3e23d2ca6a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.24%", "created": "2025-10-02T07:46:38.546056Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-02T07:47:58.368728Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T07:49:18.412476Z", "memory_request": 2147483648, "memory_usage": 39395000, "ports": [], "service_name": "mon", "started": "2025-10-02T07:46:41.290001Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a52e644f-f702-594c-a648-813e3e0df2b1@mon.compute-0", "version": "18.2.7"}, {"container_id": "205e9ff0271a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.71%", "created": "2025-10-02T07:48:20.680036Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-02T07:48:20.725828Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T07:49:18.412674Z", "memory_request": 4294967296, "memory_usage": 56706990, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T07:48:20.599239Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a52e644f-f702-594c-a648-813e3e0df2b1@osd.0", "version": "18.2.7"}, {"container_id": "05e9e9626190", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.66%", "created": "2025-10-02T07:48:24.985997Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-10-02T07:48:25.051999Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T07:49:18.412732Z", "memory_request": 4294967296, "memory_usage": 58804142, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T07:48:24.884163Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a52e644f-f702-594c-a648-813e3e0df2b1@osd.1", "version": "18.2.7"}, {"container_id": "2a85f2dba8e5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.84%", "created": "2025-10-02T07:48:29.932952Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-02T07:48:30.001448Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-02T07:49:18.412810Z", "memory_request": 4294967296, "memory_usage": 56822333, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-02T07:48:29.803945Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a52e644f-f702-594c-a648-813e3e0df2b1@osd.2", "version": "18.2.7"}, {"container_id": "72cef9080889", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.45%", "created": "2025-10-02T07:49:14.562549Z", "daemon_id": "rgw.compute-0.ulqmgx", "daemon_name": "rgw.rgw.compute-0.ulqmgx", "daemon_type": "rgw", "events": ["2025-10-02T
Oct  2 03:49:18 np0005465604 systemd[1]: libpod-435386f5639b2ecb7aa5c55e56c102f7c8b5b72f78d1ccca5b8d14c6f873bb52.scope: Deactivated successfully.
Oct  2 03:49:18 np0005465604 podman[101175]: 2025-10-02 07:49:18.890882324 +0000 UTC m=+0.661283149 container died 435386f5639b2ecb7aa5c55e56c102f7c8b5b72f78d1ccca5b8d14c6f873bb52 (image=quay.io/ceph/ceph:v18, name=great_hoover, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 03:49:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6364c8b33741235703e9b4edb2aba7649afc0d4b1d5d7aae996c81a3143e0004-merged.mount: Deactivated successfully.
Oct  2 03:49:18 np0005465604 podman[101175]: 2025-10-02 07:49:18.934861856 +0000 UTC m=+0.705262681 container remove 435386f5639b2ecb7aa5c55e56c102f7c8b5b72f78d1ccca5b8d14c6f873bb52 (image=quay.io/ceph/ceph:v18, name=great_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:18 np0005465604 systemd[1]: libpod-conmon-435386f5639b2ecb7aa5c55e56c102f7c8b5b72f78d1ccca5b8d14c6f873bb52.scope: Deactivated successfully.
Oct  2 03:49:18 np0005465604 podman[101399]: 2025-10-02 07:49:18.997092075 +0000 UTC m=+0.031904197 container create 259281881c8800aa252e1c67de19f339432427fe2b5a6d782745bbf36c44fc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct  2 03:49:19 np0005465604 systemd[1]: Started libpod-conmon-259281881c8800aa252e1c67de19f339432427fe2b5a6d782745bbf36c44fc48.scope.
Oct  2 03:49:19 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:19 np0005465604 podman[101399]: 2025-10-02 07:49:19.051827407 +0000 UTC m=+0.086639549 container init 259281881c8800aa252e1c67de19f339432427fe2b5a6d782745bbf36c44fc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 03:49:19 np0005465604 podman[101399]: 2025-10-02 07:49:19.057251594 +0000 UTC m=+0.092063716 container start 259281881c8800aa252e1c67de19f339432427fe2b5a6d782745bbf36c44fc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 03:49:19 np0005465604 podman[101399]: 2025-10-02 07:49:19.060143744 +0000 UTC m=+0.094955916 container attach 259281881c8800aa252e1c67de19f339432427fe2b5a6d782745bbf36c44fc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 03:49:19 np0005465604 festive_kilby[101415]: 167 167
Oct  2 03:49:19 np0005465604 podman[101399]: 2025-10-02 07:49:19.07633197 +0000 UTC m=+0.111144112 container died 259281881c8800aa252e1c67de19f339432427fe2b5a6d782745bbf36c44fc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 03:49:19 np0005465604 systemd[1]: libpod-259281881c8800aa252e1c67de19f339432427fe2b5a6d782745bbf36c44fc48.scope: Deactivated successfully.
Oct  2 03:49:19 np0005465604 podman[101399]: 2025-10-02 07:49:18.983833249 +0000 UTC m=+0.018645381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:19 np0005465604 systemd[1]: var-lib-containers-storage-overlay-719c787e106384786a9526bdf8278f8f2f943b612619c617b88fd1360341a279-merged.mount: Deactivated successfully.
Oct  2 03:49:19 np0005465604 podman[101399]: 2025-10-02 07:49:19.11125375 +0000 UTC m=+0.146065872 container remove 259281881c8800aa252e1c67de19f339432427fe2b5a6d782745bbf36c44fc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 03:49:19 np0005465604 systemd[1]: libpod-conmon-259281881c8800aa252e1c67de19f339432427fe2b5a6d782745bbf36c44fc48.scope: Deactivated successfully.
Oct  2 03:49:19 np0005465604 rsyslogd[1004]: message too long (8588) with configured size 8096, begin of message is: [{"container_id": "031c46800cbf", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  2 03:49:19 np0005465604 podman[101440]: 2025-10-02 07:49:19.258606057 +0000 UTC m=+0.051287485 container create 7a57c4beadea42dcebca72e7a2298151041b60885eddf4b98871f68e989f5f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:19 np0005465604 systemd[1]: Started libpod-conmon-7a57c4beadea42dcebca72e7a2298151041b60885eddf4b98871f68e989f5f9d.scope.
Oct  2 03:49:19 np0005465604 podman[101440]: 2025-10-02 07:49:19.231904848 +0000 UTC m=+0.024586296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:19 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287091a5fe47d5e78a2ab8b57a5bac5cd0aa58999012f6821764a6289379af8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287091a5fe47d5e78a2ab8b57a5bac5cd0aa58999012f6821764a6289379af8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287091a5fe47d5e78a2ab8b57a5bac5cd0aa58999012f6821764a6289379af8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287091a5fe47d5e78a2ab8b57a5bac5cd0aa58999012f6821764a6289379af8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/287091a5fe47d5e78a2ab8b57a5bac5cd0aa58999012f6821764a6289379af8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:19 np0005465604 podman[101440]: 2025-10-02 07:49:19.370175643 +0000 UTC m=+0.162857081 container init 7a57c4beadea42dcebca72e7a2298151041b60885eddf4b98871f68e989f5f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 03:49:19 np0005465604 podman[101440]: 2025-10-02 07:49:19.384987512 +0000 UTC m=+0.177668930 container start 7a57c4beadea42dcebca72e7a2298151041b60885eddf4b98871f68e989f5f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kilby, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:49:19 np0005465604 podman[101440]: 2025-10-02 07:49:19.388894077 +0000 UTC m=+0.181575585 container attach 7a57c4beadea42dcebca72e7a2298151041b60885eddf4b98871f68e989f5f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kilby, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:19 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  2 03:49:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct  2 03:49:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct  2 03:49:19 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct  2 03:49:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct  2 03:49:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 03:49:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v83: 10 pgs: 2 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Oct  2 03:49:19 np0005465604 python3[101486]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:19 np0005465604 podman[101487]: 2025-10-02 07:49:19.986894968 +0000 UTC m=+0.057816080 container create ebd00cb19cb4cabd55cf358313fa0a8e44b1d2987b91e4da6b861e192c8a1cc6 (image=quay.io/ceph/ceph:v18, name=fervent_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct  2 03:49:20 np0005465604 systemd[1]: Started libpod-conmon-ebd00cb19cb4cabd55cf358313fa0a8e44b1d2987b91e4da6b861e192c8a1cc6.scope.
Oct  2 03:49:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:20 np0005465604 podman[101487]: 2025-10-02 07:49:19.96661321 +0000 UTC m=+0.037534352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b172d1ba9141501c05ed72311f4f5240d04b3d777db70597f0299b7d7ecc82bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b172d1ba9141501c05ed72311f4f5240d04b3d777db70597f0299b7d7ecc82bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:20 np0005465604 podman[101487]: 2025-10-02 07:49:20.081551173 +0000 UTC m=+0.152472295 container init ebd00cb19cb4cabd55cf358313fa0a8e44b1d2987b91e4da6b861e192c8a1cc6 (image=quay.io/ceph/ceph:v18, name=fervent_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 03:49:20 np0005465604 podman[101487]: 2025-10-02 07:49:20.08757874 +0000 UTC m=+0.158499882 container start ebd00cb19cb4cabd55cf358313fa0a8e44b1d2987b91e4da6b861e192c8a1cc6 (image=quay.io/ceph/ceph:v18, name=fervent_grothendieck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:20 np0005465604 podman[101487]: 2025-10-02 07:49:20.091496414 +0000 UTC m=+0.162417516 container attach ebd00cb19cb4cabd55cf358313fa0a8e44b1d2987b91e4da6b861e192c8a1cc6 (image=quay.io/ceph/ceph:v18, name=fervent_grothendieck, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 03:49:20 np0005465604 agitated_kilby[101456]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:49:20 np0005465604 agitated_kilby[101456]: --> relative data size: 1.0
Oct  2 03:49:20 np0005465604 agitated_kilby[101456]: --> All data devices are unavailable
Oct  2 03:49:20 np0005465604 systemd[1]: libpod-7a57c4beadea42dcebca72e7a2298151041b60885eddf4b98871f68e989f5f9d.scope: Deactivated successfully.
Oct  2 03:49:20 np0005465604 systemd[1]: libpod-7a57c4beadea42dcebca72e7a2298151041b60885eddf4b98871f68e989f5f9d.scope: Consumed 1.077s CPU time.
Oct  2 03:49:20 np0005465604 conmon[101456]: conmon 7a57c4beadea42dcebca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a57c4beadea42dcebca72e7a2298151041b60885eddf4b98871f68e989f5f9d.scope/container/memory.events
Oct  2 03:49:20 np0005465604 podman[101440]: 2025-10-02 07:49:20.53870418 +0000 UTC m=+1.331385588 container died 7a57c4beadea42dcebca72e7a2298151041b60885eddf4b98871f68e989f5f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kilby, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 03:49:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-287091a5fe47d5e78a2ab8b57a5bac5cd0aa58999012f6821764a6289379af8f-merged.mount: Deactivated successfully.
Oct  2 03:49:20 np0005465604 podman[101440]: 2025-10-02 07:49:20.595668909 +0000 UTC m=+1.388350307 container remove 7a57c4beadea42dcebca72e7a2298151041b60885eddf4b98871f68e989f5f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_kilby, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 03:49:20 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:20 np0005465604 systemd[1]: libpod-conmon-7a57c4beadea42dcebca72e7a2298151041b60885eddf4b98871f68e989f5f9d.scope: Deactivated successfully.
Oct  2 03:49:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  2 03:49:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2455410562' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  2 03:49:20 np0005465604 fervent_grothendieck[101502]: 
Oct  2 03:49:20 np0005465604 fervent_grothendieck[101502]: {"fsid":"a52e644f-f702-594c-a648-813e3e0df2b1","health":{"status":"HEALTH_WARN","checks":{"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":159,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":36,"num_osds":3,"num_up_osds":3,"osd_up_since":1759391316,"num_in_osds":3,"osd_in_since":1759391291,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":8},{"state_name":"unknown","count":1}],"num_pgs":9,"num_pools":9,"num_objects":6,"data_bytes":460666,"bytes_used":83845120,"bytes_avail":64328081408,"bytes_total":64411926528,"unknown_pgs_ratio":0.1111111119389534,"read_bytes_sec":682,"write_bytes_sec":682,"read_op_per_sec":0,"write_op_per_sec":0},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.mjmqka","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-02T07:48:29.787187+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct  2 03:49:20 np0005465604 systemd[1]: libpod-ebd00cb19cb4cabd55cf358313fa0a8e44b1d2987b91e4da6b861e192c8a1cc6.scope: Deactivated successfully.
Oct  2 03:49:20 np0005465604 podman[101487]: 2025-10-02 07:49:20.692098075 +0000 UTC m=+0.763019177 container died ebd00cb19cb4cabd55cf358313fa0a8e44b1d2987b91e4da6b861e192c8a1cc6 (image=quay.io/ceph/ceph:v18, name=fervent_grothendieck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b172d1ba9141501c05ed72311f4f5240d04b3d777db70597f0299b7d7ecc82bd-merged.mount: Deactivated successfully.
Oct  2 03:49:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct  2 03:49:20 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 03:49:20 np0005465604 podman[101487]: 2025-10-02 07:49:20.726975864 +0000 UTC m=+0.797896966 container remove ebd00cb19cb4cabd55cf358313fa0a8e44b1d2987b91e4da6b861e192c8a1cc6 (image=quay.io/ceph/ceph:v18, name=fervent_grothendieck, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:49:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  2 03:49:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct  2 03:49:20 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct  2 03:49:20 np0005465604 systemd[1]: libpod-conmon-ebd00cb19cb4cabd55cf358313fa0a8e44b1d2987b91e4da6b861e192c8a1cc6.scope: Deactivated successfully.
Oct  2 03:49:20 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [2] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:21 np0005465604 podman[101727]: 2025-10-02 07:49:21.252876226 +0000 UTC m=+0.050797628 container create 08032a8e28e928827fc18bb4e1e78de9f85e906b182e4e07b986f8f497f793ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 03:49:21 np0005465604 systemd[1]: Started libpod-conmon-08032a8e28e928827fc18bb4e1e78de9f85e906b182e4e07b986f8f497f793ed.scope.
Oct  2 03:49:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:21 np0005465604 podman[101727]: 2025-10-02 07:49:21.234866787 +0000 UTC m=+0.032788189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:21 np0005465604 podman[101727]: 2025-10-02 07:49:21.33299578 +0000 UTC m=+0.130917182 container init 08032a8e28e928827fc18bb4e1e78de9f85e906b182e4e07b986f8f497f793ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:21 np0005465604 podman[101727]: 2025-10-02 07:49:21.339704021 +0000 UTC m=+0.137625393 container start 08032a8e28e928827fc18bb4e1e78de9f85e906b182e4e07b986f8f497f793ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:21 np0005465604 podman[101727]: 2025-10-02 07:49:21.342642303 +0000 UTC m=+0.140563695 container attach 08032a8e28e928827fc18bb4e1e78de9f85e906b182e4e07b986f8f497f793ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  2 03:49:21 np0005465604 angry_moore[101743]: 167 167
Oct  2 03:49:21 np0005465604 systemd[1]: libpod-08032a8e28e928827fc18bb4e1e78de9f85e906b182e4e07b986f8f497f793ed.scope: Deactivated successfully.
Oct  2 03:49:21 np0005465604 podman[101727]: 2025-10-02 07:49:21.344832648 +0000 UTC m=+0.142754020 container died 08032a8e28e928827fc18bb4e1e78de9f85e906b182e4e07b986f8f497f793ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e9bcde3a487d74f4e4047088a30ddfaca722625c4468b1fc9065e4767fc39e37-merged.mount: Deactivated successfully.
Oct  2 03:49:21 np0005465604 podman[101727]: 2025-10-02 07:49:21.384837054 +0000 UTC m=+0.182758426 container remove 08032a8e28e928827fc18bb4e1e78de9f85e906b182e4e07b986f8f497f793ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_moore, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 03:49:21 np0005465604 systemd[1]: libpod-conmon-08032a8e28e928827fc18bb4e1e78de9f85e906b182e4e07b986f8f497f793ed.scope: Deactivated successfully.
Oct  2 03:49:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:21 np0005465604 podman[101792]: 2025-10-02 07:49:21.569824314 +0000 UTC m=+0.038784615 container create ae1cd3b283c28de10b20f3a208479c40842faae2b27d3f50ec8f0343ebb33db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:21 np0005465604 systemd[1]: Started libpod-conmon-ae1cd3b283c28de10b20f3a208479c40842faae2b27d3f50ec8f0343ebb33db2.scope.
Oct  2 03:49:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e4dd4e0d0c9b583bd0a350579f22a7dbc01582c8d4a16b1f5a03f7056ba81d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e4dd4e0d0c9b583bd0a350579f22a7dbc01582c8d4a16b1f5a03f7056ba81d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e4dd4e0d0c9b583bd0a350579f22a7dbc01582c8d4a16b1f5a03f7056ba81d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e4dd4e0d0c9b583bd0a350579f22a7dbc01582c8d4a16b1f5a03f7056ba81d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:21 np0005465604 podman[101792]: 2025-10-02 07:49:21.633781722 +0000 UTC m=+0.102742023 container init ae1cd3b283c28de10b20f3a208479c40842faae2b27d3f50ec8f0343ebb33db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 03:49:21 np0005465604 podman[101792]: 2025-10-02 07:49:21.640828195 +0000 UTC m=+0.109788536 container start ae1cd3b283c28de10b20f3a208479c40842faae2b27d3f50ec8f0343ebb33db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 03:49:21 np0005465604 podman[101792]: 2025-10-02 07:49:21.64388885 +0000 UTC m=+0.112849151 container attach ae1cd3b283c28de10b20f3a208479c40842faae2b27d3f50ec8f0343ebb33db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 03:49:21 np0005465604 podman[101792]: 2025-10-02 07:49:21.55517567 +0000 UTC m=+0.024135971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:21 np0005465604 python3[101800]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct  2 03:49:21 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/1622397351' entity='client.rgw.rgw.compute-0.ulqmgx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  2 03:49:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct  2 03:49:21 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct  2 03:49:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct  2 03:49:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3354485599' entity='client.rgw.rgw.compute-0.ulqmgx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 03:49:21 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:21 np0005465604 podman[101815]: 2025-10-02 07:49:21.782580569 +0000 UTC m=+0.048122356 container create ce4232ca689429446428368aa9ae201b05d1e1410e481c1434a8195f9de49f20 (image=quay.io/ceph/ceph:v18, name=keen_wozniak, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 03:49:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v86: 11 pgs: 3 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Oct  2 03:49:21 np0005465604 systemd[1]: Started libpod-conmon-ce4232ca689429446428368aa9ae201b05d1e1410e481c1434a8195f9de49f20.scope.
Oct  2 03:49:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:21 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 03:49:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2440a0b1131ede3b96e2f497ed620de14af5910019a4385fc045626fd77e9661/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:21 np0005465604 podman[101815]: 2025-10-02 07:49:21.760086106 +0000 UTC m=+0.025627893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2440a0b1131ede3b96e2f497ed620de14af5910019a4385fc045626fd77e9661/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:21 np0005465604 podman[101815]: 2025-10-02 07:49:21.869913252 +0000 UTC m=+0.135455019 container init ce4232ca689429446428368aa9ae201b05d1e1410e481c1434a8195f9de49f20 (image=quay.io/ceph/ceph:v18, name=keen_wozniak, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 03:49:21 np0005465604 podman[101815]: 2025-10-02 07:49:21.878538148 +0000 UTC m=+0.144079895 container start ce4232ca689429446428368aa9ae201b05d1e1410e481c1434a8195f9de49f20 (image=quay.io/ceph/ceph:v18, name=keen_wozniak, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:49:21 np0005465604 podman[101815]: 2025-10-02 07:49:21.882414901 +0000 UTC m=+0.147956678 container attach ce4232ca689429446428368aa9ae201b05d1e1410e481c1434a8195f9de49f20 (image=quay.io/ceph/ceph:v18, name=keen_wozniak, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct  2 03:49:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  2 03:49:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2478990510' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  2 03:49:22 np0005465604 keen_wozniak[101830]: 
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]: {
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:    "0": [
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:        {
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "devices": [
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "/dev/loop3"
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            ],
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_name": "ceph_lv0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_size": "21470642176",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "name": "ceph_lv0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "tags": {
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.crush_device_class": "",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.encrypted": "0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.osd_id": "0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.type": "block",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.vdo": "0"
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            },
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "type": "block",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "vg_name": "ceph_vg0"
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:        }
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:    ],
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:    "1": [
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:        {
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "devices": [
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "/dev/loop4"
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            ],
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_name": "ceph_lv1",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_size": "21470642176",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "name": "ceph_lv1",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "tags": {
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.crush_device_class": "",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.encrypted": "0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.osd_id": "1",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.type": "block",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.vdo": "0"
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            },
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "type": "block",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "vg_name": "ceph_vg1"
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:        }
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:    ],
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:    "2": [
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:        {
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "devices": [
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "/dev/loop5"
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            ],
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_name": "ceph_lv2",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_size": "21470642176",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "name": "ceph_lv2",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "tags": {
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.crush_device_class": "",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.encrypted": "0",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.osd_id": "2",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.type": "block",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:                "ceph.vdo": "0"
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            },
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "type": "block",
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:            "vg_name": "ceph_vg2"
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:        }
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]:    ]
Oct  2 03:49:22 np0005465604 blissful_swanson[101810]: }
Oct  2 03:49:22 np0005465604 systemd[1]: libpod-ce4232ca689429446428368aa9ae201b05d1e1410e481c1434a8195f9de49f20.scope: Deactivated successfully.
Oct  2 03:49:22 np0005465604 keen_wozniak[101830]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.ulqmgx","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct  2 03:49:22 np0005465604 systemd[1]: libpod-ae1cd3b283c28de10b20f3a208479c40842faae2b27d3f50ec8f0343ebb33db2.scope: Deactivated successfully.
Oct  2 03:49:22 np0005465604 podman[101792]: 2025-10-02 07:49:22.402338018 +0000 UTC m=+0.871298349 container died ae1cd3b283c28de10b20f3a208479c40842faae2b27d3f50ec8f0343ebb33db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  2 03:49:22 np0005465604 podman[101860]: 2025-10-02 07:49:22.435002281 +0000 UTC m=+0.030534251 container died ce4232ca689429446428368aa9ae201b05d1e1410e481c1434a8195f9de49f20 (image=quay.io/ceph/ceph:v18, name=keen_wozniak, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-88e4dd4e0d0c9b583bd0a350579f22a7dbc01582c8d4a16b1f5a03f7056ba81d-merged.mount: Deactivated successfully.
Oct  2 03:49:22 np0005465604 podman[101792]: 2025-10-02 07:49:22.465920484 +0000 UTC m=+0.934880795 container remove ae1cd3b283c28de10b20f3a208479c40842faae2b27d3f50ec8f0343ebb33db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_swanson, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2440a0b1131ede3b96e2f497ed620de14af5910019a4385fc045626fd77e9661-merged.mount: Deactivated successfully.
Oct  2 03:49:22 np0005465604 systemd[1]: libpod-conmon-ae1cd3b283c28de10b20f3a208479c40842faae2b27d3f50ec8f0343ebb33db2.scope: Deactivated successfully.
Oct  2 03:49:22 np0005465604 podman[101860]: 2025-10-02 07:49:22.494345231 +0000 UTC m=+0.089877191 container remove ce4232ca689429446428368aa9ae201b05d1e1410e481c1434a8195f9de49f20 (image=quay.io/ceph/ceph:v18, name=keen_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 03:49:22 np0005465604 systemd[1]: libpod-conmon-ce4232ca689429446428368aa9ae201b05d1e1410e481c1434a8195f9de49f20.scope: Deactivated successfully.
Oct  2 03:49:22 np0005465604 ceph-mds[100739]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct  2 03:49:22 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mds-cephfs-compute-0-mjmqka[100735]: 2025-10-02T07:49:22.692+0000 7f21fd554640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct  2 03:49:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct  2 03:49:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3354485599' entity='client.rgw.rgw.compute-0.ulqmgx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  2 03:49:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct  2 03:49:22 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct  2 03:49:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct  2 03:49:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3354485599' entity='client.rgw.rgw.compute-0.ulqmgx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 03:49:22 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3354485599' entity='client.rgw.rgw.compute-0.ulqmgx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 03:49:22 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [1] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:23 np0005465604 podman[102027]: 2025-10-02 07:49:23.078809657 +0000 UTC m=+0.053019064 container create 5e07355cfa0194c776fd81f9defcb81c3c187254a3af90124b14d9c70babfacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:23 np0005465604 systemd[1]: Started libpod-conmon-5e07355cfa0194c776fd81f9defcb81c3c187254a3af90124b14d9c70babfacc.scope.
Oct  2 03:49:23 np0005465604 podman[102027]: 2025-10-02 07:49:23.05271678 +0000 UTC m=+0.026926187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:23 np0005465604 podman[102027]: 2025-10-02 07:49:23.168822832 +0000 UTC m=+0.143032279 container init 5e07355cfa0194c776fd81f9defcb81c3c187254a3af90124b14d9c70babfacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 03:49:23 np0005465604 podman[102027]: 2025-10-02 07:49:23.175839273 +0000 UTC m=+0.150048670 container start 5e07355cfa0194c776fd81f9defcb81c3c187254a3af90124b14d9c70babfacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:23 np0005465604 podman[102027]: 2025-10-02 07:49:23.179464988 +0000 UTC m=+0.153674425 container attach 5e07355cfa0194c776fd81f9defcb81c3c187254a3af90124b14d9c70babfacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:23 np0005465604 heuristic_lamport[102043]: 167 167
Oct  2 03:49:23 np0005465604 podman[102027]: 2025-10-02 07:49:23.182822583 +0000 UTC m=+0.157031980 container died 5e07355cfa0194c776fd81f9defcb81c3c187254a3af90124b14d9c70babfacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 03:49:23 np0005465604 systemd[1]: libpod-5e07355cfa0194c776fd81f9defcb81c3c187254a3af90124b14d9c70babfacc.scope: Deactivated successfully.
Oct  2 03:49:23 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6b67f562f40297899a99788ff3d3a718023c1c4dbfd3ff7131e624d679748f91-merged.mount: Deactivated successfully.
Oct  2 03:49:23 np0005465604 podman[102027]: 2025-10-02 07:49:23.225796511 +0000 UTC m=+0.200005908 container remove 5e07355cfa0194c776fd81f9defcb81c3c187254a3af90124b14d9c70babfacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_lamport, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 03:49:23 np0005465604 systemd[1]: libpod-conmon-5e07355cfa0194c776fd81f9defcb81c3c187254a3af90124b14d9c70babfacc.scope: Deactivated successfully.
Oct  2 03:49:23 np0005465604 podman[102092]: 2025-10-02 07:49:23.408066628 +0000 UTC m=+0.043518377 container create ca9b862049b8caaea450c8bb7bef98790c99ea504db7dd1e6494c545f98867d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_noether, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:23 np0005465604 systemd[1]: Started libpod-conmon-ca9b862049b8caaea450c8bb7bef98790c99ea504db7dd1e6494c545f98867d4.scope.
Oct  2 03:49:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a0e064ca5cff2f70776eef44f1113a43071a5e3b1d76959dcbd4cfea7eec3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a0e064ca5cff2f70776eef44f1113a43071a5e3b1d76959dcbd4cfea7eec3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a0e064ca5cff2f70776eef44f1113a43071a5e3b1d76959dcbd4cfea7eec3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a0e064ca5cff2f70776eef44f1113a43071a5e3b1d76959dcbd4cfea7eec3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:23 np0005465604 python3[102086]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:23 np0005465604 podman[102092]: 2025-10-02 07:49:23.390444332 +0000 UTC m=+0.025896111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:23 np0005465604 podman[102092]: 2025-10-02 07:49:23.503395195 +0000 UTC m=+0.138846954 container init ca9b862049b8caaea450c8bb7bef98790c99ea504db7dd1e6494c545f98867d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_noether, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 03:49:23 np0005465604 podman[102092]: 2025-10-02 07:49:23.517840982 +0000 UTC m=+0.153292731 container start ca9b862049b8caaea450c8bb7bef98790c99ea504db7dd1e6494c545f98867d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 03:49:23 np0005465604 podman[102092]: 2025-10-02 07:49:23.521768257 +0000 UTC m=+0.157220026 container attach ca9b862049b8caaea450c8bb7bef98790c99ea504db7dd1e6494c545f98867d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 03:49:23 np0005465604 podman[102112]: 2025-10-02 07:49:23.576342193 +0000 UTC m=+0.067104258 container create 5f96352ba4bbad6380a27db01fb12d7b6e903585f39083265de1d42f4f25a890 (image=quay.io/ceph/ceph:v18, name=quizzical_mclaren, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:23 np0005465604 systemd[1]: Started libpod-conmon-5f96352ba4bbad6380a27db01fb12d7b6e903585f39083265de1d42f4f25a890.scope.
Oct  2 03:49:23 np0005465604 podman[102112]: 2025-10-02 07:49:23.548326001 +0000 UTC m=+0.039088116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3674ffd8821163ccf37bca8b8cbb94be91d82dc67b7aa57022ac3f63e6e75112/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3674ffd8821163ccf37bca8b8cbb94be91d82dc67b7aa57022ac3f63e6e75112/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:23 np0005465604 podman[102112]: 2025-10-02 07:49:23.672718798 +0000 UTC m=+0.163480913 container init 5f96352ba4bbad6380a27db01fb12d7b6e903585f39083265de1d42f4f25a890 (image=quay.io/ceph/ceph:v18, name=quizzical_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 03:49:23 np0005465604 podman[102112]: 2025-10-02 07:49:23.68294673 +0000 UTC m=+0.173708765 container start 5f96352ba4bbad6380a27db01fb12d7b6e903585f39083265de1d42f4f25a890 (image=quay.io/ceph/ceph:v18, name=quizzical_mclaren, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:23 np0005465604 podman[102112]: 2025-10-02 07:49:23.686645826 +0000 UTC m=+0.177407861 container attach 5f96352ba4bbad6380a27db01fb12d7b6e903585f39083265de1d42f4f25a890 (image=quay.io/ceph/ceph:v18, name=quizzical_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 03:49:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct  2 03:49:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3354485599' entity='client.rgw.rgw.compute-0.ulqmgx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  2 03:49:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct  2 03:49:23 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct  2 03:49:23 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3354485599' entity='client.rgw.rgw.compute-0.ulqmgx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  2 03:49:23 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3354485599' entity='client.rgw.rgw.compute-0.ulqmgx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 03:49:23 np0005465604 ceph-mon[74477]: from='client.? 192.168.122.100:0/3354485599' entity='client.rgw.rgw.compute-0.ulqmgx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  2 03:49:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 1 creating+peering, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Oct  2 03:49:23 np0005465604 radosgw[100259]: LDAP not started since no server URIs were provided in the configuration.
Oct  2 03:49:23 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-rgw-rgw-compute-0-ulqmgx[100255]: 2025-10-02T07:49:23.915+0000 7fefc9317940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  2 03:49:23 np0005465604 radosgw[100259]: framework: beast
Oct  2 03:49:23 np0005465604 radosgw[100259]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  2 03:49:23 np0005465604 radosgw[100259]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  2 03:49:23 np0005465604 radosgw[100259]: starting handler: beast
Oct  2 03:49:23 np0005465604 radosgw[100259]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 03:49:24 np0005465604 radosgw[100259]: mgrc service_daemon_register rgw.14273 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.ulqmgx,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=fd6fe44d-8008-49fe-8062-459aab7d5a87,zone_name=default,zonegroup_id=91ee5501-1408-4d49-b2a1-0947f2e52cef,zonegroup_name=default}
Oct  2 03:49:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct  2 03:49:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030891457' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct  2 03:49:24 np0005465604 quizzical_mclaren[102129]: mimic
Oct  2 03:49:24 np0005465604 systemd[1]: libpod-5f96352ba4bbad6380a27db01fb12d7b6e903585f39083265de1d42f4f25a890.scope: Deactivated successfully.
Oct  2 03:49:24 np0005465604 podman[102112]: 2025-10-02 07:49:24.271376231 +0000 UTC m=+0.762138266 container died 5f96352ba4bbad6380a27db01fb12d7b6e903585f39083265de1d42f4f25a890 (image=quay.io/ceph/ceph:v18, name=quizzical_mclaren, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3674ffd8821163ccf37bca8b8cbb94be91d82dc67b7aa57022ac3f63e6e75112-merged.mount: Deactivated successfully.
Oct  2 03:49:24 np0005465604 podman[102112]: 2025-10-02 07:49:24.320838672 +0000 UTC m=+0.811600707 container remove 5f96352ba4bbad6380a27db01fb12d7b6e903585f39083265de1d42f4f25a890 (image=quay.io/ceph/ceph:v18, name=quizzical_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 03:49:24 np0005465604 systemd[1]: libpod-conmon-5f96352ba4bbad6380a27db01fb12d7b6e903585f39083265de1d42f4f25a890.scope: Deactivated successfully.
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]: {
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "osd_id": 2,
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "type": "bluestore"
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:    },
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "osd_id": 1,
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "type": "bluestore"
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:    },
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "osd_id": 0,
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:        "type": "bluestore"
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]:    }
Oct  2 03:49:24 np0005465604 hardcore_noether[102109]: }
Oct  2 03:49:24 np0005465604 systemd[1]: libpod-ca9b862049b8caaea450c8bb7bef98790c99ea504db7dd1e6494c545f98867d4.scope: Deactivated successfully.
Oct  2 03:49:24 np0005465604 podman[102092]: 2025-10-02 07:49:24.521557283 +0000 UTC m=+1.157009053 container died ca9b862049b8caaea450c8bb7bef98790c99ea504db7dd1e6494c545f98867d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 03:49:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b0a0e064ca5cff2f70776eef44f1113a43071a5e3b1d76959dcbd4cfea7eec3f-merged.mount: Deactivated successfully.
Oct  2 03:49:24 np0005465604 podman[102092]: 2025-10-02 07:49:24.573291372 +0000 UTC m=+1.208743121 container remove ca9b862049b8caaea450c8bb7bef98790c99ea504db7dd1e6494c545f98867d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_noether, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:24 np0005465604 systemd[1]: libpod-conmon-ca9b862049b8caaea450c8bb7bef98790c99ea504db7dd1e6494c545f98867d4.scope: Deactivated successfully.
Oct  2 03:49:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:49:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:49:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:24 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9376ce30-60d2-4257-ac9b-d5c6d4e4de00 does not exist
Oct  2 03:49:24 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 65998e44-3aba-4770-9e24-31698d1fe81b does not exist
Oct  2 03:49:24 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 03:49:24 np0005465604 ceph-mon[74477]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  2 03:49:24 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:24 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:25 np0005465604 python3[102967]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:49:25 np0005465604 podman[102998]: 2025-10-02 07:49:25.437140394 +0000 UTC m=+0.055156178 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 03:49:25 np0005465604 podman[103010]: 2025-10-02 07:49:25.463355305 +0000 UTC m=+0.043533688 container create 454c7b2c98c9f7d6b31402fdb5c87ba6033c86f60a88cd3d5ea07716c6cf0804 (image=quay.io/ceph/ceph:v18, name=intelligent_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 03:49:25 np0005465604 systemd[1]: Started libpod-conmon-454c7b2c98c9f7d6b31402fdb5c87ba6033c86f60a88cd3d5ea07716c6cf0804.scope.
Oct  2 03:49:25 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529248a7805602aca628f96f9e0ae52d0a00f545481f8bdb51d4cb0f848f583b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529248a7805602aca628f96f9e0ae52d0a00f545481f8bdb51d4cb0f848f583b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:25 np0005465604 podman[103010]: 2025-10-02 07:49:25.544037669 +0000 UTC m=+0.124216072 container init 454c7b2c98c9f7d6b31402fdb5c87ba6033c86f60a88cd3d5ea07716c6cf0804 (image=quay.io/ceph/ceph:v18, name=intelligent_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 03:49:25 np0005465604 podman[103010]: 2025-10-02 07:49:25.447759588 +0000 UTC m=+0.027937991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:49:25 np0005465604 podman[103010]: 2025-10-02 07:49:25.55017862 +0000 UTC m=+0.130357043 container start 454c7b2c98c9f7d6b31402fdb5c87ba6033c86f60a88cd3d5ea07716c6cf0804 (image=quay.io/ceph/ceph:v18, name=intelligent_hypatia, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:25 np0005465604 podman[103010]: 2025-10-02 07:49:25.554938554 +0000 UTC m=+0.135116937 container attach 454c7b2c98c9f7d6b31402fdb5c87ba6033c86f60a88cd3d5ea07716c6cf0804 (image=quay.io/ceph/ceph:v18, name=intelligent_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:25 np0005465604 podman[102998]: 2025-10-02 07:49:25.555049877 +0000 UTC m=+0.173065641 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:25 np0005465604 ceph-mon[74477]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 03:49:25 np0005465604 ceph-mon[74477]: Cluster is now healthy
Oct  2 03:49:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 1 creating+peering, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 202 B/s rd, 404 B/s wr, 1 op/s
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/458925144' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct  2 03:49:26 np0005465604 intelligent_hypatia[103031]: 
Oct  2 03:49:26 np0005465604 intelligent_hypatia[103031]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Oct  2 03:49:26 np0005465604 systemd[1]: libpod-454c7b2c98c9f7d6b31402fdb5c87ba6033c86f60a88cd3d5ea07716c6cf0804.scope: Deactivated successfully.
Oct  2 03:49:26 np0005465604 podman[103010]: 2025-10-02 07:49:26.17168209 +0000 UTC m=+0.751860483 container died 454c7b2c98c9f7d6b31402fdb5c87ba6033c86f60a88cd3d5ea07716c6cf0804 (image=quay.io/ceph/ceph:v18, name=intelligent_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:49:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay-529248a7805602aca628f96f9e0ae52d0a00f545481f8bdb51d4cb0f848f583b-merged.mount: Deactivated successfully.
Oct  2 03:49:26 np0005465604 podman[103010]: 2025-10-02 07:49:26.222233917 +0000 UTC m=+0.802412300 container remove 454c7b2c98c9f7d6b31402fdb5c87ba6033c86f60a88cd3d5ea07716c6cf0804 (image=quay.io/ceph/ceph:v18, name=intelligent_hypatia, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:26 np0005465604 systemd[1]: libpod-conmon-454c7b2c98c9f7d6b31402fdb5c87ba6033c86f60a88cd3d5ea07716c6cf0804.scope: Deactivated successfully.
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:26 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4f11adb6-a57d-4864-920f-3ec007a8c02f does not exist
Oct  2 03:49:26 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e0bba768-cad5-479b-a071-5e1efd4c3270 does not exist
Oct  2 03:49:26 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a332d3cb-5492-4f35-9e58-eaa886770f51 does not exist
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:26 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:49:26 np0005465604 podman[103347]: 2025-10-02 07:49:26.95465495 +0000 UTC m=+0.048178757 container create cd6e3e30525aa339f2a9cbcca72c9f648be9d38120894c99c5d33ef2af66f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mendel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:27 np0005465604 systemd[1]: Started libpod-conmon-cd6e3e30525aa339f2a9cbcca72c9f648be9d38120894c99c5d33ef2af66f935.scope.
Oct  2 03:49:27 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:27 np0005465604 podman[103347]: 2025-10-02 07:49:26.936865938 +0000 UTC m=+0.030389785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:27 np0005465604 podman[103347]: 2025-10-02 07:49:27.040617195 +0000 UTC m=+0.134141022 container init cd6e3e30525aa339f2a9cbcca72c9f648be9d38120894c99c5d33ef2af66f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 03:49:27 np0005465604 podman[103347]: 2025-10-02 07:49:27.046028362 +0000 UTC m=+0.139552169 container start cd6e3e30525aa339f2a9cbcca72c9f648be9d38120894c99c5d33ef2af66f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 03:49:27 np0005465604 podman[103347]: 2025-10-02 07:49:27.049785471 +0000 UTC m=+0.143309308 container attach cd6e3e30525aa339f2a9cbcca72c9f648be9d38120894c99c5d33ef2af66f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mendel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:27 np0005465604 youthful_mendel[103364]: 167 167
Oct  2 03:49:27 np0005465604 systemd[1]: libpod-cd6e3e30525aa339f2a9cbcca72c9f648be9d38120894c99c5d33ef2af66f935.scope: Deactivated successfully.
Oct  2 03:49:27 np0005465604 podman[103347]: 2025-10-02 07:49:27.053074814 +0000 UTC m=+0.146598661 container died cd6e3e30525aa339f2a9cbcca72c9f648be9d38120894c99c5d33ef2af66f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 03:49:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a88e3eee69417f9461d0ac1266baefb9f9a0c7ac549b086cd5d931463596d4f2-merged.mount: Deactivated successfully.
Oct  2 03:49:27 np0005465604 podman[103347]: 2025-10-02 07:49:27.100697622 +0000 UTC m=+0.194221469 container remove cd6e3e30525aa339f2a9cbcca72c9f648be9d38120894c99c5d33ef2af66f935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mendel, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:27 np0005465604 systemd[1]: libpod-conmon-cd6e3e30525aa339f2a9cbcca72c9f648be9d38120894c99c5d33ef2af66f935.scope: Deactivated successfully.
Oct  2 03:49:27 np0005465604 podman[103387]: 2025-10-02 07:49:27.299648222 +0000 UTC m=+0.061590748 container create 40f182369e4d239dcf0fbf11ac047c2d2ebbd02b8919614d93f5dd2a7d2b4274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:27 np0005465604 systemd[1]: Started libpod-conmon-40f182369e4d239dcf0fbf11ac047c2d2ebbd02b8919614d93f5dd2a7d2b4274.scope.
Oct  2 03:49:27 np0005465604 podman[103387]: 2025-10-02 07:49:27.27718812 +0000 UTC m=+0.039130726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:27 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f09e06a9c622cb01c03e094e32dd1e688adba9098d0375c046ece967675d0f3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f09e06a9c622cb01c03e094e32dd1e688adba9098d0375c046ece967675d0f3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f09e06a9c622cb01c03e094e32dd1e688adba9098d0375c046ece967675d0f3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f09e06a9c622cb01c03e094e32dd1e688adba9098d0375c046ece967675d0f3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f09e06a9c622cb01c03e094e32dd1e688adba9098d0375c046ece967675d0f3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:27 np0005465604 podman[103387]: 2025-10-02 07:49:27.392432663 +0000 UTC m=+0.154375229 container init 40f182369e4d239dcf0fbf11ac047c2d2ebbd02b8919614d93f5dd2a7d2b4274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 03:49:27 np0005465604 podman[103387]: 2025-10-02 07:49:27.403530754 +0000 UTC m=+0.165473310 container start 40f182369e4d239dcf0fbf11ac047c2d2ebbd02b8919614d93f5dd2a7d2b4274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:27 np0005465604 podman[103387]: 2025-10-02 07:49:27.407793181 +0000 UTC m=+0.169735737 container attach 40f182369e4d239dcf0fbf11ac047c2d2ebbd02b8919614d93f5dd2a7d2b4274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:49:27
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Some PGs (0.090909) are inactive; try again later
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 5.6 KiB/s wr, 182 op/s
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:49:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct  2 03:49:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:49:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:49:28 np0005465604 cranky_cray[103403]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:49:28 np0005465604 cranky_cray[103403]: --> relative data size: 1.0
Oct  2 03:49:28 np0005465604 cranky_cray[103403]: --> All data devices are unavailable
Oct  2 03:49:28 np0005465604 systemd[1]: libpod-40f182369e4d239dcf0fbf11ac047c2d2ebbd02b8919614d93f5dd2a7d2b4274.scope: Deactivated successfully.
Oct  2 03:49:28 np0005465604 systemd[1]: libpod-40f182369e4d239dcf0fbf11ac047c2d2ebbd02b8919614d93f5dd2a7d2b4274.scope: Consumed 1.009s CPU time.
Oct  2 03:49:28 np0005465604 podman[103387]: 2025-10-02 07:49:28.458624781 +0000 UTC m=+1.220567307 container died 40f182369e4d239dcf0fbf11ac047c2d2ebbd02b8919614d93f5dd2a7d2b4274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 03:49:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f09e06a9c622cb01c03e094e32dd1e688adba9098d0375c046ece967675d0f3f-merged.mount: Deactivated successfully.
Oct  2 03:49:28 np0005465604 podman[103387]: 2025-10-02 07:49:28.520350753 +0000 UTC m=+1.282293299 container remove 40f182369e4d239dcf0fbf11ac047c2d2ebbd02b8919614d93f5dd2a7d2b4274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 03:49:28 np0005465604 systemd[1]: libpod-conmon-40f182369e4d239dcf0fbf11ac047c2d2ebbd02b8919614d93f5dd2a7d2b4274.scope: Deactivated successfully.
Oct  2 03:49:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct  2 03:49:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct  2 03:49:28 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct  2 03:49:28 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev 7608b97a-3d5e-4870-a8ea-74716f4dedc6 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  2 03:49:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct  2 03:49:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:29 np0005465604 podman[103582]: 2025-10-02 07:49:29.275924433 +0000 UTC m=+0.048938283 container create 00b7bf7f870a849505dae9ae71291356a83850c372e0f066afaf9b2de217ba27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:49:29 np0005465604 systemd[1]: Started libpod-conmon-00b7bf7f870a849505dae9ae71291356a83850c372e0f066afaf9b2de217ba27.scope.
Oct  2 03:49:29 np0005465604 podman[103582]: 2025-10-02 07:49:29.256619749 +0000 UTC m=+0.029633680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:29 np0005465604 podman[103582]: 2025-10-02 07:49:29.381303456 +0000 UTC m=+0.154317387 container init 00b7bf7f870a849505dae9ae71291356a83850c372e0f066afaf9b2de217ba27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:29 np0005465604 podman[103582]: 2025-10-02 07:49:29.393141723 +0000 UTC m=+0.166155564 container start 00b7bf7f870a849505dae9ae71291356a83850c372e0f066afaf9b2de217ba27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:29 np0005465604 podman[103582]: 2025-10-02 07:49:29.396583961 +0000 UTC m=+0.169597892 container attach 00b7bf7f870a849505dae9ae71291356a83850c372e0f066afaf9b2de217ba27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 03:49:29 np0005465604 focused_goldwasser[103598]: 167 167
Oct  2 03:49:29 np0005465604 systemd[1]: libpod-00b7bf7f870a849505dae9ae71291356a83850c372e0f066afaf9b2de217ba27.scope: Deactivated successfully.
Oct  2 03:49:29 np0005465604 podman[103582]: 2025-10-02 07:49:29.397996939 +0000 UTC m=+0.171010820 container died 00b7bf7f870a849505dae9ae71291356a83850c372e0f066afaf9b2de217ba27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:49:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3251720b4ddaa589f042f7e4be8b467f0562aad9c31242a71c28b54181903d2a-merged.mount: Deactivated successfully.
Oct  2 03:49:29 np0005465604 podman[103582]: 2025-10-02 07:49:29.448080182 +0000 UTC m=+0.221094023 container remove 00b7bf7f870a849505dae9ae71291356a83850c372e0f066afaf9b2de217ba27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_goldwasser, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 03:49:29 np0005465604 systemd[1]: libpod-conmon-00b7bf7f870a849505dae9ae71291356a83850c372e0f066afaf9b2de217ba27.scope: Deactivated successfully.
Oct  2 03:49:29 np0005465604 podman[103621]: 2025-10-02 07:49:29.624529839 +0000 UTC m=+0.044221052 container create addbec2f346a412681d86db79352ab63d97db29b707d4d9e2d0d56e1ea98b633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 03:49:29 np0005465604 systemd[1]: Started libpod-conmon-addbec2f346a412681d86db79352ab63d97db29b707d4d9e2d0d56e1ea98b633.scope.
Oct  2 03:49:29 np0005465604 podman[103621]: 2025-10-02 07:49:29.611167439 +0000 UTC m=+0.030858672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d399715fe649f9fb78dc4d9b6722f3a3d950ee5039c68d84b8d3e478dc5e828/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d399715fe649f9fb78dc4d9b6722f3a3d950ee5039c68d84b8d3e478dc5e828/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d399715fe649f9fb78dc4d9b6722f3a3d950ee5039c68d84b8d3e478dc5e828/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d399715fe649f9fb78dc4d9b6722f3a3d950ee5039c68d84b8d3e478dc5e828/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:29 np0005465604 podman[103621]: 2025-10-02 07:49:29.729683004 +0000 UTC m=+0.149374297 container init addbec2f346a412681d86db79352ab63d97db29b707d4d9e2d0d56e1ea98b633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 03:49:29 np0005465604 podman[103621]: 2025-10-02 07:49:29.741370436 +0000 UTC m=+0.161061649 container start addbec2f346a412681d86db79352ab63d97db29b707d4d9e2d0d56e1ea98b633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:49:29 np0005465604 podman[103621]: 2025-10-02 07:49:29.744724821 +0000 UTC m=+0.164416114 container attach addbec2f346a412681d86db79352ab63d97db29b707d4d9e2d0d56e1ea98b633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:49:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v93: 11 pgs: 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 4.5 KiB/s wr, 156 op/s
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct  2 03:49:29 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev 06622226-196a-4740-951f-31384750ec6f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=17/18 n=0 ec=13/13 lis/c=17/17 les/c/f=18/18/0 sis=42 pruub=11.167280197s) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active pruub 69.813713074s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=17/18 n=0 ec=13/13 lis/c=17/17 les/c/f=18/18/0 sis=42 pruub=11.167280197s) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown pruub 69.813713074s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]: {
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:    "0": [
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:        {
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "devices": [
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "/dev/loop3"
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            ],
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_name": "ceph_lv0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_size": "21470642176",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "name": "ceph_lv0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "tags": {
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.crush_device_class": "",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.encrypted": "0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.osd_id": "0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.type": "block",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.vdo": "0"
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            },
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "type": "block",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "vg_name": "ceph_vg0"
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:        }
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:    ],
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:    "1": [
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:        {
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "devices": [
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "/dev/loop4"
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            ],
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_name": "ceph_lv1",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_size": "21470642176",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "name": "ceph_lv1",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "tags": {
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.crush_device_class": "",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.encrypted": "0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.osd_id": "1",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.type": "block",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.vdo": "0"
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            },
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "type": "block",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "vg_name": "ceph_vg1"
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:        }
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:    ],
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:    "2": [
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:        {
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "devices": [
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "/dev/loop5"
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            ],
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_name": "ceph_lv2",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_size": "21470642176",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "name": "ceph_lv2",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "tags": {
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.cluster_name": "ceph",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.crush_device_class": "",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.encrypted": "0",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.osd_id": "2",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.type": "block",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:                "ceph.vdo": "0"
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            },
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "type": "block",
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:            "vg_name": "ceph_vg2"
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:        }
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]:    ]
Oct  2 03:49:30 np0005465604 competent_khayyam[103638]: }
Oct  2 03:49:30 np0005465604 systemd[1]: libpod-addbec2f346a412681d86db79352ab63d97db29b707d4d9e2d0d56e1ea98b633.scope: Deactivated successfully.
Oct  2 03:49:30 np0005465604 podman[103621]: 2025-10-02 07:49:30.545512185 +0000 UTC m=+0.965203398 container died addbec2f346a412681d86db79352ab63d97db29b707d4d9e2d0d56e1ea98b633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct  2 03:49:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9d399715fe649f9fb78dc4d9b6722f3a3d950ee5039c68d84b8d3e478dc5e828-merged.mount: Deactivated successfully.
Oct  2 03:49:30 np0005465604 podman[103621]: 2025-10-02 07:49:30.60091972 +0000 UTC m=+1.020610943 container remove addbec2f346a412681d86db79352ab63d97db29b707d4d9e2d0d56e1ea98b633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:30 np0005465604 systemd[1]: libpod-conmon-addbec2f346a412681d86db79352ab63d97db29b707d4d9e2d0d56e1ea98b633.scope: Deactivated successfully.
Oct  2 03:49:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct  2 03:49:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct  2 03:49:30 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct  2 03:49:30 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev 3f52aeb8-3f6e-43eb-9428-047a540a7648 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  2 03:49:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct  2 03:49:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1e( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.e( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.14( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1a( empty local-lis/les=17/18 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1e( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=42/43 n=0 ec=13/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.e( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.14( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.1a( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:30 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=17/17 les/c/f=18/18/0 sis=42) [2] r=0 lpr=42 pi=[17,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:31 np0005465604 podman[103798]: 2025-10-02 07:49:31.373667199 +0000 UTC m=+0.066112124 container create dad66bc9d86a36f6005c28ab254cffc8c3dcafd98f53d328511283e3ffeb34ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:31 np0005465604 systemd[1]: Started libpod-conmon-dad66bc9d86a36f6005c28ab254cffc8c3dcafd98f53d328511283e3ffeb34ed.scope.
Oct  2 03:49:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:31 np0005465604 podman[103798]: 2025-10-02 07:49:31.346793045 +0000 UTC m=+0.039238070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:31 np0005465604 podman[103798]: 2025-10-02 07:49:31.440773656 +0000 UTC m=+0.133218601 container init dad66bc9d86a36f6005c28ab254cffc8c3dcafd98f53d328511283e3ffeb34ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 03:49:31 np0005465604 podman[103798]: 2025-10-02 07:49:31.452902164 +0000 UTC m=+0.145347119 container start dad66bc9d86a36f6005c28ab254cffc8c3dcafd98f53d328511283e3ffeb34ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bohr, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 03:49:31 np0005465604 podman[103798]: 2025-10-02 07:49:31.456769747 +0000 UTC m=+0.149214702 container attach dad66bc9d86a36f6005c28ab254cffc8c3dcafd98f53d328511283e3ffeb34ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct  2 03:49:31 np0005465604 sad_bohr[103814]: 167 167
Oct  2 03:49:31 np0005465604 systemd[1]: libpod-dad66bc9d86a36f6005c28ab254cffc8c3dcafd98f53d328511283e3ffeb34ed.scope: Deactivated successfully.
Oct  2 03:49:31 np0005465604 podman[103798]: 2025-10-02 07:49:31.460281387 +0000 UTC m=+0.152726332 container died dad66bc9d86a36f6005c28ab254cffc8c3dcafd98f53d328511283e3ffeb34ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct  2 03:49:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-755171124df078610056e8f8ba92c161ea178f47e771228d4b1cf1c037e71b29-merged.mount: Deactivated successfully.
Oct  2 03:49:31 np0005465604 podman[103798]: 2025-10-02 07:49:31.50834831 +0000 UTC m=+0.200793265 container remove dad66bc9d86a36f6005c28ab254cffc8c3dcafd98f53d328511283e3ffeb34ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:49:31 np0005465604 systemd[1]: libpod-conmon-dad66bc9d86a36f6005c28ab254cffc8c3dcafd98f53d328511283e3ffeb34ed.scope: Deactivated successfully.
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:31 np0005465604 podman[103839]: 2025-10-02 07:49:31.715361958 +0000 UTC m=+0.058961539 container create f90c97ff587b3c44963b14bdeece7513baeba8fdf37bcb404135c52a293e777b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatelet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 03:49:31 np0005465604 systemd[1]: Started libpod-conmon-f90c97ff587b3c44963b14bdeece7513baeba8fdf37bcb404135c52a293e777b.scope.
Oct  2 03:49:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:49:31 np0005465604 podman[103839]: 2025-10-02 07:49:31.687655854 +0000 UTC m=+0.031255485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:49:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88f3053903c2e65043bc7efae8b470fd6316f447dfabb3281ef1d4808b81f83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88f3053903c2e65043bc7efae8b470fd6316f447dfabb3281ef1d4808b81f83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88f3053903c2e65043bc7efae8b470fd6316f447dfabb3281ef1d4808b81f83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88f3053903c2e65043bc7efae8b470fd6316f447dfabb3281ef1d4808b81f83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:49:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v96: 42 pgs: 31 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 5.3 KiB/s wr, 183 op/s
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:31 np0005465604 podman[103839]: 2025-10-02 07:49:31.808711067 +0000 UTC m=+0.152310698 container init f90c97ff587b3c44963b14bdeece7513baeba8fdf37bcb404135c52a293e777b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatelet, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 03:49:31 np0005465604 podman[103839]: 2025-10-02 07:49:31.819214479 +0000 UTC m=+0.162814060 container start f90c97ff587b3c44963b14bdeece7513baeba8fdf37bcb404135c52a293e777b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatelet, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 03:49:31 np0005465604 podman[103839]: 2025-10-02 07:49:31.823051341 +0000 UTC m=+0.166650912 container attach f90c97ff587b3c44963b14bdeece7513baeba8fdf37bcb404135c52a293e777b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct  2 03:49:31 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev b10bd5ef-11fa-4266-8401-be5cec577e8c (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  2 03:49:31 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=44 pruub=9.351511002s) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active pruub 79.079154968s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  2 03:49:31 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=44 pruub=9.351511002s) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown pruub 79.079154968s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]: {
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "osd_id": 2,
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "type": "bluestore"
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:    },
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "osd_id": 1,
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "type": "bluestore"
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:    },
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "osd_id": 0,
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:        "type": "bluestore"
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]:    }
Oct  2 03:49:32 np0005465604 keen_chatelet[103856]: }
Oct  2 03:49:32 np0005465604 systemd[1]: libpod-f90c97ff587b3c44963b14bdeece7513baeba8fdf37bcb404135c52a293e777b.scope: Deactivated successfully.
Oct  2 03:49:32 np0005465604 systemd[1]: libpod-f90c97ff587b3c44963b14bdeece7513baeba8fdf37bcb404135c52a293e777b.scope: Consumed 1.024s CPU time.
Oct  2 03:49:32 np0005465604 podman[103839]: 2025-10-02 07:49:32.834166105 +0000 UTC m=+1.177765656 container died f90c97ff587b3c44963b14bdeece7513baeba8fdf37bcb404135c52a293e777b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct  2 03:49:32 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev 726b3379-5c31-4899-96a7-29738544ce0f (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b88f3053903c2e65043bc7efae8b470fd6316f447dfabb3281ef1d4808b81f83-merged.mount: Deactivated successfully.
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1e( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.b( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.16( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.17( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 44 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=44 pruub=14.335729599s) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active pruub 80.751655579s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=44 pruub=14.335729599s) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown pruub 80.751655579s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.2( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.4( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1e( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.b( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.b( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.19( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=15/16 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=44/45 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.17( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.16( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:32 np0005465604 ceph-mgr[74774]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Oct  2 03:49:32 np0005465604 podman[103839]: 2025-10-02 07:49:32.896526499 +0000 UTC m=+1.240126040 container remove f90c97ff587b3c44963b14bdeece7513baeba8fdf37bcb404135c52a293e777b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chatelet, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:49:32 np0005465604 systemd[1]: libpod-conmon-f90c97ff587b3c44963b14bdeece7513baeba8fdf37bcb404135c52a293e777b.scope: Deactivated successfully.
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:49:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:32 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 08910732-6110-4316-aa73-cd7d136ed186 does not exist
Oct  2 03:49:32 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e72c985c-2b18-45a8-88d7-c52cef8e0e1b does not exist
Oct  2 03:49:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v99: 104 pgs: 31 unknown, 73 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct  2 03:49:33 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev ea30a1a1-15dc-4f20-bf04-799654d6e2e9 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.19( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.b( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.4( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.2( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.0( empty local-lis/les=44/46 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=15/15 les/c/f=16/16/0 sis=44) [1] r=0 lpr=44 pi=[15,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:34 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct  2 03:49:34 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct  2 03:49:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct  2 03:49:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct  2 03:49:34 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct  2 03:49:34 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev ab10b4fa-360d-4b6c-ae09-f51962f5c12f (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  2 03:49:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct  2 03:49:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:34 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:34 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 46 pg[6.0( v 38'39 (0'0,38'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=10.297105789s) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 34'38 mlcod 34'38 active pruub 83.126548767s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.0( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=10.297105789s) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 34'38 mlcod 0'0 unknown pruub 83.126548767s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.2( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.4( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.5( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.7( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.c( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:34 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 47 pg[6.e( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=21/22 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct  2 03:49:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v102: 150 pgs: 77 unknown, 73 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct  2 03:49:35 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 48 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=32/33 n=4 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48 pruub=12.808247566s) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 33'3 mlcod 33'3 active pruub 82.237358093s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:35 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48 pruub=11.411789894s) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active pruub 80.841033936s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:35 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev 22dcd49a-15d9-4727-8921-ec76676af8ef (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:35 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48 pruub=11.411789894s) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown pruub 80.841033936s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:35 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 48 pg[8.0( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48 pruub=12.808247566s) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 33'3 mlcod 0'0 unknown pruub 82.237358093s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.0( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 34'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 48 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [0] r=0 lpr=46 pi=[21,46)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct  2 03:49:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 46 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=46 pruub=14.557520866s) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active pruub 79.825981140s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=46 pruub=14.557520866s) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown pruub 79.825981140s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.17( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.1b( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.1c( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.6( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.1f( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.8( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.a( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.b( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.d( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.e( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.10( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=19/20 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct  2 03:49:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct  2 03:49:36 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct  2 03:49:36 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev 61f78fa2-136d-488c-bfb8-d7eb93a4fce3 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  2 03:49:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct  2 03:49:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.12( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.11( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.10( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.18( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.17( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.13( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.19( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.16( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.15( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.14( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.5( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.4( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.b( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.6( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.9( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.a( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.7( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.8( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.2( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.d( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.9( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.6( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.b( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.4( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.f( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=32/33 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.e( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.3( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.c( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.5( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.8( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.7( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.d( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.2( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.c( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.3( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.13( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1c( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.12( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1d( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.11( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1e( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.10( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1f( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.17( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.1f( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.18( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.16( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.19( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1a( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.14( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1b( empty local-lis/les=23/24 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1e( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=32/33 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.10( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.1d( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.12( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.13( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.14( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.15( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.16( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.17( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.8( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.1e( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.a( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.c( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.9( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.b( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.7( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=46/49 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.4( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.6( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.f( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.e( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.3( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.d( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.2( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.1c( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.1a( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.1b( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.19( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.18( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.5( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.1( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 49 pg[5.11( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=19/19 les/c/f=20/20/0 sis=46) [2] r=0 lpr=46 pi=[19,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.11( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.12( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.10( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.17( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.19( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.16( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.15( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.13( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.14( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.5( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=48/49 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.9( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.b( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.7( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.a( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.8( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.d( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.6( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.4( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.0( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 33'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.0( empty local-lis/les=48/49 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.f( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1( v 33'4 (0'0,33'4] local-lis/les=48/49 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.e( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.5( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.c( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.a( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.8( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.7( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.3( v 33'4 (0'0,33'4] local-lis/les=48/49 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.2( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.3( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.13( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1c( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1d( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1e( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1f( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=48/49 n=1 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.18( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.19( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.16( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1a( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[7.1b( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [1] r=0 lpr=48 pi=[23,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.1e( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 49 pg[8.17( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=32/32 les/c/f=33/33/0 sis=48) [1] r=0 lpr=48 pi=[32,48)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v105: 212 pgs: 15 unknown, 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] update: starting ev da73a409-5073-4216-9bc7-78593bd79732 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev 7608b97a-3d5e-4870-a8ea-74716f4dedc6 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event 7608b97a-3d5e-4870-a8ea-74716f4dedc6 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev 06622226-196a-4740-951f-31384750ec6f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event 06622226-196a-4740-951f-31384750ec6f (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev 3f52aeb8-3f6e-43eb-9428-047a540a7648 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event 3f52aeb8-3f6e-43eb-9428-047a540a7648 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev b10bd5ef-11fa-4266-8401-be5cec577e8c (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event b10bd5ef-11fa-4266-8401-be5cec577e8c (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev 726b3379-5c31-4899-96a7-29738544ce0f (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event 726b3379-5c31-4899-96a7-29738544ce0f (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev ea30a1a1-15dc-4f20-bf04-799654d6e2e9 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event ea30a1a1-15dc-4f20-bf04-799654d6e2e9 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev ab10b4fa-360d-4b6c-ae09-f51962f5c12f (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event ab10b4fa-360d-4b6c-ae09-f51962f5c12f (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev 22dcd49a-15d9-4727-8921-ec76676af8ef (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event 22dcd49a-15d9-4727-8921-ec76676af8ef (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev 61f78fa2-136d-488c-bfb8-d7eb93a4fce3 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event 61f78fa2-136d-488c-bfb8-d7eb93a4fce3 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] complete: finished ev da73a409-5073-4216-9bc7-78593bd79732 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  2 03:49:37 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event da73a409-5073-4216-9bc7-78593bd79732 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:38 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct  2 03:49:38 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct  2 03:49:39 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct  2 03:49:39 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct  2 03:49:39 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 50 pg[10.0( v 37'16 (0'0,37'16] local-lis/les=36/37 n=8 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=13.093198776s) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 37'15 mlcod 37'15 active pruub 81.349510193s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:39 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 50 pg[10.0( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50 pruub=13.093198776s) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 37'15 mlcod 0'0 unknown pruub 81.349510193s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v107: 274 pgs: 62 unknown, 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct  2 03:49:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct  2 03:49:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:40 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.12( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.10( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1f( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1e( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1d( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1c( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1b( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1a( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.19( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.18( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.7( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.6( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.5( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.4( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.3( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.8( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.f( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.9( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.a( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.b( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.c( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.d( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.e( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.2( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.13( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.14( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.15( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.16( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.17( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.11( v 37'16 lc 0'0 (0'0,37'16] local-lis/les=36/37 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1d( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1c( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1f( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1b( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.18( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.5( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.3( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.0( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 37'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.9( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.a( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.c( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.d( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.e( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.14( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.15( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 51 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=36/36 les/c/f=37/37/0 sis=50) [2] r=0 lpr=50 pi=[36,50)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51 pruub=14.231414795s) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active pruub 88.322547913s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 50 pg[9.0( v 40'385 (0'0,40'385] local-lis/les=34/35 n=177 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=50 pruub=10.176308632s) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 40'384 mlcod 40'384 active pruub 84.267486572s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51 pruub=14.231414795s) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown pruub 88.322547913s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.0( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=50 pruub=10.176308632s) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 40'384 mlcod 0'0 unknown pruub 84.267486572s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.2( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.1( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.3( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.4( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.5( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.6( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.8( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.9( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.7( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.a( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.b( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.c( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.d( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.f( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.10( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.e( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.12( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.11( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.13( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.14( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.15( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.16( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.17( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.18( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.19( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.1a( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.1b( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.1c( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.1d( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.1e( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:40 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 51 pg[9.1f( v 40'385 lc 0'0 (0'0,40'385] local-lis/les=34/35 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct  2 03:49:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 03:49:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct  2 03:49:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.14( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.10( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.12( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.2( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.0( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 40'384 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.a( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.4( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.1a( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=34/34 les/c/f=35/35/0 sis=50) [1] r=0 lpr=50 pi=[34,50)/1 crt=40'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [1] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 93 unknown, 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:42 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct  2 03:49:42 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct  2 03:49:42 np0005465604 ceph-mgr[74774]: [progress INFO root] Writing back 15 completed events
Oct  2 03:49:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  2 03:49:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:49:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  2 03:49:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.856583595s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.513908386s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.976805687s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.634162903s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.856728554s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514144897s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.856526375s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.513908386s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.976761818s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.634162903s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.856655121s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514144897s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970631599s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.628211975s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970531464s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.628211975s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.975749016s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.633529663s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.975693703s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.633529663s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.813022614s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.471084595s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.812990189s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.471084595s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.855658531s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.513916016s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.813046455s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.471321106s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.855629921s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.513916016s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.813006401s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.471321106s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.812953949s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.471145630s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.975205421s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.633544922s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.812518120s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470977783s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.812621117s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.471145630s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.975034714s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.633544922s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.812019348s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470947266s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.855048180s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514015198s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.811904907s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470947266s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.812143326s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.471153259s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.811891556s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470977783s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.854993820s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.513984680s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.854792595s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514068604s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.854922295s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514015198s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.854769707s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514068604s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.811915398s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.471153259s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.974370956s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.633750916s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.854600906s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514076233s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.854598045s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.513984680s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.974221230s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.633750916s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.854440689s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514076233s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.811338425s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.471191406s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.811277390s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.471191406s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973771095s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.633773804s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.854141235s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514091492s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973687172s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.633773804s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.810643196s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470817566s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973460197s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.633811951s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.853813171s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514198303s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.810599327s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470817566s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973371506s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.633811951s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.853750229s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514198303s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973495483s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.633766174s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973220825s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.633766174s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.810564995s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.471160889s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.810543060s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.471160889s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973237038s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.633903503s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973218918s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.633903503s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.853451729s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514190674s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.853435516s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514190674s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973180771s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.633956909s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.853453636s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514259338s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.853438377s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514259338s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.809956551s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470794678s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973155975s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.633956909s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973021507s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.633972168s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.973000526s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.633972168s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.809622765s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470603943s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.809602737s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470603943s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.809756279s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470794678s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.853227615s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514312744s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.809520721s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470809937s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.809495926s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470809937s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.9( v 51'17 (0'0,51'17] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.972615242s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 84.633956909s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.809210777s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470565796s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.853682518s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514091492s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.9( v 51'17 (0'0,51'17] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.972572327s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 84.633956909s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.853068352s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514511108s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.853049278s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514511108s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.972438812s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.633987427s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.808975220s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470542908s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.808957100s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470542908s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.809093475s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470565796s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.972402573s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.633987427s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.852576256s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514289856s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.808705330s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470413208s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.852626801s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514381409s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.808629990s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470413208s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.852539062s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514289856s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.852580070s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514381409s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.808876991s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470825195s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.808857918s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470825195s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.853078842s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514312744s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.d( v 51'17 (0'0,51'17] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.971917152s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 84.634010315s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.852294922s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514396667s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.e( v 51'17 (0'0,51'17] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.971838951s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 84.634040833s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.d( v 51'17 (0'0,51'17] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.971793175s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 84.634010315s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.852223396s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514396667s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.e( v 51'17 (0'0,51'17] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.971768379s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 84.634040833s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.971554756s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.634101868s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.807711601s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470336914s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.851772308s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514381409s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.971510887s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.634101868s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.851706505s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514381409s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.807663918s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470336914s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.807674408s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470436096s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.807656288s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470436096s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.971147537s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.634056091s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=50/51 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.971114159s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.634056091s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.14( v 51'17 (0'0,51'17] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970991135s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 84.634086609s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.807104111s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470214844s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.807194710s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470336914s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.807175636s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470336914s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.807058334s) [1] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470214844s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.807272911s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470451355s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.14( v 51'17 (0'0,51'17] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970923424s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 84.634086609s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.807239532s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470451355s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.15( v 51'17 (0'0,51'17] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970853806s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 37'16 mlcod 37'16 active pruub 84.634124756s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.15( v 51'17 (0'0,51'17] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970829964s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 37'16 mlcod 0'0 unknown NOTIFY pruub 84.634124756s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.851101875s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514488220s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970751762s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.634132385s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.851087570s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514488220s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.806784630s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.470207214s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970718384s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.634132385s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.806757927s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.470207214s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.850977898s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514427185s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970655441s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.634140015s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970641136s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.634140015s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970911980s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active pruub 84.634109497s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.800957680s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active pruub 83.464553833s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.850894928s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active pruub 81.514503479s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.850936890s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514427185s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=50/51 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53 pruub=11.970504761s) [1] r=-1 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 84.634109497s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=46/49 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53 pruub=8.850871086s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.514503479s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=42/43 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=10.800922394s) [0] r=-1 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.464553833s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[10.1a( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[10.19( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[10.6( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[10.b( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[10.2( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[10.f( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[10.11( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[10.10( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[10.13( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[10.9( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[10.12( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[10.8( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[10.14( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[10.15( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961406708s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.580329895s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961380959s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.580329895s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.845133781s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.464187622s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.845120430s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.464187622s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807906151s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.426925659s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.845024109s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.464149475s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.966620445s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.585807800s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.844924927s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.464149475s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807671547s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.426925659s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807546616s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.426895142s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.844722748s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.464126587s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807509422s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.426895142s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.844706535s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.464126587s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.844629288s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.464141846s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.15( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.844610214s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.464141846s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965917587s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.585502625s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807241440s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.426918030s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807226181s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.426918030s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965814590s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.585502625s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.965808868s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.585594177s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.965794563s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.585594177s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.966014862s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.585830688s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965989113s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.585830688s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.965933800s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.585807800s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807032585s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.427032471s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807017326s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.427032471s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.844056129s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.464080811s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.844023705s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.464080811s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.965571404s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.585762024s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.965553284s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.585762024s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965633392s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.585853577s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965555191s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.585853577s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.843637466s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.463981628s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.843616486s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.463981628s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.843358994s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.463882446s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.843339920s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.463882446s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.843364716s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.464050293s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.843345642s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.464050293s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965125084s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.585914612s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965111732s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.585914612s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.806062698s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.426933289s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.806051254s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.426933289s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.842893600s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.463859558s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.842881203s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.463859558s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964918137s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.586013794s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964904785s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.586013794s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.842646599s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.463821411s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.842634201s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.463821411s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805699348s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.426948547s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805683136s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.426948547s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.842580795s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.463935852s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.842565536s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.463935852s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.964469910s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.585922241s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.964456558s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.585922241s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964516640s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.586051941s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964504242s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.586051941s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.842194557s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.463798523s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.842183113s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.463798523s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805513382s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.427215576s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805500031s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.427215576s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.841986656s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.463783264s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.841973305s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.463783264s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.842722893s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.464019775s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964178085s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.586074829s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.842125893s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.464019775s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964164734s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.586074829s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.841720581s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.463760376s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805453300s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.427513123s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.841700554s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.463760376s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.963835716s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.585899353s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805437088s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.427513123s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.963785172s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.585899353s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.841764450s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.463920593s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.963874817s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.586051941s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.841750145s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.463920593s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.963854790s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.586051941s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963876724s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.586135864s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963862419s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.586135864s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805502892s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.427825928s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805483818s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.427825928s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963780403s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.586143494s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963767052s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.586143494s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.963693619s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.586097717s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.841216087s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.463653564s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.841199875s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.463653564s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805352211s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.427825928s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805327415s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.427825928s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.963540077s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.586105347s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.963521957s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.586105347s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.963436127s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.586097717s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805032730s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.427772522s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805012703s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.427772522s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963334084s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.586196899s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963315964s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.586196899s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.840720177s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.463653564s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.840652466s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.463584900s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.840701103s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.463653564s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.840610504s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.463584900s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.804722786s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.427787781s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.804704666s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.427787781s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963040352s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.586235046s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.840336800s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.463546753s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.840317726s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.463546753s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963001251s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.586235046s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[10.4( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.962836266s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.586204529s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.962815285s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.586204529s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.839954376s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.463493347s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[10.7( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.f( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.839933395s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.463493347s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962536812s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.586257935s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.839711189s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.463462830s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.839688301s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.463462830s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962490082s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.586257935s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.839567184s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.463455200s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.839548111s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.463455200s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.839348793s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.463424683s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.839327812s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.463424683s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962188721s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.586311340s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962149620s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.586311340s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.808356285s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.432556152s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.808341980s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.432556152s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=48/49 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.839016914s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.463310242s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=48/49 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.839001656s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.463310242s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.961998940s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.586364746s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961980820s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.586380005s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[10.17( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961966515s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.586380005s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.961959839s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.586364746s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.837959290s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.462448120s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965286255s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.585868835s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.837945938s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.462448120s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807893753s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.432502747s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.837690353s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.462371826s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961264610s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.585868835s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.837667465s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.462371826s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807815552s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.432502747s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.838644981s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.463417053s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.838631630s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.463417053s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.837488174s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.462356567s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.837472916s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.462356567s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.961420059s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.586380005s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.961407661s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.586380005s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961398125s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.586448669s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961356163s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.586448669s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.837285995s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.462440491s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807358742s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.432540894s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.837268829s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.462440491s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[8.15( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807338715s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.432540894s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=48/49 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.837009430s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.462356567s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807222366s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.432571411s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.807198524s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.432571411s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[10.d( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.961041451s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.586479187s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.961028099s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.586479187s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[10.e( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962990761s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.588470459s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962970734s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.588470459s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[10.1( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[10.1e( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.836741447s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.462280273s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.836726189s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.462280273s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.836604118s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.462265015s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=0/0 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[10.16( empty local-lis/les=0/0 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.836590767s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.462265015s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.800135612s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749450684s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.800082207s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749450684s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962820053s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.588500977s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799942970s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749382019s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799921989s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749382019s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799922943s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749450684s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962800026s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.588500977s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799897194s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749450684s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799810410s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749389648s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799794197s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749389648s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.806824684s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.432571411s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799695015s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749382019s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799775124s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749473572s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.806812286s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.432571411s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799682617s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749382019s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799749374s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749473572s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799464226s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749237061s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.836449623s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.462272644s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799448967s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749237061s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.821110725s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.770980835s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.962639809s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.588478088s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799267769s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749252319s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.821095467s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.770980835s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.1a( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.836428642s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.462272644s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799235344s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749252319s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.962621689s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.588478088s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.820754051s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.770957947s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799095154s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749305725s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=48/49 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.836444855s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.462356567s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.820734024s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.770957947s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799067497s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749305725s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.799028397s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749382019s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.798680305s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749130249s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962601662s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.588562012s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.798650742s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749130249s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.820442200s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.771041870s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962584496s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.588562012s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.820397377s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.771041870s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.820164680s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.770851135s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.806736946s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.432746887s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.820124626s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.770851135s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.806718826s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.432746887s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962472916s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.588577271s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962456703s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.588577271s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.836015701s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.462165833s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.797673225s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.748657227s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.798352242s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749382019s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.797615051s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.748657227s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.797937393s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.749046326s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.797446251s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.748619080s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.819605827s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.770820618s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.797418594s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.748619080s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.819580078s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.770820618s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.819589615s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.770904541s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.792072296s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.743392944s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.819573402s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.770904541s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.792040825s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.743392944s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.791638374s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.743141174s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.835995674s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.462165833s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.819241524s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.770812988s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.791601181s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.743141174s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.962376595s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.588577271s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.962363243s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.588577271s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963003159s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.589279175s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.819210052s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.770812988s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[8.11( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962985039s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.589279175s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.835598946s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.461914062s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.835583687s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.461914062s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.791353226s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.743118286s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.835470200s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.461891174s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.962819099s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.589241028s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.835454941s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.461891174s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.962798119s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.589241028s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.806269646s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.432823181s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.806252480s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.432823181s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962677956s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.589347839s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962663651s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.589347839s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.806079865s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.432830811s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.806063652s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.432830811s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.835082054s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.461990356s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.835066795s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.461990356s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962411880s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 90.589454651s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962387085s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.589454651s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.835164070s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active pruub 86.462310791s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805657387s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 91.432830811s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=48/49 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.835141182s) [0] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.462310791s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.962318420s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 90.589431763s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.825471878s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active pruub 86.452728271s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=48/49 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53 pruub=8.825457573s) [2] r=-1 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 86.452728271s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53 pruub=12.962203979s) [0] r=-1 lpr=53 pi=[50,53)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 90.589431763s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=44/46 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53 pruub=13.805642128s) [0] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.432830811s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.797885895s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.749046326s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.811489105s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.763351440s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.791311264s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.743118286s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53 pruub=15.811365128s) [1] r=-1 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.763351440s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[8.d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.790921211s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.743064880s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.790887833s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.743064880s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.790835381s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.743080139s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.790804863s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.743080139s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.790663719s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 94.743392944s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=12.790642738s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.743392944s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.14( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.10( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.e( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[8.12( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[8.2( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[8.1b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[6.3( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.b( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[6.1( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.9( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[8.4( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.6( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.1a( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.18( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.1f( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[6.7( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[8.1d( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[6.9( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[6.5( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=0/0 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=0/0 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=0/0 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 53 pg[8.1c( empty local-lis/les=0/0 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct  2 03:49:44 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Oct  2 03:49:44 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[10.12( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[2.1b( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[10.14( v 51'17 lc 37'13 (0'0,51'17] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[10.10( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[10.11( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.14( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[10.16( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[8.15( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[8.12( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[8.d( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[8.2( v 33'4 (0'0,33'4] local-lis/les=53/54 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[8.4( v 33'4 (0'0,33'4] local-lis/les=53/54 n=1 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[8.1b( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[8.1c( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[2.6( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [2] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[2.7( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[10.f( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[2.4( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[10.13( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[2.9( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[2.a( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[10.2( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[2.5( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=38'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[6.5( v 38'39 lc 34'11 (0'0,38'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[10.b( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[2.3( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 54 pg[8.11( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [2] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[2.d( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[6.7( v 38'39 lc 34'21 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[10.6( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[10.19( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[2.15( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[2.17( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.8( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.b( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[10.1( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.c( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.e( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[10.e( v 51'17 lc 37'7 (0'0,51'17] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.f( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[10.d( v 51'17 lc 37'9 (0'0,51'17] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[10.17( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.2( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.f( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.1c( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.9( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[10.4( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.1d( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[10.15( v 51'17 lc 37'5 (0'0,51'17] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[10.8( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.b( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[10.9( v 51'17 lc 37'15 (0'0,51'17] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.10( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[10.7( v 37'16 (0'0,37'16] local-lis/les=53/54 n=1 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.11( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.18( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.13( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=53/54 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 54 pg[10.1a( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [1] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.6( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[10.1e( v 37'16 (0'0,37'16] local-lis/les=53/54 n=0 ec=50/36 lis/c=50/50 les/c/f=51/51/0 sis=53) [0] r=0 lpr=53 pi=[50,53)/1 crt=37'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[2.16( empty local-lis/les=53/54 n=0 ec=42/13 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.1d( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.1f( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=48/23 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.18( v 33'4 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=44/15 lis/c=44/44 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 54 pg[8.1a( v 33'4 lc 0'0 (0'0,33'4] local-lis/les=53/54 n=0 ec=48/32 lis/c=48/48 les/c/f=49/49/0 sis=53) [0] r=0 lpr=53 pi=[48,53)/1 crt=33'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct  2 03:49:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  2 03:49:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct  2 03:49:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  2 03:49:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  2 03:49:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct  2 03:49:46 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct  2 03:49:46 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  2 03:49:46 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  2 03:49:46 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  2 03:49:46 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  2 03:49:46 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 55 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=13.784236908s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.770759583s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:46 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 55 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=13.784202576s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.770759583s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:46 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 55 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=13.783961296s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.770843506s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:46 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 55 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=13.783870697s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.770843506s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:46 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 55 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=13.783834457s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.770889282s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:46 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 55 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=13.783822060s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 97.770904541s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:46 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 55 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=13.783802032s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.770904541s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:46 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 55 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=13.783784866s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.770889282s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 55 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[50,54)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:46 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct  2 03:49:46 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct  2 03:49:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct  2 03:49:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct  2 03:49:47 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56 pruub=15.014286041s) [0] async=[0] r=-1 lpr=56 pi=[50,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.677818298s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56 pruub=15.013343811s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.677818298s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56 pruub=15.013238907s) [0] async=[0] r=-1 lpr=56 pi=[50,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.677825928s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56 pruub=15.013153076s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.677825928s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56 pruub=15.013868332s) [0] async=[0] r=-1 lpr=56 pi=[50,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.678596497s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56 pruub=15.013793945s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.678596497s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56 pruub=15.012847900s) [0] async=[0] r=-1 lpr=56 pi=[50,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.677856445s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56 pruub=15.012856483s) [0] async=[0] r=-1 lpr=56 pi=[50,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.677940369s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56 pruub=15.012763023s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.677940369s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56 pruub=15.012298584s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.677856445s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=55/56 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[6.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=55/56 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[6.e( v 38'39 lc 34'19 (0'0,38'39] local-lis/les=55/56 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:47 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 56 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:47 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 56 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:47 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 56 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:47 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 56 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:47 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 56 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:47 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 56 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:47 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 56 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:47 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 56 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:47 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 56 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:47 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 56 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:47 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 56 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:47 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct  2 03:49:47 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct  2 03:49:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v117: 305 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 5 active+remapped, 4 peering, 285 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 64/213 objects misplaced (30.047%); 469 B/s, 2 keys/s, 8 objects/s recovering
Oct  2 03:49:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct  2 03:49:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct  2 03:49:48 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.006646156s) [0] async=[0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.678817749s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.006368637s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.678817749s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.005725861s) [0] async=[0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.678993225s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=54/55 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.005679131s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.678993225s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.004648209s) [0] async=[0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.678604126s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.004604340s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.678604126s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.004679680s) [0] async=[0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.678840637s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.004649162s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.678840637s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.004390717s) [0] async=[0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.678741455s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.004357338s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.678741455s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.004215240s) [0] async=[0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.679008484s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.004051208s) [0] async=[0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.678848267s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.003993034s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.678848267s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.004135132s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.679008484s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.003344536s) [0] async=[0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.678596497s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.003285408s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.678596497s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.003442764s) [0] async=[0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.678771973s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.003437996s) [0] async=[0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.678825378s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.003383636s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.678771973s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.003382683s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.678825378s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.003119469s) [0] async=[0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 95.678718567s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 57 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=54/55 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57 pruub=14.002809525s) [0] r=-1 lpr=57 pi=[50,57)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.678718567s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.1d( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.1b( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 57 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.2 deep-scrub starts
Oct  2 03:49:48 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.2 deep-scrub ok
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct  2 03:49:48 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct  2 03:49:48 np0005465604 systemd[76077]: Starting Mark boot as successful...
Oct  2 03:49:48 np0005465604 systemd[76077]: Finished Mark boot as successful.
Oct  2 03:49:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct  2 03:49:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct  2 03:49:49 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 58 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 58 pg[9.b( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 58 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 58 pg[9.11( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 58 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 58 pg[9.5( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 58 pg[9.9( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 58 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 58 pg[9.d( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 58 pg[9.3( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 58 pg[9.1( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=54/50 les/c/f=55/52/0 sis=57) [0] r=0 lpr=57 pi=[50,57)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:49 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct  2 03:49:49 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Oct  2 03:49:49 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Oct  2 03:49:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 9 peering, 285 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 64/213 objects misplaced (30.047%); 710 B/s, 2 keys/s, 10 objects/s recovering
Oct  2 03:49:50 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct  2 03:49:50 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct  2 03:49:51 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct  2 03:49:51 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct  2 03:49:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 9 peering, 285 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 64/213 objects misplaced (30.047%); 495 B/s, 1 keys/s, 7 objects/s recovering
Oct  2 03:49:52 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.b scrub starts
Oct  2 03:49:52 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.b scrub ok
Oct  2 03:49:52 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Oct  2 03:49:52 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Oct  2 03:49:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 443 B/s, 11 objects/s recovering
Oct  2 03:49:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct  2 03:49:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  2 03:49:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct  2 03:49:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  2 03:49:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct  2 03:49:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  2 03:49:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  2 03:49:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct  2 03:49:54 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct  2 03:49:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  2 03:49:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  2 03:49:54 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 59 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=14.556351662s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=38'39 mlcod 38'39 active pruub 102.648658752s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:54 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 59 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=14.556101799s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 102.648658752s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:54 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 59 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=14.555418968s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=38'39 mlcod 38'39 active pruub 102.648666382s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:54 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 59 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=14.555357933s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 102.648666382s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:54 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:54 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 59 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=14.563267708s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=38'39 mlcod 38'39 active pruub 102.657310486s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:54 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 59 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=14.563233376s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 102.657310486s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:54 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 59 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=14.562925339s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=38'39 mlcod 38'39 active pruub 102.657180786s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:54 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:54 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 59 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=14.562871933s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 102.657180786s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:54 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:54 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct  2 03:49:55 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  2 03:49:55 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  2 03:49:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct  2 03:49:55 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct  2 03:49:55 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 60 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=59/60 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:55 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 60 pg[6.7( v 38'39 lc 34'21 (0'0,38'39] local-lis/les=59/60 n=1 ec=46/21 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:55 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 60 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=59/60 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:55 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 60 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=59/60 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=38'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:55 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct  2 03:49:55 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct  2 03:49:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v125: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 237 B/s, 9 objects/s recovering
Oct  2 03:49:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct  2 03:49:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  2 03:49:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct  2 03:49:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  2 03:49:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct  2 03:49:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  2 03:49:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  2 03:49:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  2 03:49:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  2 03:49:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct  2 03:49:56 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct  2 03:49:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:49:57 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 61 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=61 pruub=10.734261513s) [1] r=-1 lpr=61 pi=[46,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 105.771118164s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:57 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 61 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=46/48 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=61 pruub=10.734114647s) [1] r=-1 lpr=61 pi=[46,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.771118164s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:57 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 61 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=61 pruub=10.733708382s) [1] r=-1 lpr=61 pi=[46,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 105.771476746s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:57 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 61 pg[6.4( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[46,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:57 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 61 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=61 pruub=10.733651161s) [1] r=-1 lpr=61 pi=[46,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.771476746s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:57 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 61 pg[6.c( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[46,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct  2 03:49:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct  2 03:49:57 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct  2 03:49:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  2 03:49:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  2 03:49:57 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 62 pg[6.4( v 38'39 lc 34'15 (0'0,38'39] local-lis/les=61/62 n=2 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[46,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:57 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 62 pg[6.c( v 38'39 lc 34'17 (0'0,38'39] local-lis/les=61/62 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[46,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:57 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct  2 03:49:57 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct  2 03:49:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v128: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:49:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct  2 03:49:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  2 03:49:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct  2 03:49:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  2 03:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:49:57 np0005465604 ceph-mgr[74774]: [progress INFO root] Completed event e4fc1e57-b3aa-432f-86a8-d372c317f74f (Global Recovery Event) in 25 seconds
Oct  2 03:49:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct  2 03:49:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  2 03:49:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  2 03:49:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct  2 03:49:58 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct  2 03:49:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  2 03:49:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  2 03:49:58 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 63 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=10.410144806s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=38'39 mlcod 38'39 active pruub 102.648536682s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:58 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 63 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=10.410108566s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=38'39 mlcod 38'39 active pruub 102.648658752s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:49:58 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 63 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=10.409929276s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 102.648536682s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:58 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 63 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=53/54 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=10.409875870s) [0] r=-1 lpr=63 pi=[53,63)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 102.648658752s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:49:58 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 63 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:58 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 63 pg[6.5( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:49:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct  2 03:49:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct  2 03:49:59 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct  2 03:49:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  2 03:49:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  2 03:49:59 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 64 pg[6.5( v 38'39 lc 34'11 (0'0,38'39] local-lis/les=63/64 n=2 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:59 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 64 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=63/64 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [0] r=0 lpr=63 pi=[53,63)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:49:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Oct  2 03:49:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct  2 03:49:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  2 03:49:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct  2 03:49:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  2 03:50:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct  2 03:50:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  2 03:50:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  2 03:50:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct  2 03:50:00 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct  2 03:50:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  2 03:50:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  2 03:50:00 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 65 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65 pruub=12.747048378s) [2] r=-1 lpr=65 pi=[50,65)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 106.586051941s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:00 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 65 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65 pruub=12.747002602s) [2] r=-1 lpr=65 pi=[50,65)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 106.586051941s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:00 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65) [2] r=0 lpr=65 pi=[50,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:00 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 65 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65 pruub=12.746165276s) [2] r=-1 lpr=65 pi=[50,65)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 106.587203979s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:00 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 65 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65 pruub=12.746102333s) [2] r=-1 lpr=65 pi=[50,65)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 106.587203979s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:00 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 65 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65 pruub=12.746071815s) [2] r=-1 lpr=65 pi=[50,65)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 106.587356567s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:00 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 65 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65 pruub=12.745681763s) [2] r=-1 lpr=65 pi=[50,65)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 106.587356567s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:00 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 65 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65 pruub=12.747219086s) [2] r=-1 lpr=65 pi=[50,65)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 106.588920593s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:00 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 65 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65 pruub=12.747053146s) [2] r=-1 lpr=65 pi=[50,65)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 106.588920593s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:00 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65) [2] r=0 lpr=65 pi=[50,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:00 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65) [2] r=0 lpr=65 pi=[50,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:00 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=65) [2] r=0 lpr=65 pi=[50,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct  2 03:50:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct  2 03:50:01 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct  2 03:50:01 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[50,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:01 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[50,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:01 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[50,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:01 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[50,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:01 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[50,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:01 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[50,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:01 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[50,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:01 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[50,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  2 03:50:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  2 03:50:01 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 66 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:01 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 66 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:01 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 66 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:01 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 66 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:01 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 66 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:01 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 66 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:01 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 66 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:01 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 66 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:01 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.10 deep-scrub starts
Oct  2 03:50:01 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.10 deep-scrub ok
Oct  2 03:50:01 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Oct  2 03:50:01 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Oct  2 03:50:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Oct  2 03:50:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct  2 03:50:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  2 03:50:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct  2 03:50:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  2 03:50:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct  2 03:50:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  2 03:50:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  2 03:50:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct  2 03:50:02 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct  2 03:50:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  2 03:50:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  2 03:50:02 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 67 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:02 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 67 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:02 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 67 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:02 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 67 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[50,66)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:02 np0005465604 ceph-mgr[74774]: [progress INFO root] Writing back 16 completed events
Oct  2 03:50:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  2 03:50:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:50:02 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 67 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=10.182432175s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=40'385 mlcod 0'0 active pruub 111.016563416s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:02 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 67 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=10.182224274s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=40'385 mlcod 0'0 active pruub 111.016502380s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:02 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 67 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=10.182320595s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 111.016563416s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:02 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 67 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=10.182105064s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 111.016502380s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:02 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 67 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=10.182380676s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=40'385 mlcod 0'0 active pruub 111.016807556s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:02 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 67 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=10.182318687s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 111.016807556s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:02 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 67 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=67 pruub=9.175710678s) [2] r=-1 lpr=67 pi=[56,67)/1 crt=40'385 mlcod 0'0 active pruub 110.010421753s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:02 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 67 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=67 pruub=9.175667763s) [2] r=-1 lpr=67 pi=[56,67)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 110.010421753s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:02 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:02 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:02 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=67) [2] r=0 lpr=67 pi=[57,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:02 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=67) [2] r=0 lpr=67 pi=[56,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct  2 03:50:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  2 03:50:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  2 03:50:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:50:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct  2 03:50:03 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct  2 03:50:03 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 68 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68 pruub=15.013597488s) [2] async=[2] r=-1 lpr=68 pi=[50,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 111.897064209s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 68 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68 pruub=15.013379097s) [2] r=-1 lpr=68 pi=[50,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 111.897064209s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:03 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 68 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68 pruub=15.013083458s) [2] async=[2] r=-1 lpr=68 pi=[50,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 111.897094727s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 68 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68 pruub=15.012871742s) [2] async=[2] r=-1 lpr=68 pi=[50,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 111.897026062s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 68 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68 pruub=15.012727737s) [2] r=-1 lpr=68 pi=[50,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 111.897026062s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:03 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 68 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68 pruub=15.007982254s) [2] async=[2] r=-1 lpr=68 pi=[50,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 111.892486572s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 68 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=5 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68 pruub=15.007892609s) [2] r=-1 lpr=68 pi=[50,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 111.892486572s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:03 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 68 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=66/67 n=6 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68 pruub=15.012310028s) [2] r=-1 lpr=68 pi=[50,68)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 111.897094727s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[57,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[57,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[57,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[57,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[57,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[57,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[56,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[56,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:03 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 68 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 68 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=68) [2]/[0] r=0 lpr=68 pi=[56,68)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 68 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:03 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 68 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=68) [2]/[0] r=0 lpr=68 pi=[56,68)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:03 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 68 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 68 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=57/58 n=6 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:03 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 68 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:03 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 68 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct  2 03:50:03 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct  2 03:50:03 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct  2 03:50:03 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct  2 03:50:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v137: 305 pgs: 4 remapped+peering, 301 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 430 B/s, 2 objects/s recovering
Oct  2 03:50:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct  2 03:50:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct  2 03:50:04 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct  2 03:50:04 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 69 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[56,68)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:04 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 69 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:04 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 69 pg[9.e( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:04 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 69 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:04 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 69 pg[9.6( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:04 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 69 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=66/50 les/c/f=67/52/0 sis=68) [2] r=0 lpr=68 pi=[50,68)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:04 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 69 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:04 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 69 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=68) [2]/[0] async=[2] r=0 lpr=68 pi=[57,68)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:04 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct  2 03:50:04 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct  2 03:50:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct  2 03:50:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct  2 03:50:05 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct  2 03:50:05 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 70 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70 pruub=15.004016876s) [2] async=[2] r=-1 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 40'385 active pruub 118.230529785s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:05 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 70 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70 pruub=14.999955177s) [2] async=[2] r=-1 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 40'385 active pruub 118.226509094s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:05 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 70 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70 pruub=15.003919601s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 118.230529785s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:05 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 70 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70 pruub=15.003870964s) [2] async=[2] r=-1 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 40'385 active pruub 118.230644226s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:05 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 70 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=6 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70 pruub=15.003823280s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 118.230644226s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:05 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 70 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/56 les/c/f=69/57/0 sis=70 pruub=14.999426842s) [2] async=[2] r=-1 lpr=70 pi=[56,70)/1 crt=40'385 mlcod 40'385 active pruub 118.226501465s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:05 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 70 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/56 les/c/f=69/57/0 sis=70 pruub=14.999383926s) [2] r=-1 lpr=70 pi=[56,70)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 118.226501465s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:05 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 70 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70 pruub=14.999895096s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 118.226509094s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:05 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 70 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:05 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 70 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:05 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 70 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=68/56 les/c/f=69/57/0 sis=70) [2] r=0 lpr=70 pi=[56,70)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:05 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 70 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:05 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 70 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:05 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 70 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:05 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 70 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=68/56 les/c/f=69/57/0 sis=70) [2] r=0 lpr=70 pi=[56,70)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:05 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 70 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:05 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct  2 03:50:05 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct  2 03:50:05 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct  2 03:50:05 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct  2 03:50:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 4 remapped+peering, 301 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 430 B/s, 2 objects/s recovering
Oct  2 03:50:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct  2 03:50:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct  2 03:50:06 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct  2 03:50:06 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 71 pg[9.17( v 40'385 (0'0,40'385] local-lis/les=70/71 n=5 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:06 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 71 pg[9.f( v 40'385 (0'0,40'385] local-lis/les=70/71 n=6 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:06 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 71 pg[9.7( v 40'385 (0'0,40'385] local-lis/les=70/71 n=6 ec=50/34 lis/c=68/57 les/c/f=69/58/0 sis=70) [2] r=0 lpr=70 pi=[57,70)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:06 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 71 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=70/71 n=5 ec=50/34 lis/c=68/56 les/c/f=69/57/0 sis=70) [2] r=0 lpr=70 pi=[56,70)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:06 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct  2 03:50:06 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct  2 03:50:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:07 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct  2 03:50:07 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct  2 03:50:07 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct  2 03:50:07 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct  2 03:50:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 171 B/s, 8 objects/s recovering
Oct  2 03:50:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct  2 03:50:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  2 03:50:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct  2 03:50:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  2 03:50:08 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct  2 03:50:08 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct  2 03:50:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct  2 03:50:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  2 03:50:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  2 03:50:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct  2 03:50:08 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct  2 03:50:08 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 72 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=72 pruub=15.474835396s) [2] r=-1 lpr=72 pi=[46,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 121.771659851s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:08 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 72 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=46/48 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=72 pruub=15.474776268s) [2] r=-1 lpr=72 pi=[46,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 121.771659851s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:08 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  2 03:50:08 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  2 03:50:08 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 72 pg[6.8( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=72) [2] r=0 lpr=72 pi=[46,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:08 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 72 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=72 pruub=12.131629944s) [2] r=-1 lpr=72 pi=[50,72)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 114.587417603s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:08 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 72 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=72 pruub=12.131387711s) [2] r=-1 lpr=72 pi=[50,72)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 114.587417603s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:08 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 72 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=72 pruub=12.132633209s) [2] r=-1 lpr=72 pi=[50,72)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 114.588966370s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:08 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 72 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=72 pruub=12.132251740s) [2] r=-1 lpr=72 pi=[50,72)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 114.588966370s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:08 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=72) [2] r=0 lpr=72 pi=[50,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:08 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=72) [2] r=0 lpr=72 pi=[50,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:09 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct  2 03:50:09 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct  2 03:50:09 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct  2 03:50:09 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct  2 03:50:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct  2 03:50:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct  2 03:50:09 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct  2 03:50:09 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:09 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 73 pg[9.8( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:09 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  2 03:50:09 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  2 03:50:09 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:09 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 73 pg[9.18( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:09 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 73 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=73) [2]/[1] r=0 lpr=73 pi=[50,73)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:09 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 73 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=73) [2]/[1] r=0 lpr=73 pi=[50,73)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:09 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 73 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=73) [2]/[1] r=0 lpr=73 pi=[50,73)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:09 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 73 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=73) [2]/[1] r=0 lpr=73 pi=[50,73)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:09 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 73 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=72/73 n=1 ec=46/21 lis/c=46/46 les/c/f=48/48/0 sis=72) [2] r=0 lpr=72 pi=[46,72)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 172 B/s, 8 objects/s recovering
Oct  2 03:50:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct  2 03:50:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  2 03:50:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct  2 03:50:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  2 03:50:10 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct  2 03:50:10 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct  2 03:50:10 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct  2 03:50:10 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct  2 03:50:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct  2 03:50:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  2 03:50:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  2 03:50:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct  2 03:50:10 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct  2 03:50:10 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  2 03:50:10 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  2 03:50:10 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 74 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=14.647860527s) [0] r=-1 lpr=74 pi=[53,74)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 118.657432556s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:10 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 74 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74 pruub=14.647820473s) [0] r=-1 lpr=74 pi=[53,74)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 118.657432556s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:10 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 74 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=73/74 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[50,73)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:10 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 74 pg[6.9( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74) [0] r=0 lpr=74 pi=[53,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:10 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 74 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=73/74 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[50,73)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:10 np0005465604 python3[103978]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:50:10 np0005465604 podman[103979]: 2025-10-02 07:50:10.936135741 +0000 UTC m=+0.053168299 container create c5a323df2d3d68d2a4df95657de2d46cc7d0f43121d3d5308d77751b4cb2761c (image=quay.io/ceph/ceph:v18, name=zen_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:50:10 np0005465604 systemd[1]: Started libpod-conmon-c5a323df2d3d68d2a4df95657de2d46cc7d0f43121d3d5308d77751b4cb2761c.scope.
Oct  2 03:50:11 np0005465604 podman[103979]: 2025-10-02 07:50:10.911007017 +0000 UTC m=+0.028039615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:50:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:50:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac66417b8d8f76ae63b7cde04a3a22634de46fd2574918e37b0768d2c58d934/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac66417b8d8f76ae63b7cde04a3a22634de46fd2574918e37b0768d2c58d934/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:11 np0005465604 podman[103979]: 2025-10-02 07:50:11.03179056 +0000 UTC m=+0.148823088 container init c5a323df2d3d68d2a4df95657de2d46cc7d0f43121d3d5308d77751b4cb2761c (image=quay.io/ceph/ceph:v18, name=zen_grothendieck, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:50:11 np0005465604 podman[103979]: 2025-10-02 07:50:11.037809587 +0000 UTC m=+0.154842115 container start c5a323df2d3d68d2a4df95657de2d46cc7d0f43121d3d5308d77751b4cb2761c (image=quay.io/ceph/ceph:v18, name=zen_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 03:50:11 np0005465604 podman[103979]: 2025-10-02 07:50:11.040644444 +0000 UTC m=+0.157676972 container attach c5a323df2d3d68d2a4df95657de2d46cc7d0f43121d3d5308d77751b4cb2761c (image=quay.io/ceph/ceph:v18, name=zen_grothendieck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct  2 03:50:11 np0005465604 zen_grothendieck[103995]: could not fetch user info: no user info saved
Oct  2 03:50:11 np0005465604 systemd[1]: libpod-c5a323df2d3d68d2a4df95657de2d46cc7d0f43121d3d5308d77751b4cb2761c.scope: Deactivated successfully.
Oct  2 03:50:11 np0005465604 podman[103979]: 2025-10-02 07:50:11.229726796 +0000 UTC m=+0.346759324 container died c5a323df2d3d68d2a4df95657de2d46cc7d0f43121d3d5308d77751b4cb2761c (image=quay.io/ceph/ceph:v18, name=zen_grothendieck, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 03:50:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7ac66417b8d8f76ae63b7cde04a3a22634de46fd2574918e37b0768d2c58d934-merged.mount: Deactivated successfully.
Oct  2 03:50:11 np0005465604 podman[103979]: 2025-10-02 07:50:11.268113785 +0000 UTC m=+0.385146313 container remove c5a323df2d3d68d2a4df95657de2d46cc7d0f43121d3d5308d77751b4cb2761c (image=quay.io/ceph/ceph:v18, name=zen_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 03:50:11 np0005465604 systemd[1]: libpod-conmon-c5a323df2d3d68d2a4df95657de2d46cc7d0f43121d3d5308d77751b4cb2761c.scope: Deactivated successfully.
Oct  2 03:50:11 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct  2 03:50:11 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct  2 03:50:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct  2 03:50:11 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  2 03:50:11 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  2 03:50:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct  2 03:50:11 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct  2 03:50:11 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 75 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=73/74 n=6 ec=50/34 lis/c=73/50 les/c/f=74/52/0 sis=75 pruub=14.968125343s) [2] async=[2] r=-1 lpr=75 pi=[50,75)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 120.015449524s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:11 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 75 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=73/74 n=5 ec=50/34 lis/c=73/50 les/c/f=74/52/0 sis=75 pruub=14.970918655s) [2] async=[2] r=-1 lpr=75 pi=[50,75)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 120.018585205s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:11 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 75 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=73/74 n=6 ec=50/34 lis/c=73/50 les/c/f=74/52/0 sis=75 pruub=14.967748642s) [2] r=-1 lpr=75 pi=[50,75)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.015449524s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:11 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 75 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=73/74 n=5 ec=50/34 lis/c=73/50 les/c/f=74/52/0 sis=75 pruub=14.970861435s) [2] r=-1 lpr=75 pi=[50,75)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.018585205s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:11 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 75 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=73/50 les/c/f=74/52/0 sis=75) [2] r=0 lpr=75 pi=[50,75)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:11 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 75 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=73/50 les/c/f=74/52/0 sis=75) [2] r=0 lpr=75 pi=[50,75)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:11 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 75 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=73/50 les/c/f=74/52/0 sis=75) [2] r=0 lpr=75 pi=[50,75)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:11 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 75 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=74/75 n=1 ec=46/21 lis/c=53/53 les/c/f=54/54/0 sis=74) [0] r=0 lpr=74 pi=[53,74)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:11 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 75 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=73/50 les/c/f=74/52/0 sis=75) [2] r=0 lpr=75 pi=[50,75)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:11 np0005465604 python3[104116]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid a52e644f-f702-594c-a648-813e3e0df2b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:50:11 np0005465604 podman[104117]: 2025-10-02 07:50:11.678892559 +0000 UTC m=+0.043746945 container create 968a17defc1294c357bd10c9b8b26ac2f856d61861b8c51c7196bcc838cf87fe (image=quay.io/ceph/ceph:v18, name=affectionate_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 03:50:11 np0005465604 systemd[1]: Started libpod-conmon-968a17defc1294c357bd10c9b8b26ac2f856d61861b8c51c7196bcc838cf87fe.scope.
Oct  2 03:50:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:50:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e041f9efea52bfcfb56aa27c76b7b6b3736c6206d3ccabff81b4f456c3ba41ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e041f9efea52bfcfb56aa27c76b7b6b3736c6206d3ccabff81b4f456c3ba41ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:11 np0005465604 podman[104117]: 2025-10-02 07:50:11.750119218 +0000 UTC m=+0.114973564 container init 968a17defc1294c357bd10c9b8b26ac2f856d61861b8c51c7196bcc838cf87fe (image=quay.io/ceph/ceph:v18, name=affectionate_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 03:50:11 np0005465604 podman[104117]: 2025-10-02 07:50:11.657578467 +0000 UTC m=+0.022432863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  2 03:50:11 np0005465604 podman[104117]: 2025-10-02 07:50:11.755608097 +0000 UTC m=+0.120462463 container start 968a17defc1294c357bd10c9b8b26ac2f856d61861b8c51c7196bcc838cf87fe (image=quay.io/ceph/ceph:v18, name=affectionate_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:50:11 np0005465604 podman[104117]: 2025-10-02 07:50:11.758088292 +0000 UTC m=+0.122942638 container attach 968a17defc1294c357bd10c9b8b26ac2f856d61861b8c51c7196bcc838cf87fe (image=quay.io/ceph/ceph:v18, name=affectionate_blackwell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 03:50:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:50:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct  2 03:50:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  2 03:50:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct  2 03:50:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]: {
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "user_id": "openstack",
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "display_name": "openstack",
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "email": "",
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "suspended": 0,
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "max_buckets": 1000,
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "subusers": [],
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "keys": [
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        {
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:            "user": "openstack",
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:            "access_key": "4L3UODFX6G25PBEQY1SO",
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:            "secret_key": "FwjpW9CwarSPOuQUKnrEw6hDvhM8cLB5AhpU4kMB"
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        }
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    ],
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "swift_keys": [],
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "caps": [],
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "op_mask": "read, write, delete",
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "default_placement": "",
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "default_storage_class": "",
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "placement_tags": [],
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "bucket_quota": {
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        "enabled": false,
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        "check_on_raw": false,
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        "max_size": -1,
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        "max_size_kb": 0,
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        "max_objects": -1
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    },
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "user_quota": {
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        "enabled": false,
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        "check_on_raw": false,
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        "max_size": -1,
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        "max_size_kb": 0,
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:        "max_objects": -1
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    },
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "temp_url_keys": [],
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "type": "rgw",
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]:    "mfa_ids": []
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]: }
Oct  2 03:50:11 np0005465604 affectionate_blackwell[104134]: 
Oct  2 03:50:11 np0005465604 systemd[1]: libpod-968a17defc1294c357bd10c9b8b26ac2f856d61861b8c51c7196bcc838cf87fe.scope: Deactivated successfully.
Oct  2 03:50:11 np0005465604 podman[104117]: 2025-10-02 07:50:11.911603731 +0000 UTC m=+0.276458097 container died 968a17defc1294c357bd10c9b8b26ac2f856d61861b8c51c7196bcc838cf87fe (image=quay.io/ceph/ceph:v18, name=affectionate_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 03:50:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e041f9efea52bfcfb56aa27c76b7b6b3736c6206d3ccabff81b4f456c3ba41ad-merged.mount: Deactivated successfully.
Oct  2 03:50:11 np0005465604 podman[104117]: 2025-10-02 07:50:11.945668172 +0000 UTC m=+0.310522518 container remove 968a17defc1294c357bd10c9b8b26ac2f856d61861b8c51c7196bcc838cf87fe (image=quay.io/ceph/ceph:v18, name=affectionate_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 03:50:11 np0005465604 systemd[1]: libpod-conmon-968a17defc1294c357bd10c9b8b26ac2f856d61861b8c51c7196bcc838cf87fe.scope: Deactivated successfully.
Oct  2 03:50:12 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct  2 03:50:12 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct  2 03:50:12 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct  2 03:50:12 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct  2 03:50:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  2 03:50:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  2 03:50:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct  2 03:50:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  2 03:50:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  2 03:50:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct  2 03:50:12 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct  2 03:50:12 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 76 pg[9.8( v 40'385 (0'0,40'385] local-lis/les=75/76 n=6 ec=50/34 lis/c=73/50 les/c/f=74/52/0 sis=75) [2] r=0 lpr=75 pi=[50,75)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:12 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 76 pg[9.18( v 40'385 (0'0,40'385] local-lis/les=75/76 n=5 ec=50/34 lis/c=73/50 les/c/f=74/52/0 sis=75) [2] r=0 lpr=75 pi=[50,75)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:13 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct  2 03:50:13 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 76 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=46/21 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=13.868204117s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 120.671447754s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:13 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 76 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=46/21 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=13.868151665s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.671447754s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:13 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:13 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct  2 03:50:13 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct  2 03:50:13 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct  2 03:50:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  2 03:50:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  2 03:50:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct  2 03:50:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct  2 03:50:13 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct  2 03:50:13 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 77 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=76/77 n=1 ec=46/21 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v151: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 3 op/s; 54 B/s, 2 objects/s recovering
Oct  2 03:50:14 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct  2 03:50:14 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct  2 03:50:15 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct  2 03:50:15 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct  2 03:50:15 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct  2 03:50:15 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct  2 03:50:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 2 op/s; 40 B/s, 2 objects/s recovering
Oct  2 03:50:16 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct  2 03:50:16 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct  2 03:50:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:17 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct  2 03:50:17 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct  2 03:50:17 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct  2 03:50:17 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct  2 03:50:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 0 B/s wr, 2 op/s; 34 B/s, 1 objects/s recovering
Oct  2 03:50:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct  2 03:50:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  2 03:50:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct  2 03:50:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  2 03:50:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct  2 03:50:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  2 03:50:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  2 03:50:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  2 03:50:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  2 03:50:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct  2 03:50:18 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct  2 03:50:19 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct  2 03:50:19 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct  2 03:50:19 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct  2 03:50:19 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 78 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=46/21 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=15.868014336s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=38'39 mlcod 38'39 active pruub 133.089614868s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:19 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 78 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=46/21 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=15.867219925s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 133.089614868s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:19 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:19 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct  2 03:50:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct  2 03:50:19 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  2 03:50:19 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  2 03:50:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct  2 03:50:19 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct  2 03:50:19 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 79 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=78/79 n=1 ec=46/21 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v156: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:50:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct  2 03:50:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  2 03:50:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct  2 03:50:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  2 03:50:20 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct  2 03:50:20 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct  2 03:50:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct  2 03:50:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  2 03:50:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  2 03:50:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct  2 03:50:20 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct  2 03:50:20 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  2 03:50:20 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  2 03:50:21 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct  2 03:50:21 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct  2 03:50:21 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct  2 03:50:21 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct  2 03:50:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  2 03:50:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  2 03:50:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:50:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct  2 03:50:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  2 03:50:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct  2 03:50:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  2 03:50:21 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 80 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=80 pruub=15.169525146s) [2] r=-1 lpr=80 pi=[50,80)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 130.586975098s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:21 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 80 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=80 pruub=15.168946266s) [2] r=-1 lpr=80 pi=[50,80)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.586975098s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:21 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 80 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=80 pruub=15.171788216s) [2] r=-1 lpr=80 pi=[50,80)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 130.590148926s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:21 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 80 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=80 pruub=15.171757698s) [2] r=-1 lpr=80 pi=[50,80)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.590148926s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:21 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=80) [2] r=0 lpr=80 pi=[50,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:21 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=80) [2] r=0 lpr=80 pi=[50,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:22 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.1e deep-scrub starts
Oct  2 03:50:22 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.1e deep-scrub ok
Oct  2 03:50:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct  2 03:50:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  2 03:50:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  2 03:50:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct  2 03:50:22 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct  2 03:50:22 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 81 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=63/64 n=1 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=81 pruub=8.606334686s) [1] r=-1 lpr=81 pi=[63,81)/1 crt=38'39 mlcod 38'39 active pruub 129.143386841s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:22 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 81 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=63/64 n=1 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=81 pruub=8.606274605s) [1] r=-1 lpr=81 pi=[63,81)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 129.143386841s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  2 03:50:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  2 03:50:22 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[50,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:22 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[50,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:22 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[50,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:22 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[50,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:22 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 81 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=81) [1] r=0 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:22 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 81 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=81) [2]/[1] r=0 lpr=81 pi=[50,81)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:22 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 81 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=50/52 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=81) [2]/[1] r=0 lpr=81 pi=[50,81)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:22 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 81 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=81) [2]/[1] r=0 lpr=81 pi=[50,81)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:22 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 81 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=50/52 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=81) [2]/[1] r=0 lpr=81 pi=[50,81)/1 crt=40'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:23 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct  2 03:50:23 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct  2 03:50:23 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct  2 03:50:23 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct  2 03:50:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct  2 03:50:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  2 03:50:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  2 03:50:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct  2 03:50:23 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct  2 03:50:23 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 82 pg[6.d( v 38'39 lc 34'13 (0'0,38'39] local-lis/les=81/82 n=1 ec=46/21 lis/c=63/63 les/c/f=64/64/0 sis=81) [1] r=0 lpr=81 pi=[63,81)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  2 03:50:24 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 82 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=6 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[50,81)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:24 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 82 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=5 ec=50/34 lis/c=50/50 les/c/f=52/52/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[50,81)/1 crt=40'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct  2 03:50:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct  2 03:50:24 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct  2 03:50:24 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 83 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=81/50 les/c/f=82/52/0 sis=83) [2] r=0 lpr=83 pi=[50,83)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:24 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 83 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=6 ec=50/34 lis/c=81/50 les/c/f=82/52/0 sis=83) [2] r=0 lpr=83 pi=[50,83)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:24 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 83 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=81/50 les/c/f=82/52/0 sis=83) [2] r=0 lpr=83 pi=[50,83)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:24 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 83 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=81/50 les/c/f=82/52/0 sis=83) [2] r=0 lpr=83 pi=[50,83)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:24 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 83 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=5 ec=50/34 lis/c=81/50 les/c/f=82/52/0 sis=83 pruub=15.603665352s) [2] async=[2] r=-1 lpr=83 pi=[50,83)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 133.858016968s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:24 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 83 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=6 ec=50/34 lis/c=81/50 les/c/f=82/52/0 sis=83 pruub=15.603563309s) [2] async=[2] r=-1 lpr=83 pi=[50,83)/1 crt=40'385 lcod 0'0 mlcod 0'0 active pruub 133.857986450s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:24 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 83 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=5 ec=50/34 lis/c=81/50 les/c/f=82/52/0 sis=83 pruub=15.603572845s) [2] r=-1 lpr=83 pi=[50,83)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.858016968s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:24 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 83 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=81/82 n=6 ec=50/34 lis/c=81/50 les/c/f=82/52/0 sis=83 pruub=15.603493690s) [2] r=-1 lpr=83 pi=[50,83)/1 crt=40'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 133.857986450s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:25 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct  2 03:50:25 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct  2 03:50:25 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct  2 03:50:25 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct  2 03:50:25 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct  2 03:50:25 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct  2 03:50:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct  2 03:50:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct  2 03:50:25 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct  2 03:50:25 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 84 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=83/84 n=5 ec=50/34 lis/c=81/50 les/c/f=82/52/0 sis=83) [2] r=0 lpr=83 pi=[50,83)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:25 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 84 pg[9.c( v 40'385 (0'0,40'385] local-lis/les=83/84 n=6 ec=50/34 lis/c=81/50 les/c/f=82/52/0 sis=83) [2] r=0 lpr=83 pi=[50,83)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  2 03:50:26 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.a deep-scrub starts
Oct  2 03:50:26 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.a deep-scrub ok
Oct  2 03:50:26 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Oct  2 03:50:26 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Oct  2 03:50:26 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct  2 03:50:26 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct  2 03:50:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:27 np0005465604 systemd-logind[787]: New session 35 of user zuul.
Oct  2 03:50:27 np0005465604 systemd[1]: Started Session 35 of User zuul.
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:50:27
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Some PGs (0.006557) are unknown; try again later
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 396 B/s wr, 13 op/s; 34 B/s, 2 objects/s recovering
Oct  2 03:50:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct  2 03:50:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  2 03:50:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct  2 03:50:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:50:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:50:28 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct  2 03:50:28 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct  2 03:50:28 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct  2 03:50:28 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct  2 03:50:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct  2 03:50:28 np0005465604 python3.9[104382]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:50:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  2 03:50:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  2 03:50:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  2 03:50:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  2 03:50:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct  2 03:50:28 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct  2 03:50:29 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct  2 03:50:29 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct  2 03:50:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  2 03:50:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  2 03:50:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 341 B/s wr, 11 op/s; 29 B/s, 2 objects/s recovering
Oct  2 03:50:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct  2 03:50:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  2 03:50:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct  2 03:50:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  2 03:50:30 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct  2 03:50:30 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct  2 03:50:30 np0005465604 python3.9[104600]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:50:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct  2 03:50:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  2 03:50:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  2 03:50:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  2 03:50:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  2 03:50:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct  2 03:50:30 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct  2 03:50:31 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 86 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=46/21 lis/c=59/59 les/c/f=60/60/0 sis=86 pruub=12.168185234s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=38'39 mlcod 38'39 active pruub 141.089981079s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:31 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 86 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=46/21 lis/c=59/59 les/c/f=60/60/0 sis=86 pruub=12.168099403s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 141.089981079s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:31 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 86 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=59/59 les/c/f=60/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:31 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct  2 03:50:31 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct  2 03:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct  2 03:50:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  2 03:50:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  2 03:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct  2 03:50:31 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct  2 03:50:31 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 87 pg[6.f( v 38'39 lc 34'1 (0'0,38'39] local-lis/les=86/87 n=1 ec=46/21 lis/c=59/59 les/c/f=60/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 341 B/s wr, 11 op/s; 29 B/s, 2 objects/s recovering
Oct  2 03:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct  2 03:50:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  2 03:50:32 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct  2 03:50:32 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct  2 03:50:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct  2 03:50:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  2 03:50:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct  2 03:50:32 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct  2 03:50:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  2 03:50:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 118 B/s, 0 objects/s recovering
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:50:33 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f6d895bd-f6ce-4435-afcd-86e013456285 does not exist
Oct  2 03:50:33 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 47c757ae-e281-4661-abd6-6d6e5affdf66 does not exist
Oct  2 03:50:33 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev cf1f9bb1-f6c6-4ee4-a734-49e30239b603 does not exist
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:50:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:50:34 np0005465604 podman[104894]: 2025-10-02 07:50:34.422685607 +0000 UTC m=+0.044472779 container create 400a4938d62ba47e8567cbe4f5ad37b159bc165817e0bb27cd799d0e74fcfed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 03:50:34 np0005465604 systemd[1]: Started libpod-conmon-400a4938d62ba47e8567cbe4f5ad37b159bc165817e0bb27cd799d0e74fcfed0.scope.
Oct  2 03:50:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:50:34 np0005465604 podman[104894]: 2025-10-02 07:50:34.49993454 +0000 UTC m=+0.121721722 container init 400a4938d62ba47e8567cbe4f5ad37b159bc165817e0bb27cd799d0e74fcfed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 03:50:34 np0005465604 podman[104894]: 2025-10-02 07:50:34.406440139 +0000 UTC m=+0.028227341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:50:34 np0005465604 podman[104894]: 2025-10-02 07:50:34.505869064 +0000 UTC m=+0.127656236 container start 400a4938d62ba47e8567cbe4f5ad37b159bc165817e0bb27cd799d0e74fcfed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 03:50:34 np0005465604 awesome_dewdney[104911]: 167 167
Oct  2 03:50:34 np0005465604 systemd[1]: libpod-400a4938d62ba47e8567cbe4f5ad37b159bc165817e0bb27cd799d0e74fcfed0.scope: Deactivated successfully.
Oct  2 03:50:34 np0005465604 podman[104894]: 2025-10-02 07:50:34.521050505 +0000 UTC m=+0.142837677 container attach 400a4938d62ba47e8567cbe4f5ad37b159bc165817e0bb27cd799d0e74fcfed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:50:34 np0005465604 podman[104894]: 2025-10-02 07:50:34.521367306 +0000 UTC m=+0.143154478 container died 400a4938d62ba47e8567cbe4f5ad37b159bc165817e0bb27cd799d0e74fcfed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 03:50:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-930090af07e95e30dec8b9d9b1f4e8e7e936a6fff76dee1fc995c6cece1c459c-merged.mount: Deactivated successfully.
Oct  2 03:50:34 np0005465604 podman[104894]: 2025-10-02 07:50:34.553114306 +0000 UTC m=+0.174901478 container remove 400a4938d62ba47e8567cbe4f5ad37b159bc165817e0bb27cd799d0e74fcfed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 03:50:34 np0005465604 systemd[1]: libpod-conmon-400a4938d62ba47e8567cbe4f5ad37b159bc165817e0bb27cd799d0e74fcfed0.scope: Deactivated successfully.
Oct  2 03:50:34 np0005465604 podman[104935]: 2025-10-02 07:50:34.697488805 +0000 UTC m=+0.039090814 container create 73d39c18c4c1e35d3b9f0d8476e5dc0ce276b3bc51ebdcfad3f696128e863390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:50:34 np0005465604 systemd[1]: Started libpod-conmon-73d39c18c4c1e35d3b9f0d8476e5dc0ce276b3bc51ebdcfad3f696128e863390.scope.
Oct  2 03:50:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:50:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6475ba8dd3597c5369f448a80dea224c541c0e738882a77477a494b99e762e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6475ba8dd3597c5369f448a80dea224c541c0e738882a77477a494b99e762e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6475ba8dd3597c5369f448a80dea224c541c0e738882a77477a494b99e762e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6475ba8dd3597c5369f448a80dea224c541c0e738882a77477a494b99e762e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6475ba8dd3597c5369f448a80dea224c541c0e738882a77477a494b99e762e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:34 np0005465604 podman[104935]: 2025-10-02 07:50:34.682740749 +0000 UTC m=+0.024342758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:50:34 np0005465604 podman[104935]: 2025-10-02 07:50:34.78524981 +0000 UTC m=+0.126851869 container init 73d39c18c4c1e35d3b9f0d8476e5dc0ce276b3bc51ebdcfad3f696128e863390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct  2 03:50:34 np0005465604 podman[104935]: 2025-10-02 07:50:34.791496614 +0000 UTC m=+0.133098613 container start 73d39c18c4c1e35d3b9f0d8476e5dc0ce276b3bc51ebdcfad3f696128e863390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 03:50:34 np0005465604 podman[104935]: 2025-10-02 07:50:34.79487932 +0000 UTC m=+0.136481329 container attach 73d39c18c4c1e35d3b9f0d8476e5dc0ce276b3bc51ebdcfad3f696128e863390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:50:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct  2 03:50:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  2 03:50:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct  2 03:50:34 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct  2 03:50:34 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  2 03:50:34 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:50:34 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:50:34 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:50:35 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct  2 03:50:35 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct  2 03:50:35 np0005465604 romantic_raman[104952]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:50:35 np0005465604 romantic_raman[104952]: --> relative data size: 1.0
Oct  2 03:50:35 np0005465604 romantic_raman[104952]: --> All data devices are unavailable
Oct  2 03:50:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 119 B/s, 0 objects/s recovering
Oct  2 03:50:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct  2 03:50:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  2 03:50:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct  2 03:50:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  2 03:50:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct  2 03:50:35 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct  2 03:50:35 np0005465604 systemd[1]: libpod-73d39c18c4c1e35d3b9f0d8476e5dc0ce276b3bc51ebdcfad3f696128e863390.scope: Deactivated successfully.
Oct  2 03:50:35 np0005465604 systemd[1]: libpod-73d39c18c4c1e35d3b9f0d8476e5dc0ce276b3bc51ebdcfad3f696128e863390.scope: Consumed 1.005s CPU time.
Oct  2 03:50:35 np0005465604 podman[104935]: 2025-10-02 07:50:35.854974771 +0000 UTC m=+1.196576820 container died 73d39c18c4c1e35d3b9f0d8476e5dc0ce276b3bc51ebdcfad3f696128e863390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:50:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  2 03:50:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  2 03:50:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-be6475ba8dd3597c5369f448a80dea224c541c0e738882a77477a494b99e762e-merged.mount: Deactivated successfully.
Oct  2 03:50:35 np0005465604 podman[104935]: 2025-10-02 07:50:35.91696175 +0000 UTC m=+1.258563769 container remove 73d39c18c4c1e35d3b9f0d8476e5dc0ce276b3bc51ebdcfad3f696128e863390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 03:50:35 np0005465604 systemd[1]: libpod-conmon-73d39c18c4c1e35d3b9f0d8476e5dc0ce276b3bc51ebdcfad3f696128e863390.scope: Deactivated successfully.
Oct  2 03:50:36 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.17 deep-scrub starts
Oct  2 03:50:36 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.17 deep-scrub ok
Oct  2 03:50:36 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct  2 03:50:36 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct  2 03:50:36 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct  2 03:50:36 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct  2 03:50:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:36 np0005465604 podman[105145]: 2025-10-02 07:50:36.664443273 +0000 UTC m=+0.056134039 container create 38a7983303f7edba20fa77c7dfddafd6a288d2852ac4241c1a17cf647a1fd9ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 03:50:36 np0005465604 systemd[1]: Started libpod-conmon-38a7983303f7edba20fa77c7dfddafd6a288d2852ac4241c1a17cf647a1fd9ac.scope.
Oct  2 03:50:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:50:36 np0005465604 podman[105145]: 2025-10-02 07:50:36.734258761 +0000 UTC m=+0.125949507 container init 38a7983303f7edba20fa77c7dfddafd6a288d2852ac4241c1a17cf647a1fd9ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 03:50:36 np0005465604 podman[105145]: 2025-10-02 07:50:36.641466134 +0000 UTC m=+0.033156880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:50:36 np0005465604 podman[105145]: 2025-10-02 07:50:36.74036464 +0000 UTC m=+0.132055366 container start 38a7983303f7edba20fa77c7dfddafd6a288d2852ac4241c1a17cf647a1fd9ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct  2 03:50:36 np0005465604 podman[105145]: 2025-10-02 07:50:36.743418776 +0000 UTC m=+0.135109522 container attach 38a7983303f7edba20fa77c7dfddafd6a288d2852ac4241c1a17cf647a1fd9ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 03:50:36 np0005465604 festive_shockley[105162]: 167 167
Oct  2 03:50:36 np0005465604 systemd[1]: libpod-38a7983303f7edba20fa77c7dfddafd6a288d2852ac4241c1a17cf647a1fd9ac.scope: Deactivated successfully.
Oct  2 03:50:36 np0005465604 conmon[105162]: conmon 38a7983303f7edba20fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-38a7983303f7edba20fa77c7dfddafd6a288d2852ac4241c1a17cf647a1fd9ac.scope/container/memory.events
Oct  2 03:50:36 np0005465604 podman[105145]: 2025-10-02 07:50:36.746554053 +0000 UTC m=+0.138244789 container died 38a7983303f7edba20fa77c7dfddafd6a288d2852ac4241c1a17cf647a1fd9ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:50:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0056757c166c199484e7107a5cb3b50a05d64ecd1f282f1b08bbafc65bebc638-merged.mount: Deactivated successfully.
Oct  2 03:50:36 np0005465604 podman[105145]: 2025-10-02 07:50:36.782707545 +0000 UTC m=+0.174398281 container remove 38a7983303f7edba20fa77c7dfddafd6a288d2852ac4241c1a17cf647a1fd9ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:50:36 np0005465604 systemd[1]: libpod-conmon-38a7983303f7edba20fa77c7dfddafd6a288d2852ac4241c1a17cf647a1fd9ac.scope: Deactivated successfully.
Oct  2 03:50:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  2 03:50:36 np0005465604 podman[105184]: 2025-10-02 07:50:36.952172845 +0000 UTC m=+0.039721865 container create c474e3f71026df60aefaef1692fef79204b6852c64289b156dacd08a3f92ede0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sinoussi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 03:50:36 np0005465604 systemd[1]: Started libpod-conmon-c474e3f71026df60aefaef1692fef79204b6852c64289b156dacd08a3f92ede0.scope.
Oct  2 03:50:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:50:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5771cad0c4d2b21e1f90743239164d396714e629ab2e2eca469f57eab0cd43cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5771cad0c4d2b21e1f90743239164d396714e629ab2e2eca469f57eab0cd43cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5771cad0c4d2b21e1f90743239164d396714e629ab2e2eca469f57eab0cd43cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5771cad0c4d2b21e1f90743239164d396714e629ab2e2eca469f57eab0cd43cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:37 np0005465604 podman[105184]: 2025-10-02 07:50:36.933102051 +0000 UTC m=+0.020651061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:50:37 np0005465604 podman[105184]: 2025-10-02 07:50:37.038627045 +0000 UTC m=+0.126176075 container init c474e3f71026df60aefaef1692fef79204b6852c64289b156dacd08a3f92ede0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:50:37 np0005465604 podman[105184]: 2025-10-02 07:50:37.044717004 +0000 UTC m=+0.132265994 container start c474e3f71026df60aefaef1692fef79204b6852c64289b156dacd08a3f92ede0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:50:37 np0005465604 podman[105184]: 2025-10-02 07:50:37.05157812 +0000 UTC m=+0.139127160 container attach c474e3f71026df60aefaef1692fef79204b6852c64289b156dacd08a3f92ede0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 03:50:37 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct  2 03:50:37 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct  2 03:50:37 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct  2 03:50:37 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct  2 03:50:37 np0005465604 systemd[1]: session-35.scope: Deactivated successfully.
Oct  2 03:50:37 np0005465604 systemd[1]: session-35.scope: Consumed 8.173s CPU time.
Oct  2 03:50:37 np0005465604 systemd-logind[787]: Session 35 logged out. Waiting for processes to exit.
Oct  2 03:50:37 np0005465604 systemd-logind[787]: Removed session 35.
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]: {
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:    "0": [
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:        {
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "devices": [
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "/dev/loop3"
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            ],
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_name": "ceph_lv0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_size": "21470642176",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "name": "ceph_lv0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "tags": {
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.cluster_name": "ceph",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.crush_device_class": "",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.encrypted": "0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.osd_id": "0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.type": "block",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.vdo": "0"
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            },
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "type": "block",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "vg_name": "ceph_vg0"
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:        }
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:    ],
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:    "1": [
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:        {
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "devices": [
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "/dev/loop4"
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            ],
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_name": "ceph_lv1",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_size": "21470642176",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "name": "ceph_lv1",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "tags": {
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.cluster_name": "ceph",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.crush_device_class": "",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.encrypted": "0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.osd_id": "1",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.type": "block",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.vdo": "0"
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            },
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "type": "block",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "vg_name": "ceph_vg1"
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:        }
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:    ],
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:    "2": [
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:        {
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "devices": [
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "/dev/loop5"
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            ],
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_name": "ceph_lv2",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_size": "21470642176",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "name": "ceph_lv2",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "tags": {
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.cluster_name": "ceph",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.crush_device_class": "",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.encrypted": "0",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.osd_id": "2",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.type": "block",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:                "ceph.vdo": "0"
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            },
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "type": "block",
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:            "vg_name": "ceph_vg2"
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:        }
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]:    ]
Oct  2 03:50:37 np0005465604 stupefied_sinoussi[105200]: }
Oct  2 03:50:37 np0005465604 systemd[1]: libpod-c474e3f71026df60aefaef1692fef79204b6852c64289b156dacd08a3f92ede0.scope: Deactivated successfully.
Oct  2 03:50:37 np0005465604 podman[105184]: 2025-10-02 07:50:37.742551042 +0000 UTC m=+0.830100032 container died c474e3f71026df60aefaef1692fef79204b6852c64289b156dacd08a3f92ede0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:50:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5771cad0c4d2b21e1f90743239164d396714e629ab2e2eca469f57eab0cd43cc-merged.mount: Deactivated successfully.
Oct  2 03:50:37 np0005465604 podman[105184]: 2025-10-02 07:50:37.797565222 +0000 UTC m=+0.885114212 container remove c474e3f71026df60aefaef1692fef79204b6852c64289b156dacd08a3f92ede0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sinoussi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:50:37 np0005465604 systemd[1]: libpod-conmon-c474e3f71026df60aefaef1692fef79204b6852c64289b156dacd08a3f92ede0.scope: Deactivated successfully.
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Oct  2 03:50:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct  2 03:50:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  2 03:50:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct  2 03:50:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  2 03:50:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct  2 03:50:37 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct  2 03:50:37 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 91 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=91 pruub=15.262973785s) [2] r=-1 lpr=91 pi=[57,91)/1 crt=40'385 mlcod 0'0 active pruub 151.014358521s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:37 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 91 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=91 pruub=15.262915611s) [2] r=-1 lpr=91 pi=[57,91)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 151.014358521s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  2 03:50:37 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=91) [2] r=0 lpr=91 pi=[57,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:50:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 03:50:38 np0005465604 podman[105387]: 2025-10-02 07:50:38.431823866 +0000 UTC m=+0.044679535 container create 62f4af94dc38fbe85137853ab550697d0c43c626fe70a8a293dd58723a465911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:50:38 np0005465604 systemd[1]: Started libpod-conmon-62f4af94dc38fbe85137853ab550697d0c43c626fe70a8a293dd58723a465911.scope.
Oct  2 03:50:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:50:38 np0005465604 podman[105387]: 2025-10-02 07:50:38.500008849 +0000 UTC m=+0.112864568 container init 62f4af94dc38fbe85137853ab550697d0c43c626fe70a8a293dd58723a465911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:50:38 np0005465604 podman[105387]: 2025-10-02 07:50:38.41416832 +0000 UTC m=+0.027024009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:50:38 np0005465604 podman[105387]: 2025-10-02 07:50:38.51199083 +0000 UTC m=+0.124846499 container start 62f4af94dc38fbe85137853ab550697d0c43c626fe70a8a293dd58723a465911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:50:38 np0005465604 podman[105387]: 2025-10-02 07:50:38.516034448 +0000 UTC m=+0.128890167 container attach 62f4af94dc38fbe85137853ab550697d0c43c626fe70a8a293dd58723a465911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 03:50:38 np0005465604 agitated_agnesi[105403]: 167 167
Oct  2 03:50:38 np0005465604 systemd[1]: libpod-62f4af94dc38fbe85137853ab550697d0c43c626fe70a8a293dd58723a465911.scope: Deactivated successfully.
Oct  2 03:50:38 np0005465604 podman[105387]: 2025-10-02 07:50:38.517642614 +0000 UTC m=+0.130498293 container died 62f4af94dc38fbe85137853ab550697d0c43c626fe70a8a293dd58723a465911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 03:50:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bfed9b06f8588623fcde0e95fa30c216f390e7d62e2a5f9a89b2fd6cfaf6438f-merged.mount: Deactivated successfully.
Oct  2 03:50:38 np0005465604 podman[105387]: 2025-10-02 07:50:38.569487365 +0000 UTC m=+0.182343044 container remove 62f4af94dc38fbe85137853ab550697d0c43c626fe70a8a293dd58723a465911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_agnesi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 03:50:38 np0005465604 systemd[1]: libpod-conmon-62f4af94dc38fbe85137853ab550697d0c43c626fe70a8a293dd58723a465911.scope: Deactivated successfully.
Oct  2 03:50:38 np0005465604 podman[105426]: 2025-10-02 07:50:38.7787108 +0000 UTC m=+0.069824869 container create 7e946286966084c913a4223003950dc55fca0dd8047a2c59156e89b7d7c6d60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dhawan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:50:38 np0005465604 systemd[1]: Started libpod-conmon-7e946286966084c913a4223003950dc55fca0dd8047a2c59156e89b7d7c6d60f.scope.
Oct  2 03:50:38 np0005465604 podman[105426]: 2025-10-02 07:50:38.752098406 +0000 UTC m=+0.043212565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:50:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:50:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4658e64d814c60428e3c6dd481e0f275e44f88ea5c7d1c6b048ff4979f8bf03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4658e64d814c60428e3c6dd481e0f275e44f88ea5c7d1c6b048ff4979f8bf03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4658e64d814c60428e3c6dd481e0f275e44f88ea5c7d1c6b048ff4979f8bf03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4658e64d814c60428e3c6dd481e0f275e44f88ea5c7d1c6b048ff4979f8bf03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:50:38 np0005465604 podman[105426]: 2025-10-02 07:50:38.875210545 +0000 UTC m=+0.166324604 container init 7e946286966084c913a4223003950dc55fca0dd8047a2c59156e89b7d7c6d60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:50:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct  2 03:50:38 np0005465604 podman[105426]: 2025-10-02 07:50:38.882850028 +0000 UTC m=+0.173964067 container start 7e946286966084c913a4223003950dc55fca0dd8047a2c59156e89b7d7c6d60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dhawan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:50:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct  2 03:50:38 np0005465604 podman[105426]: 2025-10-02 07:50:38.888704558 +0000 UTC m=+0.179818657 container attach 7e946286966084c913a4223003950dc55fca0dd8047a2c59156e89b7d7c6d60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:50:38 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct  2 03:50:38 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[57,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  2 03:50:38 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 92 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=92) [2]/[0] r=-1 lpr=92 pi=[57,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:38 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 92 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=92) [2]/[0] r=0 lpr=92 pi=[57,92)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:38 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 92 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=57/58 n=5 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=92) [2]/[0] r=0 lpr=92 pi=[57,92)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:50:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct  2 03:50:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]: {
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "osd_id": 2,
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "type": "bluestore"
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:    },
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "osd_id": 1,
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "type": "bluestore"
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:    },
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "osd_id": 0,
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:        "type": "bluestore"
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]:    }
Oct  2 03:50:39 np0005465604 peaceful_dhawan[105442]: }
Oct  2 03:50:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct  2 03:50:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  2 03:50:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct  2 03:50:39 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct  2 03:50:39 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  2 03:50:39 np0005465604 systemd[1]: libpod-7e946286966084c913a4223003950dc55fca0dd8047a2c59156e89b7d7c6d60f.scope: Deactivated successfully.
Oct  2 03:50:39 np0005465604 systemd[1]: libpod-7e946286966084c913a4223003950dc55fca0dd8047a2c59156e89b7d7c6d60f.scope: Consumed 1.038s CPU time.
Oct  2 03:50:39 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 93 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=92/93 n=5 ec=50/34 lis/c=57/57 les/c/f=58/58/0 sis=92) [2]/[0] async=[2] r=0 lpr=92 pi=[57,92)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:39 np0005465604 conmon[105442]: conmon 7e946286966084c913a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e946286966084c913a4223003950dc55fca0dd8047a2c59156e89b7d7c6d60f.scope/container/memory.events
Oct  2 03:50:39 np0005465604 podman[105426]: 2025-10-02 07:50:39.927302601 +0000 UTC m=+1.218416670 container died 7e946286966084c913a4223003950dc55fca0dd8047a2c59156e89b7d7c6d60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dhawan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 03:50:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e4658e64d814c60428e3c6dd481e0f275e44f88ea5c7d1c6b048ff4979f8bf03-merged.mount: Deactivated successfully.
Oct  2 03:50:39 np0005465604 podman[105426]: 2025-10-02 07:50:39.990103717 +0000 UTC m=+1.281217766 container remove 7e946286966084c913a4223003950dc55fca0dd8047a2c59156e89b7d7c6d60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dhawan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:50:39 np0005465604 systemd[1]: libpod-conmon-7e946286966084c913a4223003950dc55fca0dd8047a2c59156e89b7d7c6d60f.scope: Deactivated successfully.
Oct  2 03:50:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:50:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:50:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:50:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:50:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c69a1753-c833-4b89-9c5a-c4ff68736432 does not exist
Oct  2 03:50:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 57733d9f-40b3-4ac0-bd9e-013b59619fe0 does not exist
Oct  2 03:50:40 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct  2 03:50:40 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct  2 03:50:40 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct  2 03:50:40 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct  2 03:50:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct  2 03:50:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct  2 03:50:40 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct  2 03:50:40 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 94 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=92/93 n=5 ec=50/34 lis/c=92/57 les/c/f=93/58/0 sis=94 pruub=14.997392654s) [2] async=[2] r=-1 lpr=94 pi=[57,94)/1 crt=40'385 mlcod 40'385 active pruub 153.798202515s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:40 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 94 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=92/93 n=5 ec=50/34 lis/c=92/57 les/c/f=93/58/0 sis=94 pruub=14.997323036s) [2] r=-1 lpr=94 pi=[57,94)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 153.798202515s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  2 03:50:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:50:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:50:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 94 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=92/57 les/c/f=93/58/0 sis=94) [2] r=0 lpr=94 pi=[57,94)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:40 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 94 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=92/57 les/c/f=93/58/0 sis=94) [2] r=0 lpr=94 pi=[57,94)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:41 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct  2 03:50:41 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct  2 03:50:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:50:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct  2 03:50:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  2 03:50:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct  2 03:50:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  2 03:50:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  2 03:50:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct  2 03:50:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct  2 03:50:41 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 95 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=95 pruub=10.179340363s) [1] r=-1 lpr=95 pi=[56,95)/1 crt=40'385 mlcod 0'0 active pruub 150.011566162s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:41 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 95 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=95 pruub=10.179276466s) [1] r=-1 lpr=95 pi=[56,95)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 150.011566162s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:41 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 95 pg[9.13( v 40'385 (0'0,40'385] local-lis/les=94/95 n=5 ec=50/34 lis/c=92/57 les/c/f=93/58/0 sis=94) [2] r=0 lpr=94 pi=[57,94)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:41 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=95) [1] r=0 lpr=95 pi=[56,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:42 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct  2 03:50:42 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct  2 03:50:42 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct  2 03:50:42 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct  2 03:50:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct  2 03:50:42 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  2 03:50:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct  2 03:50:42 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct  2 03:50:42 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[56,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:42 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[56,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:42 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 96 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=0 lpr=96 pi=[56,96)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:42 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 96 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] r=0 lpr=96 pi=[56,96)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:43 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct  2 03:50:43 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct  2 03:50:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  2 03:50:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct  2 03:50:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct  2 03:50:43 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct  2 03:50:44 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 97 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=96/97 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[56,96)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct  2 03:50:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct  2 03:50:45 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct  2 03:50:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 98 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=96/56 les/c/f=97/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:45 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 98 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=96/56 les/c/f=97/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 98 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=96/97 n=5 ec=50/34 lis/c=96/56 les/c/f=97/57/0 sis=98 pruub=15.227589607s) [1] async=[1] r=-1 lpr=98 pi=[56,98)/1 crt=40'385 mlcod 40'385 active pruub 158.110870361s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:45 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 98 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=96/97 n=5 ec=50/34 lis/c=96/56 les/c/f=97/57/0 sis=98 pruub=15.227509499s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 158.110870361s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:45 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct  2 03:50:45 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct  2 03:50:45 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct  2 03:50:45 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct  2 03:50:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  2 03:50:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct  2 03:50:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct  2 03:50:46 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct  2 03:50:46 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 99 pg[9.15( v 40'385 (0'0,40'385] local-lis/les=98/99 n=5 ec=50/34 lis/c=96/56 les/c/f=97/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:47 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct  2 03:50:47 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct  2 03:50:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 210 B/s wr, 7 op/s; 45 B/s, 1 objects/s recovering
Oct  2 03:50:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct  2 03:50:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  2 03:50:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct  2 03:50:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  2 03:50:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct  2 03:50:48 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct  2 03:50:48 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  2 03:50:48 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 100 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=100 pruub=11.808845520s) [0] r=-1 lpr=100 pi=[68,100)/1 crt=40'385 mlcod 0'0 active pruub 148.965972900s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:48 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 100 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=100 pruub=11.808777809s) [0] r=-1 lpr=100 pi=[68,100)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 148.965972900s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:48 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=100) [0] r=0 lpr=100 pi=[68,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct  2 03:50:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  2 03:50:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct  2 03:50:49 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct  2 03:50:49 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 101 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=101) [0]/[2] r=0 lpr=101 pi=[68,101)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:49 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 101 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=101) [0]/[2] r=0 lpr=101 pi=[68,101)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[68,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:49 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[68,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:49 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct  2 03:50:49 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct  2 03:50:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 212 B/s wr, 7 op/s; 45 B/s, 1 objects/s recovering
Oct  2 03:50:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct  2 03:50:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  2 03:50:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct  2 03:50:50 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  2 03:50:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  2 03:50:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct  2 03:50:50 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct  2 03:50:50 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 102 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=101/102 n=5 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[68,101)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:50 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Oct  2 03:50:50 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Oct  2 03:50:50 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct  2 03:50:50 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct  2 03:50:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct  2 03:50:51 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  2 03:50:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct  2 03:50:51 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct  2 03:50:51 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 103 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=101/102 n=5 ec=50/34 lis/c=101/68 les/c/f=102/69/0 sis=103 pruub=14.987048149s) [0] async=[0] r=-1 lpr=103 pi=[68,103)/1 crt=40'385 mlcod 40'385 active pruub 154.680786133s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:51 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 103 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=101/102 n=5 ec=50/34 lis/c=101/68 les/c/f=102/69/0 sis=103 pruub=14.986979485s) [0] r=-1 lpr=103 pi=[68,103)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 154.680786133s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:51 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct  2 03:50:51 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 103 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=101/68 les/c/f=102/69/0 sis=103) [0] r=0 lpr=103 pi=[68,103)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:51 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 103 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=101/68 les/c/f=102/69/0 sis=103) [0] r=0 lpr=103 pi=[68,103)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:51 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct  2 03:50:51 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct  2 03:50:51 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct  2 03:50:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:50:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct  2 03:50:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  2 03:50:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct  2 03:50:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  2 03:50:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct  2 03:50:52 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  2 03:50:52 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct  2 03:50:52 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 104 pg[9.16( v 40'385 (0'0,40'385] local-lis/les=103/104 n=5 ec=50/34 lis/c=101/68 les/c/f=102/69/0 sis=103) [0] r=0 lpr=103 pi=[68,103)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:52 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Oct  2 03:50:52 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Oct  2 03:50:53 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  2 03:50:53 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct  2 03:50:53 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct  2 03:50:53 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct  2 03:50:53 np0005465604 systemd-logind[787]: New session 36 of user zuul.
Oct  2 03:50:53 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct  2 03:50:53 np0005465604 systemd[1]: Started Session 36 of User zuul.
Oct  2 03:50:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 0 objects/s recovering
Oct  2 03:50:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct  2 03:50:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  2 03:50:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct  2 03:50:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  2 03:50:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  2 03:50:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct  2 03:50:54 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct  2 03:50:54 np0005465604 python3.9[105692]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  2 03:50:55 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  2 03:50:55 np0005465604 python3.9[105866]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:50:55 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct  2 03:50:55 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct  2 03:50:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Oct  2 03:50:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct  2 03:50:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  2 03:50:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct  2 03:50:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  2 03:50:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  2 03:50:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct  2 03:50:56 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct  2 03:50:56 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 105 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=105 pruub=11.988772392s) [2] r=-1 lpr=105 pi=[56,105)/1 crt=40'385 mlcod 0'0 active pruub 166.012054443s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:56 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 106 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=105 pruub=11.988674164s) [2] r=-1 lpr=105 pi=[56,105)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 166.012054443s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:56 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 106 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=105) [2] r=0 lpr=106 pi=[56,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:50:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct  2 03:50:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct  2 03:50:56 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct  2 03:50:56 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 107 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=107) [2]/[0] r=0 lpr=107 pi=[56,107)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:56 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 107 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=56/57 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=107) [2]/[0] r=0 lpr=107 pi=[56,107)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:56 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[56,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:56 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[56,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:56 np0005465604 python3.9[106022]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:50:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  2 03:50:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct  2 03:50:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct  2 03:50:57 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct  2 03:50:57 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct  2 03:50:57 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct  2 03:50:57 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 108 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=107/108 n=5 ec=50/34 lis/c=56/56 les/c/f=57/57/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[56,107)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:57 np0005465604 python3.9[106175]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:50:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:50:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct  2 03:50:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  2 03:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:50:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct  2 03:50:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  2 03:50:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct  2 03:50:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  2 03:50:58 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 109 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=107/108 n=5 ec=50/34 lis/c=107/56 les/c/f=108/57/0 sis=109 pruub=15.001475334s) [2] async=[2] r=-1 lpr=109 pi=[56,109)/1 crt=40'385 mlcod 40'385 active pruub 171.457031250s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:58 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 109 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=107/108 n=5 ec=50/34 lis/c=107/56 les/c/f=108/57/0 sis=109 pruub=15.001364708s) [2] r=-1 lpr=109 pi=[56,109)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 171.457031250s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:50:58 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct  2 03:50:58 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 109 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=107/56 les/c/f=108/57/0 sis=109) [2] r=0 lpr=109 pi=[56,109)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:50:58 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 109 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=107/56 les/c/f=108/57/0 sis=109) [2] r=0 lpr=109 pi=[56,109)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:50:58 np0005465604 python3.9[106329]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:50:59 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct  2 03:50:59 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct  2 03:50:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct  2 03:50:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct  2 03:50:59 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct  2 03:50:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  2 03:50:59 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 110 pg[9.19( v 40'385 (0'0,40'385] local-lis/les=109/110 n=5 ec=50/34 lis/c=107/56 les/c/f=108/57/0 sis=109) [2] r=0 lpr=109 pi=[56,109)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:50:59 np0005465604 python3.9[106479]: ansible-ansible.builtin.service_facts Invoked
Oct  2 03:50:59 np0005465604 network[106496]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 03:50:59 np0005465604 network[106497]: 'network-scripts' will be removed from distribution in near future.
Oct  2 03:50:59 np0005465604 network[106498]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 03:50:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Oct  2 03:50:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct  2 03:50:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  2 03:51:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct  2 03:51:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  2 03:51:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct  2 03:51:00 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct  2 03:51:00 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 111 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=83/84 n=5 ec=50/34 lis/c=83/83 les/c/f=84/84/0 sis=111 pruub=13.110097885s) [0] r=-1 lpr=111 pi=[83,111)/1 crt=40'385 mlcod 0'0 active pruub 162.327850342s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  2 03:51:00 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 111 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=83/84 n=5 ec=50/34 lis/c=83/83 les/c/f=84/84/0 sis=111 pruub=13.110056877s) [0] r=-1 lpr=111 pi=[83,111)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 162.327850342s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:51:00 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 111 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=83/83 les/c/f=84/84/0 sis=111) [0] r=0 lpr=111 pi=[83,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:51:01 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct  2 03:51:01 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct  2 03:51:01 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct  2 03:51:01 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct  2 03:51:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct  2 03:51:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct  2 03:51:01 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct  2 03:51:01 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=83/83 les/c/f=84/84/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:01 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=83/83 les/c/f=84/84/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:51:01 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 112 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=83/84 n=5 ec=50/34 lis/c=83/83 les/c/f=84/84/0 sis=112) [0]/[2] r=0 lpr=112 pi=[83,112)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:01 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 112 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=83/84 n=5 ec=50/34 lis/c=83/83 les/c/f=84/84/0 sis=112) [0]/[2] r=0 lpr=112 pi=[83,112)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:51:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  2 03:51:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  2 03:51:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct  2 03:51:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  2 03:51:02 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.15 deep-scrub starts
Oct  2 03:51:02 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.15 deep-scrub ok
Oct  2 03:51:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct  2 03:51:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  2 03:51:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct  2 03:51:02 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct  2 03:51:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  2 03:51:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  2 03:51:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 113 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=112/113 n=5 ec=50/34 lis/c=83/83 les/c/f=84/84/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[83,112)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:51:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct  2 03:51:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct  2 03:51:03 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct  2 03:51:03 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 114 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=112/83 les/c/f=113/84/0 sis=114) [0] r=0 lpr=114 pi=[83,114)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:03 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 114 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=112/83 les/c/f=113/84/0 sis=114) [0] r=0 lpr=114 pi=[83,114)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:51:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 114 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=112/113 n=5 ec=50/34 lis/c=112/83 les/c/f=113/84/0 sis=114 pruub=15.439930916s) [0] async=[0] r=-1 lpr=114 pi=[83,114)/1 crt=40'385 mlcod 40'385 active pruub 167.690582275s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:03 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 114 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=112/113 n=5 ec=50/34 lis/c=112/83 les/c/f=113/84/0 sis=114 pruub=15.439669609s) [0] r=-1 lpr=114 pi=[83,114)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 167.690582275s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:51:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:04 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct  2 03:51:04 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct  2 03:51:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct  2 03:51:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct  2 03:51:04 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct  2 03:51:04 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 115 pg[9.1c( v 40'385 (0'0,40'385] local-lis/les=114/115 n=5 ec=50/34 lis/c=112/83 les/c/f=113/84/0 sis=114) [0] r=0 lpr=114 pi=[83,114)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:51:04 np0005465604 python3.9[106760]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:51:05 np0005465604 python3.9[106910]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:51:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:06 np0005465604 python3.9[107064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:51:07 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct  2 03:51:07 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct  2 03:51:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:07 np0005465604 python3.9[107222]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:51:08 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.e deep-scrub starts
Oct  2 03:51:08 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.e deep-scrub ok
Oct  2 03:51:09 np0005465604 python3.9[107306]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:51:09 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct  2 03:51:09 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct  2 03:51:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  2 03:51:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct  2 03:51:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  2 03:51:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct  2 03:51:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  2 03:51:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct  2 03:51:09 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct  2 03:51:09 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 116 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=116 pruub=14.451098442s) [0] r=-1 lpr=116 pi=[68,116)/1 crt=40'385 mlcod 0'0 active pruub 172.966705322s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:09 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 116 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=116 pruub=14.450612068s) [0] r=-1 lpr=116 pi=[68,116)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 172.966705322s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:51:09 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  2 03:51:09 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=116) [0] r=0 lpr=116 pi=[68,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:51:10 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct  2 03:51:10 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct  2 03:51:10 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct  2 03:51:10 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct  2 03:51:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct  2 03:51:10 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  2 03:51:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct  2 03:51:10 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct  2 03:51:10 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct  2 03:51:10 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[68,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:10 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[68,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:51:10 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 117 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=117) [0]/[2] r=0 lpr=117 pi=[68,117)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:10 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 117 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=68/69 n=5 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=117) [0]/[2] r=0 lpr=117 pi=[68,117)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:51:10 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct  2 03:51:11 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct  2 03:51:11 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct  2 03:51:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:11 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct  2 03:51:11 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct  2 03:51:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  2 03:51:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  2 03:51:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:51:11 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct  2 03:51:11 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct  2 03:51:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct  2 03:51:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:51:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct  2 03:51:11 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct  2 03:51:11 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 03:51:11 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 118 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=70/71 n=5 ec=50/34 lis/c=70/70 les/c/f=71/71/0 sis=118 pruub=14.448017120s) [1] r=-1 lpr=118 pi=[70,118)/1 crt=40'385 mlcod 0'0 active pruub 174.998397827s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:11 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 118 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=70/71 n=5 ec=50/34 lis/c=70/70 les/c/f=71/71/0 sis=118 pruub=14.447932243s) [1] r=-1 lpr=118 pi=[70,118)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 174.998397827s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:51:11 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=70/70 les/c/f=71/71/0 sis=118) [1] r=0 lpr=118 pi=[70,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:51:11 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 118 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=117/118 n=5 ec=50/34 lis/c=68/68 les/c/f=69/69/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[68,117)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:51:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct  2 03:51:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct  2 03:51:12 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct  2 03:51:12 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 119 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=117/118 n=5 ec=50/34 lis/c=117/68 les/c/f=118/69/0 sis=119 pruub=14.991776466s) [0] async=[0] r=-1 lpr=119 pi=[68,119)/1 crt=40'385 mlcod 40'385 active pruub 176.555923462s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:12 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 119 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=70/71 n=5 ec=50/34 lis/c=70/70 les/c/f=71/71/0 sis=119) [1]/[2] r=0 lpr=119 pi=[70,119)/1 crt=40'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:12 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 119 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=117/118 n=5 ec=50/34 lis/c=117/68 les/c/f=118/69/0 sis=119 pruub=14.991716385s) [0] r=-1 lpr=119 pi=[68,119)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 176.555923462s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:51:12 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 119 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=70/71 n=5 ec=50/34 lis/c=70/70 les/c/f=71/71/0 sis=119) [1]/[2] r=0 lpr=119 pi=[70,119)/1 crt=40'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 03:51:12 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=70/70 les/c/f=71/71/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[70,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:12 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/34 lis/c=70/70 les/c/f=71/71/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[70,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 03:51:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 03:51:12 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 119 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=117/68 les/c/f=118/69/0 sis=119) [0] r=0 lpr=119 pi=[68,119)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:12 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 119 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=117/68 les/c/f=118/69/0 sis=119) [0] r=0 lpr=119 pi=[68,119)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:51:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct  2 03:51:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct  2 03:51:13 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct  2 03:51:13 np0005465604 ceph-osd[88314]: osd.0 pg_epoch: 120 pg[9.1e( v 40'385 (0'0,40'385] local-lis/les=119/120 n=5 ec=50/34 lis/c=117/68 les/c/f=118/69/0 sis=119) [0] r=0 lpr=119 pi=[68,119)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:51:14 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 120 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=119/120 n=5 ec=50/34 lis/c=70/70 les/c/f=71/71/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[70,119)/1 crt=40'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:51:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct  2 03:51:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct  2 03:51:15 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct  2 03:51:15 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 121 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=119/70 les/c/f=120/71/0 sis=121) [1] r=0 lpr=121 pi=[70,121)/1 luod=0'0 crt=40'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:15 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 121 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=0/0 n=5 ec=50/34 lis/c=119/70 les/c/f=120/71/0 sis=121) [1] r=0 lpr=121 pi=[70,121)/1 crt=40'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 03:51:15 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 121 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=119/120 n=5 ec=50/34 lis/c=119/70 les/c/f=120/71/0 sis=121 pruub=14.995224953s) [1] async=[1] r=-1 lpr=121 pi=[70,121)/1 crt=40'385 mlcod 40'385 active pruub 178.660583496s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 03:51:15 np0005465604 ceph-osd[90385]: osd.2 pg_epoch: 121 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=119/120 n=5 ec=50/34 lis/c=119/70 les/c/f=120/71/0 sis=121 pruub=14.995162964s) [1] r=-1 lpr=121 pi=[70,121)/1 crt=40'385 mlcod 0'0 unknown NOTIFY pruub 178.660583496s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 03:51:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:15 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.1 deep-scrub starts
Oct  2 03:51:15 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.1 deep-scrub ok
Oct  2 03:51:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct  2 03:51:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct  2 03:51:16 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct  2 03:51:16 np0005465604 ceph-osd[89321]: osd.1 pg_epoch: 122 pg[9.1f( v 40'385 (0'0,40'385] local-lis/les=121/122 n=5 ec=50/34 lis/c=119/70 les/c/f=120/71/0 sis=121) [1] r=0 lpr=121 pi=[70,121)/1 crt=40'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 03:51:16 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct  2 03:51:16 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct  2 03:51:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:17 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct  2 03:51:17 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct  2 03:51:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 67 B/s, 2 objects/s recovering
Oct  2 03:51:17 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.1e deep-scrub starts
Oct  2 03:51:17 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.1e deep-scrub ok
Oct  2 03:51:18 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct  2 03:51:18 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct  2 03:51:19 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct  2 03:51:19 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct  2 03:51:19 np0005465604 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 03:51:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 2 objects/s recovering
Oct  2 03:51:20 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct  2 03:51:20 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct  2 03:51:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 41 B/s, 1 objects/s recovering
Oct  2 03:51:23 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct  2 03:51:23 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct  2 03:51:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Oct  2 03:51:23 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.c deep-scrub starts
Oct  2 03:51:23 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.c deep-scrub ok
Oct  2 03:51:24 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct  2 03:51:24 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct  2 03:51:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 1 objects/s recovering
Oct  2 03:51:25 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct  2 03:51:25 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct  2 03:51:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:27 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct  2 03:51:27 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:51:27
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'images']
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:51:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:51:27 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct  2 03:51:28 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct  2 03:51:29 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct  2 03:51:29 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct  2 03:51:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:29 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Oct  2 03:51:29 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Oct  2 03:51:30 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct  2 03:51:30 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct  2 03:51:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:31 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct  2 03:51:31 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct  2 03:51:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:34 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct  2 03:51:34 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct  2 03:51:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:37 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct  2 03:51:37 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:37 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct  2 03:51:37 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:51:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 03:51:38 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct  2 03:51:38 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct  2 03:51:39 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.4 deep-scrub starts
Oct  2 03:51:39 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.4 deep-scrub ok
Oct  2 03:51:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:40 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct  2 03:51:40 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:51:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9abe023b-55b5-4898-b23a-96395669ee6a does not exist
Oct  2 03:51:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b50a27bf-3dbb-4edf-aa34-d1ea0512cfe0 does not exist
Oct  2 03:51:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev bb7900c0-b73f-4200-a891-6b40cbf914c3 does not exist
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:51:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:51:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:51:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:51:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:51:41 np0005465604 podman[107723]: 2025-10-02 07:51:41.502728352 +0000 UTC m=+0.034036032 container create 4c05f13b732e35c1d2bf633f80e6f36066af0bdd175b5629884e4054538b6a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:51:41 np0005465604 systemd[1]: Started libpod-conmon-4c05f13b732e35c1d2bf633f80e6f36066af0bdd175b5629884e4054538b6a30.scope.
Oct  2 03:51:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:51:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:41 np0005465604 podman[107723]: 2025-10-02 07:51:41.57421763 +0000 UTC m=+0.105525330 container init 4c05f13b732e35c1d2bf633f80e6f36066af0bdd175b5629884e4054538b6a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 03:51:41 np0005465604 podman[107723]: 2025-10-02 07:51:41.581407114 +0000 UTC m=+0.112714794 container start 4c05f13b732e35c1d2bf633f80e6f36066af0bdd175b5629884e4054538b6a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:51:41 np0005465604 podman[107723]: 2025-10-02 07:51:41.487412215 +0000 UTC m=+0.018719915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:51:41 np0005465604 nostalgic_cray[107740]: 167 167
Oct  2 03:51:41 np0005465604 systemd[1]: libpod-4c05f13b732e35c1d2bf633f80e6f36066af0bdd175b5629884e4054538b6a30.scope: Deactivated successfully.
Oct  2 03:51:41 np0005465604 podman[107723]: 2025-10-02 07:51:41.584451349 +0000 UTC m=+0.115759029 container attach 4c05f13b732e35c1d2bf633f80e6f36066af0bdd175b5629884e4054538b6a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:51:41 np0005465604 podman[107723]: 2025-10-02 07:51:41.587814894 +0000 UTC m=+0.119122594 container died 4c05f13b732e35c1d2bf633f80e6f36066af0bdd175b5629884e4054538b6a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:51:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-050b4af294e3a62563ae3fdb8f493b19804e95dfaffea21b30fbc22a510e3bc4-merged.mount: Deactivated successfully.
Oct  2 03:51:41 np0005465604 podman[107723]: 2025-10-02 07:51:41.627516852 +0000 UTC m=+0.158824532 container remove 4c05f13b732e35c1d2bf633f80e6f36066af0bdd175b5629884e4054538b6a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 03:51:41 np0005465604 systemd[1]: libpod-conmon-4c05f13b732e35c1d2bf633f80e6f36066af0bdd175b5629884e4054538b6a30.scope: Deactivated successfully.
Oct  2 03:51:41 np0005465604 podman[107764]: 2025-10-02 07:51:41.782447731 +0000 UTC m=+0.040092211 container create ebc69acc1e6a407ac96840f587901930c26924c1638dbe0ebb0b975a086c91d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wu, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:51:41 np0005465604 systemd[1]: Started libpod-conmon-ebc69acc1e6a407ac96840f587901930c26924c1638dbe0ebb0b975a086c91d2.scope.
Oct  2 03:51:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:51:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37449b32d02090ad2127a18e55edeb009988389628826355d2d0acb28b863ebf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37449b32d02090ad2127a18e55edeb009988389628826355d2d0acb28b863ebf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37449b32d02090ad2127a18e55edeb009988389628826355d2d0acb28b863ebf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37449b32d02090ad2127a18e55edeb009988389628826355d2d0acb28b863ebf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37449b32d02090ad2127a18e55edeb009988389628826355d2d0acb28b863ebf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:41 np0005465604 podman[107764]: 2025-10-02 07:51:41.764407519 +0000 UTC m=+0.022052029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:51:41 np0005465604 podman[107764]: 2025-10-02 07:51:41.862395902 +0000 UTC m=+0.120040382 container init ebc69acc1e6a407ac96840f587901930c26924c1638dbe0ebb0b975a086c91d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 03:51:41 np0005465604 podman[107764]: 2025-10-02 07:51:41.871692602 +0000 UTC m=+0.129337082 container start ebc69acc1e6a407ac96840f587901930c26924c1638dbe0ebb0b975a086c91d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:51:41 np0005465604 podman[107764]: 2025-10-02 07:51:41.874261272 +0000 UTC m=+0.131905742 container attach ebc69acc1e6a407ac96840f587901930c26924c1638dbe0ebb0b975a086c91d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 03:51:41 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Oct  2 03:51:41 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Oct  2 03:51:42 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct  2 03:51:42 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct  2 03:51:42 np0005465604 elated_wu[107782]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:51:42 np0005465604 elated_wu[107782]: --> relative data size: 1.0
Oct  2 03:51:42 np0005465604 elated_wu[107782]: --> All data devices are unavailable
Oct  2 03:51:42 np0005465604 systemd[1]: libpod-ebc69acc1e6a407ac96840f587901930c26924c1638dbe0ebb0b975a086c91d2.scope: Deactivated successfully.
Oct  2 03:51:42 np0005465604 systemd[1]: libpod-ebc69acc1e6a407ac96840f587901930c26924c1638dbe0ebb0b975a086c91d2.scope: Consumed 1.037s CPU time.
Oct  2 03:51:43 np0005465604 podman[107811]: 2025-10-02 07:51:43.024456193 +0000 UTC m=+0.047099550 container died ebc69acc1e6a407ac96840f587901930c26924c1638dbe0ebb0b975a086c91d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wu, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:51:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-37449b32d02090ad2127a18e55edeb009988389628826355d2d0acb28b863ebf-merged.mount: Deactivated successfully.
Oct  2 03:51:43 np0005465604 podman[107811]: 2025-10-02 07:51:43.074994857 +0000 UTC m=+0.097638134 container remove ebc69acc1e6a407ac96840f587901930c26924c1638dbe0ebb0b975a086c91d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 03:51:43 np0005465604 systemd[1]: libpod-conmon-ebc69acc1e6a407ac96840f587901930c26924c1638dbe0ebb0b975a086c91d2.scope: Deactivated successfully.
Oct  2 03:51:43 np0005465604 podman[107966]: 2025-10-02 07:51:43.769474394 +0000 UTC m=+0.045117338 container create dde135f1083ca40acea68b3a56660678d69dced714511c561d7c6c2a2471dcc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:51:43 np0005465604 systemd[1]: Started libpod-conmon-dde135f1083ca40acea68b3a56660678d69dced714511c561d7c6c2a2471dcc2.scope.
Oct  2 03:51:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:51:43 np0005465604 podman[107966]: 2025-10-02 07:51:43.832551039 +0000 UTC m=+0.108194013 container init dde135f1083ca40acea68b3a56660678d69dced714511c561d7c6c2a2471dcc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:51:43 np0005465604 podman[107966]: 2025-10-02 07:51:43.838777333 +0000 UTC m=+0.114420297 container start dde135f1083ca40acea68b3a56660678d69dced714511c561d7c6c2a2471dcc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Oct  2 03:51:43 np0005465604 podman[107966]: 2025-10-02 07:51:43.841464627 +0000 UTC m=+0.117107591 container attach dde135f1083ca40acea68b3a56660678d69dced714511c561d7c6c2a2471dcc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:51:43 np0005465604 intelligent_grothendieck[107983]: 167 167
Oct  2 03:51:43 np0005465604 systemd[1]: libpod-dde135f1083ca40acea68b3a56660678d69dced714511c561d7c6c2a2471dcc2.scope: Deactivated successfully.
Oct  2 03:51:43 np0005465604 podman[107966]: 2025-10-02 07:51:43.842226101 +0000 UTC m=+0.117869035 container died dde135f1083ca40acea68b3a56660678d69dced714511c561d7c6c2a2471dcc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct  2 03:51:43 np0005465604 podman[107966]: 2025-10-02 07:51:43.748639484 +0000 UTC m=+0.024282468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:51:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5ea0f5e8278e2df0a3a7b7b7a8e4e693bbffabf9e9d5a3b3780aa2f71029c4be-merged.mount: Deactivated successfully.
Oct  2 03:51:43 np0005465604 podman[107966]: 2025-10-02 07:51:43.87203187 +0000 UTC m=+0.147674824 container remove dde135f1083ca40acea68b3a56660678d69dced714511c561d7c6c2a2471dcc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:51:43 np0005465604 systemd[1]: libpod-conmon-dde135f1083ca40acea68b3a56660678d69dced714511c561d7c6c2a2471dcc2.scope: Deactivated successfully.
Oct  2 03:51:43 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct  2 03:51:43 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct  2 03:51:44 np0005465604 podman[108006]: 2025-10-02 07:51:44.032988647 +0000 UTC m=+0.039956116 container create d5f6f7a45222a39958a5f0a3949a21ce05e3fa4ddd5af4052d72cb0f50e3ab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:51:44 np0005465604 systemd[1]: Started libpod-conmon-d5f6f7a45222a39958a5f0a3949a21ce05e3fa4ddd5af4052d72cb0f50e3ab25.scope.
Oct  2 03:51:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:51:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ca12515bab3895435171e49479a73b6555cf6ad43c4de3f208f37d28a42776/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ca12515bab3895435171e49479a73b6555cf6ad43c4de3f208f37d28a42776/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ca12515bab3895435171e49479a73b6555cf6ad43c4de3f208f37d28a42776/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ca12515bab3895435171e49479a73b6555cf6ad43c4de3f208f37d28a42776/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:44 np0005465604 podman[108006]: 2025-10-02 07:51:44.110912686 +0000 UTC m=+0.117880185 container init d5f6f7a45222a39958a5f0a3949a21ce05e3fa4ddd5af4052d72cb0f50e3ab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:51:44 np0005465604 podman[108006]: 2025-10-02 07:51:44.017736852 +0000 UTC m=+0.024704341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:51:44 np0005465604 podman[108006]: 2025-10-02 07:51:44.123197349 +0000 UTC m=+0.130164828 container start d5f6f7a45222a39958a5f0a3949a21ce05e3fa4ddd5af4052d72cb0f50e3ab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:51:44 np0005465604 podman[108006]: 2025-10-02 07:51:44.136797913 +0000 UTC m=+0.143765402 container attach d5f6f7a45222a39958a5f0a3949a21ce05e3fa4ddd5af4052d72cb0f50e3ab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 03:51:44 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct  2 03:51:44 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct  2 03:51:44 np0005465604 loving_davinci[108023]: {
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:    "0": [
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:        {
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "devices": [
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "/dev/loop3"
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            ],
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_name": "ceph_lv0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_size": "21470642176",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "name": "ceph_lv0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "tags": {
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.cluster_name": "ceph",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.crush_device_class": "",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.encrypted": "0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.osd_id": "0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.type": "block",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.vdo": "0"
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            },
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "type": "block",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "vg_name": "ceph_vg0"
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:        }
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:    ],
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:    "1": [
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:        {
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "devices": [
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "/dev/loop4"
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            ],
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_name": "ceph_lv1",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_size": "21470642176",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "name": "ceph_lv1",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "tags": {
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.cluster_name": "ceph",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.crush_device_class": "",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.encrypted": "0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.osd_id": "1",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.type": "block",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.vdo": "0"
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            },
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "type": "block",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "vg_name": "ceph_vg1"
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:        }
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:    ],
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:    "2": [
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:        {
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "devices": [
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "/dev/loop5"
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            ],
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_name": "ceph_lv2",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_size": "21470642176",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "name": "ceph_lv2",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "tags": {
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.cluster_name": "ceph",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.crush_device_class": "",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.encrypted": "0",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.osd_id": "2",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.type": "block",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:                "ceph.vdo": "0"
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            },
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "type": "block",
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:            "vg_name": "ceph_vg2"
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:        }
Oct  2 03:51:44 np0005465604 loving_davinci[108023]:    ]
Oct  2 03:51:44 np0005465604 loving_davinci[108023]: }
Oct  2 03:51:44 np0005465604 systemd[1]: libpod-d5f6f7a45222a39958a5f0a3949a21ce05e3fa4ddd5af4052d72cb0f50e3ab25.scope: Deactivated successfully.
Oct  2 03:51:44 np0005465604 podman[108006]: 2025-10-02 07:51:44.84704141 +0000 UTC m=+0.854008899 container died d5f6f7a45222a39958a5f0a3949a21ce05e3fa4ddd5af4052d72cb0f50e3ab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:51:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-14ca12515bab3895435171e49479a73b6555cf6ad43c4de3f208f37d28a42776-merged.mount: Deactivated successfully.
Oct  2 03:51:44 np0005465604 podman[108006]: 2025-10-02 07:51:44.908032821 +0000 UTC m=+0.915000290 container remove d5f6f7a45222a39958a5f0a3949a21ce05e3fa4ddd5af4052d72cb0f50e3ab25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_davinci, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 03:51:44 np0005465604 systemd[1]: libpod-conmon-d5f6f7a45222a39958a5f0a3949a21ce05e3fa4ddd5af4052d72cb0f50e3ab25.scope: Deactivated successfully.
Oct  2 03:51:45 np0005465604 podman[108184]: 2025-10-02 07:51:45.510369195 +0000 UTC m=+0.045572952 container create bec68c69b56eade6ceeee9bd6881cfcf5b83fc04350b81b4c34daffdfc8c6e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 03:51:45 np0005465604 systemd[1]: Started libpod-conmon-bec68c69b56eade6ceeee9bd6881cfcf5b83fc04350b81b4c34daffdfc8c6e67.scope.
Oct  2 03:51:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:51:45 np0005465604 podman[108184]: 2025-10-02 07:51:45.588345715 +0000 UTC m=+0.123549512 container init bec68c69b56eade6ceeee9bd6881cfcf5b83fc04350b81b4c34daffdfc8c6e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 03:51:45 np0005465604 podman[108184]: 2025-10-02 07:51:45.495303085 +0000 UTC m=+0.030506862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:51:45 np0005465604 podman[108184]: 2025-10-02 07:51:45.597719847 +0000 UTC m=+0.132923614 container start bec68c69b56eade6ceeee9bd6881cfcf5b83fc04350b81b4c34daffdfc8c6e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 03:51:45 np0005465604 podman[108184]: 2025-10-02 07:51:45.600897567 +0000 UTC m=+0.136101364 container attach bec68c69b56eade6ceeee9bd6881cfcf5b83fc04350b81b4c34daffdfc8c6e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:51:45 np0005465604 serene_wozniak[108200]: 167 167
Oct  2 03:51:45 np0005465604 systemd[1]: libpod-bec68c69b56eade6ceeee9bd6881cfcf5b83fc04350b81b4c34daffdfc8c6e67.scope: Deactivated successfully.
Oct  2 03:51:45 np0005465604 podman[108184]: 2025-10-02 07:51:45.604209859 +0000 UTC m=+0.139413646 container died bec68c69b56eade6ceeee9bd6881cfcf5b83fc04350b81b4c34daffdfc8c6e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 03:51:45 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fda3e0326b300fee639e75f712139fbb6e3bf7bcedccf25f87d609f8776dad5c-merged.mount: Deactivated successfully.
Oct  2 03:51:45 np0005465604 podman[108184]: 2025-10-02 07:51:45.634461602 +0000 UTC m=+0.169665369 container remove bec68c69b56eade6ceeee9bd6881cfcf5b83fc04350b81b4c34daffdfc8c6e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 03:51:45 np0005465604 systemd[1]: libpod-conmon-bec68c69b56eade6ceeee9bd6881cfcf5b83fc04350b81b4c34daffdfc8c6e67.scope: Deactivated successfully.
Oct  2 03:51:45 np0005465604 podman[108226]: 2025-10-02 07:51:45.773758904 +0000 UTC m=+0.037802899 container create 1b70a16c710337b88eaa58217f5263ad9951f38e34089ad11cba25091813c260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 03:51:45 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct  2 03:51:45 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct  2 03:51:45 np0005465604 systemd[1]: Started libpod-conmon-1b70a16c710337b88eaa58217f5263ad9951f38e34089ad11cba25091813c260.scope.
Oct  2 03:51:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:51:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd15bf48375b1c889291a788bd2c08a5b7ae4b5af10386b3e1aa6acddb24ede/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd15bf48375b1c889291a788bd2c08a5b7ae4b5af10386b3e1aa6acddb24ede/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd15bf48375b1c889291a788bd2c08a5b7ae4b5af10386b3e1aa6acddb24ede/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd15bf48375b1c889291a788bd2c08a5b7ae4b5af10386b3e1aa6acddb24ede/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:51:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:45 np0005465604 podman[108226]: 2025-10-02 07:51:45.850590369 +0000 UTC m=+0.114634384 container init 1b70a16c710337b88eaa58217f5263ad9951f38e34089ad11cba25091813c260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:51:45 np0005465604 podman[108226]: 2025-10-02 07:51:45.757460676 +0000 UTC m=+0.021504681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:51:45 np0005465604 podman[108226]: 2025-10-02 07:51:45.862421878 +0000 UTC m=+0.126465873 container start 1b70a16c710337b88eaa58217f5263ad9951f38e34089ad11cba25091813c260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 03:51:45 np0005465604 podman[108226]: 2025-10-02 07:51:45.865998819 +0000 UTC m=+0.130042854 container attach 1b70a16c710337b88eaa58217f5263ad9951f38e34089ad11cba25091813c260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:51:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:46 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct  2 03:51:46 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct  2 03:51:46 np0005465604 epic_kare[108242]: {
Oct  2 03:51:46 np0005465604 epic_kare[108242]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "osd_id": 2,
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "type": "bluestore"
Oct  2 03:51:46 np0005465604 epic_kare[108242]:    },
Oct  2 03:51:46 np0005465604 epic_kare[108242]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "osd_id": 1,
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "type": "bluestore"
Oct  2 03:51:46 np0005465604 epic_kare[108242]:    },
Oct  2 03:51:46 np0005465604 epic_kare[108242]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "osd_id": 0,
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:51:46 np0005465604 epic_kare[108242]:        "type": "bluestore"
Oct  2 03:51:46 np0005465604 epic_kare[108242]:    }
Oct  2 03:51:46 np0005465604 epic_kare[108242]: }
Oct  2 03:51:46 np0005465604 systemd[1]: libpod-1b70a16c710337b88eaa58217f5263ad9951f38e34089ad11cba25091813c260.scope: Deactivated successfully.
Oct  2 03:51:46 np0005465604 systemd[1]: libpod-1b70a16c710337b88eaa58217f5263ad9951f38e34089ad11cba25091813c260.scope: Consumed 1.017s CPU time.
Oct  2 03:51:46 np0005465604 podman[108226]: 2025-10-02 07:51:46.879451368 +0000 UTC m=+1.143495363 container died 1b70a16c710337b88eaa58217f5263ad9951f38e34089ad11cba25091813c260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:51:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fbd15bf48375b1c889291a788bd2c08a5b7ae4b5af10386b3e1aa6acddb24ede-merged.mount: Deactivated successfully.
Oct  2 03:51:46 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct  2 03:51:46 np0005465604 podman[108226]: 2025-10-02 07:51:46.933738799 +0000 UTC m=+1.197782794 container remove 1b70a16c710337b88eaa58217f5263ad9951f38e34089ad11cba25091813c260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_kare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 03:51:46 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct  2 03:51:46 np0005465604 systemd[1]: libpod-conmon-1b70a16c710337b88eaa58217f5263ad9951f38e34089ad11cba25091813c260.scope: Deactivated successfully.
Oct  2 03:51:46 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct  2 03:51:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:51:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:51:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:51:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:51:46 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b37d9f15-3289-45ba-a1e4-7095d6c3a942 does not exist
Oct  2 03:51:46 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 059727c7-82a2-4507-9ebb-8b21e35ddf14 does not exist
Oct  2 03:51:47 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct  2 03:51:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:51:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:51:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:48 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct  2 03:51:48 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct  2 03:51:49 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct  2 03:51:49 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct  2 03:51:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:49 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct  2 03:51:49 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct  2 03:51:50 np0005465604 python3.9[108491]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:51:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:51 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct  2 03:51:51 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct  2 03:51:52 np0005465604 python3.9[108778]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  2 03:51:52 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Oct  2 03:51:52 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Oct  2 03:51:53 np0005465604 python3.9[108930]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  2 03:51:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:54 np0005465604 python3.9[109082]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:51:54 np0005465604 python3.9[109234]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  2 03:51:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:56 np0005465604 python3.9[109386]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:51:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:51:56 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct  2 03:51:56 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct  2 03:51:56 np0005465604 python3.9[109538]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:51:57 np0005465604 python3.9[109616]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:51:57 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct  2 03:51:57 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct  2 03:51:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:51:58 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct  2 03:51:58 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct  2 03:51:58 np0005465604 python3.9[109768]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  2 03:51:59 np0005465604 python3.9[109921]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  2 03:51:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:00 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.1a deep-scrub starts
Oct  2 03:52:00 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 9.1a deep-scrub ok
Oct  2 03:52:00 np0005465604 python3.9[110074]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 03:52:01 np0005465604 python3.9[110226]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  2 03:52:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:01 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct  2 03:52:01 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct  2 03:52:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:02 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct  2 03:52:02 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct  2 03:52:02 np0005465604 python3.9[110378]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:52:03 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct  2 03:52:03 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct  2 03:52:03 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct  2 03:52:03 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct  2 03:52:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:04 np0005465604 python3.9[110531]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:52:04 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct  2 03:52:04 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct  2 03:52:04 np0005465604 python3.9[110683]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:52:05 np0005465604 python3.9[110761]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:52:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:06 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct  2 03:52:06 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct  2 03:52:06 np0005465604 python3.9[110913]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:52:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:06 np0005465604 python3.9[110991]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:52:06 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct  2 03:52:06 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct  2 03:52:06 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct  2 03:52:06 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct  2 03:52:07 np0005465604 python3.9[111143]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:52:07 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct  2 03:52:07 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct  2 03:52:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:08 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct  2 03:52:08 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct  2 03:52:08 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.f deep-scrub starts
Oct  2 03:52:08 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.f deep-scrub ok
Oct  2 03:52:09 np0005465604 python3.9[111294]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:52:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:09 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct  2 03:52:09 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct  2 03:52:10 np0005465604 python3.9[111446]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  2 03:52:10 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.3 deep-scrub starts
Oct  2 03:52:10 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.3 deep-scrub ok
Oct  2 03:52:11 np0005465604 python3.9[111596]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:52:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:11 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct  2 03:52:11 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct  2 03:52:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:12 np0005465604 python3.9[111748]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:52:12 np0005465604 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  2 03:52:12 np0005465604 systemd[1]: tuned.service: Deactivated successfully.
Oct  2 03:52:12 np0005465604 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  2 03:52:12 np0005465604 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 03:52:12 np0005465604 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 03:52:13 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct  2 03:52:13 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct  2 03:52:13 np0005465604 python3.9[111910]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  2 03:52:13 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.e scrub starts
Oct  2 03:52:13 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.e scrub ok
Oct  2 03:52:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:14 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct  2 03:52:14 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct  2 03:52:14 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct  2 03:52:14 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct  2 03:52:15 np0005465604 python3.9[112062]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:52:15 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct  2 03:52:15 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct  2 03:52:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:16 np0005465604 python3.9[112216]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:52:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:16 np0005465604 systemd[1]: session-36.scope: Deactivated successfully.
Oct  2 03:52:16 np0005465604 systemd[1]: session-36.scope: Consumed 1min 3.536s CPU time.
Oct  2 03:52:16 np0005465604 systemd-logind[787]: Session 36 logged out. Waiting for processes to exit.
Oct  2 03:52:16 np0005465604 systemd-logind[787]: Removed session 36.
Oct  2 03:52:16 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct  2 03:52:16 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct  2 03:52:17 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct  2 03:52:17 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct  2 03:52:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:18 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct  2 03:52:18 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct  2 03:52:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:20 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct  2 03:52:20 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct  2 03:52:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:22 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct  2 03:52:22 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct  2 03:52:22 np0005465604 systemd-logind[787]: New session 37 of user zuul.
Oct  2 03:52:22 np0005465604 systemd[1]: Started Session 37 of User zuul.
Oct  2 03:52:23 np0005465604 python3.9[112396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:52:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:24 np0005465604 python3.9[112552]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  2 03:52:24 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct  2 03:52:24 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct  2 03:52:25 np0005465604 python3.9[112705]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:52:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:25 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct  2 03:52:25 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct  2 03:52:25 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Oct  2 03:52:25 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Oct  2 03:52:26 np0005465604 python3.9[112789]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 03:52:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:26 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Oct  2 03:52:26 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:52:27
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.rgw.root', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 03:52:27 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.b deep-scrub starts
Oct  2 03:52:27 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.b deep-scrub ok
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:52:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:52:27 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct  2 03:52:27 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct  2 03:52:28 np0005465604 python3.9[112942]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:52:28 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct  2 03:52:28 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct  2 03:52:29 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct  2 03:52:29 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct  2 03:52:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:29 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.f deep-scrub starts
Oct  2 03:52:29 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.f deep-scrub ok
Oct  2 03:52:30 np0005465604 python3.9[113095]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 03:52:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:31 np0005465604 python3.9[113248]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:52:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:31 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct  2 03:52:31 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct  2 03:52:32 np0005465604 python3.9[113400]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  2 03:52:33 np0005465604 python3.9[113550]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:52:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:33 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct  2 03:52:33 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct  2 03:52:34 np0005465604 python3.9[113708]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:52:34 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct  2 03:52:34 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct  2 03:52:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:35 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct  2 03:52:35 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct  2 03:52:35 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Oct  2 03:52:36 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Oct  2 03:52:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:36 np0005465604 python3.9[113861]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:52:37 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:37 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:52:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 03:52:37 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct  2 03:52:37 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct  2 03:52:38 np0005465604 python3.9[114148]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 03:52:38 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct  2 03:52:38 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct  2 03:52:38 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.b deep-scrub starts
Oct  2 03:52:38 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.b deep-scrub ok
Oct  2 03:52:39 np0005465604 python3.9[114298]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:52:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:39 np0005465604 python3.9[114452]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:52:40 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct  2 03:52:40 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct  2 03:52:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:41 np0005465604 python3.9[114605]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:52:42 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct  2 03:52:42 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct  2 03:52:42 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct  2 03:52:42 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct  2 03:52:43 np0005465604 python3.9[114758]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:52:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:44 np0005465604 python3.9[114912]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct  2 03:52:45 np0005465604 systemd[1]: session-37.scope: Deactivated successfully.
Oct  2 03:52:45 np0005465604 systemd[1]: session-37.scope: Consumed 17.595s CPU time.
Oct  2 03:52:45 np0005465604 systemd-logind[787]: Session 37 logged out. Waiting for processes to exit.
Oct  2 03:52:45 np0005465604 systemd-logind[787]: Removed session 37.
Oct  2 03:52:45 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct  2 03:52:45 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct  2 03:52:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:45 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Oct  2 03:52:46 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Oct  2 03:52:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:46 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct  2 03:52:46 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:52:47 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4c9914ae-332f-4bc3-ae01-3160800ab003 does not exist
Oct  2 03:52:47 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 24eb3284-072a-43bd-9982-52187e828b9b does not exist
Oct  2 03:52:47 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a5ad8bf4-34f1-4394-98dd-687e7636b9c7 does not exist
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:52:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:52:47 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct  2 03:52:47 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct  2 03:52:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:48 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:52:48 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:52:48 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:52:48 np0005465604 podman[115208]: 2025-10-02 07:52:48.44697746 +0000 UTC m=+0.041576662 container create 4039fc352ada59d630f4ce8ec2d9345638da578f03d684d84ca7f63e3b28d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:52:48 np0005465604 systemd[76077]: Created slice User Background Tasks Slice.
Oct  2 03:52:48 np0005465604 systemd[76077]: Starting Cleanup of User's Temporary Files and Directories...
Oct  2 03:52:48 np0005465604 systemd[1]: Started libpod-conmon-4039fc352ada59d630f4ce8ec2d9345638da578f03d684d84ca7f63e3b28d873.scope.
Oct  2 03:52:48 np0005465604 systemd[76077]: Finished Cleanup of User's Temporary Files and Directories.
Oct  2 03:52:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:52:48 np0005465604 podman[115208]: 2025-10-02 07:52:48.497809628 +0000 UTC m=+0.092408870 container init 4039fc352ada59d630f4ce8ec2d9345638da578f03d684d84ca7f63e3b28d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct  2 03:52:48 np0005465604 podman[115208]: 2025-10-02 07:52:48.504447882 +0000 UTC m=+0.099047104 container start 4039fc352ada59d630f4ce8ec2d9345638da578f03d684d84ca7f63e3b28d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 03:52:48 np0005465604 podman[115208]: 2025-10-02 07:52:48.508042858 +0000 UTC m=+0.102642100 container attach 4039fc352ada59d630f4ce8ec2d9345638da578f03d684d84ca7f63e3b28d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 03:52:48 np0005465604 xenodochial_keldysh[115225]: 167 167
Oct  2 03:52:48 np0005465604 systemd[1]: libpod-4039fc352ada59d630f4ce8ec2d9345638da578f03d684d84ca7f63e3b28d873.scope: Deactivated successfully.
Oct  2 03:52:48 np0005465604 podman[115208]: 2025-10-02 07:52:48.509406212 +0000 UTC m=+0.104005424 container died 4039fc352ada59d630f4ce8ec2d9345638da578f03d684d84ca7f63e3b28d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 03:52:48 np0005465604 podman[115208]: 2025-10-02 07:52:48.428902896 +0000 UTC m=+0.023502118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:52:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7cbb012060329572e7b0dc8222b1c9e3dd4df0bb2ab407a9f19f824ebb5d2ebc-merged.mount: Deactivated successfully.
Oct  2 03:52:48 np0005465604 podman[115208]: 2025-10-02 07:52:48.550695943 +0000 UTC m=+0.145295145 container remove 4039fc352ada59d630f4ce8ec2d9345638da578f03d684d84ca7f63e3b28d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_keldysh, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 03:52:48 np0005465604 systemd[1]: libpod-conmon-4039fc352ada59d630f4ce8ec2d9345638da578f03d684d84ca7f63e3b28d873.scope: Deactivated successfully.
Oct  2 03:52:48 np0005465604 podman[115250]: 2025-10-02 07:52:48.69763753 +0000 UTC m=+0.050785998 container create 90d3d4df19cbc3391cbae41940298d4fbb08b52d412efb9cbb9914007c3a5664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:52:48 np0005465604 systemd[1]: Started libpod-conmon-90d3d4df19cbc3391cbae41940298d4fbb08b52d412efb9cbb9914007c3a5664.scope.
Oct  2 03:52:48 np0005465604 podman[115250]: 2025-10-02 07:52:48.671525288 +0000 UTC m=+0.024673816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:52:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:52:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fff65dde255264e7a8e4a8021e5c96c6bc9dd39443142d384534f3c35928a95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fff65dde255264e7a8e4a8021e5c96c6bc9dd39443142d384534f3c35928a95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fff65dde255264e7a8e4a8021e5c96c6bc9dd39443142d384534f3c35928a95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fff65dde255264e7a8e4a8021e5c96c6bc9dd39443142d384534f3c35928a95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fff65dde255264e7a8e4a8021e5c96c6bc9dd39443142d384534f3c35928a95/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:48 np0005465604 podman[115250]: 2025-10-02 07:52:48.78667529 +0000 UTC m=+0.139823788 container init 90d3d4df19cbc3391cbae41940298d4fbb08b52d412efb9cbb9914007c3a5664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 03:52:48 np0005465604 podman[115250]: 2025-10-02 07:52:48.795221636 +0000 UTC m=+0.148370094 container start 90d3d4df19cbc3391cbae41940298d4fbb08b52d412efb9cbb9914007c3a5664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 03:52:48 np0005465604 podman[115250]: 2025-10-02 07:52:48.799835795 +0000 UTC m=+0.152984263 container attach 90d3d4df19cbc3391cbae41940298d4fbb08b52d412efb9cbb9914007c3a5664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:52:48 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct  2 03:52:48 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct  2 03:52:49 np0005465604 eager_snyder[115266]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:52:49 np0005465604 eager_snyder[115266]: --> relative data size: 1.0
Oct  2 03:52:49 np0005465604 eager_snyder[115266]: --> All data devices are unavailable
Oct  2 03:52:49 np0005465604 systemd[1]: libpod-90d3d4df19cbc3391cbae41940298d4fbb08b52d412efb9cbb9914007c3a5664.scope: Deactivated successfully.
Oct  2 03:52:49 np0005465604 podman[115295]: 2025-10-02 07:52:49.791121902 +0000 UTC m=+0.038878275 container died 90d3d4df19cbc3391cbae41940298d4fbb08b52d412efb9cbb9914007c3a5664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 03:52:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8fff65dde255264e7a8e4a8021e5c96c6bc9dd39443142d384534f3c35928a95-merged.mount: Deactivated successfully.
Oct  2 03:52:49 np0005465604 podman[115295]: 2025-10-02 07:52:49.84784781 +0000 UTC m=+0.095604193 container remove 90d3d4df19cbc3391cbae41940298d4fbb08b52d412efb9cbb9914007c3a5664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 03:52:49 np0005465604 systemd[1]: libpod-conmon-90d3d4df19cbc3391cbae41940298d4fbb08b52d412efb9cbb9914007c3a5664.scope: Deactivated successfully.
Oct  2 03:52:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:49 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct  2 03:52:49 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct  2 03:52:50 np0005465604 podman[115451]: 2025-10-02 07:52:50.498587879 +0000 UTC m=+0.043143442 container create 21ca367deeb1a185fbbb4fbb5591d285f23fae9fa7acd06487f63fd0e6c2250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 03:52:50 np0005465604 systemd[1]: Started libpod-conmon-21ca367deeb1a185fbbb4fbb5591d285f23fae9fa7acd06487f63fd0e6c2250a.scope.
Oct  2 03:52:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:52:50 np0005465604 podman[115451]: 2025-10-02 07:52:50.480379972 +0000 UTC m=+0.024935565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:52:50 np0005465604 podman[115451]: 2025-10-02 07:52:50.582027798 +0000 UTC m=+0.126583391 container init 21ca367deeb1a185fbbb4fbb5591d285f23fae9fa7acd06487f63fd0e6c2250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:52:50 np0005465604 podman[115451]: 2025-10-02 07:52:50.592821666 +0000 UTC m=+0.137377229 container start 21ca367deeb1a185fbbb4fbb5591d285f23fae9fa7acd06487f63fd0e6c2250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 03:52:50 np0005465604 podman[115451]: 2025-10-02 07:52:50.595871465 +0000 UTC m=+0.140427118 container attach 21ca367deeb1a185fbbb4fbb5591d285f23fae9fa7acd06487f63fd0e6c2250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 03:52:50 np0005465604 xenodochial_merkle[115468]: 167 167
Oct  2 03:52:50 np0005465604 systemd[1]: libpod-21ca367deeb1a185fbbb4fbb5591d285f23fae9fa7acd06487f63fd0e6c2250a.scope: Deactivated successfully.
Oct  2 03:52:50 np0005465604 podman[115451]: 2025-10-02 07:52:50.599314356 +0000 UTC m=+0.143869919 container died 21ca367deeb1a185fbbb4fbb5591d285f23fae9fa7acd06487f63fd0e6c2250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:52:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-96814251454a23c917bc1f00b1ba7602304002c17a8faec1c0a1355200f36fcc-merged.mount: Deactivated successfully.
Oct  2 03:52:50 np0005465604 podman[115451]: 2025-10-02 07:52:50.644989888 +0000 UTC m=+0.189545481 container remove 21ca367deeb1a185fbbb4fbb5591d285f23fae9fa7acd06487f63fd0e6c2250a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_merkle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 03:52:50 np0005465604 systemd[1]: libpod-conmon-21ca367deeb1a185fbbb4fbb5591d285f23fae9fa7acd06487f63fd0e6c2250a.scope: Deactivated successfully.
Oct  2 03:52:50 np0005465604 podman[115492]: 2025-10-02 07:52:50.844328584 +0000 UTC m=+0.043003306 container create 8c9806a42f38b622f9cc209037e6bd8c0e83caaf01f89210a0afafb8f8c215ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 03:52:50 np0005465604 systemd[1]: Started libpod-conmon-8c9806a42f38b622f9cc209037e6bd8c0e83caaf01f89210a0afafb8f8c215ca.scope.
Oct  2 03:52:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:52:50 np0005465604 podman[115492]: 2025-10-02 07:52:50.823045088 +0000 UTC m=+0.021719850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:52:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c1636b9175f352f08e95269938e33e4b90c93900a8e8f5332855009cb3d328/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c1636b9175f352f08e95269938e33e4b90c93900a8e8f5332855009cb3d328/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c1636b9175f352f08e95269938e33e4b90c93900a8e8f5332855009cb3d328/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c1636b9175f352f08e95269938e33e4b90c93900a8e8f5332855009cb3d328/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:50 np0005465604 podman[115492]: 2025-10-02 07:52:50.936521447 +0000 UTC m=+0.135196179 container init 8c9806a42f38b622f9cc209037e6bd8c0e83caaf01f89210a0afafb8f8c215ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 03:52:50 np0005465604 podman[115492]: 2025-10-02 07:52:50.948021547 +0000 UTC m=+0.146696279 container start 8c9806a42f38b622f9cc209037e6bd8c0e83caaf01f89210a0afafb8f8c215ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 03:52:50 np0005465604 podman[115492]: 2025-10-02 07:52:50.951302603 +0000 UTC m=+0.149977325 container attach 8c9806a42f38b622f9cc209037e6bd8c0e83caaf01f89210a0afafb8f8c215ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:52:51 np0005465604 systemd-logind[787]: New session 38 of user zuul.
Oct  2 03:52:51 np0005465604 systemd[1]: Started Session 38 of User zuul.
Oct  2 03:52:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]: {
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:    "0": [
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:        {
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "devices": [
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "/dev/loop3"
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            ],
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_name": "ceph_lv0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_size": "21470642176",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "name": "ceph_lv0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "tags": {
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.cluster_name": "ceph",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.crush_device_class": "",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.encrypted": "0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.osd_id": "0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.type": "block",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.vdo": "0"
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            },
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "type": "block",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "vg_name": "ceph_vg0"
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:        }
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:    ],
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:    "1": [
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:        {
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "devices": [
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "/dev/loop4"
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            ],
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_name": "ceph_lv1",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_size": "21470642176",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "name": "ceph_lv1",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "tags": {
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.cluster_name": "ceph",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.crush_device_class": "",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.encrypted": "0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.osd_id": "1",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.type": "block",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.vdo": "0"
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            },
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "type": "block",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "vg_name": "ceph_vg1"
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:        }
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:    ],
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:    "2": [
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:        {
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "devices": [
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "/dev/loop5"
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            ],
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_name": "ceph_lv2",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_size": "21470642176",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "name": "ceph_lv2",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "tags": {
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.cluster_name": "ceph",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.crush_device_class": "",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.encrypted": "0",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.osd_id": "2",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.type": "block",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:                "ceph.vdo": "0"
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            },
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "type": "block",
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:            "vg_name": "ceph_vg2"
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:        }
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]:    ]
Oct  2 03:52:51 np0005465604 laughing_chaum[115508]: }
Oct  2 03:52:51 np0005465604 systemd[1]: libpod-8c9806a42f38b622f9cc209037e6bd8c0e83caaf01f89210a0afafb8f8c215ca.scope: Deactivated successfully.
Oct  2 03:52:51 np0005465604 conmon[115508]: conmon 8c9806a42f38b622f9cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c9806a42f38b622f9cc209037e6bd8c0e83caaf01f89210a0afafb8f8c215ca.scope/container/memory.events
Oct  2 03:52:51 np0005465604 podman[115573]: 2025-10-02 07:52:51.839175026 +0000 UTC m=+0.048221576 container died 8c9806a42f38b622f9cc209037e6bd8c0e83caaf01f89210a0afafb8f8c215ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 03:52:51 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct  2 03:52:51 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct  2 03:52:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e1c1636b9175f352f08e95269938e33e4b90c93900a8e8f5332855009cb3d328-merged.mount: Deactivated successfully.
Oct  2 03:52:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:51 np0005465604 podman[115573]: 2025-10-02 07:52:51.90848191 +0000 UTC m=+0.117528440 container remove 8c9806a42f38b622f9cc209037e6bd8c0e83caaf01f89210a0afafb8f8c215ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:52:51 np0005465604 systemd[1]: libpod-conmon-8c9806a42f38b622f9cc209037e6bd8c0e83caaf01f89210a0afafb8f8c215ca.scope: Deactivated successfully.
Oct  2 03:52:51 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct  2 03:52:52 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct  2 03:52:52 np0005465604 python3.9[115773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:52:52 np0005465604 podman[115830]: 2025-10-02 07:52:52.644290022 +0000 UTC m=+0.036236560 container create 70087f3e4df8a49d40cf48e2496882bb52ef0ec70d5c669216199af5b52a2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 03:52:52 np0005465604 systemd[1]: Started libpod-conmon-70087f3e4df8a49d40cf48e2496882bb52ef0ec70d5c669216199af5b52a2581.scope.
Oct  2 03:52:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:52:52 np0005465604 podman[115830]: 2025-10-02 07:52:52.706902969 +0000 UTC m=+0.098849517 container init 70087f3e4df8a49d40cf48e2496882bb52ef0ec70d5c669216199af5b52a2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct  2 03:52:52 np0005465604 podman[115830]: 2025-10-02 07:52:52.712347295 +0000 UTC m=+0.104293813 container start 70087f3e4df8a49d40cf48e2496882bb52ef0ec70d5c669216199af5b52a2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:52:52 np0005465604 podman[115830]: 2025-10-02 07:52:52.715455766 +0000 UTC m=+0.107402284 container attach 70087f3e4df8a49d40cf48e2496882bb52ef0ec70d5c669216199af5b52a2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  2 03:52:52 np0005465604 practical_cray[115846]: 167 167
Oct  2 03:52:52 np0005465604 systemd[1]: libpod-70087f3e4df8a49d40cf48e2496882bb52ef0ec70d5c669216199af5b52a2581.scope: Deactivated successfully.
Oct  2 03:52:52 np0005465604 conmon[115846]: conmon 70087f3e4df8a49d40cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70087f3e4df8a49d40cf48e2496882bb52ef0ec70d5c669216199af5b52a2581.scope/container/memory.events
Oct  2 03:52:52 np0005465604 podman[115830]: 2025-10-02 07:52:52.717938586 +0000 UTC m=+0.109885104 container died 70087f3e4df8a49d40cf48e2496882bb52ef0ec70d5c669216199af5b52a2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 03:52:52 np0005465604 podman[115830]: 2025-10-02 07:52:52.629429732 +0000 UTC m=+0.021376270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:52:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3cfdea78431e61d754a984517998bf27c3eeb553c825420b79502e5e18701957-merged.mount: Deactivated successfully.
Oct  2 03:52:52 np0005465604 podman[115830]: 2025-10-02 07:52:52.75404493 +0000 UTC m=+0.145991448 container remove 70087f3e4df8a49d40cf48e2496882bb52ef0ec70d5c669216199af5b52a2581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cray, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:52:52 np0005465604 systemd[1]: libpod-conmon-70087f3e4df8a49d40cf48e2496882bb52ef0ec70d5c669216199af5b52a2581.scope: Deactivated successfully.
Oct  2 03:52:52 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Oct  2 03:52:52 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Oct  2 03:52:52 np0005465604 podman[115894]: 2025-10-02 07:52:52.907982922 +0000 UTC m=+0.037418187 container create c6d0e8303f3385f1b6d3410bebb486e4868fa6701ac3706308c5f8f4a241543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:52:52 np0005465604 systemd[1]: Started libpod-conmon-c6d0e8303f3385f1b6d3410bebb486e4868fa6701ac3706308c5f8f4a241543e.scope.
Oct  2 03:52:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:52:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a575a3e0a7a05550739a67cde428e89b6e288983b5afdaced1445b8a3f184bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a575a3e0a7a05550739a67cde428e89b6e288983b5afdaced1445b8a3f184bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a575a3e0a7a05550739a67cde428e89b6e288983b5afdaced1445b8a3f184bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a575a3e0a7a05550739a67cde428e89b6e288983b5afdaced1445b8a3f184bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:52:52 np0005465604 podman[115894]: 2025-10-02 07:52:52.892592546 +0000 UTC m=+0.022027841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:52:52 np0005465604 podman[115894]: 2025-10-02 07:52:52.99849676 +0000 UTC m=+0.127932025 container init c6d0e8303f3385f1b6d3410bebb486e4868fa6701ac3706308c5f8f4a241543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 03:52:53 np0005465604 podman[115894]: 2025-10-02 07:52:53.004579386 +0000 UTC m=+0.134014661 container start c6d0e8303f3385f1b6d3410bebb486e4868fa6701ac3706308c5f8f4a241543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:52:53 np0005465604 podman[115894]: 2025-10-02 07:52:53.008970848 +0000 UTC m=+0.138406143 container attach c6d0e8303f3385f1b6d3410bebb486e4868fa6701ac3706308c5f8f4a241543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 03:52:53 np0005465604 python3.9[116040]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:52:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:53 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Oct  2 03:52:54 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]: {
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "osd_id": 2,
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "type": "bluestore"
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:    },
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "osd_id": 1,
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "type": "bluestore"
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:    },
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "osd_id": 0,
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:        "type": "bluestore"
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]:    }
Oct  2 03:52:54 np0005465604 hungry_feistel[115914]: }
Oct  2 03:52:54 np0005465604 systemd[1]: libpod-c6d0e8303f3385f1b6d3410bebb486e4868fa6701ac3706308c5f8f4a241543e.scope: Deactivated successfully.
Oct  2 03:52:54 np0005465604 podman[115894]: 2025-10-02 07:52:54.053575133 +0000 UTC m=+1.183010398 container died c6d0e8303f3385f1b6d3410bebb486e4868fa6701ac3706308c5f8f4a241543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 03:52:54 np0005465604 systemd[1]: libpod-c6d0e8303f3385f1b6d3410bebb486e4868fa6701ac3706308c5f8f4a241543e.scope: Consumed 1.050s CPU time.
Oct  2 03:52:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0a575a3e0a7a05550739a67cde428e89b6e288983b5afdaced1445b8a3f184bd-merged.mount: Deactivated successfully.
Oct  2 03:52:54 np0005465604 podman[115894]: 2025-10-02 07:52:54.099848185 +0000 UTC m=+1.229283450 container remove c6d0e8303f3385f1b6d3410bebb486e4868fa6701ac3706308c5f8f4a241543e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 03:52:54 np0005465604 systemd[1]: libpod-conmon-c6d0e8303f3385f1b6d3410bebb486e4868fa6701ac3706308c5f8f4a241543e.scope: Deactivated successfully.
Oct  2 03:52:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:52:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:52:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:52:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:52:54 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 80a9492e-2f9e-4fb5-a13f-25ec2928db98 does not exist
Oct  2 03:52:54 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 66170222-267c-421c-b5f9-9107eb55669d does not exist
Oct  2 03:52:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:52:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:52:54 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct  2 03:52:54 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct  2 03:52:54 np0005465604 python3.9[116322]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:52:54 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct  2 03:52:55 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct  2 03:52:55 np0005465604 systemd[1]: session-38.scope: Deactivated successfully.
Oct  2 03:52:55 np0005465604 systemd[1]: session-38.scope: Consumed 2.500s CPU time.
Oct  2 03:52:55 np0005465604 systemd-logind[787]: Session 38 logged out. Waiting for processes to exit.
Oct  2 03:52:55 np0005465604 systemd-logind[787]: Removed session 38.
Oct  2 03:52:55 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct  2 03:52:55 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct  2 03:52:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:52:56 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct  2 03:52:57 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct  2 03:52:57 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct  2 03:52:57 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct  2 03:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:52:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:52:58 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct  2 03:52:58 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct  2 03:52:58 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct  2 03:52:58 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct  2 03:52:59 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct  2 03:52:59 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct  2 03:52:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:00 np0005465604 systemd-logind[787]: New session 39 of user zuul.
Oct  2 03:53:00 np0005465604 systemd[1]: Started Session 39 of User zuul.
Oct  2 03:53:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:53:01 np0005465604 python3.9[116502]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:53:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:01 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct  2 03:53:01 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct  2 03:53:02 np0005465604 python3.9[116656]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:53:02 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct  2 03:53:02 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct  2 03:53:03 np0005465604 python3.9[116812]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:53:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:04 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.4 deep-scrub starts
Oct  2 03:53:04 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.4 deep-scrub ok
Oct  2 03:53:04 np0005465604 python3.9[116896]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:53:05 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Oct  2 03:53:05 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Oct  2 03:53:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:06 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct  2 03:53:06 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct  2 03:53:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:53:06 np0005465604 python3.9[117049]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:53:06 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Oct  2 03:53:06 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Oct  2 03:53:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:07 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct  2 03:53:07 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct  2 03:53:08 np0005465604 python3.9[117244]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:53:08 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct  2 03:53:08 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct  2 03:53:08 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct  2 03:53:08 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct  2 03:53:09 np0005465604 python3.9[117396]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:53:09 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct  2 03:53:09 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct  2 03:53:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:10 np0005465604 python3.9[117561]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:53:10 np0005465604 python3.9[117639]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:53:10 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct  2 03:53:10 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct  2 03:53:11 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.10 deep-scrub starts
Oct  2 03:53:11 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.10 deep-scrub ok
Oct  2 03:53:11 np0005465604 python3.9[117791]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:53:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:53:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:11 np0005465604 python3.9[117869]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:53:11 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct  2 03:53:11 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct  2 03:53:11 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Oct  2 03:53:12 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Oct  2 03:53:12 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct  2 03:53:12 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct  2 03:53:12 np0005465604 python3.9[118021]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:53:13 np0005465604 python3.9[118173]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:53:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:14 np0005465604 python3.9[118325]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:53:14 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct  2 03:53:14 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct  2 03:53:14 np0005465604 python3.9[118477]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:53:14 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct  2 03:53:14 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct  2 03:53:15 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct  2 03:53:15 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct  2 03:53:15 np0005465604 python3.9[118629]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:53:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:16 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct  2 03:53:16 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct  2 03:53:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:53:16 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct  2 03:53:17 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct  2 03:53:17 np0005465604 python3.9[118782]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:53:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:18 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct  2 03:53:18 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct  2 03:53:18 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct  2 03:53:18 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct  2 03:53:18 np0005465604 python3.9[118936]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:53:19 np0005465604 python3.9[119088]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:53:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:20 np0005465604 python3.9[119240]: ansible-service_facts Invoked
Oct  2 03:53:20 np0005465604 network[119257]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 03:53:20 np0005465604 network[119258]: 'network-scripts' will be removed from distribution in near future.
Oct  2 03:53:20 np0005465604 network[119259]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 03:53:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:53:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:22 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct  2 03:53:22 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct  2 03:53:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:24 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct  2 03:53:24 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct  2 03:53:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:26 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct  2 03:53:26 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Oct  2 03:53:26 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct  2 03:53:26 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Oct  2 03:53:26 np0005465604 python3.9[119714]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:53:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:53:26 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct  2 03:53:27 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct  2 03:53:27 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct  2 03:53:27 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct  2 03:53:27 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct  2 03:53:27 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:53:27
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'vms', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'images', 'default.rgw.control']
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:53:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:53:28 np0005465604 python3.9[119867]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  2 03:53:29 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.f deep-scrub starts
Oct  2 03:53:29 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.f deep-scrub ok
Oct  2 03:53:29 np0005465604 python3.9[120019]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:53:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:29 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct  2 03:53:29 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct  2 03:53:30 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct  2 03:53:30 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct  2 03:53:30 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct  2 03:53:30 np0005465604 ceph-osd[89321]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct  2 03:53:30 np0005465604 python3.9[120097]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:53:30 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct  2 03:53:30 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct  2 03:53:31 np0005465604 python3.9[120249]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:53:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:53:31 np0005465604 python3.9[120327]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:53:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:31 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.15 deep-scrub starts
Oct  2 03:53:31 np0005465604 ceph-osd[88314]: log_channel(cluster) log [DBG] : 3.15 deep-scrub ok
Oct  2 03:53:32 np0005465604 python3.9[120479]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:53:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:34 np0005465604 python3.9[120631]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:53:35 np0005465604 python3.9[120715]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:53:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:36 np0005465604 systemd[1]: session-39.scope: Deactivated successfully.
Oct  2 03:53:36 np0005465604 systemd[1]: session-39.scope: Consumed 24.138s CPU time.
Oct  2 03:53:36 np0005465604 systemd-logind[787]: Session 39 logged out. Waiting for processes to exit.
Oct  2 03:53:36 np0005465604 systemd-logind[787]: Removed session 39.
Oct  2 03:53:36 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct  2 03:53:36 np0005465604 ceph-osd[90385]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct  2 03:53:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 03:53:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:24 np0005465604 rsyslogd[1004]: imjournal: 2057 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  2 03:56:25 np0005465604 python3.9[142779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:25 np0005465604 python3.9[142857]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:26 np0005465604 python3.9[143009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:56:26 np0005465604 python3.9[143087]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.7ujb_5z8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:27 np0005465604 python3.9[143239]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:56:27
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.mgr']
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:56:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:28 np0005465604 python3.9[143317]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:29 np0005465604 python3.9[143469]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:56:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:30 np0005465604 python3[143622]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 03:56:31 np0005465604 python3.9[143774]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:56:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:32 np0005465604 python3.9[143899]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391790.607795-157-200545910608215/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:32 np0005465604 python3.9[144051]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:33 np0005465604 python3.9[144176]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391792.186916-172-49696075043738/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:34 np0005465604 python3.9[144328]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:35 np0005465604 python3.9[144453]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391793.8070493-187-209947028585878/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:36 np0005465604 python3.9[144605]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:56:36 np0005465604 python3.9[144730]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391795.4689538-202-43097856163206/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:37 np0005465604 python3.9[144882]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:37 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 03:56:38 np0005465604 python3.9[145007]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759391797.1400874-217-53199612474923/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:39 np0005465604 python3.9[145159]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:39 np0005465604 python3.9[145311]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:56:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:40 np0005465604 python3.9[145466]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 03:56:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2025 writes, 9006 keys, 2025 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2025 writes, 2025 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2025 writes, 9006 keys, 2025 commit groups, 1.0 writes per commit group, ingest: 11.49 MB, 0.02 MB/s#012Interval WAL: 2025 writes, 2025 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    140.1      0.06              0.03         3    0.020       0      0       0.0       0.0#012  L6      1/0    6.60 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    180.7    158.4      0.09              0.04         2    0.044    7160    732       0.0       0.0#012 Sum      1/0    6.60 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    106.4    150.9      0.15              0.08         5    0.030    7160    732       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    110.4    156.2      0.14              0.08         4    0.036    7160    732       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    180.7    158.4      0.09              0.04         2    0.044    7160    732       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    152.8      0.06              0.03         2    0.028       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 308.00 MB usage: 570.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000863 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(36,482.47 KB,0.152974%) FilterBlock(6,28.55 KB,0.00905124%) IndexBlock(6,59.16 KB,0.0187564%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 03:56:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:56:41 np0005465604 python3.9[145618]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:56:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:42 np0005465604 python3.9[145771]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:56:43 np0005465604 python3.9[145925]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:56:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:44 np0005465604 python3.9[146080]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:45 np0005465604 python3.9[146230]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:56:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:56:46 np0005465604 python3.9[146383]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:56:46 np0005465604 ovs-vsctl[146384]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct  2 03:56:47 np0005465604 python3.9[146536]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:56:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:48 np0005465604 python3.9[146691]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:56:48 np0005465604 ovs-vsctl[146692]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct  2 03:56:49 np0005465604 python3.9[146842]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:56:49 np0005465604 python3.9[146996]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:56:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:50 np0005465604 python3.9[147148]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:51 np0005465604 python3.9[147226]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:56:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:56:51 np0005465604 python3.9[147378]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:52 np0005465604 python3.9[147456]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:56:52 np0005465604 python3.9[147608]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:53 np0005465604 python3.9[147760]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:54 np0005465604 python3.9[147838]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:55 np0005465604 python3.9[147990]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:55 np0005465604 python3.9[148068]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:56 np0005465604 python3.9[148220]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:56:56 np0005465604 systemd[1]: Reloading.
Oct  2 03:56:56 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:56:56 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:56:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:56:57 np0005465604 python3.9[148409]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:56:57 np0005465604 python3.9[148487]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:56:58 np0005465604 python3.9[148639]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:56:59 np0005465604 python3.9[148717]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:56:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:00 np0005465604 python3.9[148869]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:57:00 np0005465604 systemd[1]: Reloading.
Oct  2 03:57:00 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:57:00 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:57:00 np0005465604 systemd[1]: Starting Create netns directory...
Oct  2 03:57:00 np0005465604 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 03:57:00 np0005465604 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 03:57:00 np0005465604 systemd[1]: Finished Create netns directory.
Oct  2 03:57:01 np0005465604 python3.9[149062]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:02 np0005465604 python3.9[149214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:02 np0005465604 python3.9[149337]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759391821.6378107-468-179510424629386/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:03 np0005465604 python3.9[149489]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:04 np0005465604 python3.9[149641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:05 np0005465604 python3.9[149764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759391823.9834917-493-199065351504855/.source.json _original_basename=.f4i1rcyf follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:57:05 np0005465604 python3.9[149916]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:57:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:08 np0005465604 python3.9[150343]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  2 03:57:09 np0005465604 python3.9[150495]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 03:57:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:10 np0005465604 python3.9[150647]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 03:57:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:12 np0005465604 python3[150825]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 03:57:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:16 np0005465604 podman[150839]: 2025-10-02 07:57:16.558195965 +0000 UTC m=+4.499238720 image pull ceb6fcca0131acbc0ff37d5322c126e14f8045fca848e7440fedac2d6444d8c2 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0
Oct  2 03:57:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:16 np0005465604 podman[150957]: 2025-10-02 07:57:16.718242779 +0000 UTC m=+0.052666060 container create 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 03:57:16 np0005465604 podman[150957]: 2025-10-02 07:57:16.692383675 +0000 UTC m=+0.026806976 image pull ceb6fcca0131acbc0ff37d5322c126e14f8045fca848e7440fedac2d6444d8c2 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0
Oct  2 03:57:16 np0005465604 python3[150825]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0
Oct  2 03:57:17 np0005465604 python3.9[151261]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:57:17 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 1a1f6070-b169-4805-a19e-ac55e65051d1 does not exist
Oct  2 03:57:17 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0d3bd1d3-f0bf-4d69-92a2-0ef228b27587 does not exist
Oct  2 03:57:17 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d451c08f-c25d-4ad1-ad06-8b4738adf6b0 does not exist
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:57:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:57:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:18 np0005465604 podman[151567]: 2025-10-02 07:57:18.134062471 +0000 UTC m=+0.039146504 container create 5c902b7ba3b0a531b544c088717d7c057258bc382dab600509703ef3ff69ed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:57:18 np0005465604 systemd[1]: Started libpod-conmon-5c902b7ba3b0a531b544c088717d7c057258bc382dab600509703ef3ff69ed5c.scope.
Oct  2 03:57:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:57:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:57:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:57:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:57:18 np0005465604 podman[151567]: 2025-10-02 07:57:18.114539241 +0000 UTC m=+0.019623304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:57:18 np0005465604 podman[151567]: 2025-10-02 07:57:18.213806714 +0000 UTC m=+0.118890767 container init 5c902b7ba3b0a531b544c088717d7c057258bc382dab600509703ef3ff69ed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:57:18 np0005465604 podman[151567]: 2025-10-02 07:57:18.222112132 +0000 UTC m=+0.127196165 container start 5c902b7ba3b0a531b544c088717d7c057258bc382dab600509703ef3ff69ed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 03:57:18 np0005465604 podman[151567]: 2025-10-02 07:57:18.226230494 +0000 UTC m=+0.131314537 container attach 5c902b7ba3b0a531b544c088717d7c057258bc382dab600509703ef3ff69ed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 03:57:18 np0005465604 systemd[1]: libpod-5c902b7ba3b0a531b544c088717d7c057258bc382dab600509703ef3ff69ed5c.scope: Deactivated successfully.
Oct  2 03:57:18 np0005465604 gallant_chaum[151589]: 167 167
Oct  2 03:57:18 np0005465604 podman[151567]: 2025-10-02 07:57:18.228536009 +0000 UTC m=+0.133620042 container died 5c902b7ba3b0a531b544c088717d7c057258bc382dab600509703ef3ff69ed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:57:18 np0005465604 conmon[151589]: conmon 5c902b7ba3b0a531b544 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c902b7ba3b0a531b544c088717d7c057258bc382dab600509703ef3ff69ed5c.scope/container/memory.events
Oct  2 03:57:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7dd2af9b481c0a4b592072ac9a12e5a8e64ea0dd276b3fb06fea84bb6a9c2e6f-merged.mount: Deactivated successfully.
Oct  2 03:57:18 np0005465604 podman[151567]: 2025-10-02 07:57:18.273544072 +0000 UTC m=+0.178628115 container remove 5c902b7ba3b0a531b544c088717d7c057258bc382dab600509703ef3ff69ed5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:57:18 np0005465604 systemd[1]: libpod-conmon-5c902b7ba3b0a531b544c088717d7c057258bc382dab600509703ef3ff69ed5c.scope: Deactivated successfully.
Oct  2 03:57:18 np0005465604 python3.9[151585]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:57:18 np0005465604 podman[151614]: 2025-10-02 07:57:18.422360703 +0000 UTC m=+0.039872457 container create bd943f74fb1b05296cf051bc9a998c6c5549602542f0004ade325511a74812e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hoover, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 03:57:18 np0005465604 systemd[1]: Started libpod-conmon-bd943f74fb1b05296cf051bc9a998c6c5549602542f0004ade325511a74812e3.scope.
Oct  2 03:57:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:57:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4449151b94a7b90830391c3a973666d37be4afa0906ea8b9e7915b28c7157119/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:18 np0005465604 podman[151614]: 2025-10-02 07:57:18.405107126 +0000 UTC m=+0.022618910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:57:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4449151b94a7b90830391c3a973666d37be4afa0906ea8b9e7915b28c7157119/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4449151b94a7b90830391c3a973666d37be4afa0906ea8b9e7915b28c7157119/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4449151b94a7b90830391c3a973666d37be4afa0906ea8b9e7915b28c7157119/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4449151b94a7b90830391c3a973666d37be4afa0906ea8b9e7915b28c7157119/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:18 np0005465604 podman[151614]: 2025-10-02 07:57:18.519870279 +0000 UTC m=+0.137382063 container init bd943f74fb1b05296cf051bc9a998c6c5549602542f0004ade325511a74812e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 03:57:18 np0005465604 podman[151614]: 2025-10-02 07:57:18.528918231 +0000 UTC m=+0.146429995 container start bd943f74fb1b05296cf051bc9a998c6c5549602542f0004ade325511a74812e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  2 03:57:18 np0005465604 podman[151614]: 2025-10-02 07:57:18.532438715 +0000 UTC m=+0.149950499 container attach bd943f74fb1b05296cf051bc9a998c6c5549602542f0004ade325511a74812e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hoover, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 03:57:18 np0005465604 python3.9[151709]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:57:19 np0005465604 elegant_hoover[151652]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:57:19 np0005465604 elegant_hoover[151652]: --> relative data size: 1.0
Oct  2 03:57:19 np0005465604 elegant_hoover[151652]: --> All data devices are unavailable
Oct  2 03:57:19 np0005465604 systemd[1]: libpod-bd943f74fb1b05296cf051bc9a998c6c5549602542f0004ade325511a74812e3.scope: Deactivated successfully.
Oct  2 03:57:19 np0005465604 podman[151614]: 2025-10-02 07:57:19.5600067 +0000 UTC m=+1.177518494 container died bd943f74fb1b05296cf051bc9a998c6c5549602542f0004ade325511a74812e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hoover, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct  2 03:57:19 np0005465604 python3.9[151870]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759391838.900873-581-10709088583025/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:57:19 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4449151b94a7b90830391c3a973666d37be4afa0906ea8b9e7915b28c7157119-merged.mount: Deactivated successfully.
Oct  2 03:57:19 np0005465604 podman[151614]: 2025-10-02 07:57:19.616523653 +0000 UTC m=+1.234035427 container remove bd943f74fb1b05296cf051bc9a998c6c5549602542f0004ade325511a74812e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:57:19 np0005465604 systemd[1]: libpod-conmon-bd943f74fb1b05296cf051bc9a998c6c5549602542f0004ade325511a74812e3.scope: Deactivated successfully.
Oct  2 03:57:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:20 np0005465604 python3.9[152065]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 03:57:20 np0005465604 systemd[1]: Reloading.
Oct  2 03:57:20 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:57:20 np0005465604 podman[152113]: 2025-10-02 07:57:20.257526976 +0000 UTC m=+0.083004250 container create cbc526672cc5874763c9046dade5644ad01d8c10b396c3da9d66354a45ade268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 03:57:20 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:57:20 np0005465604 podman[152113]: 2025-10-02 07:57:20.219135846 +0000 UTC m=+0.044613200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:57:20 np0005465604 systemd[1]: Started libpod-conmon-cbc526672cc5874763c9046dade5644ad01d8c10b396c3da9d66354a45ade268.scope.
Oct  2 03:57:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:57:20 np0005465604 podman[152113]: 2025-10-02 07:57:20.512088499 +0000 UTC m=+0.337565783 container init cbc526672cc5874763c9046dade5644ad01d8c10b396c3da9d66354a45ade268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 03:57:20 np0005465604 podman[152113]: 2025-10-02 07:57:20.523113525 +0000 UTC m=+0.348590829 container start cbc526672cc5874763c9046dade5644ad01d8c10b396c3da9d66354a45ade268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:57:20 np0005465604 podman[152113]: 2025-10-02 07:57:20.527791126 +0000 UTC m=+0.353268420 container attach cbc526672cc5874763c9046dade5644ad01d8c10b396c3da9d66354a45ade268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 03:57:20 np0005465604 focused_brattain[152165]: 167 167
Oct  2 03:57:20 np0005465604 systemd[1]: libpod-cbc526672cc5874763c9046dade5644ad01d8c10b396c3da9d66354a45ade268.scope: Deactivated successfully.
Oct  2 03:57:20 np0005465604 conmon[152165]: conmon cbc526672cc5874763c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cbc526672cc5874763c9046dade5644ad01d8c10b396c3da9d66354a45ade268.scope/container/memory.events
Oct  2 03:57:20 np0005465604 podman[152113]: 2025-10-02 07:57:20.531714652 +0000 UTC m=+0.357191936 container died cbc526672cc5874763c9046dade5644ad01d8c10b396c3da9d66354a45ade268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:57:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-19ba9f3b585ffaefe4ee0773a3a292129b195cbd05c0daf851a6f75177648179-merged.mount: Deactivated successfully.
Oct  2 03:57:20 np0005465604 podman[152113]: 2025-10-02 07:57:20.576645662 +0000 UTC m=+0.402122926 container remove cbc526672cc5874763c9046dade5644ad01d8c10b396c3da9d66354a45ade268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 03:57:20 np0005465604 systemd[1]: libpod-conmon-cbc526672cc5874763c9046dade5644ad01d8c10b396c3da9d66354a45ade268.scope: Deactivated successfully.
Oct  2 03:57:20 np0005465604 podman[152212]: 2025-10-02 07:57:20.736085176 +0000 UTC m=+0.046104939 container create 96047e3c8eef0f0cd136f0a3d459f7fc70af4c9d8d39473770290887f49f7b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 03:57:20 np0005465604 systemd[1]: Started libpod-conmon-96047e3c8eef0f0cd136f0a3d459f7fc70af4c9d8d39473770290887f49f7b4d.scope.
Oct  2 03:57:20 np0005465604 podman[152212]: 2025-10-02 07:57:20.716127163 +0000 UTC m=+0.026146946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:57:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:57:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a5721d95c75d43e6f530a4451f345c70bbbd14af97dec6362d2c77260bf5998/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a5721d95c75d43e6f530a4451f345c70bbbd14af97dec6362d2c77260bf5998/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a5721d95c75d43e6f530a4451f345c70bbbd14af97dec6362d2c77260bf5998/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a5721d95c75d43e6f530a4451f345c70bbbd14af97dec6362d2c77260bf5998/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:20 np0005465604 podman[152212]: 2025-10-02 07:57:20.838137529 +0000 UTC m=+0.148157392 container init 96047e3c8eef0f0cd136f0a3d459f7fc70af4c9d8d39473770290887f49f7b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:57:20 np0005465604 podman[152212]: 2025-10-02 07:57:20.84902618 +0000 UTC m=+0.159045943 container start 96047e3c8eef0f0cd136f0a3d459f7fc70af4c9d8d39473770290887f49f7b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:57:20 np0005465604 podman[152212]: 2025-10-02 07:57:20.852844453 +0000 UTC m=+0.162864306 container attach 96047e3c8eef0f0cd136f0a3d459f7fc70af4c9d8d39473770290887f49f7b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:57:21 np0005465604 python3.9[152286]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:57:21 np0005465604 systemd[1]: Reloading.
Oct  2 03:57:21 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:57:21 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:57:21 np0005465604 systemd[1]: Starting ovn_controller container...
Oct  2 03:57:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:57:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a252540bce246d8c05ce81ae2d51db0beb06a69507466923bdb206ee5984f223/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]: {
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:    "0": [
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:        {
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "devices": [
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "/dev/loop3"
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            ],
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_name": "ceph_lv0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_size": "21470642176",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "name": "ceph_lv0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "tags": {
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.cluster_name": "ceph",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.crush_device_class": "",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.encrypted": "0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.osd_id": "0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.type": "block",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.vdo": "0"
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            },
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "type": "block",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "vg_name": "ceph_vg0"
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:        }
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:    ],
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:    "1": [
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:        {
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "devices": [
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "/dev/loop4"
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            ],
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_name": "ceph_lv1",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_size": "21470642176",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "name": "ceph_lv1",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "tags": {
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.cluster_name": "ceph",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.crush_device_class": "",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.encrypted": "0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.osd_id": "1",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.type": "block",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.vdo": "0"
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            },
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "type": "block",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "vg_name": "ceph_vg1"
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:        }
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:    ],
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:    "2": [
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:        {
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "devices": [
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "/dev/loop5"
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            ],
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_name": "ceph_lv2",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_size": "21470642176",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "name": "ceph_lv2",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "tags": {
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.cluster_name": "ceph",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.crush_device_class": "",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.encrypted": "0",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.osd_id": "2",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.type": "block",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:                "ceph.vdo": "0"
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            },
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "type": "block",
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:            "vg_name": "ceph_vg2"
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:        }
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]:    ]
Oct  2 03:57:21 np0005465604 peaceful_tesla[152253]: }
Oct  2 03:57:21 np0005465604 systemd[1]: Started /usr/bin/podman healthcheck run 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa.
Oct  2 03:57:21 np0005465604 podman[152326]: 2025-10-02 07:57:21.733300882 +0000 UTC m=+0.164968894 container init 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  2 03:57:21 np0005465604 podman[152212]: 2025-10-02 07:57:21.749785434 +0000 UTC m=+1.059805207 container died 96047e3c8eef0f0cd136f0a3d459f7fc70af4c9d8d39473770290887f49f7b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:57:21 np0005465604 ovn_controller[152344]: + sudo -E kolla_set_configs
Oct  2 03:57:21 np0005465604 systemd[1]: libpod-96047e3c8eef0f0cd136f0a3d459f7fc70af4c9d8d39473770290887f49f7b4d.scope: Deactivated successfully.
Oct  2 03:57:21 np0005465604 podman[152326]: 2025-10-02 07:57:21.767513836 +0000 UTC m=+0.199181868 container start 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:57:21 np0005465604 edpm-start-podman-container[152326]: ovn_controller
Oct  2 03:57:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5a5721d95c75d43e6f530a4451f345c70bbbd14af97dec6362d2c77260bf5998-merged.mount: Deactivated successfully.
Oct  2 03:57:21 np0005465604 systemd[1]: Created slice User Slice of UID 0.
Oct  2 03:57:21 np0005465604 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  2 03:57:21 np0005465604 podman[152212]: 2025-10-02 07:57:21.81631101 +0000 UTC m=+1.126330783 container remove 96047e3c8eef0f0cd136f0a3d459f7fc70af4c9d8d39473770290887f49f7b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 03:57:21 np0005465604 systemd[1]: libpod-conmon-96047e3c8eef0f0cd136f0a3d459f7fc70af4c9d8d39473770290887f49f7b4d.scope: Deactivated successfully.
Oct  2 03:57:21 np0005465604 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  2 03:57:21 np0005465604 systemd[1]: Starting User Manager for UID 0...
Oct  2 03:57:21 np0005465604 edpm-start-podman-container[152325]: Creating additional drop-in dependency for "ovn_controller" (034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa)
Oct  2 03:57:21 np0005465604 systemd[1]: Reloading.
Oct  2 03:57:21 np0005465604 podman[152354]: 2025-10-02 07:57:21.903584706 +0000 UTC m=+0.124316532 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 03:57:21 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:57:21 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:57:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:22 np0005465604 systemd[152396]: Queued start job for default target Main User Target.
Oct  2 03:57:22 np0005465604 systemd[152396]: Created slice User Application Slice.
Oct  2 03:57:22 np0005465604 systemd[152396]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  2 03:57:22 np0005465604 systemd[152396]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 03:57:22 np0005465604 systemd[152396]: Reached target Paths.
Oct  2 03:57:22 np0005465604 systemd[152396]: Reached target Timers.
Oct  2 03:57:22 np0005465604 systemd[152396]: Starting D-Bus User Message Bus Socket...
Oct  2 03:57:22 np0005465604 systemd[152396]: Starting Create User's Volatile Files and Directories...
Oct  2 03:57:22 np0005465604 systemd[152396]: Finished Create User's Volatile Files and Directories.
Oct  2 03:57:22 np0005465604 systemd[152396]: Listening on D-Bus User Message Bus Socket.
Oct  2 03:57:22 np0005465604 systemd[152396]: Reached target Sockets.
Oct  2 03:57:22 np0005465604 systemd[152396]: Reached target Basic System.
Oct  2 03:57:22 np0005465604 systemd[152396]: Reached target Main User Target.
Oct  2 03:57:22 np0005465604 systemd[152396]: Startup finished in 156ms.
Oct  2 03:57:22 np0005465604 systemd[1]: Started User Manager for UID 0.
Oct  2 03:57:22 np0005465604 systemd[1]: Started ovn_controller container.
Oct  2 03:57:22 np0005465604 systemd[1]: 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa-b1c54196870f0b5.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 03:57:22 np0005465604 systemd[1]: 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa-b1c54196870f0b5.service: Failed with result 'exit-code'.
Oct  2 03:57:22 np0005465604 systemd[1]: Started Session c1 of User root.
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: INFO:__main__:Validating config file
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: INFO:__main__:Writing out command to execute
Oct  2 03:57:22 np0005465604 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: ++ cat /run_command
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: + ARGS=
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: + sudo kolla_copy_cacerts
Oct  2 03:57:22 np0005465604 systemd[1]: Started Session c2 of User root.
Oct  2 03:57:22 np0005465604 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: + [[ ! -n '' ]]
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: + . kolla_extend_start
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: + umask 0022
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  2 03:57:22 np0005465604 NetworkManager[45129]: <info>  [1759391842.3327] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  2 03:57:22 np0005465604 NetworkManager[45129]: <info>  [1759391842.3336] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 03:57:22 np0005465604 NetworkManager[45129]: <info>  [1759391842.3349] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  2 03:57:22 np0005465604 NetworkManager[45129]: <info>  [1759391842.3356] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  2 03:57:22 np0005465604 NetworkManager[45129]: <info>  [1759391842.3358] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 03:57:22 np0005465604 kernel: br-int: entered promiscuous mode
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00023|main|INFO|OVS feature set changed, force recompute.
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 03:57:22 np0005465604 NetworkManager[45129]: <info>  [1759391842.3689] manager: (ovn-c10f03-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 03:57:22 np0005465604 systemd-udevd[152570]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 03:57:22 np0005465604 kernel: genev_sys_6081: entered promiscuous mode
Oct  2 03:57:22 np0005465604 systemd-udevd[152598]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 03:57:22 np0005465604 NetworkManager[45129]: <info>  [1759391842.4050] device (genev_sys_6081): carrier: link connected
Oct  2 03:57:22 np0005465604 NetworkManager[45129]: <info>  [1759391842.4056] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct  2 03:57:22 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:22Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 03:57:22 np0005465604 podman[152765]: 2025-10-02 07:57:22.802421258 +0000 UTC m=+0.030968041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:57:23 np0005465604 podman[152765]: 2025-10-02 07:57:23.359016876 +0000 UTC m=+0.587563639 container create 1e4341192a3266b037b28f711dd3410a464c2f4b69a25e78b2a123a5bc898bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:57:23 np0005465604 systemd[1]: Started libpod-conmon-1e4341192a3266b037b28f711dd3410a464c2f4b69a25e78b2a123a5bc898bef.scope.
Oct  2 03:57:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:57:23 np0005465604 podman[152765]: 2025-10-02 07:57:23.609726105 +0000 UTC m=+0.838272868 container init 1e4341192a3266b037b28f711dd3410a464c2f4b69a25e78b2a123a5bc898bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct  2 03:57:23 np0005465604 python3.9[152768]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:57:23 np0005465604 podman[152765]: 2025-10-02 07:57:23.624469771 +0000 UTC m=+0.853016544 container start 1e4341192a3266b037b28f711dd3410a464c2f4b69a25e78b2a123a5bc898bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 03:57:23 np0005465604 podman[152765]: 2025-10-02 07:57:23.628939106 +0000 UTC m=+0.857485869 container attach 1e4341192a3266b037b28f711dd3410a464c2f4b69a25e78b2a123a5bc898bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 03:57:23 np0005465604 elegant_sanderson[152783]: 167 167
Oct  2 03:57:23 np0005465604 systemd[1]: libpod-1e4341192a3266b037b28f711dd3410a464c2f4b69a25e78b2a123a5bc898bef.scope: Deactivated successfully.
Oct  2 03:57:23 np0005465604 podman[152765]: 2025-10-02 07:57:23.63463927 +0000 UTC m=+0.863186033 container died 1e4341192a3266b037b28f711dd3410a464c2f4b69a25e78b2a123a5bc898bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 03:57:23 np0005465604 ovs-vsctl[152788]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  2 03:57:23 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0021d9996001f741d11e36c758b4c1c7289576c7ffb13d33ee2c35aaa10ea0e0-merged.mount: Deactivated successfully.
Oct  2 03:57:23 np0005465604 podman[152765]: 2025-10-02 07:57:23.69479549 +0000 UTC m=+0.923342243 container remove 1e4341192a3266b037b28f711dd3410a464c2f4b69a25e78b2a123a5bc898bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_sanderson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 03:57:23 np0005465604 systemd[1]: libpod-conmon-1e4341192a3266b037b28f711dd3410a464c2f4b69a25e78b2a123a5bc898bef.scope: Deactivated successfully.
Oct  2 03:57:23 np0005465604 podman[152832]: 2025-10-02 07:57:23.947490623 +0000 UTC m=+0.068522211 container create 8002008221c36dda00d2777c3a98c941cb366ef9e89e12ac2856450da267eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:57:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:23 np0005465604 systemd[1]: Started libpod-conmon-8002008221c36dda00d2777c3a98c941cb366ef9e89e12ac2856450da267eb7e.scope.
Oct  2 03:57:24 np0005465604 podman[152832]: 2025-10-02 07:57:23.924124519 +0000 UTC m=+0.045156147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:57:24 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:57:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d03b58e501476eb99d935a99990405e0a176830de42bb65f1405015fd5295ded/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d03b58e501476eb99d935a99990405e0a176830de42bb65f1405015fd5295ded/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d03b58e501476eb99d935a99990405e0a176830de42bb65f1405015fd5295ded/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d03b58e501476eb99d935a99990405e0a176830de42bb65f1405015fd5295ded/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:57:24 np0005465604 podman[152832]: 2025-10-02 07:57:24.056444029 +0000 UTC m=+0.177475617 container init 8002008221c36dda00d2777c3a98c941cb366ef9e89e12ac2856450da267eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.059957) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391844060088, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 769, "num_deletes": 251, "total_data_size": 1009881, "memory_usage": 1025384, "flush_reason": "Manual Compaction"}
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391844069555, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1000876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8945, "largest_seqno": 9713, "table_properties": {"data_size": 996947, "index_size": 1709, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8375, "raw_average_key_size": 18, "raw_value_size": 989084, "raw_average_value_size": 2193, "num_data_blocks": 80, "num_entries": 451, "num_filter_entries": 451, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391776, "oldest_key_time": 1759391776, "file_creation_time": 1759391844, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9639 microseconds, and 6088 cpu microseconds.
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 03:57:24 np0005465604 podman[152832]: 2025-10-02 07:57:24.072434455 +0000 UTC m=+0.193466043 container start 8002008221c36dda00d2777c3a98c941cb366ef9e89e12ac2856450da267eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.069644) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1000876 bytes OK
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.069669) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.072871) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.072938) EVENT_LOG_v1 {"time_micros": 1759391844072924, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.072974) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1005999, prev total WAL file size 1005999, number of live WAL files 2.
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.074421) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(977KB)], [23(6759KB)]
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391844074527, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7922112, "oldest_snapshot_seqno": -1}
Oct  2 03:57:24 np0005465604 podman[152832]: 2025-10-02 07:57:24.076497726 +0000 UTC m=+0.197529314 container attach 8002008221c36dda00d2777c3a98c941cb366ef9e89e12ac2856450da267eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3297 keys, 6098434 bytes, temperature: kUnknown
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391844138022, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6098434, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6074496, "index_size": 14613, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79912, "raw_average_key_size": 24, "raw_value_size": 6012943, "raw_average_value_size": 1823, "num_data_blocks": 636, "num_entries": 3297, "num_filter_entries": 3297, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759391844, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.138372) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6098434 bytes
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.139800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.6 rd, 95.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.6 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(14.0) write-amplify(6.1) OK, records in: 3811, records dropped: 514 output_compression: NoCompression
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.139834) EVENT_LOG_v1 {"time_micros": 1759391844139818, "job": 8, "event": "compaction_finished", "compaction_time_micros": 63590, "compaction_time_cpu_micros": 30494, "output_level": 6, "num_output_files": 1, "total_output_size": 6098434, "num_input_records": 3811, "num_output_records": 3297, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391844140336, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759391844142328, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.074275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.142419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.142430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.142433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.142435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 03:57:24 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-07:57:24.142437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 03:57:24 np0005465604 python3.9[152979]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:57:24 np0005465604 ovs-vsctl[152981]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]: {
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "osd_id": 2,
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "type": "bluestore"
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:    },
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "osd_id": 1,
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "type": "bluestore"
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:    },
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "osd_id": 0,
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:        "type": "bluestore"
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]:    }
Oct  2 03:57:25 np0005465604 cool_proskuriakova[152895]: }
Oct  2 03:57:25 np0005465604 systemd[1]: libpod-8002008221c36dda00d2777c3a98c941cb366ef9e89e12ac2856450da267eb7e.scope: Deactivated successfully.
Oct  2 03:57:25 np0005465604 systemd[1]: libpod-8002008221c36dda00d2777c3a98c941cb366ef9e89e12ac2856450da267eb7e.scope: Consumed 1.144s CPU time.
Oct  2 03:57:25 np0005465604 podman[152832]: 2025-10-02 07:57:25.216094966 +0000 UTC m=+1.337126554 container died 8002008221c36dda00d2777c3a98c941cb366ef9e89e12ac2856450da267eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:57:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d03b58e501476eb99d935a99990405e0a176830de42bb65f1405015fd5295ded-merged.mount: Deactivated successfully.
Oct  2 03:57:25 np0005465604 podman[152832]: 2025-10-02 07:57:25.291076995 +0000 UTC m=+1.412108593 container remove 8002008221c36dda00d2777c3a98c941cb366ef9e89e12ac2856450da267eb7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:57:25 np0005465604 systemd[1]: libpod-conmon-8002008221c36dda00d2777c3a98c941cb366ef9e89e12ac2856450da267eb7e.scope: Deactivated successfully.
Oct  2 03:57:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:57:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:57:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:57:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:57:25 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev bf54633a-d1a2-4c31-86d1-85cbaa6fe141 does not exist
Oct  2 03:57:25 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5d5f1335-f67e-4452-b3a7-be2acbf8760c does not exist
Oct  2 03:57:25 np0005465604 python3.9[153171]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:57:25 np0005465604 ovs-vsctl[153222]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct  2 03:57:25 np0005465604 systemd[1]: session-47.scope: Deactivated successfully.
Oct  2 03:57:25 np0005465604 systemd[1]: session-47.scope: Consumed 1min 812ms CPU time.
Oct  2 03:57:25 np0005465604 systemd-logind[787]: Session 47 logged out. Waiting for processes to exit.
Oct  2 03:57:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:25 np0005465604 systemd-logind[787]: Removed session 47.
Oct  2 03:57:26 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:57:26 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:57:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:57:27
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'backups', 'images', '.mgr']
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:57:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:31 np0005465604 systemd-logind[787]: New session 49 of user zuul.
Oct  2 03:57:31 np0005465604 systemd[1]: Started Session 49 of User zuul.
Oct  2 03:57:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:32 np0005465604 systemd[1]: Stopping User Manager for UID 0...
Oct  2 03:57:32 np0005465604 systemd[152396]: Activating special unit Exit the Session...
Oct  2 03:57:32 np0005465604 systemd[152396]: Stopped target Main User Target.
Oct  2 03:57:32 np0005465604 systemd[152396]: Stopped target Basic System.
Oct  2 03:57:32 np0005465604 systemd[152396]: Stopped target Paths.
Oct  2 03:57:32 np0005465604 systemd[152396]: Stopped target Sockets.
Oct  2 03:57:32 np0005465604 systemd[152396]: Stopped target Timers.
Oct  2 03:57:32 np0005465604 systemd[152396]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 03:57:32 np0005465604 systemd[152396]: Closed D-Bus User Message Bus Socket.
Oct  2 03:57:32 np0005465604 systemd[152396]: Stopped Create User's Volatile Files and Directories.
Oct  2 03:57:32 np0005465604 systemd[152396]: Removed slice User Application Slice.
Oct  2 03:57:32 np0005465604 systemd[152396]: Reached target Shutdown.
Oct  2 03:57:32 np0005465604 systemd[152396]: Finished Exit the Session.
Oct  2 03:57:32 np0005465604 systemd[152396]: Reached target Exit the Session.
Oct  2 03:57:32 np0005465604 systemd[1]: user@0.service: Deactivated successfully.
Oct  2 03:57:32 np0005465604 systemd[1]: Stopped User Manager for UID 0.
Oct  2 03:57:32 np0005465604 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  2 03:57:32 np0005465604 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  2 03:57:32 np0005465604 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  2 03:57:32 np0005465604 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  2 03:57:32 np0005465604 systemd[1]: Removed slice User Slice of UID 0.
Oct  2 03:57:32 np0005465604 python3.9[153402]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:57:33 np0005465604 python3.9[153558]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:34 np0005465604 python3.9[153710]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:35 np0005465604 python3.9[153862]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:36 np0005465604 python3.9[154014]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:36 np0005465604 python3.9[154166]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:37 np0005465604 python3.9[154316]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:57:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 03:57:38 np0005465604 python3.9[154468]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  2 03:57:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:40 np0005465604 python3.9[154618]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:40 np0005465604 python3.9[154740]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759391859.4966075-86-34107110545015/.source follow=False _original_basename=haproxy.j2 checksum=8f269363fe1324de1c9195a7cf313d369642b8c7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:41 np0005465604 python3.9[154890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:42 np0005465604 python3.9[155011]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759391861.232428-101-230028015153867/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:43 np0005465604 python3.9[155163]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:57:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:44 np0005465604 python3.9[155247]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:57:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:46 np0005465604 python3.9[155400]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 03:57:47 np0005465604 python3.9[155553]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:48 np0005465604 python3.9[155674]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759391867.0298011-138-10189003789712/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:48 np0005465604 python3.9[155824]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:49 np0005465604 python3.9[155945]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759391868.1569138-138-35970093769044/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:50 np0005465604 python3.9[156095]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:51 np0005465604 python3.9[156216]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759391870.0139096-182-15142484267574/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:51 np0005465604 python3.9[156366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:52 np0005465604 python3.9[156487]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759391871.1661968-182-271498890775744/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:52 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:52Z|00025|memory|INFO|16128 kB peak resident set size after 30.0 seconds
Oct  2 03:57:52 np0005465604 ovn_controller[152344]: 2025-10-02T07:57:52Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct  2 03:57:52 np0005465604 podman[156488]: 2025-10-02 07:57:52.373030724 +0000 UTC m=+0.095584264 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 03:57:52 np0005465604 python3.9[156663]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:57:53 np0005465604 python3.9[156817]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:54 np0005465604 python3.9[156969]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:55 np0005465604 python3.9[157047]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:55 np0005465604 python3.9[157199]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:56 np0005465604 python3.9[157277]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:57:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:57:57 np0005465604 python3.9[157429]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:57:57 np0005465604 python3.9[157581]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:57:58 np0005465604 python3.9[157659]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:57:59 np0005465604 python3.9[157811]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:57:59 np0005465604 python3.9[157889]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:00 np0005465604 python3.9[158041]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:58:00 np0005465604 systemd[1]: Reloading.
Oct  2 03:58:00 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:58:00 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:58:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:02 np0005465604 python3.9[158230]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:58:03 np0005465604 python3.9[158308]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:03 np0005465604 python3.9[158460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:58:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:04 np0005465604 python3.9[158538]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:05 np0005465604 python3.9[158690]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:58:05 np0005465604 systemd[1]: Reloading.
Oct  2 03:58:05 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:58:05 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:58:05 np0005465604 systemd[1]: Starting Create netns directory...
Oct  2 03:58:05 np0005465604 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 03:58:05 np0005465604 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 03:58:05 np0005465604 systemd[1]: Finished Create netns directory.
Oct  2 03:58:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:06 np0005465604 python3.9[158883]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:58:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:07 np0005465604 python3.9[159035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:58:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:08 np0005465604 python3.9[159158]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759391886.8292706-333-278626552665567/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:58:09 np0005465604 python3.9[159310]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 03:58:09 np0005465604 python3.9[159462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 03:58:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:10 np0005465604 python3.9[159585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759391889.3471296-358-67853357398270/.source.json _original_basename=.d2dma9kp follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:11 np0005465604 python3.9[159737]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:14 np0005465604 python3.9[160164]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  2 03:58:15 np0005465604 python3.9[160316]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 03:58:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:16 np0005465604 python3.9[160468]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 03:58:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:18 np0005465604 python3[160647]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 03:58:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 03:58:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5536 writes, 23K keys, 5536 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5536 writes, 843 syncs, 6.57 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5536 writes, 23K keys, 5536 commit groups, 1.0 writes per commit group, ingest: 18.60 MB, 0.03 MB/s#012Interval WAL: 5536 writes, 843 syncs, 6.57 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  2 03:58:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:24 np0005465604 podman[160727]: 2025-10-02 07:58:24.976235323 +0000 UTC m=+2.036764077 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Oct  2 03:58:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 03:58:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6697 writes, 27K keys, 6697 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6697 writes, 1191 syncs, 5.62 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6697 writes, 27K keys, 6697 commit groups, 1.0 writes per commit group, ingest: 19.47 MB, 0.03 MB/s#012Interval WAL: 6697 writes, 1191 syncs, 5.62 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 6.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  2 03:58:26 np0005465604 podman[160661]: 2025-10-02 07:58:26.575202954 +0000 UTC m=+8.386898350 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 03:58:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:58:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:58:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:58:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:58:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:26 np0005465604 podman[160951]: 2025-10-02 07:58:26.819821838 +0000 UTC m=+0.079978264 container create fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 03:58:26 np0005465604 podman[160951]: 2025-10-02 07:58:26.768797669 +0000 UTC m=+0.028954135 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 03:58:26 np0005465604 python3[160647]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} 
--log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2c29522e-d77f-4901-8983-7be2526b93db does not exist
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 955d58d2-c41a-435c-a4e1-86addf689803 does not exist
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c6c2dedf-1572-43d5-a71d-3e26aa7b4e8a does not exist
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:58:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:58:27 np0005465604 python3.9[161248]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:58:27
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'vms', 'backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.control']
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:58:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:58:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:28 np0005465604 podman[161432]: 2025-10-02 07:58:28.117050113 +0000 UTC m=+0.057968608 container create 1f7133c2dfa4f04c36d824a94d2d254cff152860fb3d67d7194c90e0dd45fc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:58:28 np0005465604 systemd[1]: Started libpod-conmon-1f7133c2dfa4f04c36d824a94d2d254cff152860fb3d67d7194c90e0dd45fc2b.scope.
Oct  2 03:58:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:58:28 np0005465604 podman[161432]: 2025-10-02 07:58:28.09853761 +0000 UTC m=+0.039456125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:58:28 np0005465604 podman[161432]: 2025-10-02 07:58:28.213155934 +0000 UTC m=+0.154074469 container init 1f7133c2dfa4f04c36d824a94d2d254cff152860fb3d67d7194c90e0dd45fc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 03:58:28 np0005465604 podman[161432]: 2025-10-02 07:58:28.222781227 +0000 UTC m=+0.163699722 container start 1f7133c2dfa4f04c36d824a94d2d254cff152860fb3d67d7194c90e0dd45fc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:58:28 np0005465604 podman[161432]: 2025-10-02 07:58:28.227761084 +0000 UTC m=+0.168679619 container attach 1f7133c2dfa4f04c36d824a94d2d254cff152860fb3d67d7194c90e0dd45fc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:58:28 np0005465604 happy_kalam[161480]: 167 167
Oct  2 03:58:28 np0005465604 systemd[1]: libpod-1f7133c2dfa4f04c36d824a94d2d254cff152860fb3d67d7194c90e0dd45fc2b.scope: Deactivated successfully.
Oct  2 03:58:28 np0005465604 conmon[161480]: conmon 1f7133c2dfa4f04c36d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f7133c2dfa4f04c36d824a94d2d254cff152860fb3d67d7194c90e0dd45fc2b.scope/container/memory.events
Oct  2 03:58:28 np0005465604 podman[161505]: 2025-10-02 07:58:28.283773381 +0000 UTC m=+0.034095196 container died 1f7133c2dfa4f04c36d824a94d2d254cff152860fb3d67d7194c90e0dd45fc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:58:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8930252f5d064acc98794a19759010eaf7df04b0a352efaf3065f2f533a4515b-merged.mount: Deactivated successfully.
Oct  2 03:58:28 np0005465604 podman[161505]: 2025-10-02 07:58:28.333963143 +0000 UTC m=+0.084284958 container remove 1f7133c2dfa4f04c36d824a94d2d254cff152860fb3d67d7194c90e0dd45fc2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 03:58:28 np0005465604 systemd[1]: libpod-conmon-1f7133c2dfa4f04c36d824a94d2d254cff152860fb3d67d7194c90e0dd45fc2b.scope: Deactivated successfully.
Oct  2 03:58:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:58:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:58:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:58:28 np0005465604 podman[161579]: 2025-10-02 07:58:28.585120603 +0000 UTC m=+0.071914138 container create a0d7a9903d7350f0904406985e25ec6cd698fd1f266e1dce98e3733772f3925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 03:58:28 np0005465604 podman[161579]: 2025-10-02 07:58:28.550920595 +0000 UTC m=+0.037714220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:58:28 np0005465604 systemd[1]: Started libpod-conmon-a0d7a9903d7350f0904406985e25ec6cd698fd1f266e1dce98e3733772f3925b.scope.
Oct  2 03:58:28 np0005465604 python3.9[161573]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:58:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/738df75788e89fa587dcccee3f72e312bc7f264c310cf018ecc113fde160d914/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/738df75788e89fa587dcccee3f72e312bc7f264c310cf018ecc113fde160d914/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/738df75788e89fa587dcccee3f72e312bc7f264c310cf018ecc113fde160d914/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/738df75788e89fa587dcccee3f72e312bc7f264c310cf018ecc113fde160d914/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/738df75788e89fa587dcccee3f72e312bc7f264c310cf018ecc113fde160d914/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:28 np0005465604 podman[161579]: 2025-10-02 07:58:28.708201635 +0000 UTC m=+0.194995180 container init a0d7a9903d7350f0904406985e25ec6cd698fd1f266e1dce98e3733772f3925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:58:28 np0005465604 podman[161579]: 2025-10-02 07:58:28.71786215 +0000 UTC m=+0.204655655 container start a0d7a9903d7350f0904406985e25ec6cd698fd1f266e1dce98e3733772f3925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:58:28 np0005465604 podman[161579]: 2025-10-02 07:58:28.721670689 +0000 UTC m=+0.208464304 container attach a0d7a9903d7350f0904406985e25ec6cd698fd1f266e1dce98e3733772f3925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 03:58:29 np0005465604 python3.9[161677]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 03:58:29 np0005465604 happy_brahmagupta[161597]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:58:29 np0005465604 happy_brahmagupta[161597]: --> relative data size: 1.0
Oct  2 03:58:29 np0005465604 happy_brahmagupta[161597]: --> All data devices are unavailable
Oct  2 03:58:29 np0005465604 systemd[1]: libpod-a0d7a9903d7350f0904406985e25ec6cd698fd1f266e1dce98e3733772f3925b.scope: Deactivated successfully.
Oct  2 03:58:29 np0005465604 systemd[1]: libpod-a0d7a9903d7350f0904406985e25ec6cd698fd1f266e1dce98e3733772f3925b.scope: Consumed 1.137s CPU time.
Oct  2 03:58:29 np0005465604 podman[161579]: 2025-10-02 07:58:29.910815707 +0000 UTC m=+1.397609232 container died a0d7a9903d7350f0904406985e25ec6cd698fd1f266e1dce98e3733772f3925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:58:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-738df75788e89fa587dcccee3f72e312bc7f264c310cf018ecc113fde160d914-merged.mount: Deactivated successfully.
Oct  2 03:58:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:30 np0005465604 podman[161579]: 2025-10-02 07:58:30.061956443 +0000 UTC m=+1.548749948 container remove a0d7a9903d7350f0904406985e25ec6cd698fd1f266e1dce98e3733772f3925b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 03:58:30 np0005465604 systemd[1]: libpod-conmon-a0d7a9903d7350f0904406985e25ec6cd698fd1f266e1dce98e3733772f3925b.scope: Deactivated successfully.
Oct  2 03:58:30 np0005465604 python3.9[161863]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759391909.3660767-446-112618168553476/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:30 np0005465604 podman[162083]: 2025-10-02 07:58:30.76520803 +0000 UTC m=+0.082983268 container create af3187286e3237acb6f8ea0c75aacf5a79feeef87ccf537651f830f63c3111b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 03:58:30 np0005465604 python3.9[162041]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 03:58:30 np0005465604 podman[162083]: 2025-10-02 07:58:30.710128973 +0000 UTC m=+0.027904231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:58:30 np0005465604 systemd[1]: Started libpod-conmon-af3187286e3237acb6f8ea0c75aacf5a79feeef87ccf537651f830f63c3111b4.scope.
Oct  2 03:58:30 np0005465604 systemd[1]: Reloading.
Oct  2 03:58:30 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:58:30 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:58:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:58:31 np0005465604 podman[162083]: 2025-10-02 07:58:31.169718395 +0000 UTC m=+0.487493693 container init af3187286e3237acb6f8ea0c75aacf5a79feeef87ccf537651f830f63c3111b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 03:58:31 np0005465604 podman[162083]: 2025-10-02 07:58:31.179931157 +0000 UTC m=+0.497706435 container start af3187286e3237acb6f8ea0c75aacf5a79feeef87ccf537651f830f63c3111b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chandrasekhar, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:58:31 np0005465604 adoring_chandrasekhar[162100]: 167 167
Oct  2 03:58:31 np0005465604 systemd[1]: libpod-af3187286e3237acb6f8ea0c75aacf5a79feeef87ccf537651f830f63c3111b4.scope: Deactivated successfully.
Oct  2 03:58:31 np0005465604 podman[162083]: 2025-10-02 07:58:31.195324652 +0000 UTC m=+0.513099970 container attach af3187286e3237acb6f8ea0c75aacf5a79feeef87ccf537651f830f63c3111b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 03:58:31 np0005465604 podman[162083]: 2025-10-02 07:58:31.196409747 +0000 UTC m=+0.514184985 container died af3187286e3237acb6f8ea0c75aacf5a79feeef87ccf537651f830f63c3111b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chandrasekhar, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 03:58:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-814c5b0dc6a26e887a519ecf7214230a9f9206b09970422c5e421a9e09fe7abe-merged.mount: Deactivated successfully.
Oct  2 03:58:31 np0005465604 podman[162083]: 2025-10-02 07:58:31.272868607 +0000 UTC m=+0.590643845 container remove af3187286e3237acb6f8ea0c75aacf5a79feeef87ccf537651f830f63c3111b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_chandrasekhar, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 03:58:31 np0005465604 systemd[1]: libpod-conmon-af3187286e3237acb6f8ea0c75aacf5a79feeef87ccf537651f830f63c3111b4.scope: Deactivated successfully.
Oct  2 03:58:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 03:58:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5423 writes, 23K keys, 5423 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5423 writes, 782 syncs, 6.93 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5423 writes, 23K keys, 5423 commit groups, 1.0 writes per commit group, ingest: 18.35 MB, 0.03 MB/s#012Interval WAL: 5423 writes, 782 syncs, 6.93 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  2 03:58:31 np0005465604 podman[162207]: 2025-10-02 07:58:31.465575505 +0000 UTC m=+0.054596913 container create 00e4961d42600a3d29da7b715bcc5a0e0a05c1e4480fc9a1e2e6776b460c20b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_allen, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 03:58:31 np0005465604 podman[162207]: 2025-10-02 07:58:31.436362524 +0000 UTC m=+0.025383912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:58:31 np0005465604 systemd[1]: Started libpod-conmon-00e4961d42600a3d29da7b715bcc5a0e0a05c1e4480fc9a1e2e6776b460c20b1.scope.
Oct  2 03:58:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:58:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402d1d08b4ee74ecf653df1baeab9a6fbfdd38c4f9615af83bad8715e071439c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402d1d08b4ee74ecf653df1baeab9a6fbfdd38c4f9615af83bad8715e071439c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402d1d08b4ee74ecf653df1baeab9a6fbfdd38c4f9615af83bad8715e071439c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402d1d08b4ee74ecf653df1baeab9a6fbfdd38c4f9615af83bad8715e071439c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:31 np0005465604 podman[162207]: 2025-10-02 07:58:31.619928092 +0000 UTC m=+0.208949530 container init 00e4961d42600a3d29da7b715bcc5a0e0a05c1e4480fc9a1e2e6776b460c20b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_allen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:58:31 np0005465604 podman[162207]: 2025-10-02 07:58:31.629768602 +0000 UTC m=+0.218789980 container start 00e4961d42600a3d29da7b715bcc5a0e0a05c1e4480fc9a1e2e6776b460c20b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 03:58:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:31 np0005465604 podman[162207]: 2025-10-02 07:58:31.651192238 +0000 UTC m=+0.240213646 container attach 00e4961d42600a3d29da7b715bcc5a0e0a05c1e4480fc9a1e2e6776b460c20b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 03:58:31 np0005465604 python3.9[162254]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:58:32 np0005465604 systemd[1]: Reloading.
Oct  2 03:58:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:32 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:58:32 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:58:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 03:58:32 np0005465604 systemd[1]: Starting ovn_metadata_agent container...
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]: {
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:    "0": [
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:        {
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "devices": [
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "/dev/loop3"
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            ],
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_name": "ceph_lv0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_size": "21470642176",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "name": "ceph_lv0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "tags": {
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.cluster_name": "ceph",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.crush_device_class": "",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.encrypted": "0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.osd_id": "0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.type": "block",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.vdo": "0"
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            },
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "type": "block",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "vg_name": "ceph_vg0"
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:        }
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:    ],
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:    "1": [
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:        {
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "devices": [
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "/dev/loop4"
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            ],
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_name": "ceph_lv1",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_size": "21470642176",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "name": "ceph_lv1",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "tags": {
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.cluster_name": "ceph",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.crush_device_class": "",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.encrypted": "0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.osd_id": "1",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.type": "block",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.vdo": "0"
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            },
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "type": "block",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "vg_name": "ceph_vg1"
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:        }
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:    ],
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:    "2": [
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:        {
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "devices": [
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "/dev/loop5"
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            ],
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_name": "ceph_lv2",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_size": "21470642176",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "name": "ceph_lv2",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "tags": {
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.cluster_name": "ceph",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.crush_device_class": "",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.encrypted": "0",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.osd_id": "2",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.type": "block",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:                "ceph.vdo": "0"
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            },
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "type": "block",
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:            "vg_name": "ceph_vg2"
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:        }
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]:    ]
Oct  2 03:58:32 np0005465604 wizardly_allen[162234]: }
Oct  2 03:58:32 np0005465604 systemd[1]: libpod-00e4961d42600a3d29da7b715bcc5a0e0a05c1e4480fc9a1e2e6776b460c20b1.scope: Deactivated successfully.
Oct  2 03:58:32 np0005465604 podman[162207]: 2025-10-02 07:58:32.46658645 +0000 UTC m=+1.055607818 container died 00e4961d42600a3d29da7b715bcc5a0e0a05c1e4480fc9a1e2e6776b460c20b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:58:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-402d1d08b4ee74ecf653df1baeab9a6fbfdd38c4f9615af83bad8715e071439c-merged.mount: Deactivated successfully.
Oct  2 03:58:32 np0005465604 podman[162207]: 2025-10-02 07:58:32.549740331 +0000 UTC m=+1.138761709 container remove 00e4961d42600a3d29da7b715bcc5a0e0a05c1e4480fc9a1e2e6776b460c20b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:58:32 np0005465604 systemd[1]: libpod-conmon-00e4961d42600a3d29da7b715bcc5a0e0a05c1e4480fc9a1e2e6776b460c20b1.scope: Deactivated successfully.
Oct  2 03:58:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:58:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae370a39696e0a852da1105b69e19445b88a28bde2d87caf48cbb1bd39fd5e7/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae370a39696e0a852da1105b69e19445b88a28bde2d87caf48cbb1bd39fd5e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:32 np0005465604 systemd[1]: Started /usr/bin/podman healthcheck run fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838.
Oct  2 03:58:32 np0005465604 podman[162301]: 2025-10-02 07:58:32.658834552 +0000 UTC m=+0.217960724 container init fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: + sudo -E kolla_set_configs
Oct  2 03:58:32 np0005465604 podman[162301]: 2025-10-02 07:58:32.693942299 +0000 UTC m=+0.253068481 container start fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 03:58:32 np0005465604 edpm-start-podman-container[162301]: ovn_metadata_agent
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Validating config file
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Copying service configuration files
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Writing out command to execute
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: ++ cat /run_command
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: + CMD=neutron-ovn-metadata-agent
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: + ARGS=
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: + sudo kolla_copy_cacerts
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: + [[ ! -n '' ]]
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: + . kolla_extend_start
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: Running command: 'neutron-ovn-metadata-agent'
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: + umask 0022
Oct  2 03:58:32 np0005465604 ovn_metadata_agent[162328]: + exec neutron-ovn-metadata-agent
Oct  2 03:58:32 np0005465604 podman[162359]: 2025-10-02 07:58:32.800743057 +0000 UTC m=+0.083323978 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct  2 03:58:32 np0005465604 edpm-start-podman-container[162300]: Creating additional drop-in dependency for "ovn_metadata_agent" (fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838)
Oct  2 03:58:32 np0005465604 systemd[1]: Reloading.
Oct  2 03:58:32 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:58:32 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:58:33 np0005465604 systemd[1]: Started ovn_metadata_agent container.
Oct  2 03:58:33 np0005465604 podman[162579]: 2025-10-02 07:58:33.638794744 +0000 UTC m=+0.045229288 container create a7e8313325c9b0a085a16a69d98fa9a6112888dd2cfb0f3d63d751dba840caaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hypatia, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:58:33 np0005465604 systemd[1]: session-49.scope: Deactivated successfully.
Oct  2 03:58:33 np0005465604 systemd[1]: session-49.scope: Consumed 1min 233ms CPU time.
Oct  2 03:58:33 np0005465604 systemd-logind[787]: Session 49 logged out. Waiting for processes to exit.
Oct  2 03:58:33 np0005465604 systemd-logind[787]: Removed session 49.
Oct  2 03:58:33 np0005465604 systemd[1]: Started libpod-conmon-a7e8313325c9b0a085a16a69d98fa9a6112888dd2cfb0f3d63d751dba840caaf.scope.
Oct  2 03:58:33 np0005465604 podman[162579]: 2025-10-02 07:58:33.619151444 +0000 UTC m=+0.025585968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:58:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:58:33 np0005465604 podman[162579]: 2025-10-02 07:58:33.747925285 +0000 UTC m=+0.154359879 container init a7e8313325c9b0a085a16a69d98fa9a6112888dd2cfb0f3d63d751dba840caaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hypatia, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 03:58:33 np0005465604 podman[162579]: 2025-10-02 07:58:33.763274499 +0000 UTC m=+0.169709013 container start a7e8313325c9b0a085a16a69d98fa9a6112888dd2cfb0f3d63d751dba840caaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 03:58:33 np0005465604 recursing_hypatia[162596]: 167 167
Oct  2 03:58:33 np0005465604 systemd[1]: libpod-a7e8313325c9b0a085a16a69d98fa9a6112888dd2cfb0f3d63d751dba840caaf.scope: Deactivated successfully.
Oct  2 03:58:33 np0005465604 podman[162579]: 2025-10-02 07:58:33.773990547 +0000 UTC m=+0.180425071 container attach a7e8313325c9b0a085a16a69d98fa9a6112888dd2cfb0f3d63d751dba840caaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hypatia, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 03:58:33 np0005465604 podman[162579]: 2025-10-02 07:58:33.774867024 +0000 UTC m=+0.181301538 container died a7e8313325c9b0a085a16a69d98fa9a6112888dd2cfb0f3d63d751dba840caaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hypatia, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 03:58:33 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9d066fc77692978aa49587909fcdd334950ee11823a440bace99d4db5c4e1edd-merged.mount: Deactivated successfully.
Oct  2 03:58:33 np0005465604 podman[162579]: 2025-10-02 07:58:33.823025673 +0000 UTC m=+0.229460197 container remove a7e8313325c9b0a085a16a69d98fa9a6112888dd2cfb0f3d63d751dba840caaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hypatia, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 03:58:33 np0005465604 systemd[1]: libpod-conmon-a7e8313325c9b0a085a16a69d98fa9a6112888dd2cfb0f3d63d751dba840caaf.scope: Deactivated successfully.
Oct  2 03:58:34 np0005465604 podman[162620]: 2025-10-02 07:58:34.020521761 +0000 UTC m=+0.044139323 container create bd4f200a493de3dc23e56bd7a1b094fc4a3d1c3bd5f18599eac4c5af110427f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 03:58:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:34 np0005465604 systemd[1]: Started libpod-conmon-bd4f200a493de3dc23e56bd7a1b094fc4a3d1c3bd5f18599eac4c5af110427f0.scope.
Oct  2 03:58:34 np0005465604 podman[162620]: 2025-10-02 07:58:34.000043025 +0000 UTC m=+0.023660617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:58:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:58:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f60107b62c182360a6e1ebaca34ee9d3e3a8dbb699f2bee1db0b73d643abe5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f60107b62c182360a6e1ebaca34ee9d3e3a8dbb699f2bee1db0b73d643abe5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f60107b62c182360a6e1ebaca34ee9d3e3a8dbb699f2bee1db0b73d643abe5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f60107b62c182360a6e1ebaca34ee9d3e3a8dbb699f2bee1db0b73d643abe5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:58:34 np0005465604 podman[162620]: 2025-10-02 07:58:34.143332823 +0000 UTC m=+0.166950425 container init bd4f200a493de3dc23e56bd7a1b094fc4a3d1c3bd5f18599eac4c5af110427f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:58:34 np0005465604 podman[162620]: 2025-10-02 07:58:34.155154417 +0000 UTC m=+0.178772009 container start bd4f200a493de3dc23e56bd7a1b094fc4a3d1c3bd5f18599eac4c5af110427f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct  2 03:58:34 np0005465604 podman[162620]: 2025-10-02 07:58:34.160158654 +0000 UTC m=+0.183776296 container attach bd4f200a493de3dc23e56bd7a1b094fc4a3d1c3bd5f18599eac4c5af110427f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.734 162357 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.734 162357 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.734 162357 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.734 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.735 162357 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.735 162357 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.735 162357 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.735 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.735 162357 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.735 162357 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.735 162357 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.735 162357 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.735 162357 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.736 162357 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.736 162357 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.736 162357 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.736 162357 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.736 162357 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.736 162357 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.736 162357 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.736 162357 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.736 162357 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.737 162357 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.737 162357 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.737 162357 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.737 162357 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.737 162357 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.737 162357 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.737 162357 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.737 162357 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.737 162357 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.738 162357 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.738 162357 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.738 162357 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.738 162357 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.738 162357 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.738 162357 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.738 162357 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.738 162357 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.738 162357 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.739 162357 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.739 162357 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.739 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.739 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.739 162357 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.739 162357 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.739 162357 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.739 162357 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.739 162357 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.739 162357 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.740 162357 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.740 162357 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.740 162357 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.740 162357 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.740 162357 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.740 162357 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.740 162357 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.740 162357 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.740 162357 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.741 162357 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.741 162357 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.741 162357 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.741 162357 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.741 162357 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.742 162357 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.742 162357 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.743 162357 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.743 162357 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.743 162357 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.743 162357 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.743 162357 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.743 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.743 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.744 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.744 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.744 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.744 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.744 162357 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.744 162357 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.744 162357 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.744 162357 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.744 162357 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.745 162357 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.745 162357 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.745 162357 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.745 162357 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.745 162357 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.745 162357 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.745 162357 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.745 162357 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.745 162357 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.746 162357 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.746 162357 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.746 162357 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.746 162357 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.746 162357 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.746 162357 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.746 162357 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.746 162357 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.746 162357 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.746 162357 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.747 162357 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.747 162357 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.747 162357 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.747 162357 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.747 162357 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.747 162357 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.747 162357 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.747 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.748 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.748 162357 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.748 162357 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.748 162357 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.748 162357 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.748 162357 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.748 162357 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.748 162357 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.748 162357 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.749 162357 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.749 162357 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.749 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.749 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.749 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.749 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.749 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.749 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.749 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.750 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.750 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.750 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.750 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.750 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.750 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.750 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.750 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.750 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.751 162357 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.751 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.751 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.751 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.751 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.751 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.751 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.751 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.751 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.751 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.752 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.752 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.752 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.752 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.752 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.752 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.752 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.752 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.752 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.753 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.753 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.753 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.753 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.753 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.753 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.753 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.753 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.753 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.753 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.754 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.754 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.754 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.754 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.754 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.754 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.754 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.754 162357 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.754 162357 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.754 162357 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.755 162357 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.755 162357 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.755 162357 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.755 162357 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.755 162357 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.755 162357 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.755 162357 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.755 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.755 162357 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.756 162357 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.756 162357 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.756 162357 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.756 162357 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.756 162357 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.756 162357 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.756 162357 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.756 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.756 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.757 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.757 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.757 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.757 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.757 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.757 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.757 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.757 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.757 162357 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.757 162357 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.758 162357 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.758 162357 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.758 162357 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.758 162357 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.758 162357 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.758 162357 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.758 162357 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.758 162357 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.758 162357 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.759 162357 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.759 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.759 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.759 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.759 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.759 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.759 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.759 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.760 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.760 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.760 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.760 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.760 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.760 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.760 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.760 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.760 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.761 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.761 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.761 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.761 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.761 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.761 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.761 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.761 162357 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.762 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.762 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.762 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.762 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.762 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.762 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.762 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.763 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.763 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.763 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.763 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.763 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.763 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.764 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.764 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.764 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.764 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.764 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.764 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.764 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.764 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.764 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.765 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.765 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.765 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.765 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.765 162357 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.765 162357 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.765 162357 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.765 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.765 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.766 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.766 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.766 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.766 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.766 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.766 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.766 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.766 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.767 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.767 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.767 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.767 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.767 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.767 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.767 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.767 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.767 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.768 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.768 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.768 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.768 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.768 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.768 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.768 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.768 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.768 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.769 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.769 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.769 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.769 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.769 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.769 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.769 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.769 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.769 162357 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.769 162357 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.779 162357 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.779 162357 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.779 162357 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.779 162357 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.780 162357 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.792 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 45c6349c-a870-4e27-8117-4ccd02005c80 (UUID: 45c6349c-a870-4e27-8117-4ccd02005c80) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.818 162357 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.818 162357 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.818 162357 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.819 162357 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.822 162357 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.827 162357 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.833 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '45c6349c-a870-4e27-8117-4ccd02005c80'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], external_ids={}, name=45c6349c-a870-4e27-8117-4ccd02005c80, nb_cfg_timestamp=1759391850367, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.833 162357 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f3313a10c10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.834 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.834 162357 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.834 162357 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.835 162357 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.840 162357 DEBUG oslo_service.service [-] Started child 162644 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.844 162357 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpy9zytnye/privsep.sock']#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.848 162644 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-431831'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.882 162644 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.883 162644 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.883 162644 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.887 162644 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.894 162644 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  2 03:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:34.902 162644 INFO eventlet.wsgi.server [-] (162644) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]: {
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "osd_id": 2,
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "type": "bluestore"
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:    },
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "osd_id": 1,
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "type": "bluestore"
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:    },
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "osd_id": 0,
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:        "type": "bluestore"
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]:    }
Oct  2 03:58:35 np0005465604 flamboyant_chebyshev[162637]: }
Oct  2 03:58:35 np0005465604 systemd[1]: libpod-bd4f200a493de3dc23e56bd7a1b094fc4a3d1c3bd5f18599eac4c5af110427f0.scope: Deactivated successfully.
Oct  2 03:58:35 np0005465604 systemd[1]: libpod-bd4f200a493de3dc23e56bd7a1b094fc4a3d1c3bd5f18599eac4c5af110427f0.scope: Consumed 1.033s CPU time.
Oct  2 03:58:35 np0005465604 podman[162675]: 2025-10-02 07:58:35.246996136 +0000 UTC m=+0.038672300 container died bd4f200a493de3dc23e56bd7a1b094fc4a3d1c3bd5f18599eac4c5af110427f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 03:58:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5f60107b62c182360a6e1ebaca34ee9d3e3a8dbb699f2bee1db0b73d643abe5d-merged.mount: Deactivated successfully.
Oct  2 03:58:35 np0005465604 podman[162675]: 2025-10-02 07:58:35.321955759 +0000 UTC m=+0.113631823 container remove bd4f200a493de3dc23e56bd7a1b094fc4a3d1c3bd5f18599eac4c5af110427f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:58:35 np0005465604 systemd[1]: libpod-conmon-bd4f200a493de3dc23e56bd7a1b094fc4a3d1c3bd5f18599eac4c5af110427f0.scope: Deactivated successfully.
Oct  2 03:58:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:58:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:58:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:58:35 np0005465604 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  2 03:58:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:58:35 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 141031a2-6f61-4ae6-bf9b-a5546adc3702 does not exist
Oct  2 03:58:35 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c0f47d97-1081-4ebe-8ac8-26a1cd3c4779 does not exist
Oct  2 03:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:35.540 162357 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 03:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:35.541 162357 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpy9zytnye/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  2 03:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:35.402 162690 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 03:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:35.411 162690 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 03:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:35.416 162690 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct  2 03:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:35.416 162690 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162690#033[00m
Oct  2 03:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:35.544 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[931334d1-17ed-4cb9-b1da-5b1e8abcea39]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 03:58:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.085 162690 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.086 162690 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.086 162690 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 03:58:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:58:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:58:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.643 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[148df9b3-88c3-48bd-887a-324b0122e621]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.646 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, column=external_ids, values=({'neutron:ovn-metadata-id': 'e0578034-4b27-50f8-9542-d7fec17971ae'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.658 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.668 162357 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.668 162357 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.669 162357 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.669 162357 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.669 162357 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.669 162357 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.670 162357 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.670 162357 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.670 162357 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.670 162357 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.670 162357 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.670 162357 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.670 162357 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.670 162357 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.671 162357 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.671 162357 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.671 162357 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.671 162357 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.671 162357 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.671 162357 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.671 162357 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.671 162357 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.671 162357 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.672 162357 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.672 162357 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.672 162357 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.672 162357 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.672 162357 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.672 162357 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.672 162357 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.673 162357 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.673 162357 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.673 162357 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.673 162357 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.673 162357 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.673 162357 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.673 162357 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.673 162357 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.674 162357 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.674 162357 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.674 162357 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.674 162357 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.674 162357 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.674 162357 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.674 162357 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.674 162357 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.675 162357 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.675 162357 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.675 162357 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.675 162357 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.675 162357 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.675 162357 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.675 162357 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.675 162357 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.675 162357 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.675 162357 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.676 162357 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.676 162357 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.676 162357 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.676 162357 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.676 162357 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.676 162357 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.676 162357 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.676 162357 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.676 162357 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.677 162357 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.677 162357 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.677 162357 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.677 162357 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.677 162357 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.677 162357 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.677 162357 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.677 162357 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.677 162357 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.678 162357 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.678 162357 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.678 162357 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.678 162357 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.678 162357 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.678 162357 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.678 162357 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.678 162357 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.678 162357 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.678 162357 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.679 162357 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.679 162357 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.679 162357 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.679 162357 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.679 162357 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.679 162357 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.679 162357 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.679 162357 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.679 162357 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.679 162357 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.680 162357 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.680 162357 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.680 162357 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.680 162357 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.680 162357 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.680 162357 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.680 162357 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.680 162357 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.680 162357 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.680 162357 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.681 162357 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.681 162357 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.681 162357 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.681 162357 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.681 162357 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.681 162357 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.681 162357 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.681 162357 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.681 162357 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.682 162357 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.682 162357 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.682 162357 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.682 162357 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.682 162357 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.682 162357 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.682 162357 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.682 162357 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.683 162357 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.683 162357 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.683 162357 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.683 162357 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.683 162357 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.683 162357 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.683 162357 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.683 162357 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.683 162357 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.684 162357 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.684 162357 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.684 162357 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.684 162357 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.684 162357 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.684 162357 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.684 162357 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.684 162357 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.685 162357 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.685 162357 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.685 162357 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.685 162357 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.685 162357 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.686 162357 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.687 162357 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.687 162357 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.687 162357 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.687 162357 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.688 162357 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.688 162357 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.688 162357 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.688 162357 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.688 162357 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.688 162357 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.689 162357 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.689 162357 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.689 162357 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.689 162357 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.689 162357 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.689 162357 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.689 162357 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.690 162357 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.690 162357 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.690 162357 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.690 162357 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.690 162357 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.690 162357 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.691 162357 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.691 162357 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.691 162357 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.691 162357 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.691 162357 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.691 162357 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.691 162357 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.692 162357 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.692 162357 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.692 162357 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.692 162357 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.692 162357 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.692 162357 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.693 162357 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.693 162357 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.693 162357 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.693 162357 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.693 162357 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.693 162357 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.694 162357 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.694 162357 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.694 162357 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.694 162357 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.694 162357 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.694 162357 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.695 162357 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.695 162357 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.695 162357 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.695 162357 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.695 162357 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.695 162357 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.695 162357 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.696 162357 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.696 162357 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.696 162357 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.696 162357 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.696 162357 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.696 162357 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.696 162357 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.697 162357 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.697 162357 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.697 162357 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.697 162357 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.697 162357 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.697 162357 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.697 162357 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.698 162357 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.698 162357 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.698 162357 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.698 162357 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.698 162357 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.698 162357 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.698 162357 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.699 162357 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.699 162357 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.699 162357 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.699 162357 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.699 162357 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.699 162357 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.699 162357 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.700 162357 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.700 162357 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.700 162357 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.700 162357 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.700 162357 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.700 162357 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.700 162357 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.701 162357 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.701 162357 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.701 162357 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.701 162357 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.701 162357 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.701 162357 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.701 162357 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.702 162357 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.702 162357 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.702 162357 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.702 162357 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.702 162357 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.702 162357 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.703 162357 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.703 162357 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.703 162357 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.703 162357 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.703 162357 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.703 162357 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.703 162357 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.704 162357 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.704 162357 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.704 162357 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.704 162357 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.704 162357 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.704 162357 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.705 162357 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.705 162357 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.705 162357 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.705 162357 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.705 162357 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.705 162357 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.705 162357 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.706 162357 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.706 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.706 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.706 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.706 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.706 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.707 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.707 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.707 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.707 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.707 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.707 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.707 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.708 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.708 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.708 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.708 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.708 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.708 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.708 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.709 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.709 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.709 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.709 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.709 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.709 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.709 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.710 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.710 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.710 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.710 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.710 162357 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.710 162357 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.711 162357 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.711 162357 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.711 162357 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 03:58:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:58:36.711 162357 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 03:58:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:39 np0005465604 systemd-logind[787]: New session 50 of user zuul.
Oct  2 03:58:39 np0005465604 systemd[1]: Started Session 50 of User zuul.
Oct  2 03:58:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:40 np0005465604 python3.9[162901]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 03:58:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:41 np0005465604 python3.9[163057]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:58:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:43 np0005465604 python3.9[163222]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 03:58:43 np0005465604 systemd[1]: Reloading.
Oct  2 03:58:43 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:58:43 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:58:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:44 np0005465604 python3.9[163406]: ansible-ansible.builtin.service_facts Invoked
Oct  2 03:58:45 np0005465604 network[163423]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 03:58:45 np0005465604 network[163424]: 'network-scripts' will be removed from distribution in near future.
Oct  2 03:58:45 np0005465604 network[163425]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 03:58:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:49 np0005465604 python3.9[163690]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:58:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:50 np0005465604 python3.9[163843]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:58:51 np0005465604 python3.9[163996]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:58:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:52 np0005465604 python3.9[164149]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:58:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:52 np0005465604 python3.9[164302]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:58:53 np0005465604 python3.9[164455]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:58:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:54 np0005465604 python3.9[164608]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 03:58:55 np0005465604 python3.9[164761]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:56 np0005465604 podman[164885]: 2025-10-02 07:58:56.392603511 +0000 UTC m=+0.200105401 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 03:58:56 np0005465604 python3.9[164930]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:58:57 np0005465604 python3.9[165092]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:58:57 np0005465604 python3.9[165244]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:58:58 np0005465604 python3.9[165397]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:58:59 np0005465604 python3.9[165549]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:59:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:00 np0005465604 python3.9[165701]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:59:01 np0005465604 python3.9[165853]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:59:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:01 np0005465604 python3.9[166005]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:59:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:02 np0005465604 python3.9[166157]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:59:03 np0005465604 podman[166281]: 2025-10-02 07:59:03.023549408 +0000 UTC m=+0.078147585 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 03:59:03 np0005465604 python3.9[166328]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:59:03 np0005465604 python3.9[166480]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:59:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:04 np0005465604 python3.9[166632]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:59:05 np0005465604 python3.9[166784]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 03:59:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:06 np0005465604 python3.9[166936]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:59:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:07 np0005465604 python3.9[167088]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 03:59:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:08 np0005465604 python3.9[167240]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 03:59:08 np0005465604 systemd[1]: Reloading.
Oct  2 03:59:08 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 03:59:08 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 03:59:09 np0005465604 python3.9[167428]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:59:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:10 np0005465604 python3.9[167581]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:59:11 np0005465604 python3.9[167734]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:59:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:12 np0005465604 python3.9[167887]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:59:13 np0005465604 python3.9[168040]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:59:13 np0005465604 python3.9[168193]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:59:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:14 np0005465604 python3.9[168346]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 03:59:15 np0005465604 python3.9[168499]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  2 03:59:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:16 np0005465604 python3.9[168652]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 03:59:18 np0005465604 python3.9[168810]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 03:59:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:19 np0005465604 python3.9[168970]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 03:59:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:20 np0005465604 python3.9[169054]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 03:59:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:27 np0005465604 podman[169066]: 2025-10-02 07:59:27.126375713 +0000 UTC m=+0.181489462 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 03:59:27 np0005465604 auditd[702]: Audit daemon rotating log files
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_07:59:27
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'backups', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta']
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:59:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 03:59:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Oct  2 03:59:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 03:59:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 03:59:34 np0005465604 podman[169265]: 2025-10-02 07:59:34.005086974 +0000 UTC m=+0.070030861 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 03:59:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 03:59:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:59:34.772 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 03:59:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:59:34.773 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 03:59:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 07:59:34.773 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 03:59:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 03:59:36 np0005465604 podman[169459]: 2025-10-02 07:59:36.560815658 +0000 UTC m=+0.090402945 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 03:59:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:36 np0005465604 podman[169459]: 2025-10-02 07:59:36.706224786 +0000 UTC m=+0.235811983 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 03:59:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:59:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:59:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:59:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0f71e3d0-1054-4f63-b286-09e5ac5d3db9 does not exist
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 63a3ac23-789b-485b-bb0d-7682eafa27ae does not exist
Oct  2 03:59:38 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 007b9d49-e00e-4345-8e7e-f04c83f2bbb9 does not exist
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:59:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 03:59:39 np0005465604 podman[169888]: 2025-10-02 07:59:39.055134281 +0000 UTC m=+0.057016216 container create 06c1ff11975d404ce7530864b27a4d1f00e428c993445c91751098e614128efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:59:39 np0005465604 systemd[1]: Started libpod-conmon-06c1ff11975d404ce7530864b27a4d1f00e428c993445c91751098e614128efd.scope.
Oct  2 03:59:39 np0005465604 podman[169888]: 2025-10-02 07:59:39.026626123 +0000 UTC m=+0.028508098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:59:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:59:39 np0005465604 podman[169888]: 2025-10-02 07:59:39.153584097 +0000 UTC m=+0.155466012 container init 06c1ff11975d404ce7530864b27a4d1f00e428c993445c91751098e614128efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 03:59:39 np0005465604 podman[169888]: 2025-10-02 07:59:39.210863921 +0000 UTC m=+0.212745826 container start 06c1ff11975d404ce7530864b27a4d1f00e428c993445c91751098e614128efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 03:59:39 np0005465604 podman[169888]: 2025-10-02 07:59:39.216194377 +0000 UTC m=+0.218076372 container attach 06c1ff11975d404ce7530864b27a4d1f00e428c993445c91751098e614128efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_albattani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:59:39 np0005465604 brave_albattani[169904]: 167 167
Oct  2 03:59:39 np0005465604 systemd[1]: libpod-06c1ff11975d404ce7530864b27a4d1f00e428c993445c91751098e614128efd.scope: Deactivated successfully.
Oct  2 03:59:39 np0005465604 conmon[169904]: conmon 06c1ff11975d404ce753 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06c1ff11975d404ce7530864b27a4d1f00e428c993445c91751098e614128efd.scope/container/memory.events
Oct  2 03:59:39 np0005465604 podman[169888]: 2025-10-02 07:59:39.227128997 +0000 UTC m=+0.229010882 container died 06c1ff11975d404ce7530864b27a4d1f00e428c993445c91751098e614128efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_albattani, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 03:59:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-923a6cad6306187ff2a929d041f14a8f8c52731db2201a1d223544d5be920fb2-merged.mount: Deactivated successfully.
Oct  2 03:59:39 np0005465604 podman[169888]: 2025-10-02 07:59:39.271573581 +0000 UTC m=+0.273455476 container remove 06c1ff11975d404ce7530864b27a4d1f00e428c993445c91751098e614128efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_albattani, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:59:39 np0005465604 systemd[1]: libpod-conmon-06c1ff11975d404ce7530864b27a4d1f00e428c993445c91751098e614128efd.scope: Deactivated successfully.
Oct  2 03:59:39 np0005465604 podman[169928]: 2025-10-02 07:59:39.52204089 +0000 UTC m=+0.078613659 container create e280a5c5598872ec088b1533aa20e91ea19b788a846138c72117fc8b161e8aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 03:59:39 np0005465604 systemd[1]: Started libpod-conmon-e280a5c5598872ec088b1533aa20e91ea19b788a846138c72117fc8b161e8aae.scope.
Oct  2 03:59:39 np0005465604 podman[169928]: 2025-10-02 07:59:39.49184848 +0000 UTC m=+0.048421359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:59:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:59:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b519cfc94c94b2119635bb9e635012f480b29c377804b39d3d66505bb98a03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b519cfc94c94b2119635bb9e635012f480b29c377804b39d3d66505bb98a03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b519cfc94c94b2119635bb9e635012f480b29c377804b39d3d66505bb98a03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b519cfc94c94b2119635bb9e635012f480b29c377804b39d3d66505bb98a03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b519cfc94c94b2119635bb9e635012f480b29c377804b39d3d66505bb98a03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:39 np0005465604 podman[169928]: 2025-10-02 07:59:39.640928682 +0000 UTC m=+0.197501531 container init e280a5c5598872ec088b1533aa20e91ea19b788a846138c72117fc8b161e8aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:59:39 np0005465604 podman[169928]: 2025-10-02 07:59:39.652820303 +0000 UTC m=+0.209393102 container start e280a5c5598872ec088b1533aa20e91ea19b788a846138c72117fc8b161e8aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:59:39 np0005465604 podman[169928]: 2025-10-02 07:59:39.658699036 +0000 UTC m=+0.215271835 container attach e280a5c5598872ec088b1533aa20e91ea19b788a846138c72117fc8b161e8aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 03:59:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Oct  2 03:59:40 np0005465604 dreamy_ishizaka[169945]: --> passed data devices: 0 physical, 3 LVM
Oct  2 03:59:40 np0005465604 dreamy_ishizaka[169945]: --> relative data size: 1.0
Oct  2 03:59:40 np0005465604 dreamy_ishizaka[169945]: --> All data devices are unavailable
Oct  2 03:59:40 np0005465604 systemd[1]: libpod-e280a5c5598872ec088b1533aa20e91ea19b788a846138c72117fc8b161e8aae.scope: Deactivated successfully.
Oct  2 03:59:40 np0005465604 podman[169928]: 2025-10-02 07:59:40.907298087 +0000 UTC m=+1.463870916 container died e280a5c5598872ec088b1533aa20e91ea19b788a846138c72117fc8b161e8aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:59:40 np0005465604 systemd[1]: libpod-e280a5c5598872ec088b1533aa20e91ea19b788a846138c72117fc8b161e8aae.scope: Consumed 1.167s CPU time.
Oct  2 03:59:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-42b519cfc94c94b2119635bb9e635012f480b29c377804b39d3d66505bb98a03-merged.mount: Deactivated successfully.
Oct  2 03:59:40 np0005465604 podman[169928]: 2025-10-02 07:59:40.98994383 +0000 UTC m=+1.546516589 container remove e280a5c5598872ec088b1533aa20e91ea19b788a846138c72117fc8b161e8aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 03:59:41 np0005465604 systemd[1]: libpod-conmon-e280a5c5598872ec088b1533aa20e91ea19b788a846138c72117fc8b161e8aae.scope: Deactivated successfully.
Oct  2 03:59:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:41 np0005465604 podman[170128]: 2025-10-02 07:59:41.830589068 +0000 UTC m=+0.054137427 container create ed94b96a66865a96db49e1ef1ee080a84b24b321680d50788dbd2fb458a7029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 03:59:41 np0005465604 systemd[1]: Started libpod-conmon-ed94b96a66865a96db49e1ef1ee080a84b24b321680d50788dbd2fb458a7029c.scope.
Oct  2 03:59:41 np0005465604 podman[170128]: 2025-10-02 07:59:41.802925386 +0000 UTC m=+0.026473735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:59:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:59:41 np0005465604 podman[170128]: 2025-10-02 07:59:41.947036374 +0000 UTC m=+0.170584763 container init ed94b96a66865a96db49e1ef1ee080a84b24b321680d50788dbd2fb458a7029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 03:59:41 np0005465604 podman[170128]: 2025-10-02 07:59:41.96296443 +0000 UTC m=+0.186512769 container start ed94b96a66865a96db49e1ef1ee080a84b24b321680d50788dbd2fb458a7029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:59:41 np0005465604 podman[170128]: 2025-10-02 07:59:41.967500121 +0000 UTC m=+0.191048540 container attach ed94b96a66865a96db49e1ef1ee080a84b24b321680d50788dbd2fb458a7029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 03:59:41 np0005465604 loving_heyrovsky[170144]: 167 167
Oct  2 03:59:41 np0005465604 systemd[1]: libpod-ed94b96a66865a96db49e1ef1ee080a84b24b321680d50788dbd2fb458a7029c.scope: Deactivated successfully.
Oct  2 03:59:41 np0005465604 podman[170128]: 2025-10-02 07:59:41.973544059 +0000 UTC m=+0.197092428 container died ed94b96a66865a96db49e1ef1ee080a84b24b321680d50788dbd2fb458a7029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 03:59:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1bb454a082053959f853422267b68412def5c2d1d6861a13371946e11817a678-merged.mount: Deactivated successfully.
Oct  2 03:59:42 np0005465604 podman[170128]: 2025-10-02 07:59:42.029858823 +0000 UTC m=+0.253407172 container remove ed94b96a66865a96db49e1ef1ee080a84b24b321680d50788dbd2fb458a7029c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 03:59:42 np0005465604 systemd[1]: libpod-conmon-ed94b96a66865a96db49e1ef1ee080a84b24b321680d50788dbd2fb458a7029c.scope: Deactivated successfully.
Oct  2 03:59:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:42 np0005465604 podman[170168]: 2025-10-02 07:59:42.218130336 +0000 UTC m=+0.059469273 container create e58e0b00eaa019e619f23645e1de312e8129a0c70859b3b998ed21cabd5854bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 03:59:42 np0005465604 systemd[1]: Started libpod-conmon-e58e0b00eaa019e619f23645e1de312e8129a0c70859b3b998ed21cabd5854bb.scope.
Oct  2 03:59:42 np0005465604 podman[170168]: 2025-10-02 07:59:42.191682302 +0000 UTC m=+0.033021289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:59:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:59:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1114232bb56b8815739712dd839926edff9163f5e6625637897e800962d3e64f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1114232bb56b8815739712dd839926edff9163f5e6625637897e800962d3e64f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1114232bb56b8815739712dd839926edff9163f5e6625637897e800962d3e64f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1114232bb56b8815739712dd839926edff9163f5e6625637897e800962d3e64f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:42 np0005465604 podman[170168]: 2025-10-02 07:59:42.32262677 +0000 UTC m=+0.163965777 container init e58e0b00eaa019e619f23645e1de312e8129a0c70859b3b998ed21cabd5854bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 03:59:42 np0005465604 podman[170168]: 2025-10-02 07:59:42.34223134 +0000 UTC m=+0.183570287 container start e58e0b00eaa019e619f23645e1de312e8129a0c70859b3b998ed21cabd5854bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 03:59:42 np0005465604 podman[170168]: 2025-10-02 07:59:42.350808547 +0000 UTC m=+0.192147514 container attach e58e0b00eaa019e619f23645e1de312e8129a0c70859b3b998ed21cabd5854bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]: {
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:    "0": [
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:        {
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "devices": [
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "/dev/loop3"
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            ],
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_name": "ceph_lv0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_size": "21470642176",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "name": "ceph_lv0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "tags": {
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.cluster_name": "ceph",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.crush_device_class": "",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.encrypted": "0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.osd_id": "0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.type": "block",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.vdo": "0"
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            },
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "type": "block",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "vg_name": "ceph_vg0"
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:        }
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:    ],
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:    "1": [
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:        {
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "devices": [
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "/dev/loop4"
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            ],
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_name": "ceph_lv1",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_size": "21470642176",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "name": "ceph_lv1",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "tags": {
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.cluster_name": "ceph",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.crush_device_class": "",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.encrypted": "0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.osd_id": "1",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.type": "block",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.vdo": "0"
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            },
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "type": "block",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "vg_name": "ceph_vg1"
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:        }
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:    ],
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:    "2": [
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:        {
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "devices": [
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "/dev/loop5"
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            ],
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_name": "ceph_lv2",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_size": "21470642176",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "name": "ceph_lv2",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "tags": {
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.cephx_lockbox_secret": "",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.cluster_name": "ceph",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.crush_device_class": "",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.encrypted": "0",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.osd_id": "2",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.type": "block",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:                "ceph.vdo": "0"
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            },
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "type": "block",
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:            "vg_name": "ceph_vg2"
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:        }
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]:    ]
Oct  2 03:59:43 np0005465604 suspicious_heisenberg[170185]: }
Oct  2 03:59:43 np0005465604 systemd[1]: libpod-e58e0b00eaa019e619f23645e1de312e8129a0c70859b3b998ed21cabd5854bb.scope: Deactivated successfully.
Oct  2 03:59:43 np0005465604 podman[170168]: 2025-10-02 07:59:43.144616026 +0000 UTC m=+0.985955003 container died e58e0b00eaa019e619f23645e1de312e8129a0c70859b3b998ed21cabd5854bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 03:59:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1114232bb56b8815739712dd839926edff9163f5e6625637897e800962d3e64f-merged.mount: Deactivated successfully.
Oct  2 03:59:43 np0005465604 podman[170168]: 2025-10-02 07:59:43.19742698 +0000 UTC m=+1.038765917 container remove e58e0b00eaa019e619f23645e1de312e8129a0c70859b3b998ed21cabd5854bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 03:59:43 np0005465604 systemd[1]: libpod-conmon-e58e0b00eaa019e619f23645e1de312e8129a0c70859b3b998ed21cabd5854bb.scope: Deactivated successfully.
Oct  2 03:59:44 np0005465604 podman[170348]: 2025-10-02 07:59:44.052993563 +0000 UTC m=+0.055103037 container create 3339622e2df2b424141ed3ebbe3654603c7fab8fd04bcd7e4bfa285e4cab20d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 03:59:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:44 np0005465604 systemd[1]: Started libpod-conmon-3339622e2df2b424141ed3ebbe3654603c7fab8fd04bcd7e4bfa285e4cab20d1.scope.
Oct  2 03:59:44 np0005465604 podman[170348]: 2025-10-02 07:59:44.026628462 +0000 UTC m=+0.028737946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:59:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:59:44 np0005465604 podman[170348]: 2025-10-02 07:59:44.151567062 +0000 UTC m=+0.153676546 container init 3339622e2df2b424141ed3ebbe3654603c7fab8fd04bcd7e4bfa285e4cab20d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 03:59:44 np0005465604 podman[170348]: 2025-10-02 07:59:44.164198736 +0000 UTC m=+0.166308200 container start 3339622e2df2b424141ed3ebbe3654603c7fab8fd04bcd7e4bfa285e4cab20d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 03:59:44 np0005465604 podman[170348]: 2025-10-02 07:59:44.168148189 +0000 UTC m=+0.170257653 container attach 3339622e2df2b424141ed3ebbe3654603c7fab8fd04bcd7e4bfa285e4cab20d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:59:44 np0005465604 mystifying_cori[170365]: 167 167
Oct  2 03:59:44 np0005465604 systemd[1]: libpod-3339622e2df2b424141ed3ebbe3654603c7fab8fd04bcd7e4bfa285e4cab20d1.scope: Deactivated successfully.
Oct  2 03:59:44 np0005465604 podman[170348]: 2025-10-02 07:59:44.171370549 +0000 UTC m=+0.173480013 container died 3339622e2df2b424141ed3ebbe3654603c7fab8fd04bcd7e4bfa285e4cab20d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:59:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8eb4bdb057ac198488495a71a0892022141cbb9f3e88d4b7c6f2860f46811903-merged.mount: Deactivated successfully.
Oct  2 03:59:44 np0005465604 podman[170348]: 2025-10-02 07:59:44.229937223 +0000 UTC m=+0.232046687 container remove 3339622e2df2b424141ed3ebbe3654603c7fab8fd04bcd7e4bfa285e4cab20d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cori, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 03:59:44 np0005465604 systemd[1]: libpod-conmon-3339622e2df2b424141ed3ebbe3654603c7fab8fd04bcd7e4bfa285e4cab20d1.scope: Deactivated successfully.
Oct  2 03:59:44 np0005465604 podman[170389]: 2025-10-02 07:59:44.427987279 +0000 UTC m=+0.052689311 container create 5bdadac7d5bb5256fbbfecdc46318adf1a19f0c631ee7ea0ad65c09d7937df11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 03:59:44 np0005465604 systemd[1]: Started libpod-conmon-5bdadac7d5bb5256fbbfecdc46318adf1a19f0c631ee7ea0ad65c09d7937df11.scope.
Oct  2 03:59:44 np0005465604 podman[170389]: 2025-10-02 07:59:44.404169918 +0000 UTC m=+0.028871940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 03:59:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 03:59:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435986ee2c9b29681befc66707b2d240da4d4d44bd12607b8d80e146a36c0dbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435986ee2c9b29681befc66707b2d240da4d4d44bd12607b8d80e146a36c0dbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435986ee2c9b29681befc66707b2d240da4d4d44bd12607b8d80e146a36c0dbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435986ee2c9b29681befc66707b2d240da4d4d44bd12607b8d80e146a36c0dbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 03:59:44 np0005465604 podman[170389]: 2025-10-02 07:59:44.551001981 +0000 UTC m=+0.175704073 container init 5bdadac7d5bb5256fbbfecdc46318adf1a19f0c631ee7ea0ad65c09d7937df11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 03:59:44 np0005465604 podman[170389]: 2025-10-02 07:59:44.56286542 +0000 UTC m=+0.187567462 container start 5bdadac7d5bb5256fbbfecdc46318adf1a19f0c631ee7ea0ad65c09d7937df11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 03:59:44 np0005465604 podman[170389]: 2025-10-02 07:59:44.567527215 +0000 UTC m=+0.192229297 container attach 5bdadac7d5bb5256fbbfecdc46318adf1a19f0c631ee7ea0ad65c09d7937df11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]: {
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "osd_id": 2,
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "type": "bluestore"
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:    },
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "osd_id": 1,
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "type": "bluestore"
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:    },
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "osd_id": 0,
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:        "type": "bluestore"
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]:    }
Oct  2 03:59:45 np0005465604 compassionate_mirzakhani[170406]: }
Oct  2 03:59:45 np0005465604 systemd[1]: libpod-5bdadac7d5bb5256fbbfecdc46318adf1a19f0c631ee7ea0ad65c09d7937df11.scope: Deactivated successfully.
Oct  2 03:59:45 np0005465604 systemd[1]: libpod-5bdadac7d5bb5256fbbfecdc46318adf1a19f0c631ee7ea0ad65c09d7937df11.scope: Consumed 1.161s CPU time.
Oct  2 03:59:45 np0005465604 podman[170389]: 2025-10-02 07:59:45.720787027 +0000 UTC m=+1.345489049 container died 5bdadac7d5bb5256fbbfecdc46318adf1a19f0c631ee7ea0ad65c09d7937df11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 03:59:45 np0005465604 systemd[1]: var-lib-containers-storage-overlay-435986ee2c9b29681befc66707b2d240da4d4d44bd12607b8d80e146a36c0dbf-merged.mount: Deactivated successfully.
Oct  2 03:59:45 np0005465604 podman[170389]: 2025-10-02 07:59:45.792801229 +0000 UTC m=+1.417503241 container remove 5bdadac7d5bb5256fbbfecdc46318adf1a19f0c631ee7ea0ad65c09d7937df11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 03:59:45 np0005465604 systemd[1]: libpod-conmon-5bdadac7d5bb5256fbbfecdc46318adf1a19f0c631ee7ea0ad65c09d7937df11.scope: Deactivated successfully.
Oct  2 03:59:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 03:59:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:59:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 03:59:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:59:45 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 03c5f2e2-e5c3-4320-85aa-7717ff786b60 does not exist
Oct  2 03:59:45 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f97c98c2-a950-443a-8576-b76e4a7b6d28 does not exist
Oct  2 03:59:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:46 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:59:46 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 03:59:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:50 np0005465604 kernel: SELinux:  Converting 2765 SID table entries...
Oct  2 03:59:50 np0005465604 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 03:59:50 np0005465604 kernel: SELinux:  policy capability open_perms=1
Oct  2 03:59:50 np0005465604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 03:59:50 np0005465604 kernel: SELinux:  policy capability always_check_network=0
Oct  2 03:59:50 np0005465604 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 03:59:50 np0005465604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 03:59:50 np0005465604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 03:59:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 03:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 03:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 03:59:57 np0005465604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct  2 03:59:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 03:59:58 np0005465604 podman[170509]: 2025-10-02 07:59:58.107401492 +0000 UTC m=+0.142590131 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:00:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:00 np0005465604 kernel: SELinux:  Converting 2765 SID table entries...
Oct  2 04:00:00 np0005465604 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 04:00:00 np0005465604 kernel: SELinux:  policy capability open_perms=1
Oct  2 04:00:00 np0005465604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 04:00:00 np0005465604 kernel: SELinux:  policy capability always_check_network=0
Oct  2 04:00:00 np0005465604 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 04:00:00 np0005465604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 04:00:00 np0005465604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 04:00:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:04 np0005465604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct  2 04:00:05 np0005465604 podman[170540]: 2025-10-02 08:00:05.026131248 +0000 UTC m=+0.078277048 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:00:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:00:27
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'images']
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:00:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:00:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:29 np0005465604 podman[179125]: 2025-10-02 08:00:29.067447277 +0000 UTC m=+0.123827067 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:00:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:00:34.784 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:00:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:00:34.789 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:00:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:00:34.789 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:00:36 np0005465604 podman[182615]: 2025-10-02 08:00:36.028088879 +0000 UTC m=+0.077046580 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 04:00:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:00:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:00:47 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 005f88dc-7c3f-4487-bbad-7f010390ad10 does not exist
Oct  2 04:00:47 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev eff0fe33-9319-4e3f-92bd-53b8960576a8 does not exist
Oct  2 04:00:47 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 90ea496d-56e1-4b39-9afa-56bde6af337d does not exist
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:00:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:00:47 np0005465604 podman[187613]: 2025-10-02 08:00:47.825450591 +0000 UTC m=+0.048827172 container create 823af3ca48ab09f55fc85475eb61606023789172a1146359cd250d3eb291cb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 04:00:47 np0005465604 systemd[1]: Started libpod-conmon-823af3ca48ab09f55fc85475eb61606023789172a1146359cd250d3eb291cb50.scope.
Oct  2 04:00:47 np0005465604 podman[187613]: 2025-10-02 08:00:47.805987304 +0000 UTC m=+0.029363905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:00:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:00:47 np0005465604 podman[187613]: 2025-10-02 08:00:47.940851364 +0000 UTC m=+0.164228025 container init 823af3ca48ab09f55fc85475eb61606023789172a1146359cd250d3eb291cb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kalam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:00:47 np0005465604 podman[187613]: 2025-10-02 08:00:47.953052374 +0000 UTC m=+0.176428995 container start 823af3ca48ab09f55fc85475eb61606023789172a1146359cd250d3eb291cb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:00:47 np0005465604 podman[187613]: 2025-10-02 08:00:47.958000308 +0000 UTC m=+0.181376979 container attach 823af3ca48ab09f55fc85475eb61606023789172a1146359cd250d3eb291cb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kalam, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:00:47 np0005465604 elated_kalam[187630]: 167 167
Oct  2 04:00:47 np0005465604 systemd[1]: libpod-823af3ca48ab09f55fc85475eb61606023789172a1146359cd250d3eb291cb50.scope: Deactivated successfully.
Oct  2 04:00:47 np0005465604 podman[187613]: 2025-10-02 08:00:47.964390297 +0000 UTC m=+0.187766918 container died 823af3ca48ab09f55fc85475eb61606023789172a1146359cd250d3eb291cb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kalam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:00:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-51d7ece2fb8f17cf728e795de59749701c6f2efda881ec0ec4d91fcad24e5398-merged.mount: Deactivated successfully.
Oct  2 04:00:48 np0005465604 podman[187613]: 2025-10-02 08:00:48.024312433 +0000 UTC m=+0.247689044 container remove 823af3ca48ab09f55fc85475eb61606023789172a1146359cd250d3eb291cb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 04:00:48 np0005465604 systemd[1]: libpod-conmon-823af3ca48ab09f55fc85475eb61606023789172a1146359cd250d3eb291cb50.scope: Deactivated successfully.
Oct  2 04:00:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:48 np0005465604 podman[187653]: 2025-10-02 08:00:48.192879092 +0000 UTC m=+0.049320887 container create 599de9cac59b265a289c47492f10905b7ed91880a3fa116e674dd281c47bfada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:00:48 np0005465604 systemd[1]: Started libpod-conmon-599de9cac59b265a289c47492f10905b7ed91880a3fa116e674dd281c47bfada.scope.
Oct  2 04:00:48 np0005465604 podman[187653]: 2025-10-02 08:00:48.172517518 +0000 UTC m=+0.028959333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:00:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:00:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bab1e6b49188dce1c94ed09a59bb4f3506b1076bf73b0e4cd0a60b3bd61590/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bab1e6b49188dce1c94ed09a59bb4f3506b1076bf73b0e4cd0a60b3bd61590/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bab1e6b49188dce1c94ed09a59bb4f3506b1076bf73b0e4cd0a60b3bd61590/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bab1e6b49188dce1c94ed09a59bb4f3506b1076bf73b0e4cd0a60b3bd61590/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64bab1e6b49188dce1c94ed09a59bb4f3506b1076bf73b0e4cd0a60b3bd61590/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:48 np0005465604 podman[187653]: 2025-10-02 08:00:48.30551561 +0000 UTC m=+0.161957435 container init 599de9cac59b265a289c47492f10905b7ed91880a3fa116e674dd281c47bfada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:00:48 np0005465604 podman[187653]: 2025-10-02 08:00:48.317891805 +0000 UTC m=+0.174333600 container start 599de9cac59b265a289c47492f10905b7ed91880a3fa116e674dd281c47bfada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:00:48 np0005465604 podman[187653]: 2025-10-02 08:00:48.322516169 +0000 UTC m=+0.178957994 container attach 599de9cac59b265a289c47492f10905b7ed91880a3fa116e674dd281c47bfada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:00:49 np0005465604 gallant_brown[187670]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:00:49 np0005465604 gallant_brown[187670]: --> relative data size: 1.0
Oct  2 04:00:49 np0005465604 gallant_brown[187670]: --> All data devices are unavailable
Oct  2 04:00:49 np0005465604 systemd[1]: libpod-599de9cac59b265a289c47492f10905b7ed91880a3fa116e674dd281c47bfada.scope: Deactivated successfully.
Oct  2 04:00:49 np0005465604 systemd[1]: libpod-599de9cac59b265a289c47492f10905b7ed91880a3fa116e674dd281c47bfada.scope: Consumed 1.156s CPU time.
Oct  2 04:00:49 np0005465604 podman[187653]: 2025-10-02 08:00:49.549825536 +0000 UTC m=+1.406267331 container died 599de9cac59b265a289c47492f10905b7ed91880a3fa116e674dd281c47bfada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 04:00:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-64bab1e6b49188dce1c94ed09a59bb4f3506b1076bf73b0e4cd0a60b3bd61590-merged.mount: Deactivated successfully.
Oct  2 04:00:49 np0005465604 podman[187653]: 2025-10-02 08:00:49.609829014 +0000 UTC m=+1.466270799 container remove 599de9cac59b265a289c47492f10905b7ed91880a3fa116e674dd281c47bfada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brown, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:00:49 np0005465604 systemd[1]: libpod-conmon-599de9cac59b265a289c47492f10905b7ed91880a3fa116e674dd281c47bfada.scope: Deactivated successfully.
Oct  2 04:00:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:50 np0005465604 podman[187859]: 2025-10-02 08:00:50.547029658 +0000 UTC m=+0.057399588 container create b6cef7d91a3a64aaf09488cf66389ff3a5791e56ca498e743bc4601b98a7ce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:00:50 np0005465604 systemd[1]: Started libpod-conmon-b6cef7d91a3a64aaf09488cf66389ff3a5791e56ca498e743bc4601b98a7ce13.scope.
Oct  2 04:00:50 np0005465604 podman[187859]: 2025-10-02 08:00:50.529840113 +0000 UTC m=+0.040210063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:00:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:00:50 np0005465604 podman[187859]: 2025-10-02 08:00:50.64950189 +0000 UTC m=+0.159871860 container init b6cef7d91a3a64aaf09488cf66389ff3a5791e56ca498e743bc4601b98a7ce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct  2 04:00:50 np0005465604 podman[187859]: 2025-10-02 08:00:50.663033861 +0000 UTC m=+0.173403821 container start b6cef7d91a3a64aaf09488cf66389ff3a5791e56ca498e743bc4601b98a7ce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:00:50 np0005465604 podman[187859]: 2025-10-02 08:00:50.667954523 +0000 UTC m=+0.178324493 container attach b6cef7d91a3a64aaf09488cf66389ff3a5791e56ca498e743bc4601b98a7ce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:00:50 np0005465604 optimistic_elbakyan[187875]: 167 167
Oct  2 04:00:50 np0005465604 podman[187859]: 2025-10-02 08:00:50.672521086 +0000 UTC m=+0.182891046 container died b6cef7d91a3a64aaf09488cf66389ff3a5791e56ca498e743bc4601b98a7ce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 04:00:50 np0005465604 systemd[1]: libpod-b6cef7d91a3a64aaf09488cf66389ff3a5791e56ca498e743bc4601b98a7ce13.scope: Deactivated successfully.
Oct  2 04:00:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a2a11451d12a1ee09f6ed729f7679a7f14a7d874547e01913b1f89adcc350349-merged.mount: Deactivated successfully.
Oct  2 04:00:50 np0005465604 podman[187859]: 2025-10-02 08:00:50.731607316 +0000 UTC m=+0.241977286 container remove b6cef7d91a3a64aaf09488cf66389ff3a5791e56ca498e743bc4601b98a7ce13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 04:00:50 np0005465604 systemd[1]: libpod-conmon-b6cef7d91a3a64aaf09488cf66389ff3a5791e56ca498e743bc4601b98a7ce13.scope: Deactivated successfully.
Oct  2 04:00:50 np0005465604 podman[187904]: 2025-10-02 08:00:50.919881698 +0000 UTC m=+0.056025985 container create 1078e1fb3c43539eb59f2a111a5e9305dac1d76eb3531e48195c1cd2aec29463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:00:50 np0005465604 systemd[1]: Started libpod-conmon-1078e1fb3c43539eb59f2a111a5e9305dac1d76eb3531e48195c1cd2aec29463.scope.
Oct  2 04:00:50 np0005465604 podman[187904]: 2025-10-02 08:00:50.892349411 +0000 UTC m=+0.028493728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:00:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:00:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135881a38b7c6225dad94374c140647fb6f980a83793b57537f10b852ad54d90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135881a38b7c6225dad94374c140647fb6f980a83793b57537f10b852ad54d90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135881a38b7c6225dad94374c140647fb6f980a83793b57537f10b852ad54d90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135881a38b7c6225dad94374c140647fb6f980a83793b57537f10b852ad54d90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:51 np0005465604 podman[187904]: 2025-10-02 08:00:51.031167213 +0000 UTC m=+0.167311540 container init 1078e1fb3c43539eb59f2a111a5e9305dac1d76eb3531e48195c1cd2aec29463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:00:51 np0005465604 podman[187904]: 2025-10-02 08:00:51.041294389 +0000 UTC m=+0.177438676 container start 1078e1fb3c43539eb59f2a111a5e9305dac1d76eb3531e48195c1cd2aec29463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nightingale, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:00:51 np0005465604 podman[187904]: 2025-10-02 08:00:51.04548167 +0000 UTC m=+0.181625967 container attach 1078e1fb3c43539eb59f2a111a5e9305dac1d76eb3531e48195c1cd2aec29463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nightingale, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 04:00:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]: {
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:    "0": [
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:        {
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "devices": [
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "/dev/loop3"
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            ],
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_name": "ceph_lv0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_size": "21470642176",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "name": "ceph_lv0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "tags": {
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.cluster_name": "ceph",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.crush_device_class": "",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.encrypted": "0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.osd_id": "0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.type": "block",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.vdo": "0"
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            },
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "type": "block",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "vg_name": "ceph_vg0"
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:        }
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:    ],
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:    "1": [
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:        {
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "devices": [
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "/dev/loop4"
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            ],
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_name": "ceph_lv1",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_size": "21470642176",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "name": "ceph_lv1",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "tags": {
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.cluster_name": "ceph",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.crush_device_class": "",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.encrypted": "0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.osd_id": "1",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.type": "block",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.vdo": "0"
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            },
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "type": "block",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "vg_name": "ceph_vg1"
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:        }
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:    ],
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:    "2": [
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:        {
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "devices": [
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "/dev/loop5"
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            ],
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_name": "ceph_lv2",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_size": "21470642176",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "name": "ceph_lv2",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "tags": {
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.cluster_name": "ceph",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.crush_device_class": "",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.encrypted": "0",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.osd_id": "2",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.type": "block",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:                "ceph.vdo": "0"
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            },
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "type": "block",
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:            "vg_name": "ceph_vg2"
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:        }
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]:    ]
Oct  2 04:00:51 np0005465604 silly_nightingale[187922]: }
Oct  2 04:00:51 np0005465604 systemd[1]: libpod-1078e1fb3c43539eb59f2a111a5e9305dac1d76eb3531e48195c1cd2aec29463.scope: Deactivated successfully.
Oct  2 04:00:51 np0005465604 podman[187904]: 2025-10-02 08:00:51.919500476 +0000 UTC m=+1.055644753 container died 1078e1fb3c43539eb59f2a111a5e9305dac1d76eb3531e48195c1cd2aec29463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nightingale, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:00:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-135881a38b7c6225dad94374c140647fb6f980a83793b57537f10b852ad54d90-merged.mount: Deactivated successfully.
Oct  2 04:00:52 np0005465604 podman[187904]: 2025-10-02 08:00:52.004561065 +0000 UTC m=+1.140705352 container remove 1078e1fb3c43539eb59f2a111a5e9305dac1d76eb3531e48195c1cd2aec29463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:00:52 np0005465604 systemd[1]: libpod-conmon-1078e1fb3c43539eb59f2a111a5e9305dac1d76eb3531e48195c1cd2aec29463.scope: Deactivated successfully.
Oct  2 04:00:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:52 np0005465604 podman[188086]: 2025-10-02 08:00:52.859980732 +0000 UTC m=+0.058766670 container create 8db281b1efd42539557cc61e00d36a5b39209d53246fa23e7571688bb9010d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:00:52 np0005465604 systemd[1]: Started libpod-conmon-8db281b1efd42539557cc61e00d36a5b39209d53246fa23e7571688bb9010d29.scope.
Oct  2 04:00:52 np0005465604 podman[188086]: 2025-10-02 08:00:52.835863461 +0000 UTC m=+0.034649399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:00:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:00:52 np0005465604 podman[188086]: 2025-10-02 08:00:52.985954485 +0000 UTC m=+0.184740503 container init 8db281b1efd42539557cc61e00d36a5b39209d53246fa23e7571688bb9010d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 04:00:52 np0005465604 podman[188086]: 2025-10-02 08:00:52.99380711 +0000 UTC m=+0.192593058 container start 8db281b1efd42539557cc61e00d36a5b39209d53246fa23e7571688bb9010d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:00:52 np0005465604 podman[188086]: 2025-10-02 08:00:52.998719812 +0000 UTC m=+0.197505730 container attach 8db281b1efd42539557cc61e00d36a5b39209d53246fa23e7571688bb9010d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:00:52 np0005465604 recursing_albattani[188102]: 167 167
Oct  2 04:00:53 np0005465604 podman[188086]: 2025-10-02 08:00:53.000990863 +0000 UTC m=+0.199776771 container died 8db281b1efd42539557cc61e00d36a5b39209d53246fa23e7571688bb9010d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 04:00:53 np0005465604 systemd[1]: libpod-8db281b1efd42539557cc61e00d36a5b39209d53246fa23e7571688bb9010d29.scope: Deactivated successfully.
Oct  2 04:00:53 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d55e381508c014d768d3b4b8f40ce960d133b9e19af293dbadf3593e94a811e0-merged.mount: Deactivated successfully.
Oct  2 04:00:53 np0005465604 podman[188086]: 2025-10-02 08:00:53.042558898 +0000 UTC m=+0.241344806 container remove 8db281b1efd42539557cc61e00d36a5b39209d53246fa23e7571688bb9010d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:00:53 np0005465604 systemd[1]: libpod-conmon-8db281b1efd42539557cc61e00d36a5b39209d53246fa23e7571688bb9010d29.scope: Deactivated successfully.
Oct  2 04:00:53 np0005465604 podman[188126]: 2025-10-02 08:00:53.312279787 +0000 UTC m=+0.072372175 container create d8bd197a998aa457d0f0c198099b51a0ceda9204f8487aa9ae5a73ab4ca2fc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct  2 04:00:53 np0005465604 podman[188126]: 2025-10-02 08:00:53.283641975 +0000 UTC m=+0.043734333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:00:53 np0005465604 systemd[1]: Started libpod-conmon-d8bd197a998aa457d0f0c198099b51a0ceda9204f8487aa9ae5a73ab4ca2fc45.scope.
Oct  2 04:00:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:00:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4468077736e61ac35fb8fe3b7bc5b3eb9a9836473e54aead84fa3ffcad7f711/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4468077736e61ac35fb8fe3b7bc5b3eb9a9836473e54aead84fa3ffcad7f711/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4468077736e61ac35fb8fe3b7bc5b3eb9a9836473e54aead84fa3ffcad7f711/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4468077736e61ac35fb8fe3b7bc5b3eb9a9836473e54aead84fa3ffcad7f711/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:00:53 np0005465604 podman[188126]: 2025-10-02 08:00:53.456416276 +0000 UTC m=+0.216508664 container init d8bd197a998aa457d0f0c198099b51a0ceda9204f8487aa9ae5a73ab4ca2fc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:00:53 np0005465604 podman[188126]: 2025-10-02 08:00:53.471575288 +0000 UTC m=+0.231667646 container start d8bd197a998aa457d0f0c198099b51a0ceda9204f8487aa9ae5a73ab4ca2fc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lumiere, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 04:00:53 np0005465604 podman[188126]: 2025-10-02 08:00:53.476188461 +0000 UTC m=+0.236280889 container attach d8bd197a998aa457d0f0c198099b51a0ceda9204f8487aa9ae5a73ab4ca2fc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lumiere, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 04:00:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]: {
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "osd_id": 2,
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "type": "bluestore"
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:    },
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "osd_id": 1,
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "type": "bluestore"
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:    },
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "osd_id": 0,
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:        "type": "bluestore"
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]:    }
Oct  2 04:00:54 np0005465604 vibrant_lumiere[188143]: }
Oct  2 04:00:54 np0005465604 systemd[1]: libpod-d8bd197a998aa457d0f0c198099b51a0ceda9204f8487aa9ae5a73ab4ca2fc45.scope: Deactivated successfully.
Oct  2 04:00:54 np0005465604 systemd[1]: libpod-d8bd197a998aa457d0f0c198099b51a0ceda9204f8487aa9ae5a73ab4ca2fc45.scope: Consumed 1.146s CPU time.
Oct  2 04:00:54 np0005465604 podman[188126]: 2025-10-02 08:00:54.609606895 +0000 UTC m=+1.369699263 container died d8bd197a998aa457d0f0c198099b51a0ceda9204f8487aa9ae5a73ab4ca2fc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 04:00:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d4468077736e61ac35fb8fe3b7bc5b3eb9a9836473e54aead84fa3ffcad7f711-merged.mount: Deactivated successfully.
Oct  2 04:00:54 np0005465604 podman[188126]: 2025-10-02 08:00:54.675116645 +0000 UTC m=+1.435209003 container remove d8bd197a998aa457d0f0c198099b51a0ceda9204f8487aa9ae5a73ab4ca2fc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:00:54 np0005465604 systemd[1]: libpod-conmon-d8bd197a998aa457d0f0c198099b51a0ceda9204f8487aa9ae5a73ab4ca2fc45.scope: Deactivated successfully.
Oct  2 04:00:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:00:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:00:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:00:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:00:54 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0c6b5aa4-6a33-4f44-a18a-a3245d572292 does not exist
Oct  2 04:00:54 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2989076c-174c-4bbb-be76-12d405c73386 does not exist
Oct  2 04:00:55 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:00:55 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:00:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:00:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:00:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:00 np0005465604 podman[188240]: 2025-10-02 08:01:00.0889659 +0000 UTC m=+0.135285984 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:01:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:01 np0005465604 kernel: SELinux:  Converting 2766 SID table entries...
Oct  2 04:01:01 np0005465604 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 04:01:01 np0005465604 kernel: SELinux:  policy capability open_perms=1
Oct  2 04:01:01 np0005465604 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 04:01:01 np0005465604 kernel: SELinux:  policy capability always_check_network=0
Oct  2 04:01:01 np0005465604 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 04:01:01 np0005465604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 04:01:01 np0005465604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 04:01:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:02 np0005465604 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Oct  2 04:01:02 np0005465604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct  2 04:01:02 np0005465604 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Oct  2 04:01:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:06 np0005465604 podman[188424]: 2025-10-02 08:01:06.773293458 +0000 UTC m=+0.071129825 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 04:01:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:10 np0005465604 systemd[1]: Stopping OpenSSH server daemon...
Oct  2 04:01:10 np0005465604 systemd[1]: sshd.service: Deactivated successfully.
Oct  2 04:01:10 np0005465604 systemd[1]: Stopped OpenSSH server daemon.
Oct  2 04:01:10 np0005465604 systemd[1]: sshd.service: Consumed 5.823s CPU time, read 0B from disk, written 52.0K to disk.
Oct  2 04:01:10 np0005465604 systemd[1]: Stopped target sshd-keygen.target.
Oct  2 04:01:10 np0005465604 systemd[1]: Stopping sshd-keygen.target...
Oct  2 04:01:10 np0005465604 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 04:01:10 np0005465604 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 04:01:10 np0005465604 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 04:01:10 np0005465604 systemd[1]: Reached target sshd-keygen.target.
Oct  2 04:01:11 np0005465604 systemd[1]: Starting OpenSSH server daemon...
Oct  2 04:01:11 np0005465604 systemd[1]: Started OpenSSH server daemon.
Oct  2 04:01:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:13 np0005465604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 04:01:13 np0005465604 systemd[1]: Starting man-db-cache-update.service...
Oct  2 04:01:13 np0005465604 systemd[1]: Reloading.
Oct  2 04:01:13 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:01:13 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:01:13 np0005465604 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 04:01:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.545602) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392074545692, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2037, "num_deletes": 251, "total_data_size": 3500202, "memory_usage": 3544864, "flush_reason": "Manual Compaction"}
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392074563145, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3414431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9714, "largest_seqno": 11750, "table_properties": {"data_size": 3405194, "index_size": 5859, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17734, "raw_average_key_size": 19, "raw_value_size": 3386907, "raw_average_value_size": 3709, "num_data_blocks": 266, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391845, "oldest_key_time": 1759391845, "file_creation_time": 1759392074, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17634 microseconds, and 7532 cpu microseconds.
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.563232) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3414431 bytes OK
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.563268) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.565597) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.565627) EVENT_LOG_v1 {"time_micros": 1759392074565616, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.565657) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3491716, prev total WAL file size 3491716, number of live WAL files 2.
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.567644) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3334KB)], [26(5955KB)]
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392074567890, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9512865, "oldest_snapshot_seqno": -1}
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3696 keys, 7947871 bytes, temperature: kUnknown
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392074654426, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7947871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7919395, "index_size": 18126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88721, "raw_average_key_size": 24, "raw_value_size": 7848917, "raw_average_value_size": 2123, "num_data_blocks": 784, "num_entries": 3696, "num_filter_entries": 3696, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759392074, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.654850) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7947871 bytes
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.656341) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.8 rd, 91.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4210, records dropped: 514 output_compression: NoCompression
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.656374) EVENT_LOG_v1 {"time_micros": 1759392074656361, "job": 10, "event": "compaction_finished", "compaction_time_micros": 86645, "compaction_time_cpu_micros": 29240, "output_level": 6, "num_output_files": 1, "total_output_size": 7947871, "num_input_records": 4210, "num_output_records": 3696, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392074657311, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392074659092, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.567550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.659201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.659207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.659209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.659211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:01:14 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:01:14.659212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:01:15 np0005465604 systemd[1]: Starting PackageKit Daemon...
Oct  2 04:01:15 np0005465604 systemd[1]: Started PackageKit Daemon.
Oct  2 04:01:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:17 np0005465604 python3.9[192795]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 04:01:17 np0005465604 systemd[1]: Reloading.
Oct  2 04:01:17 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:01:17 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:01:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:18 np0005465604 python3.9[194088]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 04:01:18 np0005465604 systemd[1]: Reloading.
Oct  2 04:01:18 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:01:18 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:01:20 np0005465604 python3.9[195140]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 04:01:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:20 np0005465604 systemd[1]: Reloading.
Oct  2 04:01:20 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:01:20 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:01:21 np0005465604 python3.9[196237]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 04:01:21 np0005465604 systemd[1]: Reloading.
Oct  2 04:01:21 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:01:21 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:01:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:22 np0005465604 python3.9[197584]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:22 np0005465604 systemd[1]: Reloading.
Oct  2 04:01:22 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:01:22 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:01:24 np0005465604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 04:01:24 np0005465604 systemd[1]: Finished man-db-cache-update.service.
Oct  2 04:01:24 np0005465604 systemd[1]: man-db-cache-update.service: Consumed 13.138s CPU time.
Oct  2 04:01:24 np0005465604 systemd[1]: run-re78cb8bbc5c34ea2a5e4b503b1f6a0b9.service: Deactivated successfully.
Oct  2 04:01:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:24 np0005465604 python3.9[198760]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:24 np0005465604 systemd[1]: Reloading.
Oct  2 04:01:24 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:01:24 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:01:25 np0005465604 python3.9[198959]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:25 np0005465604 systemd[1]: Reloading.
Oct  2 04:01:25 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:01:25 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:01:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:27 np0005465604 python3.9[199149]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:01:27
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:01:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:01:27 np0005465604 python3.9[199304]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:28 np0005465604 systemd[1]: Reloading.
Oct  2 04:01:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:28 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:01:28 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:01:29 np0005465604 python3.9[199493]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 04:01:29 np0005465604 systemd[1]: Reloading.
Oct  2 04:01:29 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:01:29 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:01:29 np0005465604 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  2 04:01:29 np0005465604 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  2 04:01:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:30 np0005465604 podman[199657]: 2025-10-02 08:01:30.514846349 +0000 UTC m=+0.129130154 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:01:30 np0005465604 python3.9[199702]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:32 np0005465604 python3.9[199866]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:33 np0005465604 python3.9[200021]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:01:34.785 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:01:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:01:34.785 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:01:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:01:34.785 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:01:35 np0005465604 python3.9[200176]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:36 np0005465604 python3.9[200331]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:36 np0005465604 podman[200486]: 2025-10-02 08:01:36.971574602 +0000 UTC m=+0.092596551 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:01:37 np0005465604 python3.9[200487]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:01:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:39 np0005465604 python3.9[200661]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:40 np0005465604 python3.9[200816]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:41 np0005465604 python3.9[200971]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:42 np0005465604 python3.9[201126]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:43 np0005465604 python3.9[201281]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:44 np0005465604 python3.9[201436]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:45 np0005465604 python3.9[201591]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:46 np0005465604 python3.9[201746]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 04:01:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:47 np0005465604 python3.9[201901]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:01:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:48 np0005465604 python3.9[202053]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:01:49 np0005465604 python3.9[202205]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:01:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:50 np0005465604 python3.9[202357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:01:50 np0005465604 python3.9[202509]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:01:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:51 np0005465604 python3.9[202661]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:01:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:52 np0005465604 python3.9[202813]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:01:53 np0005465604 python3.9[202938]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759392112.083804-554-264045338720387/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:01:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:54 np0005465604 python3.9[203090]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:01:55 np0005465604 python3.9[203289]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759392114.040365-554-97551183975264/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:01:55 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 32c5458c-e891-40cf-8c4f-0fb779ecfab8 does not exist
Oct  2 04:01:55 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 51d77a62-fdd7-476b-a7f5-20794bdd7790 does not exist
Oct  2 04:01:55 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 32d357c9-41e8-4dd3-ac2c-ece8b49dd7d3 does not exist
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:01:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:01:56 np0005465604 python3.9[203524]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:01:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:01:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:01:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:01:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:01:56 np0005465604 podman[203753]: 2025-10-02 08:01:56.695160517 +0000 UTC m=+0.065681512 container create e01ff9787a81deaaf9ebf23623f27d2ba9c77f232aef7e08a5ded40a0d46049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 04:01:56 np0005465604 systemd[1]: Started libpod-conmon-e01ff9787a81deaaf9ebf23623f27d2ba9c77f232aef7e08a5ded40a0d46049d.scope.
Oct  2 04:01:56 np0005465604 podman[203753]: 2025-10-02 08:01:56.670030672 +0000 UTC m=+0.040551707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:01:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:01:56 np0005465604 podman[203753]: 2025-10-02 08:01:56.798221376 +0000 UTC m=+0.168742381 container init e01ff9787a81deaaf9ebf23623f27d2ba9c77f232aef7e08a5ded40a0d46049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:01:56 np0005465604 podman[203753]: 2025-10-02 08:01:56.808622013 +0000 UTC m=+0.179143008 container start e01ff9787a81deaaf9ebf23623f27d2ba9c77f232aef7e08a5ded40a0d46049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:01:56 np0005465604 podman[203753]: 2025-10-02 08:01:56.812380508 +0000 UTC m=+0.182901523 container attach e01ff9787a81deaaf9ebf23623f27d2ba9c77f232aef7e08a5ded40a0d46049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct  2 04:01:56 np0005465604 wonderful_lalande[203783]: 167 167
Oct  2 04:01:56 np0005465604 systemd[1]: libpod-e01ff9787a81deaaf9ebf23623f27d2ba9c77f232aef7e08a5ded40a0d46049d.scope: Deactivated successfully.
Oct  2 04:01:56 np0005465604 conmon[203783]: conmon e01ff9787a81deaaf9eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e01ff9787a81deaaf9ebf23623f27d2ba9c77f232aef7e08a5ded40a0d46049d.scope/container/memory.events
Oct  2 04:01:56 np0005465604 podman[203753]: 2025-10-02 08:01:56.818513924 +0000 UTC m=+0.189034919 container died e01ff9787a81deaaf9ebf23623f27d2ba9c77f232aef7e08a5ded40a0d46049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 04:01:56 np0005465604 python3.9[203775]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759392115.5621905-554-22166881841400/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:01:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c60ca888f869ab26357e450395c8b49d35bb0af75d73299f5218c9adc55a9235-merged.mount: Deactivated successfully.
Oct  2 04:01:56 np0005465604 podman[203753]: 2025-10-02 08:01:56.874774789 +0000 UTC m=+0.245295784 container remove e01ff9787a81deaaf9ebf23623f27d2ba9c77f232aef7e08a5ded40a0d46049d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lalande, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 04:01:56 np0005465604 systemd[1]: libpod-conmon-e01ff9787a81deaaf9ebf23623f27d2ba9c77f232aef7e08a5ded40a0d46049d.scope: Deactivated successfully.
Oct  2 04:01:57 np0005465604 podman[203832]: 2025-10-02 08:01:57.027963615 +0000 UTC m=+0.041302009 container create 734a93cef3a7729196ce6efa5efa8a026361c93d199ff1a1b8d9aa8b495569bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_neumann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:01:57 np0005465604 systemd[1]: Started libpod-conmon-734a93cef3a7729196ce6efa5efa8a026361c93d199ff1a1b8d9aa8b495569bf.scope.
Oct  2 04:01:57 np0005465604 podman[203832]: 2025-10-02 08:01:57.010968907 +0000 UTC m=+0.024307321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:01:57 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:01:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f4ee2ae6c86ad3d56db48ce45b712ace55c4d514e6c52d6ddad2d859e42665/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:01:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f4ee2ae6c86ad3d56db48ce45b712ace55c4d514e6c52d6ddad2d859e42665/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:01:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f4ee2ae6c86ad3d56db48ce45b712ace55c4d514e6c52d6ddad2d859e42665/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:01:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f4ee2ae6c86ad3d56db48ce45b712ace55c4d514e6c52d6ddad2d859e42665/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:01:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f4ee2ae6c86ad3d56db48ce45b712ace55c4d514e6c52d6ddad2d859e42665/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:01:57 np0005465604 podman[203832]: 2025-10-02 08:01:57.140295337 +0000 UTC m=+0.153633761 container init 734a93cef3a7729196ce6efa5efa8a026361c93d199ff1a1b8d9aa8b495569bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_neumann, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:01:57 np0005465604 podman[203832]: 2025-10-02 08:01:57.14795128 +0000 UTC m=+0.161289674 container start 734a93cef3a7729196ce6efa5efa8a026361c93d199ff1a1b8d9aa8b495569bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_neumann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 04:01:57 np0005465604 podman[203832]: 2025-10-02 08:01:57.151065505 +0000 UTC m=+0.164403899 container attach 734a93cef3a7729196ce6efa5efa8a026361c93d199ff1a1b8d9aa8b495569bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:01:57 np0005465604 python3.9[203980]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:01:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:01:58 np0005465604 fervent_neumann[203880]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:01:58 np0005465604 fervent_neumann[203880]: --> relative data size: 1.0
Oct  2 04:01:58 np0005465604 fervent_neumann[203880]: --> All data devices are unavailable
Oct  2 04:01:58 np0005465604 systemd[1]: libpod-734a93cef3a7729196ce6efa5efa8a026361c93d199ff1a1b8d9aa8b495569bf.scope: Deactivated successfully.
Oct  2 04:01:58 np0005465604 systemd[1]: libpod-734a93cef3a7729196ce6efa5efa8a026361c93d199ff1a1b8d9aa8b495569bf.scope: Consumed 1.165s CPU time.
Oct  2 04:01:58 np0005465604 podman[203832]: 2025-10-02 08:01:58.393424211 +0000 UTC m=+1.406762665 container died 734a93cef3a7729196ce6efa5efa8a026361c93d199ff1a1b8d9aa8b495569bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_neumann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:01:58 np0005465604 python3.9[204123]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759392117.0342047-554-134033127252061/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:01:58 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d5f4ee2ae6c86ad3d56db48ce45b712ace55c4d514e6c52d6ddad2d859e42665-merged.mount: Deactivated successfully.
Oct  2 04:01:58 np0005465604 podman[203832]: 2025-10-02 08:01:58.519673027 +0000 UTC m=+1.533011431 container remove 734a93cef3a7729196ce6efa5efa8a026361c93d199ff1a1b8d9aa8b495569bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 04:01:58 np0005465604 systemd[1]: libpod-conmon-734a93cef3a7729196ce6efa5efa8a026361c93d199ff1a1b8d9aa8b495569bf.scope: Deactivated successfully.
Oct  2 04:01:59 np0005465604 podman[204437]: 2025-10-02 08:01:59.210534774 +0000 UTC m=+0.041978751 container create b15735f360e924b39a202d2edef2c7c9a6a1b990300f7f34d2a3e9ba4f936b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 04:01:59 np0005465604 systemd[1]: Started libpod-conmon-b15735f360e924b39a202d2edef2c7c9a6a1b990300f7f34d2a3e9ba4f936b94.scope.
Oct  2 04:01:59 np0005465604 python3.9[204415]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:01:59 np0005465604 podman[204437]: 2025-10-02 08:01:59.193010369 +0000 UTC m=+0.024454386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:01:59 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:01:59 np0005465604 podman[204437]: 2025-10-02 08:01:59.312463588 +0000 UTC m=+0.143907605 container init b15735f360e924b39a202d2edef2c7c9a6a1b990300f7f34d2a3e9ba4f936b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 04:01:59 np0005465604 podman[204437]: 2025-10-02 08:01:59.320862404 +0000 UTC m=+0.152306391 container start b15735f360e924b39a202d2edef2c7c9a6a1b990300f7f34d2a3e9ba4f936b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 04:01:59 np0005465604 podman[204437]: 2025-10-02 08:01:59.324934038 +0000 UTC m=+0.156378065 container attach b15735f360e924b39a202d2edef2c7c9a6a1b990300f7f34d2a3e9ba4f936b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:01:59 np0005465604 bold_kepler[204454]: 167 167
Oct  2 04:01:59 np0005465604 systemd[1]: libpod-b15735f360e924b39a202d2edef2c7c9a6a1b990300f7f34d2a3e9ba4f936b94.scope: Deactivated successfully.
Oct  2 04:01:59 np0005465604 podman[204437]: 2025-10-02 08:01:59.327915099 +0000 UTC m=+0.159359076 container died b15735f360e924b39a202d2edef2c7c9a6a1b990300f7f34d2a3e9ba4f936b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:01:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3fd51f6c9d84cc58690a88d981804ccb9d70def4a5714a5447dc1dafef55fdeb-merged.mount: Deactivated successfully.
Oct  2 04:01:59 np0005465604 podman[204437]: 2025-10-02 08:01:59.372632051 +0000 UTC m=+0.204076038 container remove b15735f360e924b39a202d2edef2c7c9a6a1b990300f7f34d2a3e9ba4f936b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 04:01:59 np0005465604 systemd[1]: libpod-conmon-b15735f360e924b39a202d2edef2c7c9a6a1b990300f7f34d2a3e9ba4f936b94.scope: Deactivated successfully.
Oct  2 04:01:59 np0005465604 podman[204531]: 2025-10-02 08:01:59.59250544 +0000 UTC m=+0.065628961 container create 143dae46cb0431015da85476a170aafa947b7a8be305247c41cbd67bb6980179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 04:01:59 np0005465604 systemd[1]: Started libpod-conmon-143dae46cb0431015da85476a170aafa947b7a8be305247c41cbd67bb6980179.scope.
Oct  2 04:01:59 np0005465604 podman[204531]: 2025-10-02 08:01:59.572420697 +0000 UTC m=+0.045544188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:01:59 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:01:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdad10965430791aab4bcc7a5379d1b44cd9b99dcac932deb610419062335e41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:01:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdad10965430791aab4bcc7a5379d1b44cd9b99dcac932deb610419062335e41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:01:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdad10965430791aab4bcc7a5379d1b44cd9b99dcac932deb610419062335e41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:01:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdad10965430791aab4bcc7a5379d1b44cd9b99dcac932deb610419062335e41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:01:59 np0005465604 podman[204531]: 2025-10-02 08:01:59.72612868 +0000 UTC m=+0.199252181 container init 143dae46cb0431015da85476a170aafa947b7a8be305247c41cbd67bb6980179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:01:59 np0005465604 podman[204531]: 2025-10-02 08:01:59.74649472 +0000 UTC m=+0.219618201 container start 143dae46cb0431015da85476a170aafa947b7a8be305247c41cbd67bb6980179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:01:59 np0005465604 podman[204531]: 2025-10-02 08:01:59.750140942 +0000 UTC m=+0.223264423 container attach 143dae46cb0431015da85476a170aafa947b7a8be305247c41cbd67bb6980179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:01:59 np0005465604 python3.9[204623]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759392118.682665-554-248658554892312/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]: {
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:    "0": [
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:        {
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "devices": [
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "/dev/loop3"
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            ],
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_name": "ceph_lv0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_size": "21470642176",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "name": "ceph_lv0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "tags": {
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.cluster_name": "ceph",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.crush_device_class": "",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.encrypted": "0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.osd_id": "0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.type": "block",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.vdo": "0"
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            },
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "type": "block",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "vg_name": "ceph_vg0"
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:        }
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:    ],
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:    "1": [
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:        {
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "devices": [
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "/dev/loop4"
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            ],
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_name": "ceph_lv1",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_size": "21470642176",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "name": "ceph_lv1",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "tags": {
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.cluster_name": "ceph",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.crush_device_class": "",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.encrypted": "0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.osd_id": "1",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.type": "block",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.vdo": "0"
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            },
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "type": "block",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "vg_name": "ceph_vg1"
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:        }
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:    ],
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:    "2": [
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:        {
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "devices": [
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "/dev/loop5"
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            ],
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_name": "ceph_lv2",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_size": "21470642176",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "name": "ceph_lv2",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "tags": {
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.cluster_name": "ceph",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.crush_device_class": "",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.encrypted": "0",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.osd_id": "2",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.type": "block",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:                "ceph.vdo": "0"
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            },
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "type": "block",
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:            "vg_name": "ceph_vg2"
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:        }
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]:    ]
Oct  2 04:02:00 np0005465604 amazing_kowalevski[204590]: }
Oct  2 04:02:00 np0005465604 systemd[1]: libpod-143dae46cb0431015da85476a170aafa947b7a8be305247c41cbd67bb6980179.scope: Deactivated successfully.
Oct  2 04:02:00 np0005465604 podman[204531]: 2025-10-02 08:02:00.657445561 +0000 UTC m=+1.130569042 container died 143dae46cb0431015da85476a170aafa947b7a8be305247c41cbd67bb6980179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:02:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bdad10965430791aab4bcc7a5379d1b44cd9b99dcac932deb610419062335e41-merged.mount: Deactivated successfully.
Oct  2 04:02:00 np0005465604 podman[204778]: 2025-10-02 08:02:00.712013133 +0000 UTC m=+0.130579109 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:02:00 np0005465604 podman[204531]: 2025-10-02 08:02:00.73192077 +0000 UTC m=+1.205044251 container remove 143dae46cb0431015da85476a170aafa947b7a8be305247c41cbd67bb6980179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:02:00 np0005465604 systemd[1]: libpod-conmon-143dae46cb0431015da85476a170aafa947b7a8be305247c41cbd67bb6980179.scope: Deactivated successfully.
Oct  2 04:02:00 np0005465604 python3.9[204780]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:01 np0005465604 python3.9[205045]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759392120.2047014-554-274857393474005/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:01 np0005465604 podman[205088]: 2025-10-02 08:02:01.674323708 +0000 UTC m=+0.068356003 container create 548b3177303cb89ccd8fd9adacbbcadef6b6db3057e02284c7a80de47b5e1b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:02:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:01 np0005465604 podman[205088]: 2025-10-02 08:02:01.641375655 +0000 UTC m=+0.035408030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:02:01 np0005465604 systemd[1]: Started libpod-conmon-548b3177303cb89ccd8fd9adacbbcadef6b6db3057e02284c7a80de47b5e1b81.scope.
Oct  2 04:02:01 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:02:01 np0005465604 podman[205088]: 2025-10-02 08:02:01.787384932 +0000 UTC m=+0.181417277 container init 548b3177303cb89ccd8fd9adacbbcadef6b6db3057e02284c7a80de47b5e1b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elbakyan, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct  2 04:02:01 np0005465604 podman[205088]: 2025-10-02 08:02:01.802146692 +0000 UTC m=+0.196178997 container start 548b3177303cb89ccd8fd9adacbbcadef6b6db3057e02284c7a80de47b5e1b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:02:01 np0005465604 podman[205088]: 2025-10-02 08:02:01.806940568 +0000 UTC m=+0.200972873 container attach 548b3177303cb89ccd8fd9adacbbcadef6b6db3057e02284c7a80de47b5e1b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elbakyan, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 04:02:01 np0005465604 flamboyant_elbakyan[205128]: 167 167
Oct  2 04:02:01 np0005465604 systemd[1]: libpod-548b3177303cb89ccd8fd9adacbbcadef6b6db3057e02284c7a80de47b5e1b81.scope: Deactivated successfully.
Oct  2 04:02:01 np0005465604 podman[205088]: 2025-10-02 08:02:01.814123887 +0000 UTC m=+0.208156212 container died 548b3177303cb89ccd8fd9adacbbcadef6b6db3057e02284c7a80de47b5e1b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elbakyan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 04:02:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2c5df07e6e7df99d08ca36565d6caa3d06ab009106cfedbb306d77cf4f8b59a7-merged.mount: Deactivated successfully.
Oct  2 04:02:01 np0005465604 podman[205088]: 2025-10-02 08:02:01.87692947 +0000 UTC m=+0.270961745 container remove 548b3177303cb89ccd8fd9adacbbcadef6b6db3057e02284c7a80de47b5e1b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elbakyan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:02:01 np0005465604 systemd[1]: libpod-conmon-548b3177303cb89ccd8fd9adacbbcadef6b6db3057e02284c7a80de47b5e1b81.scope: Deactivated successfully.
Oct  2 04:02:02 np0005465604 podman[205211]: 2025-10-02 08:02:02.103515593 +0000 UTC m=+0.057015248 container create b223ac3a2fe3c2c48068770f2025033ae235d19c746e1d75b5fb50561de1aea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goodall, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 04:02:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:02 np0005465604 systemd[1]: Started libpod-conmon-b223ac3a2fe3c2c48068770f2025033ae235d19c746e1d75b5fb50561de1aea8.scope.
Oct  2 04:02:02 np0005465604 podman[205211]: 2025-10-02 08:02:02.085980999 +0000 UTC m=+0.039480674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:02:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:02:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db678539f529a403e43f04e0cb12bd944ccb352435eacdf1775d00907e19977e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:02:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db678539f529a403e43f04e0cb12bd944ccb352435eacdf1775d00907e19977e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:02:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db678539f529a403e43f04e0cb12bd944ccb352435eacdf1775d00907e19977e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:02:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db678539f529a403e43f04e0cb12bd944ccb352435eacdf1775d00907e19977e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:02:02 np0005465604 podman[205211]: 2025-10-02 08:02:02.226084097 +0000 UTC m=+0.179583842 container init b223ac3a2fe3c2c48068770f2025033ae235d19c746e1d75b5fb50561de1aea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 04:02:02 np0005465604 podman[205211]: 2025-10-02 08:02:02.236671169 +0000 UTC m=+0.190170854 container start b223ac3a2fe3c2c48068770f2025033ae235d19c746e1d75b5fb50561de1aea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goodall, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 04:02:02 np0005465604 podman[205211]: 2025-10-02 08:02:02.240914348 +0000 UTC m=+0.194414033 container attach b223ac3a2fe3c2c48068770f2025033ae235d19c746e1d75b5fb50561de1aea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goodall, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 04:02:02 np0005465604 python3.9[205300]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:03 np0005465604 python3.9[205424]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759392121.866006-554-86569823820847/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]: {
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "osd_id": 2,
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "type": "bluestore"
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:    },
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "osd_id": 1,
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "type": "bluestore"
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:    },
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "osd_id": 0,
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:        "type": "bluestore"
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]:    }
Oct  2 04:02:03 np0005465604 agitated_goodall[205252]: }
Oct  2 04:02:03 np0005465604 systemd[1]: libpod-b223ac3a2fe3c2c48068770f2025033ae235d19c746e1d75b5fb50561de1aea8.scope: Deactivated successfully.
Oct  2 04:02:03 np0005465604 systemd[1]: libpod-b223ac3a2fe3c2c48068770f2025033ae235d19c746e1d75b5fb50561de1aea8.scope: Consumed 1.062s CPU time.
Oct  2 04:02:03 np0005465604 podman[205211]: 2025-10-02 08:02:03.293485153 +0000 UTC m=+1.246984848 container died b223ac3a2fe3c2c48068770f2025033ae235d19c746e1d75b5fb50561de1aea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goodall, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:02:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-db678539f529a403e43f04e0cb12bd944ccb352435eacdf1775d00907e19977e-merged.mount: Deactivated successfully.
Oct  2 04:02:03 np0005465604 podman[205211]: 2025-10-02 08:02:03.358940007 +0000 UTC m=+1.312439672 container remove b223ac3a2fe3c2c48068770f2025033ae235d19c746e1d75b5fb50561de1aea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_goodall, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:02:03 np0005465604 systemd[1]: libpod-conmon-b223ac3a2fe3c2c48068770f2025033ae235d19c746e1d75b5fb50561de1aea8.scope: Deactivated successfully.
Oct  2 04:02:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:02:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:02:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:02:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:02:03 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e0d3b508-2275-4a05-837b-f0c3e9ad2842 does not exist
Oct  2 04:02:03 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 11dc6a58-3557-4123-833b-872a50ae0bc8 does not exist
Oct  2 04:02:03 np0005465604 python3.9[205666]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:04 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:02:04 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:02:04 np0005465604 python3.9[205791]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759392123.3721209-554-56189764183138/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:05 np0005465604 python3.9[205943]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct  2 04:02:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:06 np0005465604 python3.9[206096]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:07 np0005465604 python3.9[206248]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:07 np0005465604 podman[206372]: 2025-10-02 08:02:07.752949893 +0000 UTC m=+0.095781188 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 04:02:07 np0005465604 python3.9[206419]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:08 np0005465604 python3.9[206572]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:09 np0005465604 python3.9[206724]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:10 np0005465604 python3.9[206876]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:11 np0005465604 python3.9[207028]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:12 np0005465604 python3.9[207180]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:12 np0005465604 python3.9[207332]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:13 np0005465604 python3.9[207484]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:14 np0005465604 python3.9[207636]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:15 np0005465604 python3.9[207788]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:16 np0005465604 python3.9[207940]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:16 np0005465604 python3.9[208092]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:17 np0005465604 python3.9[208244]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:18 np0005465604 python3.9[208367]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392137.2702725-775-41671837915600/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:19 np0005465604 python3.9[208519]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:19 np0005465604 python3.9[208642]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392138.6993265-775-170759924459116/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:20 np0005465604 python3.9[208794]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:21 np0005465604 python3.9[208917]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392140.1678147-775-47928314518592/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:22 np0005465604 python3.9[209069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:23 np0005465604 python3.9[209192]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392141.6928482-775-147380612331185/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:23 np0005465604 python3.9[209344]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:24 np0005465604 python3.9[209467]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392143.2284696-775-23946908254630/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:25 np0005465604 python3.9[209619]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:26 np0005465604 python3.9[209742]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392144.8903039-775-167403612107005/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:27 np0005465604 python3.9[209894]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:02:27
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.meta', '.mgr', 'vms']
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:02:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:02:28 np0005465604 python3.9[210017]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392146.5306346-775-183405216565526/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:28 np0005465604 python3.9[210169]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:29 np0005465604 python3.9[210292]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392148.270869-775-223050699611074/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:30 np0005465604 python3.9[210444]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:30 np0005465604 podman[210567]: 2025-10-02 08:02:30.955153742 +0000 UTC m=+0.124820977 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct  2 04:02:31 np0005465604 python3.9[210568]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392149.719671-775-101229917898842/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:31 np0005465604 python3.9[210745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:32 np0005465604 python3.9[210868]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392151.268129-775-74325346663791/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:33 np0005465604 python3.9[211020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:34 np0005465604 python3.9[211143]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392152.7911894-775-193959511163494/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:02:34.787 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:02:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:02:34.787 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:02:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:02:34.788 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:02:34 np0005465604 python3.9[211295]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:35 np0005465604 python3.9[211418]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392154.303296-775-113067310935940/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:36 np0005465604 python3.9[211570]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:37 np0005465604 python3.9[211693]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392155.8743186-775-179762727235181/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:37 np0005465604 podman[211845]: 2025-10-02 08:02:37.912727918 +0000 UTC m=+0.086007853 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:02:38 np0005465604 python3.9[211846]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:02:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:38 np0005465604 python3.9[211987]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392157.3977857-775-141178778604268/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:39 np0005465604 python3.9[212137]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:02:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:40 np0005465604 python3.9[212292]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  2 04:02:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:42 np0005465604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct  2 04:02:42 np0005465604 python3.9[212448]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:43 np0005465604 python3.9[212600]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:44 np0005465604 python3.9[212752]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:45 np0005465604 python3.9[212904]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:45 np0005465604 python3.9[213056]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:46 np0005465604 python3.9[213208]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:47 np0005465604 python3.9[213360]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:48 np0005465604 python3.9[213512]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:49 np0005465604 python3.9[213664]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:50 np0005465604 python3.9[213816]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:51 np0005465604 python3.9[213968]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 04:02:51 np0005465604 systemd[1]: Reloading.
Oct  2 04:02:51 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:02:51 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:02:51 np0005465604 systemd[1]: Starting libvirt logging daemon socket...
Oct  2 04:02:51 np0005465604 systemd[1]: Listening on libvirt logging daemon socket.
Oct  2 04:02:51 np0005465604 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  2 04:02:51 np0005465604 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  2 04:02:51 np0005465604 systemd[1]: Starting libvirt logging daemon...
Oct  2 04:02:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:51 np0005465604 systemd[1]: Started libvirt logging daemon.
Oct  2 04:02:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:52 np0005465604 python3.9[214161]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 04:02:52 np0005465604 systemd[1]: Reloading.
Oct  2 04:02:52 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:02:52 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:02:53 np0005465604 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  2 04:02:53 np0005465604 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  2 04:02:53 np0005465604 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  2 04:02:53 np0005465604 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  2 04:02:53 np0005465604 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  2 04:02:53 np0005465604 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  2 04:02:53 np0005465604 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  2 04:02:53 np0005465604 systemd[1]: Starting libvirt nodedev daemon...
Oct  2 04:02:53 np0005465604 systemd[1]: Started libvirt nodedev daemon.
Oct  2 04:02:53 np0005465604 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  2 04:02:53 np0005465604 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct  2 04:02:53 np0005465604 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  2 04:02:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:54 np0005465604 python3.9[214384]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 04:02:54 np0005465604 systemd[1]: Reloading.
Oct  2 04:02:54 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:02:54 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:02:54 np0005465604 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  2 04:02:54 np0005465604 setroubleshoot[214198]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l dda0109b-1f55-4136-81a7-ef67a062d1df
Oct  2 04:02:54 np0005465604 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  2 04:02:54 np0005465604 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  2 04:02:54 np0005465604 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  2 04:02:54 np0005465604 systemd[1]: Starting libvirt proxy daemon...
Oct  2 04:02:54 np0005465604 setroubleshoot[214198]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  2 04:02:54 np0005465604 setroubleshoot[214198]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l dda0109b-1f55-4136-81a7-ef67a062d1df
Oct  2 04:02:54 np0005465604 setroubleshoot[214198]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  2 04:02:54 np0005465604 systemd[1]: Started libvirt proxy daemon.
Oct  2 04:02:55 np0005465604 python3.9[214596]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 04:02:55 np0005465604 systemd[1]: Reloading.
Oct  2 04:02:56 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:02:56 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:02:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:56 np0005465604 systemd[1]: Listening on libvirt locking daemon socket.
Oct  2 04:02:56 np0005465604 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  2 04:02:56 np0005465604 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  2 04:02:56 np0005465604 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  2 04:02:56 np0005465604 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  2 04:02:56 np0005465604 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  2 04:02:56 np0005465604 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  2 04:02:56 np0005465604 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  2 04:02:56 np0005465604 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  2 04:02:56 np0005465604 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  2 04:02:56 np0005465604 systemd[1]: Starting libvirt QEMU daemon...
Oct  2 04:02:56 np0005465604 systemd[1]: Started libvirt QEMU daemon.
Oct  2 04:02:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:02:57 np0005465604 python3.9[214809]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 04:02:57 np0005465604 systemd[1]: Reloading.
Oct  2 04:02:57 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:02:57 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:02:57 np0005465604 systemd[1]: Starting libvirt secret daemon socket...
Oct  2 04:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:02:57 np0005465604 systemd[1]: Listening on libvirt secret daemon socket.
Oct  2 04:02:57 np0005465604 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  2 04:02:57 np0005465604 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  2 04:02:57 np0005465604 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  2 04:02:57 np0005465604 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  2 04:02:57 np0005465604 systemd[1]: Starting libvirt secret daemon...
Oct  2 04:02:57 np0005465604 systemd[1]: Started libvirt secret daemon.
Oct  2 04:02:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:02:59 np0005465604 python3.9[215018]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:02:59 np0005465604 python3.9[215170]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 04:03:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:00 np0005465604 python3.9[215322]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:03:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:01 np0005465604 podman[215450]: 2025-10-02 08:03:01.736244682 +0000 UTC m=+0.164389463 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:03:01 np0005465604 python3.9[215486]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 04:03:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:02 np0005465604 python3.9[215656]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:03 np0005465604 python3.9[215777]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759392182.372989-1133-193736640161796/.source.xml follow=False _original_basename=secret.xml.j2 checksum=a6396b9ba65327f44b3eacf412dc0176258c1107 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:04 np0005465604 python3.9[216047]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine a52e644f-f702-594c-a648-813e3e0df2b1#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:03:04 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 387d424a-a2fa-4140-861c-cd4a958e1d09 does not exist
Oct  2 04:03:04 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b7da8fbe-f2b2-4e15-a001-cc570725edbc does not exist
Oct  2 04:03:04 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5d83e9ab-7d42-48d1-8ef6-0bb7b9d3c8c7 does not exist
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:03:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:03:04 np0005465604 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  2 04:03:04 np0005465604 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.007s CPU time.
Oct  2 04:03:04 np0005465604 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  2 04:03:05 np0005465604 podman[216344]: 2025-10-02 08:03:05.364666566 +0000 UTC m=+0.073834468 container create 9841e0b9f1c64e7e59b71ae75bcecdd1bc5e14d414a7600006df9295a31e606a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:03:05 np0005465604 systemd[1]: Started libpod-conmon-9841e0b9f1c64e7e59b71ae75bcecdd1bc5e14d414a7600006df9295a31e606a.scope.
Oct  2 04:03:05 np0005465604 podman[216344]: 2025-10-02 08:03:05.335169084 +0000 UTC m=+0.044337016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:03:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:03:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:03:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:03:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:03:05 np0005465604 podman[216344]: 2025-10-02 08:03:05.485380512 +0000 UTC m=+0.194548484 container init 9841e0b9f1c64e7e59b71ae75bcecdd1bc5e14d414a7600006df9295a31e606a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:03:05 np0005465604 podman[216344]: 2025-10-02 08:03:05.496119865 +0000 UTC m=+0.205287787 container start 9841e0b9f1c64e7e59b71ae75bcecdd1bc5e14d414a7600006df9295a31e606a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pascal, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:03:05 np0005465604 podman[216344]: 2025-10-02 08:03:05.50037542 +0000 UTC m=+0.209543402 container attach 9841e0b9f1c64e7e59b71ae75bcecdd1bc5e14d414a7600006df9295a31e606a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:03:05 np0005465604 loving_pascal[216381]: 167 167
Oct  2 04:03:05 np0005465604 systemd[1]: libpod-9841e0b9f1c64e7e59b71ae75bcecdd1bc5e14d414a7600006df9295a31e606a.scope: Deactivated successfully.
Oct  2 04:03:05 np0005465604 podman[216344]: 2025-10-02 08:03:05.504695846 +0000 UTC m=+0.213863748 container died 9841e0b9f1c64e7e59b71ae75bcecdd1bc5e14d414a7600006df9295a31e606a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:03:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-975e752162c6affcbefa14039db9291ef0e32b1abf372c8c95f786f14c3c8cc7-merged.mount: Deactivated successfully.
Oct  2 04:03:05 np0005465604 podman[216344]: 2025-10-02 08:03:05.555985694 +0000 UTC m=+0.265153636 container remove 9841e0b9f1c64e7e59b71ae75bcecdd1bc5e14d414a7600006df9295a31e606a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 04:03:05 np0005465604 python3.9[216375]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:05 np0005465604 systemd[1]: libpod-conmon-9841e0b9f1c64e7e59b71ae75bcecdd1bc5e14d414a7600006df9295a31e606a.scope: Deactivated successfully.
Oct  2 04:03:05 np0005465604 podman[216429]: 2025-10-02 08:03:05.774769334 +0000 UTC m=+0.064440853 container create d4cb30bb7ac26e4b5577c13ce1894e0ab9bd315043b5cf63e6a7d93e8325334d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 04:03:05 np0005465604 systemd[1]: Started libpod-conmon-d4cb30bb7ac26e4b5577c13ce1894e0ab9bd315043b5cf63e6a7d93e8325334d.scope.
Oct  2 04:03:05 np0005465604 podman[216429]: 2025-10-02 08:03:05.743227783 +0000 UTC m=+0.032899402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:03:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:03:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eff5a846ed2e39558f193ebf227cf44d70aa05672fda948d33808eff76b332e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eff5a846ed2e39558f193ebf227cf44d70aa05672fda948d33808eff76b332e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eff5a846ed2e39558f193ebf227cf44d70aa05672fda948d33808eff76b332e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eff5a846ed2e39558f193ebf227cf44d70aa05672fda948d33808eff76b332e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eff5a846ed2e39558f193ebf227cf44d70aa05672fda948d33808eff76b332e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:05 np0005465604 podman[216429]: 2025-10-02 08:03:05.888369741 +0000 UTC m=+0.178041340 container init d4cb30bb7ac26e4b5577c13ce1894e0ab9bd315043b5cf63e6a7d93e8325334d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 04:03:05 np0005465604 podman[216429]: 2025-10-02 08:03:05.907225432 +0000 UTC m=+0.196896941 container start d4cb30bb7ac26e4b5577c13ce1894e0ab9bd315043b5cf63e6a7d93e8325334d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:03:05 np0005465604 podman[216429]: 2025-10-02 08:03:05.910616991 +0000 UTC m=+0.200288530 container attach d4cb30bb7ac26e4b5577c13ce1894e0ab9bd315043b5cf63e6a7d93e8325334d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct  2 04:03:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:07 np0005465604 optimistic_banzai[216445]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:03:07 np0005465604 optimistic_banzai[216445]: --> relative data size: 1.0
Oct  2 04:03:07 np0005465604 optimistic_banzai[216445]: --> All data devices are unavailable
Oct  2 04:03:07 np0005465604 systemd[1]: libpod-d4cb30bb7ac26e4b5577c13ce1894e0ab9bd315043b5cf63e6a7d93e8325334d.scope: Deactivated successfully.
Oct  2 04:03:07 np0005465604 podman[216429]: 2025-10-02 08:03:07.177005828 +0000 UTC m=+1.466677427 container died d4cb30bb7ac26e4b5577c13ce1894e0ab9bd315043b5cf63e6a7d93e8325334d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banzai, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:03:07 np0005465604 systemd[1]: libpod-d4cb30bb7ac26e4b5577c13ce1894e0ab9bd315043b5cf63e6a7d93e8325334d.scope: Consumed 1.209s CPU time.
Oct  2 04:03:07 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2eff5a846ed2e39558f193ebf227cf44d70aa05672fda948d33808eff76b332e-merged.mount: Deactivated successfully.
Oct  2 04:03:07 np0005465604 podman[216429]: 2025-10-02 08:03:07.261203887 +0000 UTC m=+1.550875416 container remove d4cb30bb7ac26e4b5577c13ce1894e0ab9bd315043b5cf63e6a7d93e8325334d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banzai, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 04:03:07 np0005465604 systemd[1]: libpod-conmon-d4cb30bb7ac26e4b5577c13ce1894e0ab9bd315043b5cf63e6a7d93e8325334d.scope: Deactivated successfully.
Oct  2 04:03:08 np0005465604 podman[216958]: 2025-10-02 08:03:08.133441852 +0000 UTC m=+0.060700243 container create 02158bbe6003694da26705d7e9d93b9e486b9567c265c55906ec915c90554d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:03:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:08 np0005465604 systemd[1]: Started libpod-conmon-02158bbe6003694da26705d7e9d93b9e486b9567c265c55906ec915c90554d7b.scope.
Oct  2 04:03:08 np0005465604 podman[216958]: 2025-10-02 08:03:08.11145893 +0000 UTC m=+0.038717351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:03:08 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:03:08 np0005465604 podman[216958]: 2025-10-02 08:03:08.244491036 +0000 UTC m=+0.171749527 container init 02158bbe6003694da26705d7e9d93b9e486b9567c265c55906ec915c90554d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:03:08 np0005465604 podman[216958]: 2025-10-02 08:03:08.258304849 +0000 UTC m=+0.185563270 container start 02158bbe6003694da26705d7e9d93b9e486b9567c265c55906ec915c90554d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 04:03:08 np0005465604 podman[216958]: 2025-10-02 08:03:08.26344438 +0000 UTC m=+0.190702891 container attach 02158bbe6003694da26705d7e9d93b9e486b9567c265c55906ec915c90554d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:03:08 np0005465604 heuristic_rhodes[217011]: 167 167
Oct  2 04:03:08 np0005465604 systemd[1]: libpod-02158bbe6003694da26705d7e9d93b9e486b9567c265c55906ec915c90554d7b.scope: Deactivated successfully.
Oct  2 04:03:08 np0005465604 conmon[217011]: conmon 02158bbe6003694da267 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-02158bbe6003694da26705d7e9d93b9e486b9567c265c55906ec915c90554d7b.scope/container/memory.events
Oct  2 04:03:08 np0005465604 podman[216958]: 2025-10-02 08:03:08.272138134 +0000 UTC m=+0.199396555 container died 02158bbe6003694da26705d7e9d93b9e486b9567c265c55906ec915c90554d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:03:08 np0005465604 podman[217002]: 2025-10-02 08:03:08.283902166 +0000 UTC m=+0.092831782 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct  2 04:03:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-492ae85a6bbb177dcf833ce569b28049ea3c6d0b4e9a0330e1c31c5a193d44d6-merged.mount: Deactivated successfully.
Oct  2 04:03:08 np0005465604 podman[216958]: 2025-10-02 08:03:08.33022997 +0000 UTC m=+0.257488361 container remove 02158bbe6003694da26705d7e9d93b9e486b9567c265c55906ec915c90554d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_rhodes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:03:08 np0005465604 systemd[1]: libpod-conmon-02158bbe6003694da26705d7e9d93b9e486b9567c265c55906ec915c90554d7b.scope: Deactivated successfully.
Oct  2 04:03:08 np0005465604 podman[217120]: 2025-10-02 08:03:08.577887813 +0000 UTC m=+0.077923797 container create f0d7646d656b2e57eb34c5cbd4985579c693b566b4b1cdd6d3783d2e346dcc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_turing, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:03:08 np0005465604 podman[217120]: 2025-10-02 08:03:08.544083046 +0000 UTC m=+0.044119100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:03:08 np0005465604 systemd[1]: Started libpod-conmon-f0d7646d656b2e57eb34c5cbd4985579c693b566b4b1cdd6d3783d2e346dcc3c.scope.
Oct  2 04:03:08 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:03:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e7438862f610977930c3b028f28dd3ebbae79434ba2910a2fd9984963fab37f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e7438862f610977930c3b028f28dd3ebbae79434ba2910a2fd9984963fab37f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e7438862f610977930c3b028f28dd3ebbae79434ba2910a2fd9984963fab37f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e7438862f610977930c3b028f28dd3ebbae79434ba2910a2fd9984963fab37f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:08 np0005465604 podman[217120]: 2025-10-02 08:03:08.693003255 +0000 UTC m=+0.193039279 container init f0d7646d656b2e57eb34c5cbd4985579c693b566b4b1cdd6d3783d2e346dcc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:03:08 np0005465604 podman[217120]: 2025-10-02 08:03:08.703854072 +0000 UTC m=+0.203890056 container start f0d7646d656b2e57eb34c5cbd4985579c693b566b4b1cdd6d3783d2e346dcc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:03:08 np0005465604 podman[217120]: 2025-10-02 08:03:08.707890589 +0000 UTC m=+0.207926593 container attach f0d7646d656b2e57eb34c5cbd4985579c693b566b4b1cdd6d3783d2e346dcc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_turing, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:03:08 np0005465604 python3.9[217122]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:09 np0005465604 modest_turing[217137]: {
Oct  2 04:03:09 np0005465604 modest_turing[217137]:    "0": [
Oct  2 04:03:09 np0005465604 modest_turing[217137]:        {
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "devices": [
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "/dev/loop3"
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            ],
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_name": "ceph_lv0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_size": "21470642176",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "name": "ceph_lv0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "tags": {
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.cluster_name": "ceph",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.crush_device_class": "",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.encrypted": "0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.osd_id": "0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.type": "block",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.vdo": "0"
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            },
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "type": "block",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "vg_name": "ceph_vg0"
Oct  2 04:03:09 np0005465604 modest_turing[217137]:        }
Oct  2 04:03:09 np0005465604 modest_turing[217137]:    ],
Oct  2 04:03:09 np0005465604 modest_turing[217137]:    "1": [
Oct  2 04:03:09 np0005465604 modest_turing[217137]:        {
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "devices": [
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "/dev/loop4"
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            ],
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_name": "ceph_lv1",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_size": "21470642176",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "name": "ceph_lv1",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "tags": {
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.cluster_name": "ceph",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.crush_device_class": "",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.encrypted": "0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.osd_id": "1",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.type": "block",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.vdo": "0"
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            },
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "type": "block",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "vg_name": "ceph_vg1"
Oct  2 04:03:09 np0005465604 modest_turing[217137]:        }
Oct  2 04:03:09 np0005465604 modest_turing[217137]:    ],
Oct  2 04:03:09 np0005465604 modest_turing[217137]:    "2": [
Oct  2 04:03:09 np0005465604 modest_turing[217137]:        {
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "devices": [
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "/dev/loop5"
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            ],
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_name": "ceph_lv2",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_size": "21470642176",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "name": "ceph_lv2",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "tags": {
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.cluster_name": "ceph",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.crush_device_class": "",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.encrypted": "0",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.osd_id": "2",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.type": "block",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:                "ceph.vdo": "0"
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            },
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "type": "block",
Oct  2 04:03:09 np0005465604 modest_turing[217137]:            "vg_name": "ceph_vg2"
Oct  2 04:03:09 np0005465604 modest_turing[217137]:        }
Oct  2 04:03:09 np0005465604 modest_turing[217137]:    ]
Oct  2 04:03:09 np0005465604 modest_turing[217137]: }
Oct  2 04:03:09 np0005465604 systemd[1]: libpod-f0d7646d656b2e57eb34c5cbd4985579c693b566b4b1cdd6d3783d2e346dcc3c.scope: Deactivated successfully.
Oct  2 04:03:09 np0005465604 podman[217120]: 2025-10-02 08:03:09.580451715 +0000 UTC m=+1.080487659 container died f0d7646d656b2e57eb34c5cbd4985579c693b566b4b1cdd6d3783d2e346dcc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:03:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9e7438862f610977930c3b028f28dd3ebbae79434ba2910a2fd9984963fab37f-merged.mount: Deactivated successfully.
Oct  2 04:03:09 np0005465604 podman[217120]: 2025-10-02 08:03:09.642655321 +0000 UTC m=+1.142691265 container remove f0d7646d656b2e57eb34c5cbd4985579c693b566b4b1cdd6d3783d2e346dcc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_turing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 04:03:09 np0005465604 systemd[1]: libpod-conmon-f0d7646d656b2e57eb34c5cbd4985579c693b566b4b1cdd6d3783d2e346dcc3c.scope: Deactivated successfully.
Oct  2 04:03:09 np0005465604 python3.9[217297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:10 np0005465604 podman[217573]: 2025-10-02 08:03:10.52630445 +0000 UTC m=+0.073257721 container create 8c0df44a7bf27de6054eeadc32843321a89085a35ea5a4f637c05603a8b36193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 04:03:10 np0005465604 python3.9[217558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759392189.0766637-1188-203215918735879/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:10 np0005465604 systemd[1]: Started libpod-conmon-8c0df44a7bf27de6054eeadc32843321a89085a35ea5a4f637c05603a8b36193.scope.
Oct  2 04:03:10 np0005465604 podman[217573]: 2025-10-02 08:03:10.499108025 +0000 UTC m=+0.046061366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:03:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:03:10 np0005465604 podman[217573]: 2025-10-02 08:03:10.626442904 +0000 UTC m=+0.173396235 container init 8c0df44a7bf27de6054eeadc32843321a89085a35ea5a4f637c05603a8b36193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:03:10 np0005465604 podman[217573]: 2025-10-02 08:03:10.633981164 +0000 UTC m=+0.180934395 container start 8c0df44a7bf27de6054eeadc32843321a89085a35ea5a4f637c05603a8b36193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 04:03:10 np0005465604 podman[217573]: 2025-10-02 08:03:10.637216519 +0000 UTC m=+0.184169780 container attach 8c0df44a7bf27de6054eeadc32843321a89085a35ea5a4f637c05603a8b36193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:03:10 np0005465604 sad_hellman[217589]: 167 167
Oct  2 04:03:10 np0005465604 systemd[1]: libpod-8c0df44a7bf27de6054eeadc32843321a89085a35ea5a4f637c05603a8b36193.scope: Deactivated successfully.
Oct  2 04:03:10 np0005465604 podman[217573]: 2025-10-02 08:03:10.642080541 +0000 UTC m=+0.189033782 container died 8c0df44a7bf27de6054eeadc32843321a89085a35ea5a4f637c05603a8b36193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct  2 04:03:10 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f2c072de36ba5efd16eb98fee71d3f6f50a8fb5f8ebf956edae1009d92475c3e-merged.mount: Deactivated successfully.
Oct  2 04:03:10 np0005465604 podman[217573]: 2025-10-02 08:03:10.683870762 +0000 UTC m=+0.230824003 container remove 8c0df44a7bf27de6054eeadc32843321a89085a35ea5a4f637c05603a8b36193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:03:10 np0005465604 systemd[1]: libpod-conmon-8c0df44a7bf27de6054eeadc32843321a89085a35ea5a4f637c05603a8b36193.scope: Deactivated successfully.
Oct  2 04:03:10 np0005465604 podman[217637]: 2025-10-02 08:03:10.889804866 +0000 UTC m=+0.071076047 container create 0b5654cdb4c42d97c7781e5bd8608ac5660e5ed6a1554a636e6eec908d4153fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 04:03:10 np0005465604 systemd[1]: Started libpod-conmon-0b5654cdb4c42d97c7781e5bd8608ac5660e5ed6a1554a636e6eec908d4153fd.scope.
Oct  2 04:03:10 np0005465604 podman[217637]: 2025-10-02 08:03:10.864605411 +0000 UTC m=+0.045876602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:03:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:03:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42eaff2d36019d06b1fcbb2600b2ff4391fdeae20c59f268fe629aa4dab6ae3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42eaff2d36019d06b1fcbb2600b2ff4391fdeae20c59f268fe629aa4dab6ae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42eaff2d36019d06b1fcbb2600b2ff4391fdeae20c59f268fe629aa4dab6ae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42eaff2d36019d06b1fcbb2600b2ff4391fdeae20c59f268fe629aa4dab6ae3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:03:11 np0005465604 podman[217637]: 2025-10-02 08:03:11.005060182 +0000 UTC m=+0.186331373 container init 0b5654cdb4c42d97c7781e5bd8608ac5660e5ed6a1554a636e6eec908d4153fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:03:11 np0005465604 podman[217637]: 2025-10-02 08:03:11.021666827 +0000 UTC m=+0.202937998 container start 0b5654cdb4c42d97c7781e5bd8608ac5660e5ed6a1554a636e6eec908d4153fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:03:11 np0005465604 podman[217637]: 2025-10-02 08:03:11.027003954 +0000 UTC m=+0.208275195 container attach 0b5654cdb4c42d97c7781e5bd8608ac5660e5ed6a1554a636e6eec908d4153fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 04:03:11 np0005465604 python3.9[217785]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]: {
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "osd_id": 2,
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "type": "bluestore"
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:    },
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "osd_id": 1,
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "type": "bluestore"
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:    },
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "osd_id": 0,
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:        "type": "bluestore"
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]:    }
Oct  2 04:03:12 np0005465604 lucid_visvesvaraya[217680]: }
Oct  2 04:03:12 np0005465604 systemd[1]: libpod-0b5654cdb4c42d97c7781e5bd8608ac5660e5ed6a1554a636e6eec908d4153fd.scope: Deactivated successfully.
Oct  2 04:03:12 np0005465604 systemd[1]: libpod-0b5654cdb4c42d97c7781e5bd8608ac5660e5ed6a1554a636e6eec908d4153fd.scope: Consumed 1.223s CPU time.
Oct  2 04:03:12 np0005465604 podman[217637]: 2025-10-02 08:03:12.238723163 +0000 UTC m=+1.419994334 container died 0b5654cdb4c42d97c7781e5bd8608ac5660e5ed6a1554a636e6eec908d4153fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:03:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c42eaff2d36019d06b1fcbb2600b2ff4391fdeae20c59f268fe629aa4dab6ae3-merged.mount: Deactivated successfully.
Oct  2 04:03:12 np0005465604 podman[217637]: 2025-10-02 08:03:12.319806862 +0000 UTC m=+1.501078013 container remove 0b5654cdb4c42d97c7781e5bd8608ac5660e5ed6a1554a636e6eec908d4153fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 04:03:12 np0005465604 systemd[1]: libpod-conmon-0b5654cdb4c42d97c7781e5bd8608ac5660e5ed6a1554a636e6eec908d4153fd.scope: Deactivated successfully.
Oct  2 04:03:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:03:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:03:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:03:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:03:12 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e43431a5-7ed3-4f7c-a8a8-8ef61784e23e does not exist
Oct  2 04:03:12 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev dacfeaf6-7ab2-4f45-97b9-716a85373ff8 does not exist
Oct  2 04:03:12 np0005465604 python3.9[217974]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:13 np0005465604 python3.9[218108]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:03:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:03:13 np0005465604 python3.9[218260]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:14 np0005465604 python3.9[218338]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._6p1_sn6 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:15 np0005465604 python3.9[218490]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:16 np0005465604 python3.9[218568]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:17 np0005465604 python3.9[218720]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:03:18 np0005465604 python3[218873]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 04:03:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:18 np0005465604 python3.9[219025]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:19 np0005465604 python3.9[219103]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:20 np0005465604 python3.9[219255]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:20 np0005465604 python3.9[219333]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:21 np0005465604 python3.9[219485]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:22 np0005465604 python3.9[219563]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:23 np0005465604 python3.9[219715]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:23 np0005465604 python3.9[219793]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:24 np0005465604 python3.9[219945]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:25 np0005465604 python3.9[220070]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759392203.908415-1313-128776932540570/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:26 np0005465604 python3.9[220222]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:27 np0005465604 python3.9[220374]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:03:27
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.rgw.root']
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:03:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:03:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:28 np0005465604 python3.9[220529]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:29 np0005465604 python3.9[220681]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:03:29 np0005465604 python3.9[220834]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:03:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:30 np0005465604 python3.9[220988]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:03:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:31 np0005465604 python3.9[221143]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:32 np0005465604 podman[221168]: 2025-10-02 08:03:32.13442249 +0000 UTC m=+0.181885020 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:03:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:32 np0005465604 python3.9[221321]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:33 np0005465604 python3.9[221444]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759392212.0168986-1385-210103569251923/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:34 np0005465604 python3.9[221596]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:34 np0005465604 python3.9[221719]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759392213.5178347-1400-157923816102283/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:03:34.787 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:03:34.788 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:03:34.788 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:03:35 np0005465604 python3.9[221871]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:03:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:36 np0005465604 python3.9[221994]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759392214.9329245-1415-92261015502346/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:03:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:37 np0005465604 python3.9[222146]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:03:37 np0005465604 systemd[1]: Reloading.
Oct  2 04:03:37 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:03:37 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:03:37 np0005465604 systemd[1]: Reached target edpm_libvirt.target.
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:03:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:38 np0005465604 podman[222338]: 2025-10-02 08:03:38.457131793 +0000 UTC m=+0.080002895 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 04:03:38 np0005465604 python3.9[222339]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  2 04:03:38 np0005465604 systemd[1]: Reloading.
Oct  2 04:03:38 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:03:38 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:03:39 np0005465604 systemd[1]: Reloading.
Oct  2 04:03:39 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:03:39 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:03:39 np0005465604 systemd[1]: session-50.scope: Deactivated successfully.
Oct  2 04:03:39 np0005465604 systemd[1]: session-50.scope: Consumed 4min 8.200s CPU time.
Oct  2 04:03:39 np0005465604 systemd-logind[787]: Session 50 logged out. Waiting for processes to exit.
Oct  2 04:03:39 np0005465604 systemd-logind[787]: Removed session 50.
Oct  2 04:03:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:45 np0005465604 systemd-logind[787]: New session 51 of user zuul.
Oct  2 04:03:45 np0005465604 systemd[1]: Started Session 51 of User zuul.
Oct  2 04:03:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:46 np0005465604 python3.9[222606]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 04:03:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:47 np0005465604 python3.9[222762]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:03:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:48 np0005465604 python3.9[222914]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:03:49 np0005465604 python3.9[223066]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:03:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:50 np0005465604 python3.9[223218]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 04:03:51 np0005465604 python3.9[223370]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:03:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:52 np0005465604 python3.9[223522]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:03:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:53 np0005465604 python3.9[223676]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:03:53 np0005465604 systemd[1]: Reloading.
Oct  2 04:03:53 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:03:53 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:03:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:54 np0005465604 python3.9[223865]: ansible-ansible.builtin.service_facts Invoked
Oct  2 04:03:54 np0005465604 network[223882]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 04:03:54 np0005465604 network[223883]: 'network-scripts' will be removed from distribution in near future.
Oct  2 04:03:54 np0005465604 network[223884]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 04:03:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:03:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:03:59 np0005465604 python3.9[224158]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:03:59 np0005465604 systemd[1]: Reloading.
Oct  2 04:04:00 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:04:00 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:04:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:01 np0005465604 python3.9[224347]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:04:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:02 np0005465604 python3.9[224499]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85 name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None 
passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 04:04:03 np0005465604 podman[224525]: 2025-10-02 08:04:03.084197369 +0000 UTC m=+0.134277577 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:04:03 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:04:03 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:04:03 np0005465604 podman[224511]: 2025-10-02 08:04:03.852009686 +0000 UTC m=+1.445445154 image pull 81d94872551c3ae3c30801602bbb5f0c44872f15dcde472a0ba869fe2f28966e quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85
Oct  2 04:04:04 np0005465604 podman[224597]: 2025-10-02 08:04:04.070932968 +0000 UTC m=+0.064966487 container create 77702fc8abee6f3255db1177699ecbc676a621e0af0cd414ac9c59c3174ffeae (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid_config, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.1122] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct  2 04:04:04 np0005465604 podman[224597]: 2025-10-02 08:04:04.037608061 +0000 UTC m=+0.031641620 image pull 81d94872551c3ae3c30801602bbb5f0c44872f15dcde472a0ba869fe2f28966e quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85
Oct  2 04:04:04 np0005465604 kernel: podman0: port 1(veth0) entered blocking state
Oct  2 04:04:04 np0005465604 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 04:04:04 np0005465604 kernel: veth0: entered allmulticast mode
Oct  2 04:04:04 np0005465604 kernel: veth0: entered promiscuous mode
Oct  2 04:04:04 np0005465604 kernel: podman0: port 1(veth0) entered blocking state
Oct  2 04:04:04 np0005465604 kernel: podman0: port 1(veth0) entered forwarding state
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.1470] device (veth0): carrier: link connected
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.1477] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.1487] device (podman0): carrier: link connected
Oct  2 04:04:04 np0005465604 systemd-udevd[224625]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:04:04 np0005465604 systemd-udevd[224629]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.1948] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.1964] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.1982] device (podman0): Activation: starting connection 'podman0' (ba62c920-3009-4180-a3b9-5624496158be)
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.1989] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.1994] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.1999] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.2005] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 04:04:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:04 np0005465604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 04:04:04 np0005465604 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.2398] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.2405] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.2419] device (podman0): Activation: successful, device activated.
Oct  2 04:04:04 np0005465604 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  2 04:04:04 np0005465604 systemd[1]: Started libpod-conmon-77702fc8abee6f3255db1177699ecbc676a621e0af0cd414ac9c59c3174ffeae.scope.
Oct  2 04:04:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:04:04 np0005465604 podman[224597]: 2025-10-02 08:04:04.573067245 +0000 UTC m=+0.567100804 container init 77702fc8abee6f3255db1177699ecbc676a621e0af0cd414ac9c59c3174ffeae (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 04:04:04 np0005465604 podman[224597]: 2025-10-02 08:04:04.586305202 +0000 UTC m=+0.580338711 container start 77702fc8abee6f3255db1177699ecbc676a621e0af0cd414ac9c59c3174ffeae (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid_config, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:04:04 np0005465604 podman[224597]: 2025-10-02 08:04:04.590600974 +0000 UTC m=+0.584634493 container attach 77702fc8abee6f3255db1177699ecbc676a621e0af0cd414ac9c59c3174ffeae (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid_config, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:04:04 np0005465604 iscsid_config[224755]: iqn.1994-05.com.redhat:93fde4a80e4#015
Oct  2 04:04:04 np0005465604 systemd[1]: libpod-77702fc8abee6f3255db1177699ecbc676a621e0af0cd414ac9c59c3174ffeae.scope: Deactivated successfully.
Oct  2 04:04:04 np0005465604 conmon[224755]: conmon 77702fc8abee6f3255db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77702fc8abee6f3255db1177699ecbc676a621e0af0cd414ac9c59c3174ffeae.scope/container/memory.events
Oct  2 04:04:04 np0005465604 podman[224597]: 2025-10-02 08:04:04.598097887 +0000 UTC m=+0.592131396 container died 77702fc8abee6f3255db1177699ecbc676a621e0af0cd414ac9c59c3174ffeae (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 04:04:04 np0005465604 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 04:04:04 np0005465604 kernel: veth0 (unregistering): left allmulticast mode
Oct  2 04:04:04 np0005465604 kernel: veth0 (unregistering): left promiscuous mode
Oct  2 04:04:04 np0005465604 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 04:04:04 np0005465604 NetworkManager[45129]: <info>  [1759392244.6807] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:04:05 np0005465604 systemd[1]: run-netns-netns\x2dad920cdf\x2d8dd0\x2d9eab\x2d15b2\x2dab9071200c4e.mount: Deactivated successfully.
Oct  2 04:04:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77702fc8abee6f3255db1177699ecbc676a621e0af0cd414ac9c59c3174ffeae-userdata-shm.mount: Deactivated successfully.
Oct  2 04:04:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-11deaed61ec364ef7c2da1d08e6eafd4f5d7abaf39cdac1276144413412d787b-merged.mount: Deactivated successfully.
Oct  2 04:04:05 np0005465604 podman[224597]: 2025-10-02 08:04:05.092342971 +0000 UTC m=+1.086376490 container remove 77702fc8abee6f3255db1177699ecbc676a621e0af0cd414ac9c59c3174ffeae (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid_config, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:04:05 np0005465604 systemd[1]: libpod-conmon-77702fc8abee6f3255db1177699ecbc676a621e0af0cd414ac9c59c3174ffeae.scope: Deactivated successfully.
Oct  2 04:04:05 np0005465604 python3.9[224499]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85 /usr/sbin/iscsi-iname
Oct  2 04:04:05 np0005465604 python3.9[224499]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  2 04:04:06 np0005465604 python3.9[224997]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:04:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:07 np0005465604 python3.9[225120]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759392245.517832-119-121026328342114/.source.iscsi _original_basename=.55am0i6m follow=False checksum=f1aed39b6fdca19a2be47c312ad3580115a11835 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:07 np0005465604 python3.9[225272]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:08 np0005465604 python3.9[225422]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:04:09 np0005465604 podman[225472]: 2025-10-02 08:04:09.014637324 +0000 UTC m=+0.071444150 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:04:09 np0005465604 python3.9[225596]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:10 np0005465604 python3.9[225748]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:04:11 np0005465604 python3.9[225900]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:04:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:12 np0005465604 python3.9[225978]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:04:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:12 np0005465604 python3.9[226178]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:04:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 57ed1fef-2315-4578-89d7-0c94de4fe853 does not exist
Oct  2 04:04:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4fc4991d-e53f-4751-bfcd-f1b6d8979809 does not exist
Oct  2 04:04:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev de85709c-9829-4bcf-90ff-2e34e783bd52 does not exist
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:04:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:04:13 np0005465604 python3.9[226327]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:04:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:04:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:04:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:04:14 np0005465604 podman[226632]: 2025-10-02 08:04:14.492996654 +0000 UTC m=+0.070010390 container create 03a63f4575ad4d00a92c60b1aad3d3893a1cad66367be659d490b1d486fbd68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 04:04:14 np0005465604 systemd[1]: Started libpod-conmon-03a63f4575ad4d00a92c60b1aad3d3893a1cad66367be659d490b1d486fbd68e.scope.
Oct  2 04:04:14 np0005465604 podman[226632]: 2025-10-02 08:04:14.468416846 +0000 UTC m=+0.045430582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:04:14 np0005465604 python3.9[226619]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:04:14 np0005465604 podman[226632]: 2025-10-02 08:04:14.611471612 +0000 UTC m=+0.188485388 container init 03a63f4575ad4d00a92c60b1aad3d3893a1cad66367be659d490b1d486fbd68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_joliot, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:04:14 np0005465604 podman[226632]: 2025-10-02 08:04:14.618826021 +0000 UTC m=+0.195839727 container start 03a63f4575ad4d00a92c60b1aad3d3893a1cad66367be659d490b1d486fbd68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 04:04:14 np0005465604 podman[226632]: 2025-10-02 08:04:14.62306125 +0000 UTC m=+0.200074976 container attach 03a63f4575ad4d00a92c60b1aad3d3893a1cad66367be659d490b1d486fbd68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_joliot, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 04:04:14 np0005465604 gracious_joliot[226648]: 167 167
Oct  2 04:04:14 np0005465604 systemd[1]: libpod-03a63f4575ad4d00a92c60b1aad3d3893a1cad66367be659d490b1d486fbd68e.scope: Deactivated successfully.
Oct  2 04:04:14 np0005465604 podman[226632]: 2025-10-02 08:04:14.628987629 +0000 UTC m=+0.206001335 container died 03a63f4575ad4d00a92c60b1aad3d3893a1cad66367be659d490b1d486fbd68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_joliot, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct  2 04:04:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-479055c662ea07c12f2c78664c0cc46f1405f079ba7858c0d593856822b1081c-merged.mount: Deactivated successfully.
Oct  2 04:04:14 np0005465604 podman[226632]: 2025-10-02 08:04:14.676936922 +0000 UTC m=+0.253950638 container remove 03a63f4575ad4d00a92c60b1aad3d3893a1cad66367be659d490b1d486fbd68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_joliot, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:04:14 np0005465604 systemd[1]: libpod-conmon-03a63f4575ad4d00a92c60b1aad3d3893a1cad66367be659d490b1d486fbd68e.scope: Deactivated successfully.
Oct  2 04:04:14 np0005465604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 04:04:14 np0005465604 podman[226716]: 2025-10-02 08:04:14.912873306 +0000 UTC m=+0.062565249 container create 60caab144536b5659c0f78df70f1de96e70534d17bb584cf36481f5a9fe667da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct  2 04:04:14 np0005465604 systemd[1]: Started libpod-conmon-60caab144536b5659c0f78df70f1de96e70534d17bb584cf36481f5a9fe667da.scope.
Oct  2 04:04:14 np0005465604 podman[226716]: 2025-10-02 08:04:14.884915871 +0000 UTC m=+0.034607824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:04:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:04:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2612723fd9f32ca4c28bba27e6b7c7bc9bbc0da101f1810d44cc83788d4645/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2612723fd9f32ca4c28bba27e6b7c7bc9bbc0da101f1810d44cc83788d4645/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2612723fd9f32ca4c28bba27e6b7c7bc9bbc0da101f1810d44cc83788d4645/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2612723fd9f32ca4c28bba27e6b7c7bc9bbc0da101f1810d44cc83788d4645/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2612723fd9f32ca4c28bba27e6b7c7bc9bbc0da101f1810d44cc83788d4645/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:15 np0005465604 podman[226716]: 2025-10-02 08:04:15.020235676 +0000 UTC m=+0.169927669 container init 60caab144536b5659c0f78df70f1de96e70534d17bb584cf36481f5a9fe667da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 04:04:15 np0005465604 podman[226716]: 2025-10-02 08:04:15.033656708 +0000 UTC m=+0.183348651 container start 60caab144536b5659c0f78df70f1de96e70534d17bb584cf36481f5a9fe667da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:04:15 np0005465604 podman[226716]: 2025-10-02 08:04:15.038338591 +0000 UTC m=+0.188030514 container attach 60caab144536b5659c0f78df70f1de96e70534d17bb584cf36481f5a9fe667da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:04:15 np0005465604 python3.9[226843]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:04:16 np0005465604 python3.9[226933]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:16 np0005465604 heuristic_mirzakhani[226763]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:04:16 np0005465604 heuristic_mirzakhani[226763]: --> relative data size: 1.0
Oct  2 04:04:16 np0005465604 heuristic_mirzakhani[226763]: --> All data devices are unavailable
Oct  2 04:04:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:16 np0005465604 systemd[1]: libpod-60caab144536b5659c0f78df70f1de96e70534d17bb584cf36481f5a9fe667da.scope: Deactivated successfully.
Oct  2 04:04:16 np0005465604 systemd[1]: libpod-60caab144536b5659c0f78df70f1de96e70534d17bb584cf36481f5a9fe667da.scope: Consumed 1.134s CPU time.
Oct  2 04:04:16 np0005465604 podman[226716]: 2025-10-02 08:04:16.231430173 +0000 UTC m=+1.381122136 container died 60caab144536b5659c0f78df70f1de96e70534d17bb584cf36481f5a9fe667da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:04:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4b2612723fd9f32ca4c28bba27e6b7c7bc9bbc0da101f1810d44cc83788d4645-merged.mount: Deactivated successfully.
Oct  2 04:04:16 np0005465604 podman[226716]: 2025-10-02 08:04:16.326159825 +0000 UTC m=+1.475851768 container remove 60caab144536b5659c0f78df70f1de96e70534d17bb584cf36481f5a9fe667da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mirzakhani, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:04:16 np0005465604 systemd[1]: libpod-conmon-60caab144536b5659c0f78df70f1de96e70534d17bb584cf36481f5a9fe667da.scope: Deactivated successfully.
Oct  2 04:04:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:17 np0005465604 python3.9[227212]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:04:17 np0005465604 podman[227253]: 2025-10-02 08:04:17.181744297 +0000 UTC m=+0.069172937 container create 6aaa6a65105cf366274da81ab3c01e6b3bee6511f4d141e7628ffb2b150caafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 04:04:17 np0005465604 systemd[1]: Started libpod-conmon-6aaa6a65105cf366274da81ab3c01e6b3bee6511f4d141e7628ffb2b150caafb.scope.
Oct  2 04:04:17 np0005465604 podman[227253]: 2025-10-02 08:04:17.152248649 +0000 UTC m=+0.039677339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:04:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:04:17 np0005465604 podman[227253]: 2025-10-02 08:04:17.299553265 +0000 UTC m=+0.186981915 container init 6aaa6a65105cf366274da81ab3c01e6b3bee6511f4d141e7628ffb2b150caafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:04:17 np0005465604 podman[227253]: 2025-10-02 08:04:17.314718426 +0000 UTC m=+0.202147066 container start 6aaa6a65105cf366274da81ab3c01e6b3bee6511f4d141e7628ffb2b150caafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:04:17 np0005465604 podman[227253]: 2025-10-02 08:04:17.31908417 +0000 UTC m=+0.206512840 container attach 6aaa6a65105cf366274da81ab3c01e6b3bee6511f4d141e7628ffb2b150caafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:04:17 np0005465604 dazzling_cohen[227293]: 167 167
Oct  2 04:04:17 np0005465604 systemd[1]: libpod-6aaa6a65105cf366274da81ab3c01e6b3bee6511f4d141e7628ffb2b150caafb.scope: Deactivated successfully.
Oct  2 04:04:17 np0005465604 conmon[227293]: conmon 6aaa6a65105cf366274d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6aaa6a65105cf366274da81ab3c01e6b3bee6511f4d141e7628ffb2b150caafb.scope/container/memory.events
Oct  2 04:04:17 np0005465604 podman[227253]: 2025-10-02 08:04:17.325002108 +0000 UTC m=+0.212430738 container died 6aaa6a65105cf366274da81ab3c01e6b3bee6511f4d141e7628ffb2b150caafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 04:04:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-64ad247b5a85b578fd18c77f7c2c3579d6894a665c2cbe6bca82b5371c7817d6-merged.mount: Deactivated successfully.
Oct  2 04:04:17 np0005465604 podman[227253]: 2025-10-02 08:04:17.377918361 +0000 UTC m=+0.265346991 container remove 6aaa6a65105cf366274da81ab3c01e6b3bee6511f4d141e7628ffb2b150caafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 04:04:17 np0005465604 systemd[1]: libpod-conmon-6aaa6a65105cf366274da81ab3c01e6b3bee6511f4d141e7628ffb2b150caafb.scope: Deactivated successfully.
Oct  2 04:04:17 np0005465604 podman[227365]: 2025-10-02 08:04:17.626402322 +0000 UTC m=+0.049490477 container create 75e9dbf5cc076b1c9338ca9e690e7aab5454385dc65b99c2329c369e4c4fa60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:04:17 np0005465604 systemd[1]: Started libpod-conmon-75e9dbf5cc076b1c9338ca9e690e7aab5454385dc65b99c2329c369e4c4fa60f.scope.
Oct  2 04:04:17 np0005465604 podman[227365]: 2025-10-02 08:04:17.6080309 +0000 UTC m=+0.031119055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:04:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:04:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc50e2617dfad5451d506afbb9d493cf0ae5843801a6ca08e1d11a7cfc0ee9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc50e2617dfad5451d506afbb9d493cf0ae5843801a6ca08e1d11a7cfc0ee9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc50e2617dfad5451d506afbb9d493cf0ae5843801a6ca08e1d11a7cfc0ee9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc50e2617dfad5451d506afbb9d493cf0ae5843801a6ca08e1d11a7cfc0ee9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:17 np0005465604 podman[227365]: 2025-10-02 08:04:17.74598612 +0000 UTC m=+0.169074345 container init 75e9dbf5cc076b1c9338ca9e690e7aab5454385dc65b99c2329c369e4c4fa60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:04:17 np0005465604 podman[227365]: 2025-10-02 08:04:17.76077365 +0000 UTC m=+0.183861835 container start 75e9dbf5cc076b1c9338ca9e690e7aab5454385dc65b99c2329c369e4c4fa60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:04:17 np0005465604 podman[227365]: 2025-10-02 08:04:17.764949569 +0000 UTC m=+0.188037744 container attach 75e9dbf5cc076b1c9338ca9e690e7aab5454385dc65b99c2329c369e4c4fa60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 04:04:17 np0005465604 python3.9[227381]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:18 np0005465604 happy_greider[227387]: {
Oct  2 04:04:18 np0005465604 happy_greider[227387]:    "0": [
Oct  2 04:04:18 np0005465604 happy_greider[227387]:        {
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "devices": [
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "/dev/loop3"
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            ],
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_name": "ceph_lv0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_size": "21470642176",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "name": "ceph_lv0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "tags": {
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.cluster_name": "ceph",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.crush_device_class": "",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.encrypted": "0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.osd_id": "0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.type": "block",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.vdo": "0"
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            },
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "type": "block",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "vg_name": "ceph_vg0"
Oct  2 04:04:18 np0005465604 happy_greider[227387]:        }
Oct  2 04:04:18 np0005465604 happy_greider[227387]:    ],
Oct  2 04:04:18 np0005465604 happy_greider[227387]:    "1": [
Oct  2 04:04:18 np0005465604 happy_greider[227387]:        {
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "devices": [
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "/dev/loop4"
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            ],
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_name": "ceph_lv1",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_size": "21470642176",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "name": "ceph_lv1",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "tags": {
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.cluster_name": "ceph",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.crush_device_class": "",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.encrypted": "0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.osd_id": "1",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.type": "block",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.vdo": "0"
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            },
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "type": "block",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "vg_name": "ceph_vg1"
Oct  2 04:04:18 np0005465604 happy_greider[227387]:        }
Oct  2 04:04:18 np0005465604 happy_greider[227387]:    ],
Oct  2 04:04:18 np0005465604 happy_greider[227387]:    "2": [
Oct  2 04:04:18 np0005465604 happy_greider[227387]:        {
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "devices": [
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "/dev/loop5"
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            ],
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_name": "ceph_lv2",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_size": "21470642176",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "name": "ceph_lv2",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "tags": {
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.cluster_name": "ceph",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.crush_device_class": "",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.encrypted": "0",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.osd_id": "2",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.type": "block",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:                "ceph.vdo": "0"
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            },
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "type": "block",
Oct  2 04:04:18 np0005465604 happy_greider[227387]:            "vg_name": "ceph_vg2"
Oct  2 04:04:18 np0005465604 happy_greider[227387]:        }
Oct  2 04:04:18 np0005465604 happy_greider[227387]:    ]
Oct  2 04:04:18 np0005465604 happy_greider[227387]: }
Oct  2 04:04:18 np0005465604 systemd[1]: libpod-75e9dbf5cc076b1c9338ca9e690e7aab5454385dc65b99c2329c369e4c4fa60f.scope: Deactivated successfully.
Oct  2 04:04:18 np0005465604 podman[227365]: 2025-10-02 08:04:18.607076538 +0000 UTC m=+1.030164683 container died 75e9dbf5cc076b1c9338ca9e690e7aab5454385dc65b99c2329c369e4c4fa60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  2 04:04:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1cc50e2617dfad5451d506afbb9d493cf0ae5843801a6ca08e1d11a7cfc0ee9e-merged.mount: Deactivated successfully.
Oct  2 04:04:18 np0005465604 podman[227365]: 2025-10-02 08:04:18.665714604 +0000 UTC m=+1.088802749 container remove 75e9dbf5cc076b1c9338ca9e690e7aab5454385dc65b99c2329c369e4c4fa60f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_greider, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 04:04:18 np0005465604 systemd[1]: libpod-conmon-75e9dbf5cc076b1c9338ca9e690e7aab5454385dc65b99c2329c369e4c4fa60f.scope: Deactivated successfully.
Oct  2 04:04:18 np0005465604 python3.9[227543]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:04:18 np0005465604 systemd[1]: Reloading.
Oct  2 04:04:18 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:04:18 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:04:19 np0005465604 podman[227886]: 2025-10-02 08:04:19.87441589 +0000 UTC m=+0.061685023 container create 048a394d7162bfd8a7bf5063279e37bce6d8b0665ed9c4f6dee5f115727e3a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 04:04:19 np0005465604 systemd[1]: Started libpod-conmon-048a394d7162bfd8a7bf5063279e37bce6d8b0665ed9c4f6dee5f115727e3a3f.scope.
Oct  2 04:04:19 np0005465604 podman[227886]: 2025-10-02 08:04:19.845958782 +0000 UTC m=+0.033227985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:04:19 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:04:19 np0005465604 podman[227886]: 2025-10-02 08:04:19.982771439 +0000 UTC m=+0.170040572 container init 048a394d7162bfd8a7bf5063279e37bce6d8b0665ed9c4f6dee5f115727e3a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:04:19 np0005465604 python3.9[227885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:04:20 np0005465604 podman[227886]: 2025-10-02 08:04:19.999820603 +0000 UTC m=+0.187089766 container start 048a394d7162bfd8a7bf5063279e37bce6d8b0665ed9c4f6dee5f115727e3a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:04:20 np0005465604 podman[227886]: 2025-10-02 08:04:20.004912789 +0000 UTC m=+0.192181932 container attach 048a394d7162bfd8a7bf5063279e37bce6d8b0665ed9c4f6dee5f115727e3a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:04:20 np0005465604 hopeful_grothendieck[227902]: 167 167
Oct  2 04:04:20 np0005465604 systemd[1]: libpod-048a394d7162bfd8a7bf5063279e37bce6d8b0665ed9c4f6dee5f115727e3a3f.scope: Deactivated successfully.
Oct  2 04:04:20 np0005465604 podman[227886]: 2025-10-02 08:04:20.010711483 +0000 UTC m=+0.197980606 container died 048a394d7162bfd8a7bf5063279e37bce6d8b0665ed9c4f6dee5f115727e3a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 04:04:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f1fef398818fa022bae80afbd61e21150e2dcdd6d37b305e816d84b0486820b8-merged.mount: Deactivated successfully.
Oct  2 04:04:20 np0005465604 podman[227886]: 2025-10-02 08:04:20.053129709 +0000 UTC m=+0.240398852 container remove 048a394d7162bfd8a7bf5063279e37bce6d8b0665ed9c4f6dee5f115727e3a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_grothendieck, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:04:20 np0005465604 systemd[1]: libpod-conmon-048a394d7162bfd8a7bf5063279e37bce6d8b0665ed9c4f6dee5f115727e3a3f.scope: Deactivated successfully.
Oct  2 04:04:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:20 np0005465604 podman[227959]: 2025-10-02 08:04:20.259281896 +0000 UTC m=+0.042563350 container create 37c93a918554ca80f0e71b085cd245b3010fc989170a8ea3110462457a98bbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:04:20 np0005465604 systemd[1]: Started libpod-conmon-37c93a918554ca80f0e71b085cd245b3010fc989170a8ea3110462457a98bbb9.scope.
Oct  2 04:04:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:04:20 np0005465604 podman[227959]: 2025-10-02 08:04:20.24081359 +0000 UTC m=+0.024095034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:04:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9526f64c8ff1f631f17f4c3af566eaba2c201c8296c8c115e1a1b3a16422b4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9526f64c8ff1f631f17f4c3af566eaba2c201c8296c8c115e1a1b3a16422b4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9526f64c8ff1f631f17f4c3af566eaba2c201c8296c8c115e1a1b3a16422b4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9526f64c8ff1f631f17f4c3af566eaba2c201c8296c8c115e1a1b3a16422b4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:20 np0005465604 podman[227959]: 2025-10-02 08:04:20.370036993 +0000 UTC m=+0.153318467 container init 37c93a918554ca80f0e71b085cd245b3010fc989170a8ea3110462457a98bbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:04:20 np0005465604 podman[227959]: 2025-10-02 08:04:20.384736958 +0000 UTC m=+0.168018422 container start 37c93a918554ca80f0e71b085cd245b3010fc989170a8ea3110462457a98bbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 04:04:20 np0005465604 podman[227959]: 2025-10-02 08:04:20.402060642 +0000 UTC m=+0.185342116 container attach 37c93a918554ca80f0e71b085cd245b3010fc989170a8ea3110462457a98bbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 04:04:20 np0005465604 python3.9[228021]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:21 np0005465604 python3.9[228180]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:04:21 np0005465604 boring_ellis[228013]: {
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "osd_id": 2,
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "type": "bluestore"
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:    },
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "osd_id": 1,
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "type": "bluestore"
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:    },
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "osd_id": 0,
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:        "type": "bluestore"
Oct  2 04:04:21 np0005465604 boring_ellis[228013]:    }
Oct  2 04:04:21 np0005465604 boring_ellis[228013]: }
Oct  2 04:04:21 np0005465604 systemd[1]: libpod-37c93a918554ca80f0e71b085cd245b3010fc989170a8ea3110462457a98bbb9.scope: Deactivated successfully.
Oct  2 04:04:21 np0005465604 podman[227959]: 2025-10-02 08:04:21.496842605 +0000 UTC m=+1.280124039 container died 37c93a918554ca80f0e71b085cd245b3010fc989170a8ea3110462457a98bbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:04:21 np0005465604 systemd[1]: libpod-37c93a918554ca80f0e71b085cd245b3010fc989170a8ea3110462457a98bbb9.scope: Consumed 1.122s CPU time.
Oct  2 04:04:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e9526f64c8ff1f631f17f4c3af566eaba2c201c8296c8c115e1a1b3a16422b4c-merged.mount: Deactivated successfully.
Oct  2 04:04:21 np0005465604 podman[227959]: 2025-10-02 08:04:21.566231828 +0000 UTC m=+1.349513262 container remove 37c93a918554ca80f0e71b085cd245b3010fc989170a8ea3110462457a98bbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 04:04:21 np0005465604 systemd[1]: libpod-conmon-37c93a918554ca80f0e71b085cd245b3010fc989170a8ea3110462457a98bbb9.scope: Deactivated successfully.
Oct  2 04:04:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:04:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:04:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:04:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:04:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 20c71622-b471-431a-9384-4b59c05b8814 does not exist
Oct  2 04:04:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev dd8909c5-d565-4dfe-95eb-ae33c5d89df9 does not exist
Oct  2 04:04:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:21 np0005465604 python3.9[228318]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:04:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:04:22 np0005465604 python3.9[228495]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:04:22 np0005465604 systemd[1]: Reloading.
Oct  2 04:04:23 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:04:23 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:04:23 np0005465604 systemd[1]: Starting Create netns directory...
Oct  2 04:04:23 np0005465604 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 04:04:23 np0005465604 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 04:04:23 np0005465604 systemd[1]: Finished Create netns directory.
Oct  2 04:04:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:24 np0005465604 python3.9[228688]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:04:25 np0005465604 python3.9[228840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:04:25 np0005465604 python3.9[228963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759392264.6650527-273-129371860663971/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:04:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:26 np0005465604 python3.9[229115]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:04:27 np0005465604 python3.9[229267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:04:27
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'images', 'volumes', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:04:27 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:04:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:28 np0005465604 python3.9[229390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759392267.236371-298-114930666049583/.source.json _original_basename=.5eaq0vep follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:29 np0005465604 python3.9[229542]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:32 np0005465604 python3.9[229969]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  2 04:04:33 np0005465604 podman[230121]: 2025-10-02 08:04:33.329663949 +0000 UTC m=+0.113371226 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 04:04:33 np0005465604 python3.9[230122]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 04:04:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:34 np0005465604 python3.9[230299]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 04:04:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:04:34.789 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:04:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:04:34.790 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:04:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:04:34.790 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:04:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:36 np0005465604 python3[230478]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.723132) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392276723189, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1771, "num_deletes": 250, "total_data_size": 2980933, "memory_usage": 3013480, "flush_reason": "Manual Compaction"}
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392276736696, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1681489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11751, "largest_seqno": 13521, "table_properties": {"data_size": 1675666, "index_size": 2898, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14473, "raw_average_key_size": 20, "raw_value_size": 1662901, "raw_average_value_size": 2309, "num_data_blocks": 134, "num_entries": 720, "num_filter_entries": 720, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759392076, "oldest_key_time": 1759392076, "file_creation_time": 1759392276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 13656 microseconds, and 8865 cpu microseconds.
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.736790) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1681489 bytes OK
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.736818) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.738733) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.738777) EVENT_LOG_v1 {"time_micros": 1759392276738769, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.738800) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2973422, prev total WAL file size 2973422, number of live WAL files 2.
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.740244) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1642KB)], [29(7761KB)]
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392276740351, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9629360, "oldest_snapshot_seqno": -1}
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3999 keys, 7578539 bytes, temperature: kUnknown
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392276784734, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7578539, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7549924, "index_size": 17497, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95106, "raw_average_key_size": 23, "raw_value_size": 7475956, "raw_average_value_size": 1869, "num_data_blocks": 761, "num_entries": 3999, "num_filter_entries": 3999, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759392276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.785125) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7578539 bytes
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.786482) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.1 rd, 170.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.6 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 4416, records dropped: 417 output_compression: NoCompression
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.786504) EVENT_LOG_v1 {"time_micros": 1759392276786493, "job": 12, "event": "compaction_finished", "compaction_time_micros": 44559, "compaction_time_cpu_micros": 24885, "output_level": 6, "num_output_files": 1, "total_output_size": 7578539, "num_input_records": 4416, "num_output_records": 3999, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392276786989, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392276788540, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.740108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.788627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.788636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.788640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.788644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:04:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:04:36.788647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:04:36 np0005465604 podman[230515]: 2025-10-02 08:04:36.911297567 +0000 UTC m=+0.068328153 container create f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, io.buildah.version=1.41.3, container_name=iscsid, config_id=iscsid, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 04:04:36 np0005465604 podman[230515]: 2025-10-02 08:04:36.873256861 +0000 UTC m=+0.030287467 image pull 81d94872551c3ae3c30801602bbb5f0c44872f15dcde472a0ba869fe2f28966e quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85
Oct  2 04:04:36 np0005465604 python3[230478]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85
Oct  2 04:04:38 np0005465604 python3.9[230704]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:04:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:39 np0005465604 python3.9[230858]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:39 np0005465604 podman[230906]: 2025-10-02 08:04:39.382785035 +0000 UTC m=+0.079251555 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct  2 04:04:39 np0005465604 python3.9[230953]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:04:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:40 np0005465604 python3.9[231104]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759392279.6762066-386-154024808953966/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:41 np0005465604 python3.9[231180]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 04:04:41 np0005465604 systemd[1]: Reloading.
Oct  2 04:04:41 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:04:41 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:04:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:42 np0005465604 python3.9[231291]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:04:43 np0005465604 systemd[1]: Reloading.
Oct  2 04:04:43 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:04:43 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:04:43 np0005465604 systemd[1]: Starting iscsid container...
Oct  2 04:04:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:04:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3afbf48f40390a1a7ecb6fffbaa8f34665c3e6e6f6a168099940d8b76202b8fa/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3afbf48f40390a1a7ecb6fffbaa8f34665c3e6e6f6a168099940d8b76202b8fa/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3afbf48f40390a1a7ecb6fffbaa8f34665c3e6e6f6a168099940d8b76202b8fa/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 04:04:44 np0005465604 systemd[1]: Started /usr/bin/podman healthcheck run f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9.
Oct  2 04:04:44 np0005465604 podman[231331]: 2025-10-02 08:04:44.056556559 +0000 UTC m=+0.160763478 container init f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 04:04:44 np0005465604 iscsid[231346]: + sudo -E kolla_set_configs
Oct  2 04:04:44 np0005465604 podman[231331]: 2025-10-02 08:04:44.103295572 +0000 UTC m=+0.207502501 container start f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 04:04:44 np0005465604 podman[231331]: iscsid
Oct  2 04:04:44 np0005465604 systemd[1]: Started iscsid container.
Oct  2 04:04:44 np0005465604 systemd[1]: Created slice User Slice of UID 0.
Oct  2 04:04:44 np0005465604 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  2 04:04:44 np0005465604 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  2 04:04:44 np0005465604 systemd[1]: Starting User Manager for UID 0...
Oct  2 04:04:44 np0005465604 podman[231353]: 2025-10-02 08:04:44.231821985 +0000 UTC m=+0.109774789 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:04:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:44 np0005465604 systemd[1]: f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9-70e50fc67c793523.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 04:04:44 np0005465604 systemd[1]: f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9-70e50fc67c793523.service: Failed with result 'exit-code'.
Oct  2 04:04:44 np0005465604 systemd[231369]: Queued start job for default target Main User Target.
Oct  2 04:04:44 np0005465604 systemd[231369]: Created slice User Application Slice.
Oct  2 04:04:44 np0005465604 systemd[231369]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  2 04:04:44 np0005465604 systemd[231369]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 04:04:44 np0005465604 systemd[231369]: Reached target Paths.
Oct  2 04:04:44 np0005465604 systemd[231369]: Reached target Timers.
Oct  2 04:04:44 np0005465604 systemd[231369]: Starting D-Bus User Message Bus Socket...
Oct  2 04:04:44 np0005465604 systemd[231369]: Starting Create User's Volatile Files and Directories...
Oct  2 04:04:44 np0005465604 systemd[231369]: Listening on D-Bus User Message Bus Socket.
Oct  2 04:04:44 np0005465604 systemd[231369]: Reached target Sockets.
Oct  2 04:04:44 np0005465604 systemd[231369]: Finished Create User's Volatile Files and Directories.
Oct  2 04:04:44 np0005465604 systemd[231369]: Reached target Basic System.
Oct  2 04:04:44 np0005465604 systemd[231369]: Reached target Main User Target.
Oct  2 04:04:44 np0005465604 systemd[231369]: Startup finished in 182ms.
Oct  2 04:04:44 np0005465604 systemd[1]: Started User Manager for UID 0.
Oct  2 04:04:44 np0005465604 systemd[1]: Started Session c3 of User root.
Oct  2 04:04:44 np0005465604 iscsid[231346]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 04:04:44 np0005465604 iscsid[231346]: INFO:__main__:Validating config file
Oct  2 04:04:44 np0005465604 iscsid[231346]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 04:04:44 np0005465604 iscsid[231346]: INFO:__main__:Writing out command to execute
Oct  2 04:04:44 np0005465604 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  2 04:04:44 np0005465604 iscsid[231346]: ++ cat /run_command
Oct  2 04:04:44 np0005465604 iscsid[231346]: + CMD='/usr/sbin/iscsid -f'
Oct  2 04:04:44 np0005465604 iscsid[231346]: + ARGS=
Oct  2 04:04:44 np0005465604 iscsid[231346]: + sudo kolla_copy_cacerts
Oct  2 04:04:44 np0005465604 systemd[1]: Started Session c4 of User root.
Oct  2 04:04:44 np0005465604 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  2 04:04:44 np0005465604 iscsid[231346]: + [[ ! -n '' ]]
Oct  2 04:04:44 np0005465604 iscsid[231346]: + . kolla_extend_start
Oct  2 04:04:44 np0005465604 iscsid[231346]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  2 04:04:44 np0005465604 iscsid[231346]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  2 04:04:44 np0005465604 iscsid[231346]: Running command: '/usr/sbin/iscsid -f'
Oct  2 04:04:44 np0005465604 iscsid[231346]: + umask 0022
Oct  2 04:04:44 np0005465604 iscsid[231346]: + exec /usr/sbin/iscsid -f
Oct  2 04:04:44 np0005465604 kernel: Loading iSCSI transport class v2.0-870.
Oct  2 04:04:44 np0005465604 python3.9[231552]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:04:45 np0005465604 python3.9[231704]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:47 np0005465604 python3.9[231856]: ansible-ansible.builtin.service_facts Invoked
Oct  2 04:04:47 np0005465604 network[231873]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 04:04:47 np0005465604 network[231874]: 'network-scripts' will be removed from distribution in near future.
Oct  2 04:04:47 np0005465604 network[231875]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 04:04:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:53 np0005465604 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  2 04:04:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:54 np0005465604 systemd[1]: Stopping User Manager for UID 0...
Oct  2 04:04:54 np0005465604 systemd[231369]: Activating special unit Exit the Session...
Oct  2 04:04:54 np0005465604 systemd[231369]: Stopped target Main User Target.
Oct  2 04:04:54 np0005465604 systemd[231369]: Stopped target Basic System.
Oct  2 04:04:54 np0005465604 systemd[231369]: Stopped target Paths.
Oct  2 04:04:54 np0005465604 systemd[231369]: Stopped target Sockets.
Oct  2 04:04:54 np0005465604 systemd[231369]: Stopped target Timers.
Oct  2 04:04:54 np0005465604 systemd[231369]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 04:04:54 np0005465604 systemd[231369]: Closed D-Bus User Message Bus Socket.
Oct  2 04:04:54 np0005465604 systemd[231369]: Stopped Create User's Volatile Files and Directories.
Oct  2 04:04:54 np0005465604 systemd[231369]: Removed slice User Application Slice.
Oct  2 04:04:54 np0005465604 systemd[231369]: Reached target Shutdown.
Oct  2 04:04:54 np0005465604 systemd[231369]: Finished Exit the Session.
Oct  2 04:04:54 np0005465604 systemd[231369]: Reached target Exit the Session.
Oct  2 04:04:54 np0005465604 systemd[1]: user@0.service: Deactivated successfully.
Oct  2 04:04:54 np0005465604 systemd[1]: Stopped User Manager for UID 0.
Oct  2 04:04:54 np0005465604 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  2 04:04:54 np0005465604 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  2 04:04:54 np0005465604 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  2 04:04:54 np0005465604 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  2 04:04:54 np0005465604 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  2 04:04:54 np0005465604 systemd[1]: Removed slice User Slice of UID 0.
Oct  2 04:04:55 np0005465604 python3.9[232155]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 04:04:56 np0005465604 python3.9[232307]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  2 04:04:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:04:57 np0005465604 python3.9[232463]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:04:57 np0005465604 python3.9[232586]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759392296.5123289-460-47336077739366/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:04:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:04:58 np0005465604 python3.9[232738]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:04:59 np0005465604 python3.9[232890]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 04:04:59 np0005465604 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  2 04:04:59 np0005465604 systemd[1]: Stopped Load Kernel Modules.
Oct  2 04:04:59 np0005465604 systemd[1]: Stopping Load Kernel Modules...
Oct  2 04:04:59 np0005465604 systemd[1]: Starting Load Kernel Modules...
Oct  2 04:04:59 np0005465604 systemd[1]: Finished Load Kernel Modules.
Oct  2 04:05:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:00 np0005465604 python3.9[233046]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:05:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:01 np0005465604 python3.9[233198]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:05:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:02 np0005465604 python3.9[233350]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:05:03 np0005465604 python3.9[233502]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:05:04 np0005465604 podman[233597]: 2025-10-02 08:05:04.088053025 +0000 UTC m=+0.198752232 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:05:04 np0005465604 python3.9[233643]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759392302.924966-518-135898013369765/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:05 np0005465604 python3.9[233801]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:05:06 np0005465604 python3.9[233954]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:07 np0005465604 python3.9[234106]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:07 np0005465604 systemd[1]: virtqemud.service: Deactivated successfully.
Oct  2 04:05:07 np0005465604 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  2 04:05:07 np0005465604 python3.9[234260]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:08 np0005465604 python3.9[234412]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:09 np0005465604 podman[234564]: 2025-10-02 08:05:09.585312615 +0000 UTC m=+0.077941808 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 04:05:09 np0005465604 python3.9[234565]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:10 np0005465604 python3.9[234735]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:11 np0005465604 python3.9[234887]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:12 np0005465604 python3.9[235039]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:05:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:12 np0005465604 python3.9[235193]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:13 np0005465604 python3.9[235345]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:05:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:14 np0005465604 podman[235469]: 2025-10-02 08:05:14.470076252 +0000 UTC m=+0.100726783 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:05:14 np0005465604 python3.9[235517]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:05:15 np0005465604 python3.9[235595]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:05:15 np0005465604 python3.9[235747]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:05:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:16 np0005465604 python3.9[235825]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:05:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:17 np0005465604 python3.9[235977]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:18 np0005465604 python3.9[236129]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:05:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:18 np0005465604 python3.9[236207]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:19 np0005465604 python3.9[236359]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:05:20 np0005465604 python3.9[236437]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:21 np0005465604 python3.9[236589]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:05:21 np0005465604 systemd[1]: Reloading.
Oct  2 04:05:21 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:05:21 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:05:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:22 np0005465604 python3.9[236829]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:05:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:22 np0005465604 python3.9[236976]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:05:22 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 210b58ac-8ad9-4db6-b6c1-5833b50973c6 does not exist
Oct  2 04:05:22 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d5e7a7ef-0caa-4109-83d2-683d0aee4438 does not exist
Oct  2 04:05:22 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d4aef374-2f31-4443-bce9-a4167d32c7cd does not exist
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:05:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:05:23 np0005465604 podman[237279]: 2025-10-02 08:05:23.290401607 +0000 UTC m=+0.057066500 container create 841fa705e60b56d71534f0bdb5a246a2f4a7aa7637302b94433ab7e1b6dfa2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:05:23 np0005465604 systemd[1]: Started libpod-conmon-841fa705e60b56d71534f0bdb5a246a2f4a7aa7637302b94433ab7e1b6dfa2ba.scope.
Oct  2 04:05:23 np0005465604 python3.9[237264]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:05:23 np0005465604 podman[237279]: 2025-10-02 08:05:23.267369785 +0000 UTC m=+0.034034768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:05:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:05:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:05:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:05:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:05:23 np0005465604 podman[237279]: 2025-10-02 08:05:23.391398795 +0000 UTC m=+0.158063718 container init 841fa705e60b56d71534f0bdb5a246a2f4a7aa7637302b94433ab7e1b6dfa2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bell, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 04:05:23 np0005465604 podman[237279]: 2025-10-02 08:05:23.402047541 +0000 UTC m=+0.168712464 container start 841fa705e60b56d71534f0bdb5a246a2f4a7aa7637302b94433ab7e1b6dfa2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 04:05:23 np0005465604 podman[237279]: 2025-10-02 08:05:23.406654327 +0000 UTC m=+0.173319250 container attach 841fa705e60b56d71534f0bdb5a246a2f4a7aa7637302b94433ab7e1b6dfa2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:05:23 np0005465604 recursing_bell[237296]: 167 167
Oct  2 04:05:23 np0005465604 systemd[1]: libpod-841fa705e60b56d71534f0bdb5a246a2f4a7aa7637302b94433ab7e1b6dfa2ba.scope: Deactivated successfully.
Oct  2 04:05:23 np0005465604 podman[237279]: 2025-10-02 08:05:23.409706537 +0000 UTC m=+0.176371440 container died 841fa705e60b56d71534f0bdb5a246a2f4a7aa7637302b94433ab7e1b6dfa2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 04:05:23 np0005465604 systemd[1]: var-lib-containers-storage-overlay-23115d7973a2de27ddac98ded053aaf0f30fc3d6bdc44f3ace015569ff7b27d7-merged.mount: Deactivated successfully.
Oct  2 04:05:23 np0005465604 podman[237279]: 2025-10-02 08:05:23.473473354 +0000 UTC m=+0.240138247 container remove 841fa705e60b56d71534f0bdb5a246a2f4a7aa7637302b94433ab7e1b6dfa2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bell, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 04:05:23 np0005465604 systemd[1]: libpod-conmon-841fa705e60b56d71534f0bdb5a246a2f4a7aa7637302b94433ab7e1b6dfa2ba.scope: Deactivated successfully.
Oct  2 04:05:23 np0005465604 podman[237371]: 2025-10-02 08:05:23.716903796 +0000 UTC m=+0.070063763 container create 49ed55e4d0c01f594f92e38a1deea340ac66b7ae4bb1e0e18cd5fcaf3fc86d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:05:23 np0005465604 systemd[1]: Started libpod-conmon-49ed55e4d0c01f594f92e38a1deea340ac66b7ae4bb1e0e18cd5fcaf3fc86d02.scope.
Oct  2 04:05:23 np0005465604 podman[237371]: 2025-10-02 08:05:23.692172215 +0000 UTC m=+0.045332252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:05:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:05:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19704176a30d199101768c7904364227bf00570e7882183a341ad862b32adbd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19704176a30d199101768c7904364227bf00570e7882183a341ad862b32adbd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19704176a30d199101768c7904364227bf00570e7882183a341ad862b32adbd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19704176a30d199101768c7904364227bf00570e7882183a341ad862b32adbd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19704176a30d199101768c7904364227bf00570e7882183a341ad862b32adbd8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:23 np0005465604 podman[237371]: 2025-10-02 08:05:23.821083589 +0000 UTC m=+0.174243616 container init 49ed55e4d0c01f594f92e38a1deea340ac66b7ae4bb1e0e18cd5fcaf3fc86d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pare, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 04:05:23 np0005465604 podman[237371]: 2025-10-02 08:05:23.829291892 +0000 UTC m=+0.182451849 container start 49ed55e4d0c01f594f92e38a1deea340ac66b7ae4bb1e0e18cd5fcaf3fc86d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 04:05:23 np0005465604 podman[237371]: 2025-10-02 08:05:23.832795956 +0000 UTC m=+0.185955953 container attach 49ed55e4d0c01f594f92e38a1deea340ac66b7ae4bb1e0e18cd5fcaf3fc86d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pare, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:05:23 np0005465604 python3.9[237414]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:24 np0005465604 python3.9[237581]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:05:24 np0005465604 systemd[1]: Reloading.
Oct  2 04:05:24 np0005465604 epic_pare[237417]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:05:24 np0005465604 epic_pare[237417]: --> relative data size: 1.0
Oct  2 04:05:24 np0005465604 epic_pare[237417]: --> All data devices are unavailable
Oct  2 04:05:25 np0005465604 podman[237371]: 2025-10-02 08:05:25.049190588 +0000 UTC m=+1.402350595 container died 49ed55e4d0c01f594f92e38a1deea340ac66b7ae4bb1e0e18cd5fcaf3fc86d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:05:25 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:05:25 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:05:25 np0005465604 systemd[1]: libpod-49ed55e4d0c01f594f92e38a1deea340ac66b7ae4bb1e0e18cd5fcaf3fc86d02.scope: Deactivated successfully.
Oct  2 04:05:25 np0005465604 systemd[1]: libpod-49ed55e4d0c01f594f92e38a1deea340ac66b7ae4bb1e0e18cd5fcaf3fc86d02.scope: Consumed 1.155s CPU time.
Oct  2 04:05:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-19704176a30d199101768c7904364227bf00570e7882183a341ad862b32adbd8-merged.mount: Deactivated successfully.
Oct  2 04:05:25 np0005465604 podman[237371]: 2025-10-02 08:05:25.356575373 +0000 UTC m=+1.709735340 container remove 49ed55e4d0c01f594f92e38a1deea340ac66b7ae4bb1e0e18cd5fcaf3fc86d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 04:05:25 np0005465604 systemd[1]: libpod-conmon-49ed55e4d0c01f594f92e38a1deea340ac66b7ae4bb1e0e18cd5fcaf3fc86d02.scope: Deactivated successfully.
Oct  2 04:05:25 np0005465604 systemd[1]: Starting Create netns directory...
Oct  2 04:05:25 np0005465604 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 04:05:25 np0005465604 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 04:05:25 np0005465604 systemd[1]: Finished Create netns directory.
Oct  2 04:05:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:26 np0005465604 podman[237913]: 2025-10-02 08:05:26.279056559 +0000 UTC m=+0.074204837 container create 896641e76b8b736846a8526bb30e9e7082662685c6a9bdd91f58d9ecf589d7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 04:05:26 np0005465604 systemd[1]: Started libpod-conmon-896641e76b8b736846a8526bb30e9e7082662685c6a9bdd91f58d9ecf589d7e8.scope.
Oct  2 04:05:26 np0005465604 podman[237913]: 2025-10-02 08:05:26.248402412 +0000 UTC m=+0.043550700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:05:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:05:26 np0005465604 podman[237913]: 2025-10-02 08:05:26.382050967 +0000 UTC m=+0.177199215 container init 896641e76b8b736846a8526bb30e9e7082662685c6a9bdd91f58d9ecf589d7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_merkle, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:05:26 np0005465604 podman[237913]: 2025-10-02 08:05:26.393493984 +0000 UTC m=+0.188642262 container start 896641e76b8b736846a8526bb30e9e7082662685c6a9bdd91f58d9ecf589d7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_merkle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:05:26 np0005465604 zen_merkle[237961]: 167 167
Oct  2 04:05:26 np0005465604 podman[237913]: 2025-10-02 08:05:26.402277795 +0000 UTC m=+0.197426113 container attach 896641e76b8b736846a8526bb30e9e7082662685c6a9bdd91f58d9ecf589d7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 04:05:26 np0005465604 systemd[1]: libpod-896641e76b8b736846a8526bb30e9e7082662685c6a9bdd91f58d9ecf589d7e8.scope: Deactivated successfully.
Oct  2 04:05:26 np0005465604 podman[237913]: 2025-10-02 08:05:26.403716677 +0000 UTC m=+0.198864965 container died 896641e76b8b736846a8526bb30e9e7082662685c6a9bdd91f58d9ecf589d7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_merkle, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:05:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay-545238eff14056bf7d4fbb410cd36ad32a5d655480821c1ecb9222ece0ecdcd2-merged.mount: Deactivated successfully.
Oct  2 04:05:26 np0005465604 podman[237913]: 2025-10-02 08:05:26.464228568 +0000 UTC m=+0.259376836 container remove 896641e76b8b736846a8526bb30e9e7082662685c6a9bdd91f58d9ecf589d7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_merkle, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:05:26 np0005465604 systemd[1]: libpod-conmon-896641e76b8b736846a8526bb30e9e7082662685c6a9bdd91f58d9ecf589d7e8.scope: Deactivated successfully.
Oct  2 04:05:26 np0005465604 python3.9[237963]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:05:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:26 np0005465604 podman[237992]: 2025-10-02 08:05:26.736897386 +0000 UTC m=+0.069979352 container create c543c05a4f95adf347e7294cd4f04f95edfbef3741061cb37aa15779de0bb727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 04:05:26 np0005465604 systemd[1]: Started libpod-conmon-c543c05a4f95adf347e7294cd4f04f95edfbef3741061cb37aa15779de0bb727.scope.
Oct  2 04:05:26 np0005465604 podman[237992]: 2025-10-02 08:05:26.712998639 +0000 UTC m=+0.046080685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:05:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:05:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e4e484118a72477eb17ec71b6bd14a27e16ab09f059734e005773155e69cbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e4e484118a72477eb17ec71b6bd14a27e16ab09f059734e005773155e69cbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e4e484118a72477eb17ec71b6bd14a27e16ab09f059734e005773155e69cbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e4e484118a72477eb17ec71b6bd14a27e16ab09f059734e005773155e69cbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:26 np0005465604 podman[237992]: 2025-10-02 08:05:26.84009595 +0000 UTC m=+0.173177976 container init c543c05a4f95adf347e7294cd4f04f95edfbef3741061cb37aa15779de0bb727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:05:26 np0005465604 podman[237992]: 2025-10-02 08:05:26.847595162 +0000 UTC m=+0.180677158 container start c543c05a4f95adf347e7294cd4f04f95edfbef3741061cb37aa15779de0bb727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:05:26 np0005465604 podman[237992]: 2025-10-02 08:05:26.852476246 +0000 UTC m=+0.185558282 container attach c543c05a4f95adf347e7294cd4f04f95edfbef3741061cb37aa15779de0bb727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 04:05:27 np0005465604 python3.9[238160]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]: {
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:    "0": [
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:        {
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "devices": [
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "/dev/loop3"
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            ],
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_name": "ceph_lv0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_size": "21470642176",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "name": "ceph_lv0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "tags": {
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.cluster_name": "ceph",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.crush_device_class": "",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.encrypted": "0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.osd_id": "0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.type": "block",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.vdo": "0"
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            },
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "type": "block",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "vg_name": "ceph_vg0"
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:        }
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:    ],
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:    "1": [
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:        {
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "devices": [
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "/dev/loop4"
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            ],
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_name": "ceph_lv1",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_size": "21470642176",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "name": "ceph_lv1",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "tags": {
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.cluster_name": "ceph",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.crush_device_class": "",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.encrypted": "0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.osd_id": "1",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.type": "block",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.vdo": "0"
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            },
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "type": "block",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "vg_name": "ceph_vg1"
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:        }
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:    ],
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:    "2": [
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:        {
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "devices": [
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "/dev/loop5"
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            ],
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_name": "ceph_lv2",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_size": "21470642176",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "name": "ceph_lv2",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "tags": {
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.cluster_name": "ceph",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.crush_device_class": "",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.encrypted": "0",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.osd_id": "2",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.type": "block",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:                "ceph.vdo": "0"
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            },
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "type": "block",
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:            "vg_name": "ceph_vg2"
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:        }
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]:    ]
Oct  2 04:05:27 np0005465604 reverent_sutherland[238029]: }
Oct  2 04:05:27 np0005465604 systemd[1]: libpod-c543c05a4f95adf347e7294cd4f04f95edfbef3741061cb37aa15779de0bb727.scope: Deactivated successfully.
Oct  2 04:05:27 np0005465604 podman[237992]: 2025-10-02 08:05:27.735181854 +0000 UTC m=+1.068263840 container died c543c05a4f95adf347e7294cd4f04f95edfbef3741061cb37aa15779de0bb727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 04:05:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d0e4e484118a72477eb17ec71b6bd14a27e16ab09f059734e005773155e69cbf-merged.mount: Deactivated successfully.
Oct  2 04:05:27 np0005465604 podman[237992]: 2025-10-02 08:05:27.79721959 +0000 UTC m=+1.130301546 container remove c543c05a4f95adf347e7294cd4f04f95edfbef3741061cb37aa15779de0bb727 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sutherland, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:05:27 np0005465604 systemd[1]: libpod-conmon-c543c05a4f95adf347e7294cd4f04f95edfbef3741061cb37aa15779de0bb727.scope: Deactivated successfully.
Oct  2 04:05:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:05:27
Oct  2 04:05:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:05:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:05:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'vms']
Oct  2 04:05:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:05:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:28 np0005465604 python3.9[238372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759392326.8570983-725-208640732237621/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:05:28 np0005465604 podman[238463]: 2025-10-02 08:05:28.653829836 +0000 UTC m=+0.061440898 container create f8fe97a83184f54b59807fc246b6e98b7dbfef8e1ce4dce438b20a5987533f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 04:05:28 np0005465604 systemd[1]: Started libpod-conmon-f8fe97a83184f54b59807fc246b6e98b7dbfef8e1ce4dce438b20a5987533f23.scope.
Oct  2 04:05:28 np0005465604 podman[238463]: 2025-10-02 08:05:28.624471358 +0000 UTC m=+0.032082470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:05:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:05:28 np0005465604 podman[238463]: 2025-10-02 08:05:28.760253445 +0000 UTC m=+0.167864487 container init f8fe97a83184f54b59807fc246b6e98b7dbfef8e1ce4dce438b20a5987533f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:05:28 np0005465604 podman[238463]: 2025-10-02 08:05:28.7739242 +0000 UTC m=+0.181535222 container start f8fe97a83184f54b59807fc246b6e98b7dbfef8e1ce4dce438b20a5987533f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:05:28 np0005465604 podman[238463]: 2025-10-02 08:05:28.777664531 +0000 UTC m=+0.185275593 container attach f8fe97a83184f54b59807fc246b6e98b7dbfef8e1ce4dce438b20a5987533f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 04:05:28 np0005465604 epic_wu[238479]: 167 167
Oct  2 04:05:28 np0005465604 systemd[1]: libpod-f8fe97a83184f54b59807fc246b6e98b7dbfef8e1ce4dce438b20a5987533f23.scope: Deactivated successfully.
Oct  2 04:05:28 np0005465604 podman[238463]: 2025-10-02 08:05:28.782172544 +0000 UTC m=+0.189783566 container died f8fe97a83184f54b59807fc246b6e98b7dbfef8e1ce4dce438b20a5987533f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 04:05:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-231b764d4edec8fc7be6408ede2c8f67700ae7d87f7e86fceff99d21e57e7966-merged.mount: Deactivated successfully.
Oct  2 04:05:28 np0005465604 podman[238463]: 2025-10-02 08:05:28.816943603 +0000 UTC m=+0.224554625 container remove f8fe97a83184f54b59807fc246b6e98b7dbfef8e1ce4dce438b20a5987533f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 04:05:28 np0005465604 systemd[1]: libpod-conmon-f8fe97a83184f54b59807fc246b6e98b7dbfef8e1ce4dce438b20a5987533f23.scope: Deactivated successfully.
Oct  2 04:05:29 np0005465604 podman[238578]: 2025-10-02 08:05:29.067336901 +0000 UTC m=+0.069939829 container create fc3126736afeae8e368b451847ff1ec7c0fa4c05d21a7c377dd4879d32cf01ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:05:29 np0005465604 podman[238578]: 2025-10-02 08:05:29.040036654 +0000 UTC m=+0.042639612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:05:29 np0005465604 systemd[1]: Started libpod-conmon-fc3126736afeae8e368b451847ff1ec7c0fa4c05d21a7c377dd4879d32cf01ef.scope.
Oct  2 04:05:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:05:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a66717a7426346d3204ba2d4291a705959b8281d1f7b11ad6f690e2e393b54d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a66717a7426346d3204ba2d4291a705959b8281d1f7b11ad6f690e2e393b54d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a66717a7426346d3204ba2d4291a705959b8281d1f7b11ad6f690e2e393b54d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a66717a7426346d3204ba2d4291a705959b8281d1f7b11ad6f690e2e393b54d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:29 np0005465604 podman[238578]: 2025-10-02 08:05:29.21730341 +0000 UTC m=+0.219906418 container init fc3126736afeae8e368b451847ff1ec7c0fa4c05d21a7c377dd4879d32cf01ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 04:05:29 np0005465604 podman[238578]: 2025-10-02 08:05:29.228566922 +0000 UTC m=+0.231169880 container start fc3126736afeae8e368b451847ff1ec7c0fa4c05d21a7c377dd4879d32cf01ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 04:05:29 np0005465604 podman[238578]: 2025-10-02 08:05:29.232701075 +0000 UTC m=+0.235304043 container attach fc3126736afeae8e368b451847ff1ec7c0fa4c05d21a7c377dd4879d32cf01ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:05:29 np0005465604 python3.9[238648]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:05:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]: {
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "osd_id": 2,
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "type": "bluestore"
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:    },
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "osd_id": 1,
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "type": "bluestore"
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:    },
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "osd_id": 0,
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:        "type": "bluestore"
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]:    }
Oct  2 04:05:30 np0005465604 stupefied_goldberg[238643]: }
Oct  2 04:05:30 np0005465604 python3.9[238816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:05:30 np0005465604 systemd[1]: libpod-fc3126736afeae8e368b451847ff1ec7c0fa4c05d21a7c377dd4879d32cf01ef.scope: Deactivated successfully.
Oct  2 04:05:30 np0005465604 systemd[1]: libpod-fc3126736afeae8e368b451847ff1ec7c0fa4c05d21a7c377dd4879d32cf01ef.scope: Consumed 1.138s CPU time.
Oct  2 04:05:30 np0005465604 podman[238578]: 2025-10-02 08:05:30.361963879 +0000 UTC m=+1.364566797 container died fc3126736afeae8e368b451847ff1ec7c0fa4c05d21a7c377dd4879d32cf01ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 04:05:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4a66717a7426346d3204ba2d4291a705959b8281d1f7b11ad6f690e2e393b54d-merged.mount: Deactivated successfully.
Oct  2 04:05:30 np0005465604 podman[238578]: 2025-10-02 08:05:30.442058859 +0000 UTC m=+1.444661817 container remove fc3126736afeae8e368b451847ff1ec7c0fa4c05d21a7c377dd4879d32cf01ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct  2 04:05:30 np0005465604 systemd[1]: libpod-conmon-fc3126736afeae8e368b451847ff1ec7c0fa4c05d21a7c377dd4879d32cf01ef.scope: Deactivated successfully.
Oct  2 04:05:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:05:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:05:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:05:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:05:30 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5414acba-af68-4647-b80e-4a7107ce3061 does not exist
Oct  2 04:05:30 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9b9734d1-7c4f-49c2-b004-9235fe235b6b does not exist
Oct  2 04:05:31 np0005465604 python3.9[239016]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759392329.744068-750-178928300297825/.source.json _original_basename=.a_yj8f94 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:05:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:05:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:31 np0005465604 python3.9[239168]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:34 np0005465604 python3.9[239595]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  2 04:05:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:05:34.790 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:05:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:05:34.792 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:05:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:05:34.792 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:05:35 np0005465604 podman[239701]: 2025-10-02 08:05:35.074106797 +0000 UTC m=+0.126443951 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:05:35 np0005465604 python3.9[239772]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 04:05:36 np0005465604 python3.9[239925]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 04:05:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:38 np0005465604 python3[240103]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:05:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:39 np0005465604 podman[240117]: 2025-10-02 08:05:39.497374009 +0000 UTC m=+1.309892790 image pull 4ee39d2b05f9d7d8e7f025baefe799c33619f4419f4eb27d17ca383a40343475 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136
Oct  2 04:05:39 np0005465604 podman[240174]: 2025-10-02 08:05:39.742032127 +0000 UTC m=+0.075436892 container create 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:05:39 np0005465604 podman[240174]: 2025-10-02 08:05:39.699004815 +0000 UTC m=+0.032409620 image pull 4ee39d2b05f9d7d8e7f025baefe799c33619f4419f4eb27d17ca383a40343475 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136
Oct  2 04:05:39 np0005465604 python3[240103]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136
Oct  2 04:05:40 np0005465604 podman[240213]: 2025-10-02 08:05:40.051606568 +0000 UTC m=+0.100102863 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 04:05:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:40 np0005465604 python3.9[240383]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:05:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:41 np0005465604 python3.9[240537]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:42 np0005465604 python3.9[240613]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:05:43 np0005465604 python3.9[240764]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759392342.3905525-838-194008031876411/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:43 np0005465604 python3.9[240840]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 04:05:43 np0005465604 systemd[1]: Reloading.
Oct  2 04:05:44 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:05:44 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:05:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:44 np0005465604 podman[240923]: 2025-10-02 08:05:44.753989777 +0000 UTC m=+0.105649556 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:05:45 np0005465604 python3.9[240970]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:05:45 np0005465604 systemd[1]: Reloading.
Oct  2 04:05:45 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:05:45 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:05:45 np0005465604 systemd[1]: Starting multipathd container...
Oct  2 04:05:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:05:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6579c344393e341416a38e83cba63fa20e522d6b334e0a761448b62c8690392b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6579c344393e341416a38e83cba63fa20e522d6b334e0a761448b62c8690392b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:45 np0005465604 systemd[1]: Started /usr/bin/podman healthcheck run 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480.
Oct  2 04:05:45 np0005465604 podman[241011]: 2025-10-02 08:05:45.801919985 +0000 UTC m=+0.188893210 container init 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 04:05:45 np0005465604 multipathd[241027]: + sudo -E kolla_set_configs
Oct  2 04:05:45 np0005465604 podman[241011]: 2025-10-02 08:05:45.852283606 +0000 UTC m=+0.239256761 container start 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 04:05:45 np0005465604 podman[241011]: multipathd
Oct  2 04:05:45 np0005465604 systemd[1]: Started multipathd container.
Oct  2 04:05:45 np0005465604 multipathd[241027]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 04:05:45 np0005465604 multipathd[241027]: INFO:__main__:Validating config file
Oct  2 04:05:45 np0005465604 multipathd[241027]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 04:05:45 np0005465604 multipathd[241027]: INFO:__main__:Writing out command to execute
Oct  2 04:05:45 np0005465604 multipathd[241027]: ++ cat /run_command
Oct  2 04:05:45 np0005465604 multipathd[241027]: + CMD='/usr/sbin/multipathd -d'
Oct  2 04:05:45 np0005465604 multipathd[241027]: + ARGS=
Oct  2 04:05:45 np0005465604 multipathd[241027]: + sudo kolla_copy_cacerts
Oct  2 04:05:45 np0005465604 multipathd[241027]: Running command: '/usr/sbin/multipathd -d'
Oct  2 04:05:45 np0005465604 multipathd[241027]: + [[ ! -n '' ]]
Oct  2 04:05:45 np0005465604 multipathd[241027]: + . kolla_extend_start
Oct  2 04:05:45 np0005465604 multipathd[241027]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  2 04:05:45 np0005465604 multipathd[241027]: + umask 0022
Oct  2 04:05:45 np0005465604 multipathd[241027]: + exec /usr/sbin/multipathd -d
Oct  2 04:05:45 np0005465604 podman[241034]: 2025-10-02 08:05:45.968867606 +0000 UTC m=+0.094385265 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:05:45 np0005465604 systemd[1]: 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480-7555e8008ef6f526.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 04:05:45 np0005465604 systemd[1]: 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480-7555e8008ef6f526.service: Failed with result 'exit-code'.
Oct  2 04:05:45 np0005465604 multipathd[241027]: 3224.646330 | --------start up--------
Oct  2 04:05:45 np0005465604 multipathd[241027]: 3224.646351 | read /etc/multipath.conf
Oct  2 04:05:45 np0005465604 multipathd[241027]: 3224.658626 | path checkers start up
Oct  2 04:05:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:46 np0005465604 python3.9[241216]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:05:47 np0005465604 python3.9[241370]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:05:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:48 np0005465604 python3.9[241536]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 04:05:49 np0005465604 systemd[1]: Stopping multipathd container...
Oct  2 04:05:49 np0005465604 multipathd[241027]: 3227.774592 | exit (signal)
Oct  2 04:05:49 np0005465604 multipathd[241027]: 3227.774727 | --------shut down-------
Oct  2 04:05:49 np0005465604 systemd[1]: libpod-08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480.scope: Deactivated successfully.
Oct  2 04:05:49 np0005465604 podman[241540]: 2025-10-02 08:05:49.152548898 +0000 UTC m=+0.088187921 container died 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:05:49 np0005465604 systemd[1]: 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480-7555e8008ef6f526.timer: Deactivated successfully.
Oct  2 04:05:49 np0005465604 systemd[1]: Stopped /usr/bin/podman healthcheck run 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480.
Oct  2 04:05:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480-userdata-shm.mount: Deactivated successfully.
Oct  2 04:05:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6579c344393e341416a38e83cba63fa20e522d6b334e0a761448b62c8690392b-merged.mount: Deactivated successfully.
Oct  2 04:05:49 np0005465604 podman[241540]: 2025-10-02 08:05:49.322403204 +0000 UTC m=+0.258042227 container cleanup 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 04:05:49 np0005465604 podman[241540]: multipathd
Oct  2 04:05:49 np0005465604 podman[241569]: multipathd
Oct  2 04:05:49 np0005465604 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  2 04:05:49 np0005465604 systemd[1]: Stopped multipathd container.
Oct  2 04:05:49 np0005465604 systemd[1]: Starting multipathd container...
Oct  2 04:05:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:05:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6579c344393e341416a38e83cba63fa20e522d6b334e0a761448b62c8690392b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6579c344393e341416a38e83cba63fa20e522d6b334e0a761448b62c8690392b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 04:05:49 np0005465604 systemd[1]: Started /usr/bin/podman healthcheck run 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480.
Oct  2 04:05:49 np0005465604 podman[241582]: 2025-10-02 08:05:49.550184834 +0000 UTC m=+0.131304316 container init 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct  2 04:05:49 np0005465604 multipathd[241598]: + sudo -E kolla_set_configs
Oct  2 04:05:49 np0005465604 podman[241582]: 2025-10-02 08:05:49.577307526 +0000 UTC m=+0.158426998 container start 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:05:49 np0005465604 podman[241582]: multipathd
Oct  2 04:05:49 np0005465604 systemd[1]: Started multipathd container.
Oct  2 04:05:49 np0005465604 multipathd[241598]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 04:05:49 np0005465604 multipathd[241598]: INFO:__main__:Validating config file
Oct  2 04:05:49 np0005465604 multipathd[241598]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 04:05:49 np0005465604 multipathd[241598]: INFO:__main__:Writing out command to execute
Oct  2 04:05:49 np0005465604 multipathd[241598]: ++ cat /run_command
Oct  2 04:05:49 np0005465604 multipathd[241598]: + CMD='/usr/sbin/multipathd -d'
Oct  2 04:05:49 np0005465604 multipathd[241598]: + ARGS=
Oct  2 04:05:49 np0005465604 multipathd[241598]: + sudo kolla_copy_cacerts
Oct  2 04:05:49 np0005465604 multipathd[241598]: + [[ ! -n '' ]]
Oct  2 04:05:49 np0005465604 multipathd[241598]: + . kolla_extend_start
Oct  2 04:05:49 np0005465604 multipathd[241598]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  2 04:05:49 np0005465604 multipathd[241598]: Running command: '/usr/sbin/multipathd -d'
Oct  2 04:05:49 np0005465604 multipathd[241598]: + umask 0022
Oct  2 04:05:49 np0005465604 multipathd[241598]: + exec /usr/sbin/multipathd -d
Oct  2 04:05:49 np0005465604 podman[241605]: 2025-10-02 08:05:49.689100784 +0000 UTC m=+0.097642730 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 04:05:49 np0005465604 systemd[1]: 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480-4f12439686c5af9f.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 04:05:49 np0005465604 systemd[1]: 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480-4f12439686c5af9f.service: Failed with result 'exit-code'.
Oct  2 04:05:49 np0005465604 multipathd[241598]: 3228.363648 | --------start up--------
Oct  2 04:05:49 np0005465604 multipathd[241598]: 3228.363662 | read /etc/multipath.conf
Oct  2 04:05:49 np0005465604 multipathd[241598]: 3228.369533 | path checkers start up
Oct  2 04:05:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:50 np0005465604 python3.9[241788]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:51 np0005465604 python3.9[241940]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 04:05:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:52 np0005465604 python3.9[242092]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  2 04:05:52 np0005465604 kernel: Key type psk registered
Oct  2 04:05:53 np0005465604 python3.9[242255]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:05:54 np0005465604 python3.9[242378]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759392352.8662572-918-112326981945355/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:55 np0005465604 python3.9[242530]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:05:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:56 np0005465604 python3.9[242682]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 04:05:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:05:57 np0005465604 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  2 04:05:57 np0005465604 systemd[1]: Stopped Load Kernel Modules.
Oct  2 04:05:57 np0005465604 systemd[1]: Stopping Load Kernel Modules...
Oct  2 04:05:57 np0005465604 systemd[1]: Starting Load Kernel Modules...
Oct  2 04:05:57 np0005465604 systemd[1]: Finished Load Kernel Modules.
Oct  2 04:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:05:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:05:58 np0005465604 python3.9[242840]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 04:05:59 np0005465604 python3.9[242924]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 04:06:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:06 np0005465604 podman[242929]: 2025-10-02 08:06:06.052845514 +0000 UTC m=+0.170559528 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 04:06:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:06 np0005465604 systemd[1]: Reloading.
Oct  2 04:06:06 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:06:06 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:06:06 np0005465604 systemd[1]: Reloading.
Oct  2 04:06:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:06 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:06:06 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:06:07 np0005465604 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  2 04:06:07 np0005465604 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  2 04:06:07 np0005465604 lvm[243062]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  2 04:06:07 np0005465604 lvm[243062]: VG ceph_vg1 finished
Oct  2 04:06:07 np0005465604 lvm[243064]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 04:06:07 np0005465604 lvm[243064]: VG ceph_vg0 finished
Oct  2 04:06:07 np0005465604 lvm[243063]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  2 04:06:07 np0005465604 lvm[243063]: VG ceph_vg2 finished
Oct  2 04:06:07 np0005465604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 04:06:07 np0005465604 systemd[1]: Starting man-db-cache-update.service...
Oct  2 04:06:07 np0005465604 systemd[1]: Reloading.
Oct  2 04:06:07 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:06:07 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:06:07 np0005465604 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 04:06:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:09 np0005465604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 04:06:09 np0005465604 systemd[1]: Finished man-db-cache-update.service.
Oct  2 04:06:09 np0005465604 systemd[1]: man-db-cache-update.service: Consumed 2.119s CPU time.
Oct  2 04:06:09 np0005465604 systemd[1]: run-r0edbc202e59d4148a7f0bc16c43a2fc2.service: Deactivated successfully.
Oct  2 04:06:09 np0005465604 python3.9[244408]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:10 np0005465604 podman[244533]: 2025-10-02 08:06:10.347982194 +0000 UTC m=+0.111183591 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:06:10 np0005465604 python3.9[244577]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 04:06:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:11 np0005465604 python3.9[244735]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:13 np0005465604 python3.9[244887]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 04:06:13 np0005465604 systemd[1]: Reloading.
Oct  2 04:06:13 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:06:13 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:06:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:14 np0005465604 python3.9[245071]: ansible-ansible.builtin.service_facts Invoked
Oct  2 04:06:14 np0005465604 network[245088]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 04:06:14 np0005465604 network[245089]: 'network-scripts' will be removed from distribution in near future.
Oct  2 04:06:14 np0005465604 network[245090]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 04:06:15 np0005465604 podman[245096]: 2025-10-02 08:06:15.639304039 +0000 UTC m=+0.088831479 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:06:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:19 np0005465604 python3.9[245388]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:06:20 np0005465604 podman[245389]: 2025-10-02 08:06:20.013913192 +0000 UTC m=+0.072040304 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:06:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:20 np0005465604 python3.9[245561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:06:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:21 np0005465604 python3.9[245714]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:06:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:22 np0005465604 python3.9[245867]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:06:23 np0005465604 python3.9[246020]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:06:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:24 np0005465604 python3.9[246173]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:06:25 np0005465604 python3.9[246326]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:06:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:26 np0005465604 python3.9[246479]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.752606) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392386752665, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1316, "num_deletes": 505, "total_data_size": 1608648, "memory_usage": 1639424, "flush_reason": "Manual Compaction"}
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392386765846, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1593568, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13522, "largest_seqno": 14837, "table_properties": {"data_size": 1587734, "index_size": 2718, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14535, "raw_average_key_size": 17, "raw_value_size": 1574165, "raw_average_value_size": 1943, "num_data_blocks": 125, "num_entries": 810, "num_filter_entries": 810, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759392277, "oldest_key_time": 1759392277, "file_creation_time": 1759392386, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 13283 microseconds, and 7982 cpu microseconds.
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.765901) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1593568 bytes OK
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.765926) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.768102) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.768123) EVENT_LOG_v1 {"time_micros": 1759392386768116, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.768147) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1601691, prev total WAL file size 1601691, number of live WAL files 2.
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.771793) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1556KB)], [32(7400KB)]
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392386771856, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9172107, "oldest_snapshot_seqno": -1}
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3786 keys, 7235721 bytes, temperature: kUnknown
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392386819142, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7235721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7208478, "index_size": 16701, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92728, "raw_average_key_size": 24, "raw_value_size": 7137933, "raw_average_value_size": 1885, "num_data_blocks": 707, "num_entries": 3786, "num_filter_entries": 3786, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759392386, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.819596) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7235721 bytes
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.822172) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.4 rd, 152.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.2 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(10.3) write-amplify(4.5) OK, records in: 4809, records dropped: 1023 output_compression: NoCompression
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.822207) EVENT_LOG_v1 {"time_micros": 1759392386822189, "job": 14, "event": "compaction_finished", "compaction_time_micros": 47432, "compaction_time_cpu_micros": 32049, "output_level": 6, "num_output_files": 1, "total_output_size": 7235721, "num_input_records": 4809, "num_output_records": 3786, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392386822957, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392386826145, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.771641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.826203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.826212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.826215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.826219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:06:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:06:26.826222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:06:27 np0005465604 python3.9[246632]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:06:27
Oct  2 04:06:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:06:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:06:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.mgr', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes', 'cephfs.cephfs.meta']
Oct  2 04:06:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:06:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:28 np0005465604 python3.9[246784]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:29 np0005465604 python3.9[246936]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:30 np0005465604 python3.9[247088]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:30 np0005465604 python3.9[247253]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:06:31 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a42887cb-1ac0-425b-aafc-1aa86a2885d2 does not exist
Oct  2 04:06:31 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c99f35aa-ff40-41e7-a5f7-c684e6d66af3 does not exist
Oct  2 04:06:31 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 77e0769c-6167-465f-82ac-6f94fd76d829 does not exist
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:06:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:31 np0005465604 python3.9[247522]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:06:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:06:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:06:32 np0005465604 podman[247817]: 2025-10-02 08:06:32.499623963 +0000 UTC m=+0.058793623 container create 2934abfacc2ca9e59f715574c5da0c02c84268d3ddc713b9a7d3f5a6fae862d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 04:06:32 np0005465604 systemd[1]: Started libpod-conmon-2934abfacc2ca9e59f715574c5da0c02c84268d3ddc713b9a7d3f5a6fae862d9.scope.
Oct  2 04:06:32 np0005465604 podman[247817]: 2025-10-02 08:06:32.474883911 +0000 UTC m=+0.034053611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:06:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:06:32 np0005465604 podman[247817]: 2025-10-02 08:06:32.614722382 +0000 UTC m=+0.173892032 container init 2934abfacc2ca9e59f715574c5da0c02c84268d3ddc713b9a7d3f5a6fae862d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:06:32 np0005465604 podman[247817]: 2025-10-02 08:06:32.625161373 +0000 UTC m=+0.184331023 container start 2934abfacc2ca9e59f715574c5da0c02c84268d3ddc713b9a7d3f5a6fae862d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:06:32 np0005465604 podman[247817]: 2025-10-02 08:06:32.629154026 +0000 UTC m=+0.188323666 container attach 2934abfacc2ca9e59f715574c5da0c02c84268d3ddc713b9a7d3f5a6fae862d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 04:06:32 np0005465604 condescending_driscoll[247833]: 167 167
Oct  2 04:06:32 np0005465604 systemd[1]: libpod-2934abfacc2ca9e59f715574c5da0c02c84268d3ddc713b9a7d3f5a6fae862d9.scope: Deactivated successfully.
Oct  2 04:06:32 np0005465604 podman[247817]: 2025-10-02 08:06:32.633168931 +0000 UTC m=+0.192338561 container died 2934abfacc2ca9e59f715574c5da0c02c84268d3ddc713b9a7d3f5a6fae862d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 04:06:32 np0005465604 python3.9[247815]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-64ced618b79ca76bed5a408ad22c147c0165222fda94a23e51119cd4fea0589e-merged.mount: Deactivated successfully.
Oct  2 04:06:32 np0005465604 podman[247817]: 2025-10-02 08:06:32.691159988 +0000 UTC m=+0.250329628 container remove 2934abfacc2ca9e59f715574c5da0c02c84268d3ddc713b9a7d3f5a6fae862d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 04:06:32 np0005465604 systemd[1]: libpod-conmon-2934abfacc2ca9e59f715574c5da0c02c84268d3ddc713b9a7d3f5a6fae862d9.scope: Deactivated successfully.
Oct  2 04:06:32 np0005465604 podman[247889]: 2025-10-02 08:06:32.861996234 +0000 UTC m=+0.046896296 container create 5df73576b294fb893765a598565c89f6fa29eb31dc9c46702eea83784bcfb1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 04:06:32 np0005465604 systemd[1]: Started libpod-conmon-5df73576b294fb893765a598565c89f6fa29eb31dc9c46702eea83784bcfb1f8.scope.
Oct  2 04:06:32 np0005465604 podman[247889]: 2025-10-02 08:06:32.844100803 +0000 UTC m=+0.029000885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:06:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:06:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3486bf7d20a307fd1ac8d118614f2237286dcea0033dab40716241b05e6040ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3486bf7d20a307fd1ac8d118614f2237286dcea0033dab40716241b05e6040ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3486bf7d20a307fd1ac8d118614f2237286dcea0033dab40716241b05e6040ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3486bf7d20a307fd1ac8d118614f2237286dcea0033dab40716241b05e6040ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3486bf7d20a307fd1ac8d118614f2237286dcea0033dab40716241b05e6040ef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:32 np0005465604 podman[247889]: 2025-10-02 08:06:32.964252577 +0000 UTC m=+0.149152659 container init 5df73576b294fb893765a598565c89f6fa29eb31dc9c46702eea83784bcfb1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:06:32 np0005465604 podman[247889]: 2025-10-02 08:06:32.974816432 +0000 UTC m=+0.159716534 container start 5df73576b294fb893765a598565c89f6fa29eb31dc9c46702eea83784bcfb1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:06:32 np0005465604 podman[247889]: 2025-10-02 08:06:32.979478777 +0000 UTC m=+0.164378859 container attach 5df73576b294fb893765a598565c89f6fa29eb31dc9c46702eea83784bcfb1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 04:06:33 np0005465604 python3.9[248032]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:34 np0005465604 festive_mclaren[247949]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:06:34 np0005465604 festive_mclaren[247949]: --> relative data size: 1.0
Oct  2 04:06:34 np0005465604 festive_mclaren[247949]: --> All data devices are unavailable
Oct  2 04:06:34 np0005465604 systemd[1]: libpod-5df73576b294fb893765a598565c89f6fa29eb31dc9c46702eea83784bcfb1f8.scope: Deactivated successfully.
Oct  2 04:06:34 np0005465604 podman[247889]: 2025-10-02 08:06:34.266977238 +0000 UTC m=+1.451877330 container died 5df73576b294fb893765a598565c89f6fa29eb31dc9c46702eea83784bcfb1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclaren, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:06:34 np0005465604 systemd[1]: libpod-5df73576b294fb893765a598565c89f6fa29eb31dc9c46702eea83784bcfb1f8.scope: Consumed 1.216s CPU time.
Oct  2 04:06:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3486bf7d20a307fd1ac8d118614f2237286dcea0033dab40716241b05e6040ef-merged.mount: Deactivated successfully.
Oct  2 04:06:34 np0005465604 podman[247889]: 2025-10-02 08:06:34.356102785 +0000 UTC m=+1.541002887 container remove 5df73576b294fb893765a598565c89f6fa29eb31dc9c46702eea83784bcfb1f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:06:34 np0005465604 systemd[1]: libpod-conmon-5df73576b294fb893765a598565c89f6fa29eb31dc9c46702eea83784bcfb1f8.scope: Deactivated successfully.
Oct  2 04:06:34 np0005465604 python3.9[248207]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:06:34.791 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:06:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:06:34.792 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:06:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:06:34.793 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:06:35 np0005465604 podman[248513]: 2025-10-02 08:06:35.106507369 +0000 UTC m=+0.054307725 container create 11343fdbd548550ff06ba68c35c1dcedb87e496c1d5c616c4c0fcc50b0fbaf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:06:35 np0005465604 systemd[1]: Started libpod-conmon-11343fdbd548550ff06ba68c35c1dcedb87e496c1d5c616c4c0fcc50b0fbaf77.scope.
Oct  2 04:06:35 np0005465604 podman[248513]: 2025-10-02 08:06:35.076825784 +0000 UTC m=+0.024626190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:06:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:06:35 np0005465604 podman[248513]: 2025-10-02 08:06:35.19734751 +0000 UTC m=+0.145147866 container init 11343fdbd548550ff06ba68c35c1dcedb87e496c1d5c616c4c0fcc50b0fbaf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:06:35 np0005465604 podman[248513]: 2025-10-02 08:06:35.209535046 +0000 UTC m=+0.157335402 container start 11343fdbd548550ff06ba68c35c1dcedb87e496c1d5c616c4c0fcc50b0fbaf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:06:35 np0005465604 podman[248513]: 2025-10-02 08:06:35.21390126 +0000 UTC m=+0.161701616 container attach 11343fdbd548550ff06ba68c35c1dcedb87e496c1d5c616c4c0fcc50b0fbaf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 04:06:35 np0005465604 youthful_galois[248529]: 167 167
Oct  2 04:06:35 np0005465604 systemd[1]: libpod-11343fdbd548550ff06ba68c35c1dcedb87e496c1d5c616c4c0fcc50b0fbaf77.scope: Deactivated successfully.
Oct  2 04:06:35 np0005465604 podman[248513]: 2025-10-02 08:06:35.216462249 +0000 UTC m=+0.164262575 container died 11343fdbd548550ff06ba68c35c1dcedb87e496c1d5c616c4c0fcc50b0fbaf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 04:06:35 np0005465604 python3.9[248505]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3d51423ba3a27a00ecd1d7f1e6d4adb7bb78decf15f5520205d8b9d0103507d1-merged.mount: Deactivated successfully.
Oct  2 04:06:35 np0005465604 podman[248513]: 2025-10-02 08:06:35.269742021 +0000 UTC m=+0.217542377 container remove 11343fdbd548550ff06ba68c35c1dcedb87e496c1d5c616c4c0fcc50b0fbaf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:06:35 np0005465604 systemd[1]: libpod-conmon-11343fdbd548550ff06ba68c35c1dcedb87e496c1d5c616c4c0fcc50b0fbaf77.scope: Deactivated successfully.
Oct  2 04:06:35 np0005465604 podman[248600]: 2025-10-02 08:06:35.557214314 +0000 UTC m=+0.080770012 container create 9f3ad72ebe973f3da3d20f4936aa15918b1196f89dbc3e2e27f22a21c65ca281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 04:06:35 np0005465604 podman[248600]: 2025-10-02 08:06:35.524853506 +0000 UTC m=+0.048416514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:06:35 np0005465604 systemd[1]: Started libpod-conmon-9f3ad72ebe973f3da3d20f4936aa15918b1196f89dbc3e2e27f22a21c65ca281.scope.
Oct  2 04:06:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:06:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df64a49d45119945220a3792b0c3c00047be77c7ec2de0cd48616323adf148f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df64a49d45119945220a3792b0c3c00047be77c7ec2de0cd48616323adf148f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df64a49d45119945220a3792b0c3c00047be77c7ec2de0cd48616323adf148f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df64a49d45119945220a3792b0c3c00047be77c7ec2de0cd48616323adf148f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:35 np0005465604 podman[248600]: 2025-10-02 08:06:35.680015149 +0000 UTC m=+0.203570907 container init 9f3ad72ebe973f3da3d20f4936aa15918b1196f89dbc3e2e27f22a21c65ca281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:06:35 np0005465604 podman[248600]: 2025-10-02 08:06:35.700710657 +0000 UTC m=+0.224266335 container start 9f3ad72ebe973f3da3d20f4936aa15918b1196f89dbc3e2e27f22a21c65ca281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:06:35 np0005465604 podman[248600]: 2025-10-02 08:06:35.704539756 +0000 UTC m=+0.228095514 container attach 9f3ad72ebe973f3da3d20f4936aa15918b1196f89dbc3e2e27f22a21c65ca281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:06:36 np0005465604 python3.9[248726]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]: {
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:    "0": [
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:        {
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "devices": [
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "/dev/loop3"
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            ],
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_name": "ceph_lv0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_size": "21470642176",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "name": "ceph_lv0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "tags": {
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.cluster_name": "ceph",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.crush_device_class": "",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.encrypted": "0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.osd_id": "0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.type": "block",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.vdo": "0"
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            },
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "type": "block",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "vg_name": "ceph_vg0"
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:        }
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:    ],
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:    "1": [
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:        {
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "devices": [
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "/dev/loop4"
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            ],
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_name": "ceph_lv1",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_size": "21470642176",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "name": "ceph_lv1",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "tags": {
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.cluster_name": "ceph",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.crush_device_class": "",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.encrypted": "0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.osd_id": "1",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.type": "block",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.vdo": "0"
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            },
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "type": "block",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "vg_name": "ceph_vg1"
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:        }
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:    ],
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:    "2": [
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:        {
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "devices": [
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "/dev/loop5"
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            ],
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_name": "ceph_lv2",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_size": "21470642176",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "name": "ceph_lv2",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "tags": {
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.cluster_name": "ceph",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.crush_device_class": "",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.encrypted": "0",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.osd_id": "2",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.type": "block",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:                "ceph.vdo": "0"
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            },
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "type": "block",
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:            "vg_name": "ceph_vg2"
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:        }
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]:    ]
Oct  2 04:06:36 np0005465604 affectionate_gagarin[248646]: }
Oct  2 04:06:36 np0005465604 systemd[1]: libpod-9f3ad72ebe973f3da3d20f4936aa15918b1196f89dbc3e2e27f22a21c65ca281.scope: Deactivated successfully.
Oct  2 04:06:36 np0005465604 podman[248600]: 2025-10-02 08:06:36.520129479 +0000 UTC m=+1.043685157 container died 9f3ad72ebe973f3da3d20f4936aa15918b1196f89dbc3e2e27f22a21c65ca281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 04:06:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9df64a49d45119945220a3792b0c3c00047be77c7ec2de0cd48616323adf148f-merged.mount: Deactivated successfully.
Oct  2 04:06:36 np0005465604 podman[248600]: 2025-10-02 08:06:36.592052566 +0000 UTC m=+1.115608234 container remove 9f3ad72ebe973f3da3d20f4936aa15918b1196f89dbc3e2e27f22a21c65ca281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:06:36 np0005465604 systemd[1]: libpod-conmon-9f3ad72ebe973f3da3d20f4936aa15918b1196f89dbc3e2e27f22a21c65ca281.scope: Deactivated successfully.
Oct  2 04:06:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:36 np0005465604 podman[248830]: 2025-10-02 08:06:36.766601997 +0000 UTC m=+0.200123841 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 04:06:37 np0005465604 python3.9[248953]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:37 np0005465604 podman[249157]: 2025-10-02 08:06:37.498353285 +0000 UTC m=+0.055263814 container create 899cfa1267d9e0e1a6ab86c0e2f17ab341b8845e259a0e5e2d6caf79698dd62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:06:37 np0005465604 systemd[1]: Started libpod-conmon-899cfa1267d9e0e1a6ab86c0e2f17ab341b8845e259a0e5e2d6caf79698dd62c.scope.
Oct  2 04:06:37 np0005465604 podman[249157]: 2025-10-02 08:06:37.471213119 +0000 UTC m=+0.028123698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:06:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:06:37 np0005465604 podman[249157]: 2025-10-02 08:06:37.624285437 +0000 UTC m=+0.181196006 container init 899cfa1267d9e0e1a6ab86c0e2f17ab341b8845e259a0e5e2d6caf79698dd62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 04:06:37 np0005465604 podman[249157]: 2025-10-02 08:06:37.632371147 +0000 UTC m=+0.189281656 container start 899cfa1267d9e0e1a6ab86c0e2f17ab341b8845e259a0e5e2d6caf79698dd62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 04:06:37 np0005465604 podman[249157]: 2025-10-02 08:06:37.636734322 +0000 UTC m=+0.193644911 container attach 899cfa1267d9e0e1a6ab86c0e2f17ab341b8845e259a0e5e2d6caf79698dd62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:06:37 np0005465604 naughty_gates[249199]: 167 167
Oct  2 04:06:37 np0005465604 systemd[1]: libpod-899cfa1267d9e0e1a6ab86c0e2f17ab341b8845e259a0e5e2d6caf79698dd62c.scope: Deactivated successfully.
Oct  2 04:06:37 np0005465604 podman[249157]: 2025-10-02 08:06:37.640511708 +0000 UTC m=+0.197422247 container died 899cfa1267d9e0e1a6ab86c0e2f17ab341b8845e259a0e5e2d6caf79698dd62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:06:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9e681ebed81f382101d2333f9110fd4f076d208e104a09bb52ca939dc3f4c784-merged.mount: Deactivated successfully.
Oct  2 04:06:37 np0005465604 podman[249157]: 2025-10-02 08:06:37.694802151 +0000 UTC m=+0.251712660 container remove 899cfa1267d9e0e1a6ab86c0e2f17ab341b8845e259a0e5e2d6caf79698dd62c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:06:37 np0005465604 systemd[1]: libpod-conmon-899cfa1267d9e0e1a6ab86c0e2f17ab341b8845e259a0e5e2d6caf79698dd62c.scope: Deactivated successfully.
Oct  2 04:06:37 np0005465604 python3.9[249247]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:37 np0005465604 podman[249253]: 2025-10-02 08:06:37.899233223 +0000 UTC m=+0.054126509 container create d5c4cc2650609d27eaddbb9d658d82dd6c915368e631b46071c49cc8e31df80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shtern, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 04:06:37 np0005465604 systemd[1]: Started libpod-conmon-d5c4cc2650609d27eaddbb9d658d82dd6c915368e631b46071c49cc8e31df80d.scope.
Oct  2 04:06:37 np0005465604 podman[249253]: 2025-10-02 08:06:37.873775269 +0000 UTC m=+0.028668605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:06:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:06:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0211da195fc8a9aec7b4a6a73bdd944ed15bc4ecf0eeb95bf3dcf9027670d0fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0211da195fc8a9aec7b4a6a73bdd944ed15bc4ecf0eeb95bf3dcf9027670d0fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0211da195fc8a9aec7b4a6a73bdd944ed15bc4ecf0eeb95bf3dcf9027670d0fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0211da195fc8a9aec7b4a6a73bdd944ed15bc4ecf0eeb95bf3dcf9027670d0fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:06:38 np0005465604 podman[249253]: 2025-10-02 08:06:38.016064175 +0000 UTC m=+0.170957501 container init d5c4cc2650609d27eaddbb9d658d82dd6c915368e631b46071c49cc8e31df80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shtern, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct  2 04:06:38 np0005465604 podman[249253]: 2025-10-02 08:06:38.02982822 +0000 UTC m=+0.184721536 container start d5c4cc2650609d27eaddbb9d658d82dd6c915368e631b46071c49cc8e31df80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:06:38 np0005465604 podman[249253]: 2025-10-02 08:06:38.033865455 +0000 UTC m=+0.188758781 container attach d5c4cc2650609d27eaddbb9d658d82dd6c915368e631b46071c49cc8e31df80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:06:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:38 np0005465604 python3.9[249426]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]: {
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "osd_id": 2,
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "type": "bluestore"
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:    },
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "osd_id": 1,
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "type": "bluestore"
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:    },
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "osd_id": 0,
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:        "type": "bluestore"
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]:    }
Oct  2 04:06:39 np0005465604 recursing_shtern[249273]: }
Oct  2 04:06:39 np0005465604 systemd[1]: libpod-d5c4cc2650609d27eaddbb9d658d82dd6c915368e631b46071c49cc8e31df80d.scope: Deactivated successfully.
Oct  2 04:06:39 np0005465604 podman[249253]: 2025-10-02 08:06:39.224708296 +0000 UTC m=+1.379601622 container died d5c4cc2650609d27eaddbb9d658d82dd6c915368e631b46071c49cc8e31df80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 04:06:39 np0005465604 systemd[1]: libpod-d5c4cc2650609d27eaddbb9d658d82dd6c915368e631b46071c49cc8e31df80d.scope: Consumed 1.202s CPU time.
Oct  2 04:06:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0211da195fc8a9aec7b4a6a73bdd944ed15bc4ecf0eeb95bf3dcf9027670d0fb-merged.mount: Deactivated successfully.
Oct  2 04:06:39 np0005465604 podman[249253]: 2025-10-02 08:06:39.322384657 +0000 UTC m=+1.477277943 container remove d5c4cc2650609d27eaddbb9d658d82dd6c915368e631b46071c49cc8e31df80d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shtern, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:06:39 np0005465604 systemd[1]: libpod-conmon-d5c4cc2650609d27eaddbb9d658d82dd6c915368e631b46071c49cc8e31df80d.scope: Deactivated successfully.
Oct  2 04:06:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:06:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:06:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:06:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:06:39 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 122e33cf-d71a-4c1c-8eb9-1a44dffc53e4 does not exist
Oct  2 04:06:39 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a7bbf4bb-97ca-41c7-87a0-2b2a841e7d50 does not exist
Oct  2 04:06:39 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:06:39 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:06:39 np0005465604 python3.9[249638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:40 np0005465604 python3.9[249819]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:06:41 np0005465604 podman[249871]: 2025-10-02 08:06:41.020674042 +0000 UTC m=+0.079323506 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 04:06:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:06:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3315 writes, 14K keys, 3315 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3315 writes, 3315 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1290 writes, 5842 keys, 1290 commit groups, 1.0 writes per commit group, ingest: 8.53 MB, 0.01 MB/s#012Interval WAL: 1290 writes, 1290 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    137.8      0.12              0.06         7    0.016       0      0       0.0       0.0#012  L6      1/0    6.90 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    152.7    125.5      0.33              0.16         6    0.055     24K   3200       0.0       0.0#012 Sum      1/0    6.90 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6    113.2    128.7      0.44              0.22        13    0.034     24K   3200       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    116.6    117.6      0.30              0.15         8    0.037     17K   2468       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    152.7    125.5      0.33              0.16         6    0.055     24K   3200       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    144.2      0.11              0.06         6    0.018       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 308.00 MB usage: 1.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(101,1.48 MB,0.479914%) FilterBlock(14,75.80 KB,0.0240326%) IndexBlock(14,149.28 KB,0.047332%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 04:06:41 np0005465604 python3.9[249989]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:06:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:42 np0005465604 python3.9[250141]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 04:06:43 np0005465604 python3.9[250293]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 04:06:43 np0005465604 systemd[1]: Reloading.
Oct  2 04:06:43 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:06:43 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:06:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:44 np0005465604 python3.9[250479]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:06:45 np0005465604 python3.9[250632]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:06:45 np0005465604 podman[250634]: 2025-10-02 08:06:45.77273443 +0000 UTC m=+0.073959252 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:06:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:46 np0005465604 python3.9[250806]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:06:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:47 np0005465604 python3.9[250959]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:06:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:48 np0005465604 python3.9[251112]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:06:49 np0005465604 python3.9[251265]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:06:50 np0005465604 python3.9[251418]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:06:50 np0005465604 podman[251420]: 2025-10-02 08:06:50.248305124 +0000 UTC m=+0.112691925 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 04:06:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:50 np0005465604 python3.9[251588]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 04:06:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:52 np0005465604 python3.9[251741]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:06:53 np0005465604 python3.9[251893]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:06:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:54 np0005465604 python3.9[252045]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:06:55 np0005465604 python3.9[252197]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:06:55 np0005465604 python3.9[252349]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:06:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:06:56 np0005465604 python3.9[252501]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:06:57 np0005465604 python3.9[252653]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:06:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:06:58 np0005465604 python3.9[252805]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:06:59 np0005465604 python3.9[252957]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:07:00 np0005465604 python3.9[253109]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:07:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:00 np0005465604 python3.9[253261]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:07:01 np0005465604 python3.9[253413]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:07:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:07 np0005465604 podman[253490]: 2025-10-02 08:07:07.090867841 +0000 UTC m=+0.144948849 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:07:07 np0005465604 python3.9[253591]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  2 04:07:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:08 np0005465604 python3.9[253744]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 04:07:09 np0005465604 python3.9[253902]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 04:07:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:10 np0005465604 systemd-logind[787]: New session 53 of user zuul.
Oct  2 04:07:10 np0005465604 systemd[1]: Started Session 53 of User zuul.
Oct  2 04:07:10 np0005465604 systemd[1]: session-53.scope: Deactivated successfully.
Oct  2 04:07:10 np0005465604 systemd-logind[787]: Session 53 logged out. Waiting for processes to exit.
Oct  2 04:07:10 np0005465604 systemd-logind[787]: Removed session 53.
Oct  2 04:07:11 np0005465604 podman[254062]: 2025-10-02 08:07:11.590453496 +0000 UTC m=+0.086204409 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 04:07:11 np0005465604 python3.9[254099]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:07:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:12 np0005465604 python3.9[254227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759392431.1599667-1555-254439631797574/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:07:13 np0005465604 python3.9[254377]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:07:13 np0005465604 python3.9[254453]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:07:14 np0005465604 python3.9[254603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:07:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:14 np0005465604 python3.9[254724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759392433.667574-1555-100369303018463/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:07:15 np0005465604 python3.9[254874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:07:16 np0005465604 podman[254958]: 2025-10-02 08:07:15.999703785 +0000 UTC m=+0.064946173 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:07:16 np0005465604 python3.9[255015]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759392435.013509-1555-174914487630800/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:07:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:16 np0005465604 python3.9[255165]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:07:17 np0005465604 python3.9[255286]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759392436.415168-1555-196095924702334/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:07:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:18 np0005465604 python3.9[255438]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:07:19 np0005465604 python3.9[255590]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:07:20 np0005465604 python3.9[255742]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:07:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:20 np0005465604 podman[255866]: 2025-10-02 08:07:20.872908138 +0000 UTC m=+0.055213984 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:07:21 np0005465604 python3.9[255915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:07:21 np0005465604 python3.9[256038]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759392440.564383-1648-217411756810700/.source _original_basename=.ku92kaq1 follow=False checksum=124666a65a0dcec57ce63ee7f44f38be93eabef3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct  2 04:07:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:22 np0005465604 python3.9[256190]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:07:23 np0005465604 python3.9[256342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:07:24 np0005465604 python3.9[256463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759392442.760389-1674-62369710902244/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=94b3577758c4cdedb302f4b8e4358e36921221ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:07:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:24 np0005465604 python3.9[256613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 04:07:25 np0005465604 python3.9[256734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759392444.2832081-1689-40910375568826/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=78deab53b0ea0d6d685c99d2e70fdfd396f8d5ee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 04:07:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:26 np0005465604 python3.9[256886]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  2 04:07:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:27 np0005465604 python3.9[257038]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 04:07:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:07:27
Oct  2 04:07:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:07:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:07:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.control', '.rgw.root']
Oct  2 04:07:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:07:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:28 np0005465604 python3[257190]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 04:07:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:07:34.792 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:07:34.794 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:07:34.794 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:07:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:07:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:39 np0005465604 podman[257202]: 2025-10-02 08:07:39.338631164 +0000 UTC m=+10.437409959 image pull cb7a9bebda1404fc92f1415580e7da04b5fcfd160582e38b9b99703a41ed1519 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38
Oct  2 04:07:39 np0005465604 podman[257285]: 2025-10-02 08:07:39.359571909 +0000 UTC m=+1.415178048 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:07:39 np0005465604 podman[257334]: 2025-10-02 08:07:39.527918969 +0000 UTC m=+0.059793654 container create 3968affa8d8434646924e40b7c333276145b3da0cc2cce742d2cabc8734b3cf5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2)
Oct  2 04:07:39 np0005465604 podman[257334]: 2025-10-02 08:07:39.49552967 +0000 UTC m=+0.027404355 image pull cb7a9bebda1404fc92f1415580e7da04b5fcfd160582e38b9b99703a41ed1519 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38
Oct  2 04:07:39 np0005465604 python3[257190]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  2 04:07:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:07:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 17fc1afc-e728-4679-b69a-0a30663d8481 does not exist
Oct  2 04:07:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 86e5c35c-b344-465d-be65-901d9769b25d does not exist
Oct  2 04:07:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6c760a78-bd67-4382-a00c-96d6328c8d38 does not exist
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:07:40 np0005465604 python3.9[257640]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:07:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:07:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:07:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:07:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:07:41 np0005465604 podman[257899]: 2025-10-02 08:07:41.440253123 +0000 UTC m=+0.053034225 container create 8928465804e570879f20c9de2569cbc458f113cd3de174d09f53d7119e207cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:07:41 np0005465604 systemd[1]: Started libpod-conmon-8928465804e570879f20c9de2569cbc458f113cd3de174d09f53d7119e207cae.scope.
Oct  2 04:07:41 np0005465604 podman[257899]: 2025-10-02 08:07:41.416729558 +0000 UTC m=+0.029510750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:07:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:07:41 np0005465604 podman[257899]: 2025-10-02 08:07:41.556356573 +0000 UTC m=+0.169137755 container init 8928465804e570879f20c9de2569cbc458f113cd3de174d09f53d7119e207cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 04:07:41 np0005465604 podman[257899]: 2025-10-02 08:07:41.565587847 +0000 UTC m=+0.178368979 container start 8928465804e570879f20c9de2569cbc458f113cd3de174d09f53d7119e207cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 04:07:41 np0005465604 podman[257899]: 2025-10-02 08:07:41.572068607 +0000 UTC m=+0.184849799 container attach 8928465804e570879f20c9de2569cbc458f113cd3de174d09f53d7119e207cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:07:41 np0005465604 systemd[1]: libpod-8928465804e570879f20c9de2569cbc458f113cd3de174d09f53d7119e207cae.scope: Deactivated successfully.
Oct  2 04:07:41 np0005465604 thirsty_shirley[257937]: 167 167
Oct  2 04:07:41 np0005465604 podman[257899]: 2025-10-02 08:07:41.57508926 +0000 UTC m=+0.187870392 container died 8928465804e570879f20c9de2569cbc458f113cd3de174d09f53d7119e207cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:07:41 np0005465604 conmon[257937]: conmon 8928465804e570879f20 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8928465804e570879f20c9de2569cbc458f113cd3de174d09f53d7119e207cae.scope/container/memory.events
Oct  2 04:07:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a75d2ffe6a75580f71daa0f731d39a004d02508036907e8d64426f2f1839bda7-merged.mount: Deactivated successfully.
Oct  2 04:07:41 np0005465604 podman[257899]: 2025-10-02 08:07:41.640058493 +0000 UTC m=+0.252839625 container remove 8928465804e570879f20c9de2569cbc458f113cd3de174d09f53d7119e207cae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shirley, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:07:41 np0005465604 systemd[1]: libpod-conmon-8928465804e570879f20c9de2569cbc458f113cd3de174d09f53d7119e207cae.scope: Deactivated successfully.
Oct  2 04:07:41 np0005465604 podman[257976]: 2025-10-02 08:07:41.743556073 +0000 UTC m=+0.092513732 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 04:07:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:41 np0005465604 podman[258009]: 2025-10-02 08:07:41.845882088 +0000 UTC m=+0.063878420 container create 103614b401262124f2e00fa4a4632dfe041f514228bccee7aa033e352930fd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 04:07:41 np0005465604 systemd[1]: Started libpod-conmon-103614b401262124f2e00fa4a4632dfe041f514228bccee7aa033e352930fd6a.scope.
Oct  2 04:07:41 np0005465604 podman[258009]: 2025-10-02 08:07:41.817019688 +0000 UTC m=+0.035016110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:07:41 np0005465604 python3.9[257997]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  2 04:07:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:07:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e14ddeab095c97a309e5635e0a2924c2ba8fa17e9e94a5f9d7807e9754b997/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e14ddeab095c97a309e5635e0a2924c2ba8fa17e9e94a5f9d7807e9754b997/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e14ddeab095c97a309e5635e0a2924c2ba8fa17e9e94a5f9d7807e9754b997/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e14ddeab095c97a309e5635e0a2924c2ba8fa17e9e94a5f9d7807e9754b997/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7e14ddeab095c97a309e5635e0a2924c2ba8fa17e9e94a5f9d7807e9754b997/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:41 np0005465604 podman[258009]: 2025-10-02 08:07:41.961677338 +0000 UTC m=+0.179673770 container init 103614b401262124f2e00fa4a4632dfe041f514228bccee7aa033e352930fd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:07:41 np0005465604 podman[258009]: 2025-10-02 08:07:41.974915747 +0000 UTC m=+0.192912089 container start 103614b401262124f2e00fa4a4632dfe041f514228bccee7aa033e352930fd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 04:07:41 np0005465604 podman[258009]: 2025-10-02 08:07:41.996909354 +0000 UTC m=+0.214905726 container attach 103614b401262124f2e00fa4a4632dfe041f514228bccee7aa033e352930fd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:07:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:42 np0005465604 python3.9[258183]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 04:07:43 np0005465604 elated_hofstadter[258026]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:07:43 np0005465604 elated_hofstadter[258026]: --> relative data size: 1.0
Oct  2 04:07:43 np0005465604 elated_hofstadter[258026]: --> All data devices are unavailable
Oct  2 04:07:43 np0005465604 systemd[1]: libpod-103614b401262124f2e00fa4a4632dfe041f514228bccee7aa033e352930fd6a.scope: Deactivated successfully.
Oct  2 04:07:43 np0005465604 systemd[1]: libpod-103614b401262124f2e00fa4a4632dfe041f514228bccee7aa033e352930fd6a.scope: Consumed 1.104s CPU time.
Oct  2 04:07:43 np0005465604 podman[258009]: 2025-10-02 08:07:43.165859551 +0000 UTC m=+1.383855903 container died 103614b401262124f2e00fa4a4632dfe041f514228bccee7aa033e352930fd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 04:07:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b7e14ddeab095c97a309e5635e0a2924c2ba8fa17e9e94a5f9d7807e9754b997-merged.mount: Deactivated successfully.
Oct  2 04:07:43 np0005465604 podman[258009]: 2025-10-02 08:07:43.278901036 +0000 UTC m=+1.496897408 container remove 103614b401262124f2e00fa4a4632dfe041f514228bccee7aa033e352930fd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hofstadter, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 04:07:43 np0005465604 systemd[1]: libpod-conmon-103614b401262124f2e00fa4a4632dfe041f514228bccee7aa033e352930fd6a.scope: Deactivated successfully.
Oct  2 04:07:44 np0005465604 python3[258470]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 04:07:44 np0005465604 podman[258521]: 2025-10-02 08:07:44.14514161 +0000 UTC m=+0.062097495 container create 423c89007312f3d2037ce06bc6a6208155f2e8ce96a1cdf4b5b30a0b9fe09a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 04:07:44 np0005465604 systemd[1]: Started libpod-conmon-423c89007312f3d2037ce06bc6a6208155f2e8ce96a1cdf4b5b30a0b9fe09a07.scope.
Oct  2 04:07:44 np0005465604 podman[258521]: 2025-10-02 08:07:44.109177291 +0000 UTC m=+0.026133176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:07:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:07:44 np0005465604 podman[258521]: 2025-10-02 08:07:44.245635289 +0000 UTC m=+0.162591164 container init 423c89007312f3d2037ce06bc6a6208155f2e8ce96a1cdf4b5b30a0b9fe09a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:07:44 np0005465604 podman[258521]: 2025-10-02 08:07:44.253958155 +0000 UTC m=+0.170914000 container start 423c89007312f3d2037ce06bc6a6208155f2e8ce96a1cdf4b5b30a0b9fe09a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 04:07:44 np0005465604 podman[258521]: 2025-10-02 08:07:44.258593447 +0000 UTC m=+0.175549292 container attach 423c89007312f3d2037ce06bc6a6208155f2e8ce96a1cdf4b5b30a0b9fe09a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:07:44 np0005465604 pensive_elbakyan[258558]: 167 167
Oct  2 04:07:44 np0005465604 systemd[1]: libpod-423c89007312f3d2037ce06bc6a6208155f2e8ce96a1cdf4b5b30a0b9fe09a07.scope: Deactivated successfully.
Oct  2 04:07:44 np0005465604 podman[258521]: 2025-10-02 08:07:44.261314191 +0000 UTC m=+0.178270026 container died 423c89007312f3d2037ce06bc6a6208155f2e8ce96a1cdf4b5b30a0b9fe09a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:07:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b79aa62008adfdb18bef634a537f12c51b32da618d622b92d97faa0e13b2c7ef-merged.mount: Deactivated successfully.
Oct  2 04:07:44 np0005465604 podman[258521]: 2025-10-02 08:07:44.309177797 +0000 UTC m=+0.226133682 container remove 423c89007312f3d2037ce06bc6a6208155f2e8ce96a1cdf4b5b30a0b9fe09a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 04:07:44 np0005465604 systemd[1]: libpod-conmon-423c89007312f3d2037ce06bc6a6208155f2e8ce96a1cdf4b5b30a0b9fe09a07.scope: Deactivated successfully.
Oct  2 04:07:44 np0005465604 podman[258564]: 2025-10-02 08:07:44.321990412 +0000 UTC m=+0.089299544 container create 30c75d1eaea8e071a1f654ddf11e1a31594fcda334636a5e74999959ff9b9717 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  2 04:07:44 np0005465604 podman[258564]: 2025-10-02 08:07:44.281915137 +0000 UTC m=+0.049224279 image pull cb7a9bebda1404fc92f1415580e7da04b5fcfd160582e38b9b99703a41ed1519 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38
Oct  2 04:07:44 np0005465604 python3[258470]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38 kolla_start
Oct  2 04:07:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:44 np0005465604 podman[258621]: 2025-10-02 08:07:44.4973872 +0000 UTC m=+0.048056213 container create 5526c8c70ea8e6cd2cc084f1e927c4f9acf9f27242c988c2c4c789c1cd0c7c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 04:07:44 np0005465604 systemd[1]: Started libpod-conmon-5526c8c70ea8e6cd2cc084f1e927c4f9acf9f27242c988c2c4c789c1cd0c7c4d.scope.
Oct  2 04:07:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:07:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/591cd6c2b4a5882975ee5d588de01cd7b2b3bef51f9f5da11a9ea42798a4433e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:44 np0005465604 podman[258621]: 2025-10-02 08:07:44.472991067 +0000 UTC m=+0.023660150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:07:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/591cd6c2b4a5882975ee5d588de01cd7b2b3bef51f9f5da11a9ea42798a4433e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/591cd6c2b4a5882975ee5d588de01cd7b2b3bef51f9f5da11a9ea42798a4433e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/591cd6c2b4a5882975ee5d588de01cd7b2b3bef51f9f5da11a9ea42798a4433e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:44 np0005465604 podman[258621]: 2025-10-02 08:07:44.588832949 +0000 UTC m=+0.139501972 container init 5526c8c70ea8e6cd2cc084f1e927c4f9acf9f27242c988c2c4c789c1cd0c7c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:07:44 np0005465604 podman[258621]: 2025-10-02 08:07:44.599951702 +0000 UTC m=+0.150620725 container start 5526c8c70ea8e6cd2cc084f1e927c4f9acf9f27242c988c2c4c789c1cd0c7c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:07:44 np0005465604 podman[258621]: 2025-10-02 08:07:44.604925094 +0000 UTC m=+0.155594117 container attach 5526c8c70ea8e6cd2cc084f1e927c4f9acf9f27242c988c2c4c789c1cd0c7c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]: {
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:    "0": [
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:        {
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "devices": [
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "/dev/loop3"
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            ],
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_name": "ceph_lv0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_size": "21470642176",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "name": "ceph_lv0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "tags": {
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.cluster_name": "ceph",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.crush_device_class": "",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.encrypted": "0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.osd_id": "0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.type": "block",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.vdo": "0"
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            },
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "type": "block",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "vg_name": "ceph_vg0"
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:        }
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:    ],
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:    "1": [
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:        {
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "devices": [
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "/dev/loop4"
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            ],
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_name": "ceph_lv1",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_size": "21470642176",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "name": "ceph_lv1",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "tags": {
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.cluster_name": "ceph",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.crush_device_class": "",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.encrypted": "0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.osd_id": "1",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.type": "block",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.vdo": "0"
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            },
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "type": "block",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "vg_name": "ceph_vg1"
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:        }
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:    ],
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:    "2": [
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:        {
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "devices": [
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "/dev/loop5"
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            ],
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_name": "ceph_lv2",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_size": "21470642176",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "name": "ceph_lv2",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "tags": {
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.cluster_name": "ceph",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.crush_device_class": "",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.encrypted": "0",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.osd_id": "2",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.type": "block",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:                "ceph.vdo": "0"
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            },
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "type": "block",
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:            "vg_name": "ceph_vg2"
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:        }
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]:    ]
Oct  2 04:07:45 np0005465604 amazing_hertz[258656]: }
Oct  2 04:07:45 np0005465604 python3.9[258794]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:07:45 np0005465604 systemd[1]: libpod-5526c8c70ea8e6cd2cc084f1e927c4f9acf9f27242c988c2c4c789c1cd0c7c4d.scope: Deactivated successfully.
Oct  2 04:07:45 np0005465604 podman[258621]: 2025-10-02 08:07:45.388095818 +0000 UTC m=+0.938764851 container died 5526c8c70ea8e6cd2cc084f1e927c4f9acf9f27242c988c2c4c789c1cd0c7c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 04:07:45 np0005465604 systemd[1]: var-lib-containers-storage-overlay-591cd6c2b4a5882975ee5d588de01cd7b2b3bef51f9f5da11a9ea42798a4433e-merged.mount: Deactivated successfully.
Oct  2 04:07:45 np0005465604 podman[258621]: 2025-10-02 08:07:45.487143312 +0000 UTC m=+1.037812355 container remove 5526c8c70ea8e6cd2cc084f1e927c4f9acf9f27242c988c2c4c789c1cd0c7c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hertz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:07:45 np0005465604 systemd[1]: libpod-conmon-5526c8c70ea8e6cd2cc084f1e927c4f9acf9f27242c988c2c4c789c1cd0c7c4d.scope: Deactivated successfully.
Oct  2 04:07:46 np0005465604 podman[259047]: 2025-10-02 08:07:46.306338956 +0000 UTC m=+0.110664432 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 04:07:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:46 np0005465604 podman[259119]: 2025-10-02 08:07:46.493087143 +0000 UTC m=+0.069990048 container create 7374ea34e6d840f5b91015e8d010735c1dc07ec68fda62f3f2df4069dbaad55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 04:07:46 np0005465604 python3.9[259104]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:07:46 np0005465604 systemd[1]: Started libpod-conmon-7374ea34e6d840f5b91015e8d010735c1dc07ec68fda62f3f2df4069dbaad55a.scope.
Oct  2 04:07:46 np0005465604 podman[259119]: 2025-10-02 08:07:46.464330387 +0000 UTC m=+0.041233352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:07:46 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:07:46 np0005465604 podman[259119]: 2025-10-02 08:07:46.606232762 +0000 UTC m=+0.183135727 container init 7374ea34e6d840f5b91015e8d010735c1dc07ec68fda62f3f2df4069dbaad55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 04:07:46 np0005465604 podman[259119]: 2025-10-02 08:07:46.624571667 +0000 UTC m=+0.201474532 container start 7374ea34e6d840f5b91015e8d010735c1dc07ec68fda62f3f2df4069dbaad55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:07:46 np0005465604 podman[259119]: 2025-10-02 08:07:46.629231651 +0000 UTC m=+0.206134596 container attach 7374ea34e6d840f5b91015e8d010735c1dc07ec68fda62f3f2df4069dbaad55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:07:46 np0005465604 naughty_nobel[259135]: 167 167
Oct  2 04:07:46 np0005465604 systemd[1]: libpod-7374ea34e6d840f5b91015e8d010735c1dc07ec68fda62f3f2df4069dbaad55a.scope: Deactivated successfully.
Oct  2 04:07:46 np0005465604 podman[259119]: 2025-10-02 08:07:46.638730193 +0000 UTC m=+0.215633098 container died 7374ea34e6d840f5b91015e8d010735c1dc07ec68fda62f3f2df4069dbaad55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 04:07:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-07eb5ef99426e71ca58753ed8b7496c13bf1b8a8371a5ea53b31874dc305580a-merged.mount: Deactivated successfully.
Oct  2 04:07:46 np0005465604 podman[259119]: 2025-10-02 08:07:46.696930437 +0000 UTC m=+0.273833312 container remove 7374ea34e6d840f5b91015e8d010735c1dc07ec68fda62f3f2df4069dbaad55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:07:46 np0005465604 systemd[1]: libpod-conmon-7374ea34e6d840f5b91015e8d010735c1dc07ec68fda62f3f2df4069dbaad55a.scope: Deactivated successfully.
Oct  2 04:07:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:46 np0005465604 podman[259214]: 2025-10-02 08:07:46.939163535 +0000 UTC m=+0.065777109 container create 059fd3f0471e34442708545f136da53bfeff6213becfab39b001933cef782af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:07:46 np0005465604 systemd[1]: Started libpod-conmon-059fd3f0471e34442708545f136da53bfeff6213becfab39b001933cef782af0.scope.
Oct  2 04:07:46 np0005465604 podman[259214]: 2025-10-02 08:07:46.909006786 +0000 UTC m=+0.035620420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:07:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:07:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e36a14fc0ea83d16dbf119848aeb5c20c8cfc602017884ab1aa901b0d4be966/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e36a14fc0ea83d16dbf119848aeb5c20c8cfc602017884ab1aa901b0d4be966/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e36a14fc0ea83d16dbf119848aeb5c20c8cfc602017884ab1aa901b0d4be966/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e36a14fc0ea83d16dbf119848aeb5c20c8cfc602017884ab1aa901b0d4be966/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:47 np0005465604 podman[259214]: 2025-10-02 08:07:47.055236154 +0000 UTC m=+0.181849798 container init 059fd3f0471e34442708545f136da53bfeff6213becfab39b001933cef782af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:07:47 np0005465604 podman[259214]: 2025-10-02 08:07:47.067677287 +0000 UTC m=+0.194290841 container start 059fd3f0471e34442708545f136da53bfeff6213becfab39b001933cef782af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:07:47 np0005465604 podman[259214]: 2025-10-02 08:07:47.071795845 +0000 UTC m=+0.198409489 container attach 059fd3f0471e34442708545f136da53bfeff6213becfab39b001933cef782af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:07:47 np0005465604 python3.9[259333]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759392466.6224012-1781-73500629418792/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]: {
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:07:48 np0005465604 python3.9[259416]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "osd_id": 2,
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "type": "bluestore"
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:    },
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "osd_id": 1,
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "type": "bluestore"
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:    },
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "osd_id": 0,
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:        "type": "bluestore"
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]:    }
Oct  2 04:07:48 np0005465604 intelligent_dirac[259257]: }
Oct  2 04:07:48 np0005465604 systemd[1]: Reloading.
Oct  2 04:07:48 np0005465604 podman[259214]: 2025-10-02 08:07:48.193979029 +0000 UTC m=+1.320592583 container died 059fd3f0471e34442708545f136da53bfeff6213becfab39b001933cef782af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 04:07:48 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:07:48 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:07:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:48 np0005465604 systemd[1]: libpod-059fd3f0471e34442708545f136da53bfeff6213becfab39b001933cef782af0.scope: Deactivated successfully.
Oct  2 04:07:48 np0005465604 systemd[1]: libpod-059fd3f0471e34442708545f136da53bfeff6213becfab39b001933cef782af0.scope: Consumed 1.123s CPU time.
Oct  2 04:07:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7e36a14fc0ea83d16dbf119848aeb5c20c8cfc602017884ab1aa901b0d4be966-merged.mount: Deactivated successfully.
Oct  2 04:07:48 np0005465604 podman[259214]: 2025-10-02 08:07:48.640316199 +0000 UTC m=+1.766929773 container remove 059fd3f0471e34442708545f136da53bfeff6213becfab39b001933cef782af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dirac, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 04:07:48 np0005465604 systemd[1]: libpod-conmon-059fd3f0471e34442708545f136da53bfeff6213becfab39b001933cef782af0.scope: Deactivated successfully.
Oct  2 04:07:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:07:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:07:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:07:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:07:48 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d3150e1f-02c2-4fc7-b85b-ac26f62af254 does not exist
Oct  2 04:07:48 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ecd631e2-068d-4f76-9814-d5c75e773ad0 does not exist
Oct  2 04:07:49 np0005465604 python3.9[259611]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 04:07:49 np0005465604 systemd[1]: Reloading.
Oct  2 04:07:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:07:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:07:49 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 04:07:49 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 04:07:49 np0005465604 systemd[1]: Starting nova_compute container...
Oct  2 04:07:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:07:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065ee084dc5ad0857295c803b08ab8689b721e0de8f1d7d2ff031b51113f36b1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065ee084dc5ad0857295c803b08ab8689b721e0de8f1d7d2ff031b51113f36b1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065ee084dc5ad0857295c803b08ab8689b721e0de8f1d7d2ff031b51113f36b1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065ee084dc5ad0857295c803b08ab8689b721e0de8f1d7d2ff031b51113f36b1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065ee084dc5ad0857295c803b08ab8689b721e0de8f1d7d2ff031b51113f36b1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:49 np0005465604 podman[259651]: 2025-10-02 08:07:49.988244933 +0000 UTC m=+0.149306074 container init 30c75d1eaea8e071a1f654ddf11e1a31594fcda334636a5e74999959ff9b9717 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct  2 04:07:50 np0005465604 podman[259651]: 2025-10-02 08:07:50.001791931 +0000 UTC m=+0.162853062 container start 30c75d1eaea8e071a1f654ddf11e1a31594fcda334636a5e74999959ff9b9717 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 04:07:50 np0005465604 podman[259651]: nova_compute
Oct  2 04:07:50 np0005465604 nova_compute[259667]: + sudo -E kolla_set_configs
Oct  2 04:07:50 np0005465604 systemd[1]: Started nova_compute container.
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Validating config file
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Copying service configuration files
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Deleting /etc/ceph
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Creating directory /etc/ceph
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /etc/ceph
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Writing out command to execute
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 04:07:50 np0005465604 nova_compute[259667]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 04:07:50 np0005465604 nova_compute[259667]: ++ cat /run_command
Oct  2 04:07:50 np0005465604 nova_compute[259667]: + CMD=nova-compute
Oct  2 04:07:50 np0005465604 nova_compute[259667]: + ARGS=
Oct  2 04:07:50 np0005465604 nova_compute[259667]: + sudo kolla_copy_cacerts
Oct  2 04:07:50 np0005465604 nova_compute[259667]: + [[ ! -n '' ]]
Oct  2 04:07:50 np0005465604 nova_compute[259667]: + . kolla_extend_start
Oct  2 04:07:50 np0005465604 nova_compute[259667]: + echo 'Running command: '\''nova-compute'\'''
Oct  2 04:07:50 np0005465604 nova_compute[259667]: Running command: 'nova-compute'
Oct  2 04:07:50 np0005465604 nova_compute[259667]: + umask 0022
Oct  2 04:07:50 np0005465604 nova_compute[259667]: + exec nova-compute
Oct  2 04:07:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:51 np0005465604 podman[259829]: 2025-10-02 08:07:51.031639659 +0000 UTC m=+0.084440194 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:07:51 np0005465604 python3.9[259828]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:07:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:52 np0005465604 python3.9[260000]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:07:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:52 np0005465604 nova_compute[259667]: 2025-10-02 08:07:52.549 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 04:07:52 np0005465604 nova_compute[259667]: 2025-10-02 08:07:52.549 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 04:07:52 np0005465604 nova_compute[259667]: 2025-10-02 08:07:52.549 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 04:07:52 np0005465604 nova_compute[259667]: 2025-10-02 08:07:52.549 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  2 04:07:52 np0005465604 nova_compute[259667]: 2025-10-02 08:07:52.695 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:07:52 np0005465604 nova_compute[259667]: 2025-10-02 08:07:52.734 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:07:52 np0005465604 python3.9[260154]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.454 2 INFO nova.virt.driver [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.646 2 INFO nova.compute.provider_config [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.668 2 DEBUG oslo_concurrency.lockutils [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.668 2 DEBUG oslo_concurrency.lockutils [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.668 2 DEBUG oslo_concurrency.lockutils [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.669 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.669 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.669 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.669 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.670 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.670 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.670 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.670 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.670 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.671 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.671 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.671 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.671 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.672 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.672 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.672 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.672 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.672 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.673 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.673 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.673 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.673 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.673 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.674 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.674 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.674 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.674 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.674 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.675 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.675 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.675 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.675 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.675 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.676 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.676 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.676 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.676 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.676 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.677 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.677 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.677 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.677 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.678 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.678 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.678 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.678 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.678 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.678 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.679 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.679 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.679 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.679 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.680 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.680 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.680 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.680 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.680 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.680 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.681 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.681 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.681 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.681 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.681 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.682 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.682 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.682 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.682 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.682 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.683 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.683 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.683 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.683 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.683 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.684 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.684 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.684 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.684 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.684 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.684 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.685 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.685 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.685 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.685 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.686 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.686 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.686 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.686 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.686 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.687 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.687 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.687 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.687 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.687 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.687 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.688 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.688 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.688 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.688 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.689 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.689 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.689 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.689 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.689 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.690 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.690 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.690 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.690 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.690 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.691 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.691 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.691 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.691 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.691 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.692 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.692 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.692 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.692 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.692 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.692 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.693 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.693 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.693 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.693 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.693 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.694 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.694 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.694 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.694 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.694 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.695 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.695 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.695 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.695 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.695 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.696 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.696 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.696 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.696 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.696 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.696 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.697 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.697 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.697 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.697 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.697 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.698 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.698 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.698 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.698 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.699 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.699 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.699 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.699 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.699 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.700 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.700 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.700 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.700 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.700 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.701 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.701 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.701 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.702 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.702 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.702 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.702 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.703 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.703 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.703 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.703 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.703 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.704 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.704 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.704 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.704 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.704 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.704 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.704 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.705 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.705 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.705 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.705 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.705 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.705 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.706 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.706 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.706 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.706 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.706 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.706 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.706 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.707 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.707 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.707 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.707 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.707 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.707 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.707 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.708 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.708 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.708 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.708 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.708 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.708 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.708 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.709 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.709 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.709 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.709 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.709 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.709 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.709 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.710 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.710 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.710 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.710 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.710 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.710 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.711 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.711 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.711 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.711 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.711 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.711 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.711 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.712 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.712 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.712 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.712 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.712 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.712 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.712 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.713 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.713 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.713 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.713 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.713 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.713 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.713 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.714 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.714 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.714 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.714 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.714 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.714 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.714 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.715 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.715 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.715 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.715 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.715 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.715 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.716 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.716 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.716 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.716 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.716 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.716 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.716 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.717 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.717 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.717 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.717 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.717 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.717 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.717 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.718 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.718 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.718 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.718 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.718 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.718 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.718 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.719 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.719 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.719 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.719 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.719 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.719 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.719 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.720 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.720 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.720 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.720 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.720 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.720 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.721 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.721 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.721 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.721 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.721 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.721 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.721 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.722 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.722 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.722 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.722 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.722 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.722 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.722 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.723 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.723 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.723 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.723 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.723 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.723 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.723 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.724 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.724 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.724 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.724 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.724 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.724 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.724 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.724 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.725 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.725 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.725 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.725 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.725 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.725 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.726 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.726 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.726 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.726 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.726 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.726 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.726 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.727 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.727 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.727 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.727 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.727 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.727 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.727 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.728 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.728 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.728 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.728 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.728 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.728 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.728 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.729 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.729 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.729 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.729 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.729 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.729 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.730 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.730 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.730 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.730 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.730 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.730 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.730 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.731 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.731 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.731 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.731 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.731 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.732 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.732 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.732 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.732 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.732 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.732 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.733 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.733 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.733 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.733 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.733 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.733 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.733 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.733 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.734 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.734 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.734 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.734 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.734 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.734 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.734 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.735 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.735 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.735 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.735 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.735 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.735 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.735 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.736 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.736 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.736 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.736 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.736 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.736 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.737 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.737 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.737 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.737 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.737 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.737 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.737 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.737 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.738 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.738 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.738 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.738 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.738 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.738 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.738 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.739 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.739 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.739 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.739 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.739 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.739 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.740 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.740 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.740 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.740 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.740 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.740 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.740 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.741 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.741 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.741 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.741 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.741 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.741 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.741 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.742 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.742 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.742 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.742 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.742 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.742 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.742 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.743 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.743 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.743 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.743 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.743 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.743 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.743 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.744 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.744 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.744 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.744 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.744 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.744 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.745 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.745 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.745 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.745 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.745 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.745 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.746 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.746 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.746 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.746 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.746 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.746 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.747 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.747 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.747 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.747 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.747 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.747 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.747 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.748 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.748 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.748 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.748 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.748 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.748 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.749 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.749 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.749 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.749 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.749 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.749 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.750 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.750 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.750 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.750 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.750 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.750 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.750 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.751 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.751 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.751 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.751 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.751 2 WARNING oslo_config.cfg [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  2 04:07:53 np0005465604 nova_compute[259667]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  2 04:07:53 np0005465604 nova_compute[259667]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  2 04:07:53 np0005465604 nova_compute[259667]: and ``live_migration_inbound_addr`` respectively.
Oct  2 04:07:53 np0005465604 nova_compute[259667]: ).  Its value may be silently ignored in the future.#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.752 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.752 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.752 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.752 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.752 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.752 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.753 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.753 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.753 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.753 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.753 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.753 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.754 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.754 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.754 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.754 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.754 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.754 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.754 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.rbd_secret_uuid        = a52e644f-f702-594c-a648-813e3e0df2b1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.755 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.755 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.755 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.755 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.755 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.755 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.755 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.756 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.756 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.756 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.756 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.756 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.757 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.757 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.757 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.757 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.757 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.757 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.757 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.758 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.758 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.758 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.758 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.758 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.758 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.758 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.759 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.759 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.759 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.759 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.759 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.759 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.759 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.760 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.760 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.760 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.760 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.760 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.760 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.761 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.761 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.761 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.761 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.761 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.761 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.761 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.762 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.762 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.762 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.762 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.762 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.762 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.763 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.763 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.763 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.763 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.763 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.763 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.763 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.764 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.764 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.764 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.764 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.764 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.765 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.765 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.765 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.765 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.765 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.765 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.765 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.766 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.766 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.766 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.766 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.766 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.766 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.767 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.767 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.767 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.767 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.767 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.767 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.767 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.768 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.768 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.768 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.768 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.768 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.768 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.768 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.769 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.769 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.769 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.769 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.769 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.769 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.769 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.770 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.770 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.770 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.770 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.770 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.770 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.771 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.771 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.771 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.771 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.771 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.771 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.771 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.772 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.772 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.772 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.772 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.772 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.772 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.773 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.773 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.773 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.773 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.773 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.773 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.774 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.774 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.774 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.774 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.774 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.774 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.775 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.775 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.775 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.775 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.775 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.775 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.775 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.776 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.776 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.776 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.776 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.776 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.776 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.777 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.777 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.777 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.777 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.777 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.777 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.777 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.778 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.778 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.778 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.778 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.778 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.778 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.778 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.779 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.779 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.779 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.779 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.779 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.779 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.780 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.780 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.780 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.780 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.780 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.780 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.781 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.781 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.781 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.781 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.781 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.781 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.782 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.782 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.782 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.782 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.782 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.782 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.783 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.783 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.783 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.783 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.783 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.783 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.783 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.784 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.784 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.784 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.784 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.784 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.784 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.784 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.785 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.785 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.785 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.785 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.785 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.785 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.786 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.786 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.786 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.786 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.786 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.786 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.787 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.787 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.787 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.787 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.787 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.787 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.788 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.788 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.788 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.788 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.788 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.788 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.789 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.789 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.789 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.789 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.789 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.789 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.790 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.790 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.790 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.790 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.790 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.791 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.791 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.791 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.791 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.791 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.791 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.792 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.792 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.792 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.792 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.792 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.792 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.792 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.793 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.793 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.793 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.793 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.793 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.793 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.794 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.794 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.794 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.794 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.794 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.794 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.794 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.795 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.795 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.795 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.795 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.795 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.795 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.796 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.796 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.796 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.796 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.796 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.797 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.797 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.797 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.797 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.797 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.797 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.798 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.798 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.798 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.798 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.798 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.798 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.798 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.799 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.799 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.799 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.799 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.799 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.800 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.800 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.800 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.800 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.800 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.800 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.801 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.801 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.801 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.801 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.801 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.801 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.802 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.802 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.802 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.802 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.802 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.802 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.802 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.803 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.803 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.803 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.803 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.803 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.803 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.804 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.804 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.804 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.804 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.804 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.804 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.804 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.805 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.805 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.805 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.805 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.805 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.805 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.806 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.806 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.806 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.806 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.806 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.807 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.807 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.807 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.807 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.807 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.807 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.808 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.808 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.808 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.808 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.808 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.808 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.809 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.809 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.809 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.809 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.809 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.810 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.810 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.810 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.810 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.810 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.810 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.811 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.811 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.811 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.811 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.811 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.812 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.812 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.812 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.812 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.812 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.812 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.812 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.813 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.813 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.813 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.813 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.813 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.814 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.814 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.814 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.814 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.814 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.814 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.815 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.815 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.815 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.815 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.815 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.815 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.815 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.816 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.816 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.816 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.816 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.816 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.816 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.816 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.817 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.817 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.817 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.817 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.817 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.817 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.817 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.818 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.818 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.818 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.818 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.818 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.818 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.818 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.819 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.819 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.819 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.819 2 DEBUG oslo_service.service [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.820 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.840 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.841 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.841 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.842 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  2 04:07:53 np0005465604 systemd[1]: Starting libvirt QEMU daemon...
Oct  2 04:07:53 np0005465604 systemd[1]: Started libvirt QEMU daemon.
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.939 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f51337bcdc0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.943 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f51337bcdc0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.944 2 INFO nova.virt.libvirt.driver [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.974 2 WARNING nova.virt.libvirt.driver [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  2 04:07:53 np0005465604 nova_compute[259667]: 2025-10-02 08:07:53.974 2 DEBUG nova.virt.libvirt.volume.mount [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  2 04:07:54 np0005465604 python3.9[260306]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 04:07:54 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:07:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:54 np0005465604 nova_compute[259667]: 2025-10-02 08:07:54.891 2 INFO nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Libvirt host capabilities <capabilities>
Oct  2 04:07:54 np0005465604 nova_compute[259667]: 
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  <host>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <uuid>8d603fb8-984b-47fa-8e45-4729d5224ba1</uuid>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <cpu>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <arch>x86_64</arch>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model>EPYC-Rome-v4</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <vendor>AMD</vendor>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <microcode version='16777317'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <signature family='23' model='49' stepping='0'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='x2apic'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='tsc-deadline'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='osxsave'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='hypervisor'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='tsc_adjust'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='spec-ctrl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='stibp'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='arch-capabilities'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='ssbd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='cmp_legacy'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='topoext'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='virt-ssbd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='lbrv'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='tsc-scale'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='vmcb-clean'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='pause-filter'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='pfthreshold'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='svme-addr-chk'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='rdctl-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='skip-l1dfl-vmentry'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='mds-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature name='pschange-mc-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <pages unit='KiB' size='4'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <pages unit='KiB' size='2048'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <pages unit='KiB' size='1048576'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </cpu>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <power_management>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <suspend_mem/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </power_management>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <iommu support='no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <migration_features>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <live/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <uri_transports>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <uri_transport>tcp</uri_transport>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <uri_transport>rdma</uri_transport>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </uri_transports>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </migration_features>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <topology>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <cells num='1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <cell id='0'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:          <memory unit='KiB'>7864104</memory>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:          <pages unit='KiB' size='4'>1966026</pages>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:          <pages unit='KiB' size='2048'>0</pages>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:          <distances>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:            <sibling id='0' value='10'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:          </distances>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:          <cpus num='8'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:          </cpus>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        </cell>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </cells>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </topology>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <cache>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </cache>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <secmodel>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model>selinux</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <doi>0</doi>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </secmodel>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <secmodel>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model>dac</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <doi>0</doi>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </secmodel>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  </host>
Oct  2 04:07:54 np0005465604 nova_compute[259667]: 
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  <guest>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <os_type>hvm</os_type>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <arch name='i686'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <wordsize>32</wordsize>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <domain type='qemu'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <domain type='kvm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </arch>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <features>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <pae/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <nonpae/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <acpi default='on' toggle='yes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <apic default='on' toggle='no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <cpuselection/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <deviceboot/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <disksnapshot default='on' toggle='no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <externalSnapshot/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </features>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  </guest>
Oct  2 04:07:54 np0005465604 nova_compute[259667]: 
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  <guest>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <os_type>hvm</os_type>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <arch name='x86_64'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <wordsize>64</wordsize>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <domain type='qemu'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <domain type='kvm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </arch>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <features>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <acpi default='on' toggle='yes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <apic default='on' toggle='no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <cpuselection/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <deviceboot/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <disksnapshot default='on' toggle='no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <externalSnapshot/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </features>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  </guest>
Oct  2 04:07:54 np0005465604 nova_compute[259667]: 
Oct  2 04:07:54 np0005465604 nova_compute[259667]: </capabilities>
Oct  2 04:07:54 np0005465604 nova_compute[259667]: #033[00m
Oct  2 04:07:54 np0005465604 nova_compute[259667]: 2025-10-02 08:07:54.897 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  2 04:07:54 np0005465604 nova_compute[259667]: 2025-10-02 08:07:54.924 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  2 04:07:54 np0005465604 nova_compute[259667]: <domainCapabilities>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  <domain>kvm</domain>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  <arch>i686</arch>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  <vcpu max='240'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  <iothreads supported='yes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  <os supported='yes'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <enum name='firmware'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <loader supported='yes'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <value>rom</value>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <value>pflash</value>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <enum name='readonly'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <value>yes</value>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <value>no</value>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <enum name='secure'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <value>no</value>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </loader>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  </os>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:  <cpu>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <mode name='host-passthrough' supported='yes'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <enum name='hostPassthroughMigratable'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <value>on</value>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <value>off</value>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <mode name='maximum' supported='yes'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <enum name='maximumMigratable'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <value>on</value>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <value>off</value>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <mode name='host-model' supported='yes'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <vendor>AMD</vendor>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='x2apic'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='hypervisor'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='stibp'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='ssbd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='overflow-recov'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='succor'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='ibrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='lbrv'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc-scale'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='flushbyasid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='pause-filter'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='pfthreshold'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='rdctl-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='mds-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='gds-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='require' name='rfds-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <feature policy='disable' name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:    <mode name='custom' supported='yes'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Broadwell'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-IBRS'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-noTSX'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v3'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v4'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Denverton'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v3'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Dhyana-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Genoa'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='auto-ibrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='auto-ibrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v3'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='EPYC-v3'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='EPYC-v4'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx10'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx10-128'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx10-256'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx10-512'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Haswell'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Haswell-IBRS'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Haswell-noTSX'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v3'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v4'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v3'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v4'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v5'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v6'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v7'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-IBRS'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='KnightsMill'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512er'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512pf'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='KnightsMill-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512er'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512pf'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G4'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G4-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G5'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='tbm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G5-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='tbm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v3'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='SierraForest'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='cmpccxadd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='SierraForest-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-ifma'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='cmpccxadd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v3'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v4'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v3'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v4'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v5'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Snowridge'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v1'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v2'>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:54 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='athlon'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='athlon-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='core2duo'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='core2duo-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='coreduo'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='coreduo-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='n270'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='n270-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='phenom'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='phenom-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </cpu>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <memoryBacking supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <enum name='sourceType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>file</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>anonymous</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>memfd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </memoryBacking>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <devices>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <disk supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='diskDevice'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>disk</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>cdrom</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>floppy</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>lun</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='bus'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>ide</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>fdc</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>scsi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>sata</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-non-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </disk>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <graphics supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vnc</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>egl-headless</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>dbus</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </graphics>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <video supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='modelType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vga</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>cirrus</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>none</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>bochs</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>ramfb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </video>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <hostdev supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='mode'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>subsystem</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='startupPolicy'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>default</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>mandatory</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>requisite</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>optional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='subsysType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>pci</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>scsi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='capsType'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='pciBackend'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </hostdev>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <rng supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-non-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>random</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>egd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>builtin</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </rng>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <filesystem supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='driverType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>path</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>handle</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtiofs</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </filesystem>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <tpm supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tpm-tis</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tpm-crb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>emulator</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>external</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendVersion'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>2.0</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </tpm>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <redirdev supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='bus'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </redirdev>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <channel supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>pty</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>unix</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </channel>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <crypto supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>qemu</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>builtin</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </crypto>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <interface supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>default</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>passt</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </interface>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <panic supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>isa</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>hyperv</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </panic>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </devices>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <features>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <gic supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <vmcoreinfo supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <genid supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <backingStoreInput supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <backup supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <async-teardown supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <ps2 supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <sev supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <sgx supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <hyperv supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='features'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>relaxed</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vapic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>spinlocks</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vpindex</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>runtime</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>synic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>stimer</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>reset</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vendor_id</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>frequencies</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>reenlightenment</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tlbflush</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>ipi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>avic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>emsr_bitmap</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>xmm_input</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </hyperv>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <launchSecurity supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </features>
Oct  2 04:07:55 np0005465604 nova_compute[259667]: </domainCapabilities>
Oct  2 04:07:55 np0005465604 nova_compute[259667]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:54.930 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  2 04:07:55 np0005465604 nova_compute[259667]: <domainCapabilities>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <domain>kvm</domain>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <arch>i686</arch>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <vcpu max='4096'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <iothreads supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <os supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <enum name='firmware'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <loader supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>rom</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>pflash</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='readonly'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>yes</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>no</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='secure'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>no</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </loader>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </os>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <cpu>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='host-passthrough' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='hostPassthroughMigratable'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>on</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>off</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='maximum' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='maximumMigratable'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>on</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>off</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='host-model' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <vendor>AMD</vendor>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='x2apic'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='hypervisor'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='stibp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='ssbd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='overflow-recov'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='succor'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='ibrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='lbrv'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc-scale'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='flushbyasid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='pause-filter'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='pfthreshold'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='rdctl-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='mds-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='gds-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='rfds-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='disable' name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='custom' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Dhyana-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Genoa'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='auto-ibrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='auto-ibrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10-128'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10-256'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10-512'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v6'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v7'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='KnightsMill'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512er'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512pf'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='KnightsMill-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512er'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512pf'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G4-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tbm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G5-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tbm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SierraForest'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cmpccxadd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SierraForest-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cmpccxadd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='athlon'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='athlon-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='core2duo'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='core2duo-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='coreduo'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='coreduo-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='n270'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='n270-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='phenom'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='phenom-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </cpu>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <memoryBacking supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <enum name='sourceType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>file</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>anonymous</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>memfd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </memoryBacking>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <devices>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <disk supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='diskDevice'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>disk</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>cdrom</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>floppy</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>lun</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='bus'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>fdc</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>scsi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>sata</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-non-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </disk>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <graphics supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vnc</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>egl-headless</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>dbus</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </graphics>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <video supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='modelType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vga</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>cirrus</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>none</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>bochs</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>ramfb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </video>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <hostdev supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='mode'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>subsystem</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='startupPolicy'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>default</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>mandatory</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>requisite</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>optional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='subsysType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>pci</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>scsi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='capsType'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='pciBackend'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </hostdev>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <rng supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-non-transitional</value>
Oct  2 04:07:55 np0005465604 python3.9[260540]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>random</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>egd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>builtin</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </rng>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <filesystem supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='driverType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>path</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>handle</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtiofs</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </filesystem>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <tpm supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tpm-tis</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tpm-crb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>emulator</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>external</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendVersion'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>2.0</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </tpm>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <redirdev supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='bus'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </redirdev>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <channel supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>pty</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>unix</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </channel>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <crypto supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>qemu</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>builtin</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </crypto>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <interface supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>default</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>passt</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </interface>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <panic supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>isa</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>hyperv</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </panic>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </devices>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <features>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <gic supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <vmcoreinfo supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <genid supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <backingStoreInput supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <backup supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <async-teardown supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <ps2 supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <sev supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <sgx supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <hyperv supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='features'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>relaxed</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vapic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>spinlocks</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vpindex</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>runtime</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>synic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>stimer</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>reset</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vendor_id</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>frequencies</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>reenlightenment</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tlbflush</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>ipi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>avic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>emsr_bitmap</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>xmm_input</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </hyperv>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <launchSecurity supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </features>
Oct  2 04:07:55 np0005465604 nova_compute[259667]: </domainCapabilities>
Oct  2 04:07:55 np0005465604 nova_compute[259667]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:54.974 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:54.978 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  2 04:07:55 np0005465604 nova_compute[259667]: <domainCapabilities>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <domain>kvm</domain>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <arch>x86_64</arch>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <vcpu max='240'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <iothreads supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <os supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <enum name='firmware'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <loader supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>rom</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>pflash</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='readonly'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>yes</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>no</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='secure'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>no</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </loader>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </os>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <cpu>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='host-passthrough' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='hostPassthroughMigratable'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>on</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>off</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='maximum' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='maximumMigratable'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>on</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>off</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='host-model' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <vendor>AMD</vendor>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='x2apic'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='hypervisor'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='stibp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='ssbd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='overflow-recov'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='succor'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='ibrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='lbrv'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc-scale'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='flushbyasid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='pause-filter'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='pfthreshold'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='rdctl-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='mds-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='gds-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='rfds-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='disable' name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='custom' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Dhyana-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Genoa'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='auto-ibrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='auto-ibrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10-128'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10-256'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10-512'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v6'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v7'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='KnightsMill'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512er'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512pf'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='KnightsMill-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512er'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512pf'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G4-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tbm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G5-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tbm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SierraForest'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cmpccxadd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SierraForest-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cmpccxadd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 systemd[1]: Stopping nova_compute container...
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='athlon'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='athlon-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='core2duo'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='core2duo-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='coreduo'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='coreduo-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='n270'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='n270-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='phenom'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='phenom-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </cpu>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <memoryBacking supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <enum name='sourceType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>file</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>anonymous</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>memfd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </memoryBacking>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <devices>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <disk supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='diskDevice'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>disk</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>cdrom</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>floppy</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>lun</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='bus'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>ide</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>fdc</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>scsi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>sata</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-non-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </disk>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <graphics supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vnc</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>egl-headless</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>dbus</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </graphics>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <video supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='modelType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vga</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>cirrus</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>none</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>bochs</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>ramfb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </video>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <hostdev supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='mode'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>subsystem</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='startupPolicy'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>default</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>mandatory</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>requisite</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>optional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='subsysType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>pci</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>scsi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='capsType'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='pciBackend'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </hostdev>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <rng supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-non-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>random</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>egd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>builtin</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </rng>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <filesystem supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='driverType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>path</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>handle</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtiofs</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </filesystem>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <tpm supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tpm-tis</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tpm-crb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>emulator</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>external</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendVersion'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>2.0</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </tpm>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <redirdev supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='bus'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </redirdev>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <channel supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>pty</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>unix</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </channel>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <crypto supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>qemu</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>builtin</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </crypto>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <interface supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>default</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>passt</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </interface>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <panic supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>isa</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>hyperv</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </panic>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </devices>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <features>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <gic supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <vmcoreinfo supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <genid supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <backingStoreInput supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <backup supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <async-teardown supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <ps2 supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <sev supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <sgx supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <hyperv supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='features'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>relaxed</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vapic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>spinlocks</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vpindex</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>runtime</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>synic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>stimer</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>reset</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vendor_id</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>frequencies</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>reenlightenment</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tlbflush</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>ipi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>avic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>emsr_bitmap</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>xmm_input</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </hyperv>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <launchSecurity supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </features>
Oct  2 04:07:55 np0005465604 nova_compute[259667]: </domainCapabilities>
Oct  2 04:07:55 np0005465604 nova_compute[259667]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:55.055 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  2 04:07:55 np0005465604 nova_compute[259667]: <domainCapabilities>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <domain>kvm</domain>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <arch>x86_64</arch>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <vcpu max='4096'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <iothreads supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <os supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <enum name='firmware'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>efi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <loader supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>rom</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>pflash</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='readonly'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>yes</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>no</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='secure'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>yes</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>no</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </loader>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </os>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <cpu>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='host-passthrough' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='hostPassthroughMigratable'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>on</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>off</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='maximum' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='maximumMigratable'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>on</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>off</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='host-model' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <vendor>AMD</vendor>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='x2apic'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='hypervisor'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='stibp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='ssbd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='overflow-recov'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='succor'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='ibrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='lbrv'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='tsc-scale'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='flushbyasid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='pause-filter'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='pfthreshold'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='rdctl-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='mds-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='gds-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='require' name='rfds-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <feature policy='disable' name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <mode name='custom' supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Broadwell-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Cooperlake-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Denverton-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Dhyana-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Genoa'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='auto-ibrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='auto-ibrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Milan-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amd-psfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='stibp-always-on'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-Rome-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='EPYC-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='GraniteRapids-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10-128'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10-256'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx10-512'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='prefetchiti'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Haswell-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v6'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Icelake-Server-v7'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='IvyBridge-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='KnightsMill'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512er'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512pf'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='KnightsMill-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512er'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512pf'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G4-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tbm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Opteron_G5-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fma4'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tbm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xop'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SapphireRapids-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='amx-tile'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-bf16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-fp16'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bitalg'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrc'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fzrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='la57'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='taa-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xfd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SierraForest'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cmpccxadd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='SierraForest-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ifma'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cmpccxadd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fbsdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='fsrs'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ibrs-all'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mcdt-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pbrsb-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='psdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='serialize'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vaes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Client-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='hle'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='rtm'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Skylake-Server-v5'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512bw'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512cd'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512dq'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512f'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='avx512vl'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='invpcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pcid'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='pku'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='mpx'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v2'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v3'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='core-capability'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='split-lock-detect'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='Snowridge-v4'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='cldemote'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='erms'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='gfni'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdir64b'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='movdiri'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='xsaves'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='athlon'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='athlon-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='core2duo'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='core2duo-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='coreduo'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='coreduo-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='n270'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='n270-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='ss'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='phenom'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <blockers model='phenom-v1'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnow'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <feature name='3dnowext'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </blockers>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </mode>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </cpu>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <memoryBacking supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <enum name='sourceType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>file</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>anonymous</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <value>memfd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </memoryBacking>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <devices>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <disk supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='diskDevice'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>disk</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>cdrom</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>floppy</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>lun</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='bus'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>fdc</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>scsi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>sata</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-non-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </disk>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <graphics supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vnc</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>egl-headless</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>dbus</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </graphics>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <video supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='modelType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vga</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>cirrus</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>none</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>bochs</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>ramfb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </video>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <hostdev supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='mode'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>subsystem</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='startupPolicy'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>default</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>mandatory</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>requisite</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>optional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='subsysType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>pci</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>scsi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='capsType'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='pciBackend'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </hostdev>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <rng supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtio-non-transitional</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>random</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>egd</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>builtin</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </rng>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <filesystem supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='driverType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>path</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>handle</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>virtiofs</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </filesystem>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <tpm supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tpm-tis</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tpm-crb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>emulator</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>external</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendVersion'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>2.0</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </tpm>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <redirdev supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='bus'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>usb</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </redirdev>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <channel supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>pty</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>unix</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </channel>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <crypto supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='type'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>qemu</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendModel'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>builtin</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </crypto>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <interface supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='backendType'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>default</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>passt</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </interface>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <panic supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='model'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>isa</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>hyperv</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </panic>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </devices>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  <features>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <gic supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <vmcoreinfo supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <genid supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <backingStoreInput supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <backup supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <async-teardown supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <ps2 supported='yes'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <sev supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <sgx supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <hyperv supported='yes'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      <enum name='features'>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>relaxed</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vapic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>spinlocks</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vpindex</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>runtime</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>synic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>stimer</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>reset</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>vendor_id</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>frequencies</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>reenlightenment</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>tlbflush</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>ipi</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>avic</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>emsr_bitmap</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:        <value>xmm_input</value>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:      </enum>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    </hyperv>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:    <launchSecurity supported='no'/>
Oct  2 04:07:55 np0005465604 nova_compute[259667]:  </features>
Oct  2 04:07:55 np0005465604 nova_compute[259667]: </domainCapabilities>
Oct  2 04:07:55 np0005465604 nova_compute[259667]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:55.123 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:55.124 2 DEBUG nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:55.124 2 INFO nova.virt.libvirt.host [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Secure Boot support detected#033[00m
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:55.126 2 INFO nova.virt.libvirt.driver [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:55.137 2 DEBUG nova.virt.libvirt.driver [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:55.173 2 INFO nova.virt.node [None req-2cbae8ba-d1b2-4f76-8594-dc5107410782 - - - - - -] Determined node identity 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from /var/lib/nova/compute_id#033[00m
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:55.182 2 DEBUG oslo_concurrency.lockutils [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:55.182 2 DEBUG oslo_concurrency.lockutils [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:07:55 np0005465604 nova_compute[259667]: 2025-10-02 08:07:55.182 2 DEBUG oslo_concurrency.lockutils [None req-81ff4106-0e6b-470f-9d88-9a89f2729678 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:07:55 np0005465604 virtqemud[260328]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  2 04:07:55 np0005465604 virtqemud[260328]: hostname: compute-0
Oct  2 04:07:55 np0005465604 virtqemud[260328]: End of file while reading data: Input/output error
Oct  2 04:07:55 np0005465604 systemd[1]: libpod-30c75d1eaea8e071a1f654ddf11e1a31594fcda334636a5e74999959ff9b9717.scope: Deactivated successfully.
Oct  2 04:07:55 np0005465604 systemd[1]: libpod-30c75d1eaea8e071a1f654ddf11e1a31594fcda334636a5e74999959ff9b9717.scope: Consumed 3.258s CPU time.
Oct  2 04:07:55 np0005465604 podman[260548]: 2025-10-02 08:07:55.649156389 +0000 UTC m=+0.515350338 container died 30c75d1eaea8e071a1f654ddf11e1a31594fcda334636a5e74999959ff9b9717 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct  2 04:07:55 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30c75d1eaea8e071a1f654ddf11e1a31594fcda334636a5e74999959ff9b9717-userdata-shm.mount: Deactivated successfully.
Oct  2 04:07:55 np0005465604 systemd[1]: var-lib-containers-storage-overlay-065ee084dc5ad0857295c803b08ab8689b721e0de8f1d7d2ff031b51113f36b1-merged.mount: Deactivated successfully.
Oct  2 04:07:55 np0005465604 podman[260548]: 2025-10-02 08:07:55.858859374 +0000 UTC m=+0.725053313 container cleanup 30c75d1eaea8e071a1f654ddf11e1a31594fcda334636a5e74999959ff9b9717 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Oct  2 04:07:55 np0005465604 podman[260548]: nova_compute
Oct  2 04:07:55 np0005465604 podman[260575]: nova_compute
Oct  2 04:07:55 np0005465604 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  2 04:07:55 np0005465604 systemd[1]: Stopped nova_compute container.
Oct  2 04:07:55 np0005465604 systemd[1]: Starting nova_compute container...
Oct  2 04:07:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:07:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065ee084dc5ad0857295c803b08ab8689b721e0de8f1d7d2ff031b51113f36b1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065ee084dc5ad0857295c803b08ab8689b721e0de8f1d7d2ff031b51113f36b1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065ee084dc5ad0857295c803b08ab8689b721e0de8f1d7d2ff031b51113f36b1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065ee084dc5ad0857295c803b08ab8689b721e0de8f1d7d2ff031b51113f36b1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065ee084dc5ad0857295c803b08ab8689b721e0de8f1d7d2ff031b51113f36b1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:56 np0005465604 podman[260588]: 2025-10-02 08:07:56.084473249 +0000 UTC m=+0.109517586 container init 30c75d1eaea8e071a1f654ddf11e1a31594fcda334636a5e74999959ff9b9717 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:07:56 np0005465604 podman[260588]: 2025-10-02 08:07:56.096360796 +0000 UTC m=+0.121405073 container start 30c75d1eaea8e071a1f654ddf11e1a31594fcda334636a5e74999959ff9b9717 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 04:07:56 np0005465604 podman[260588]: nova_compute
Oct  2 04:07:56 np0005465604 nova_compute[260603]: + sudo -E kolla_set_configs
Oct  2 04:07:56 np0005465604 systemd[1]: Started nova_compute container.
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Validating config file
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Copying service configuration files
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Deleting /etc/ceph
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Creating directory /etc/ceph
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /etc/ceph
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Writing out command to execute
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 04:07:56 np0005465604 nova_compute[260603]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 04:07:56 np0005465604 nova_compute[260603]: ++ cat /run_command
Oct  2 04:07:56 np0005465604 nova_compute[260603]: + CMD=nova-compute
Oct  2 04:07:56 np0005465604 nova_compute[260603]: + ARGS=
Oct  2 04:07:56 np0005465604 nova_compute[260603]: + sudo kolla_copy_cacerts
Oct  2 04:07:56 np0005465604 nova_compute[260603]: + [[ ! -n '' ]]
Oct  2 04:07:56 np0005465604 nova_compute[260603]: + . kolla_extend_start
Oct  2 04:07:56 np0005465604 nova_compute[260603]: Running command: 'nova-compute'
Oct  2 04:07:56 np0005465604 nova_compute[260603]: + echo 'Running command: '\''nova-compute'\'''
Oct  2 04:07:56 np0005465604 nova_compute[260603]: + umask 0022
Oct  2 04:07:56 np0005465604 nova_compute[260603]: + exec nova-compute
Oct  2 04:07:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:07:57 np0005465604 python3.9[260766]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 04:07:57 np0005465604 systemd[1]: Started libpod-conmon-3968affa8d8434646924e40b7c333276145b3da0cc2cce742d2cabc8734b3cf5.scope.
Oct  2 04:07:57 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:07:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb919054e3e813d469854d7d3ff1ab1f2fab9c7c17e930d777fde5cedbdbd9d/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb919054e3e813d469854d7d3ff1ab1f2fab9c7c17e930d777fde5cedbdbd9d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb919054e3e813d469854d7d3ff1ab1f2fab9c7c17e930d777fde5cedbdbd9d/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  2 04:07:57 np0005465604 podman[260790]: 2025-10-02 08:07:57.296025729 +0000 UTC m=+0.139242463 container init 3968affa8d8434646924e40b7c333276145b3da0cc2cce742d2cabc8734b3cf5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct  2 04:07:57 np0005465604 podman[260790]: 2025-10-02 08:07:57.309615079 +0000 UTC m=+0.152831783 container start 3968affa8d8434646924e40b7c333276145b3da0cc2cce742d2cabc8734b3cf5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm)
Oct  2 04:07:57 np0005465604 python3.9[260766]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Applying nova statedir ownership
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  2 04:07:57 np0005465604 nova_compute_init[260811]: INFO:nova_statedir:Nova statedir ownership complete
Oct  2 04:07:57 np0005465604 systemd[1]: libpod-3968affa8d8434646924e40b7c333276145b3da0cc2cce742d2cabc8734b3cf5.scope: Deactivated successfully.
Oct  2 04:07:57 np0005465604 podman[260812]: 2025-10-02 08:07:57.396369123 +0000 UTC m=+0.046388421 container died 3968affa8d8434646924e40b7c333276145b3da0cc2cce742d2cabc8734b3cf5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 04:07:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3968affa8d8434646924e40b7c333276145b3da0cc2cce742d2cabc8734b3cf5-userdata-shm.mount: Deactivated successfully.
Oct  2 04:07:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8bb919054e3e813d469854d7d3ff1ab1f2fab9c7c17e930d777fde5cedbdbd9d-merged.mount: Deactivated successfully.
Oct  2 04:07:57 np0005465604 podman[260825]: 2025-10-02 08:07:57.469512928 +0000 UTC m=+0.066493101 container cleanup 3968affa8d8434646924e40b7c333276145b3da0cc2cce742d2cabc8734b3cf5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:bffa691f6d884b143ff2e1ec18bf26fdfcea39492c68874b12a41aab94fdde38', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:07:57 np0005465604 systemd[1]: libpod-conmon-3968affa8d8434646924e40b7c333276145b3da0cc2cce742d2cabc8734b3cf5.scope: Deactivated successfully.
Oct  2 04:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:07:58 np0005465604 systemd[1]: session-51.scope: Deactivated successfully.
Oct  2 04:07:58 np0005465604 systemd[1]: session-51.scope: Consumed 3min 24.765s CPU time.
Oct  2 04:07:58 np0005465604 systemd-logind[787]: Session 51 logged out. Waiting for processes to exit.
Oct  2 04:07:58 np0005465604 systemd-logind[787]: Removed session 51.
Oct  2 04:07:58 np0005465604 nova_compute[260603]: 2025-10-02 08:07:58.272 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 04:07:58 np0005465604 nova_compute[260603]: 2025-10-02 08:07:58.275 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 04:07:58 np0005465604 nova_compute[260603]: 2025-10-02 08:07:58.275 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 04:07:58 np0005465604 nova_compute[260603]: 2025-10-02 08:07:58.276 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  2 04:07:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:07:58 np0005465604 nova_compute[260603]: 2025-10-02 08:07:58.477 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:07:58 np0005465604 nova_compute[260603]: 2025-10-02 08:07:58.508 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.003 2 INFO nova.virt.driver [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.146 2 INFO nova.compute.provider_config [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.166 2 DEBUG oslo_concurrency.lockutils [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.167 2 DEBUG oslo_concurrency.lockutils [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.167 2 DEBUG oslo_concurrency.lockutils [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.167 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.168 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.168 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.168 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.168 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.169 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.169 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.169 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.170 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.170 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.170 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.170 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.171 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.171 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.171 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.171 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.172 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.172 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.172 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.172 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.173 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.173 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.173 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.173 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.174 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.174 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.174 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.174 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.175 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.175 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.175 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.175 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.176 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.176 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.176 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.177 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.177 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.177 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.177 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.178 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.178 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.178 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.179 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.179 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.179 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.179 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.180 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.180 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.180 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.180 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.181 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.181 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.181 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.181 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.182 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.182 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.182 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.182 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.183 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.183 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.183 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.183 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.184 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.184 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.184 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.184 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.185 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.185 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.185 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.185 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.186 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.186 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.186 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.186 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.187 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.187 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.187 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.187 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.188 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.188 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.188 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.189 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.189 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.189 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.190 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.191 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.191 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.191 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.192 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.192 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.192 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.193 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.193 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.193 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.194 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.194 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.194 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.195 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.195 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.195 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.195 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.196 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.196 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.196 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.197 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.197 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.197 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.198 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.198 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.198 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.198 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.199 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.199 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.199 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.200 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.200 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.200 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.201 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.201 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.201 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.202 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.202 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.202 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.203 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.203 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.203 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.203 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.204 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.204 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.204 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.205 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.205 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.205 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.206 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.206 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.206 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.206 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.207 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.207 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.207 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.208 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.208 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.208 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.209 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.209 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.209 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.210 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.210 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.210 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.211 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.211 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.211 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.212 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.212 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.212 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.212 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.213 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.213 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.213 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.214 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.214 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.215 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.215 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.215 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.216 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.216 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.216 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.217 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.217 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.217 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.218 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.218 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.218 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.219 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.219 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.219 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.220 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.220 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.220 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.221 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.221 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.221 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.222 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.222 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.222 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.223 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.223 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.223 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.224 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.224 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.224 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.225 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.225 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.225 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.225 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.226 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.226 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.226 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.226 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.227 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.227 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.227 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.227 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.228 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.228 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.228 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.228 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.228 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.229 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.229 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.229 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.229 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.229 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.230 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.230 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.230 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.230 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.231 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.231 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.231 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.231 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.231 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.232 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.232 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.232 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.232 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.232 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.233 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.233 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.233 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.233 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.234 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.234 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.234 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.234 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.234 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.235 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.235 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.235 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.235 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.236 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.236 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.236 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.237 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.237 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.237 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.237 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.238 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.238 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.238 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.238 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.238 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.239 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.239 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.239 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.239 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.239 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.240 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.240 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.240 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.240 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.241 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.241 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.241 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.242 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.242 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.242 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.243 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.243 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.243 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.243 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.243 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.244 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.244 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.244 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.244 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.244 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.245 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.245 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.245 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.245 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.246 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.246 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.246 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.246 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.246 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.247 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.247 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.247 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.247 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.247 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.248 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.248 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.248 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.248 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.249 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.249 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.249 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.249 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.250 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.250 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.250 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.250 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.250 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.251 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.251 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.251 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.251 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.252 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.252 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.252 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.252 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.253 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.253 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.253 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.253 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.253 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.254 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.254 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.254 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.254 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.254 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.255 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.255 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.255 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.255 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.256 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.256 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.256 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.256 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.256 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.257 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.257 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.257 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.258 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.258 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.258 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.259 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.259 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.259 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.259 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.260 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.260 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.260 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.260 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.261 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.261 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.261 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.261 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.261 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.262 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.262 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.262 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.262 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.262 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.263 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.263 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.263 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.263 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.264 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.264 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.264 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.264 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.264 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.265 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.265 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.265 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.265 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.265 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.266 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.266 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.266 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.266 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.266 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.266 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.266 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.267 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.267 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.267 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.267 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.267 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.267 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.267 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.268 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.268 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.268 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.268 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.268 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.269 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.269 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.269 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.269 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.269 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.269 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.269 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.270 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.270 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.270 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.270 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.270 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.271 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.271 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.271 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.271 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.271 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.271 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.271 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.272 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.272 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.272 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.272 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.272 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.272 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.272 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.273 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.273 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.273 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.273 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.273 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.273 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.273 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.274 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.274 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.274 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.274 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.274 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.275 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.275 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.275 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.275 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.275 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.276 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.276 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.276 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.276 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.276 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.277 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.277 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.277 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.277 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.277 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.278 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.278 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.278 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.278 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.278 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.278 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.279 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.279 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.279 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.279 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.279 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.280 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.280 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.280 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.280 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.281 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.281 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.281 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.281 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.281 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.282 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.282 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.282 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.282 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.282 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.283 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.283 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.283 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.283 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.283 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.284 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.284 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.284 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.284 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.284 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.284 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.285 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.285 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.285 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.285 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.285 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.285 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.286 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.286 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.286 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.286 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.286 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.286 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.287 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.287 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.287 2 WARNING oslo_config.cfg [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  2 04:07:59 np0005465604 nova_compute[260603]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  2 04:07:59 np0005465604 nova_compute[260603]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  2 04:07:59 np0005465604 nova_compute[260603]: and ``live_migration_inbound_addr`` respectively.
Oct  2 04:07:59 np0005465604 nova_compute[260603]: ).  Its value may be silently ignored in the future.#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.287 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.288 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.288 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.288 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.288 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.288 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.288 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.289 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.289 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.289 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.289 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.289 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.289 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.289 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.290 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.290 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.290 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.290 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.290 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.rbd_secret_uuid        = a52e644f-f702-594c-a648-813e3e0df2b1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.290 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.291 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.291 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.291 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.291 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.291 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.291 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.291 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.292 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.292 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.292 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.292 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.293 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.293 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.293 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.293 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.293 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.293 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.293 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.294 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.294 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.294 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.294 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.294 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.295 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.295 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.295 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.295 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.295 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.295 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.296 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.296 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.296 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.296 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.296 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.296 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.297 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.297 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.297 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.297 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.297 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.297 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.297 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.298 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.298 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.298 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.298 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.298 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.298 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.298 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.299 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.299 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.299 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.299 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.299 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.300 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.300 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.300 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.300 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.300 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.300 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.301 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.301 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.301 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.301 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.301 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.301 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.302 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.302 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.302 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.302 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.302 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.302 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.302 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.303 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.303 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.303 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.303 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.303 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.303 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.303 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.303 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.304 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.304 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.304 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.304 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.304 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.304 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.304 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.305 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.305 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.305 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.305 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.305 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.305 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.305 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.306 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.306 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.306 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.306 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.306 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.306 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.306 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.307 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.307 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.307 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.307 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.307 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.307 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.307 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.308 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.308 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.308 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.308 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.308 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.308 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.308 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.309 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.309 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.309 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.309 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.310 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.310 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.310 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.310 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.310 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.311 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.311 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.311 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.311 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.311 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.312 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.312 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.312 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.312 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.312 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.312 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.313 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.313 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.313 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.313 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.313 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.313 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.314 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.314 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.314 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.314 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.314 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.314 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.315 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.315 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.315 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.315 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.315 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.315 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.316 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.316 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.316 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.316 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.316 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.317 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.317 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.317 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.317 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.317 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.318 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.318 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.318 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.318 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.318 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.318 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.319 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.319 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.319 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.319 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.319 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.319 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.319 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.320 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.320 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.320 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.320 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.320 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.320 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.321 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.321 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.321 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.321 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.321 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.321 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.322 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.322 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.322 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.322 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.322 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.322 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.322 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.323 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.323 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.323 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.323 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.323 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.323 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.323 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.324 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.324 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.324 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.324 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.324 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.324 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.325 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.325 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.325 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.325 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.325 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.325 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.326 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.326 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.326 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.326 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.326 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.326 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.327 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.327 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.327 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.327 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.327 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.327 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.327 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.328 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.328 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.328 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.328 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.329 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.329 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.329 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.329 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.329 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.329 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.329 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.330 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.330 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.330 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.330 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.330 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.330 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.331 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.331 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.331 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.331 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.331 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.331 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.331 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.332 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.332 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.332 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.332 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.332 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.332 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.333 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.333 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.333 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.333 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.333 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.333 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.333 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.334 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.334 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.334 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.334 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.334 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.335 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.335 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.335 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.335 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.335 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.336 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.336 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.336 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.336 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.336 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.336 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.336 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.337 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.337 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.337 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.337 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.337 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.337 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.338 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.338 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.338 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.338 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.338 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.338 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.338 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.339 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.339 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.339 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.339 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.339 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.340 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.340 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.340 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.340 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.340 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.340 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.340 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.341 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.341 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.341 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.341 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.341 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.341 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.341 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.342 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.342 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.342 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.342 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.342 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.342 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.343 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.343 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.343 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.343 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.343 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.343 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.343 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.344 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.344 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.344 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.344 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.344 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.344 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.344 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.345 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.345 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.345 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.345 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.345 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.345 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.346 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.346 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.346 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.346 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.346 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.346 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.346 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.346 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.347 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.347 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.347 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.347 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.347 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.347 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.347 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.348 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.348 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.348 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.348 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.348 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.348 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.348 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.349 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.349 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.349 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.349 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.349 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.349 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.350 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.350 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.350 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.350 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.350 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.350 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.350 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.351 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.351 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.351 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.351 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.351 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.351 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.351 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.352 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.352 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.352 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.352 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.352 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.352 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.352 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.353 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.353 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.353 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.354 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.354 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.354 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.354 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.355 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.355 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.355 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.355 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.355 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.355 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.356 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.356 2 DEBUG oslo_service.service [None req-25c19f6c-daff-4023-b730-9e2380fecffe - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.357 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.379 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.381 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.381 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.381 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.400 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fddd537bd30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.403 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fddd537bd30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.404 2 INFO nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.416 2 INFO nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Libvirt host capabilities <capabilities>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <host>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <uuid>8d603fb8-984b-47fa-8e45-4729d5224ba1</uuid>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <cpu>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <arch>x86_64</arch>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model>EPYC-Rome-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <vendor>AMD</vendor>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <microcode version='16777317'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <signature family='23' model='49' stepping='0'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='x2apic'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='tsc-deadline'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='osxsave'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='hypervisor'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='tsc_adjust'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='spec-ctrl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='stibp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='arch-capabilities'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='cmp_legacy'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='topoext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='virt-ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='lbrv'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='tsc-scale'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='vmcb-clean'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='pause-filter'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='pfthreshold'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='svme-addr-chk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='rdctl-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='skip-l1dfl-vmentry'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='mds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature name='pschange-mc-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <pages unit='KiB' size='4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <pages unit='KiB' size='2048'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <pages unit='KiB' size='1048576'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </cpu>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <power_management>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <suspend_mem/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </power_management>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <iommu support='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <migration_features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <live/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <uri_transports>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <uri_transport>tcp</uri_transport>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <uri_transport>rdma</uri_transport>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </uri_transports>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </migration_features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <topology>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <cells num='1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <cell id='0'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:          <memory unit='KiB'>7864104</memory>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:          <pages unit='KiB' size='4'>1966026</pages>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:          <pages unit='KiB' size='2048'>0</pages>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:          <distances>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:            <sibling id='0' value='10'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:          </distances>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:          <cpus num='8'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:          </cpus>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        </cell>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </cells>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </topology>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <cache>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </cache>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <secmodel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model>selinux</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <doi>0</doi>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </secmodel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <secmodel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model>dac</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <doi>0</doi>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </secmodel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </host>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <guest>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <os_type>hvm</os_type>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <arch name='i686'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <wordsize>32</wordsize>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <domain type='qemu'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <domain type='kvm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </arch>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <pae/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <nonpae/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <acpi default='on' toggle='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <apic default='on' toggle='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <cpuselection/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <deviceboot/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <disksnapshot default='on' toggle='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <externalSnapshot/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </guest>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <guest>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <os_type>hvm</os_type>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <arch name='x86_64'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <wordsize>64</wordsize>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <domain type='qemu'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <domain type='kvm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </arch>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <acpi default='on' toggle='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <apic default='on' toggle='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <cpuselection/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <deviceboot/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <disksnapshot default='on' toggle='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <externalSnapshot/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </guest>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 
Oct  2 04:07:59 np0005465604 nova_compute[260603]: </capabilities>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.422 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.424 2 WARNING nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.424 2 DEBUG nova.virt.libvirt.volume.mount [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.429 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  2 04:07:59 np0005465604 nova_compute[260603]: <domainCapabilities>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <domain>kvm</domain>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <arch>i686</arch>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <vcpu max='4096'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <iothreads supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <os supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <enum name='firmware'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <loader supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>rom</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pflash</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='readonly'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>yes</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>no</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='secure'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>no</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </loader>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <cpu>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='host-passthrough' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='hostPassthroughMigratable'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>on</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>off</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='maximum' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='maximumMigratable'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>on</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>off</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='host-model' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <vendor>AMD</vendor>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='x2apic'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='hypervisor'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='stibp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='overflow-recov'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='succor'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='lbrv'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc-scale'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='flushbyasid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pause-filter'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pfthreshold'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='rdctl-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='mds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='gds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='rfds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='disable' name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='custom' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Dhyana-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Genoa'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='auto-ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='auto-ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-128'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-256'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-512'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v6'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v7'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='KnightsMill'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512er'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512pf'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='KnightsMill-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512er'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512pf'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G4-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tbm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G5-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tbm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SierraForest'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cmpccxadd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SierraForest-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cmpccxadd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='athlon'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='athlon-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='core2duo'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='core2duo-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='coreduo'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='coreduo-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='n270'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='n270-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='phenom'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='phenom-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <memoryBacking supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <enum name='sourceType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>file</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>anonymous</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>memfd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </memoryBacking>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <disk supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='diskDevice'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>disk</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>cdrom</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>floppy</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>lun</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='bus'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>fdc</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>scsi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>sata</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-non-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <graphics supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vnc</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>egl-headless</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>dbus</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <video supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='modelType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vga</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>cirrus</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>none</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>bochs</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>ramfb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <hostdev supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='mode'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>subsystem</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='startupPolicy'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>default</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>mandatory</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>requisite</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>optional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='subsysType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pci</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>scsi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='capsType'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='pciBackend'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </hostdev>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <rng supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-non-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>random</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>egd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>builtin</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <filesystem supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='driverType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>path</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>handle</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtiofs</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </filesystem>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <tpm supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tpm-tis</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tpm-crb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>emulator</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>external</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendVersion'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>2.0</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </tpm>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <redirdev supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='bus'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </redirdev>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <channel supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pty</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>unix</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </channel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <crypto supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>qemu</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>builtin</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </crypto>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <interface supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>default</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>passt</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <panic supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>isa</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>hyperv</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </panic>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <gic supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <vmcoreinfo supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <genid supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <backingStoreInput supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <backup supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <async-teardown supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <ps2 supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <sev supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <sgx supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <hyperv supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='features'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>relaxed</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vapic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>spinlocks</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vpindex</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>runtime</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>synic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>stimer</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>reset</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vendor_id</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>frequencies</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>reenlightenment</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tlbflush</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>ipi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>avic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>emsr_bitmap</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>xmm_input</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </hyperv>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <launchSecurity supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: </domainCapabilities>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.438 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  2 04:07:59 np0005465604 nova_compute[260603]: <domainCapabilities>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <domain>kvm</domain>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <arch>i686</arch>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <vcpu max='240'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <iothreads supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <os supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <enum name='firmware'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <loader supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>rom</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pflash</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='readonly'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>yes</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>no</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='secure'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>no</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </loader>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <cpu>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='host-passthrough' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='hostPassthroughMigratable'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>on</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>off</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='maximum' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='maximumMigratable'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>on</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>off</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='host-model' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <vendor>AMD</vendor>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='x2apic'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='hypervisor'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='stibp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='overflow-recov'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='succor'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='lbrv'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc-scale'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='flushbyasid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pause-filter'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pfthreshold'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='rdctl-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='mds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='gds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='rfds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='disable' name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='custom' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Dhyana-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Genoa'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='auto-ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='auto-ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-128'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-256'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-512'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v6'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v7'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='KnightsMill'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512er'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512pf'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='KnightsMill-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512er'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512pf'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G4-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tbm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G5-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tbm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SierraForest'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cmpccxadd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SierraForest-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cmpccxadd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='athlon'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='athlon-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='core2duo'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='core2duo-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='coreduo'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='coreduo-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='n270'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='n270-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='phenom'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='phenom-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <memoryBacking supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <enum name='sourceType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>file</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>anonymous</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>memfd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </memoryBacking>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <disk supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='diskDevice'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>disk</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>cdrom</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>floppy</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>lun</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='bus'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>ide</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>fdc</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>scsi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>sata</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-non-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <graphics supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vnc</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>egl-headless</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>dbus</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <video supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='modelType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vga</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>cirrus</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>none</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>bochs</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>ramfb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <hostdev supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='mode'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>subsystem</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='startupPolicy'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>default</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>mandatory</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>requisite</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>optional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='subsysType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pci</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>scsi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='capsType'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='pciBackend'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </hostdev>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <rng supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-non-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>random</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>egd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>builtin</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <filesystem supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='driverType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>path</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>handle</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtiofs</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </filesystem>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <tpm supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tpm-tis</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tpm-crb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>emulator</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>external</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendVersion'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>2.0</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </tpm>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <redirdev supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='bus'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </redirdev>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <channel supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pty</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>unix</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </channel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <crypto supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>qemu</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>builtin</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </crypto>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <interface supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>default</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>passt</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <panic supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>isa</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>hyperv</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </panic>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <gic supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <vmcoreinfo supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <genid supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <backingStoreInput supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <backup supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <async-teardown supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <ps2 supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <sev supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <sgx supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <hyperv supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='features'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>relaxed</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vapic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>spinlocks</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vpindex</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>runtime</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>synic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>stimer</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>reset</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vendor_id</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>frequencies</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>reenlightenment</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tlbflush</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>ipi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>avic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>emsr_bitmap</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>xmm_input</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </hyperv>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <launchSecurity supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: </domainCapabilities>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.464 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.471 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  2 04:07:59 np0005465604 nova_compute[260603]: <domainCapabilities>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <domain>kvm</domain>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <arch>x86_64</arch>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <vcpu max='4096'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <iothreads supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <os supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <enum name='firmware'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>efi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <loader supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>rom</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pflash</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='readonly'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>yes</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>no</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='secure'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>yes</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>no</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </loader>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <cpu>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='host-passthrough' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='hostPassthroughMigratable'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>on</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>off</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='maximum' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='maximumMigratable'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>on</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>off</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='host-model' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <vendor>AMD</vendor>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='x2apic'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='hypervisor'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='stibp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='overflow-recov'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='succor'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='lbrv'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc-scale'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='flushbyasid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pause-filter'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pfthreshold'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='rdctl-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='mds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='gds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='rfds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='disable' name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='custom' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Dhyana-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Genoa'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='auto-ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='auto-ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-128'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-256'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-512'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v6'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v7'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='KnightsMill'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512er'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512pf'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='KnightsMill-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512er'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512pf'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G4-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tbm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G5-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tbm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SierraForest'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cmpccxadd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SierraForest-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cmpccxadd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='athlon'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='athlon-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='core2duo'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='core2duo-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='coreduo'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='coreduo-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='n270'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='n270-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='phenom'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='phenom-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <memoryBacking supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <enum name='sourceType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>file</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>anonymous</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>memfd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </memoryBacking>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <disk supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='diskDevice'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>disk</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>cdrom</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>floppy</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>lun</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='bus'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>fdc</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>scsi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>sata</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-non-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <graphics supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vnc</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>egl-headless</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>dbus</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <video supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='modelType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vga</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>cirrus</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>none</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>bochs</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>ramfb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <hostdev supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='mode'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>subsystem</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='startupPolicy'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>default</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>mandatory</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>requisite</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>optional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='subsysType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pci</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>scsi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='capsType'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='pciBackend'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </hostdev>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <rng supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-non-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>random</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>egd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>builtin</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <filesystem supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='driverType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>path</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>handle</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtiofs</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </filesystem>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <tpm supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tpm-tis</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tpm-crb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>emulator</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>external</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendVersion'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>2.0</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </tpm>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <redirdev supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='bus'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </redirdev>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <channel supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pty</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>unix</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </channel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <crypto supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>qemu</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>builtin</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </crypto>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <interface supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>default</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>passt</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <panic supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>isa</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>hyperv</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </panic>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <gic supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <vmcoreinfo supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <genid supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <backingStoreInput supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <backup supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <async-teardown supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <ps2 supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <sev supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <sgx supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <hyperv supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='features'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>relaxed</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vapic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>spinlocks</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vpindex</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>runtime</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>synic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>stimer</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>reset</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vendor_id</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>frequencies</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>reenlightenment</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tlbflush</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>ipi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>avic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>emsr_bitmap</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>xmm_input</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </hyperv>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <launchSecurity supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: </domainCapabilities>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.550 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  2 04:07:59 np0005465604 nova_compute[260603]: <domainCapabilities>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <domain>kvm</domain>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <arch>x86_64</arch>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <vcpu max='240'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <iothreads supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <os supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <enum name='firmware'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <loader supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>rom</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pflash</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='readonly'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>yes</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>no</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='secure'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>no</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </loader>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <cpu>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='host-passthrough' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='hostPassthroughMigratable'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>on</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>off</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='maximum' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='maximumMigratable'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>on</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>off</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='host-model' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <vendor>AMD</vendor>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='x2apic'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='hypervisor'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='stibp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='overflow-recov'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='succor'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='lbrv'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='tsc-scale'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='flushbyasid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pause-filter'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pfthreshold'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='rdctl-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='mds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='gds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='require' name='rfds-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <feature policy='disable' name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <mode name='custom' supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Broadwell-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Cooperlake-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Denverton-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Dhyana-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Genoa'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='auto-ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='auto-ibrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Milan-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amd-psfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='no-nested-data-bp'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='null-sel-clr-base'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='stibp-always-on'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-Rome-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='EPYC-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='GraniteRapids-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-128'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-256'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx10-512'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='prefetchiti'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Haswell-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v6'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Icelake-Server-v7'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='IvyBridge-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='KnightsMill'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512er'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512pf'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='KnightsMill-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4fmaps'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-4vnniw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512er'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512pf'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G4-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tbm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Opteron_G5-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fma4'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tbm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xop'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SapphireRapids-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='amx-tile'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-bf16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-fp16'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512-vpopcntdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bitalg'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vbmi2'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrc'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fzrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='la57'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='taa-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='tsx-ldtrk'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xfd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SierraForest'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cmpccxadd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='SierraForest-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ifma'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-ne-convert'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx-vnni-int8'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='bus-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cmpccxadd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fbsdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='fsrs'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ibrs-all'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mcdt-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pbrsb-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='psdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='sbdr-ssdp-no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='serialize'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vaes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='vpclmulqdq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Client-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='hle'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='rtm'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Skylake-Server-v5'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512bw'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512cd'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512dq'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512f'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='avx512vl'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='invpcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pcid'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='pku'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='mpx'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v2'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v3'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='core-capability'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='split-lock-detect'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='Snowridge-v4'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='cldemote'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='erms'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='gfni'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdir64b'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='movdiri'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='xsaves'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='athlon'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='athlon-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='core2duo'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='core2duo-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='coreduo'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='coreduo-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='n270'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='n270-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='ss'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='phenom'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <blockers model='phenom-v1'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnow'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <feature name='3dnowext'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </blockers>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </mode>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <memoryBacking supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <enum name='sourceType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>file</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>anonymous</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <value>memfd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </memoryBacking>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <disk supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='diskDevice'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>disk</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>cdrom</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>floppy</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>lun</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='bus'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>ide</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>fdc</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>scsi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>sata</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-non-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <graphics supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vnc</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>egl-headless</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>dbus</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <video supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='modelType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vga</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>cirrus</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>none</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>bochs</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>ramfb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <hostdev supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='mode'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>subsystem</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='startupPolicy'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>default</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>mandatory</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>requisite</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>optional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='subsysType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pci</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>scsi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='capsType'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='pciBackend'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </hostdev>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <rng supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtio-non-transitional</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>random</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>egd</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>builtin</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <filesystem supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='driverType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>path</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>handle</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>virtiofs</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </filesystem>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <tpm supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tpm-tis</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tpm-crb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>emulator</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>external</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendVersion'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>2.0</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </tpm>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <redirdev supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='bus'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>usb</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </redirdev>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <channel supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>pty</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>unix</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </channel>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <crypto supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='type'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>qemu</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendModel'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>builtin</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </crypto>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <interface supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='backendType'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>default</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>passt</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <panic supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='model'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>isa</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>hyperv</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </panic>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <gic supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <vmcoreinfo supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <genid supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <backingStoreInput supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <backup supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <async-teardown supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <ps2 supported='yes'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <sev supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <sgx supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <hyperv supported='yes'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      <enum name='features'>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>relaxed</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vapic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>spinlocks</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vpindex</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>runtime</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>synic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>stimer</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>reset</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>vendor_id</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>frequencies</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>reenlightenment</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>tlbflush</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>ipi</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>avic</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>emsr_bitmap</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:        <value>xmm_input</value>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:      </enum>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    </hyperv>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:    <launchSecurity supported='no'/>
Oct  2 04:07:59 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: </domainCapabilities>
Oct  2 04:07:59 np0005465604 nova_compute[260603]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.624 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.625 2 INFO nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Secure Boot support detected#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.629 2 INFO nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.646 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.669 2 INFO nova.virt.node [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Determined node identity 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from /var/lib/nova/compute_id#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.686 2 WARNING nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Compute nodes ['1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.721 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.784 2 WARNING nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.785 2 DEBUG oslo_concurrency.lockutils [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.785 2 DEBUG oslo_concurrency.lockutils [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.785 2 DEBUG oslo_concurrency.lockutils [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.786 2 DEBUG nova.compute.resource_tracker [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:07:59 np0005465604 nova_compute[260603]: 2025-10-02 08:07:59.786 2 DEBUG oslo_concurrency.processutils [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:08:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:08:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000819757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:08:00 np0005465604 nova_compute[260603]: 2025-10-02 08:08:00.261 2 DEBUG oslo_concurrency.processutils [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:08:00 np0005465604 systemd[1]: Starting libvirt nodedev daemon...
Oct  2 04:08:00 np0005465604 systemd[1]: Started libvirt nodedev daemon.
Oct  2 04:08:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:00 np0005465604 nova_compute[260603]: 2025-10-02 08:08:00.664 2 WARNING nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:08:00 np0005465604 nova_compute[260603]: 2025-10-02 08:08:00.665 2 DEBUG nova.compute.resource_tracker [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5187MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:08:00 np0005465604 nova_compute[260603]: 2025-10-02 08:08:00.665 2 DEBUG oslo_concurrency.lockutils [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:08:00 np0005465604 nova_compute[260603]: 2025-10-02 08:08:00.665 2 DEBUG oslo_concurrency.lockutils [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:08:00 np0005465604 nova_compute[260603]: 2025-10-02 08:08:00.686 2 WARNING nova.compute.resource_tracker [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] No compute node record for compute-0.ctlplane.example.com:1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 could not be found.#033[00m
Oct  2 04:08:00 np0005465604 nova_compute[260603]: 2025-10-02 08:08:00.711 2 INFO nova.compute.resource_tracker [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27#033[00m
Oct  2 04:08:00 np0005465604 nova_compute[260603]: 2025-10-02 08:08:00.856 2 DEBUG nova.compute.resource_tracker [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:08:00 np0005465604 nova_compute[260603]: 2025-10-02 08:08:00.857 2 DEBUG nova.compute.resource_tracker [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:08:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:01 np0005465604 nova_compute[260603]: 2025-10-02 08:08:01.779 2 INFO nova.scheduler.client.report [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [req-d96d015e-1332-47f0-baec-47a4462039c3] Created resource provider record via placement API for resource provider with UUID 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 and name compute-0.ctlplane.example.com.#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.148 2 DEBUG oslo_concurrency.processutils [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:08:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:08:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243756124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.587 2 DEBUG oslo_concurrency.processutils [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.596 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct  2 04:08:02 np0005465604 nova_compute[260603]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.596 2 INFO nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.598 2 DEBUG nova.compute.provider_tree [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.599 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.675 2 DEBUG nova.scheduler.client.report [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Updated inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.676 2 DEBUG nova.compute.provider_tree [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Updating resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.676 2 DEBUG nova.compute.provider_tree [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.785 2 DEBUG nova.compute.provider_tree [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Updating resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.818 2 DEBUG nova.compute.resource_tracker [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.819 2 DEBUG oslo_concurrency.lockutils [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.819 2 DEBUG nova.service [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.930 2 DEBUG nova.service [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct  2 04:08:02 np0005465604 nova_compute[260603]: 2025-10-02 08:08:02.931 2 DEBUG nova.servicegroup.drivers.db [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct  2 04:08:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:10 np0005465604 podman[260963]: 2025-10-02 08:08:10.115037988 +0000 UTC m=+0.161495509 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 04:08:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:12 np0005465604 podman[260989]: 2025-10-02 08:08:12.046180342 +0000 UTC m=+0.105730811 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:08:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:17 np0005465604 podman[261009]: 2025-10-02 08:08:17.03049293 +0000 UTC m=+0.101985935 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:08:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:22 np0005465604 podman[261031]: 2025-10-02 08:08:21.999807247 +0000 UTC m=+0.068312257 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 04:08:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:08:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5748 writes, 24K keys, 5748 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5748 writes, 949 syncs, 6.06 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  2 04:08:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3597651121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3597651121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/173299475' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/173299475' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1789644914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:08:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1789644914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:08:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:08:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6877 writes, 28K keys, 6877 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6877 writes, 1281 syncs, 5.37 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 272 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowd
Oct  2 04:08:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:08:27
Oct  2 04:08:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:08:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:08:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.meta']
Oct  2 04:08:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:08:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:08:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5603 writes, 23K keys, 5603 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5603 writes, 872 syncs, 6.43 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Oct  2 04:08:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 04:08:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:08:34.793 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:08:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:08:34.794 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:08:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:08:34.794 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:08:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:08:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:41 np0005465604 podman[261051]: 2025-10-02 08:08:41.063087985 +0000 UTC m=+0.126015405 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:08:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:43 np0005465604 podman[261077]: 2025-10-02 08:08:43.028739623 +0000 UTC m=+0.093635057 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 04:08:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:48 np0005465604 podman[261096]: 2025-10-02 08:08:48.028317311 +0000 UTC m=+0.085155787 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:08:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:08:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:08:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:08:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:08:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:08:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:08:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:08:50 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c003ea7b-845b-4f59-8725-fa58e6baabaa does not exist
Oct  2 04:08:50 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c24a0ab2-0b96-4907-9bc0-79de114b6128 does not exist
Oct  2 04:08:50 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 28515743-2c0e-4c4a-9013-580388cac719 does not exist
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:08:50 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:08:51 np0005465604 podman[261512]: 2025-10-02 08:08:51.162908014 +0000 UTC m=+0.076389715 container create 2a8e273526b97b26ea99e35e2cbafe2b12a07d5309271d70bbcdf9f1b3bffd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ritchie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:08:51 np0005465604 systemd[1]: Started libpod-conmon-2a8e273526b97b26ea99e35e2cbafe2b12a07d5309271d70bbcdf9f1b3bffd47.scope.
Oct  2 04:08:51 np0005465604 podman[261512]: 2025-10-02 08:08:51.130062252 +0000 UTC m=+0.043544013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:08:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:08:51 np0005465604 podman[261512]: 2025-10-02 08:08:51.266266781 +0000 UTC m=+0.179748542 container init 2a8e273526b97b26ea99e35e2cbafe2b12a07d5309271d70bbcdf9f1b3bffd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 04:08:51 np0005465604 podman[261512]: 2025-10-02 08:08:51.277647122 +0000 UTC m=+0.191128873 container start 2a8e273526b97b26ea99e35e2cbafe2b12a07d5309271d70bbcdf9f1b3bffd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:08:51 np0005465604 podman[261512]: 2025-10-02 08:08:51.281282314 +0000 UTC m=+0.194764115 container attach 2a8e273526b97b26ea99e35e2cbafe2b12a07d5309271d70bbcdf9f1b3bffd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 04:08:51 np0005465604 festive_ritchie[261528]: 167 167
Oct  2 04:08:51 np0005465604 systemd[1]: libpod-2a8e273526b97b26ea99e35e2cbafe2b12a07d5309271d70bbcdf9f1b3bffd47.scope: Deactivated successfully.
Oct  2 04:08:51 np0005465604 podman[261512]: 2025-10-02 08:08:51.286418232 +0000 UTC m=+0.199899973 container died 2a8e273526b97b26ea99e35e2cbafe2b12a07d5309271d70bbcdf9f1b3bffd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 04:08:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f01c094e74619f3117c884399ac76e4bdb6454ed5937dc5f9a5e23e85fbc16b3-merged.mount: Deactivated successfully.
Oct  2 04:08:51 np0005465604 podman[261512]: 2025-10-02 08:08:51.34473957 +0000 UTC m=+0.258221321 container remove 2a8e273526b97b26ea99e35e2cbafe2b12a07d5309271d70bbcdf9f1b3bffd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct  2 04:08:51 np0005465604 systemd[1]: libpod-conmon-2a8e273526b97b26ea99e35e2cbafe2b12a07d5309271d70bbcdf9f1b3bffd47.scope: Deactivated successfully.
Oct  2 04:08:51 np0005465604 podman[261551]: 2025-10-02 08:08:51.572184412 +0000 UTC m=+0.070482564 container create 01a84fa6aabab6e309e03d4fea0db4af9d0dd81871b0beadd15cbe73aa25bb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:08:51 np0005465604 systemd[1]: Started libpod-conmon-01a84fa6aabab6e309e03d4fea0db4af9d0dd81871b0beadd15cbe73aa25bb0d.scope.
Oct  2 04:08:51 np0005465604 podman[261551]: 2025-10-02 08:08:51.546393737 +0000 UTC m=+0.044691939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:08:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:08:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85ab24196259d5beaa7cd1d959c3c66b775c6b37f06e281abd8a2dc8f033935/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85ab24196259d5beaa7cd1d959c3c66b775c6b37f06e281abd8a2dc8f033935/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85ab24196259d5beaa7cd1d959c3c66b775c6b37f06e281abd8a2dc8f033935/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85ab24196259d5beaa7cd1d959c3c66b775c6b37f06e281abd8a2dc8f033935/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e85ab24196259d5beaa7cd1d959c3c66b775c6b37f06e281abd8a2dc8f033935/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:51 np0005465604 podman[261551]: 2025-10-02 08:08:51.690916622 +0000 UTC m=+0.189214844 container init 01a84fa6aabab6e309e03d4fea0db4af9d0dd81871b0beadd15cbe73aa25bb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:08:51 np0005465604 podman[261551]: 2025-10-02 08:08:51.707359719 +0000 UTC m=+0.205657891 container start 01a84fa6aabab6e309e03d4fea0db4af9d0dd81871b0beadd15cbe73aa25bb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 04:08:51 np0005465604 podman[261551]: 2025-10-02 08:08:51.713625162 +0000 UTC m=+0.211923344 container attach 01a84fa6aabab6e309e03d4fea0db4af9d0dd81871b0beadd15cbe73aa25bb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:08:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:52 np0005465604 adoring_mendel[261567]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:08:52 np0005465604 adoring_mendel[261567]: --> relative data size: 1.0
Oct  2 04:08:52 np0005465604 adoring_mendel[261567]: --> All data devices are unavailable
Oct  2 04:08:52 np0005465604 systemd[1]: libpod-01a84fa6aabab6e309e03d4fea0db4af9d0dd81871b0beadd15cbe73aa25bb0d.scope: Deactivated successfully.
Oct  2 04:08:52 np0005465604 podman[261551]: 2025-10-02 08:08:52.892728822 +0000 UTC m=+1.391026974 container died 01a84fa6aabab6e309e03d4fea0db4af9d0dd81871b0beadd15cbe73aa25bb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:08:52 np0005465604 systemd[1]: libpod-01a84fa6aabab6e309e03d4fea0db4af9d0dd81871b0beadd15cbe73aa25bb0d.scope: Consumed 1.133s CPU time.
Oct  2 04:08:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e85ab24196259d5beaa7cd1d959c3c66b775c6b37f06e281abd8a2dc8f033935-merged.mount: Deactivated successfully.
Oct  2 04:08:52 np0005465604 podman[261551]: 2025-10-02 08:08:52.956883059 +0000 UTC m=+1.455181201 container remove 01a84fa6aabab6e309e03d4fea0db4af9d0dd81871b0beadd15cbe73aa25bb0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mendel, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:08:52 np0005465604 systemd[1]: libpod-conmon-01a84fa6aabab6e309e03d4fea0db4af9d0dd81871b0beadd15cbe73aa25bb0d.scope: Deactivated successfully.
Oct  2 04:08:53 np0005465604 podman[261597]: 2025-10-02 08:08:53.020954915 +0000 UTC m=+0.093500753 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:08:53 np0005465604 podman[261771]: 2025-10-02 08:08:53.856902446 +0000 UTC m=+0.064380936 container create f743278d2d7de5fa21b922ea2eed6f3659dec4c9cd16aea15975a6741cec1712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 04:08:53 np0005465604 systemd[1]: Started libpod-conmon-f743278d2d7de5fa21b922ea2eed6f3659dec4c9cd16aea15975a6741cec1712.scope.
Oct  2 04:08:53 np0005465604 podman[261771]: 2025-10-02 08:08:53.831445231 +0000 UTC m=+0.038923781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:08:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:08:53 np0005465604 podman[261771]: 2025-10-02 08:08:53.976988398 +0000 UTC m=+0.184466878 container init f743278d2d7de5fa21b922ea2eed6f3659dec4c9cd16aea15975a6741cec1712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:08:53 np0005465604 podman[261771]: 2025-10-02 08:08:53.991673691 +0000 UTC m=+0.199152191 container start f743278d2d7de5fa21b922ea2eed6f3659dec4c9cd16aea15975a6741cec1712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 04:08:53 np0005465604 podman[261771]: 2025-10-02 08:08:53.996486099 +0000 UTC m=+0.203964579 container attach f743278d2d7de5fa21b922ea2eed6f3659dec4c9cd16aea15975a6741cec1712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 04:08:54 np0005465604 vigorous_chatterjee[261787]: 167 167
Oct  2 04:08:54 np0005465604 systemd[1]: libpod-f743278d2d7de5fa21b922ea2eed6f3659dec4c9cd16aea15975a6741cec1712.scope: Deactivated successfully.
Oct  2 04:08:54 np0005465604 podman[261771]: 2025-10-02 08:08:54.002908427 +0000 UTC m=+0.210386927 container died f743278d2d7de5fa21b922ea2eed6f3659dec4c9cd16aea15975a6741cec1712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:08:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2759bf277bb873758f88a381cc4ce592a74e38af2a7e9f44b6503f96480a76d1-merged.mount: Deactivated successfully.
Oct  2 04:08:54 np0005465604 podman[261771]: 2025-10-02 08:08:54.050439742 +0000 UTC m=+0.257918202 container remove f743278d2d7de5fa21b922ea2eed6f3659dec4c9cd16aea15975a6741cec1712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:08:54 np0005465604 systemd[1]: libpod-conmon-f743278d2d7de5fa21b922ea2eed6f3659dec4c9cd16aea15975a6741cec1712.scope: Deactivated successfully.
Oct  2 04:08:54 np0005465604 podman[261809]: 2025-10-02 08:08:54.289924855 +0000 UTC m=+0.073271050 container create 8d47ac42861f2572c2382ada7d1affc99733902e2eb5b3f68482696d0460b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 04:08:54 np0005465604 systemd[1]: Started libpod-conmon-8d47ac42861f2572c2382ada7d1affc99733902e2eb5b3f68482696d0460b78a.scope.
Oct  2 04:08:54 np0005465604 podman[261809]: 2025-10-02 08:08:54.257592618 +0000 UTC m=+0.040938863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:08:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:08:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3707ddc0e9f9b15bd56c8c7451ab1968f1754be9e9f0d97f116c7bb1bafcd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3707ddc0e9f9b15bd56c8c7451ab1968f1754be9e9f0d97f116c7bb1bafcd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3707ddc0e9f9b15bd56c8c7451ab1968f1754be9e9f0d97f116c7bb1bafcd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3707ddc0e9f9b15bd56c8c7451ab1968f1754be9e9f0d97f116c7bb1bafcd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:54 np0005465604 podman[261809]: 2025-10-02 08:08:54.40591957 +0000 UTC m=+0.189265795 container init 8d47ac42861f2572c2382ada7d1affc99733902e2eb5b3f68482696d0460b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 04:08:54 np0005465604 podman[261809]: 2025-10-02 08:08:54.421566463 +0000 UTC m=+0.204912638 container start 8d47ac42861f2572c2382ada7d1affc99733902e2eb5b3f68482696d0460b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:08:54 np0005465604 podman[261809]: 2025-10-02 08:08:54.425789393 +0000 UTC m=+0.209135638 container attach 8d47ac42861f2572c2382ada7d1affc99733902e2eb5b3f68482696d0460b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct  2 04:08:54 np0005465604 nova_compute[260603]: 2025-10-02 08:08:54.932 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:08:54 np0005465604 nova_compute[260603]: 2025-10-02 08:08:54.970 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]: {
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:    "0": [
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:        {
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "devices": [
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "/dev/loop3"
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            ],
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_name": "ceph_lv0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_size": "21470642176",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "name": "ceph_lv0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "tags": {
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.cluster_name": "ceph",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.crush_device_class": "",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.encrypted": "0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.osd_id": "0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.type": "block",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.vdo": "0"
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            },
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "type": "block",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "vg_name": "ceph_vg0"
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:        }
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:    ],
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:    "1": [
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:        {
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "devices": [
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "/dev/loop4"
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            ],
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_name": "ceph_lv1",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_size": "21470642176",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "name": "ceph_lv1",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "tags": {
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.cluster_name": "ceph",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.crush_device_class": "",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.encrypted": "0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.osd_id": "1",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.type": "block",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.vdo": "0"
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            },
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "type": "block",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "vg_name": "ceph_vg1"
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:        }
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:    ],
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:    "2": [
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:        {
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "devices": [
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "/dev/loop5"
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            ],
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_name": "ceph_lv2",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_size": "21470642176",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "name": "ceph_lv2",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "tags": {
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.cluster_name": "ceph",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.crush_device_class": "",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.encrypted": "0",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.osd_id": "2",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.type": "block",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:                "ceph.vdo": "0"
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            },
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "type": "block",
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:            "vg_name": "ceph_vg2"
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:        }
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]:    ]
Oct  2 04:08:55 np0005465604 gracious_galileo[261826]: }
Oct  2 04:08:55 np0005465604 systemd[1]: libpod-8d47ac42861f2572c2382ada7d1affc99733902e2eb5b3f68482696d0460b78a.scope: Deactivated successfully.
Oct  2 04:08:55 np0005465604 podman[261809]: 2025-10-02 08:08:55.269814233 +0000 UTC m=+1.053160458 container died 8d47ac42861f2572c2382ada7d1affc99733902e2eb5b3f68482696d0460b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:08:55 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5a3707ddc0e9f9b15bd56c8c7451ab1968f1754be9e9f0d97f116c7bb1bafcd3-merged.mount: Deactivated successfully.
Oct  2 04:08:55 np0005465604 podman[261809]: 2025-10-02 08:08:55.35470856 +0000 UTC m=+1.138054755 container remove 8d47ac42861f2572c2382ada7d1affc99733902e2eb5b3f68482696d0460b78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_galileo, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 04:08:55 np0005465604 systemd[1]: libpod-conmon-8d47ac42861f2572c2382ada7d1affc99733902e2eb5b3f68482696d0460b78a.scope: Deactivated successfully.
Oct  2 04:08:56 np0005465604 podman[261990]: 2025-10-02 08:08:56.19612228 +0000 UTC m=+0.045517925 container create c31fa818e9d95eb045e8634c33d30ee4eea897f7ee8e2e03bdfd898735a30376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brown, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:08:56 np0005465604 systemd[1]: Started libpod-conmon-c31fa818e9d95eb045e8634c33d30ee4eea897f7ee8e2e03bdfd898735a30376.scope.
Oct  2 04:08:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:08:56 np0005465604 podman[261990]: 2025-10-02 08:08:56.179624881 +0000 UTC m=+0.029020546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:08:56 np0005465604 podman[261990]: 2025-10-02 08:08:56.278794379 +0000 UTC m=+0.128190124 container init c31fa818e9d95eb045e8634c33d30ee4eea897f7ee8e2e03bdfd898735a30376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 04:08:56 np0005465604 podman[261990]: 2025-10-02 08:08:56.286275059 +0000 UTC m=+0.135670714 container start c31fa818e9d95eb045e8634c33d30ee4eea897f7ee8e2e03bdfd898735a30376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 04:08:56 np0005465604 podman[261990]: 2025-10-02 08:08:56.289420796 +0000 UTC m=+0.138816501 container attach c31fa818e9d95eb045e8634c33d30ee4eea897f7ee8e2e03bdfd898735a30376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brown, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:08:56 np0005465604 crazy_brown[262006]: 167 167
Oct  2 04:08:56 np0005465604 systemd[1]: libpod-c31fa818e9d95eb045e8634c33d30ee4eea897f7ee8e2e03bdfd898735a30376.scope: Deactivated successfully.
Oct  2 04:08:56 np0005465604 podman[261990]: 2025-10-02 08:08:56.294452401 +0000 UTC m=+0.143848046 container died c31fa818e9d95eb045e8634c33d30ee4eea897f7ee8e2e03bdfd898735a30376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brown, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:08:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f363edf40c90924d245858043a6cf812da7f0df29ba51c1f67c2a75b7c9db6e4-merged.mount: Deactivated successfully.
Oct  2 04:08:56 np0005465604 podman[261990]: 2025-10-02 08:08:56.3385285 +0000 UTC m=+0.187924165 container remove c31fa818e9d95eb045e8634c33d30ee4eea897f7ee8e2e03bdfd898735a30376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:08:56 np0005465604 systemd[1]: libpod-conmon-c31fa818e9d95eb045e8634c33d30ee4eea897f7ee8e2e03bdfd898735a30376.scope: Deactivated successfully.
Oct  2 04:08:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:56 np0005465604 podman[262030]: 2025-10-02 08:08:56.55295906 +0000 UTC m=+0.061618831 container create 85a682654ee575a44647a3efd373edd1245719c3b977823550f7757c0804b100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:08:56 np0005465604 systemd[1]: Started libpod-conmon-85a682654ee575a44647a3efd373edd1245719c3b977823550f7757c0804b100.scope.
Oct  2 04:08:56 np0005465604 podman[262030]: 2025-10-02 08:08:56.526847355 +0000 UTC m=+0.035507136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:08:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:08:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c3aa45ff5bbdcca89fd994e8a3233973803607ea040c19f620d6e8b41f99db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c3aa45ff5bbdcca89fd994e8a3233973803607ea040c19f620d6e8b41f99db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c3aa45ff5bbdcca89fd994e8a3233973803607ea040c19f620d6e8b41f99db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c3aa45ff5bbdcca89fd994e8a3233973803607ea040c19f620d6e8b41f99db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:08:56 np0005465604 podman[262030]: 2025-10-02 08:08:56.645357049 +0000 UTC m=+0.154016820 container init 85a682654ee575a44647a3efd373edd1245719c3b977823550f7757c0804b100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 04:08:56 np0005465604 podman[262030]: 2025-10-02 08:08:56.653419117 +0000 UTC m=+0.162078868 container start 85a682654ee575a44647a3efd373edd1245719c3b977823550f7757c0804b100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:08:56 np0005465604 podman[262030]: 2025-10-02 08:08:56.657177783 +0000 UTC m=+0.165837574 container attach 85a682654ee575a44647a3efd373edd1245719c3b977823550f7757c0804b100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dirac, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:08:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]: {
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "osd_id": 2,
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "type": "bluestore"
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:    },
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "osd_id": 1,
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "type": "bluestore"
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:    },
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "osd_id": 0,
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:        "type": "bluestore"
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]:    }
Oct  2 04:08:57 np0005465604 dreamy_dirac[262047]: }
Oct  2 04:08:57 np0005465604 systemd[1]: libpod-85a682654ee575a44647a3efd373edd1245719c3b977823550f7757c0804b100.scope: Deactivated successfully.
Oct  2 04:08:57 np0005465604 podman[262030]: 2025-10-02 08:08:57.670033588 +0000 UTC m=+1.178693369 container died 85a682654ee575a44647a3efd373edd1245719c3b977823550f7757c0804b100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dirac, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:08:57 np0005465604 systemd[1]: libpod-85a682654ee575a44647a3efd373edd1245719c3b977823550f7757c0804b100.scope: Consumed 1.022s CPU time.
Oct  2 04:08:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-82c3aa45ff5bbdcca89fd994e8a3233973803607ea040c19f620d6e8b41f99db-merged.mount: Deactivated successfully.
Oct  2 04:08:57 np0005465604 podman[262030]: 2025-10-02 08:08:57.746436374 +0000 UTC m=+1.255096145 container remove 85a682654ee575a44647a3efd373edd1245719c3b977823550f7757c0804b100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dirac, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:08:57 np0005465604 systemd[1]: libpod-conmon-85a682654ee575a44647a3efd373edd1245719c3b977823550f7757c0804b100.scope: Deactivated successfully.
Oct  2 04:08:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:08:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:08:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:08:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:08:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a0d5b0b5-6424-4e9a-ad18-3ee325aba89b does not exist
Oct  2 04:08:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d5820019-9627-4305-8149-71ed172a2cff does not exist
Oct  2 04:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:08:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.522 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.751 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.752 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.753 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.753 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.754 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.754 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.755 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.755 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.756 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.785 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.786 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.786 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.787 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:08:58 np0005465604 nova_compute[260603]: 2025-10-02 08:08:58.788 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:08:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:08:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:08:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:08:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3180354192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:08:59 np0005465604 nova_compute[260603]: 2025-10-02 08:08:59.253 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:08:59 np0005465604 nova_compute[260603]: 2025-10-02 08:08:59.449 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:08:59 np0005465604 nova_compute[260603]: 2025-10-02 08:08:59.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5174MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:08:59 np0005465604 nova_compute[260603]: 2025-10-02 08:08:59.450 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:08:59 np0005465604 nova_compute[260603]: 2025-10-02 08:08:59.451 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:08:59 np0005465604 nova_compute[260603]: 2025-10-02 08:08:59.587 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:08:59 np0005465604 nova_compute[260603]: 2025-10-02 08:08:59.588 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:08:59 np0005465604 nova_compute[260603]: 2025-10-02 08:08:59.615 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:09:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:09:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193005285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:09:00 np0005465604 nova_compute[260603]: 2025-10-02 08:09:00.027 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:09:00 np0005465604 nova_compute[260603]: 2025-10-02 08:09:00.035 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:09:00 np0005465604 nova_compute[260603]: 2025-10-02 08:09:00.066 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:09:00 np0005465604 nova_compute[260603]: 2025-10-02 08:09:00.069 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:09:00 np0005465604 nova_compute[260603]: 2025-10-02 08:09:00.069 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:09:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.460994) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392543461091, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1496, "num_deletes": 251, "total_data_size": 2395176, "memory_usage": 2428080, "flush_reason": "Manual Compaction"}
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392543478369, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2340653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14838, "largest_seqno": 16333, "table_properties": {"data_size": 2333653, "index_size": 4071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14292, "raw_average_key_size": 19, "raw_value_size": 2319673, "raw_average_value_size": 3199, "num_data_blocks": 186, "num_entries": 725, "num_filter_entries": 725, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759392387, "oldest_key_time": 1759392387, "file_creation_time": 1759392543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17418 microseconds, and 10325 cpu microseconds.
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.478427) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2340653 bytes OK
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.478453) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.480295) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.480318) EVENT_LOG_v1 {"time_micros": 1759392543480311, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.480339) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2388605, prev total WAL file size 2388605, number of live WAL files 2.
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.481647) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2285KB)], [35(7066KB)]
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392543481689, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9576374, "oldest_snapshot_seqno": -1}
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3997 keys, 7807339 bytes, temperature: kUnknown
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392543531471, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7807339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7778166, "index_size": 18055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97593, "raw_average_key_size": 24, "raw_value_size": 7703415, "raw_average_value_size": 1927, "num_data_blocks": 764, "num_entries": 3997, "num_filter_entries": 3997, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759392543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.531909) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7807339 bytes
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.533444) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.9 rd, 156.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.9 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4511, records dropped: 514 output_compression: NoCompression
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.533477) EVENT_LOG_v1 {"time_micros": 1759392543533463, "job": 16, "event": "compaction_finished", "compaction_time_micros": 49905, "compaction_time_cpu_micros": 31268, "output_level": 6, "num_output_files": 1, "total_output_size": 7807339, "num_input_records": 4511, "num_output_records": 3997, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392543534733, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392543537274, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.481586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.537339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.537347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.537350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.537353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:09:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:09:03.537356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:09:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct  2 04:09:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3556468546' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  2 04:09:05 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  2 04:09:05 np0005465604 ceph-mgr[74774]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  2 04:09:05 np0005465604 ceph-mgr[74774]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  2 04:09:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:12 np0005465604 podman[262185]: 2025-10-02 08:09:12.116468306 +0000 UTC m=+0.178260467 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:09:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:14 np0005465604 podman[262212]: 2025-10-02 08:09:14.029064868 +0000 UTC m=+0.084728393 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 04:09:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:19 np0005465604 podman[262231]: 2025-10-02 08:09:19.027400498 +0000 UTC m=+0.092664578 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:09:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct  2 04:09:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3059184791' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  2 04:09:21 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  2 04:09:21 np0005465604 ceph-mgr[74774]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  2 04:09:21 np0005465604 ceph-mgr[74774]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  2 04:09:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:24 np0005465604 podman[262251]: 2025-10-02 08:09:24.033512988 +0000 UTC m=+0.093857514 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:09:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:09:27
Oct  2 04:09:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:09:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:09:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'backups', 'images', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data', '.mgr']
Oct  2 04:09:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:09:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 04:09:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 04:09:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 04:09:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 04:09:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:09:34.795 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:09:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:09:34.795 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:09:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:09:34.795 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:09:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 04:09:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:09:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 04:09:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:43 np0005465604 podman[262271]: 2025-10-02 08:09:43.05865114 +0000 UTC m=+0.116738129 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 04:09:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:45 np0005465604 podman[262297]: 2025-10-02 08:09:45.006960033 +0000 UTC m=+0.074100355 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:09:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:50 np0005465604 podman[262317]: 2025-10-02 08:09:50.020306636 +0000 UTC m=+0.079117520 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 04:09:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:54 np0005465604 podman[262338]: 2025-10-02 08:09:54.9937896 +0000 UTC m=+0.061895569 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 04:09:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:09:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:09:58 np0005465604 podman[262530]: 2025-10-02 08:09:58.970083493 +0000 UTC m=+0.072181847 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:09:59 np0005465604 podman[262530]: 2025-10-02 08:09:59.067337731 +0000 UTC m=+0.169436125 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:09:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:09:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:09:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:09:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.062 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.063 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.090 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.091 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.091 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.107 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.107 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.108 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.108 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.109 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.109 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.109 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.109 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.110 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.141 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.141 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.142 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.142 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.142 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:10:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1442634607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.621 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.838 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.839 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5159MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.839 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.840 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:10:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e1d85094-a113-4f69-a382-351a0363b0d2 does not exist
Oct  2 04:10:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e7181533-b725-40f6-ad8f-9de8782f0a4f does not exist
Oct  2 04:10:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 02f645c6-32d5-40cc-bf1e-787c4dad00a3 does not exist
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 04:10:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.903 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.903 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:10:00 np0005465604 nova_compute[260603]: 2025-10-02 08:10:00.925 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:10:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:10:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125710227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:10:01 np0005465604 nova_compute[260603]: 2025-10-02 08:10:01.403 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:10:01 np0005465604 nova_compute[260603]: 2025-10-02 08:10:01.412 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:10:01 np0005465604 nova_compute[260603]: 2025-10-02 08:10:01.434 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:10:01 np0005465604 nova_compute[260603]: 2025-10-02 08:10:01.437 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:10:01 np0005465604 nova_compute[260603]: 2025-10-02 08:10:01.438 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:10:01 np0005465604 podman[263007]: 2025-10-02 08:10:01.723582308 +0000 UTC m=+0.062444205 container create 176bd95a901b7b7ca5571b47441d24bccb97c237b4c5fad70e112bdd1851a993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_curie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 04:10:01 np0005465604 systemd[1]: Started libpod-conmon-176bd95a901b7b7ca5571b47441d24bccb97c237b4c5fad70e112bdd1851a993.scope.
Oct  2 04:10:01 np0005465604 podman[263007]: 2025-10-02 08:10:01.696191104 +0000 UTC m=+0.035053031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:10:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:01 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:10:01 np0005465604 podman[263007]: 2025-10-02 08:10:01.826270524 +0000 UTC m=+0.165132421 container init 176bd95a901b7b7ca5571b47441d24bccb97c237b4c5fad70e112bdd1851a993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:10:01 np0005465604 podman[263007]: 2025-10-02 08:10:01.840467822 +0000 UTC m=+0.179329719 container start 176bd95a901b7b7ca5571b47441d24bccb97c237b4c5fad70e112bdd1851a993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_curie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:10:01 np0005465604 podman[263007]: 2025-10-02 08:10:01.845535408 +0000 UTC m=+0.184397315 container attach 176bd95a901b7b7ca5571b47441d24bccb97c237b4c5fad70e112bdd1851a993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_curie, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:10:01 np0005465604 nostalgic_curie[263023]: 167 167
Oct  2 04:10:01 np0005465604 systemd[1]: libpod-176bd95a901b7b7ca5571b47441d24bccb97c237b4c5fad70e112bdd1851a993.scope: Deactivated successfully.
Oct  2 04:10:01 np0005465604 podman[263007]: 2025-10-02 08:10:01.852239965 +0000 UTC m=+0.191101852 container died 176bd95a901b7b7ca5571b47441d24bccb97c237b4c5fad70e112bdd1851a993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:10:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ea7601834332b06cfff4b1c61fa3753438cb1de36e52a5b79c096635cd7ae984-merged.mount: Deactivated successfully.
Oct  2 04:10:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:10:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:10:01 np0005465604 podman[263007]: 2025-10-02 08:10:01.913939537 +0000 UTC m=+0.252801434 container remove 176bd95a901b7b7ca5571b47441d24bccb97c237b4c5fad70e112bdd1851a993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_curie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:10:01 np0005465604 systemd[1]: libpod-conmon-176bd95a901b7b7ca5571b47441d24bccb97c237b4c5fad70e112bdd1851a993.scope: Deactivated successfully.
Oct  2 04:10:02 np0005465604 podman[263047]: 2025-10-02 08:10:02.170179156 +0000 UTC m=+0.083440733 container create bdbbdf49a43e1d55f20b4d449f0e49f32081249ca7ac8e8f78bdc8406f714830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct  2 04:10:02 np0005465604 systemd[1]: Started libpod-conmon-bdbbdf49a43e1d55f20b4d449f0e49f32081249ca7ac8e8f78bdc8406f714830.scope.
Oct  2 04:10:02 np0005465604 podman[263047]: 2025-10-02 08:10:02.131892456 +0000 UTC m=+0.045154113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:10:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:10:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e50ad11d89d8ae7ebd6778059a77119d821262c35d5bb1c7aec5ed0526df9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e50ad11d89d8ae7ebd6778059a77119d821262c35d5bb1c7aec5ed0526df9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e50ad11d89d8ae7ebd6778059a77119d821262c35d5bb1c7aec5ed0526df9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e50ad11d89d8ae7ebd6778059a77119d821262c35d5bb1c7aec5ed0526df9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e50ad11d89d8ae7ebd6778059a77119d821262c35d5bb1c7aec5ed0526df9c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:02 np0005465604 podman[263047]: 2025-10-02 08:10:02.281266251 +0000 UTC m=+0.194527878 container init bdbbdf49a43e1d55f20b4d449f0e49f32081249ca7ac8e8f78bdc8406f714830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shirley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 04:10:02 np0005465604 podman[263047]: 2025-10-02 08:10:02.291878848 +0000 UTC m=+0.205140445 container start bdbbdf49a43e1d55f20b4d449f0e49f32081249ca7ac8e8f78bdc8406f714830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:10:02 np0005465604 podman[263047]: 2025-10-02 08:10:02.295413548 +0000 UTC m=+0.208675145 container attach bdbbdf49a43e1d55f20b4d449f0e49f32081249ca7ac8e8f78bdc8406f714830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 04:10:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:03 np0005465604 modest_shirley[263064]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:10:03 np0005465604 modest_shirley[263064]: --> relative data size: 1.0
Oct  2 04:10:03 np0005465604 modest_shirley[263064]: --> All data devices are unavailable
Oct  2 04:10:03 np0005465604 systemd[1]: libpod-bdbbdf49a43e1d55f20b4d449f0e49f32081249ca7ac8e8f78bdc8406f714830.scope: Deactivated successfully.
Oct  2 04:10:03 np0005465604 podman[263047]: 2025-10-02 08:10:03.380053705 +0000 UTC m=+1.293315302 container died bdbbdf49a43e1d55f20b4d449f0e49f32081249ca7ac8e8f78bdc8406f714830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shirley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:10:03 np0005465604 systemd[1]: libpod-bdbbdf49a43e1d55f20b4d449f0e49f32081249ca7ac8e8f78bdc8406f714830.scope: Consumed 1.044s CPU time.
Oct  2 04:10:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f7e50ad11d89d8ae7ebd6778059a77119d821262c35d5bb1c7aec5ed0526df9c-merged.mount: Deactivated successfully.
Oct  2 04:10:03 np0005465604 podman[263047]: 2025-10-02 08:10:03.446165013 +0000 UTC m=+1.359426610 container remove bdbbdf49a43e1d55f20b4d449f0e49f32081249ca7ac8e8f78bdc8406f714830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shirley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:10:03 np0005465604 systemd[1]: libpod-conmon-bdbbdf49a43e1d55f20b4d449f0e49f32081249ca7ac8e8f78bdc8406f714830.scope: Deactivated successfully.
Oct  2 04:10:04 np0005465604 podman[263245]: 2025-10-02 08:10:04.155369476 +0000 UTC m=+0.052140958 container create 66cd8925c1c9e6bec6d296ed7be27d6106f980613c765029497c2e919e6969eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 04:10:04 np0005465604 systemd[1]: Started libpod-conmon-66cd8925c1c9e6bec6d296ed7be27d6106f980613c765029497c2e919e6969eb.scope.
Oct  2 04:10:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:10:04 np0005465604 podman[263245]: 2025-10-02 08:10:04.139553989 +0000 UTC m=+0.036325461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:10:04 np0005465604 podman[263245]: 2025-10-02 08:10:04.240329125 +0000 UTC m=+0.137100627 container init 66cd8925c1c9e6bec6d296ed7be27d6106f980613c765029497c2e919e6969eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:10:04 np0005465604 podman[263245]: 2025-10-02 08:10:04.24826502 +0000 UTC m=+0.145036512 container start 66cd8925c1c9e6bec6d296ed7be27d6106f980613c765029497c2e919e6969eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lewin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 04:10:04 np0005465604 podman[263245]: 2025-10-02 08:10:04.251968224 +0000 UTC m=+0.148739716 container attach 66cd8925c1c9e6bec6d296ed7be27d6106f980613c765029497c2e919e6969eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lewin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 04:10:04 np0005465604 vigilant_lewin[263261]: 167 167
Oct  2 04:10:04 np0005465604 systemd[1]: libpod-66cd8925c1c9e6bec6d296ed7be27d6106f980613c765029497c2e919e6969eb.scope: Deactivated successfully.
Oct  2 04:10:04 np0005465604 podman[263245]: 2025-10-02 08:10:04.255216095 +0000 UTC m=+0.151987567 container died 66cd8925c1c9e6bec6d296ed7be27d6106f980613c765029497c2e919e6969eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:10:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9355710d57b5e5139d5094324f19fbf5fe6efede20b485c17d4db32f81ed0918-merged.mount: Deactivated successfully.
Oct  2 04:10:04 np0005465604 podman[263245]: 2025-10-02 08:10:04.297827967 +0000 UTC m=+0.194599439 container remove 66cd8925c1c9e6bec6d296ed7be27d6106f980613c765029497c2e919e6969eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:10:04 np0005465604 systemd[1]: libpod-conmon-66cd8925c1c9e6bec6d296ed7be27d6106f980613c765029497c2e919e6969eb.scope: Deactivated successfully.
Oct  2 04:10:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:04 np0005465604 podman[263284]: 2025-10-02 08:10:04.54509024 +0000 UTC m=+0.063921151 container create f6e6a8024839706052886cda14e5ac0ed96a41c1b1f46503d989842c188e6760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rosalind, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct  2 04:10:04 np0005465604 systemd[1]: Started libpod-conmon-f6e6a8024839706052886cda14e5ac0ed96a41c1b1f46503d989842c188e6760.scope.
Oct  2 04:10:04 np0005465604 podman[263284]: 2025-10-02 08:10:04.514672023 +0000 UTC m=+0.033503034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:10:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:10:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ac353d6190541198bc18d90f705a294f72f992c97b854f6a2bc915346f93184/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ac353d6190541198bc18d90f705a294f72f992c97b854f6a2bc915346f93184/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ac353d6190541198bc18d90f705a294f72f992c97b854f6a2bc915346f93184/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ac353d6190541198bc18d90f705a294f72f992c97b854f6a2bc915346f93184/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:04 np0005465604 podman[263284]: 2025-10-02 08:10:04.654444252 +0000 UTC m=+0.173275173 container init f6e6a8024839706052886cda14e5ac0ed96a41c1b1f46503d989842c188e6760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rosalind, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:10:04 np0005465604 podman[263284]: 2025-10-02 08:10:04.668675021 +0000 UTC m=+0.187505912 container start f6e6a8024839706052886cda14e5ac0ed96a41c1b1f46503d989842c188e6760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:10:04 np0005465604 podman[263284]: 2025-10-02 08:10:04.673133548 +0000 UTC m=+0.191964489 container attach f6e6a8024839706052886cda14e5ac0ed96a41c1b1f46503d989842c188e6760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]: {
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:    "0": [
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:        {
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "devices": [
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "/dev/loop3"
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            ],
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_name": "ceph_lv0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_size": "21470642176",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "name": "ceph_lv0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "tags": {
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.cluster_name": "ceph",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.crush_device_class": "",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.encrypted": "0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.osd_id": "0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.type": "block",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.vdo": "0"
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            },
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "type": "block",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "vg_name": "ceph_vg0"
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:        }
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:    ],
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:    "1": [
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:        {
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "devices": [
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "/dev/loop4"
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            ],
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_name": "ceph_lv1",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_size": "21470642176",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "name": "ceph_lv1",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "tags": {
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.cluster_name": "ceph",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.crush_device_class": "",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.encrypted": "0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.osd_id": "1",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.type": "block",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.vdo": "0"
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            },
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "type": "block",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "vg_name": "ceph_vg1"
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:        }
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:    ],
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:    "2": [
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:        {
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "devices": [
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "/dev/loop5"
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            ],
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_name": "ceph_lv2",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_size": "21470642176",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "name": "ceph_lv2",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "tags": {
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.cluster_name": "ceph",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.crush_device_class": "",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.encrypted": "0",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.osd_id": "2",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.type": "block",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:                "ceph.vdo": "0"
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            },
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "type": "block",
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:            "vg_name": "ceph_vg2"
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:        }
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]:    ]
Oct  2 04:10:05 np0005465604 fervent_rosalind[263300]: }
Oct  2 04:10:05 np0005465604 systemd[1]: libpod-f6e6a8024839706052886cda14e5ac0ed96a41c1b1f46503d989842c188e6760.scope: Deactivated successfully.
Oct  2 04:10:05 np0005465604 podman[263284]: 2025-10-02 08:10:05.428647869 +0000 UTC m=+0.947478800 container died f6e6a8024839706052886cda14e5ac0ed96a41c1b1f46503d989842c188e6760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 04:10:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0ac353d6190541198bc18d90f705a294f72f992c97b854f6a2bc915346f93184-merged.mount: Deactivated successfully.
Oct  2 04:10:05 np0005465604 podman[263284]: 2025-10-02 08:10:05.489177125 +0000 UTC m=+1.008008026 container remove f6e6a8024839706052886cda14e5ac0ed96a41c1b1f46503d989842c188e6760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rosalind, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:10:05 np0005465604 systemd[1]: libpod-conmon-f6e6a8024839706052886cda14e5ac0ed96a41c1b1f46503d989842c188e6760.scope: Deactivated successfully.
Oct  2 04:10:06 np0005465604 podman[263463]: 2025-10-02 08:10:06.251634981 +0000 UTC m=+0.038064265 container create 1693d4bd1cf58a2125d3aa45a2778a18f46f14cafd884c1a7455b43d9f43862a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:10:06 np0005465604 systemd[1]: Started libpod-conmon-1693d4bd1cf58a2125d3aa45a2778a18f46f14cafd884c1a7455b43d9f43862a.scope.
Oct  2 04:10:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:10:06 np0005465604 podman[263463]: 2025-10-02 08:10:06.320845864 +0000 UTC m=+0.107275228 container init 1693d4bd1cf58a2125d3aa45a2778a18f46f14cafd884c1a7455b43d9f43862a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mestorf, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:10:06 np0005465604 podman[263463]: 2025-10-02 08:10:06.236198075 +0000 UTC m=+0.022627389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:10:06 np0005465604 podman[263463]: 2025-10-02 08:10:06.335704823 +0000 UTC m=+0.122134147 container start 1693d4bd1cf58a2125d3aa45a2778a18f46f14cafd884c1a7455b43d9f43862a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 04:10:06 np0005465604 podman[263463]: 2025-10-02 08:10:06.340117449 +0000 UTC m=+0.126546733 container attach 1693d4bd1cf58a2125d3aa45a2778a18f46f14cafd884c1a7455b43d9f43862a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:10:06 np0005465604 crazy_mestorf[263479]: 167 167
Oct  2 04:10:06 np0005465604 systemd[1]: libpod-1693d4bd1cf58a2125d3aa45a2778a18f46f14cafd884c1a7455b43d9f43862a.scope: Deactivated successfully.
Oct  2 04:10:06 np0005465604 podman[263463]: 2025-10-02 08:10:06.346841365 +0000 UTC m=+0.133270649 container died 1693d4bd1cf58a2125d3aa45a2778a18f46f14cafd884c1a7455b43d9f43862a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:10:06 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3085ab5bcea233a83d720425f3559b7cf12cfd75be41df350708d0e85b4d5006-merged.mount: Deactivated successfully.
Oct  2 04:10:06 np0005465604 podman[263463]: 2025-10-02 08:10:06.38624761 +0000 UTC m=+0.172676894 container remove 1693d4bd1cf58a2125d3aa45a2778a18f46f14cafd884c1a7455b43d9f43862a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mestorf, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 04:10:06 np0005465604 systemd[1]: libpod-conmon-1693d4bd1cf58a2125d3aa45a2778a18f46f14cafd884c1a7455b43d9f43862a.scope: Deactivated successfully.
Oct  2 04:10:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:06 np0005465604 podman[263503]: 2025-10-02 08:10:06.622258616 +0000 UTC m=+0.058092521 container create c2a34a4daa006c6ef30e630c32411eedca898284b0a433b01757a94584f60ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:10:06 np0005465604 systemd[1]: Started libpod-conmon-c2a34a4daa006c6ef30e630c32411eedca898284b0a433b01757a94584f60ea9.scope.
Oct  2 04:10:06 np0005465604 podman[263503]: 2025-10-02 08:10:06.599956399 +0000 UTC m=+0.035790304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:10:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:10:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f90bff78db8e96c00f59d7655dc6f95ebbfc9013f82f1856114c81fc1b497c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f90bff78db8e96c00f59d7655dc6f95ebbfc9013f82f1856114c81fc1b497c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f90bff78db8e96c00f59d7655dc6f95ebbfc9013f82f1856114c81fc1b497c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f90bff78db8e96c00f59d7655dc6f95ebbfc9013f82f1856114c81fc1b497c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:10:06 np0005465604 podman[263503]: 2025-10-02 08:10:06.729995227 +0000 UTC m=+0.165829112 container init c2a34a4daa006c6ef30e630c32411eedca898284b0a433b01757a94584f60ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:10:06 np0005465604 podman[263503]: 2025-10-02 08:10:06.739601564 +0000 UTC m=+0.175435469 container start c2a34a4daa006c6ef30e630c32411eedca898284b0a433b01757a94584f60ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:10:06 np0005465604 podman[263503]: 2025-10-02 08:10:06.743641308 +0000 UTC m=+0.179475203 container attach c2a34a4daa006c6ef30e630c32411eedca898284b0a433b01757a94584f60ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:10:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]: {
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "osd_id": 2,
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "type": "bluestore"
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:    },
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "osd_id": 1,
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "type": "bluestore"
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:    },
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "osd_id": 0,
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:        "type": "bluestore"
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]:    }
Oct  2 04:10:07 np0005465604 pedantic_carson[263519]: }
Oct  2 04:10:07 np0005465604 systemd[1]: libpod-c2a34a4daa006c6ef30e630c32411eedca898284b0a433b01757a94584f60ea9.scope: Deactivated successfully.
Oct  2 04:10:07 np0005465604 systemd[1]: libpod-c2a34a4daa006c6ef30e630c32411eedca898284b0a433b01757a94584f60ea9.scope: Consumed 1.125s CPU time.
Oct  2 04:10:07 np0005465604 podman[263552]: 2025-10-02 08:10:07.909454888 +0000 UTC m=+0.034583087 container died c2a34a4daa006c6ef30e630c32411eedca898284b0a433b01757a94584f60ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 04:10:07 np0005465604 systemd[1]: var-lib-containers-storage-overlay-11f90bff78db8e96c00f59d7655dc6f95ebbfc9013f82f1856114c81fc1b497c-merged.mount: Deactivated successfully.
Oct  2 04:10:07 np0005465604 podman[263552]: 2025-10-02 08:10:07.955120526 +0000 UTC m=+0.080248715 container remove c2a34a4daa006c6ef30e630c32411eedca898284b0a433b01757a94584f60ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_carson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:10:07 np0005465604 systemd[1]: libpod-conmon-c2a34a4daa006c6ef30e630c32411eedca898284b0a433b01757a94584f60ea9.scope: Deactivated successfully.
Oct  2 04:10:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:10:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:10:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:10:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:10:08 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f8177c24-f62d-4b52-b6ec-67a086b24ff5 does not exist
Oct  2 04:10:08 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 54d4e76f-a6e7-4a24-ab38-32e0e519ebc7 does not exist
Oct  2 04:10:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:09 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:10:09 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:10:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:14 np0005465604 podman[263615]: 2025-10-02 08:10:14.07044631 +0000 UTC m=+0.128641795 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct  2 04:10:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:16 np0005465604 podman[263641]: 2025-10-02 08:10:16.004804704 +0000 UTC m=+0.069006378 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 04:10:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:10:16.285 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:10:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:10:16.286 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:10:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:10:16.287 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:10:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:21 np0005465604 podman[263660]: 2025-10-02 08:10:21.005467366 +0000 UTC m=+0.068963927 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct  2 04:10:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:10:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/461593651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:10:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:10:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/461593651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:10:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:26 np0005465604 podman[263680]: 2025-10-02 08:10:26.035186543 +0000 UTC m=+0.091272503 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:10:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:10:27
Oct  2 04:10:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:10:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:10:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'vms', 'images', 'default.rgw.control']
Oct  2 04:10:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:10:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:10:34.796 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:10:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:10:34.796 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:10:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:10:34.796 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:10:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:10:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:45 np0005465604 podman[263700]: 2025-10-02 08:10:45.087459311 +0000 UTC m=+0.147140227 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:10:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:47 np0005465604 podman[263728]: 2025-10-02 08:10:47.053146799 +0000 UTC m=+0.108413933 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:10:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:52 np0005465604 podman[263748]: 2025-10-02 08:10:52.032822738 +0000 UTC m=+0.091210377 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:10:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:10:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:10:57 np0005465604 podman[263769]: 2025-10-02 08:10:57.023986847 +0000 UTC m=+0.087407689 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:10:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.440 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.441 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.441 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.441 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.468 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.468 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.469 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.469 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.470 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.470 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.470 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.471 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.495 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.496 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.496 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.496 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.497 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:11:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:11:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2809994775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:11:01 np0005465604 nova_compute[260603]: 2025-10-02 08:11:01.973 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.192 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.193 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5192MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.193 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.194 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.263 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.263 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.277 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:11:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:11:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3306741466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.721 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.727 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.745 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.746 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.747 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:11:02 np0005465604 nova_compute[260603]: 2025-10-02 08:11:02.795 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:11:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:11:09 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ef2acc57-37c9-4b92-b4c7-78a0aa1bf0d8 does not exist
Oct  2 04:11:09 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev db3afeef-aa68-4f1e-a27e-1c2a3bddfb14 does not exist
Oct  2 04:11:09 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 31aecd52-31d2-44d0-9383-4186aa20b222 does not exist
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:11:09 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:11:09 np0005465604 podman[264104]: 2025-10-02 08:11:09.626150054 +0000 UTC m=+0.037092975 container create 40c3a8e03e88cab543b9860ab621d8f082d7b4acde46b51ce8272c435c631763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:11:09 np0005465604 systemd[1]: Started libpod-conmon-40c3a8e03e88cab543b9860ab621d8f082d7b4acde46b51ce8272c435c631763.scope.
Oct  2 04:11:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:11:09 np0005465604 podman[264104]: 2025-10-02 08:11:09.700867227 +0000 UTC m=+0.111810188 container init 40c3a8e03e88cab543b9860ab621d8f082d7b4acde46b51ce8272c435c631763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 04:11:09 np0005465604 podman[264104]: 2025-10-02 08:11:09.609738994 +0000 UTC m=+0.020681935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:11:09 np0005465604 podman[264104]: 2025-10-02 08:11:09.70963928 +0000 UTC m=+0.120582201 container start 40c3a8e03e88cab543b9860ab621d8f082d7b4acde46b51ce8272c435c631763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:11:09 np0005465604 optimistic_curie[264120]: 167 167
Oct  2 04:11:09 np0005465604 systemd[1]: libpod-40c3a8e03e88cab543b9860ab621d8f082d7b4acde46b51ce8272c435c631763.scope: Deactivated successfully.
Oct  2 04:11:09 np0005465604 conmon[264120]: conmon 40c3a8e03e88cab543b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40c3a8e03e88cab543b9860ab621d8f082d7b4acde46b51ce8272c435c631763.scope/container/memory.events
Oct  2 04:11:09 np0005465604 podman[264104]: 2025-10-02 08:11:09.71608171 +0000 UTC m=+0.127024681 container attach 40c3a8e03e88cab543b9860ab621d8f082d7b4acde46b51ce8272c435c631763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:11:09 np0005465604 podman[264104]: 2025-10-02 08:11:09.716332768 +0000 UTC m=+0.127275699 container died 40c3a8e03e88cab543b9860ab621d8f082d7b4acde46b51ce8272c435c631763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 04:11:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1d0121355ce6b04a57d516f6e48b8f1f066a754c3f18115685a690baf5c4501b-merged.mount: Deactivated successfully.
Oct  2 04:11:09 np0005465604 podman[264104]: 2025-10-02 08:11:09.752593265 +0000 UTC m=+0.163536206 container remove 40c3a8e03e88cab543b9860ab621d8f082d7b4acde46b51ce8272c435c631763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_curie, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 04:11:09 np0005465604 systemd[1]: libpod-conmon-40c3a8e03e88cab543b9860ab621d8f082d7b4acde46b51ce8272c435c631763.scope: Deactivated successfully.
Oct  2 04:11:09 np0005465604 podman[264141]: 2025-10-02 08:11:09.978436967 +0000 UTC m=+0.050261894 container create 1caaf46c43fa5630e194b2539974f57216410e7e777832998527891332de9eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 04:11:10 np0005465604 systemd[1]: Started libpod-conmon-1caaf46c43fa5630e194b2539974f57216410e7e777832998527891332de9eb4.scope.
Oct  2 04:11:10 np0005465604 podman[264141]: 2025-10-02 08:11:09.956738722 +0000 UTC m=+0.028563639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:11:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:11:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290d1b802b74d3ac60e79d3b6e8fc750c9827fddb9d470c724d4072f6e991404/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290d1b802b74d3ac60e79d3b6e8fc750c9827fddb9d470c724d4072f6e991404/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290d1b802b74d3ac60e79d3b6e8fc750c9827fddb9d470c724d4072f6e991404/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290d1b802b74d3ac60e79d3b6e8fc750c9827fddb9d470c724d4072f6e991404/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290d1b802b74d3ac60e79d3b6e8fc750c9827fddb9d470c724d4072f6e991404/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:10 np0005465604 podman[264141]: 2025-10-02 08:11:10.074553215 +0000 UTC m=+0.146378122 container init 1caaf46c43fa5630e194b2539974f57216410e7e777832998527891332de9eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 04:11:10 np0005465604 podman[264141]: 2025-10-02 08:11:10.081642586 +0000 UTC m=+0.153467463 container start 1caaf46c43fa5630e194b2539974f57216410e7e777832998527891332de9eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:11:10 np0005465604 podman[264141]: 2025-10-02 08:11:10.084972769 +0000 UTC m=+0.156797686 container attach 1caaf46c43fa5630e194b2539974f57216410e7e777832998527891332de9eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:11:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:11 np0005465604 suspicious_feistel[264158]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:11:11 np0005465604 suspicious_feistel[264158]: --> relative data size: 1.0
Oct  2 04:11:11 np0005465604 suspicious_feistel[264158]: --> All data devices are unavailable
Oct  2 04:11:11 np0005465604 systemd[1]: libpod-1caaf46c43fa5630e194b2539974f57216410e7e777832998527891332de9eb4.scope: Deactivated successfully.
Oct  2 04:11:11 np0005465604 systemd[1]: libpod-1caaf46c43fa5630e194b2539974f57216410e7e777832998527891332de9eb4.scope: Consumed 1.058s CPU time.
Oct  2 04:11:11 np0005465604 podman[264187]: 2025-10-02 08:11:11.24357397 +0000 UTC m=+0.031064986 container died 1caaf46c43fa5630e194b2539974f57216410e7e777832998527891332de9eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  2 04:11:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay-290d1b802b74d3ac60e79d3b6e8fc750c9827fddb9d470c724d4072f6e991404-merged.mount: Deactivated successfully.
Oct  2 04:11:11 np0005465604 podman[264187]: 2025-10-02 08:11:11.315802406 +0000 UTC m=+0.103293442 container remove 1caaf46c43fa5630e194b2539974f57216410e7e777832998527891332de9eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 04:11:11 np0005465604 systemd[1]: libpod-conmon-1caaf46c43fa5630e194b2539974f57216410e7e777832998527891332de9eb4.scope: Deactivated successfully.
Oct  2 04:11:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:12 np0005465604 podman[264343]: 2025-10-02 08:11:12.199014575 +0000 UTC m=+0.073193616 container create 308f34828e3709a05446e0309cfd898edfbab6a94817fdf3dbe658991cad5128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:11:12 np0005465604 systemd[1]: Started libpod-conmon-308f34828e3709a05446e0309cfd898edfbab6a94817fdf3dbe658991cad5128.scope.
Oct  2 04:11:12 np0005465604 podman[264343]: 2025-10-02 08:11:12.170977554 +0000 UTC m=+0.045156635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:11:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:11:12 np0005465604 podman[264343]: 2025-10-02 08:11:12.300239862 +0000 UTC m=+0.174418953 container init 308f34828e3709a05446e0309cfd898edfbab6a94817fdf3dbe658991cad5128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilbur, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:11:12 np0005465604 podman[264343]: 2025-10-02 08:11:12.313445673 +0000 UTC m=+0.187624674 container start 308f34828e3709a05446e0309cfd898edfbab6a94817fdf3dbe658991cad5128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilbur, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:11:12 np0005465604 podman[264343]: 2025-10-02 08:11:12.317443628 +0000 UTC m=+0.191622729 container attach 308f34828e3709a05446e0309cfd898edfbab6a94817fdf3dbe658991cad5128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 04:11:12 np0005465604 relaxed_wilbur[264359]: 167 167
Oct  2 04:11:12 np0005465604 systemd[1]: libpod-308f34828e3709a05446e0309cfd898edfbab6a94817fdf3dbe658991cad5128.scope: Deactivated successfully.
Oct  2 04:11:12 np0005465604 podman[264343]: 2025-10-02 08:11:12.321665449 +0000 UTC m=+0.195844490 container died 308f34828e3709a05446e0309cfd898edfbab6a94817fdf3dbe658991cad5128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilbur, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:11:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-95d07f0def345acd78f712a865487530fe8ea6c91c3ef1327906774f35fae30f-merged.mount: Deactivated successfully.
Oct  2 04:11:12 np0005465604 podman[264343]: 2025-10-02 08:11:12.371396165 +0000 UTC m=+0.245575196 container remove 308f34828e3709a05446e0309cfd898edfbab6a94817fdf3dbe658991cad5128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 04:11:12 np0005465604 systemd[1]: libpod-conmon-308f34828e3709a05446e0309cfd898edfbab6a94817fdf3dbe658991cad5128.scope: Deactivated successfully.
Oct  2 04:11:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:12 np0005465604 podman[264380]: 2025-10-02 08:11:12.630908303 +0000 UTC m=+0.045704651 container create e1a8ccf5612c3e79c0603963639fa290808c2f64184c40e0dba742a31dd0ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:11:12 np0005465604 systemd[1]: Started libpod-conmon-e1a8ccf5612c3e79c0603963639fa290808c2f64184c40e0dba742a31dd0ea84.scope.
Oct  2 04:11:12 np0005465604 podman[264380]: 2025-10-02 08:11:12.609912651 +0000 UTC m=+0.024708979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:11:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:11:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81378951bd74694e5ece139bef432a98aa9df0c5c90eba0476c8cf131346c84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81378951bd74694e5ece139bef432a98aa9df0c5c90eba0476c8cf131346c84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81378951bd74694e5ece139bef432a98aa9df0c5c90eba0476c8cf131346c84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a81378951bd74694e5ece139bef432a98aa9df0c5c90eba0476c8cf131346c84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:12 np0005465604 podman[264380]: 2025-10-02 08:11:12.760325567 +0000 UTC m=+0.175121885 container init e1a8ccf5612c3e79c0603963639fa290808c2f64184c40e0dba742a31dd0ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:11:12 np0005465604 podman[264380]: 2025-10-02 08:11:12.768700547 +0000 UTC m=+0.183496895 container start e1a8ccf5612c3e79c0603963639fa290808c2f64184c40e0dba742a31dd0ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:11:12 np0005465604 podman[264380]: 2025-10-02 08:11:12.772817025 +0000 UTC m=+0.187613343 container attach e1a8ccf5612c3e79c0603963639fa290808c2f64184c40e0dba742a31dd0ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:11:13 np0005465604 magical_diffie[264397]: {
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:    "0": [
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:        {
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "devices": [
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "/dev/loop3"
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            ],
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_name": "ceph_lv0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_size": "21470642176",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "name": "ceph_lv0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "tags": {
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.cluster_name": "ceph",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.crush_device_class": "",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.encrypted": "0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.osd_id": "0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.type": "block",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.vdo": "0"
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            },
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "type": "block",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "vg_name": "ceph_vg0"
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:        }
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:    ],
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:    "1": [
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:        {
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "devices": [
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "/dev/loop4"
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            ],
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_name": "ceph_lv1",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_size": "21470642176",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "name": "ceph_lv1",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "tags": {
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.cluster_name": "ceph",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.crush_device_class": "",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.encrypted": "0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.osd_id": "1",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.type": "block",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.vdo": "0"
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            },
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "type": "block",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "vg_name": "ceph_vg1"
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:        }
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:    ],
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:    "2": [
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:        {
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "devices": [
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "/dev/loop5"
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            ],
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_name": "ceph_lv2",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_size": "21470642176",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "name": "ceph_lv2",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "tags": {
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.cluster_name": "ceph",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.crush_device_class": "",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.encrypted": "0",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.osd_id": "2",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.type": "block",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:                "ceph.vdo": "0"
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            },
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "type": "block",
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:            "vg_name": "ceph_vg2"
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:        }
Oct  2 04:11:13 np0005465604 magical_diffie[264397]:    ]
Oct  2 04:11:13 np0005465604 magical_diffie[264397]: }
Oct  2 04:11:13 np0005465604 systemd[1]: libpod-e1a8ccf5612c3e79c0603963639fa290808c2f64184c40e0dba742a31dd0ea84.scope: Deactivated successfully.
Oct  2 04:11:13 np0005465604 conmon[264397]: conmon e1a8ccf5612c3e79c060 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1a8ccf5612c3e79c0603963639fa290808c2f64184c40e0dba742a31dd0ea84.scope/container/memory.events
Oct  2 04:11:13 np0005465604 podman[264380]: 2025-10-02 08:11:13.548024686 +0000 UTC m=+0.962821014 container died e1a8ccf5612c3e79c0603963639fa290808c2f64184c40e0dba742a31dd0ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 04:11:13 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a81378951bd74694e5ece139bef432a98aa9df0c5c90eba0476c8cf131346c84-merged.mount: Deactivated successfully.
Oct  2 04:11:13 np0005465604 podman[264380]: 2025-10-02 08:11:13.61565997 +0000 UTC m=+1.030456298 container remove e1a8ccf5612c3e79c0603963639fa290808c2f64184c40e0dba742a31dd0ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:11:13 np0005465604 systemd[1]: libpod-conmon-e1a8ccf5612c3e79c0603963639fa290808c2f64184c40e0dba742a31dd0ea84.scope: Deactivated successfully.
Oct  2 04:11:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:14 np0005465604 podman[264559]: 2025-10-02 08:11:14.458358789 +0000 UTC m=+0.075695074 container create 623ddeb86bbdcd304a9b4393cad9f028e93f9a6d52697fb3f5e93a86793183b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_clarke, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 04:11:14 np0005465604 systemd[1]: Started libpod-conmon-623ddeb86bbdcd304a9b4393cad9f028e93f9a6d52697fb3f5e93a86793183b0.scope.
Oct  2 04:11:14 np0005465604 podman[264559]: 2025-10-02 08:11:14.427378907 +0000 UTC m=+0.044715232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:11:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:11:14 np0005465604 podman[264559]: 2025-10-02 08:11:14.564010724 +0000 UTC m=+0.181347019 container init 623ddeb86bbdcd304a9b4393cad9f028e93f9a6d52697fb3f5e93a86793183b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_clarke, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:11:14 np0005465604 podman[264559]: 2025-10-02 08:11:14.572168568 +0000 UTC m=+0.189504813 container start 623ddeb86bbdcd304a9b4393cad9f028e93f9a6d52697fb3f5e93a86793183b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_clarke, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 04:11:14 np0005465604 podman[264559]: 2025-10-02 08:11:14.575968016 +0000 UTC m=+0.193304301 container attach 623ddeb86bbdcd304a9b4393cad9f028e93f9a6d52697fb3f5e93a86793183b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:11:14 np0005465604 hungry_clarke[264575]: 167 167
Oct  2 04:11:14 np0005465604 systemd[1]: libpod-623ddeb86bbdcd304a9b4393cad9f028e93f9a6d52697fb3f5e93a86793183b0.scope: Deactivated successfully.
Oct  2 04:11:14 np0005465604 podman[264559]: 2025-10-02 08:11:14.583727918 +0000 UTC m=+0.201064183 container died 623ddeb86bbdcd304a9b4393cad9f028e93f9a6d52697fb3f5e93a86793183b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_clarke, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:11:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5ed0797be6aaea4ee65ec114e7cfd230eab21c8278270e61a1848222e1986d37-merged.mount: Deactivated successfully.
Oct  2 04:11:14 np0005465604 podman[264559]: 2025-10-02 08:11:14.627117027 +0000 UTC m=+0.244453272 container remove 623ddeb86bbdcd304a9b4393cad9f028e93f9a6d52697fb3f5e93a86793183b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_clarke, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:11:14 np0005465604 systemd[1]: libpod-conmon-623ddeb86bbdcd304a9b4393cad9f028e93f9a6d52697fb3f5e93a86793183b0.scope: Deactivated successfully.
Oct  2 04:11:14 np0005465604 podman[264599]: 2025-10-02 08:11:14.861363489 +0000 UTC m=+0.063555907 container create 0eaa41a3f9bfcff52b1cc96febd6fe87704f65b9cced42598167372d5971123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 04:11:14 np0005465604 systemd[1]: Started libpod-conmon-0eaa41a3f9bfcff52b1cc96febd6fe87704f65b9cced42598167372d5971123d.scope.
Oct  2 04:11:14 np0005465604 podman[264599]: 2025-10-02 08:11:14.841649037 +0000 UTC m=+0.043841435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:11:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:11:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3146d63963d1007be685d360f57126094e7ecb7310b63fee25eb6c5c8e3706/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3146d63963d1007be685d360f57126094e7ecb7310b63fee25eb6c5c8e3706/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3146d63963d1007be685d360f57126094e7ecb7310b63fee25eb6c5c8e3706/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3146d63963d1007be685d360f57126094e7ecb7310b63fee25eb6c5c8e3706/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:11:14 np0005465604 podman[264599]: 2025-10-02 08:11:14.962660438 +0000 UTC m=+0.164852906 container init 0eaa41a3f9bfcff52b1cc96febd6fe87704f65b9cced42598167372d5971123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 04:11:14 np0005465604 podman[264599]: 2025-10-02 08:11:14.977436128 +0000 UTC m=+0.179628546 container start 0eaa41a3f9bfcff52b1cc96febd6fe87704f65b9cced42598167372d5971123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:11:14 np0005465604 podman[264599]: 2025-10-02 08:11:14.982600638 +0000 UTC m=+0.184793046 container attach 0eaa41a3f9bfcff52b1cc96febd6fe87704f65b9cced42598167372d5971123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]: {
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "osd_id": 2,
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "type": "bluestore"
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:    },
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "osd_id": 1,
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "type": "bluestore"
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:    },
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "osd_id": 0,
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:        "type": "bluestore"
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]:    }
Oct  2 04:11:15 np0005465604 lucid_jepsen[264615]: }
Oct  2 04:11:15 np0005465604 systemd[1]: libpod-0eaa41a3f9bfcff52b1cc96febd6fe87704f65b9cced42598167372d5971123d.scope: Deactivated successfully.
Oct  2 04:11:15 np0005465604 podman[264599]: 2025-10-02 08:11:15.998835133 +0000 UTC m=+1.201027551 container died 0eaa41a3f9bfcff52b1cc96febd6fe87704f65b9cced42598167372d5971123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 04:11:16 np0005465604 systemd[1]: libpod-0eaa41a3f9bfcff52b1cc96febd6fe87704f65b9cced42598167372d5971123d.scope: Consumed 1.029s CPU time.
Oct  2 04:11:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-cb3146d63963d1007be685d360f57126094e7ecb7310b63fee25eb6c5c8e3706-merged.mount: Deactivated successfully.
Oct  2 04:11:16 np0005465604 podman[264599]: 2025-10-02 08:11:16.052352227 +0000 UTC m=+1.254544605 container remove 0eaa41a3f9bfcff52b1cc96febd6fe87704f65b9cced42598167372d5971123d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:11:16 np0005465604 systemd[1]: libpod-conmon-0eaa41a3f9bfcff52b1cc96febd6fe87704f65b9cced42598167372d5971123d.scope: Deactivated successfully.
Oct  2 04:11:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:11:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:11:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:11:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:11:16 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 01ed0112-3023-4bc2-91df-14f842aae624 does not exist
Oct  2 04:11:16 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9b25be6a-3a83-4062-a560-5e73d35bc467 does not exist
Oct  2 04:11:16 np0005465604 podman[264644]: 2025-10-02 08:11:16.146659509 +0000 UTC m=+0.197547692 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:11:16 np0005465604 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 04:11:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:17 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:11:17 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:11:18 np0005465604 podman[264736]: 2025-10-02 08:11:18.010142666 +0000 UTC m=+0.070178943 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 04:11:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:11:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1837433618' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:11:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:11:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1837433618' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:11:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:23 np0005465604 podman[264755]: 2025-10-02 08:11:23.046193368 +0000 UTC m=+0.101754194 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:11:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:11:27
Oct  2 04:11:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:11:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:11:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'backups']
Oct  2 04:11:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:11:28 np0005465604 podman[264775]: 2025-10-02 08:11:28.025048153 +0000 UTC m=+0.082034082 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:11:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:11:34.797 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:11:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:11:34.797 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:11:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:11:34.797 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:11:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:11:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:47 np0005465604 podman[264795]: 2025-10-02 08:11:47.059647257 +0000 UTC m=+0.114331406 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct  2 04:11:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:48 np0005465604 podman[264823]: 2025-10-02 08:11:48.989683863 +0000 UTC m=+0.056410316 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 04:11:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:53 np0005465604 podman[264843]: 2025-10-02 08:11:53.992163942 +0000 UTC m=+0.059519672 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct  2 04:11:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:11:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:11:58 np0005465604 nova_compute[260603]: 2025-10-02 08:11:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:11:59 np0005465604 podman[264863]: 2025-10-02 08:11:59.033005754 +0000 UTC m=+0.089657289 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 04:11:59 np0005465604 nova_compute[260603]: 2025-10-02 08:11:59.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:11:59 np0005465604 nova_compute[260603]: 2025-10-02 08:11:59.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:12:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:00 np0005465604 nova_compute[260603]: 2025-10-02 08:12:00.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:12:00 np0005465604 nova_compute[260603]: 2025-10-02 08:12:00.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:12:00 np0005465604 nova_compute[260603]: 2025-10-02 08:12:00.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:12:00 np0005465604 nova_compute[260603]: 2025-10-02 08:12:00.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:12:00 np0005465604 nova_compute[260603]: 2025-10-02 08:12:00.543 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:12:00 np0005465604 nova_compute[260603]: 2025-10-02 08:12:00.543 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:12:00 np0005465604 nova_compute[260603]: 2025-10-02 08:12:00.544 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:12:01 np0005465604 nova_compute[260603]: 2025-10-02 08:12:01.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:12:01 np0005465604 nova_compute[260603]: 2025-10-02 08:12:01.534 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:12:01 np0005465604 nova_compute[260603]: 2025-10-02 08:12:01.534 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:12:01 np0005465604 nova_compute[260603]: 2025-10-02 08:12:01.555 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:12:01 np0005465604 nova_compute[260603]: 2025-10-02 08:12:01.555 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:12:01 np0005465604 nova_compute[260603]: 2025-10-02 08:12:01.556 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:12:01 np0005465604 nova_compute[260603]: 2025-10-02 08:12:01.556 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:12:01 np0005465604 nova_compute[260603]: 2025-10-02 08:12:01.556 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:12:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:12:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684293308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.047 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.212 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.213 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5199MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.213 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.213 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.266 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.266 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.280 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:12:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:12:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735585149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.722 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.729 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.744 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.745 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:12:02 np0005465604 nova_compute[260603]: 2025-10-02 08:12:02.746 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:12:03 np0005465604 nova_compute[260603]: 2025-10-02 08:12:03.731 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:12:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:12:17 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c4cc84a1-e60f-49b2-b653-c7bc4798b9e0 does not exist
Oct  2 04:12:17 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2825a0e1-c2df-4dbe-8b06-4b41f67ea281 does not exist
Oct  2 04:12:17 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ebacc2a6-3013-4c2c-a95a-85b19a8a91d6 does not exist
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:12:17 np0005465604 podman[265085]: 2025-10-02 08:12:17.52280491 +0000 UTC m=+0.134769621 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:12:17 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:12:17 np0005465604 podman[265228]: 2025-10-02 08:12:17.987405274 +0000 UTC m=+0.065296530 container create be7f73c6f3c3dc2324c6696e364c720577e3be79bffde374761ebec0993284af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 04:12:18 np0005465604 systemd[1]: Started libpod-conmon-be7f73c6f3c3dc2324c6696e364c720577e3be79bffde374761ebec0993284af.scope.
Oct  2 04:12:18 np0005465604 podman[265228]: 2025-10-02 08:12:17.956849675 +0000 UTC m=+0.034740971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:12:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:12:18 np0005465604 podman[265228]: 2025-10-02 08:12:18.068790154 +0000 UTC m=+0.146681380 container init be7f73c6f3c3dc2324c6696e364c720577e3be79bffde374761ebec0993284af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 04:12:18 np0005465604 podman[265228]: 2025-10-02 08:12:18.076012469 +0000 UTC m=+0.153903705 container start be7f73c6f3c3dc2324c6696e364c720577e3be79bffde374761ebec0993284af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:12:18 np0005465604 podman[265228]: 2025-10-02 08:12:18.079969472 +0000 UTC m=+0.157860688 container attach be7f73c6f3c3dc2324c6696e364c720577e3be79bffde374761ebec0993284af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 04:12:18 np0005465604 wonderful_shamir[265245]: 167 167
Oct  2 04:12:18 np0005465604 systemd[1]: libpod-be7f73c6f3c3dc2324c6696e364c720577e3be79bffde374761ebec0993284af.scope: Deactivated successfully.
Oct  2 04:12:18 np0005465604 podman[265228]: 2025-10-02 08:12:18.08374128 +0000 UTC m=+0.161632496 container died be7f73c6f3c3dc2324c6696e364c720577e3be79bffde374761ebec0993284af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 04:12:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ea96259f7c5cb23ba965a1aff33ebc1706d1e57114ce339b0d57c498e064af62-merged.mount: Deactivated successfully.
Oct  2 04:12:18 np0005465604 podman[265228]: 2025-10-02 08:12:18.116427016 +0000 UTC m=+0.194318232 container remove be7f73c6f3c3dc2324c6696e364c720577e3be79bffde374761ebec0993284af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shamir, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:12:18 np0005465604 systemd[1]: libpod-conmon-be7f73c6f3c3dc2324c6696e364c720577e3be79bffde374761ebec0993284af.scope: Deactivated successfully.
Oct  2 04:12:18 np0005465604 podman[265269]: 2025-10-02 08:12:18.32242884 +0000 UTC m=+0.064076273 container create d5ee08b2efde04905a6a9968a7edaa690ab73f29112e383de3058afe2c7cc591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 04:12:18 np0005465604 systemd[1]: Started libpod-conmon-d5ee08b2efde04905a6a9968a7edaa690ab73f29112e383de3058afe2c7cc591.scope.
Oct  2 04:12:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:12:18 np0005465604 podman[265269]: 2025-10-02 08:12:18.302721538 +0000 UTC m=+0.044369011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:12:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a5888830071613e8448a60fc88ccb07a4c402a082ae1995783f551c7b5c380/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a5888830071613e8448a60fc88ccb07a4c402a082ae1995783f551c7b5c380/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a5888830071613e8448a60fc88ccb07a4c402a082ae1995783f551c7b5c380/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a5888830071613e8448a60fc88ccb07a4c402a082ae1995783f551c7b5c380/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a5888830071613e8448a60fc88ccb07a4c402a082ae1995783f551c7b5c380/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:18 np0005465604 podman[265269]: 2025-10-02 08:12:18.416061622 +0000 UTC m=+0.157709115 container init d5ee08b2efde04905a6a9968a7edaa690ab73f29112e383de3058afe2c7cc591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:12:18 np0005465604 podman[265269]: 2025-10-02 08:12:18.427431285 +0000 UTC m=+0.169078758 container start d5ee08b2efde04905a6a9968a7edaa690ab73f29112e383de3058afe2c7cc591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:12:18 np0005465604 podman[265269]: 2025-10-02 08:12:18.431983777 +0000 UTC m=+0.173631290 container attach d5ee08b2efde04905a6a9968a7edaa690ab73f29112e383de3058afe2c7cc591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:12:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:19 np0005465604 sad_shannon[265285]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:12:19 np0005465604 sad_shannon[265285]: --> relative data size: 1.0
Oct  2 04:12:19 np0005465604 sad_shannon[265285]: --> All data devices are unavailable
Oct  2 04:12:19 np0005465604 systemd[1]: libpod-d5ee08b2efde04905a6a9968a7edaa690ab73f29112e383de3058afe2c7cc591.scope: Deactivated successfully.
Oct  2 04:12:19 np0005465604 systemd[1]: libpod-d5ee08b2efde04905a6a9968a7edaa690ab73f29112e383de3058afe2c7cc591.scope: Consumed 1.122s CPU time.
Oct  2 04:12:19 np0005465604 podman[265269]: 2025-10-02 08:12:19.591620981 +0000 UTC m=+1.333268454 container died d5ee08b2efde04905a6a9968a7edaa690ab73f29112e383de3058afe2c7cc591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 04:12:19 np0005465604 systemd[1]: var-lib-containers-storage-overlay-72a5888830071613e8448a60fc88ccb07a4c402a082ae1995783f551c7b5c380-merged.mount: Deactivated successfully.
Oct  2 04:12:19 np0005465604 podman[265269]: 2025-10-02 08:12:19.668597614 +0000 UTC m=+1.410245067 container remove d5ee08b2efde04905a6a9968a7edaa690ab73f29112e383de3058afe2c7cc591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shannon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:12:19 np0005465604 systemd[1]: libpod-conmon-d5ee08b2efde04905a6a9968a7edaa690ab73f29112e383de3058afe2c7cc591.scope: Deactivated successfully.
Oct  2 04:12:19 np0005465604 podman[265314]: 2025-10-02 08:12:19.708589857 +0000 UTC m=+0.076012595 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:12:20 np0005465604 podman[265485]: 2025-10-02 08:12:20.294114401 +0000 UTC m=+0.043684789 container create c24a3d43b02eff56d713503c9f142e5907bdbe7b56d1e814311ac2eff1812834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:12:20 np0005465604 systemd[1]: Started libpod-conmon-c24a3d43b02eff56d713503c9f142e5907bdbe7b56d1e814311ac2eff1812834.scope.
Oct  2 04:12:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:12:20 np0005465604 podman[265485]: 2025-10-02 08:12:20.275899835 +0000 UTC m=+0.025470253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:12:20 np0005465604 podman[265485]: 2025-10-02 08:12:20.372394955 +0000 UTC m=+0.121965373 container init c24a3d43b02eff56d713503c9f142e5907bdbe7b56d1e814311ac2eff1812834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 04:12:20 np0005465604 podman[265485]: 2025-10-02 08:12:20.384027297 +0000 UTC m=+0.133597685 container start c24a3d43b02eff56d713503c9f142e5907bdbe7b56d1e814311ac2eff1812834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:12:20 np0005465604 podman[265485]: 2025-10-02 08:12:20.387721022 +0000 UTC m=+0.137291460 container attach c24a3d43b02eff56d713503c9f142e5907bdbe7b56d1e814311ac2eff1812834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:12:20 np0005465604 clever_kalam[265501]: 167 167
Oct  2 04:12:20 np0005465604 systemd[1]: libpod-c24a3d43b02eff56d713503c9f142e5907bdbe7b56d1e814311ac2eff1812834.scope: Deactivated successfully.
Oct  2 04:12:20 np0005465604 podman[265485]: 2025-10-02 08:12:20.38895984 +0000 UTC m=+0.138530248 container died c24a3d43b02eff56d713503c9f142e5907bdbe7b56d1e814311ac2eff1812834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 04:12:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5b2d7af847364fa2e2ef5413b1bc105e8faaf1d813888a21b075e0e95b972ada-merged.mount: Deactivated successfully.
Oct  2 04:12:20 np0005465604 podman[265485]: 2025-10-02 08:12:20.431992519 +0000 UTC m=+0.181562907 container remove c24a3d43b02eff56d713503c9f142e5907bdbe7b56d1e814311ac2eff1812834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 04:12:20 np0005465604 systemd[1]: libpod-conmon-c24a3d43b02eff56d713503c9f142e5907bdbe7b56d1e814311ac2eff1812834.scope: Deactivated successfully.
Oct  2 04:12:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:20 np0005465604 podman[265526]: 2025-10-02 08:12:20.631505961 +0000 UTC m=+0.048518889 container create c1e7fb51464cbeb97def51d5f1f26c181c5e5d7e2d4828d4bff2f9823243e892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:12:20 np0005465604 systemd[1]: Started libpod-conmon-c1e7fb51464cbeb97def51d5f1f26c181c5e5d7e2d4828d4bff2f9823243e892.scope.
Oct  2 04:12:20 np0005465604 podman[265526]: 2025-10-02 08:12:20.611710397 +0000 UTC m=+0.028723325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:12:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:12:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c6c1d77edb29ec420aef15edf986ceda0fd1c1986c33841ee108257ac449e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c6c1d77edb29ec420aef15edf986ceda0fd1c1986c33841ee108257ac449e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c6c1d77edb29ec420aef15edf986ceda0fd1c1986c33841ee108257ac449e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c6c1d77edb29ec420aef15edf986ceda0fd1c1986c33841ee108257ac449e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:20 np0005465604 podman[265526]: 2025-10-02 08:12:20.725775272 +0000 UTC m=+0.142788210 container init c1e7fb51464cbeb97def51d5f1f26c181c5e5d7e2d4828d4bff2f9823243e892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 04:12:20 np0005465604 podman[265526]: 2025-10-02 08:12:20.746535158 +0000 UTC m=+0.163548086 container start c1e7fb51464cbeb97def51d5f1f26c181c5e5d7e2d4828d4bff2f9823243e892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:12:20 np0005465604 podman[265526]: 2025-10-02 08:12:20.750668306 +0000 UTC m=+0.167681204 container attach c1e7fb51464cbeb97def51d5f1f26c181c5e5d7e2d4828d4bff2f9823243e892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]: {
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:    "0": [
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:        {
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "devices": [
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "/dev/loop3"
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            ],
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_name": "ceph_lv0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_size": "21470642176",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "name": "ceph_lv0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "tags": {
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.cluster_name": "ceph",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.crush_device_class": "",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.encrypted": "0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.osd_id": "0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.type": "block",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.vdo": "0"
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            },
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "type": "block",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "vg_name": "ceph_vg0"
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:        }
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:    ],
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:    "1": [
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:        {
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "devices": [
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "/dev/loop4"
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            ],
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_name": "ceph_lv1",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_size": "21470642176",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "name": "ceph_lv1",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "tags": {
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.cluster_name": "ceph",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.crush_device_class": "",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.encrypted": "0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.osd_id": "1",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.type": "block",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.vdo": "0"
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            },
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "type": "block",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "vg_name": "ceph_vg1"
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:        }
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:    ],
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:    "2": [
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:        {
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "devices": [
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "/dev/loop5"
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            ],
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_name": "ceph_lv2",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_size": "21470642176",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "name": "ceph_lv2",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "tags": {
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.cluster_name": "ceph",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.crush_device_class": "",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.encrypted": "0",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.osd_id": "2",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.type": "block",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:                "ceph.vdo": "0"
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            },
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "type": "block",
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:            "vg_name": "ceph_vg2"
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:        }
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]:    ]
Oct  2 04:12:21 np0005465604 wonderful_antonelli[265542]: }
Oct  2 04:12:21 np0005465604 systemd[1]: libpod-c1e7fb51464cbeb97def51d5f1f26c181c5e5d7e2d4828d4bff2f9823243e892.scope: Deactivated successfully.
Oct  2 04:12:21 np0005465604 podman[265552]: 2025-10-02 08:12:21.568215784 +0000 UTC m=+0.041273344 container died c1e7fb51464cbeb97def51d5f1f26c181c5e5d7e2d4828d4bff2f9823243e892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:12:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-76c6c1d77edb29ec420aef15edf986ceda0fd1c1986c33841ee108257ac449e9-merged.mount: Deactivated successfully.
Oct  2 04:12:21 np0005465604 podman[265552]: 2025-10-02 08:12:21.64268558 +0000 UTC m=+0.115743100 container remove c1e7fb51464cbeb97def51d5f1f26c181c5e5d7e2d4828d4bff2f9823243e892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 04:12:21 np0005465604 systemd[1]: libpod-conmon-c1e7fb51464cbeb97def51d5f1f26c181c5e5d7e2d4828d4bff2f9823243e892.scope: Deactivated successfully.
Oct  2 04:12:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:12:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4012412391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:12:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:12:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4012412391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:12:22 np0005465604 podman[265708]: 2025-10-02 08:12:22.47965018 +0000 UTC m=+0.054346690 container create 416163112b2d0fab74bbdcf80ada6d4d53333fc44d805dd27faef2d67ad180b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:12:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:22 np0005465604 systemd[1]: Started libpod-conmon-416163112b2d0fab74bbdcf80ada6d4d53333fc44d805dd27faef2d67ad180b8.scope.
Oct  2 04:12:22 np0005465604 podman[265708]: 2025-10-02 08:12:22.458157793 +0000 UTC m=+0.032854303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:12:22 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:12:22 np0005465604 podman[265708]: 2025-10-02 08:12:22.578095291 +0000 UTC m=+0.152791771 container init 416163112b2d0fab74bbdcf80ada6d4d53333fc44d805dd27faef2d67ad180b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct  2 04:12:22 np0005465604 podman[265708]: 2025-10-02 08:12:22.589897799 +0000 UTC m=+0.164594299 container start 416163112b2d0fab74bbdcf80ada6d4d53333fc44d805dd27faef2d67ad180b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:12:22 np0005465604 podman[265708]: 2025-10-02 08:12:22.594134561 +0000 UTC m=+0.168831051 container attach 416163112b2d0fab74bbdcf80ada6d4d53333fc44d805dd27faef2d67ad180b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:12:22 np0005465604 sleepy_kirch[265725]: 167 167
Oct  2 04:12:22 np0005465604 systemd[1]: libpod-416163112b2d0fab74bbdcf80ada6d4d53333fc44d805dd27faef2d67ad180b8.scope: Deactivated successfully.
Oct  2 04:12:22 np0005465604 conmon[265725]: conmon 416163112b2d0fab74bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-416163112b2d0fab74bbdcf80ada6d4d53333fc44d805dd27faef2d67ad180b8.scope/container/memory.events
Oct  2 04:12:22 np0005465604 podman[265708]: 2025-10-02 08:12:22.598931189 +0000 UTC m=+0.173627689 container died 416163112b2d0fab74bbdcf80ada6d4d53333fc44d805dd27faef2d67ad180b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 04:12:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3899b85241a1700a51524e0c800d7ed7980438edd16674801187b33e6f8e0250-merged.mount: Deactivated successfully.
Oct  2 04:12:22 np0005465604 podman[265708]: 2025-10-02 08:12:22.646788198 +0000 UTC m=+0.221484708 container remove 416163112b2d0fab74bbdcf80ada6d4d53333fc44d805dd27faef2d67ad180b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:12:22 np0005465604 systemd[1]: libpod-conmon-416163112b2d0fab74bbdcf80ada6d4d53333fc44d805dd27faef2d67ad180b8.scope: Deactivated successfully.
Oct  2 04:12:22 np0005465604 podman[265750]: 2025-10-02 08:12:22.853584386 +0000 UTC m=+0.053609937 container create 7a9c402ef9c6abc92a2cfdbc2066388995a3d3ba6e209e50d13f7c0f9c4bd867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bartik, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:12:22 np0005465604 systemd[1]: Started libpod-conmon-7a9c402ef9c6abc92a2cfdbc2066388995a3d3ba6e209e50d13f7c0f9c4bd867.scope.
Oct  2 04:12:22 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:12:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a798c95ca9f35f3b4acd9100683bd9846b6a6bcb46fc65f5c857e8080164ce2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a798c95ca9f35f3b4acd9100683bd9846b6a6bcb46fc65f5c857e8080164ce2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a798c95ca9f35f3b4acd9100683bd9846b6a6bcb46fc65f5c857e8080164ce2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a798c95ca9f35f3b4acd9100683bd9846b6a6bcb46fc65f5c857e8080164ce2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:12:22 np0005465604 podman[265750]: 2025-10-02 08:12:22.837409424 +0000 UTC m=+0.037434985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:12:22 np0005465604 podman[265750]: 2025-10-02 08:12:22.935144472 +0000 UTC m=+0.135170033 container init 7a9c402ef9c6abc92a2cfdbc2066388995a3d3ba6e209e50d13f7c0f9c4bd867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct  2 04:12:22 np0005465604 podman[265750]: 2025-10-02 08:12:22.944199984 +0000 UTC m=+0.144225545 container start 7a9c402ef9c6abc92a2cfdbc2066388995a3d3ba6e209e50d13f7c0f9c4bd867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:12:22 np0005465604 podman[265750]: 2025-10-02 08:12:22.948032383 +0000 UTC m=+0.148057944 container attach 7a9c402ef9c6abc92a2cfdbc2066388995a3d3ba6e209e50d13f7c0f9c4bd867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 04:12:23 np0005465604 loving_bartik[265766]: {
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "osd_id": 2,
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "type": "bluestore"
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:    },
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "osd_id": 1,
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "type": "bluestore"
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:    },
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "osd_id": 0,
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:        "type": "bluestore"
Oct  2 04:12:23 np0005465604 loving_bartik[265766]:    }
Oct  2 04:12:23 np0005465604 loving_bartik[265766]: }
Oct  2 04:12:24 np0005465604 systemd[1]: libpod-7a9c402ef9c6abc92a2cfdbc2066388995a3d3ba6e209e50d13f7c0f9c4bd867.scope: Deactivated successfully.
Oct  2 04:12:24 np0005465604 systemd[1]: libpod-7a9c402ef9c6abc92a2cfdbc2066388995a3d3ba6e209e50d13f7c0f9c4bd867.scope: Consumed 1.104s CPU time.
Oct  2 04:12:24 np0005465604 podman[265750]: 2025-10-02 08:12:24.041172909 +0000 UTC m=+1.241198530 container died 7a9c402ef9c6abc92a2cfdbc2066388995a3d3ba6e209e50d13f7c0f9c4bd867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bartik, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:12:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3a798c95ca9f35f3b4acd9100683bd9846b6a6bcb46fc65f5c857e8080164ce2-merged.mount: Deactivated successfully.
Oct  2 04:12:24 np0005465604 podman[265750]: 2025-10-02 08:12:24.098000986 +0000 UTC m=+1.298026527 container remove 7a9c402ef9c6abc92a2cfdbc2066388995a3d3ba6e209e50d13f7c0f9c4bd867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 04:12:24 np0005465604 systemd[1]: libpod-conmon-7a9c402ef9c6abc92a2cfdbc2066388995a3d3ba6e209e50d13f7c0f9c4bd867.scope: Deactivated successfully.
Oct  2 04:12:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:12:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:12:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:12:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:12:24 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 064b22ec-b680-4779-b7cf-28bec1acdb60 does not exist
Oct  2 04:12:24 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev db4da15d-a11c-4ba1-a0ec-f7b67410dce0 does not exist
Oct  2 04:12:24 np0005465604 podman[265799]: 2025-10-02 08:12:24.202246977 +0000 UTC m=+0.105863593 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 04:12:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:25 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:12:25 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:12:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:12:27
Oct  2 04:12:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:12:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:12:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'backups', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'vms', 'volumes']
Oct  2 04:12:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:12:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:30 np0005465604 podman[265881]: 2025-10-02 08:12:30.007267216 +0000 UTC m=+0.074331542 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 04:12:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:12:34.797 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:12:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:12:34.798 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:12:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:12:34.798 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:12:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:12:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:48 np0005465604 podman[265901]: 2025-10-02 08:12:48.052154778 +0000 UTC m=+0.113011984 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 04:12:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:50 np0005465604 podman[265928]: 2025-10-02 08:12:50.01937248 +0000 UTC m=+0.079234144 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:12:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.568196) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392774568312, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2045, "num_deletes": 251, "total_data_size": 3454763, "memory_usage": 3513600, "flush_reason": "Manual Compaction"}
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392774587361, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3389904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16334, "largest_seqno": 18378, "table_properties": {"data_size": 3380597, "index_size": 5865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18291, "raw_average_key_size": 19, "raw_value_size": 3362175, "raw_average_value_size": 3638, "num_data_blocks": 266, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759392544, "oldest_key_time": 1759392544, "file_creation_time": 1759392774, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 19199 microseconds, and 12167 cpu microseconds.
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.587413) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3389904 bytes OK
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.587437) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.589262) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.589277) EVENT_LOG_v1 {"time_micros": 1759392774589273, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.589296) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3446221, prev total WAL file size 3446221, number of live WAL files 2.
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.590549) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3310KB)], [38(7624KB)]
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392774590646, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11197243, "oldest_snapshot_seqno": -1}
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4407 keys, 9428334 bytes, temperature: kUnknown
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392774649562, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9428334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9395015, "index_size": 21177, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106457, "raw_average_key_size": 24, "raw_value_size": 9311573, "raw_average_value_size": 2112, "num_data_blocks": 900, "num_entries": 4407, "num_filter_entries": 4407, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759392774, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.649966) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9428334 bytes
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.651691) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.6 rd, 159.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4921, records dropped: 514 output_compression: NoCompression
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.651720) EVENT_LOG_v1 {"time_micros": 1759392774651706, "job": 18, "event": "compaction_finished", "compaction_time_micros": 59048, "compaction_time_cpu_micros": 41032, "output_level": 6, "num_output_files": 1, "total_output_size": 9428334, "num_input_records": 4921, "num_output_records": 4407, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392774653068, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392774655772, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.590425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.655816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.655822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.655825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.655828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:54 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:54.655831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:54 np0005465604 podman[265947]: 2025-10-02 08:12:54.985555341 +0000 UTC m=+0.058047186 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid)
Oct  2 04:12:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.833473) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392776833510, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 269, "num_deletes": 250, "total_data_size": 41519, "memory_usage": 46544, "flush_reason": "Manual Compaction"}
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392776836409, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 41221, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18379, "largest_seqno": 18647, "table_properties": {"data_size": 39338, "index_size": 112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5053, "raw_average_key_size": 19, "raw_value_size": 35724, "raw_average_value_size": 135, "num_data_blocks": 5, "num_entries": 263, "num_filter_entries": 263, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759392775, "oldest_key_time": 1759392775, "file_creation_time": 1759392776, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 2993 microseconds, and 1071 cpu microseconds.
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.836463) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 41221 bytes OK
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.836485) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.838071) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.838093) EVENT_LOG_v1 {"time_micros": 1759392776838086, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.838111) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 39458, prev total WAL file size 39458, number of live WAL files 2.
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.838551) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(40KB)], [41(9207KB)]
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392776838604, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9469555, "oldest_snapshot_seqno": -1}
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4163 keys, 6179030 bytes, temperature: kUnknown
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392776881194, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6179030, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6152015, "index_size": 15514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 101825, "raw_average_key_size": 24, "raw_value_size": 6077407, "raw_average_value_size": 1459, "num_data_blocks": 653, "num_entries": 4163, "num_filter_entries": 4163, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759392776, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.881540) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6179030 bytes
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.883085) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.8 rd, 144.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.0 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(379.6) write-amplify(149.9) OK, records in: 4670, records dropped: 507 output_compression: NoCompression
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.883116) EVENT_LOG_v1 {"time_micros": 1759392776883102, "job": 20, "event": "compaction_finished", "compaction_time_micros": 42697, "compaction_time_cpu_micros": 31619, "output_level": 6, "num_output_files": 1, "total_output_size": 6179030, "num_input_records": 4670, "num_output_records": 4163, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392776883299, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392776886438, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.838450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.886509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.886514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.886515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.886517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:12:56.886518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:12:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:12:58 np0005465604 nova_compute[260603]: 2025-10-02 08:12:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:12:58 np0005465604 nova_compute[260603]: 2025-10-02 08:12:58.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 04:12:58 np0005465604 nova_compute[260603]: 2025-10-02 08:12:58.543 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 04:12:58 np0005465604 nova_compute[260603]: 2025-10-02 08:12:58.544 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:12:58 np0005465604 nova_compute[260603]: 2025-10-02 08:12:58.545 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 04:12:58 np0005465604 nova_compute[260603]: 2025-10-02 08:12:58.560 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:12:59 np0005465604 nova_compute[260603]: 2025-10-02 08:12:59.571 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:13:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:00 np0005465604 nova_compute[260603]: 2025-10-02 08:13:00.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:13:00 np0005465604 nova_compute[260603]: 2025-10-02 08:13:00.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:13:01 np0005465604 podman[265967]: 2025-10-02 08:13:01.019692529 +0000 UTC m=+0.080356758 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:13:01 np0005465604 nova_compute[260603]: 2025-10-02 08:13:01.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:13:01 np0005465604 nova_compute[260603]: 2025-10-02 08:13:01.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:13:01 np0005465604 nova_compute[260603]: 2025-10-02 08:13:01.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:13:01 np0005465604 nova_compute[260603]: 2025-10-02 08:13:01.534 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:13:01 np0005465604 nova_compute[260603]: 2025-10-02 08:13:01.534 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:13:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:02 np0005465604 nova_compute[260603]: 2025-10-02 08:13:02.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:13:02 np0005465604 nova_compute[260603]: 2025-10-02 08:13:02.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:13:02 np0005465604 nova_compute[260603]: 2025-10-02 08:13:02.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:13:03 np0005465604 nova_compute[260603]: 2025-10-02 08:13:03.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:13:03 np0005465604 nova_compute[260603]: 2025-10-02 08:13:03.550 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:13:03 np0005465604 nova_compute[260603]: 2025-10-02 08:13:03.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:13:03 np0005465604 nova_compute[260603]: 2025-10-02 08:13:03.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:13:03 np0005465604 nova_compute[260603]: 2025-10-02 08:13:03.551 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:13:03 np0005465604 nova_compute[260603]: 2025-10-02 08:13:03.552 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:13:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:13:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2262033843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:13:03 np0005465604 nova_compute[260603]: 2025-10-02 08:13:03.960 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.165 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.166 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5173MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.166 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.167 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.424 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.425 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.500 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 04:13:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.580 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.581 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.604 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.628 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 04:13:04 np0005465604 nova_compute[260603]: 2025-10-02 08:13:04.642 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:13:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:13:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317031649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:13:05 np0005465604 nova_compute[260603]: 2025-10-02 08:13:05.111 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:13:05 np0005465604 nova_compute[260603]: 2025-10-02 08:13:05.116 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:13:05 np0005465604 nova_compute[260603]: 2025-10-02 08:13:05.132 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:13:05 np0005465604 nova_compute[260603]: 2025-10-02 08:13:05.135 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:13:05 np0005465604 nova_compute[260603]: 2025-10-02 08:13:05.136 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:13:06 np0005465604 nova_compute[260603]: 2025-10-02 08:13:06.135 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:13:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:19 np0005465604 podman[266031]: 2025-10-02 08:13:19.098264502 +0000 UTC m=+0.165967760 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:13:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:20 np0005465604 podman[266058]: 2025-10-02 08:13:20.995831594 +0000 UTC m=+0.055986889 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:13:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:13:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3398353376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:13:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:13:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3398353376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:13:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:13:24 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2332976a-acbc-44dd-a30d-034004cc5724 does not exist
Oct  2 04:13:24 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4965d673-eff7-489c-b5b8-4c7175e68e30 does not exist
Oct  2 04:13:24 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c46e208d-81aa-4988-affd-475a43211369 does not exist
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:13:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:13:25 np0005465604 podman[266232]: 2025-10-02 08:13:25.081855531 +0000 UTC m=+0.050582229 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 04:13:25 np0005465604 podman[266369]: 2025-10-02 08:13:25.513741298 +0000 UTC m=+0.061876622 container create 73256bf85abe8b2c312233169142cc0aa26b9d40f4f2b984a3af3accdd4f9456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 04:13:25 np0005465604 podman[266369]: 2025-10-02 08:13:25.489700538 +0000 UTC m=+0.037835932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:13:25 np0005465604 systemd[1]: Started libpod-conmon-73256bf85abe8b2c312233169142cc0aa26b9d40f4f2b984a3af3accdd4f9456.scope.
Oct  2 04:13:25 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:13:25 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:13:25 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:13:25 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:13:25 np0005465604 podman[266369]: 2025-10-02 08:13:25.63912597 +0000 UTC m=+0.187261304 container init 73256bf85abe8b2c312233169142cc0aa26b9d40f4f2b984a3af3accdd4f9456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:13:25 np0005465604 podman[266369]: 2025-10-02 08:13:25.647341087 +0000 UTC m=+0.195476391 container start 73256bf85abe8b2c312233169142cc0aa26b9d40f4f2b984a3af3accdd4f9456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:13:25 np0005465604 podman[266369]: 2025-10-02 08:13:25.651166286 +0000 UTC m=+0.199301610 container attach 73256bf85abe8b2c312233169142cc0aa26b9d40f4f2b984a3af3accdd4f9456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 04:13:25 np0005465604 thirsty_cray[266386]: 167 167
Oct  2 04:13:25 np0005465604 systemd[1]: libpod-73256bf85abe8b2c312233169142cc0aa26b9d40f4f2b984a3af3accdd4f9456.scope: Deactivated successfully.
Oct  2 04:13:25 np0005465604 podman[266369]: 2025-10-02 08:13:25.652215198 +0000 UTC m=+0.200350502 container died 73256bf85abe8b2c312233169142cc0aa26b9d40f4f2b984a3af3accdd4f9456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 04:13:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3edd87ac1524998949b81843e54334a94bad760862634c2e7bbe15db5751b7fe-merged.mount: Deactivated successfully.
Oct  2 04:13:25 np0005465604 podman[266369]: 2025-10-02 08:13:25.693528508 +0000 UTC m=+0.241663812 container remove 73256bf85abe8b2c312233169142cc0aa26b9d40f4f2b984a3af3accdd4f9456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:13:25 np0005465604 systemd[1]: libpod-conmon-73256bf85abe8b2c312233169142cc0aa26b9d40f4f2b984a3af3accdd4f9456.scope: Deactivated successfully.
Oct  2 04:13:25 np0005465604 podman[266412]: 2025-10-02 08:13:25.858462774 +0000 UTC m=+0.048535526 container create a397a7667c843b2cfe1ad722245253af50359faf669de94529bc2f0a0758363b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldberg, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:13:25 np0005465604 systemd[1]: Started libpod-conmon-a397a7667c843b2cfe1ad722245253af50359faf669de94529bc2f0a0758363b.scope.
Oct  2 04:13:25 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:13:25 np0005465604 podman[266412]: 2025-10-02 08:13:25.839545834 +0000 UTC m=+0.029618606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:13:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9595f1411f47fa59eea3fcc22597b6f1717b448c58e0c26c00dc362cc3bd747a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9595f1411f47fa59eea3fcc22597b6f1717b448c58e0c26c00dc362cc3bd747a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9595f1411f47fa59eea3fcc22597b6f1717b448c58e0c26c00dc362cc3bd747a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9595f1411f47fa59eea3fcc22597b6f1717b448c58e0c26c00dc362cc3bd747a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9595f1411f47fa59eea3fcc22597b6f1717b448c58e0c26c00dc362cc3bd747a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:25 np0005465604 podman[266412]: 2025-10-02 08:13:25.953708126 +0000 UTC m=+0.143780898 container init a397a7667c843b2cfe1ad722245253af50359faf669de94529bc2f0a0758363b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 04:13:25 np0005465604 podman[266412]: 2025-10-02 08:13:25.959086514 +0000 UTC m=+0.149159266 container start a397a7667c843b2cfe1ad722245253af50359faf669de94529bc2f0a0758363b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:13:25 np0005465604 podman[266412]: 2025-10-02 08:13:25.966207106 +0000 UTC m=+0.156279878 container attach a397a7667c843b2cfe1ad722245253af50359faf669de94529bc2f0a0758363b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 04:13:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:27 np0005465604 admiring_goldberg[266429]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:13:27 np0005465604 admiring_goldberg[266429]: --> relative data size: 1.0
Oct  2 04:13:27 np0005465604 admiring_goldberg[266429]: --> All data devices are unavailable
Oct  2 04:13:27 np0005465604 systemd[1]: libpod-a397a7667c843b2cfe1ad722245253af50359faf669de94529bc2f0a0758363b.scope: Deactivated successfully.
Oct  2 04:13:27 np0005465604 systemd[1]: libpod-a397a7667c843b2cfe1ad722245253af50359faf669de94529bc2f0a0758363b.scope: Consumed 1.067s CPU time.
Oct  2 04:13:27 np0005465604 podman[266412]: 2025-10-02 08:13:27.083575001 +0000 UTC m=+1.273647773 container died a397a7667c843b2cfe1ad722245253af50359faf669de94529bc2f0a0758363b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldberg, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:13:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9595f1411f47fa59eea3fcc22597b6f1717b448c58e0c26c00dc362cc3bd747a-merged.mount: Deactivated successfully.
Oct  2 04:13:27 np0005465604 podman[266412]: 2025-10-02 08:13:27.16010831 +0000 UTC m=+1.350181072 container remove a397a7667c843b2cfe1ad722245253af50359faf669de94529bc2f0a0758363b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 04:13:27 np0005465604 systemd[1]: libpod-conmon-a397a7667c843b2cfe1ad722245253af50359faf669de94529bc2f0a0758363b.scope: Deactivated successfully.
Oct  2 04:13:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:13:27
Oct  2 04:13:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:13:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:13:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'backups', 'images']
Oct  2 04:13:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:13:27 np0005465604 podman[266612]: 2025-10-02 08:13:27.991645856 +0000 UTC m=+0.055239434 container create 7e9c6d859eaf53611a27094a51d01848dbee3b87e377575a04db49f568f1ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct  2 04:13:28 np0005465604 systemd[1]: Started libpod-conmon-7e9c6d859eaf53611a27094a51d01848dbee3b87e377575a04db49f568f1ec7d.scope.
Oct  2 04:13:28 np0005465604 podman[266612]: 2025-10-02 08:13:27.970870738 +0000 UTC m=+0.034464356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:13:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:13:28 np0005465604 podman[266612]: 2025-10-02 08:13:28.112167877 +0000 UTC m=+0.175761555 container init 7e9c6d859eaf53611a27094a51d01848dbee3b87e377575a04db49f568f1ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:13:28 np0005465604 podman[266612]: 2025-10-02 08:13:28.126125133 +0000 UTC m=+0.189718751 container start 7e9c6d859eaf53611a27094a51d01848dbee3b87e377575a04db49f568f1ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kare, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:13:28 np0005465604 podman[266612]: 2025-10-02 08:13:28.130062525 +0000 UTC m=+0.193656143 container attach 7e9c6d859eaf53611a27094a51d01848dbee3b87e377575a04db49f568f1ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 04:13:28 np0005465604 vigilant_kare[266629]: 167 167
Oct  2 04:13:28 np0005465604 systemd[1]: libpod-7e9c6d859eaf53611a27094a51d01848dbee3b87e377575a04db49f568f1ec7d.scope: Deactivated successfully.
Oct  2 04:13:28 np0005465604 podman[266612]: 2025-10-02 08:13:28.134588407 +0000 UTC m=+0.198182025 container died 7e9c6d859eaf53611a27094a51d01848dbee3b87e377575a04db49f568f1ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:13:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-eb8879f4686900ff0d7147c351df0ebff9990dd8f0a5252038bf1685b309e285-merged.mount: Deactivated successfully.
Oct  2 04:13:28 np0005465604 podman[266612]: 2025-10-02 08:13:28.188334894 +0000 UTC m=+0.251928512 container remove 7e9c6d859eaf53611a27094a51d01848dbee3b87e377575a04db49f568f1ec7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:13:28 np0005465604 systemd[1]: libpod-conmon-7e9c6d859eaf53611a27094a51d01848dbee3b87e377575a04db49f568f1ec7d.scope: Deactivated successfully.
Oct  2 04:13:28 np0005465604 podman[266653]: 2025-10-02 08:13:28.443166525 +0000 UTC m=+0.074096053 container create 114934f355283ecaa3a8226cee31bd7eab6b71795cde475bd0ce032e3f04cd86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:13:28 np0005465604 systemd[1]: Started libpod-conmon-114934f355283ecaa3a8226cee31bd7eab6b71795cde475bd0ce032e3f04cd86.scope.
Oct  2 04:13:28 np0005465604 podman[266653]: 2025-10-02 08:13:28.413078786 +0000 UTC m=+0.044008414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:13:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:13:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f346d63b9c8d52e884ebf75143147e1bbdfbea40373261fe824916b443cc735/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f346d63b9c8d52e884ebf75143147e1bbdfbea40373261fe824916b443cc735/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f346d63b9c8d52e884ebf75143147e1bbdfbea40373261fe824916b443cc735/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f346d63b9c8d52e884ebf75143147e1bbdfbea40373261fe824916b443cc735/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:28 np0005465604 podman[266653]: 2025-10-02 08:13:28.549202254 +0000 UTC m=+0.180131822 container init 114934f355283ecaa3a8226cee31bd7eab6b71795cde475bd0ce032e3f04cd86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 04:13:28 np0005465604 podman[266653]: 2025-10-02 08:13:28.562654954 +0000 UTC m=+0.193584522 container start 114934f355283ecaa3a8226cee31bd7eab6b71795cde475bd0ce032e3f04cd86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:13:28 np0005465604 podman[266653]: 2025-10-02 08:13:28.567049911 +0000 UTC m=+0.197979479 container attach 114934f355283ecaa3a8226cee31bd7eab6b71795cde475bd0ce032e3f04cd86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]: {
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:    "0": [
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:        {
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "devices": [
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "/dev/loop3"
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            ],
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_name": "ceph_lv0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_size": "21470642176",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "name": "ceph_lv0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "tags": {
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.cluster_name": "ceph",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.crush_device_class": "",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.encrypted": "0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.osd_id": "0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.type": "block",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.vdo": "0"
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            },
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "type": "block",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "vg_name": "ceph_vg0"
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:        }
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:    ],
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:    "1": [
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:        {
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "devices": [
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "/dev/loop4"
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            ],
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_name": "ceph_lv1",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_size": "21470642176",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "name": "ceph_lv1",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "tags": {
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.cluster_name": "ceph",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.crush_device_class": "",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.encrypted": "0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.osd_id": "1",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.type": "block",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.vdo": "0"
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            },
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "type": "block",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "vg_name": "ceph_vg1"
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:        }
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:    ],
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:    "2": [
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:        {
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "devices": [
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "/dev/loop5"
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            ],
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_name": "ceph_lv2",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_size": "21470642176",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "name": "ceph_lv2",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "tags": {
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.cluster_name": "ceph",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.crush_device_class": "",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.encrypted": "0",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.osd_id": "2",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.type": "block",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:                "ceph.vdo": "0"
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            },
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "type": "block",
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:            "vg_name": "ceph_vg2"
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:        }
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]:    ]
Oct  2 04:13:29 np0005465604 jolly_babbage[266670]: }
Oct  2 04:13:29 np0005465604 systemd[1]: libpod-114934f355283ecaa3a8226cee31bd7eab6b71795cde475bd0ce032e3f04cd86.scope: Deactivated successfully.
Oct  2 04:13:29 np0005465604 podman[266653]: 2025-10-02 08:13:29.381005039 +0000 UTC m=+1.011934607 container died 114934f355283ecaa3a8226cee31bd7eab6b71795cde475bd0ce032e3f04cd86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 04:13:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4f346d63b9c8d52e884ebf75143147e1bbdfbea40373261fe824916b443cc735-merged.mount: Deactivated successfully.
Oct  2 04:13:29 np0005465604 podman[266653]: 2025-10-02 08:13:29.465073442 +0000 UTC m=+1.096002980 container remove 114934f355283ecaa3a8226cee31bd7eab6b71795cde475bd0ce032e3f04cd86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:13:29 np0005465604 systemd[1]: libpod-conmon-114934f355283ecaa3a8226cee31bd7eab6b71795cde475bd0ce032e3f04cd86.scope: Deactivated successfully.
Oct  2 04:13:30 np0005465604 podman[266834]: 2025-10-02 08:13:30.240680314 +0000 UTC m=+0.070550473 container create aa3e4f9123c6261a93377ed950a672048d93b3bba06d253e868e3753ded7eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 04:13:30 np0005465604 systemd[1]: Started libpod-conmon-aa3e4f9123c6261a93377ed950a672048d93b3bba06d253e868e3753ded7eada.scope.
Oct  2 04:13:30 np0005465604 podman[266834]: 2025-10-02 08:13:30.211681399 +0000 UTC m=+0.041551628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:13:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:13:30 np0005465604 podman[266834]: 2025-10-02 08:13:30.34311375 +0000 UTC m=+0.172983969 container init aa3e4f9123c6261a93377ed950a672048d93b3bba06d253e868e3753ded7eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 04:13:30 np0005465604 podman[266834]: 2025-10-02 08:13:30.355089673 +0000 UTC m=+0.184959832 container start aa3e4f9123c6261a93377ed950a672048d93b3bba06d253e868e3753ded7eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 04:13:30 np0005465604 podman[266834]: 2025-10-02 08:13:30.359887913 +0000 UTC m=+0.189758072 container attach aa3e4f9123c6261a93377ed950a672048d93b3bba06d253e868e3753ded7eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:13:30 np0005465604 thirsty_lumiere[266851]: 167 167
Oct  2 04:13:30 np0005465604 systemd[1]: libpod-aa3e4f9123c6261a93377ed950a672048d93b3bba06d253e868e3753ded7eada.scope: Deactivated successfully.
Oct  2 04:13:30 np0005465604 podman[266834]: 2025-10-02 08:13:30.361740601 +0000 UTC m=+0.191610770 container died aa3e4f9123c6261a93377ed950a672048d93b3bba06d253e868e3753ded7eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 04:13:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9da7accb3df9fb5ebdbf3d60fe6f099e58b58fed4d1eb6c67e8a11e6fb3b17d7-merged.mount: Deactivated successfully.
Oct  2 04:13:30 np0005465604 podman[266834]: 2025-10-02 08:13:30.418321316 +0000 UTC m=+0.248191425 container remove aa3e4f9123c6261a93377ed950a672048d93b3bba06d253e868e3753ded7eada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 04:13:30 np0005465604 systemd[1]: libpod-conmon-aa3e4f9123c6261a93377ed950a672048d93b3bba06d253e868e3753ded7eada.scope: Deactivated successfully.
Oct  2 04:13:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:30 np0005465604 podman[266875]: 2025-10-02 08:13:30.62480905 +0000 UTC m=+0.055131501 container create cd346d837123264678741bcafc75a6a597db6b6d20c09a6fcf76d9bb09ef3e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 04:13:30 np0005465604 systemd[1]: Started libpod-conmon-cd346d837123264678741bcafc75a6a597db6b6d20c09a6fcf76d9bb09ef3e07.scope.
Oct  2 04:13:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:13:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbb0a0b620d729e867a1187b972edf83b041a9367e194892bf9162da3786007/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:30 np0005465604 podman[266875]: 2025-10-02 08:13:30.598445948 +0000 UTC m=+0.028768219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:13:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbb0a0b620d729e867a1187b972edf83b041a9367e194892bf9162da3786007/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbb0a0b620d729e867a1187b972edf83b041a9367e194892bf9162da3786007/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbb0a0b620d729e867a1187b972edf83b041a9367e194892bf9162da3786007/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:13:30 np0005465604 podman[266875]: 2025-10-02 08:13:30.718263396 +0000 UTC m=+0.148585597 container init cd346d837123264678741bcafc75a6a597db6b6d20c09a6fcf76d9bb09ef3e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 04:13:30 np0005465604 podman[266875]: 2025-10-02 08:13:30.724280194 +0000 UTC m=+0.154602415 container start cd346d837123264678741bcafc75a6a597db6b6d20c09a6fcf76d9bb09ef3e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 04:13:30 np0005465604 podman[266875]: 2025-10-02 08:13:30.728519516 +0000 UTC m=+0.158841717 container attach cd346d837123264678741bcafc75a6a597db6b6d20c09a6fcf76d9bb09ef3e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 04:13:31 np0005465604 strange_pike[266892]: {
Oct  2 04:13:31 np0005465604 strange_pike[266892]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "osd_id": 2,
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "type": "bluestore"
Oct  2 04:13:31 np0005465604 strange_pike[266892]:    },
Oct  2 04:13:31 np0005465604 strange_pike[266892]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "osd_id": 1,
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "type": "bluestore"
Oct  2 04:13:31 np0005465604 strange_pike[266892]:    },
Oct  2 04:13:31 np0005465604 strange_pike[266892]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "osd_id": 0,
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:13:31 np0005465604 strange_pike[266892]:        "type": "bluestore"
Oct  2 04:13:31 np0005465604 strange_pike[266892]:    }
Oct  2 04:13:31 np0005465604 strange_pike[266892]: }
Oct  2 04:13:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:31 np0005465604 systemd[1]: libpod-cd346d837123264678741bcafc75a6a597db6b6d20c09a6fcf76d9bb09ef3e07.scope: Deactivated successfully.
Oct  2 04:13:31 np0005465604 systemd[1]: libpod-cd346d837123264678741bcafc75a6a597db6b6d20c09a6fcf76d9bb09ef3e07.scope: Consumed 1.148s CPU time.
Oct  2 04:13:31 np0005465604 podman[266875]: 2025-10-02 08:13:31.862777068 +0000 UTC m=+1.293099259 container died cd346d837123264678741bcafc75a6a597db6b6d20c09a6fcf76d9bb09ef3e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:13:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fbbb0a0b620d729e867a1187b972edf83b041a9367e194892bf9162da3786007-merged.mount: Deactivated successfully.
Oct  2 04:13:31 np0005465604 podman[266875]: 2025-10-02 08:13:31.947741379 +0000 UTC m=+1.378063600 container remove cd346d837123264678741bcafc75a6a597db6b6d20c09a6fcf76d9bb09ef3e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct  2 04:13:31 np0005465604 systemd[1]: libpod-conmon-cd346d837123264678741bcafc75a6a597db6b6d20c09a6fcf76d9bb09ef3e07.scope: Deactivated successfully.
Oct  2 04:13:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:13:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:13:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:13:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:13:32 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 7e8dd2dd-a12c-4d72-8b1f-dbb8e6884af2 does not exist
Oct  2 04:13:32 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8bef4465-df95-4b97-bfa3-eed32624704d does not exist
Oct  2 04:13:32 np0005465604 podman[266925]: 2025-10-02 08:13:32.016775293 +0000 UTC m=+0.110435156 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:13:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:13:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:13:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:13:34.797 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:13:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:13:34.798 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:13:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:13:34.798 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:13:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:13:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:50 np0005465604 podman[267011]: 2025-10-02 08:13:50.097289457 +0000 UTC m=+0.155201464 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:13:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:52 np0005465604 podman[267039]: 2025-10-02 08:13:52.039119559 +0000 UTC m=+0.085798448 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:13:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:56 np0005465604 podman[267060]: 2025-10-02 08:13:56.049042352 +0000 UTC m=+0.096895324 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:13:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:13:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:13:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:01 np0005465604 nova_compute[260603]: 2025-10-02 08:14:01.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:14:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:01 np0005465604 nova_compute[260603]: 2025-10-02 08:14:01.864 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:14:01 np0005465604 nova_compute[260603]: 2025-10-02 08:14:01.865 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:14:01 np0005465604 nova_compute[260603]: 2025-10-02 08:14:01.865 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:14:01 np0005465604 nova_compute[260603]: 2025-10-02 08:14:01.879 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:14:01 np0005465604 nova_compute[260603]: 2025-10-02 08:14:01.880 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:14:01 np0005465604 nova_compute[260603]: 2025-10-02 08:14:01.880 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:14:01 np0005465604 nova_compute[260603]: 2025-10-02 08:14:01.881 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:14:02 np0005465604 nova_compute[260603]: 2025-10-02 08:14:02.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:14:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:03 np0005465604 podman[267080]: 2025-10-02 08:14:03.028774882 +0000 UTC m=+0.079855963 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 04:14:03 np0005465604 nova_compute[260603]: 2025-10-02 08:14:03.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:14:03 np0005465604 nova_compute[260603]: 2025-10-02 08:14:03.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:14:03 np0005465604 nova_compute[260603]: 2025-10-02 08:14:03.546 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:14:03 np0005465604 nova_compute[260603]: 2025-10-02 08:14:03.546 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:14:03 np0005465604 nova_compute[260603]: 2025-10-02 08:14:03.546 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:14:03 np0005465604 nova_compute[260603]: 2025-10-02 08:14:03.547 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:14:03 np0005465604 nova_compute[260603]: 2025-10-02 08:14:03.547 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:14:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:14:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3186498898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:14:03 np0005465604 nova_compute[260603]: 2025-10-02 08:14:03.950 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.167 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.168 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5183MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.168 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.169 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.245 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.245 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.262 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:14:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:14:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3036503839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.715 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.724 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.739 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.742 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:14:04 np0005465604 nova_compute[260603]: 2025-10-02 08:14:04.743 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:14:05 np0005465604 nova_compute[260603]: 2025-10-02 08:14:05.744 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:14:05 np0005465604 nova_compute[260603]: 2025-10-02 08:14:05.745 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:14:05 np0005465604 nova_compute[260603]: 2025-10-02 08:14:05.745 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:14:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.854387) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392856854443, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 875, "num_deletes": 255, "total_data_size": 1183247, "memory_usage": 1206256, "flush_reason": "Manual Compaction"}
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392856866444, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1172538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18648, "largest_seqno": 19522, "table_properties": {"data_size": 1168151, "index_size": 2039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9112, "raw_average_key_size": 18, "raw_value_size": 1159375, "raw_average_value_size": 2342, "num_data_blocks": 93, "num_entries": 495, "num_filter_entries": 495, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759392777, "oldest_key_time": 1759392777, "file_creation_time": 1759392856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 12120 microseconds, and 6651 cpu microseconds.
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.866505) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1172538 bytes OK
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.866531) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.868684) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.868711) EVENT_LOG_v1 {"time_micros": 1759392856868703, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.868736) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1178955, prev total WAL file size 1178955, number of live WAL files 2.
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.869692) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353031' seq:0, type:0; will stop at (end)
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1145KB)], [44(6034KB)]
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392856869732, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7351568, "oldest_snapshot_seqno": -1}
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4136 keys, 7226642 bytes, temperature: kUnknown
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392856901963, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7226642, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7198118, "index_size": 17086, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 102337, "raw_average_key_size": 24, "raw_value_size": 7122337, "raw_average_value_size": 1722, "num_data_blocks": 718, "num_entries": 4136, "num_filter_entries": 4136, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759392856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.902185) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7226642 bytes
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.903433) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.9 rd, 224.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 5.9 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(12.4) write-amplify(6.2) OK, records in: 4658, records dropped: 522 output_compression: NoCompression
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.903452) EVENT_LOG_v1 {"time_micros": 1759392856903443, "job": 22, "event": "compaction_finished", "compaction_time_micros": 32261, "compaction_time_cpu_micros": 19645, "output_level": 6, "num_output_files": 1, "total_output_size": 7226642, "num_input_records": 4658, "num_output_records": 4136, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392856903697, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392856904884, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.869565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.904909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.904913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.904914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.904915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:14:16 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:14:16.904917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:14:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:21 np0005465604 podman[267144]: 2025-10-02 08:14:21.073451595 +0000 UTC m=+0.126022333 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:14:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:14:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1133812864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:14:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:14:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1133812864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:14:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:23 np0005465604 podman[267171]: 2025-10-02 08:14:23.022103478 +0000 UTC m=+0.081886715 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:14:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:26 np0005465604 podman[267190]: 2025-10-02 08:14:26.99090534 +0000 UTC m=+0.060092436 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 04:14:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:14:27
Oct  2 04:14:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:14:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:14:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['images', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta']
Oct  2 04:14:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:14:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:14:33 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev aef51996-4f95-43a3-8546-dfe8b9a493bb does not exist
Oct  2 04:14:33 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0adebe33-0f76-4fc1-a9c4-b10f569d260f does not exist
Oct  2 04:14:33 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9515b152-522a-421e-a266-3c3f9b68564a does not exist
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:14:33 np0005465604 podman[267367]: 2025-10-02 08:14:33.412437792 +0000 UTC m=+0.083421153 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:14:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:14:33 np0005465604 podman[267503]: 2025-10-02 08:14:33.918765821 +0000 UTC m=+0.046749049 container create 6b42a5e9fbf81494aeeeb968b8f891ee7fc5917a579fe6c7bcdbcd1e462c72d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:14:33 np0005465604 systemd[1]: Started libpod-conmon-6b42a5e9fbf81494aeeeb968b8f891ee7fc5917a579fe6c7bcdbcd1e462c72d7.scope.
Oct  2 04:14:33 np0005465604 podman[267503]: 2025-10-02 08:14:33.897384775 +0000 UTC m=+0.025368033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:14:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:14:34 np0005465604 podman[267503]: 2025-10-02 08:14:34.020902828 +0000 UTC m=+0.148886146 container init 6b42a5e9fbf81494aeeeb968b8f891ee7fc5917a579fe6c7bcdbcd1e462c72d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:14:34 np0005465604 podman[267503]: 2025-10-02 08:14:34.02994105 +0000 UTC m=+0.157924288 container start 6b42a5e9fbf81494aeeeb968b8f891ee7fc5917a579fe6c7bcdbcd1e462c72d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 04:14:34 np0005465604 podman[267503]: 2025-10-02 08:14:34.0334622 +0000 UTC m=+0.161445468 container attach 6b42a5e9fbf81494aeeeb968b8f891ee7fc5917a579fe6c7bcdbcd1e462c72d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:14:34 np0005465604 amazing_galileo[267520]: 167 167
Oct  2 04:14:34 np0005465604 systemd[1]: libpod-6b42a5e9fbf81494aeeeb968b8f891ee7fc5917a579fe6c7bcdbcd1e462c72d7.scope: Deactivated successfully.
Oct  2 04:14:34 np0005465604 podman[267503]: 2025-10-02 08:14:34.04020512 +0000 UTC m=+0.168188348 container died 6b42a5e9fbf81494aeeeb968b8f891ee7fc5917a579fe6c7bcdbcd1e462c72d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 04:14:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5a1c51d8bdc3d49626797d2189fce9d3a66911c91dac288f96a227496931aff5-merged.mount: Deactivated successfully.
Oct  2 04:14:34 np0005465604 podman[267503]: 2025-10-02 08:14:34.081654644 +0000 UTC m=+0.209637872 container remove 6b42a5e9fbf81494aeeeb968b8f891ee7fc5917a579fe6c7bcdbcd1e462c72d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:14:34 np0005465604 systemd[1]: libpod-conmon-6b42a5e9fbf81494aeeeb968b8f891ee7fc5917a579fe6c7bcdbcd1e462c72d7.scope: Deactivated successfully.
Oct  2 04:14:34 np0005465604 podman[267543]: 2025-10-02 08:14:34.293702231 +0000 UTC m=+0.047799212 container create 989d7a411e980f722e7ac93bbc836c386a7410e92dfb30fb0948da3b8ee61c3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:14:34 np0005465604 systemd[1]: Started libpod-conmon-989d7a411e980f722e7ac93bbc836c386a7410e92dfb30fb0948da3b8ee61c3b.scope.
Oct  2 04:14:34 np0005465604 podman[267543]: 2025-10-02 08:14:34.271577071 +0000 UTC m=+0.025674142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:14:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:14:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b36161dfdb9935959b44fb0c43a36678552928b21ba5d3bc0e51848b72936/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b36161dfdb9935959b44fb0c43a36678552928b21ba5d3bc0e51848b72936/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b36161dfdb9935959b44fb0c43a36678552928b21ba5d3bc0e51848b72936/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b36161dfdb9935959b44fb0c43a36678552928b21ba5d3bc0e51848b72936/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b36161dfdb9935959b44fb0c43a36678552928b21ba5d3bc0e51848b72936/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:34 np0005465604 podman[267543]: 2025-10-02 08:14:34.393589318 +0000 UTC m=+0.147686329 container init 989d7a411e980f722e7ac93bbc836c386a7410e92dfb30fb0948da3b8ee61c3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:14:34 np0005465604 podman[267543]: 2025-10-02 08:14:34.40517708 +0000 UTC m=+0.159274051 container start 989d7a411e980f722e7ac93bbc836c386a7410e92dfb30fb0948da3b8ee61c3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:14:34 np0005465604 podman[267543]: 2025-10-02 08:14:34.408070349 +0000 UTC m=+0.162167360 container attach 989d7a411e980f722e7ac93bbc836c386a7410e92dfb30fb0948da3b8ee61c3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chatelet, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:14:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:14:34.799 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:14:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:14:34.799 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:14:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:14:34.799 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:14:35 np0005465604 romantic_chatelet[267560]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:14:35 np0005465604 romantic_chatelet[267560]: --> relative data size: 1.0
Oct  2 04:14:35 np0005465604 romantic_chatelet[267560]: --> All data devices are unavailable
Oct  2 04:14:35 np0005465604 systemd[1]: libpod-989d7a411e980f722e7ac93bbc836c386a7410e92dfb30fb0948da3b8ee61c3b.scope: Deactivated successfully.
Oct  2 04:14:35 np0005465604 podman[267543]: 2025-10-02 08:14:35.650677253 +0000 UTC m=+1.404774294 container died 989d7a411e980f722e7ac93bbc836c386a7410e92dfb30fb0948da3b8ee61c3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:14:35 np0005465604 systemd[1]: libpod-989d7a411e980f722e7ac93bbc836c386a7410e92dfb30fb0948da3b8ee61c3b.scope: Consumed 1.181s CPU time.
Oct  2 04:14:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a04b36161dfdb9935959b44fb0c43a36678552928b21ba5d3bc0e51848b72936-merged.mount: Deactivated successfully.
Oct  2 04:14:35 np0005465604 podman[267543]: 2025-10-02 08:14:35.726658194 +0000 UTC m=+1.480755175 container remove 989d7a411e980f722e7ac93bbc836c386a7410e92dfb30fb0948da3b8ee61c3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 04:14:35 np0005465604 systemd[1]: libpod-conmon-989d7a411e980f722e7ac93bbc836c386a7410e92dfb30fb0948da3b8ee61c3b.scope: Deactivated successfully.
Oct  2 04:14:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:36 np0005465604 podman[267743]: 2025-10-02 08:14:36.583683666 +0000 UTC m=+0.070410468 container create 892cfdd1464e594d556474d0fc11b09f216bed2fa8bc83b882711f24f35e1cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:14:36 np0005465604 systemd[1]: Started libpod-conmon-892cfdd1464e594d556474d0fc11b09f216bed2fa8bc83b882711f24f35e1cd9.scope.
Oct  2 04:14:36 np0005465604 podman[267743]: 2025-10-02 08:14:36.555344591 +0000 UTC m=+0.042071453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:14:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:14:36 np0005465604 podman[267743]: 2025-10-02 08:14:36.678134993 +0000 UTC m=+0.164861835 container init 892cfdd1464e594d556474d0fc11b09f216bed2fa8bc83b882711f24f35e1cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 04:14:36 np0005465604 podman[267743]: 2025-10-02 08:14:36.684505302 +0000 UTC m=+0.171232104 container start 892cfdd1464e594d556474d0fc11b09f216bed2fa8bc83b882711f24f35e1cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:14:36 np0005465604 wonderful_cori[267759]: 167 167
Oct  2 04:14:36 np0005465604 systemd[1]: libpod-892cfdd1464e594d556474d0fc11b09f216bed2fa8bc83b882711f24f35e1cd9.scope: Deactivated successfully.
Oct  2 04:14:36 np0005465604 podman[267743]: 2025-10-02 08:14:36.688770995 +0000 UTC m=+0.175497897 container attach 892cfdd1464e594d556474d0fc11b09f216bed2fa8bc83b882711f24f35e1cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:14:36 np0005465604 podman[267743]: 2025-10-02 08:14:36.691154759 +0000 UTC m=+0.177881591 container died 892cfdd1464e594d556474d0fc11b09f216bed2fa8bc83b882711f24f35e1cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:14:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-021683c27c3f731abcfec088ad16af756253fe271373e0c3c4b7af57888a8d8e-merged.mount: Deactivated successfully.
Oct  2 04:14:36 np0005465604 podman[267743]: 2025-10-02 08:14:36.732061286 +0000 UTC m=+0.218788088 container remove 892cfdd1464e594d556474d0fc11b09f216bed2fa8bc83b882711f24f35e1cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct  2 04:14:36 np0005465604 systemd[1]: libpod-conmon-892cfdd1464e594d556474d0fc11b09f216bed2fa8bc83b882711f24f35e1cd9.scope: Deactivated successfully.
Oct  2 04:14:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:36 np0005465604 podman[267781]: 2025-10-02 08:14:36.987045742 +0000 UTC m=+0.071557933 container create a8ca8ce84448d0ac610aa50bfcae68aba1e8fa258990d0a315228a9909ba7125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_torvalds, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:14:37 np0005465604 podman[267781]: 2025-10-02 08:14:36.959408509 +0000 UTC m=+0.043920760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:14:37 np0005465604 systemd[1]: Started libpod-conmon-a8ca8ce84448d0ac610aa50bfcae68aba1e8fa258990d0a315228a9909ba7125.scope.
Oct  2 04:14:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:14:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17ec4134e46a7ff63d19dd40e475edd4d25dbf30c5b83060ab8db111b6fd849/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17ec4134e46a7ff63d19dd40e475edd4d25dbf30c5b83060ab8db111b6fd849/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17ec4134e46a7ff63d19dd40e475edd4d25dbf30c5b83060ab8db111b6fd849/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17ec4134e46a7ff63d19dd40e475edd4d25dbf30c5b83060ab8db111b6fd849/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:37 np0005465604 podman[267781]: 2025-10-02 08:14:37.104111465 +0000 UTC m=+0.188623666 container init a8ca8ce84448d0ac610aa50bfcae68aba1e8fa258990d0a315228a9909ba7125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_torvalds, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 04:14:37 np0005465604 podman[267781]: 2025-10-02 08:14:37.11581121 +0000 UTC m=+0.200323411 container start a8ca8ce84448d0ac610aa50bfcae68aba1e8fa258990d0a315228a9909ba7125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:14:37 np0005465604 podman[267781]: 2025-10-02 08:14:37.119532426 +0000 UTC m=+0.204044628 container attach a8ca8ce84448d0ac610aa50bfcae68aba1e8fa258990d0a315228a9909ba7125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]: {
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:    "0": [
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:        {
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "devices": [
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "/dev/loop3"
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            ],
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_name": "ceph_lv0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_size": "21470642176",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "name": "ceph_lv0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "tags": {
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.cluster_name": "ceph",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.crush_device_class": "",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.encrypted": "0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.osd_id": "0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.type": "block",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.vdo": "0"
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            },
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "type": "block",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "vg_name": "ceph_vg0"
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:        }
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:    ],
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:    "1": [
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:        {
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "devices": [
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "/dev/loop4"
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            ],
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_name": "ceph_lv1",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_size": "21470642176",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "name": "ceph_lv1",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "tags": {
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.cluster_name": "ceph",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.crush_device_class": "",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.encrypted": "0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.osd_id": "1",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.type": "block",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.vdo": "0"
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            },
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "type": "block",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "vg_name": "ceph_vg1"
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:        }
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:    ],
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:    "2": [
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:        {
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "devices": [
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "/dev/loop5"
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            ],
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_name": "ceph_lv2",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_size": "21470642176",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "name": "ceph_lv2",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "tags": {
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.cluster_name": "ceph",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.crush_device_class": "",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.encrypted": "0",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.osd_id": "2",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.type": "block",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:                "ceph.vdo": "0"
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            },
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "type": "block",
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:            "vg_name": "ceph_vg2"
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:        }
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]:    ]
Oct  2 04:14:37 np0005465604 pedantic_torvalds[267798]: }
Oct  2 04:14:37 np0005465604 systemd[1]: libpod-a8ca8ce84448d0ac610aa50bfcae68aba1e8fa258990d0a315228a9909ba7125.scope: Deactivated successfully.
Oct  2 04:14:37 np0005465604 podman[267781]: 2025-10-02 08:14:37.96220633 +0000 UTC m=+1.046718511 container died a8ca8ce84448d0ac610aa50bfcae68aba1e8fa258990d0a315228a9909ba7125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  2 04:14:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e17ec4134e46a7ff63d19dd40e475edd4d25dbf30c5b83060ab8db111b6fd849-merged.mount: Deactivated successfully.
Oct  2 04:14:38 np0005465604 podman[267781]: 2025-10-02 08:14:38.048695739 +0000 UTC m=+1.133207940 container remove a8ca8ce84448d0ac610aa50bfcae68aba1e8fa258990d0a315228a9909ba7125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_torvalds, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 04:14:38 np0005465604 systemd[1]: libpod-conmon-a8ca8ce84448d0ac610aa50bfcae68aba1e8fa258990d0a315228a9909ba7125.scope: Deactivated successfully.
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:14:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:38 np0005465604 podman[267958]: 2025-10-02 08:14:38.929243865 +0000 UTC m=+0.044801639 container create 022299f45346963f65f2c4fe2697da4cbb5fa134b9517aeb7be5b8b597477146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:14:38 np0005465604 systemd[1]: Started libpod-conmon-022299f45346963f65f2c4fe2697da4cbb5fa134b9517aeb7be5b8b597477146.scope.
Oct  2 04:14:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:14:38 np0005465604 podman[267958]: 2025-10-02 08:14:38.985659125 +0000 UTC m=+0.101216899 container init 022299f45346963f65f2c4fe2697da4cbb5fa134b9517aeb7be5b8b597477146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 04:14:38 np0005465604 podman[267958]: 2025-10-02 08:14:38.991724325 +0000 UTC m=+0.107282099 container start 022299f45346963f65f2c4fe2697da4cbb5fa134b9517aeb7be5b8b597477146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 04:14:38 np0005465604 podman[267958]: 2025-10-02 08:14:38.994963116 +0000 UTC m=+0.110520890 container attach 022299f45346963f65f2c4fe2697da4cbb5fa134b9517aeb7be5b8b597477146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:14:38 np0005465604 romantic_joliot[267974]: 167 167
Oct  2 04:14:38 np0005465604 systemd[1]: libpod-022299f45346963f65f2c4fe2697da4cbb5fa134b9517aeb7be5b8b597477146.scope: Deactivated successfully.
Oct  2 04:14:38 np0005465604 podman[267958]: 2025-10-02 08:14:38.996413031 +0000 UTC m=+0.111970815 container died 022299f45346963f65f2c4fe2697da4cbb5fa134b9517aeb7be5b8b597477146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 04:14:39 np0005465604 podman[267958]: 2025-10-02 08:14:38.910958464 +0000 UTC m=+0.026516258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:14:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-70e0ba4da65e91989a66b958f797f05bc95357d94994937314679242220a4ebc-merged.mount: Deactivated successfully.
Oct  2 04:14:39 np0005465604 podman[267958]: 2025-10-02 08:14:39.041657783 +0000 UTC m=+0.157215577 container remove 022299f45346963f65f2c4fe2697da4cbb5fa134b9517aeb7be5b8b597477146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_joliot, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 04:14:39 np0005465604 systemd[1]: libpod-conmon-022299f45346963f65f2c4fe2697da4cbb5fa134b9517aeb7be5b8b597477146.scope: Deactivated successfully.
Oct  2 04:14:39 np0005465604 podman[267998]: 2025-10-02 08:14:39.240463406 +0000 UTC m=+0.053181270 container create d580f695437d3b8e365d7afb0a9df89f9d758d4ca308c49e463191d63a37a113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 04:14:39 np0005465604 systemd[1]: Started libpod-conmon-d580f695437d3b8e365d7afb0a9df89f9d758d4ca308c49e463191d63a37a113.scope.
Oct  2 04:14:39 np0005465604 podman[267998]: 2025-10-02 08:14:39.223963211 +0000 UTC m=+0.036681096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:14:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:14:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc4b9d8d57ca33060b8beb8d8623b6e2e528cdd0e7a0a274fae4a413aafc0e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc4b9d8d57ca33060b8beb8d8623b6e2e528cdd0e7a0a274fae4a413aafc0e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc4b9d8d57ca33060b8beb8d8623b6e2e528cdd0e7a0a274fae4a413aafc0e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc4b9d8d57ca33060b8beb8d8623b6e2e528cdd0e7a0a274fae4a413aafc0e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:14:39 np0005465604 podman[267998]: 2025-10-02 08:14:39.355609299 +0000 UTC m=+0.168327203 container init d580f695437d3b8e365d7afb0a9df89f9d758d4ca308c49e463191d63a37a113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:14:39 np0005465604 podman[267998]: 2025-10-02 08:14:39.36716681 +0000 UTC m=+0.179884714 container start d580f695437d3b8e365d7afb0a9df89f9d758d4ca308c49e463191d63a37a113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 04:14:39 np0005465604 podman[267998]: 2025-10-02 08:14:39.371097352 +0000 UTC m=+0.183815256 container attach d580f695437d3b8e365d7afb0a9df89f9d758d4ca308c49e463191d63a37a113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 04:14:40 np0005465604 angry_cray[268015]: {
Oct  2 04:14:40 np0005465604 angry_cray[268015]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "osd_id": 2,
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "type": "bluestore"
Oct  2 04:14:40 np0005465604 angry_cray[268015]:    },
Oct  2 04:14:40 np0005465604 angry_cray[268015]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "osd_id": 1,
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "type": "bluestore"
Oct  2 04:14:40 np0005465604 angry_cray[268015]:    },
Oct  2 04:14:40 np0005465604 angry_cray[268015]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "osd_id": 0,
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:14:40 np0005465604 angry_cray[268015]:        "type": "bluestore"
Oct  2 04:14:40 np0005465604 angry_cray[268015]:    }
Oct  2 04:14:40 np0005465604 angry_cray[268015]: }
Oct  2 04:14:40 np0005465604 systemd[1]: libpod-d580f695437d3b8e365d7afb0a9df89f9d758d4ca308c49e463191d63a37a113.scope: Deactivated successfully.
Oct  2 04:14:40 np0005465604 podman[267998]: 2025-10-02 08:14:40.40103227 +0000 UTC m=+1.213750134 container died d580f695437d3b8e365d7afb0a9df89f9d758d4ca308c49e463191d63a37a113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 04:14:40 np0005465604 systemd[1]: libpod-d580f695437d3b8e365d7afb0a9df89f9d758d4ca308c49e463191d63a37a113.scope: Consumed 1.039s CPU time.
Oct  2 04:14:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-efc4b9d8d57ca33060b8beb8d8623b6e2e528cdd0e7a0a274fae4a413aafc0e8-merged.mount: Deactivated successfully.
Oct  2 04:14:40 np0005465604 podman[267998]: 2025-10-02 08:14:40.45549982 +0000 UTC m=+1.268217684 container remove d580f695437d3b8e365d7afb0a9df89f9d758d4ca308c49e463191d63a37a113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_cray, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:14:40 np0005465604 systemd[1]: libpod-conmon-d580f695437d3b8e365d7afb0a9df89f9d758d4ca308c49e463191d63a37a113.scope: Deactivated successfully.
Oct  2 04:14:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:14:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:14:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:14:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:14:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a7bc65f0-2dee-4d9f-b26e-946e29df22e0 does not exist
Oct  2 04:14:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e2b9fc8e-26f9-4ee4-9481-0ec34232d9e3 does not exist
Oct  2 04:14:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:14:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:14:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:52 np0005465604 podman[268112]: 2025-10-02 08:14:52.072620204 +0000 UTC m=+0.133288840 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:14:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:54 np0005465604 podman[268137]: 2025-10-02 08:14:53.999798108 +0000 UTC m=+0.058559718 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:14:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:14:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:14:58 np0005465604 podman[268156]: 2025-10-02 08:14:58.031665187 +0000 UTC m=+0.092092734 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:14:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:01 np0005465604 nova_compute[260603]: 2025-10-02 08:15:01.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:15:01 np0005465604 nova_compute[260603]: 2025-10-02 08:15:01.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:15:01 np0005465604 nova_compute[260603]: 2025-10-02 08:15:01.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:15:01 np0005465604 nova_compute[260603]: 2025-10-02 08:15:01.549 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:15:01 np0005465604 nova_compute[260603]: 2025-10-02 08:15:01.550 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:15:01 np0005465604 nova_compute[260603]: 2025-10-02 08:15:01.551 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:15:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:15:02 np0005465604 nova_compute[260603]: 2025-10-02 08:15:02.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:15:02 np0005465604 nova_compute[260603]: 2025-10-02 08:15:02.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:15:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:04 np0005465604 podman[268177]: 2025-10-02 08:15:04.030316485 +0000 UTC m=+0.092065503 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:15:04 np0005465604 nova_compute[260603]: 2025-10-02 08:15:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:15:04 np0005465604 nova_compute[260603]: 2025-10-02 08:15:04.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:15:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:05 np0005465604 nova_compute[260603]: 2025-10-02 08:15:05.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:15:05 np0005465604 nova_compute[260603]: 2025-10-02 08:15:05.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:15:05 np0005465604 nova_compute[260603]: 2025-10-02 08:15:05.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:15:05 np0005465604 nova_compute[260603]: 2025-10-02 08:15:05.555 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:15:05 np0005465604 nova_compute[260603]: 2025-10-02 08:15:05.555 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:15:05 np0005465604 nova_compute[260603]: 2025-10-02 08:15:05.556 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:15:05 np0005465604 nova_compute[260603]: 2025-10-02 08:15:05.556 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:15:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:15:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1074479492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.005 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.215 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.217 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5170MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.218 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.218 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.296 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.296 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.316 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:15:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:15:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/744064993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.797 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.807 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.831 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.834 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:15:06 np0005465604 nova_compute[260603]: 2025-10-02 08:15:06.835 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:15:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:15:07 np0005465604 nova_compute[260603]: 2025-10-02 08:15:07.837 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:15:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:15:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:15:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:15:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:15:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3109323905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:15:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:15:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3109323905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:15:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:23 np0005465604 podman[268242]: 2025-10-02 08:15:23.138560398 +0000 UTC m=+0.194891612 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:15:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:25 np0005465604 podman[268268]: 2025-10-02 08:15:25.019912092 +0000 UTC m=+0.076436756 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 04:15:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:15:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:15:27
Oct  2 04:15:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:15:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:15:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'volumes', '.mgr', 'images', 'vms']
Oct  2 04:15:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:15:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:29 np0005465604 podman[268287]: 2025-10-02 08:15:29.004724913 +0000 UTC m=+0.063547593 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 04:15:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:15:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:15:34.801 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:15:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:15:34.801 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:15:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:15:34.801 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:15:35 np0005465604 podman[268307]: 2025-10-02 08:15:35.001140702 +0000 UTC m=+0.070262354 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:15:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:15:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:15:41 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9d8f0008-ab7b-4ae4-bb5e-39cd209fd434 does not exist
Oct  2 04:15:41 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b21b28d3-bba8-4409-b3d0-099d9f6ee72c does not exist
Oct  2 04:15:41 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev af39b7a7-39d9-4b6a-b5f4-c93b8cf78f6a does not exist
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:15:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:15:42 np0005465604 podman[268603]: 2025-10-02 08:15:42.416689182 +0000 UTC m=+0.069571683 container create 3b3060435cd1778c3fe17d64c197249af32fb423e88f98ef2413cfed44973923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:15:42 np0005465604 systemd[1]: Started libpod-conmon-3b3060435cd1778c3fe17d64c197249af32fb423e88f98ef2413cfed44973923.scope.
Oct  2 04:15:42 np0005465604 podman[268603]: 2025-10-02 08:15:42.393438896 +0000 UTC m=+0.046321437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:15:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:15:42 np0005465604 podman[268603]: 2025-10-02 08:15:42.530835204 +0000 UTC m=+0.183717785 container init 3b3060435cd1778c3fe17d64c197249af32fb423e88f98ef2413cfed44973923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 04:15:42 np0005465604 podman[268603]: 2025-10-02 08:15:42.545896283 +0000 UTC m=+0.198778814 container start 3b3060435cd1778c3fe17d64c197249af32fb423e88f98ef2413cfed44973923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brown, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 04:15:42 np0005465604 podman[268603]: 2025-10-02 08:15:42.549779664 +0000 UTC m=+0.202662195 container attach 3b3060435cd1778c3fe17d64c197249af32fb423e88f98ef2413cfed44973923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brown, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:15:42 np0005465604 wizardly_brown[268620]: 167 167
Oct  2 04:15:42 np0005465604 systemd[1]: libpod-3b3060435cd1778c3fe17d64c197249af32fb423e88f98ef2413cfed44973923.scope: Deactivated successfully.
Oct  2 04:15:42 np0005465604 podman[268603]: 2025-10-02 08:15:42.555786382 +0000 UTC m=+0.208668873 container died 3b3060435cd1778c3fe17d64c197249af32fb423e88f98ef2413cfed44973923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brown, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:15:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-dcdc61d53455a37d101b30740291cec6b113f872b09b99d08ce99da9bda5da4d-merged.mount: Deactivated successfully.
Oct  2 04:15:42 np0005465604 podman[268603]: 2025-10-02 08:15:42.597449742 +0000 UTC m=+0.250332233 container remove 3b3060435cd1778c3fe17d64c197249af32fb423e88f98ef2413cfed44973923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brown, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:15:42 np0005465604 systemd[1]: libpod-conmon-3b3060435cd1778c3fe17d64c197249af32fb423e88f98ef2413cfed44973923.scope: Deactivated successfully.
Oct  2 04:15:42 np0005465604 podman[268644]: 2025-10-02 08:15:42.777693916 +0000 UTC m=+0.051820308 container create bc85a961180a046f4b2fa4a3fdeea7bc737f309a83f84e94073cef1836774ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 04:15:42 np0005465604 systemd[1]: Started libpod-conmon-bc85a961180a046f4b2fa4a3fdeea7bc737f309a83f84e94073cef1836774ec3.scope.
Oct  2 04:15:42 np0005465604 podman[268644]: 2025-10-02 08:15:42.749590939 +0000 UTC m=+0.023717331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:15:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:15:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69f7e2a5810c94e496e5f2409b952452e756e2869d24689aa9f12b0888f5d541/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:15:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69f7e2a5810c94e496e5f2409b952452e756e2869d24689aa9f12b0888f5d541/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:15:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69f7e2a5810c94e496e5f2409b952452e756e2869d24689aa9f12b0888f5d541/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:15:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69f7e2a5810c94e496e5f2409b952452e756e2869d24689aa9f12b0888f5d541/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:15:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69f7e2a5810c94e496e5f2409b952452e756e2869d24689aa9f12b0888f5d541/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:15:42 np0005465604 podman[268644]: 2025-10-02 08:15:42.877973775 +0000 UTC m=+0.152100147 container init bc85a961180a046f4b2fa4a3fdeea7bc737f309a83f84e94073cef1836774ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:15:42 np0005465604 podman[268644]: 2025-10-02 08:15:42.895075448 +0000 UTC m=+0.169201840 container start bc85a961180a046f4b2fa4a3fdeea7bc737f309a83f84e94073cef1836774ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 04:15:42 np0005465604 podman[268644]: 2025-10-02 08:15:42.899411294 +0000 UTC m=+0.173537686 container attach bc85a961180a046f4b2fa4a3fdeea7bc737f309a83f84e94073cef1836774ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:15:43 np0005465604 youthful_heisenberg[268661]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:15:43 np0005465604 youthful_heisenberg[268661]: --> relative data size: 1.0
Oct  2 04:15:43 np0005465604 youthful_heisenberg[268661]: --> All data devices are unavailable
Oct  2 04:15:44 np0005465604 systemd[1]: libpod-bc85a961180a046f4b2fa4a3fdeea7bc737f309a83f84e94073cef1836774ec3.scope: Deactivated successfully.
Oct  2 04:15:44 np0005465604 systemd[1]: libpod-bc85a961180a046f4b2fa4a3fdeea7bc737f309a83f84e94073cef1836774ec3.scope: Consumed 1.072s CPU time.
Oct  2 04:15:44 np0005465604 podman[268644]: 2025-10-02 08:15:44.016300815 +0000 UTC m=+1.290427227 container died bc85a961180a046f4b2fa4a3fdeea7bc737f309a83f84e94073cef1836774ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 04:15:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-69f7e2a5810c94e496e5f2409b952452e756e2869d24689aa9f12b0888f5d541-merged.mount: Deactivated successfully.
Oct  2 04:15:44 np0005465604 podman[268644]: 2025-10-02 08:15:44.096030053 +0000 UTC m=+1.370156415 container remove bc85a961180a046f4b2fa4a3fdeea7bc737f309a83f84e94073cef1836774ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:15:44 np0005465604 systemd[1]: libpod-conmon-bc85a961180a046f4b2fa4a3fdeea7bc737f309a83f84e94073cef1836774ec3.scope: Deactivated successfully.
Oct  2 04:15:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:44 np0005465604 podman[268843]: 2025-10-02 08:15:44.902620391 +0000 UTC m=+0.047112371 container create 03a0b3e5ed940738e42b49cf8390d480c649f293f7fae8669c9c6b0158d44404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:15:44 np0005465604 systemd[1]: Started libpod-conmon-03a0b3e5ed940738e42b49cf8390d480c649f293f7fae8669c9c6b0158d44404.scope.
Oct  2 04:15:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:15:44 np0005465604 podman[268843]: 2025-10-02 08:15:44.886515268 +0000 UTC m=+0.031007278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:15:44 np0005465604 podman[268843]: 2025-10-02 08:15:44.990965948 +0000 UTC m=+0.135458018 container init 03a0b3e5ed940738e42b49cf8390d480c649f293f7fae8669c9c6b0158d44404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 04:15:45 np0005465604 podman[268843]: 2025-10-02 08:15:45.000260598 +0000 UTC m=+0.144752618 container start 03a0b3e5ed940738e42b49cf8390d480c649f293f7fae8669c9c6b0158d44404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 04:15:45 np0005465604 podman[268843]: 2025-10-02 08:15:45.003688285 +0000 UTC m=+0.148180365 container attach 03a0b3e5ed940738e42b49cf8390d480c649f293f7fae8669c9c6b0158d44404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 04:15:45 np0005465604 goofy_swartz[268859]: 167 167
Oct  2 04:15:45 np0005465604 systemd[1]: libpod-03a0b3e5ed940738e42b49cf8390d480c649f293f7fae8669c9c6b0158d44404.scope: Deactivated successfully.
Oct  2 04:15:45 np0005465604 podman[268843]: 2025-10-02 08:15:45.011203429 +0000 UTC m=+0.155695409 container died 03a0b3e5ed940738e42b49cf8390d480c649f293f7fae8669c9c6b0158d44404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 04:15:45 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6e8389ef6591bb0b436286730152eef0b3167e5db2d28fcc0343deb0c220327d-merged.mount: Deactivated successfully.
Oct  2 04:15:45 np0005465604 podman[268843]: 2025-10-02 08:15:45.061641653 +0000 UTC m=+0.206133683 container remove 03a0b3e5ed940738e42b49cf8390d480c649f293f7fae8669c9c6b0158d44404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:15:45 np0005465604 systemd[1]: libpod-conmon-03a0b3e5ed940738e42b49cf8390d480c649f293f7fae8669c9c6b0158d44404.scope: Deactivated successfully.
Oct  2 04:15:45 np0005465604 podman[268882]: 2025-10-02 08:15:45.325266729 +0000 UTC m=+0.064782052 container create f633af112ac5077b723df28d2eec7ab7689541ebcd6401dac804442385c6c5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bouman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:15:45 np0005465604 systemd[1]: Started libpod-conmon-f633af112ac5077b723df28d2eec7ab7689541ebcd6401dac804442385c6c5fe.scope.
Oct  2 04:15:45 np0005465604 podman[268882]: 2025-10-02 08:15:45.299260668 +0000 UTC m=+0.038776051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:15:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:15:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89cbe9aec26fd7c05180be24bac14350e8d8b0ff02c6cec5ad1ae80a95231fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:15:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89cbe9aec26fd7c05180be24bac14350e8d8b0ff02c6cec5ad1ae80a95231fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:15:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89cbe9aec26fd7c05180be24bac14350e8d8b0ff02c6cec5ad1ae80a95231fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:15:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89cbe9aec26fd7c05180be24bac14350e8d8b0ff02c6cec5ad1ae80a95231fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:15:45 np0005465604 podman[268882]: 2025-10-02 08:15:45.431417751 +0000 UTC m=+0.170933124 container init f633af112ac5077b723df28d2eec7ab7689541ebcd6401dac804442385c6c5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:15:45 np0005465604 podman[268882]: 2025-10-02 08:15:45.447217814 +0000 UTC m=+0.186733157 container start f633af112ac5077b723df28d2eec7ab7689541ebcd6401dac804442385c6c5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:15:45 np0005465604 podman[268882]: 2025-10-02 08:15:45.451640492 +0000 UTC m=+0.191155815 container attach f633af112ac5077b723df28d2eec7ab7689541ebcd6401dac804442385c6c5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bouman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]: {
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:    "0": [
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:        {
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "devices": [
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "/dev/loop3"
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            ],
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_name": "ceph_lv0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_size": "21470642176",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "name": "ceph_lv0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "tags": {
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.cluster_name": "ceph",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.crush_device_class": "",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.encrypted": "0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.osd_id": "0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.type": "block",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.vdo": "0"
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            },
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "type": "block",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "vg_name": "ceph_vg0"
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:        }
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:    ],
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:    "1": [
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:        {
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "devices": [
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "/dev/loop4"
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            ],
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_name": "ceph_lv1",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_size": "21470642176",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "name": "ceph_lv1",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "tags": {
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.cluster_name": "ceph",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.crush_device_class": "",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.encrypted": "0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.osd_id": "1",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.type": "block",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.vdo": "0"
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            },
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "type": "block",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "vg_name": "ceph_vg1"
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:        }
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:    ],
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:    "2": [
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:        {
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "devices": [
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "/dev/loop5"
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            ],
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_name": "ceph_lv2",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_size": "21470642176",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "name": "ceph_lv2",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "tags": {
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.cluster_name": "ceph",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.crush_device_class": "",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.encrypted": "0",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.osd_id": "2",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.type": "block",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:                "ceph.vdo": "0"
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            },
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "type": "block",
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:            "vg_name": "ceph_vg2"
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:        }
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]:    ]
Oct  2 04:15:46 np0005465604 agitated_bouman[268899]: }
Oct  2 04:15:46 np0005465604 systemd[1]: libpod-f633af112ac5077b723df28d2eec7ab7689541ebcd6401dac804442385c6c5fe.scope: Deactivated successfully.
Oct  2 04:15:46 np0005465604 podman[268882]: 2025-10-02 08:15:46.217694916 +0000 UTC m=+0.957210249 container died f633af112ac5077b723df28d2eec7ab7689541ebcd6401dac804442385c6c5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bouman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:15:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e89cbe9aec26fd7c05180be24bac14350e8d8b0ff02c6cec5ad1ae80a95231fe-merged.mount: Deactivated successfully.
Oct  2 04:15:46 np0005465604 podman[268882]: 2025-10-02 08:15:46.286538933 +0000 UTC m=+1.026054236 container remove f633af112ac5077b723df28d2eec7ab7689541ebcd6401dac804442385c6c5fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bouman, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 04:15:46 np0005465604 systemd[1]: libpod-conmon-f633af112ac5077b723df28d2eec7ab7689541ebcd6401dac804442385c6c5fe.scope: Deactivated successfully.
Oct  2 04:15:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:15:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:15:47 np0005465604 podman[269063]: 2025-10-02 08:15:47.175615156 +0000 UTC m=+0.065624318 container create 0385251dfa5eb79821f64e36da4df64d134f2f84401ea676701f8b714b56f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 04:15:47 np0005465604 systemd[1]: Started libpod-conmon-0385251dfa5eb79821f64e36da4df64d134f2f84401ea676701f8b714b56f6ca.scope.
Oct  2 04:15:47 np0005465604 podman[269063]: 2025-10-02 08:15:47.149063988 +0000 UTC m=+0.039073220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:15:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:15:47 np0005465604 podman[269063]: 2025-10-02 08:15:47.274027197 +0000 UTC m=+0.164036389 container init 0385251dfa5eb79821f64e36da4df64d134f2f84401ea676701f8b714b56f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 04:15:47 np0005465604 podman[269063]: 2025-10-02 08:15:47.285588117 +0000 UTC m=+0.175597269 container start 0385251dfa5eb79821f64e36da4df64d134f2f84401ea676701f8b714b56f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  2 04:15:47 np0005465604 podman[269063]: 2025-10-02 08:15:47.289379606 +0000 UTC m=+0.179388758 container attach 0385251dfa5eb79821f64e36da4df64d134f2f84401ea676701f8b714b56f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:15:47 np0005465604 gifted_cerf[269079]: 167 167
Oct  2 04:15:47 np0005465604 systemd[1]: libpod-0385251dfa5eb79821f64e36da4df64d134f2f84401ea676701f8b714b56f6ca.scope: Deactivated successfully.
Oct  2 04:15:47 np0005465604 podman[269063]: 2025-10-02 08:15:47.294409443 +0000 UTC m=+0.184418595 container died 0385251dfa5eb79821f64e36da4df64d134f2f84401ea676701f8b714b56f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 04:15:47 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7b78c3f145920bf3e7dd3146bced0f4448721d7c4c3c8c43a0b2c119217df7db-merged.mount: Deactivated successfully.
Oct  2 04:15:47 np0005465604 podman[269063]: 2025-10-02 08:15:47.341823153 +0000 UTC m=+0.231832305 container remove 0385251dfa5eb79821f64e36da4df64d134f2f84401ea676701f8b714b56f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 04:15:47 np0005465604 systemd[1]: libpod-conmon-0385251dfa5eb79821f64e36da4df64d134f2f84401ea676701f8b714b56f6ca.scope: Deactivated successfully.
Oct  2 04:15:47 np0005465604 podman[269103]: 2025-10-02 08:15:47.586399644 +0000 UTC m=+0.059182507 container create ebd8249f302df8d360480b57f9ebe65436b4ecbca86bcb78c25c462d0876820b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 04:15:47 np0005465604 systemd[1]: Started libpod-conmon-ebd8249f302df8d360480b57f9ebe65436b4ecbca86bcb78c25c462d0876820b.scope.
Oct  2 04:15:47 np0005465604 podman[269103]: 2025-10-02 08:15:47.55422728 +0000 UTC m=+0.027010213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:15:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:16:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.4 KiB/s wr, 40 op/s
Oct  2 04:16:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:16:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct  2 04:16:26 np0005465604 rsyslogd[1004]: imjournal: 180 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  2 04:16:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct  2 04:16:26 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct  2 04:16:27 np0005465604 podman[269378]: 2025-10-02 08:16:27.030046084 +0000 UTC m=+0.088081960 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:16:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:16:27
Oct  2 04:16:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:16:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:16:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta']
Oct  2 04:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:16:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:16:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct  2 04:16:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 KiB/s wr, 38 op/s
Oct  2 04:16:31 np0005465604 podman[269398]: 2025-10-02 08:16:31.05317067 +0000 UTC m=+0.114006258 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 04:16:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:16:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct  2 04:16:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct  2 04:16:31 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct  2 04:16:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 KiB/s wr, 22 op/s
Oct  2 04:16:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 KiB/s wr, 22 op/s
Oct  2 04:16:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:16:34.802 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:16:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:16:34.803 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:16:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:16:34.803 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:16:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 946 B/s wr, 18 op/s
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.705440) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392996705474, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1457, "num_deletes": 254, "total_data_size": 2233666, "memory_usage": 2273016, "flush_reason": "Manual Compaction"}
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392996717263, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2199977, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19523, "largest_seqno": 20979, "table_properties": {"data_size": 2193083, "index_size": 4029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14370, "raw_average_key_size": 20, "raw_value_size": 2179196, "raw_average_value_size": 3052, "num_data_blocks": 183, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759392857, "oldest_key_time": 1759392857, "file_creation_time": 1759392996, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 11868 microseconds, and 6128 cpu microseconds.
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.717304) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2199977 bytes OK
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.717325) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.719815) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.719831) EVENT_LOG_v1 {"time_micros": 1759392996719825, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.719848) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2227197, prev total WAL file size 2227197, number of live WAL files 2.
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.720620) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2148KB)], [47(7057KB)]
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392996720652, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9426619, "oldest_snapshot_seqno": -1}
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4329 keys, 7661780 bytes, temperature: kUnknown
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392996760966, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7661780, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7631469, "index_size": 18361, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107031, "raw_average_key_size": 24, "raw_value_size": 7551719, "raw_average_value_size": 1744, "num_data_blocks": 769, "num_entries": 4329, "num_filter_entries": 4329, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759392996, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.761317) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7661780 bytes
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.762741) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 233.2 rd, 189.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 6.9 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(7.8) write-amplify(3.5) OK, records in: 4850, records dropped: 521 output_compression: NoCompression
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.762802) EVENT_LOG_v1 {"time_micros": 1759392996762788, "job": 24, "event": "compaction_finished", "compaction_time_micros": 40419, "compaction_time_cpu_micros": 24424, "output_level": 6, "num_output_files": 1, "total_output_size": 7661780, "num_input_records": 4850, "num_output_records": 4329, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392996763678, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759392996766586, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.720556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.766665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.766671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.766673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.766676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:16:36.766678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:16:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:16:37 np0005465604 podman[269418]: 2025-10-02 08:16:37.016692462 +0000 UTC m=+0.083804756 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:16:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct  2 04:16:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 KiB/s wr, 17 op/s
Oct  2 04:16:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:16:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4645 writes, 20K keys, 4645 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4645 writes, 4645 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1330 writes, 6031 keys, 1330 commit groups, 1.0 writes per commit group, ingest: 8.67 MB, 0.01 MB/s#012Interval WAL: 1330 writes, 1330 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    137.6      0.18              0.10        12    0.015       0      0       0.0       0.0#012  L6      1/0    7.31 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    171.8    140.6      0.55              0.31        11    0.050     48K   5778       0.0       0.0#012 Sum      1/0    7.31 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    129.9    139.9      0.73              0.41        23    0.032     48K   5778       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2    155.7    157.2      0.29              0.18        10    0.029     23K   2578       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    171.8    140.6      0.55              0.31        11    0.050     48K   5778       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    141.6      0.17              0.10        11    0.016       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.7 seconds#012Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 308.00 MB usage: 8.85 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(554,8.46 MB,2.74522%) FilterBlock(24,142.73 KB,0.0452562%) IndexBlock(24,266.17 KB,0.084394%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 04:16:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:16:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  2 04:16:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  2 04:16:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  2 04:16:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:16:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:16:50 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0e87a72f-ea0d-4d42-9da9-2667398ddad9 does not exist
Oct  2 04:16:50 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b2bbf257-224d-4a4e-9043-b7c04d36e437 does not exist
Oct  2 04:16:50 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f9712377-b5e7-4428-9d38-7ed8aebf673c does not exist
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:16:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:16:50 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:16:51 np0005465604 podman[269710]: 2025-10-02 08:16:51.054463999 +0000 UTC m=+0.057766284 container create 9a3785bd3bf6b30fdba7544628d1ca7d43cf2def82a3e49f78c1008bbb9458fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 04:16:51 np0005465604 systemd[1]: Started libpod-conmon-9a3785bd3bf6b30fdba7544628d1ca7d43cf2def82a3e49f78c1008bbb9458fb.scope.
Oct  2 04:16:51 np0005465604 podman[269710]: 2025-10-02 08:16:51.022434809 +0000 UTC m=+0.025737114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:16:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:16:51 np0005465604 podman[269710]: 2025-10-02 08:16:51.155212653 +0000 UTC m=+0.158514948 container init 9a3785bd3bf6b30fdba7544628d1ca7d43cf2def82a3e49f78c1008bbb9458fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:16:51 np0005465604 podman[269710]: 2025-10-02 08:16:51.16412549 +0000 UTC m=+0.167427775 container start 9a3785bd3bf6b30fdba7544628d1ca7d43cf2def82a3e49f78c1008bbb9458fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 04:16:51 np0005465604 podman[269710]: 2025-10-02 08:16:51.168305871 +0000 UTC m=+0.171608136 container attach 9a3785bd3bf6b30fdba7544628d1ca7d43cf2def82a3e49f78c1008bbb9458fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:16:51 np0005465604 great_wilson[269727]: 167 167
Oct  2 04:16:51 np0005465604 systemd[1]: libpod-9a3785bd3bf6b30fdba7544628d1ca7d43cf2def82a3e49f78c1008bbb9458fb.scope: Deactivated successfully.
Oct  2 04:16:51 np0005465604 podman[269710]: 2025-10-02 08:16:51.171564262 +0000 UTC m=+0.174866537 container died 9a3785bd3bf6b30fdba7544628d1ca7d43cf2def82a3e49f78c1008bbb9458fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:16:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2928647aadd5f2a2b273598d25eae43c8294b44671dc64195873a13050982949-merged.mount: Deactivated successfully.
Oct  2 04:16:51 np0005465604 podman[269710]: 2025-10-02 08:16:51.230089698 +0000 UTC m=+0.233391983 container remove 9a3785bd3bf6b30fdba7544628d1ca7d43cf2def82a3e49f78c1008bbb9458fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wilson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:16:51 np0005465604 systemd[1]: libpod-conmon-9a3785bd3bf6b30fdba7544628d1ca7d43cf2def82a3e49f78c1008bbb9458fb.scope: Deactivated successfully.
Oct  2 04:16:51 np0005465604 podman[269750]: 2025-10-02 08:16:51.47016765 +0000 UTC m=+0.076458077 container create 17e2131aa1e18224ef151b9853d9d272d1eb9b00723b74fec1730778b6384f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:16:51 np0005465604 systemd[1]: Started libpod-conmon-17e2131aa1e18224ef151b9853d9d272d1eb9b00723b74fec1730778b6384f82.scope.
Oct  2 04:16:51 np0005465604 podman[269750]: 2025-10-02 08:16:51.44196253 +0000 UTC m=+0.048253057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:16:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:16:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1007cdf7009825927f2ca8fb35dbfb5d95961e654f0d9f8fb85ac20954c242/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1007cdf7009825927f2ca8fb35dbfb5d95961e654f0d9f8fb85ac20954c242/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1007cdf7009825927f2ca8fb35dbfb5d95961e654f0d9f8fb85ac20954c242/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1007cdf7009825927f2ca8fb35dbfb5d95961e654f0d9f8fb85ac20954c242/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1007cdf7009825927f2ca8fb35dbfb5d95961e654f0d9f8fb85ac20954c242/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:51 np0005465604 podman[269750]: 2025-10-02 08:16:51.575148056 +0000 UTC m=+0.181438573 container init 17e2131aa1e18224ef151b9853d9d272d1eb9b00723b74fec1730778b6384f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 04:16:51 np0005465604 podman[269750]: 2025-10-02 08:16:51.595960385 +0000 UTC m=+0.202250852 container start 17e2131aa1e18224ef151b9853d9d272d1eb9b00723b74fec1730778b6384f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 04:16:51 np0005465604 podman[269750]: 2025-10-02 08:16:51.600726533 +0000 UTC m=+0.207016990 container attach 17e2131aa1e18224ef151b9853d9d272d1eb9b00723b74fec1730778b6384f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:16:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:16:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:16:52 np0005465604 boring_proskuriakova[269766]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:16:52 np0005465604 boring_proskuriakova[269766]: --> relative data size: 1.0
Oct  2 04:16:52 np0005465604 boring_proskuriakova[269766]: --> All data devices are unavailable
Oct  2 04:16:52 np0005465604 systemd[1]: libpod-17e2131aa1e18224ef151b9853d9d272d1eb9b00723b74fec1730778b6384f82.scope: Deactivated successfully.
Oct  2 04:16:52 np0005465604 systemd[1]: libpod-17e2131aa1e18224ef151b9853d9d272d1eb9b00723b74fec1730778b6384f82.scope: Consumed 1.165s CPU time.
Oct  2 04:16:52 np0005465604 podman[269750]: 2025-10-02 08:16:52.800572793 +0000 UTC m=+1.406863250 container died 17e2131aa1e18224ef151b9853d9d272d1eb9b00723b74fec1730778b6384f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 04:16:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6f1007cdf7009825927f2ca8fb35dbfb5d95961e654f0d9f8fb85ac20954c242-merged.mount: Deactivated successfully.
Oct  2 04:16:52 np0005465604 podman[269750]: 2025-10-02 08:16:52.88315658 +0000 UTC m=+1.489447037 container remove 17e2131aa1e18224ef151b9853d9d272d1eb9b00723b74fec1730778b6384f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 04:16:52 np0005465604 systemd[1]: libpod-conmon-17e2131aa1e18224ef151b9853d9d272d1eb9b00723b74fec1730778b6384f82.scope: Deactivated successfully.
Oct  2 04:16:53 np0005465604 podman[269952]: 2025-10-02 08:16:53.721493949 +0000 UTC m=+0.057839556 container create b8b90231a9d18e56a191f80ea62c96b1f74f9e49c20f64200f9156575feb91a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:16:53 np0005465604 systemd[1]: Started libpod-conmon-b8b90231a9d18e56a191f80ea62c96b1f74f9e49c20f64200f9156575feb91a9.scope.
Oct  2 04:16:53 np0005465604 podman[269952]: 2025-10-02 08:16:53.690847053 +0000 UTC m=+0.027192720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:16:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:16:53 np0005465604 podman[269952]: 2025-10-02 08:16:53.813680385 +0000 UTC m=+0.150025992 container init b8b90231a9d18e56a191f80ea62c96b1f74f9e49c20f64200f9156575feb91a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:16:53 np0005465604 podman[269952]: 2025-10-02 08:16:53.822861121 +0000 UTC m=+0.159206728 container start b8b90231a9d18e56a191f80ea62c96b1f74f9e49c20f64200f9156575feb91a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:16:53 np0005465604 podman[269952]: 2025-10-02 08:16:53.826276098 +0000 UTC m=+0.162621695 container attach b8b90231a9d18e56a191f80ea62c96b1f74f9e49c20f64200f9156575feb91a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:16:53 np0005465604 quirky_chaum[269968]: 167 167
Oct  2 04:16:53 np0005465604 systemd[1]: libpod-b8b90231a9d18e56a191f80ea62c96b1f74f9e49c20f64200f9156575feb91a9.scope: Deactivated successfully.
Oct  2 04:16:53 np0005465604 conmon[269968]: conmon b8b90231a9d18e56a191 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8b90231a9d18e56a191f80ea62c96b1f74f9e49c20f64200f9156575feb91a9.scope/container/memory.events
Oct  2 04:16:53 np0005465604 podman[269952]: 2025-10-02 08:16:53.830902323 +0000 UTC m=+0.167247920 container died b8b90231a9d18e56a191f80ea62c96b1f74f9e49c20f64200f9156575feb91a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:16:53 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0e83f86855e39dfbc02eac59673f8ccf4e0b63bca42f7056a174cebbadfeadfd-merged.mount: Deactivated successfully.
Oct  2 04:16:53 np0005465604 podman[269952]: 2025-10-02 08:16:53.861635411 +0000 UTC m=+0.197980988 container remove b8b90231a9d18e56a191f80ea62c96b1f74f9e49c20f64200f9156575feb91a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chaum, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:16:53 np0005465604 systemd[1]: libpod-conmon-b8b90231a9d18e56a191f80ea62c96b1f74f9e49c20f64200f9156575feb91a9.scope: Deactivated successfully.
Oct  2 04:16:54 np0005465604 podman[269992]: 2025-10-02 08:16:54.11505737 +0000 UTC m=+0.070698048 container create 8c6750d5c94c9d302cad30566c5a6944d2d42c6c4dcb0bc2a487277ecc73191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 04:16:54 np0005465604 systemd[1]: Started libpod-conmon-8c6750d5c94c9d302cad30566c5a6944d2d42c6c4dcb0bc2a487277ecc73191d.scope.
Oct  2 04:16:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:16:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368b71c749daf4d8545fb147d8603524c80dc197ae3ab66986a30f7ce107ec70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368b71c749daf4d8545fb147d8603524c80dc197ae3ab66986a30f7ce107ec70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:54 np0005465604 podman[269992]: 2025-10-02 08:16:54.085095134 +0000 UTC m=+0.040735862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:16:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368b71c749daf4d8545fb147d8603524c80dc197ae3ab66986a30f7ce107ec70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368b71c749daf4d8545fb147d8603524c80dc197ae3ab66986a30f7ce107ec70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:54 np0005465604 podman[269992]: 2025-10-02 08:16:54.193322441 +0000 UTC m=+0.148963169 container init 8c6750d5c94c9d302cad30566c5a6944d2d42c6c4dcb0bc2a487277ecc73191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 04:16:54 np0005465604 podman[269992]: 2025-10-02 08:16:54.20384748 +0000 UTC m=+0.159488158 container start 8c6750d5c94c9d302cad30566c5a6944d2d42c6c4dcb0bc2a487277ecc73191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:16:54 np0005465604 podman[269992]: 2025-10-02 08:16:54.207644228 +0000 UTC m=+0.163284916 container attach 8c6750d5c94c9d302cad30566c5a6944d2d42c6c4dcb0bc2a487277ecc73191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 04:16:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:16:54 np0005465604 tender_jang[270008]: {
Oct  2 04:16:54 np0005465604 tender_jang[270008]:    "0": [
Oct  2 04:16:54 np0005465604 tender_jang[270008]:        {
Oct  2 04:16:54 np0005465604 tender_jang[270008]:            "devices": [
Oct  2 04:16:54 np0005465604 tender_jang[270008]:                "/dev/loop3"
Oct  2 04:16:54 np0005465604 tender_jang[270008]:            ],
Oct  2 04:16:54 np0005465604 tender_jang[270008]:            "lv_name": "ceph_lv0",
Oct  2 04:16:54 np0005465604 tender_jang[270008]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:16:54 np0005465604 tender_jang[270008]:            "lv_size": "21470642176",
Oct  2 04:16:54 np0005465604 tender_jang[270008]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:16:54 np0005465604 tender_jang[270008]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:16:54 np0005465604 tender_jang[270008]:            "name": "ceph_lv0",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "tags": {
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.cluster_name": "ceph",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.crush_device_class": "",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.encrypted": "0",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.osd_id": "0",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.type": "block",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.vdo": "0"
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            },
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "type": "block",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "vg_name": "ceph_vg0"
Oct  2 04:16:55 np0005465604 tender_jang[270008]:        }
Oct  2 04:16:55 np0005465604 tender_jang[270008]:    ],
Oct  2 04:16:55 np0005465604 tender_jang[270008]:    "1": [
Oct  2 04:16:55 np0005465604 tender_jang[270008]:        {
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "devices": [
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "/dev/loop4"
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            ],
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "lv_name": "ceph_lv1",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "lv_size": "21470642176",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "name": "ceph_lv1",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "tags": {
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.cluster_name": "ceph",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.crush_device_class": "",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.encrypted": "0",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.osd_id": "1",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.type": "block",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.vdo": "0"
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            },
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "type": "block",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "vg_name": "ceph_vg1"
Oct  2 04:16:55 np0005465604 tender_jang[270008]:        }
Oct  2 04:16:55 np0005465604 tender_jang[270008]:    ],
Oct  2 04:16:55 np0005465604 tender_jang[270008]:    "2": [
Oct  2 04:16:55 np0005465604 tender_jang[270008]:        {
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "devices": [
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "/dev/loop5"
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            ],
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "lv_name": "ceph_lv2",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "lv_size": "21470642176",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "name": "ceph_lv2",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "tags": {
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.cluster_name": "ceph",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.crush_device_class": "",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.encrypted": "0",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.osd_id": "2",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.type": "block",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:                "ceph.vdo": "0"
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            },
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "type": "block",
Oct  2 04:16:55 np0005465604 tender_jang[270008]:            "vg_name": "ceph_vg2"
Oct  2 04:16:55 np0005465604 tender_jang[270008]:        }
Oct  2 04:16:55 np0005465604 tender_jang[270008]:    ]
Oct  2 04:16:55 np0005465604 tender_jang[270008]: }
Oct  2 04:16:55 np0005465604 systemd[1]: libpod-8c6750d5c94c9d302cad30566c5a6944d2d42c6c4dcb0bc2a487277ecc73191d.scope: Deactivated successfully.
Oct  2 04:16:55 np0005465604 podman[269992]: 2025-10-02 08:16:55.041226788 +0000 UTC m=+0.996867446 container died 8c6750d5c94c9d302cad30566c5a6944d2d42c6c4dcb0bc2a487277ecc73191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:16:55 np0005465604 systemd[1]: var-lib-containers-storage-overlay-368b71c749daf4d8545fb147d8603524c80dc197ae3ab66986a30f7ce107ec70-merged.mount: Deactivated successfully.
Oct  2 04:16:55 np0005465604 podman[269992]: 2025-10-02 08:16:55.105963759 +0000 UTC m=+1.061604397 container remove 8c6750d5c94c9d302cad30566c5a6944d2d42c6c4dcb0bc2a487277ecc73191d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 04:16:55 np0005465604 systemd[1]: libpod-conmon-8c6750d5c94c9d302cad30566c5a6944d2d42c6c4dcb0bc2a487277ecc73191d.scope: Deactivated successfully.
Oct  2 04:16:55 np0005465604 podman[270018]: 2025-10-02 08:16:55.194580624 +0000 UTC m=+0.121865864 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:16:55 np0005465604 podman[270198]: 2025-10-02 08:16:55.939929701 +0000 UTC m=+0.063310717 container create 4ef8ef8582db972b42091e8068ae30c6e4975501ffba59a9546b93611f2c9b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:16:56 np0005465604 podman[270198]: 2025-10-02 08:16:55.909549673 +0000 UTC m=+0.032930689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:16:56 np0005465604 systemd[1]: Started libpod-conmon-4ef8ef8582db972b42091e8068ae30c6e4975501ffba59a9546b93611f2c9b7f.scope.
Oct  2 04:16:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:16:56 np0005465604 podman[270198]: 2025-10-02 08:16:56.077462043 +0000 UTC m=+0.200843109 container init 4ef8ef8582db972b42091e8068ae30c6e4975501ffba59a9546b93611f2c9b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 04:16:56 np0005465604 podman[270198]: 2025-10-02 08:16:56.090910723 +0000 UTC m=+0.214291719 container start 4ef8ef8582db972b42091e8068ae30c6e4975501ffba59a9546b93611f2c9b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 04:16:56 np0005465604 podman[270198]: 2025-10-02 08:16:56.0940397 +0000 UTC m=+0.217420756 container attach 4ef8ef8582db972b42091e8068ae30c6e4975501ffba59a9546b93611f2c9b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:16:56 np0005465604 nervous_williams[270214]: 167 167
Oct  2 04:16:56 np0005465604 systemd[1]: libpod-4ef8ef8582db972b42091e8068ae30c6e4975501ffba59a9546b93611f2c9b7f.scope: Deactivated successfully.
Oct  2 04:16:56 np0005465604 podman[270219]: 2025-10-02 08:16:56.16294361 +0000 UTC m=+0.039021728 container died 4ef8ef8582db972b42091e8068ae30c6e4975501ffba59a9546b93611f2c9b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williams, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 04:16:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2e15fc2921c53efc6923234e551b7e6fe24f8c65685dfda282396df8fe1753c7-merged.mount: Deactivated successfully.
Oct  2 04:16:56 np0005465604 podman[270219]: 2025-10-02 08:16:56.196873199 +0000 UTC m=+0.072951297 container remove 4ef8ef8582db972b42091e8068ae30c6e4975501ffba59a9546b93611f2c9b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_williams, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:16:56 np0005465604 systemd[1]: libpod-conmon-4ef8ef8582db972b42091e8068ae30c6e4975501ffba59a9546b93611f2c9b7f.scope: Deactivated successfully.
Oct  2 04:16:56 np0005465604 podman[270241]: 2025-10-02 08:16:56.481033196 +0000 UTC m=+0.073117963 container create 6c80ac7d63a4cdfc011b3b10481b3d1653c209ae532146c8570659efaa8dfb40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:16:56 np0005465604 systemd[1]: Started libpod-conmon-6c80ac7d63a4cdfc011b3b10481b3d1653c209ae532146c8570659efaa8dfb40.scope.
Oct  2 04:16:56 np0005465604 podman[270241]: 2025-10-02 08:16:56.452622319 +0000 UTC m=+0.044707166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:16:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:16:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855b67ca33e0b6ce958047989dca2b064bdfc535baa81bdcefc488a36101e53f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855b67ca33e0b6ce958047989dca2b064bdfc535baa81bdcefc488a36101e53f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855b67ca33e0b6ce958047989dca2b064bdfc535baa81bdcefc488a36101e53f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855b67ca33e0b6ce958047989dca2b064bdfc535baa81bdcefc488a36101e53f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:16:56 np0005465604 podman[270241]: 2025-10-02 08:16:56.582499162 +0000 UTC m=+0.174583949 container init 6c80ac7d63a4cdfc011b3b10481b3d1653c209ae532146c8570659efaa8dfb40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 04:16:56 np0005465604 podman[270241]: 2025-10-02 08:16:56.597686176 +0000 UTC m=+0.189770973 container start 6c80ac7d63a4cdfc011b3b10481b3d1653c209ae532146c8570659efaa8dfb40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:16:56 np0005465604 podman[270241]: 2025-10-02 08:16:56.602579668 +0000 UTC m=+0.194664455 container attach 6c80ac7d63a4cdfc011b3b10481b3d1653c209ae532146c8570659efaa8dfb40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elgamal, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:16:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:16:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]: {
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "osd_id": 2,
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "type": "bluestore"
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:    },
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "osd_id": 1,
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "type": "bluestore"
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:    },
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "osd_id": 0,
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:        "type": "bluestore"
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]:    }
Oct  2 04:16:57 np0005465604 festive_elgamal[270258]: }
Oct  2 04:16:57 np0005465604 systemd[1]: libpod-6c80ac7d63a4cdfc011b3b10481b3d1653c209ae532146c8570659efaa8dfb40.scope: Deactivated successfully.
Oct  2 04:16:57 np0005465604 systemd[1]: libpod-6c80ac7d63a4cdfc011b3b10481b3d1653c209ae532146c8570659efaa8dfb40.scope: Consumed 1.158s CPU time.
Oct  2 04:16:57 np0005465604 podman[270291]: 2025-10-02 08:16:57.811133209 +0000 UTC m=+0.043656313 container died 6c80ac7d63a4cdfc011b3b10481b3d1653c209ae532146c8570659efaa8dfb40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:16:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-855b67ca33e0b6ce958047989dca2b064bdfc535baa81bdcefc488a36101e53f-merged.mount: Deactivated successfully.
Oct  2 04:16:57 np0005465604 podman[270291]: 2025-10-02 08:16:57.875785786 +0000 UTC m=+0.108308870 container remove 6c80ac7d63a4cdfc011b3b10481b3d1653c209ae532146c8570659efaa8dfb40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 04:16:57 np0005465604 podman[270292]: 2025-10-02 08:16:57.877239772 +0000 UTC m=+0.088273976 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  2 04:16:57 np0005465604 systemd[1]: libpod-conmon-6c80ac7d63a4cdfc011b3b10481b3d1653c209ae532146c8570659efaa8dfb40.scope: Deactivated successfully.
Oct  2 04:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:16:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:16:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:16:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:16:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:16:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 667e807d-1e04-4061-ac88-77473f499c5f does not exist
Oct  2 04:16:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev cfaf051b-ab50-4cdc-9a0c-489d69554779 does not exist
Oct  2 04:16:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:16:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:16:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:17:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:02 np0005465604 podman[270374]: 2025-10-02 08:17:02.023066156 +0000 UTC m=+0.082849057 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct  2 04:17:02 np0005465604 nova_compute[260603]: 2025-10-02 08:17:02.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:17:02 np0005465604 nova_compute[260603]: 2025-10-02 08:17:02.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:17:02 np0005465604 nova_compute[260603]: 2025-10-02 08:17:02.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:17:02 np0005465604 nova_compute[260603]: 2025-10-02 08:17:02.538 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:17:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:04 np0005465604 nova_compute[260603]: 2025-10-02 08:17:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:17:04 np0005465604 nova_compute[260603]: 2025-10-02 08:17:04.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:17:04 np0005465604 nova_compute[260603]: 2025-10-02 08:17:04.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:17:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:06 np0005465604 nova_compute[260603]: 2025-10-02 08:17:06.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:17:06 np0005465604 nova_compute[260603]: 2025-10-02 08:17:06.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:17:06 np0005465604 nova_compute[260603]: 2025-10-02 08:17:06.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:17:06 np0005465604 nova_compute[260603]: 2025-10-02 08:17:06.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:17:06 np0005465604 nova_compute[260603]: 2025-10-02 08:17:06.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:17:06 np0005465604 nova_compute[260603]: 2025-10-02 08:17:06.550 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:17:06 np0005465604 nova_compute[260603]: 2025-10-02 08:17:06.550 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:17:06 np0005465604 nova_compute[260603]: 2025-10-02 08:17:06.551 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:17:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:17:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1023544286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.028 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.240 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.241 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.241 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.242 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.300 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.300 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.324 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:17:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:17:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2363748729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.842 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.851 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.868 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.871 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:17:07 np0005465604 nova_compute[260603]: 2025-10-02 08:17:07.872 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:17:08 np0005465604 podman[270438]: 2025-10-02 08:17:08.039303694 +0000 UTC m=+0.094516970 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:17:08 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:17:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:08 np0005465604 nova_compute[260603]: 2025-10-02 08:17:08.872 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:17:08 np0005465604 nova_compute[260603]: 2025-10-02 08:17:08.873 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:17:09 np0005465604 nova_compute[260603]: 2025-10-02 08:17:09.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:17:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:17:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/602908634' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:17:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:17:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/602908634' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:17:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:26 np0005465604 podman[270460]: 2025-10-02 08:17:26.059693974 +0000 UTC m=+0.116886549 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:17:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:17:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:17:27
Oct  2 04:17:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:17:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:17:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'vms', 'volumes', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.control', 'images']
Oct  2 04:17:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:17:28 np0005465604 podman[270486]: 2025-10-02 08:17:28.025401994 +0000 UTC m=+0.084982960 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:28 np0005465604 ceph-mgr[74774]: client.0 ms_handle_reset on v2:192.168.122.100:6800/860957497
Oct  2 04:17:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:33 np0005465604 podman[270506]: 2025-10-02 08:17:33.026379244 +0000 UTC m=+0.084973040 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:17:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:17:34.804 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:17:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:17:34.804 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:17:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:17:34.805 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:17:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:17:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:38 np0005465604 podman[270527]: 2025-10-02 08:17:38.988904108 +0000 UTC m=+0.052462122 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 04:17:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:17:57 np0005465604 podman[270548]: 2025-10-02 08:17:57.051253079 +0000 UTC m=+0.115182116 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:17:58 np0005465604 podman[270598]: 2025-10-02 08:17:58.327200934 +0000 UTC m=+0.077794356 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:17:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:17:59 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 05eee4c8-04bd-4dab-8072-cf5dcb8bfba7 does not exist
Oct  2 04:17:59 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 64c9f779-90a1-455b-a247-5b00e64131af does not exist
Oct  2 04:17:59 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev fbef18f3-eaa4-44e4-835e-871e1b84d52b does not exist
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:17:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:18:00 np0005465604 podman[270986]: 2025-10-02 08:18:00.35824677 +0000 UTC m=+0.048872290 container create f34625a01b461f949e4ec32a9a69e49b8b7e19d51874afb04009f3fb2ccf43dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:18:00 np0005465604 systemd[1]: Started libpod-conmon-f34625a01b461f949e4ec32a9a69e49b8b7e19d51874afb04009f3fb2ccf43dd.scope.
Oct  2 04:18:00 np0005465604 podman[270986]: 2025-10-02 08:18:00.33649927 +0000 UTC m=+0.027124880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:18:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:18:00 np0005465604 podman[270986]: 2025-10-02 08:18:00.45152536 +0000 UTC m=+0.142150900 container init f34625a01b461f949e4ec32a9a69e49b8b7e19d51874afb04009f3fb2ccf43dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 04:18:00 np0005465604 podman[270986]: 2025-10-02 08:18:00.463384431 +0000 UTC m=+0.154010001 container start f34625a01b461f949e4ec32a9a69e49b8b7e19d51874afb04009f3fb2ccf43dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:18:00 np0005465604 podman[270986]: 2025-10-02 08:18:00.467563132 +0000 UTC m=+0.158188682 container attach f34625a01b461f949e4ec32a9a69e49b8b7e19d51874afb04009f3fb2ccf43dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 04:18:00 np0005465604 xenodochial_haslett[271002]: 167 167
Oct  2 04:18:00 np0005465604 systemd[1]: libpod-f34625a01b461f949e4ec32a9a69e49b8b7e19d51874afb04009f3fb2ccf43dd.scope: Deactivated successfully.
Oct  2 04:18:00 np0005465604 conmon[271002]: conmon f34625a01b461f949e4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f34625a01b461f949e4ec32a9a69e49b8b7e19d51874afb04009f3fb2ccf43dd.scope/container/memory.events
Oct  2 04:18:00 np0005465604 podman[270986]: 2025-10-02 08:18:00.470227425 +0000 UTC m=+0.160852985 container died f34625a01b461f949e4ec32a9a69e49b8b7e19d51874afb04009f3fb2ccf43dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:18:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-03a3eab11bc0d2d7e1746bf7c30480033fdb55772af13d02f3c5d2ca1cdb2b47-merged.mount: Deactivated successfully.
Oct  2 04:18:00 np0005465604 nova_compute[260603]: 2025-10-02 08:18:00.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:00 np0005465604 nova_compute[260603]: 2025-10-02 08:18:00.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 04:18:00 np0005465604 podman[270986]: 2025-10-02 08:18:00.528667944 +0000 UTC m=+0.219293464 container remove f34625a01b461f949e4ec32a9a69e49b8b7e19d51874afb04009f3fb2ccf43dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_haslett, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:18:00 np0005465604 systemd[1]: libpod-conmon-f34625a01b461f949e4ec32a9a69e49b8b7e19d51874afb04009f3fb2ccf43dd.scope: Deactivated successfully.
Oct  2 04:18:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:00 np0005465604 podman[271026]: 2025-10-02 08:18:00.744921102 +0000 UTC m=+0.047304031 container create 19fbe61a35b20eec23208d743ebe4aa619708e63104c3dafd950c44892ba9844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:18:00 np0005465604 systemd[1]: Started libpod-conmon-19fbe61a35b20eec23208d743ebe4aa619708e63104c3dafd950c44892ba9844.scope.
Oct  2 04:18:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:18:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d0ae8260eb4edccb25a2c7ffc2292ffc799f70a0cd9e600b8749c61d3200ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d0ae8260eb4edccb25a2c7ffc2292ffc799f70a0cd9e600b8749c61d3200ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:00 np0005465604 podman[271026]: 2025-10-02 08:18:00.725313099 +0000 UTC m=+0.027696068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:18:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d0ae8260eb4edccb25a2c7ffc2292ffc799f70a0cd9e600b8749c61d3200ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d0ae8260eb4edccb25a2c7ffc2292ffc799f70a0cd9e600b8749c61d3200ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d0ae8260eb4edccb25a2c7ffc2292ffc799f70a0cd9e600b8749c61d3200ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:00 np0005465604 podman[271026]: 2025-10-02 08:18:00.845294584 +0000 UTC m=+0.147677603 container init 19fbe61a35b20eec23208d743ebe4aa619708e63104c3dafd950c44892ba9844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 04:18:00 np0005465604 podman[271026]: 2025-10-02 08:18:00.853550272 +0000 UTC m=+0.155933251 container start 19fbe61a35b20eec23208d743ebe4aa619708e63104c3dafd950c44892ba9844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:18:00 np0005465604 podman[271026]: 2025-10-02 08:18:00.857889048 +0000 UTC m=+0.160272007 container attach 19fbe61a35b20eec23208d743ebe4aa619708e63104c3dafd950c44892ba9844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:18:01 np0005465604 nova_compute[260603]: 2025-10-02 08:18:01.538 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:01 np0005465604 nova_compute[260603]: 2025-10-02 08:18:01.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 04:18:01 np0005465604 nova_compute[260603]: 2025-10-02 08:18:01.559 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 04:18:01 np0005465604 kind_nobel[271044]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:18:01 np0005465604 kind_nobel[271044]: --> relative data size: 1.0
Oct  2 04:18:01 np0005465604 kind_nobel[271044]: --> All data devices are unavailable
Oct  2 04:18:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:18:01 np0005465604 systemd[1]: libpod-19fbe61a35b20eec23208d743ebe4aa619708e63104c3dafd950c44892ba9844.scope: Deactivated successfully.
Oct  2 04:18:01 np0005465604 systemd[1]: libpod-19fbe61a35b20eec23208d743ebe4aa619708e63104c3dafd950c44892ba9844.scope: Consumed 1.024s CPU time.
Oct  2 04:18:01 np0005465604 podman[271073]: 2025-10-02 08:18:01.971948246 +0000 UTC m=+0.038388732 container died 19fbe61a35b20eec23208d743ebe4aa619708e63104c3dafd950c44892ba9844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct  2 04:18:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-22d0ae8260eb4edccb25a2c7ffc2292ffc799f70a0cd9e600b8749c61d3200ed-merged.mount: Deactivated successfully.
Oct  2 04:18:02 np0005465604 podman[271073]: 2025-10-02 08:18:02.049276456 +0000 UTC m=+0.115716892 container remove 19fbe61a35b20eec23208d743ebe4aa619708e63104c3dafd950c44892ba9844 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 04:18:02 np0005465604 systemd[1]: libpod-conmon-19fbe61a35b20eec23208d743ebe4aa619708e63104c3dafd950c44892ba9844.scope: Deactivated successfully.
Oct  2 04:18:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:02 np0005465604 podman[271228]: 2025-10-02 08:18:02.842790281 +0000 UTC m=+0.051890646 container create be3dd929e2c27f13729f28d34f4ab5a264639158e744bc499be5b9f20f62ced0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:18:02 np0005465604 systemd[1]: Started libpod-conmon-be3dd929e2c27f13729f28d34f4ab5a264639158e744bc499be5b9f20f62ced0.scope.
Oct  2 04:18:02 np0005465604 podman[271228]: 2025-10-02 08:18:02.816826629 +0000 UTC m=+0.025927054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:18:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:18:02 np0005465604 podman[271228]: 2025-10-02 08:18:02.934765309 +0000 UTC m=+0.143865664 container init be3dd929e2c27f13729f28d34f4ab5a264639158e744bc499be5b9f20f62ced0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 04:18:02 np0005465604 podman[271228]: 2025-10-02 08:18:02.942695487 +0000 UTC m=+0.151795862 container start be3dd929e2c27f13729f28d34f4ab5a264639158e744bc499be5b9f20f62ced0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 04:18:02 np0005465604 podman[271228]: 2025-10-02 08:18:02.946790246 +0000 UTC m=+0.155890691 container attach be3dd929e2c27f13729f28d34f4ab5a264639158e744bc499be5b9f20f62ced0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:18:02 np0005465604 recursing_borg[271244]: 167 167
Oct  2 04:18:02 np0005465604 systemd[1]: libpod-be3dd929e2c27f13729f28d34f4ab5a264639158e744bc499be5b9f20f62ced0.scope: Deactivated successfully.
Oct  2 04:18:02 np0005465604 podman[271228]: 2025-10-02 08:18:02.952151344 +0000 UTC m=+0.161251709 container died be3dd929e2c27f13729f28d34f4ab5a264639158e744bc499be5b9f20f62ced0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:18:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2934c44e87de4253763a5bbaf55b5604f9b12ee396c12329e1cf9d9f2f01d383-merged.mount: Deactivated successfully.
Oct  2 04:18:02 np0005465604 podman[271228]: 2025-10-02 08:18:02.992158806 +0000 UTC m=+0.201259141 container remove be3dd929e2c27f13729f28d34f4ab5a264639158e744bc499be5b9f20f62ced0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_borg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:18:03 np0005465604 systemd[1]: libpod-conmon-be3dd929e2c27f13729f28d34f4ab5a264639158e744bc499be5b9f20f62ced0.scope: Deactivated successfully.
Oct  2 04:18:03 np0005465604 podman[271267]: 2025-10-02 08:18:03.189697479 +0000 UTC m=+0.051456072 container create 0c16a459a665e8466e7328c13d7aac2badccefffad5e9d6a2e061eda11ba5665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:18:03 np0005465604 systemd[1]: Started libpod-conmon-0c16a459a665e8466e7328c13d7aac2badccefffad5e9d6a2e061eda11ba5665.scope.
Oct  2 04:18:03 np0005465604 podman[271267]: 2025-10-02 08:18:03.160319749 +0000 UTC m=+0.022078362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:18:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:18:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84a867b3aaad03d59b2e188b856abac1134966be1d9e3e349f97f65aa97fca88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84a867b3aaad03d59b2e188b856abac1134966be1d9e3e349f97f65aa97fca88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84a867b3aaad03d59b2e188b856abac1134966be1d9e3e349f97f65aa97fca88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84a867b3aaad03d59b2e188b856abac1134966be1d9e3e349f97f65aa97fca88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:03 np0005465604 podman[271267]: 2025-10-02 08:18:03.289518912 +0000 UTC m=+0.151277515 container init 0c16a459a665e8466e7328c13d7aac2badccefffad5e9d6a2e061eda11ba5665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 04:18:03 np0005465604 podman[271267]: 2025-10-02 08:18:03.301828638 +0000 UTC m=+0.163587221 container start 0c16a459a665e8466e7328c13d7aac2badccefffad5e9d6a2e061eda11ba5665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:18:03 np0005465604 podman[271267]: 2025-10-02 08:18:03.305986578 +0000 UTC m=+0.167745161 container attach 0c16a459a665e8466e7328c13d7aac2badccefffad5e9d6a2e061eda11ba5665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 04:18:03 np0005465604 podman[271281]: 2025-10-02 08:18:03.335730799 +0000 UTC m=+0.104661147 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  2 04:18:03 np0005465604 nova_compute[260603]: 2025-10-02 08:18:03.541 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:03 np0005465604 nova_compute[260603]: 2025-10-02 08:18:03.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:18:03 np0005465604 nova_compute[260603]: 2025-10-02 08:18:03.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:18:03 np0005465604 nova_compute[260603]: 2025-10-02 08:18:03.557 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]: {
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:    "0": [
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:        {
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "devices": [
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "/dev/loop3"
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            ],
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_name": "ceph_lv0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_size": "21470642176",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "name": "ceph_lv0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "tags": {
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.cluster_name": "ceph",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.crush_device_class": "",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.encrypted": "0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.osd_id": "0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.type": "block",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.vdo": "0"
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            },
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "type": "block",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "vg_name": "ceph_vg0"
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:        }
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:    ],
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:    "1": [
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:        {
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "devices": [
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "/dev/loop4"
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            ],
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_name": "ceph_lv1",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_size": "21470642176",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "name": "ceph_lv1",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "tags": {
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.cluster_name": "ceph",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.crush_device_class": "",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.encrypted": "0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.osd_id": "1",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.type": "block",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.vdo": "0"
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            },
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "type": "block",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "vg_name": "ceph_vg1"
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:        }
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:    ],
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:    "2": [
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:        {
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "devices": [
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "/dev/loop5"
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            ],
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_name": "ceph_lv2",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_size": "21470642176",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "name": "ceph_lv2",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "tags": {
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.cluster_name": "ceph",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.crush_device_class": "",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.encrypted": "0",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.osd_id": "2",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.type": "block",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:                "ceph.vdo": "0"
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            },
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "type": "block",
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:            "vg_name": "ceph_vg2"
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:        }
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]:    ]
Oct  2 04:18:04 np0005465604 tender_lovelace[271289]: }
Oct  2 04:18:04 np0005465604 systemd[1]: libpod-0c16a459a665e8466e7328c13d7aac2badccefffad5e9d6a2e061eda11ba5665.scope: Deactivated successfully.
Oct  2 04:18:04 np0005465604 podman[271267]: 2025-10-02 08:18:04.044936016 +0000 UTC m=+0.906694599 container died 0c16a459a665e8466e7328c13d7aac2badccefffad5e9d6a2e061eda11ba5665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct  2 04:18:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay-84a867b3aaad03d59b2e188b856abac1134966be1d9e3e349f97f65aa97fca88-merged.mount: Deactivated successfully.
Oct  2 04:18:04 np0005465604 podman[271267]: 2025-10-02 08:18:04.110993293 +0000 UTC m=+0.972751886 container remove 0c16a459a665e8466e7328c13d7aac2badccefffad5e9d6a2e061eda11ba5665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:18:04 np0005465604 systemd[1]: libpod-conmon-0c16a459a665e8466e7328c13d7aac2badccefffad5e9d6a2e061eda11ba5665.scope: Deactivated successfully.
Oct  2 04:18:04 np0005465604 nova_compute[260603]: 2025-10-02 08:18:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:04 np0005465604 nova_compute[260603]: 2025-10-02 08:18:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:04 np0005465604 nova_compute[260603]: 2025-10-02 08:18:04.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:18:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:04 np0005465604 podman[271464]: 2025-10-02 08:18:04.917896207 +0000 UTC m=+0.056465918 container create db740b61affc47b24b537ce9ae67156f431bb03df9e43515eaad6b0fdbf4ba7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:18:04 np0005465604 systemd[1]: Started libpod-conmon-db740b61affc47b24b537ce9ae67156f431bb03df9e43515eaad6b0fdbf4ba7b.scope.
Oct  2 04:18:04 np0005465604 podman[271464]: 2025-10-02 08:18:04.897039324 +0000 UTC m=+0.035609045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:18:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:18:05 np0005465604 podman[271464]: 2025-10-02 08:18:05.007878353 +0000 UTC m=+0.146448045 container init db740b61affc47b24b537ce9ae67156f431bb03df9e43515eaad6b0fdbf4ba7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:18:05 np0005465604 podman[271464]: 2025-10-02 08:18:05.0144827 +0000 UTC m=+0.153052381 container start db740b61affc47b24b537ce9ae67156f431bb03df9e43515eaad6b0fdbf4ba7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:18:05 np0005465604 podman[271464]: 2025-10-02 08:18:05.017076982 +0000 UTC m=+0.155646683 container attach db740b61affc47b24b537ce9ae67156f431bb03df9e43515eaad6b0fdbf4ba7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 04:18:05 np0005465604 exciting_tesla[271481]: 167 167
Oct  2 04:18:05 np0005465604 systemd[1]: libpod-db740b61affc47b24b537ce9ae67156f431bb03df9e43515eaad6b0fdbf4ba7b.scope: Deactivated successfully.
Oct  2 04:18:05 np0005465604 podman[271464]: 2025-10-02 08:18:05.020155688 +0000 UTC m=+0.158725369 container died db740b61affc47b24b537ce9ae67156f431bb03df9e43515eaad6b0fdbf4ba7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:18:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9c7d7bc3514c7f237b169cbf1a9d861480f119b4f4cda5957ba8a708106a7bac-merged.mount: Deactivated successfully.
Oct  2 04:18:05 np0005465604 podman[271464]: 2025-10-02 08:18:05.056443993 +0000 UTC m=+0.195013674 container remove db740b61affc47b24b537ce9ae67156f431bb03df9e43515eaad6b0fdbf4ba7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:18:05 np0005465604 systemd[1]: libpod-conmon-db740b61affc47b24b537ce9ae67156f431bb03df9e43515eaad6b0fdbf4ba7b.scope: Deactivated successfully.
Oct  2 04:18:05 np0005465604 podman[271506]: 2025-10-02 08:18:05.215866283 +0000 UTC m=+0.050763799 container create 4035251b1b2fb6a4a2570204f3f9e6516e1b7317e98f188804a2e70488ee3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:18:05 np0005465604 systemd[1]: Started libpod-conmon-4035251b1b2fb6a4a2570204f3f9e6516e1b7317e98f188804a2e70488ee3dbd.scope.
Oct  2 04:18:05 np0005465604 podman[271506]: 2025-10-02 08:18:05.1889168 +0000 UTC m=+0.023814376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:18:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:18:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb7e3eb234ecde34e2bb134cd36306890283168377e4309afeda15c0c2299af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb7e3eb234ecde34e2bb134cd36306890283168377e4309afeda15c0c2299af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb7e3eb234ecde34e2bb134cd36306890283168377e4309afeda15c0c2299af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb7e3eb234ecde34e2bb134cd36306890283168377e4309afeda15c0c2299af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:18:05 np0005465604 podman[271506]: 2025-10-02 08:18:05.302821675 +0000 UTC m=+0.137719181 container init 4035251b1b2fb6a4a2570204f3f9e6516e1b7317e98f188804a2e70488ee3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:18:05 np0005465604 podman[271506]: 2025-10-02 08:18:05.309195754 +0000 UTC m=+0.144093260 container start 4035251b1b2fb6a4a2570204f3f9e6516e1b7317e98f188804a2e70488ee3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rosalind, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct  2 04:18:05 np0005465604 podman[271506]: 2025-10-02 08:18:05.313002903 +0000 UTC m=+0.147900399 container attach 4035251b1b2fb6a4a2570204f3f9e6516e1b7317e98f188804a2e70488ee3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rosalind, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]: {
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "osd_id": 2,
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "type": "bluestore"
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:    },
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "osd_id": 1,
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "type": "bluestore"
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:    },
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "osd_id": 0,
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:        "type": "bluestore"
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]:    }
Oct  2 04:18:06 np0005465604 angry_rosalind[271523]: }
Oct  2 04:18:06 np0005465604 systemd[1]: libpod-4035251b1b2fb6a4a2570204f3f9e6516e1b7317e98f188804a2e70488ee3dbd.scope: Deactivated successfully.
Oct  2 04:18:06 np0005465604 systemd[1]: libpod-4035251b1b2fb6a4a2570204f3f9e6516e1b7317e98f188804a2e70488ee3dbd.scope: Consumed 1.109s CPU time.
Oct  2 04:18:06 np0005465604 podman[271506]: 2025-10-02 08:18:06.411328148 +0000 UTC m=+1.246225644 container died 4035251b1b2fb6a4a2570204f3f9e6516e1b7317e98f188804a2e70488ee3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:18:06 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bfb7e3eb234ecde34e2bb134cd36306890283168377e4309afeda15c0c2299af-merged.mount: Deactivated successfully.
Oct  2 04:18:06 np0005465604 podman[271506]: 2025-10-02 08:18:06.475669481 +0000 UTC m=+1.310566957 container remove 4035251b1b2fb6a4a2570204f3f9e6516e1b7317e98f188804a2e70488ee3dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_rosalind, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:18:06 np0005465604 systemd[1]: libpod-conmon-4035251b1b2fb6a4a2570204f3f9e6516e1b7317e98f188804a2e70488ee3dbd.scope: Deactivated successfully.
Oct  2 04:18:06 np0005465604 nova_compute[260603]: 2025-10-02 08:18:06.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:18:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:18:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:18:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:18:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 14c9c522-48d2-4bb6-9cdd-8ef9ea7edab3 does not exist
Oct  2 04:18:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0ca40e10-8b35-4875-8bb2-682b30dbff2d does not exist
Oct  2 04:18:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:18:07 np0005465604 nova_compute[260603]: 2025-10-02 08:18:07.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:07 np0005465604 nova_compute[260603]: 2025-10-02 08:18:07.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:18:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:18:07 np0005465604 nova_compute[260603]: 2025-10-02 08:18:07.552 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:18:07 np0005465604 nova_compute[260603]: 2025-10-02 08:18:07.552 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:18:07 np0005465604 nova_compute[260603]: 2025-10-02 08:18:07.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:18:07 np0005465604 nova_compute[260603]: 2025-10-02 08:18:07.553 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:18:07 np0005465604 nova_compute[260603]: 2025-10-02 08:18:07.553 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:18:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:18:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195731683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.024 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.205 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.206 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5125MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.206 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.207 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.442 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.443 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.525 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.612 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.613 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.626 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.657 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 04:18:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:08 np0005465604 nova_compute[260603]: 2025-10-02 08:18:08.678 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:18:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:18:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1092114109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:18:09 np0005465604 nova_compute[260603]: 2025-10-02 08:18:09.151 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:18:09 np0005465604 nova_compute[260603]: 2025-10-02 08:18:09.157 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:18:09 np0005465604 nova_compute[260603]: 2025-10-02 08:18:09.181 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:18:09 np0005465604 nova_compute[260603]: 2025-10-02 08:18:09.182 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:18:09 np0005465604 nova_compute[260603]: 2025-10-02 08:18:09.183 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:18:10 np0005465604 podman[271662]: 2025-10-02 08:18:10.010523115 +0000 UTC m=+0.065658026 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:18:10 np0005465604 nova_compute[260603]: 2025-10-02 08:18:10.179 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:10 np0005465604 nova_compute[260603]: 2025-10-02 08:18:10.195 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:10 np0005465604 nova_compute[260603]: 2025-10-02 08:18:10.196 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:10 np0005465604 nova_compute[260603]: 2025-10-02 08:18:10.530 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:18:12 np0005465604 nova_compute[260603]: 2025-10-02 08:18:12.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:18:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct  2 04:18:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct  2 04:18:21 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct  2 04:18:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 04:18:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:18:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2338388586' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:18:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:18:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2338388586' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:18:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:18:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6218 writes, 25K keys, 6218 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6218 writes, 1171 syncs, 5.31 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 470 writes, 1046 keys, 470 commit groups, 1.0 writes per commit group, ingest: 0.61 MB, 0.00 MB/s#012Interval WAL: 470 writes, 222 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:18:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 13 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 23 op/s
Oct  2 04:18:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct  2 04:18:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct  2 04:18:23 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct  2 04:18:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.6 MiB/s wr, 35 op/s
Oct  2 04:18:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:18:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 7358 writes, 29K keys, 7358 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7358 writes, 1501 syncs, 4.90 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 481 writes, 1345 keys, 481 commit groups, 1.0 writes per commit group, ingest: 0.73 MB, 0.00 MB/s#012Interval WAL: 481 writes, 220 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:18:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.6 MiB/s wr, 35 op/s
Oct  2 04:18:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:18:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:18:27
Oct  2 04:18:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:18:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:18:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.control']
Oct  2 04:18:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:18:28 np0005465604 podman[271682]: 2025-10-02 08:18:28.103178145 +0000 UTC m=+0.164924803 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:18:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct  2 04:18:29 np0005465604 podman[271709]: 2025-10-02 08:18:29.017345436 +0000 UTC m=+0.074823253 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 04:18:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 MiB/s wr, 42 op/s
Oct  2 04:18:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:18:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6058 writes, 24K keys, 6058 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6058 writes, 1078 syncs, 5.62 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 455 writes, 1227 keys, 455 commit groups, 1.0 writes per commit group, ingest: 0.58 MB, 0.00 MB/s#012Interval WAL: 455 writes, 206 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:18:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:18:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 04:18:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 2.8 MiB/s wr, 15 op/s
Oct  2 04:18:34 np0005465604 podman[271728]: 2025-10-02 08:18:34.050278955 +0000 UTC m=+0.110329653 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct  2 04:18:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.1 MiB/s wr, 8 op/s
Oct  2 04:18:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:18:34.805 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:18:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:18:34.806 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:18:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:18:34.806 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:18:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1.0 MiB/s wr, 7 op/s
Oct  2 04:18:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:18:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1.0 MiB/s wr, 7 op/s
Oct  2 04:18:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:41 np0005465604 podman[271748]: 2025-10-02 08:18:41.0378164 +0000 UTC m=+0.091863516 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 04:18:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:18:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:18:43.425 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:18:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:18:43.427 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:18:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:18:44.429 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:18:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:44 np0005465604 nova_compute[260603]: 2025-10-02 08:18:44.843 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Acquiring lock "99244c26-8ba7-4081-ab88-22a769cad5b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:18:44 np0005465604 nova_compute[260603]: 2025-10-02 08:18:44.844 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "99244c26-8ba7-4081-ab88-22a769cad5b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:18:44 np0005465604 nova_compute[260603]: 2025-10-02 08:18:44.863 2 DEBUG nova.compute.manager [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:18:44 np0005465604 nova_compute[260603]: 2025-10-02 08:18:44.969 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:18:44 np0005465604 nova_compute[260603]: 2025-10-02 08:18:44.970 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:18:44 np0005465604 nova_compute[260603]: 2025-10-02 08:18:44.980 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:18:44 np0005465604 nova_compute[260603]: 2025-10-02 08:18:44.980 2 INFO nova.compute.claims [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.106 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:18:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:18:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724572698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.533 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.542 2 DEBUG nova.compute.provider_tree [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.559 2 DEBUG nova.scheduler.client.report [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.587 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.589 2 DEBUG nova.compute.manager [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.644 2 DEBUG nova.compute.manager [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.690 2 INFO nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.732 2 DEBUG nova.compute.manager [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.854 2 DEBUG nova.compute.manager [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.856 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.857 2 INFO nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Creating image(s)#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.897 2 DEBUG nova.storage.rbd_utils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] rbd image 99244c26-8ba7-4081-ab88-22a769cad5b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.932 2 DEBUG nova.storage.rbd_utils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] rbd image 99244c26-8ba7-4081-ab88-22a769cad5b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.967 2 DEBUG nova.storage.rbd_utils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] rbd image 99244c26-8ba7-4081-ab88-22a769cad5b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.971 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:18:45 np0005465604 nova_compute[260603]: 2025-10-02 08:18:45.972 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:18:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:18:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:49 np0005465604 nova_compute[260603]: 2025-10-02 08:18:49.199 2 DEBUG nova.virt.libvirt.imagebackend [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/420393e6-d62b-4055-afb9-674967e2c2b0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/420393e6-d62b-4055-afb9-674967e2c2b0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 04:18:50 np0005465604 nova_compute[260603]: 2025-10-02 08:18:50.409 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:18:50 np0005465604 nova_compute[260603]: 2025-10-02 08:18:50.503 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236.part --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:18:50 np0005465604 nova_compute[260603]: 2025-10-02 08:18:50.505 2 DEBUG nova.virt.images [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] 420393e6-d62b-4055-afb9-674967e2c2b0 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct  2 04:18:50 np0005465604 nova_compute[260603]: 2025-10-02 08:18:50.507 2 DEBUG nova.privsep.utils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  2 04:18:50 np0005465604 nova_compute[260603]: 2025-10-02 08:18:50.507 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236.part /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:18:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  2 04:18:50 np0005465604 nova_compute[260603]: 2025-10-02 08:18:50.708 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236.part /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236.converted" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:18:50 np0005465604 nova_compute[260603]: 2025-10-02 08:18:50.712 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:18:50 np0005465604 nova_compute[260603]: 2025-10-02 08:18:50.763 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236.converted --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:18:50 np0005465604 nova_compute[260603]: 2025-10-02 08:18:50.765 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:18:50 np0005465604 nova_compute[260603]: 2025-10-02 08:18:50.788 2 DEBUG nova.storage.rbd_utils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] rbd image 99244c26-8ba7-4081-ab88-22a769cad5b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:18:50 np0005465604 nova_compute[260603]: 2025-10-02 08:18:50.791 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 99244c26-8ba7-4081-ab88-22a769cad5b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:18:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct  2 04:18:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct  2 04:18:51 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct  2 04:18:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:18:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 102 B/s wr, 8 op/s
Oct  2 04:18:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct  2 04:18:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct  2 04:18:52 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.051 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 99244c26-8ba7-4081-ab88-22a769cad5b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.114 2 DEBUG nova.storage.rbd_utils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] resizing rbd image 99244c26-8ba7-4081-ab88-22a769cad5b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.217 2 DEBUG nova.objects.instance [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lazy-loading 'migration_context' on Instance uuid 99244c26-8ba7-4081-ab88-22a769cad5b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.233 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.234 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Ensure instance console log exists: /var/lib/nova/instances/99244c26-8ba7-4081-ab88-22a769cad5b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.234 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.234 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.235 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.236 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.239 2 WARNING nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.247 2 DEBUG nova.virt.libvirt.host [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.248 2 DEBUG nova.virt.libvirt.host [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.251 2 DEBUG nova.virt.libvirt.host [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.251 2 DEBUG nova.virt.libvirt.host [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.251 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.252 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.252 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.252 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.253 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.253 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.253 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.253 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.253 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.254 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.254 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.254 2 DEBUG nova.virt.hardware [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.257 2 DEBUG nova.privsep.utils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.258 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:18:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:18:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1405297270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.732 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.763 2 DEBUG nova.storage.rbd_utils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] rbd image 99244c26-8ba7-4081-ab88-22a769cad5b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:18:53 np0005465604 nova_compute[260603]: 2025-10-02 08:18:53.767 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:18:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:18:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422525931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:18:54 np0005465604 nova_compute[260603]: 2025-10-02 08:18:54.197 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:18:54 np0005465604 nova_compute[260603]: 2025-10-02 08:18:54.201 2 DEBUG nova.objects.instance [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 99244c26-8ba7-4081-ab88-22a769cad5b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:18:54 np0005465604 nova_compute[260603]: 2025-10-02 08:18:54.223 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  <uuid>99244c26-8ba7-4081-ab88-22a769cad5b8</uuid>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  <name>instance-00000001</name>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <nova:name>tempest-AutoAllocateNetworkTest-server-203425138</nova:name>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:18:53</nova:creationTime>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:        <nova:user uuid="6d603a4db7e644a2b379dbbdedd0a93c">tempest-AutoAllocateNetworkTest-1535339729-project-member</nova:user>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:        <nova:project uuid="6aa2cb5080d8450c833bc37f523d5bf6">tempest-AutoAllocateNetworkTest-1535339729</nova:project>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <entry name="serial">99244c26-8ba7-4081-ab88-22a769cad5b8</entry>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <entry name="uuid">99244c26-8ba7-4081-ab88-22a769cad5b8</entry>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/99244c26-8ba7-4081-ab88-22a769cad5b8_disk">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/99244c26-8ba7-4081-ab88-22a769cad5b8_disk.config">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/99244c26-8ba7-4081-ab88-22a769cad5b8/console.log" append="off"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:18:54 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:18:54 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:18:54 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:18:54 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:18:54 np0005465604 nova_compute[260603]: 2025-10-02 08:18:54.310 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:18:54 np0005465604 nova_compute[260603]: 2025-10-02 08:18:54.310 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:18:54 np0005465604 nova_compute[260603]: 2025-10-02 08:18:54.311 2 INFO nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Using config drive#033[00m
Oct  2 04:18:54 np0005465604 nova_compute[260603]: 2025-10-02 08:18:54.343 2 DEBUG nova.storage.rbd_utils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] rbd image 99244c26-8ba7-4081-ab88-22a769cad5b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:18:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 66 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 25 op/s
Oct  2 04:18:54 np0005465604 nova_compute[260603]: 2025-10-02 08:18:54.947 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:18:54 np0005465604 nova_compute[260603]: 2025-10-02 08:18:54.964 2 INFO nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Creating config drive at /var/lib/nova/instances/99244c26-8ba7-4081-ab88-22a769cad5b8/disk.config#033[00m
Oct  2 04:18:54 np0005465604 nova_compute[260603]: 2025-10-02 08:18:54.973 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/99244c26-8ba7-4081-ab88-22a769cad5b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2emcr5rm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:18:55 np0005465604 nova_compute[260603]: 2025-10-02 08:18:54.999 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 99244c26-8ba7-4081-ab88-22a769cad5b8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 04:18:55 np0005465604 nova_compute[260603]: 2025-10-02 08:18:55.001 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "99244c26-8ba7-4081-ab88-22a769cad5b8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:18:55 np0005465604 nova_compute[260603]: 2025-10-02 08:18:55.105 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/99244c26-8ba7-4081-ab88-22a769cad5b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2emcr5rm" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:18:55 np0005465604 nova_compute[260603]: 2025-10-02 08:18:55.126 2 DEBUG nova.storage.rbd_utils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] rbd image 99244c26-8ba7-4081-ab88-22a769cad5b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:18:55 np0005465604 nova_compute[260603]: 2025-10-02 08:18:55.129 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/99244c26-8ba7-4081-ab88-22a769cad5b8/disk.config 99244c26-8ba7-4081-ab88-22a769cad5b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:18:55 np0005465604 nova_compute[260603]: 2025-10-02 08:18:55.280 2 DEBUG oslo_concurrency.processutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/99244c26-8ba7-4081-ab88-22a769cad5b8/disk.config 99244c26-8ba7-4081-ab88-22a769cad5b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:18:55 np0005465604 nova_compute[260603]: 2025-10-02 08:18:55.282 2 INFO nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Deleting local config drive /var/lib/nova/instances/99244c26-8ba7-4081-ab88-22a769cad5b8/disk.config because it was imported into RBD.#033[00m
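The config-drive sequence logged above (build an ISO with `mkisofs`, `rbd import` it into the `vms` pool, then delete the local copy) can be sketched as command construction. The paths, pool name, flags, and Ceph user are taken directly from the logged commands; `build_mkisofs_cmd` and `build_rbd_import_cmd` are hypothetical helper names, not Nova API functions.

```python
# Sketch of the config-drive build-and-import argv seen in the log above.
# Flags mirror the logged /usr/bin/mkisofs and rbd import invocations.

def build_mkisofs_cmd(iso_path, src_dir, publisher, label="config-2"):
    """Return the mkisofs argv that writes the config-drive ISO."""
    return ["/usr/bin/mkisofs", "-o", iso_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", publisher, "-quiet", "-J", "-r",
            "-V", label, src_dir]

def build_rbd_import_cmd(pool, iso_path, image_name, ceph_user, conf):
    """Return the rbd argv that imports the ISO into the RBD pool."""
    return ["rbd", "import", "--pool", pool, iso_path, image_name,
            "--image-format=2", "--id", ceph_user, "--conf", conf]

uuid = "99244c26-8ba7-4081-ab88-22a769cad5b8"
iso = f"/var/lib/nova/instances/{uuid}/disk.config"
mkisofs = build_mkisofs_cmd(iso, "/tmp/tmp2emcr5rm", "OpenStack Compute")
rbd = build_rbd_import_cmd("vms", iso, f"{uuid}_disk.config",
                           "openstack", "/etc/ceph/ceph.conf")
```

Once the `rbd import` returns 0, the local `disk.config` is redundant, which is why the driver deletes it immediately afterwards.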
Oct  2 04:18:55 np0005465604 systemd[1]: Starting libvirt secret daemon...
Oct  2 04:18:55 np0005465604 systemd[1]: Started libvirt secret daemon.
Oct  2 04:18:55 np0005465604 systemd-machined[214636]: New machine qemu-1-instance-00000001.
Oct  2 04:18:55 np0005465604 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.559 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393136.5586927, 99244c26-8ba7-4081-ab88-22a769cad5b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.560 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.569 2 DEBUG nova.compute.manager [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.570 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.574 2 INFO nova.virt.libvirt.driver [-] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Instance spawned successfully.#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.575 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.594 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.600 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.605 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.605 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.606 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.606 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.607 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.607 2 DEBUG nova.virt.libvirt.driver [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.617 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.617 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393136.5690029, 99244c26-8ba7-4081-ab88-22a769cad5b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.618 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] VM Started (Lifecycle Event)#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.634 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.638 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.654 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
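The "current DB power_state: 0, VM power_state: 1" comparisons in the sync messages above use Nova's integer power-state constants (the `nova.compute.power_state` module). A small lookup reproducing them; this is a sketch for reading the log, not code imported from Nova.

```python
# Nova power-state constants, as used in the handle_lifecycle_event
# messages above: DB still at NOSTATE (0) while libvirt already
# reports RUNNING (1) for the freshly started guest.
POWER_STATE = {
    0: "NOSTATE",    # DB value before the first successful sync
    1: "RUNNING",    # what libvirt reports once the guest is up
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

def describe_sync(db_state, vm_state):
    """Render a sync comparison like the log lines above."""
    return (f"DB power_state: {POWER_STATE[db_state]}, "
            f"VM power_state: {POWER_STATE[vm_state]}")

print(describe_sync(0, 1))
# → DB power_state: NOSTATE, VM power_state: RUNNING
```

Because the instance still has task_state `spawning`, the sync handler deliberately skips reconciling the mismatch, as the "pending task (spawning). Skip." lines show.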
Oct  2 04:18:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 66 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 25 op/s
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.690 2 INFO nova.compute.manager [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Took 10.83 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.691 2 DEBUG nova.compute.manager [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.755 2 INFO nova.compute.manager [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Took 11.82 seconds to build instance.#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.770 2 DEBUG oslo_concurrency.lockutils [None req-d9839f23-daae-44c4-acbf-a09efa73268d 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "99244c26-8ba7-4081-ab88-22a769cad5b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.771 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "99244c26-8ba7-4081-ab88-22a769cad5b8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.771 2 INFO nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:18:56 np0005465604 nova_compute[260603]: 2025-10-02 08:18:56.772 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "99244c26-8ba7-4081-ab88-22a769cad5b8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:18:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:18:58 np0005465604 nova_compute[260603]: 2025-10-02 08:18:58.342 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Acquiring lock "99244c26-8ba7-4081-ab88-22a769cad5b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:18:58 np0005465604 nova_compute[260603]: 2025-10-02 08:18:58.344 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "99244c26-8ba7-4081-ab88-22a769cad5b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:18:58 np0005465604 nova_compute[260603]: 2025-10-02 08:18:58.344 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Acquiring lock "99244c26-8ba7-4081-ab88-22a769cad5b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:18:58 np0005465604 nova_compute[260603]: 2025-10-02 08:18:58.345 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "99244c26-8ba7-4081-ab88-22a769cad5b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:18:58 np0005465604 nova_compute[260603]: 2025-10-02 08:18:58.345 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "99244c26-8ba7-4081-ab88-22a769cad5b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:18:58 np0005465604 nova_compute[260603]: 2025-10-02 08:18:58.347 2 INFO nova.compute.manager [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Terminating instance#033[00m
Oct  2 04:18:58 np0005465604 nova_compute[260603]: 2025-10-02 08:18:58.349 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Acquiring lock "refresh_cache-99244c26-8ba7-4081-ab88-22a769cad5b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:18:58 np0005465604 nova_compute[260603]: 2025-10-02 08:18:58.350 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Acquired lock "refresh_cache-99244c26-8ba7-4081-ab88-22a769cad5b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:18:58 np0005465604 nova_compute[260603]: 2025-10-02 08:18:58.351 2 DEBUG nova.network.neutron [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:18:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 2.7 MiB/s wr, 131 op/s
Oct  2 04:18:58 np0005465604 nova_compute[260603]: 2025-10-02 08:18:58.781 2 DEBUG nova.network.neutron [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:18:59 np0005465604 podman[272165]: 2025-10-02 08:18:59.049094602 +0000 UTC m=+0.113886065 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller)
Oct  2 04:18:59 np0005465604 podman[272192]: 2025-10-02 08:18:59.131995747 +0000 UTC m=+0.045095932 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.267 2 DEBUG nova.network.neutron [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.289 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Releasing lock "refresh_cache-99244c26-8ba7-4081-ab88-22a769cad5b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.290 2 DEBUG nova.compute.manager [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:18:59 np0005465604 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct  2 04:18:59 np0005465604 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3.876s CPU time.
Oct  2 04:18:59 np0005465604 systemd-machined[214636]: Machine qemu-1-instance-00000001 terminated.
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.515 2 INFO nova.virt.libvirt.driver [-] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Instance destroyed successfully.#033[00m
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.516 2 DEBUG nova.objects.instance [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lazy-loading 'resources' on Instance uuid 99244c26-8ba7-4081-ab88-22a769cad5b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.887 2 INFO nova.virt.libvirt.driver [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Deleting instance files /var/lib/nova/instances/99244c26-8ba7-4081-ab88-22a769cad5b8_del#033[00m
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.888 2 INFO nova.virt.libvirt.driver [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Deletion of /var/lib/nova/instances/99244c26-8ba7-4081-ab88-22a769cad5b8_del complete#033[00m
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.978 2 DEBUG nova.virt.libvirt.host [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.979 2 INFO nova.virt.libvirt.host [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] UEFI support detected#033[00m
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.980 2 INFO nova.compute.manager [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.981 2 DEBUG oslo.service.loopingcall [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.981 2 DEBUG nova.compute.manager [-] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:18:59 np0005465604 nova_compute[260603]: 2025-10-02 08:18:59.981 2 DEBUG nova.network.neutron [-] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.073 2 DEBUG oslo_concurrency.processutils [None req-ad7f88d9-8810-4036-9a9f-d403dbc5c0d1 21384b2a48bf496a81dffc5572892b0a a29ec933419c4eb3b7f2a728e33ee6bc - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.096 2 DEBUG oslo_concurrency.processutils [None req-ad7f88d9-8810-4036-9a9f-d403dbc5c0d1 21384b2a48bf496a81dffc5572892b0a a29ec933419c4eb3b7f2a728e33ee6bc - - default default] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.141 2 DEBUG nova.network.neutron [-] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.163 2 DEBUG nova.network.neutron [-] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.189 2 INFO nova.compute.manager [-] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Took 0.21 seconds to deallocate network for instance.#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.253 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.254 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.301 2 DEBUG oslo_concurrency.processutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.4 MiB/s wr, 117 op/s
Oct  2 04:19:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028682711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.714 2 DEBUG oslo_concurrency.processutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.721 2 DEBUG nova.compute.provider_tree [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.757 2 ERROR nova.scheduler.client.report [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] [req-f43f652e-bea1-495d-bf42-0de24e8dcd20] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-f43f652e-bea1-495d-bf42-0de24e8dcd20"}]}#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.778 2 DEBUG nova.scheduler.client.report [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.797 2 DEBUG nova.scheduler.client.report [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.798 2 DEBUG nova.compute.provider_tree [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.818 2 DEBUG nova.scheduler.client.report [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.845 2 DEBUG nova.scheduler.client.report [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 04:19:00 np0005465604 nova_compute[260603]: 2025-10-02 08:19:00.881 2 DEBUG oslo_concurrency.processutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252973121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:01 np0005465604 nova_compute[260603]: 2025-10-02 08:19:01.321 2 DEBUG oslo_concurrency.processutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:01 np0005465604 nova_compute[260603]: 2025-10-02 08:19:01.326 2 DEBUG nova.compute.provider_tree [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:19:01 np0005465604 nova_compute[260603]: 2025-10-02 08:19:01.370 2 DEBUG nova.scheduler.client.report [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Updated inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  2 04:19:01 np0005465604 nova_compute[260603]: 2025-10-02 08:19:01.371 2 DEBUG nova.compute.provider_tree [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Updating resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  2 04:19:01 np0005465604 nova_compute[260603]: 2025-10-02 08:19:01.371 2 DEBUG nova.compute.provider_tree [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:19:01 np0005465604 nova_compute[260603]: 2025-10-02 08:19:01.388 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:01 np0005465604 nova_compute[260603]: 2025-10-02 08:19:01.442 2 INFO nova.scheduler.client.report [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Deleted allocations for instance 99244c26-8ba7-4081-ab88-22a769cad5b8#033[00m
Oct  2 04:19:01 np0005465604 nova_compute[260603]: 2025-10-02 08:19:01.497 2 DEBUG oslo_concurrency.lockutils [None req-9faf8842-d5a6-4ce4-855c-d9a2ce746b6e 6d603a4db7e644a2b379dbbdedd0a93c 6aa2cb5080d8450c833bc37f523d5bf6 - - default default] Lock "99244c26-8ba7-4081-ab88-22a769cad5b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct  2 04:19:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct  2 04:19:01 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct  2 04:19:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 66 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Oct  2 04:19:03 np0005465604 nova_compute[260603]: 2025-10-02 08:19:03.573 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:19:03 np0005465604 nova_compute[260603]: 2025-10-02 08:19:03.574 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:19:03 np0005465604 nova_compute[260603]: 2025-10-02 08:19:03.575 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:19:03 np0005465604 nova_compute[260603]: 2025-10-02 08:19:03.596 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:19:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 808 KiB/s wr, 139 op/s
Oct  2 04:19:04 np0005465604 podman[272280]: 2025-10-02 08:19:04.996677678 +0000 UTC m=+0.065995796 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 04:19:05 np0005465604 nova_compute[260603]: 2025-10-02 08:19:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:19:05 np0005465604 nova_compute[260603]: 2025-10-02 08:19:05.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:19:06 np0005465604 nova_compute[260603]: 2025-10-02 08:19:06.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:19:06 np0005465604 nova_compute[260603]: 2025-10-02 08:19:06.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:19:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 808 KiB/s wr, 139 op/s
Oct  2 04:19:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:19:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:19:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:08 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:08 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:08 np0005465604 nova_compute[260603]: 2025-10-02 08:19:08.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:19:08 np0005465604 nova_compute[260603]: 2025-10-02 08:19:08.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:19:08 np0005465604 nova_compute[260603]: 2025-10-02 08:19:08.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:08 np0005465604 nova_compute[260603]: 2025-10-02 08:19:08.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:08 np0005465604 nova_compute[260603]: 2025-10-02 08:19:08.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:08 np0005465604 nova_compute[260603]: 2025-10-02 08:19:08.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:19:08 np0005465604 nova_compute[260603]: 2025-10-02 08:19:08.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 774 KiB/s rd, 1.4 KiB/s wr, 55 op/s
Oct  2 04:19:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1923220760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.000 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:09 np0005465604 podman[272716]: 2025-10-02 08:19:09.19286949 +0000 UTC m=+0.064620894 container create 1a12edb60e5e969dfdeb1b98fb9c9167e4b2fdce6c191687ee82805de538e946 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 04:19:09 np0005465604 systemd[1]: Started libpod-conmon-1a12edb60e5e969dfdeb1b98fb9c9167e4b2fdce6c191687ee82805de538e946.scope.
Oct  2 04:19:09 np0005465604 podman[272716]: 2025-10-02 08:19:09.163051146 +0000 UTC m=+0.034802630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.269 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.271 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5076MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.271 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.273 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:19:09 np0005465604 podman[272716]: 2025-10-02 08:19:09.301916633 +0000 UTC m=+0.173668077 container init 1a12edb60e5e969dfdeb1b98fb9c9167e4b2fdce6c191687ee82805de538e946 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 04:19:09 np0005465604 podman[272716]: 2025-10-02 08:19:09.310530282 +0000 UTC m=+0.182281696 container start 1a12edb60e5e969dfdeb1b98fb9c9167e4b2fdce6c191687ee82805de538e946 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:19:09 np0005465604 podman[272716]: 2025-10-02 08:19:09.314089254 +0000 UTC m=+0.185840668 container attach 1a12edb60e5e969dfdeb1b98fb9c9167e4b2fdce6c191687ee82805de538e946 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 04:19:09 np0005465604 determined_wright[272733]: 167 167
Oct  2 04:19:09 np0005465604 systemd[1]: libpod-1a12edb60e5e969dfdeb1b98fb9c9167e4b2fdce6c191687ee82805de538e946.scope: Deactivated successfully.
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.349 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.350 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.387 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:09 np0005465604 podman[272738]: 2025-10-02 08:19:09.391690023 +0000 UTC m=+0.045559498 container died 1a12edb60e5e969dfdeb1b98fb9c9167e4b2fdce6c191687ee82805de538e946 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:19:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-38d78d46a4823b11a0a22df8f05cfcc7efe383b82d8a1d426192d7d7f97785a5-merged.mount: Deactivated successfully.
Oct  2 04:19:09 np0005465604 podman[272738]: 2025-10-02 08:19:09.429791615 +0000 UTC m=+0.083661050 container remove 1a12edb60e5e969dfdeb1b98fb9c9167e4b2fdce6c191687ee82805de538e946 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:19:09 np0005465604 systemd[1]: libpod-conmon-1a12edb60e5e969dfdeb1b98fb9c9167e4b2fdce6c191687ee82805de538e946.scope: Deactivated successfully.
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.551 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.552 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.572 2 DEBUG nova.compute.manager [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.644 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:09 np0005465604 podman[272780]: 2025-10-02 08:19:09.671586653 +0000 UTC m=+0.053523947 container create 38a4672f3aa817de216409b81514a738e33509b0c6d43908ef6061632f0ab98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_turing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:19:09 np0005465604 systemd[1]: Started libpod-conmon-38a4672f3aa817de216409b81514a738e33509b0c6d43908ef6061632f0ab98f.scope.
Oct  2 04:19:09 np0005465604 podman[272780]: 2025-10-02 08:19:09.653027712 +0000 UTC m=+0.034965016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:19:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:19:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07afd07e2090c89796dba10e01430403286b364b69d23242e9e0dc6509b6a03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07afd07e2090c89796dba10e01430403286b364b69d23242e9e0dc6509b6a03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07afd07e2090c89796dba10e01430403286b364b69d23242e9e0dc6509b6a03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07afd07e2090c89796dba10e01430403286b364b69d23242e9e0dc6509b6a03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:09 np0005465604 podman[272780]: 2025-10-02 08:19:09.796982057 +0000 UTC m=+0.178919431 container init 38a4672f3aa817de216409b81514a738e33509b0c6d43908ef6061632f0ab98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_turing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:19:09 np0005465604 podman[272780]: 2025-10-02 08:19:09.808468557 +0000 UTC m=+0.190405851 container start 38a4672f3aa817de216409b81514a738e33509b0c6d43908ef6061632f0ab98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_turing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:19:09 np0005465604 podman[272780]: 2025-10-02 08:19:09.81370709 +0000 UTC m=+0.195644424 container attach 38a4672f3aa817de216409b81514a738e33509b0c6d43908ef6061632f0ab98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:19:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/205378303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.869 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.883 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.903 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.922 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.923 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.923 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.931 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:19:09 np0005465604 nova_compute[260603]: 2025-10-02 08:19:09.932 2 INFO nova.compute.claims [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.041 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3066835532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.520 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.528 2 DEBUG nova.compute.provider_tree [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.546 2 DEBUG nova.scheduler.client.report [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.571 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.573 2 DEBUG nova.compute.manager [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.625 2 DEBUG nova.compute.manager [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.626 2 DEBUG nova.network.neutron [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.648 2 INFO nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.671 2 DEBUG nova.compute.manager [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:19:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 774 KiB/s rd, 1.4 KiB/s wr, 55 op/s
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.789 2 DEBUG nova.compute.manager [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.792 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.793 2 INFO nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Creating image(s)#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.827 2 DEBUG nova.storage.rbd_utils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.855 2 DEBUG nova.storage.rbd_utils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.889 2 DEBUG nova.storage.rbd_utils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.895 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.924 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.925 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.967 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.969 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.970 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.971 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:10 np0005465604 nova_compute[260603]: 2025-10-02 08:19:10.997 2 DEBUG nova.storage.rbd_utils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.001 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.259 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.318 2 DEBUG nova.storage.rbd_utils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] resizing rbd image 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:19:11 np0005465604 gracious_turing[272798]: [
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:    {
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:        "available": false,
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:        "ceph_device": false,
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:        "lsm_data": {},
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:        "lvs": [],
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:        "path": "/dev/sr0",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:        "rejected_reasons": [
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "Has a FileSystem",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "Insufficient space (<5GB)"
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:        ],
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:        "sys_api": {
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "actuators": null,
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "device_nodes": "sr0",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "devname": "sr0",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "human_readable_size": "482.00 KB",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "id_bus": "ata",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "model": "QEMU DVD-ROM",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "nr_requests": "2",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "parent": "/dev/sr0",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "partitions": {},
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "path": "/dev/sr0",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "removable": "1",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "rev": "2.5+",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "ro": "0",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "rotational": "0",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "sas_address": "",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "sas_device_handle": "",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "scheduler_mode": "mq-deadline",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "sectors": 0,
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "sectorsize": "2048",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "size": 493568.0,
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "support_discard": "2048",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "type": "disk",
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:            "vendor": "QEMU"
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:        }
Oct  2 04:19:11 np0005465604 gracious_turing[272798]:    }
Oct  2 04:19:11 np0005465604 gracious_turing[272798]: ]
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.412 2 DEBUG nova.network.neutron [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.413 2 DEBUG nova.compute.manager [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.419 2 DEBUG nova.objects.instance [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lazy-loading 'migration_context' on Instance uuid 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:11 np0005465604 systemd[1]: libpod-38a4672f3aa817de216409b81514a738e33509b0c6d43908ef6061632f0ab98f.scope: Deactivated successfully.
Oct  2 04:19:11 np0005465604 podman[272780]: 2025-10-02 08:19:11.426828567 +0000 UTC m=+1.808765861 container died 38a4672f3aa817de216409b81514a738e33509b0c6d43908ef6061632f0ab98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_turing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 04:19:11 np0005465604 systemd[1]: libpod-38a4672f3aa817de216409b81514a738e33509b0c6d43908ef6061632f0ab98f.scope: Consumed 1.620s CPU time.
Oct  2 04:19:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d07afd07e2090c89796dba10e01430403286b364b69d23242e9e0dc6509b6a03-merged.mount: Deactivated successfully.
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.465 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.466 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Ensure instance console log exists: /var/lib/nova/instances/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.467 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.468 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.469 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.472 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.483 2 WARNING nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:19:11 np0005465604 podman[272780]: 2025-10-02 08:19:11.488714655 +0000 UTC m=+1.870651959 container remove 38a4672f3aa817de216409b81514a738e33509b0c6d43908ef6061632f0ab98f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.489 2 DEBUG nova.virt.libvirt.host [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.491 2 DEBUG nova.virt.libvirt.host [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.496 2 DEBUG nova.virt.libvirt.host [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.497 2 DEBUG nova.virt.libvirt.host [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.498 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.498 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.498 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.499 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.499 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.499 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.499 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.499 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.500 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.500 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.500 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.500 2 DEBUG nova.virt.hardware [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:19:11 np0005465604 systemd[1]: libpod-conmon-38a4672f3aa817de216409b81514a738e33509b0c6d43908ef6061632f0ab98f.scope: Deactivated successfully.
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.503 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:19:11 np0005465604 podman[274857]: 2025-10-02 08:19:11.557876129 +0000 UTC m=+0.087849740 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:11 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 462c09c2-fe75-4802-972c-6a37860291e7 does not exist
Oct  2 04:19:11 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f56fb4e8-6681-4e28-8cbb-801a1b0f517d does not exist
Oct  2 04:19:11 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e731da1f-2388-4a16-bec0-76b46ad81c71 does not exist
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328655569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:11 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.961 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:11.999 2 DEBUG nova.storage.rbd_utils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.005 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:12 np0005465604 podman[275085]: 2025-10-02 08:19:12.332991298 +0000 UTC m=+0.068436182 container create 3b9a33d676e06ec153af6f39376e7f31466e92e57961a8c9f1dfb5930d9c93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pasteur, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 04:19:12 np0005465604 systemd[1]: Started libpod-conmon-3b9a33d676e06ec153af6f39376e7f31466e92e57961a8c9f1dfb5930d9c93f5.scope.
Oct  2 04:19:12 np0005465604 podman[275085]: 2025-10-02 08:19:12.305080805 +0000 UTC m=+0.040525729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:19:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:19:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2387634270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:12 np0005465604 podman[275085]: 2025-10-02 08:19:12.437590452 +0000 UTC m=+0.173035396 container init 3b9a33d676e06ec153af6f39376e7f31466e92e57961a8c9f1dfb5930d9c93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pasteur, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:19:12 np0005465604 podman[275085]: 2025-10-02 08:19:12.447829233 +0000 UTC m=+0.183274117 container start 3b9a33d676e06ec153af6f39376e7f31466e92e57961a8c9f1dfb5930d9c93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pasteur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:19:12 np0005465604 podman[275085]: 2025-10-02 08:19:12.451808508 +0000 UTC m=+0.187253392 container attach 3b9a33d676e06ec153af6f39376e7f31466e92e57961a8c9f1dfb5930d9c93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:19:12 np0005465604 keen_pasteur[275102]: 167 167
Oct  2 04:19:12 np0005465604 podman[275085]: 2025-10-02 08:19:12.457180355 +0000 UTC m=+0.192625209 container died 3b9a33d676e06ec153af6f39376e7f31466e92e57961a8c9f1dfb5930d9c93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.455 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:12 np0005465604 systemd[1]: libpod-3b9a33d676e06ec153af6f39376e7f31466e92e57961a8c9f1dfb5930d9c93f5.scope: Deactivated successfully.
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.461 2 DEBUG nova.objects.instance [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.486 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  <uuid>5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1</uuid>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  <name>instance-00000002</name>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-990143244</nova:name>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:19:11</nova:creationTime>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:        <nova:user uuid="2bbb3d2fe7dc4d0cbce173e3b3d890ef">tempest-DeleteServersAdminTestJSON-398237668-project-member</nova:user>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:        <nova:project uuid="475fe8421bdb460d970e52b4c47a8120">tempest-DeleteServersAdminTestJSON-398237668</nova:project>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <entry name="serial">5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1</entry>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <entry name="uuid">5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1</entry>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk.config">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1/console.log" append="off"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:19:12 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:19:12 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:19:12 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:19:12 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:19:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e43e3cd80a2f2ed63d488a2819a57bcaba247fccbceca2368fba5d8f8a6fa762-merged.mount: Deactivated successfully.
Oct  2 04:19:12 np0005465604 podman[275085]: 2025-10-02 08:19:12.513042724 +0000 UTC m=+0.248487618 container remove 3b9a33d676e06ec153af6f39376e7f31466e92e57961a8c9f1dfb5930d9c93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.516 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:19:12 np0005465604 systemd[1]: libpod-conmon-3b9a33d676e06ec153af6f39376e7f31466e92e57961a8c9f1dfb5930d9c93f5.scope: Deactivated successfully.
Oct  2 04:19:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:19:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.562 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.562 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.563 2 INFO nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Using config drive#033[00m
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.591 2 DEBUG nova.storage.rbd_utils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 76 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 719 KiB/s rd, 1.3 MiB/s wr, 54 op/s
Oct  2 04:19:12 np0005465604 podman[275145]: 2025-10-02 08:19:12.788957609 +0000 UTC m=+0.076403901 container create e1598e30de4fc0fa66c315ae7a4a476beb541a860972d577f9606cb2757e70f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_albattani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.810 2 INFO nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Creating config drive at /var/lib/nova/instances/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1/disk.config#033[00m
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.820 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc9ytrxuv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:12 np0005465604 systemd[1]: Started libpod-conmon-e1598e30de4fc0fa66c315ae7a4a476beb541a860972d577f9606cb2757e70f9.scope.
Oct  2 04:19:12 np0005465604 podman[275145]: 2025-10-02 08:19:12.757963249 +0000 UTC m=+0.045409561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:19:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:19:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8317ffc8a8fe1e8b46ebc504d186a19ffb42f74fa9627f498bbe629bc3359e7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8317ffc8a8fe1e8b46ebc504d186a19ffb42f74fa9627f498bbe629bc3359e7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8317ffc8a8fe1e8b46ebc504d186a19ffb42f74fa9627f498bbe629bc3359e7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8317ffc8a8fe1e8b46ebc504d186a19ffb42f74fa9627f498bbe629bc3359e7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8317ffc8a8fe1e8b46ebc504d186a19ffb42f74fa9627f498bbe629bc3359e7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:12 np0005465604 podman[275145]: 2025-10-02 08:19:12.91489448 +0000 UTC m=+0.202340752 container init e1598e30de4fc0fa66c315ae7a4a476beb541a860972d577f9606cb2757e70f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:19:12 np0005465604 podman[275145]: 2025-10-02 08:19:12.939576104 +0000 UTC m=+0.227022386 container start e1598e30de4fc0fa66c315ae7a4a476beb541a860972d577f9606cb2757e70f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_albattani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:19:12 np0005465604 podman[275145]: 2025-10-02 08:19:12.943715013 +0000 UTC m=+0.231161305 container attach e1598e30de4fc0fa66c315ae7a4a476beb541a860972d577f9606cb2757e70f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:19:12 np0005465604 nova_compute[260603]: 2025-10-02 08:19:12.966 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc9ytrxuv" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:13 np0005465604 nova_compute[260603]: 2025-10-02 08:19:13.011 2 DEBUG nova.storage.rbd_utils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:13 np0005465604 nova_compute[260603]: 2025-10-02 08:19:13.016 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1/disk.config 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:13 np0005465604 nova_compute[260603]: 2025-10-02 08:19:13.202 2 DEBUG oslo_concurrency.processutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1/disk.config 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:13 np0005465604 nova_compute[260603]: 2025-10-02 08:19:13.204 2 INFO nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Deleting local config drive /var/lib/nova/instances/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1/disk.config because it was imported into RBD.#033[00m
Oct  2 04:19:13 np0005465604 systemd-machined[214636]: New machine qemu-2-instance-00000002.
Oct  2 04:19:13 np0005465604 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct  2 04:19:14 np0005465604 goofy_albattani[275163]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:19:14 np0005465604 goofy_albattani[275163]: --> relative data size: 1.0
Oct  2 04:19:14 np0005465604 goofy_albattani[275163]: --> All data devices are unavailable
Oct  2 04:19:14 np0005465604 systemd[1]: libpod-e1598e30de4fc0fa66c315ae7a4a476beb541a860972d577f9606cb2757e70f9.scope: Deactivated successfully.
Oct  2 04:19:14 np0005465604 systemd[1]: libpod-e1598e30de4fc0fa66c315ae7a4a476beb541a860972d577f9606cb2757e70f9.scope: Consumed 1.175s CPU time.
Oct  2 04:19:14 np0005465604 podman[275145]: 2025-10-02 08:19:14.210923614 +0000 UTC m=+1.498369966 container died e1598e30de4fc0fa66c315ae7a4a476beb541a860972d577f9606cb2757e70f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_albattani, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 04:19:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8317ffc8a8fe1e8b46ebc504d186a19ffb42f74fa9627f498bbe629bc3359e7c-merged.mount: Deactivated successfully.
Oct  2 04:19:14 np0005465604 podman[275145]: 2025-10-02 08:19:14.287233742 +0000 UTC m=+1.574680024 container remove e1598e30de4fc0fa66c315ae7a4a476beb541a860972d577f9606cb2757e70f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_albattani, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.299 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393154.2986665, 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.303 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:19:14 np0005465604 systemd[1]: libpod-conmon-e1598e30de4fc0fa66c315ae7a4a476beb541a860972d577f9606cb2757e70f9.scope: Deactivated successfully.
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.308 2 DEBUG nova.compute.manager [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.308 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.315 2 INFO nova.virt.libvirt.driver [-] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Instance spawned successfully.#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.315 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.335 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.338 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.338 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.339 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.339 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.340 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.340 2 DEBUG nova.virt.libvirt.driver [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.345 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.382 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.382 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393154.3001678, 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.383 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] VM Started (Lifecycle Event)#033[00m
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.407 2 INFO nova.compute.manager [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Took 3.62 seconds to spawn the instance on the hypervisor.
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.408 2 DEBUG nova.compute.manager [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.412 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.419 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.457 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.500 2 INFO nova.compute.manager [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Took 4.88 seconds to build instance.
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.514 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393139.5124552, 99244c26-8ba7-4081-ab88-22a769cad5b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.514 2 INFO nova.compute.manager [-] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] VM Stopped (Lifecycle Event)
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.525 2 DEBUG oslo_concurrency.lockutils [None req-0b196d86-edae-4db3-982c-0ce671c05d20 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:14 np0005465604 nova_compute[260603]: 2025-10-02 08:19:14.529 2 DEBUG nova.compute.manager [None req-7cf85db3-76ff-44fa-965c-4479e5fd9f7f - - - - - -] [instance: 99244c26-8ba7-4081-ab88-22a769cad5b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:19:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct  2 04:19:15 np0005465604 podman[275442]: 2025-10-02 08:19:15.051047288 +0000 UTC m=+0.047700504 container create c109cd75a8087a2d3328cd7f45ecff42cc48e309957d40f7ced7e12154b476f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_volhard, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 04:19:15 np0005465604 systemd[1]: Started libpod-conmon-c109cd75a8087a2d3328cd7f45ecff42cc48e309957d40f7ced7e12154b476f9.scope.
Oct  2 04:19:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:19:15 np0005465604 podman[275442]: 2025-10-02 08:19:15.029297737 +0000 UTC m=+0.025951013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:19:15 np0005465604 podman[275442]: 2025-10-02 08:19:15.134386596 +0000 UTC m=+0.131039822 container init c109cd75a8087a2d3328cd7f45ecff42cc48e309957d40f7ced7e12154b476f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_volhard, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 04:19:15 np0005465604 podman[275442]: 2025-10-02 08:19:15.144963227 +0000 UTC m=+0.141616473 container start c109cd75a8087a2d3328cd7f45ecff42cc48e309957d40f7ced7e12154b476f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 04:19:15 np0005465604 podman[275442]: 2025-10-02 08:19:15.149017633 +0000 UTC m=+0.145670889 container attach c109cd75a8087a2d3328cd7f45ecff42cc48e309957d40f7ced7e12154b476f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_volhard, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 04:19:15 np0005465604 angry_volhard[275459]: 167 167
Oct  2 04:19:15 np0005465604 systemd[1]: libpod-c109cd75a8087a2d3328cd7f45ecff42cc48e309957d40f7ced7e12154b476f9.scope: Deactivated successfully.
Oct  2 04:19:15 np0005465604 podman[275442]: 2025-10-02 08:19:15.15241333 +0000 UTC m=+0.149066577 container died c109cd75a8087a2d3328cd7f45ecff42cc48e309957d40f7ced7e12154b476f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:19:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ffea2f524a94d13a426f6d9379dcc95a22261414ddf0c79542eb8f980d828372-merged.mount: Deactivated successfully.
Oct  2 04:19:15 np0005465604 podman[275442]: 2025-10-02 08:19:15.19427505 +0000 UTC m=+0.190928256 container remove c109cd75a8087a2d3328cd7f45ecff42cc48e309957d40f7ced7e12154b476f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_volhard, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 04:19:15 np0005465604 systemd[1]: libpod-conmon-c109cd75a8087a2d3328cd7f45ecff42cc48e309957d40f7ced7e12154b476f9.scope: Deactivated successfully.
Oct  2 04:19:15 np0005465604 podman[275484]: 2025-10-02 08:19:15.393141495 +0000 UTC m=+0.073127481 container create 0ae43ac98d4b10059bd83e36e110d51851e46fcc92af2facc8cb0cf2d2eb77e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 04:19:15 np0005465604 systemd[1]: Started libpod-conmon-0ae43ac98d4b10059bd83e36e110d51851e46fcc92af2facc8cb0cf2d2eb77e2.scope.
Oct  2 04:19:15 np0005465604 podman[275484]: 2025-10-02 08:19:15.365057295 +0000 UTC m=+0.045043371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:19:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:19:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96588044d207c6319a9d30140cb4473752e2827e9d9b7da09efe45fa77f6e598/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96588044d207c6319a9d30140cb4473752e2827e9d9b7da09efe45fa77f6e598/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96588044d207c6319a9d30140cb4473752e2827e9d9b7da09efe45fa77f6e598/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96588044d207c6319a9d30140cb4473752e2827e9d9b7da09efe45fa77f6e598/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:15 np0005465604 podman[275484]: 2025-10-02 08:19:15.487401275 +0000 UTC m=+0.167387271 container init 0ae43ac98d4b10059bd83e36e110d51851e46fcc92af2facc8cb0cf2d2eb77e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:19:15 np0005465604 podman[275484]: 2025-10-02 08:19:15.497383627 +0000 UTC m=+0.177369613 container start 0ae43ac98d4b10059bd83e36e110d51851e46fcc92af2facc8cb0cf2d2eb77e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:19:15 np0005465604 podman[275484]: 2025-10-02 08:19:15.500113303 +0000 UTC m=+0.180099329 container attach 0ae43ac98d4b10059bd83e36e110d51851e46fcc92af2facc8cb0cf2d2eb77e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:19:15 np0005465604 nova_compute[260603]: 2025-10-02 08:19:15.700 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Acquiring lock "5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:15 np0005465604 nova_compute[260603]: 2025-10-02 08:19:15.701 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Lock "5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:15 np0005465604 nova_compute[260603]: 2025-10-02 08:19:15.701 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Acquiring lock "5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:15 np0005465604 nova_compute[260603]: 2025-10-02 08:19:15.701 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Lock "5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:15 np0005465604 nova_compute[260603]: 2025-10-02 08:19:15.701 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Lock "5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:15 np0005465604 nova_compute[260603]: 2025-10-02 08:19:15.703 2 INFO nova.compute.manager [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Terminating instance
Oct  2 04:19:15 np0005465604 nova_compute[260603]: 2025-10-02 08:19:15.703 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Acquiring lock "refresh_cache-5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:19:15 np0005465604 nova_compute[260603]: 2025-10-02 08:19:15.704 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Acquired lock "refresh_cache-5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:19:15 np0005465604 nova_compute[260603]: 2025-10-02 08:19:15.704 2 DEBUG nova.network.neutron [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:19:16 np0005465604 nova_compute[260603]: 2025-10-02 08:19:16.066 2 DEBUG nova.network.neutron [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]: {
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:    "0": [
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:        {
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "devices": [
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "/dev/loop3"
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            ],
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_name": "ceph_lv0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_size": "21470642176",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "name": "ceph_lv0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "tags": {
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.cluster_name": "ceph",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.crush_device_class": "",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.encrypted": "0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.osd_id": "0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.type": "block",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.vdo": "0"
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            },
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "type": "block",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "vg_name": "ceph_vg0"
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:        }
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:    ],
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:    "1": [
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:        {
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "devices": [
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "/dev/loop4"
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            ],
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_name": "ceph_lv1",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_size": "21470642176",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "name": "ceph_lv1",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "tags": {
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.cluster_name": "ceph",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.crush_device_class": "",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.encrypted": "0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.osd_id": "1",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.type": "block",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.vdo": "0"
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            },
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "type": "block",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "vg_name": "ceph_vg1"
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:        }
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:    ],
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:    "2": [
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:        {
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "devices": [
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "/dev/loop5"
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            ],
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_name": "ceph_lv2",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_size": "21470642176",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "name": "ceph_lv2",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "tags": {
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.cluster_name": "ceph",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.crush_device_class": "",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.encrypted": "0",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.osd_id": "2",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.type": "block",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:                "ceph.vdo": "0"
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            },
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "type": "block",
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:            "vg_name": "ceph_vg2"
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:        }
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]:    ]
Oct  2 04:19:16 np0005465604 sharp_cartwright[275500]: }
Oct  2 04:19:16 np0005465604 nova_compute[260603]: 2025-10-02 08:19:16.320 2 DEBUG nova.network.neutron [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:19:16 np0005465604 nova_compute[260603]: 2025-10-02 08:19:16.336 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Releasing lock "refresh_cache-5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:19:16 np0005465604 nova_compute[260603]: 2025-10-02 08:19:16.337 2 DEBUG nova.compute.manager [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:19:16 np0005465604 systemd[1]: libpod-0ae43ac98d4b10059bd83e36e110d51851e46fcc92af2facc8cb0cf2d2eb77e2.scope: Deactivated successfully.
Oct  2 04:19:16 np0005465604 podman[275484]: 2025-10-02 08:19:16.34962294 +0000 UTC m=+1.029608966 container died 0ae43ac98d4b10059bd83e36e110d51851e46fcc92af2facc8cb0cf2d2eb77e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:19:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-96588044d207c6319a9d30140cb4473752e2827e9d9b7da09efe45fa77f6e598-merged.mount: Deactivated successfully.
Oct  2 04:19:16 np0005465604 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct  2 04:19:16 np0005465604 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 3.014s CPU time.
Oct  2 04:19:16 np0005465604 systemd-machined[214636]: Machine qemu-2-instance-00000002 terminated.
Oct  2 04:19:16 np0005465604 podman[275484]: 2025-10-02 08:19:16.43046731 +0000 UTC m=+1.110453306 container remove 0ae43ac98d4b10059bd83e36e110d51851e46fcc92af2facc8cb0cf2d2eb77e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cartwright, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:19:16 np0005465604 systemd[1]: libpod-conmon-0ae43ac98d4b10059bd83e36e110d51851e46fcc92af2facc8cb0cf2d2eb77e2.scope: Deactivated successfully.
Oct  2 04:19:16 np0005465604 nova_compute[260603]: 2025-10-02 08:19:16.568 2 INFO nova.virt.libvirt.driver [-] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Instance destroyed successfully.
Oct  2 04:19:16 np0005465604 nova_compute[260603]: 2025-10-02 08:19:16.569 2 DEBUG nova.objects.instance [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Lazy-loading 'resources' on Instance uuid 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:19:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:19:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:16 np0005465604 nova_compute[260603]: 2025-10-02 08:19:16.994 2 INFO nova.virt.libvirt.driver [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Deleting instance files /var/lib/nova/instances/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_del#033[00m
Oct  2 04:19:16 np0005465604 nova_compute[260603]: 2025-10-02 08:19:16.995 2 INFO nova.virt.libvirt.driver [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Deletion of /var/lib/nova/instances/5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1_del complete#033[00m
Oct  2 04:19:17 np0005465604 nova_compute[260603]: 2025-10-02 08:19:17.064 2 INFO nova.compute.manager [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:19:17 np0005465604 nova_compute[260603]: 2025-10-02 08:19:17.065 2 DEBUG oslo.service.loopingcall [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:19:17 np0005465604 nova_compute[260603]: 2025-10-02 08:19:17.065 2 DEBUG nova.compute.manager [-] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:19:17 np0005465604 nova_compute[260603]: 2025-10-02 08:19:17.065 2 DEBUG nova.network.neutron [-] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:19:17 np0005465604 podman[275682]: 2025-10-02 08:19:17.343269249 +0000 UTC m=+0.065318086 container create e20e844bae74eb0004519400abafc21a38bbd437c06ebe931965990e95d95141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhabha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:19:17 np0005465604 systemd[1]: Started libpod-conmon-e20e844bae74eb0004519400abafc21a38bbd437c06ebe931965990e95d95141.scope.
Oct  2 04:19:17 np0005465604 podman[275682]: 2025-10-02 08:19:17.316892684 +0000 UTC m=+0.038941571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:19:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:19:17 np0005465604 podman[275682]: 2025-10-02 08:19:17.449174933 +0000 UTC m=+0.171223820 container init e20e844bae74eb0004519400abafc21a38bbd437c06ebe931965990e95d95141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 04:19:17 np0005465604 podman[275682]: 2025-10-02 08:19:17.458236947 +0000 UTC m=+0.180285744 container start e20e844bae74eb0004519400abafc21a38bbd437c06ebe931965990e95d95141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:19:17 np0005465604 podman[275682]: 2025-10-02 08:19:17.461667575 +0000 UTC m=+0.183716402 container attach e20e844bae74eb0004519400abafc21a38bbd437c06ebe931965990e95d95141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhabha, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:19:17 np0005465604 great_bhabha[275699]: 167 167
Oct  2 04:19:17 np0005465604 systemd[1]: libpod-e20e844bae74eb0004519400abafc21a38bbd437c06ebe931965990e95d95141.scope: Deactivated successfully.
Oct  2 04:19:17 np0005465604 podman[275682]: 2025-10-02 08:19:17.467640502 +0000 UTC m=+0.189689329 container died e20e844bae74eb0004519400abafc21a38bbd437c06ebe931965990e95d95141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhabha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:19:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b9631882b508d2b8045b49e1c8fbfebbcf0c8fe472ee2ec5a4d51107bda0f46e-merged.mount: Deactivated successfully.
Oct  2 04:19:17 np0005465604 podman[275682]: 2025-10-02 08:19:17.509047668 +0000 UTC m=+0.231096465 container remove e20e844bae74eb0004519400abafc21a38bbd437c06ebe931965990e95d95141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhabha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:19:17 np0005465604 systemd[1]: libpod-conmon-e20e844bae74eb0004519400abafc21a38bbd437c06ebe931965990e95d95141.scope: Deactivated successfully.
Oct  2 04:19:17 np0005465604 podman[275724]: 2025-10-02 08:19:17.716614664 +0000 UTC m=+0.040935903 container create 80a9aeb4acae9dc63ff22fb55bce768025080d9a68890829e4a38b600dced212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 04:19:17 np0005465604 systemd[1]: Started libpod-conmon-80a9aeb4acae9dc63ff22fb55bce768025080d9a68890829e4a38b600dced212.scope.
Oct  2 04:19:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:19:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51059baa44fe4db6e93a608b9be703436626da452df98e289b6321b5932353e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51059baa44fe4db6e93a608b9be703436626da452df98e289b6321b5932353e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51059baa44fe4db6e93a608b9be703436626da452df98e289b6321b5932353e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51059baa44fe4db6e93a608b9be703436626da452df98e289b6321b5932353e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:17 np0005465604 podman[275724]: 2025-10-02 08:19:17.774721553 +0000 UTC m=+0.099042822 container init 80a9aeb4acae9dc63ff22fb55bce768025080d9a68890829e4a38b600dced212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamport, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:19:17 np0005465604 podman[275724]: 2025-10-02 08:19:17.781607548 +0000 UTC m=+0.105928797 container start 80a9aeb4acae9dc63ff22fb55bce768025080d9a68890829e4a38b600dced212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:19:17 np0005465604 podman[275724]: 2025-10-02 08:19:17.785090767 +0000 UTC m=+0.109412046 container attach 80a9aeb4acae9dc63ff22fb55bce768025080d9a68890829e4a38b600dced212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamport, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:19:17 np0005465604 podman[275724]: 2025-10-02 08:19:17.701735608 +0000 UTC m=+0.026056867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:19:18 np0005465604 nova_compute[260603]: 2025-10-02 08:19:18.331 2 DEBUG nova.network.neutron [-] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:19:18 np0005465604 nova_compute[260603]: 2025-10-02 08:19:18.446 2 DEBUG nova.network.neutron [-] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:19:18 np0005465604 nova_compute[260603]: 2025-10-02 08:19:18.461 2 INFO nova.compute.manager [-] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Took 1.40 seconds to deallocate network for instance.#033[00m
Oct  2 04:19:18 np0005465604 nova_compute[260603]: 2025-10-02 08:19:18.503 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:18 np0005465604 nova_compute[260603]: 2025-10-02 08:19:18.504 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:18 np0005465604 nova_compute[260603]: 2025-10-02 08:19:18.561 2 DEBUG oslo_concurrency.processutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]: {
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "osd_id": 2,
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "type": "bluestore"
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:    },
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "osd_id": 1,
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "type": "bluestore"
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:    },
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "osd_id": 0,
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:        "type": "bluestore"
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]:    }
Oct  2 04:19:18 np0005465604 gracious_lamport[275741]: }
Oct  2 04:19:18 np0005465604 systemd[1]: libpod-80a9aeb4acae9dc63ff22fb55bce768025080d9a68890829e4a38b600dced212.scope: Deactivated successfully.
Oct  2 04:19:18 np0005465604 systemd[1]: libpod-80a9aeb4acae9dc63ff22fb55bce768025080d9a68890829e4a38b600dced212.scope: Consumed 1.025s CPU time.
Oct  2 04:19:18 np0005465604 conmon[275741]: conmon 80a9aeb4acae9dc63ff2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80a9aeb4acae9dc63ff22fb55bce768025080d9a68890829e4a38b600dced212.scope/container/memory.events
Oct  2 04:19:18 np0005465604 podman[275724]: 2025-10-02 08:19:18.809372644 +0000 UTC m=+1.133693893 container died 80a9aeb4acae9dc63ff22fb55bce768025080d9a68890829e4a38b600dced212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamport, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 04:19:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-51059baa44fe4db6e93a608b9be703436626da452df98e289b6321b5932353e2-merged.mount: Deactivated successfully.
Oct  2 04:19:18 np0005465604 podman[275724]: 2025-10-02 08:19:18.866615026 +0000 UTC m=+1.190936275 container remove 80a9aeb4acae9dc63ff22fb55bce768025080d9a68890829e4a38b600dced212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:19:18 np0005465604 systemd[1]: libpod-conmon-80a9aeb4acae9dc63ff22fb55bce768025080d9a68890829e4a38b600dced212.scope: Deactivated successfully.
Oct  2 04:19:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:19:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:19:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:18 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9d1e139e-2b04-483d-9c60-3ddf925ef8be does not exist
Oct  2 04:19:18 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b5c97b99-b269-4948-8e3b-9277e6b07f74 does not exist
Oct  2 04:19:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481145472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.056 2 DEBUG oslo_concurrency.processutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.060 2 DEBUG nova.compute.provider_tree [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.085 2 DEBUG nova.scheduler.client.report [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.130 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.162 2 INFO nova.scheduler.client.report [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Deleted allocations for instance 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.224 2 DEBUG oslo_concurrency.lockutils [None req-131e8e3d-0c1a-44d1-ade8-35b28581a50a d32ade629c134ad08048b291aec0788a fa473305a1b246acab8af6113a5d321e - - default default] Lock "5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.432 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "2cf00828-84e1-410d-8acb-d94a197cc8ea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.432 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.457 2 DEBUG nova.compute.manager [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.518 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.519 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.525 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.525 2 INFO nova.compute.claims [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:19:19 np0005465604 nova_compute[260603]: 2025-10-02 08:19:19.615 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:19 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:19 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:19:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1123149086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.018 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.025 2 DEBUG nova.compute.provider_tree [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.045 2 DEBUG nova.scheduler.client.report [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.074 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.075 2 DEBUG nova.compute.manager [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.126 2 DEBUG nova.compute.manager [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.126 2 DEBUG nova.network.neutron [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.148 2 INFO nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.166 2 DEBUG nova.compute.manager [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.265 2 DEBUG nova.compute.manager [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.267 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.268 2 INFO nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Creating image(s)
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.301 2 DEBUG nova.storage.rbd_utils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.336 2 DEBUG nova.storage.rbd_utils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.370 2 DEBUG nova.storage.rbd_utils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.375 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.435 2 WARNING oslo_policy.policy [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.436 2 WARNING oslo_policy.policy [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.441 2 DEBUG nova.policy [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1037493f2f4c4e768c46e477c6183cd4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ab0927c5a0e0424fbcde0133feab6f16', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.465 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.466 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.467 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.467 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.493 2 DEBUG nova.storage.rbd_utils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.497 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 41 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.767 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.838 2 DEBUG nova.storage.rbd_utils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] resizing rbd image 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.946 2 DEBUG nova.objects.instance [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lazy-loading 'migration_context' on Instance uuid 2cf00828-84e1-410d-8acb-d94a197cc8ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.959 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.959 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Ensure instance console log exists: /var/lib/nova/instances/2cf00828-84e1-410d-8acb-d94a197cc8ea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.960 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.960 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:20 np0005465604 nova_compute[260603]: 2025-10-02 08:19:20.960 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:19:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2220473596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:19:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:19:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2220473596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.132 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "f3cc4273-e776-4d2f-aadc-005590b6cd01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.133 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "f3cc4273-e776-4d2f-aadc-005590b6cd01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.154 2 DEBUG nova.compute.manager [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.250 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.251 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.259 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.260 2 INFO nova.compute.claims [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.303 2 DEBUG nova.network.neutron [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Successfully created port: 71dc92e6-edbd-4240-ae33-d3cc5b361002 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.391 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 73 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 140 op/s
Oct  2 04:19:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680420196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.818 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.826 2 DEBUG nova.compute.provider_tree [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.845 2 DEBUG nova.scheduler.client.report [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.870 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.872 2 DEBUG nova.compute.manager [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.914 2 DEBUG nova.compute.manager [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.915 2 DEBUG nova.network.neutron [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.939 2 INFO nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.962 2 DEBUG nova.compute.manager [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.967 2 DEBUG nova.network.neutron [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Successfully updated port: 71dc92e6-edbd-4240-ae33-d3cc5b361002 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.996 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "refresh_cache-2cf00828-84e1-410d-8acb-d94a197cc8ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.996 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquired lock "refresh_cache-2cf00828-84e1-410d-8acb-d94a197cc8ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:19:22 np0005465604 nova_compute[260603]: 2025-10-02 08:19:22.996 2 DEBUG nova.network.neutron [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.067 2 DEBUG nova.compute.manager [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.069 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.070 2 INFO nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Creating image(s)
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.110 2 DEBUG nova.storage.rbd_utils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image f3cc4273-e776-4d2f-aadc-005590b6cd01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.136 2 DEBUG nova.storage.rbd_utils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image f3cc4273-e776-4d2f-aadc-005590b6cd01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.162 2 DEBUG nova.storage.rbd_utils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image f3cc4273-e776-4d2f-aadc-005590b6cd01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.166 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.247 2 DEBUG nova.network.neutron [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.252 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.253 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.254 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.254 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.283 2 DEBUG nova.storage.rbd_utils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image f3cc4273-e776-4d2f-aadc-005590b6cd01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.287 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f3cc4273-e776-4d2f-aadc-005590b6cd01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.442 2 DEBUG nova.network.neutron [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.443 2 DEBUG nova.compute.manager [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.571 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f3cc4273-e776-4d2f-aadc-005590b6cd01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.636 2 DEBUG nova.storage.rbd_utils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] resizing rbd image f3cc4273-e776-4d2f-aadc-005590b6cd01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.670 2 DEBUG nova.compute.manager [req-5edd4ba9-0f93-417b-825c-f0b60bba5b52 req-486b0bfc-db4a-4c23-bf1b-96d6a22481b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Received event network-changed-71dc92e6-edbd-4240-ae33-d3cc5b361002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.671 2 DEBUG nova.compute.manager [req-5edd4ba9-0f93-417b-825c-f0b60bba5b52 req-486b0bfc-db4a-4c23-bf1b-96d6a22481b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Refreshing instance network info cache due to event network-changed-71dc92e6-edbd-4240-ae33-d3cc5b361002. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.672 2 DEBUG oslo_concurrency.lockutils [req-5edd4ba9-0f93-417b-825c-f0b60bba5b52 req-486b0bfc-db4a-4c23-bf1b-96d6a22481b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-2cf00828-84e1-410d-8acb-d94a197cc8ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.733 2 DEBUG nova.objects.instance [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lazy-loading 'migration_context' on Instance uuid f3cc4273-e776-4d2f-aadc-005590b6cd01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.745 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.745 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Ensure instance console log exists: /var/lib/nova/instances/f3cc4273-e776-4d2f-aadc-005590b6cd01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.746 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.747 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.747 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.748 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.753 2 WARNING nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.758 2 DEBUG nova.virt.libvirt.host [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.759 2 DEBUG nova.virt.libvirt.host [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.762 2 DEBUG nova.virt.libvirt.host [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.762 2 DEBUG nova.virt.libvirt.host [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.763 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.763 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.763 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.763 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.764 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.764 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.764 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.764 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.764 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.764 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.765 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.765 2 DEBUG nova.virt.hardware [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:19:23 np0005465604 nova_compute[260603]: 2025-10-02 08:19:23.767 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.159 2 DEBUG nova.network.neutron [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Updating instance_info_cache with network_info: [{"id": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "address": "fa:16:3e:74:76:a0", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71dc92e6-ed", "ovs_interfaceid": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:19:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/400764906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.185 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.216 2 DEBUG nova.storage.rbd_utils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image f3cc4273-e776-4d2f-aadc-005590b6cd01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.221 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.249 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Releasing lock "refresh_cache-2cf00828-84e1-410d-8acb-d94a197cc8ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.250 2 DEBUG nova.compute.manager [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Instance network_info: |[{"id": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "address": "fa:16:3e:74:76:a0", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71dc92e6-ed", "ovs_interfaceid": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.252 2 DEBUG oslo_concurrency.lockutils [req-5edd4ba9-0f93-417b-825c-f0b60bba5b52 req-486b0bfc-db4a-4c23-bf1b-96d6a22481b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-2cf00828-84e1-410d-8acb-d94a197cc8ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.252 2 DEBUG nova.network.neutron [req-5edd4ba9-0f93-417b-825c-f0b60bba5b52 req-486b0bfc-db4a-4c23-bf1b-96d6a22481b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Refreshing network info cache for port 71dc92e6-edbd-4240-ae33-d3cc5b361002 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.258 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Start _get_guest_xml network_info=[{"id": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "address": "fa:16:3e:74:76:a0", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71dc92e6-ed", "ovs_interfaceid": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.265 2 WARNING nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.279 2 DEBUG nova.virt.libvirt.host [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.281 2 DEBUG nova.virt.libvirt.host [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.294 2 DEBUG nova.virt.libvirt.host [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.295 2 DEBUG nova.virt.libvirt.host [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.296 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.296 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:19:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='469022409',id=26,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-380304525',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.297 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.298 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.298 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.299 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.299 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.300 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.301 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.301 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.302 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.302 2 DEBUG nova.virt.hardware [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.307 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4048526247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.679 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.682 2 DEBUG nova.objects.instance [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lazy-loading 'pci_devices' on Instance uuid f3cc4273-e776-4d2f-aadc-005590b6cd01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.699 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  <uuid>f3cc4273-e776-4d2f-aadc-005590b6cd01</uuid>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  <name>instance-00000004</name>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-19318582</nova:name>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:19:23</nova:creationTime>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:        <nova:user uuid="2bbb3d2fe7dc4d0cbce173e3b3d890ef">tempest-DeleteServersAdminTestJSON-398237668-project-member</nova:user>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:        <nova:project uuid="475fe8421bdb460d970e52b4c47a8120">tempest-DeleteServersAdminTestJSON-398237668</nova:project>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <entry name="serial">f3cc4273-e776-4d2f-aadc-005590b6cd01</entry>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <entry name="uuid">f3cc4273-e776-4d2f-aadc-005590b6cd01</entry>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f3cc4273-e776-4d2f-aadc-005590b6cd01_disk">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f3cc4273-e776-4d2f-aadc-005590b6cd01_disk.config">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/f3cc4273-e776-4d2f-aadc-005590b6cd01/console.log" append="off"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:19:24 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:19:24 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:19:24 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:19:24 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:19:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 151 op/s
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.747 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.747 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.748 2 INFO nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Using config drive#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.769 2 DEBUG nova.storage.rbd_utils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image f3cc4273-e776-4d2f-aadc-005590b6cd01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3011261296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.797 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.815 2 DEBUG nova.storage.rbd_utils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.818 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.981 2 INFO nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Creating config drive at /var/lib/nova/instances/f3cc4273-e776-4d2f-aadc-005590b6cd01/disk.config#033[00m
Oct  2 04:19:24 np0005465604 nova_compute[260603]: 2025-10-02 08:19:24.989 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3cc4273-e776-4d2f-aadc-005590b6cd01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2irgg3bj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.114 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3cc4273-e776-4d2f-aadc-005590b6cd01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2irgg3bj" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.152 2 DEBUG nova.storage.rbd_utils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] rbd image f3cc4273-e776-4d2f-aadc-005590b6cd01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.156 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3cc4273-e776-4d2f-aadc-005590b6cd01/disk.config f3cc4273-e776-4d2f-aadc-005590b6cd01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115884270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.218 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.221 2 DEBUG nova.virt.libvirt.vif [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1607704631',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1607704631',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(26),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1607704631',id=3,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=26,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKyfJeOY45nkQsot9BemY9U/BkbEGNqNZqr3YNUHFqg2f7Al6V+K+e19H2MuiEK+wcXfUJg+H8qbMpV33FaRnqJABUBtXxvPVSB7W9ittbX2JkYChVSSogHDOqGnTjFf2A==',key_name='tempest-keypair-67388201',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ab0927c5a0e0424fbcde0133feab6f16',ramdisk_id='',reservation_id='r-aso5e8pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:19:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1037493f2f4c4e768c46e477c6183cd4',uuid=2cf00828-84e1-410d-8acb-d94a197cc8ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "address": "fa:16:3e:74:76:a0", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71dc92e6-ed", "ovs_interfaceid": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.222 2 DEBUG nova.network.os_vif_util [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converting VIF {"id": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "address": "fa:16:3e:74:76:a0", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71dc92e6-ed", "ovs_interfaceid": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.224 2 DEBUG nova.network.os_vif_util [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:76:a0,bridge_name='br-int',has_traffic_filtering=True,id=71dc92e6-edbd-4240-ae33-d3cc5b361002,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71dc92e6-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.227 2 DEBUG nova.objects.instance [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2cf00828-84e1-410d-8acb-d94a197cc8ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.245 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  <uuid>2cf00828-84e1-410d-8acb-d94a197cc8ea</uuid>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  <name>instance-00000003</name>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1607704631</nova:name>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:19:24</nova:creationTime>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-380304525">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <nova:user uuid="1037493f2f4c4e768c46e477c6183cd4">tempest-ServersWithSpecificFlavorTestJSON-1074733883-project-member</nova:user>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <nova:project uuid="ab0927c5a0e0424fbcde0133feab6f16">tempest-ServersWithSpecificFlavorTestJSON-1074733883</nova:project>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <nova:port uuid="71dc92e6-edbd-4240-ae33-d3cc5b361002">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <entry name="serial">2cf00828-84e1-410d-8acb-d94a197cc8ea</entry>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <entry name="uuid">2cf00828-84e1-410d-8acb-d94a197cc8ea</entry>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/2cf00828-84e1-410d-8acb-d94a197cc8ea_disk">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/2cf00828-84e1-410d-8acb-d94a197cc8ea_disk.config">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:74:76:a0"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <target dev="tap71dc92e6-ed"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/2cf00828-84e1-410d-8acb-d94a197cc8ea/console.log" append="off"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:19:25 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:19:25 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:19:25 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:19:25 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.247 2 DEBUG nova.compute.manager [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Preparing to wait for external event network-vif-plugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.247 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.248 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.248 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.250 2 DEBUG nova.virt.libvirt.vif [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1607704631',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1607704631',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(26),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1607704631',id=3,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=26,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKyfJeOY45nkQsot9BemY9U/BkbEGNqNZqr3YNUHFqg2f7Al6V+K+e19H2MuiEK+wcXfUJg+H8qbMpV33FaRnqJABUBtXxvPVSB7W9ittbX2JkYChVSSogHDOqGnTjFf2A==',key_name='tempest-keypair-67388201',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ab0927c5a0e0424fbcde0133feab6f16',ramdisk_id='',reservation_id='r-aso5e8pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:19:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1037493f2f4c4e768c46e477c6183cd4',uuid=2cf00828-84e1-410d-8acb-d94a197cc8ea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "address": "fa:16:3e:74:76:a0", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71dc92e6-ed", "ovs_interfaceid": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.250 2 DEBUG nova.network.os_vif_util [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converting VIF {"id": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "address": "fa:16:3e:74:76:a0", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71dc92e6-ed", "ovs_interfaceid": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.251 2 DEBUG nova.network.os_vif_util [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:76:a0,bridge_name='br-int',has_traffic_filtering=True,id=71dc92e6-edbd-4240-ae33-d3cc5b361002,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71dc92e6-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.252 2 DEBUG os_vif [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:76:a0,bridge_name='br-int',has_traffic_filtering=True,id=71dc92e6-edbd-4240-ae33-d3cc5b361002,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71dc92e6-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.296 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.296 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.297 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.317 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.318 2 INFO oslo.privsep.daemon [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpxatlahsd/privsep.sock']#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.337 2 DEBUG oslo_concurrency.processutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3cc4273-e776-4d2f-aadc-005590b6cd01/disk.config f3cc4273-e776-4d2f-aadc-005590b6cd01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:25 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.338 2 INFO nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Deleting local config drive /var/lib/nova/instances/f3cc4273-e776-4d2f-aadc-005590b6cd01/disk.config because it was imported into RBD.#033[00m
Oct  2 04:19:25 np0005465604 systemd-machined[214636]: New machine qemu-3-instance-00000004.
Oct  2 04:19:25 np0005465604 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.131 2 INFO oslo.privsep.daemon [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.970 1860 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.978 1860 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.983 1860 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:25.984 1860 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1860#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.167 2 DEBUG nova.network.neutron [req-5edd4ba9-0f93-417b-825c-f0b60bba5b52 req-486b0bfc-db4a-4c23-bf1b-96d6a22481b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Updated VIF entry in instance network info cache for port 71dc92e6-edbd-4240-ae33-d3cc5b361002. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.168 2 DEBUG nova.network.neutron [req-5edd4ba9-0f93-417b-825c-f0b60bba5b52 req-486b0bfc-db4a-4c23-bf1b-96d6a22481b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Updating instance_info_cache with network_info: [{"id": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "address": "fa:16:3e:74:76:a0", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71dc92e6-ed", "ovs_interfaceid": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.183 2 DEBUG oslo_concurrency.lockutils [req-5edd4ba9-0f93-417b-825c-f0b60bba5b52 req-486b0bfc-db4a-4c23-bf1b-96d6a22481b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-2cf00828-84e1-410d-8acb-d94a197cc8ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.201 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393166.2016933, f3cc4273-e776-4d2f-aadc-005590b6cd01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.202 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.204 2 DEBUG nova.compute.manager [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.205 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.208 2 INFO nova.virt.libvirt.driver [-] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Instance spawned successfully.#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.208 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.225 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.230 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.230 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.231 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.231 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.232 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.232 2 DEBUG nova.virt.libvirt.driver [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.237 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.292 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.292 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393166.2040308, f3cc4273-e776-4d2f-aadc-005590b6cd01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.293 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] VM Started (Lifecycle Event)#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.295 2 INFO nova.compute.manager [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Took 3.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.295 2 DEBUG nova.compute.manager [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.322 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.326 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.351 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.388 2 INFO nova.compute.manager [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Took 4.18 seconds to build instance.#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.402 2 DEBUG oslo_concurrency.lockutils [None req-11231f66-5e9b-4373-8253-f3b735064dfb 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "f3cc4273-e776-4d2f-aadc-005590b6cd01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.471 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71dc92e6-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap71dc92e6-ed, col_values=(('external_ids', {'iface-id': '71dc92e6-edbd-4240-ae33-d3cc5b361002', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:76:a0', 'vm-uuid': '2cf00828-84e1-410d-8acb-d94a197cc8ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:26 np0005465604 NetworkManager[45129]: <info>  [1759393166.4746] manager: (tap71dc92e6-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.482 2 INFO os_vif [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:76:a0,bridge_name='br-int',has_traffic_filtering=True,id=71dc92e6-edbd-4240-ae33-d3cc5b361002,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71dc92e6-ed')
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.528 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.529 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.529 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] No VIF found with MAC fa:16:3e:74:76:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.529 2 INFO nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Using config drive
Oct  2 04:19:26 np0005465604 nova_compute[260603]: 2025-10-02 08:19:26.548 2 DEBUG nova.storage.rbd_utils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct  2 04:19:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:27 np0005465604 nova_compute[260603]: 2025-10-02 08:19:27.240 2 INFO nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Creating config drive at /var/lib/nova/instances/2cf00828-84e1-410d-8acb-d94a197cc8ea/disk.config
Oct  2 04:19:27 np0005465604 nova_compute[260603]: 2025-10-02 08:19:27.246 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2cf00828-84e1-410d-8acb-d94a197cc8ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqkfw7onj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:27 np0005465604 nova_compute[260603]: 2025-10-02 08:19:27.386 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2cf00828-84e1-410d-8acb-d94a197cc8ea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqkfw7onj" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:27 np0005465604 nova_compute[260603]: 2025-10-02 08:19:27.422 2 DEBUG nova.storage.rbd_utils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:27 np0005465604 nova_compute[260603]: 2025-10-02 08:19:27.427 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2cf00828-84e1-410d-8acb-d94a197cc8ea/disk.config 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:27 np0005465604 nova_compute[260603]: 2025-10-02 08:19:27.613 2 DEBUG oslo_concurrency.processutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2cf00828-84e1-410d-8acb-d94a197cc8ea/disk.config 2cf00828-84e1-410d-8acb-d94a197cc8ea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:27 np0005465604 nova_compute[260603]: 2025-10-02 08:19:27.615 2 INFO nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Deleting local config drive /var/lib/nova/instances/2cf00828-84e1-410d-8acb-d94a197cc8ea/disk.config because it was imported into RBD.
Oct  2 04:19:27 np0005465604 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct  2 04:19:27 np0005465604 kernel: tap71dc92e6-ed: entered promiscuous mode
Oct  2 04:19:27 np0005465604 NetworkManager[45129]: <info>  [1759393167.6813] manager: (tap71dc92e6-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Oct  2 04:19:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:19:27Z|00027|binding|INFO|Claiming lport 71dc92e6-edbd-4240-ae33-d3cc5b361002 for this chassis.
Oct  2 04:19:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:19:27Z|00028|binding|INFO|71dc92e6-edbd-4240-ae33-d3cc5b361002: Claiming fa:16:3e:74:76:a0 10.100.0.3
Oct  2 04:19:27 np0005465604 nova_compute[260603]: 2025-10-02 08:19:27.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:19:27 np0005465604 systemd-udevd[276553]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:19:27 np0005465604 nova_compute[260603]: 2025-10-02 08:19:27.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:19:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:27.698 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:76:a0 10.100.0.3'], port_security=['fa:16:3e:74:76:a0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2cf00828-84e1-410d-8acb-d94a197cc8ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9847725b-0398-4587-bb8e-45200e1bbb6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ab0927c5a0e0424fbcde0133feab6f16', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ed81274-9cd4-4204-8473-93dd6f65ff51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9128983a-ebb5-4d5e-81a5-171931805db2, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=71dc92e6-edbd-4240-ae33-d3cc5b361002) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:19:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:27.699 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 71dc92e6-edbd-4240-ae33-d3cc5b361002 in datapath 9847725b-0398-4587-bb8e-45200e1bbb6a bound to our chassis
Oct  2 04:19:27 np0005465604 NetworkManager[45129]: <info>  [1759393167.7017] device (tap71dc92e6-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:19:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:27.701 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9847725b-0398-4587-bb8e-45200e1bbb6a
Oct  2 04:19:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:27.702 162357 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp8qcrtc38/privsep.sock']
Oct  2 04:19:27 np0005465604 NetworkManager[45129]: <info>  [1759393167.7034] device (tap71dc92e6-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:19:27 np0005465604 systemd-machined[214636]: New machine qemu-4-instance-00000003.
Oct  2 04:19:27 np0005465604 nova_compute[260603]: 2025-10-02 08:19:27.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:19:27 np0005465604 systemd[1]: Started Virtual Machine qemu-4-instance-00000003.
Oct  2 04:19:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:19:27Z|00029|binding|INFO|Setting lport 71dc92e6-edbd-4240-ae33-d3cc5b361002 ovn-installed in OVS
Oct  2 04:19:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:19:27Z|00030|binding|INFO|Setting lport 71dc92e6-edbd-4240-ae33-d3cc5b361002 up in Southbound
Oct  2 04:19:27 np0005465604 nova_compute[260603]: 2025-10-02 08:19:27.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:19:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:19:27
Oct  2 04:19:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:19:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:19:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['images', '.rgw.root', 'vms', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct  2 04:19:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.305 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Acquiring lock "2d021418-a2ed-45c1-8a5a-7aa794929fa8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.306 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "2d021418-a2ed-45c1-8a5a-7aa794929fa8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.307 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "f3cc4273-e776-4d2f-aadc-005590b6cd01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.307 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "f3cc4273-e776-4d2f-aadc-005590b6cd01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.307 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "f3cc4273-e776-4d2f-aadc-005590b6cd01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.308 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "f3cc4273-e776-4d2f-aadc-005590b6cd01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.308 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "f3cc4273-e776-4d2f-aadc-005590b6cd01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.310 2 INFO nova.compute.manager [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Terminating instance
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.310 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "refresh_cache-f3cc4273-e776-4d2f-aadc-005590b6cd01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.311 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquired lock "refresh_cache-f3cc4273-e776-4d2f-aadc-005590b6cd01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.311 2 DEBUG nova.network.neutron [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.332 2 DEBUG nova.compute.manager [req-dc57c314-b860-4900-aab7-e0b68f680308 req-d92c87c4-c163-49ab-b345-d6454a431c36 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Received event network-vif-plugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.332 2 DEBUG oslo_concurrency.lockutils [req-dc57c314-b860-4900-aab7-e0b68f680308 req-d92c87c4-c163-49ab-b345-d6454a431c36 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.333 2 DEBUG oslo_concurrency.lockutils [req-dc57c314-b860-4900-aab7-e0b68f680308 req-d92c87c4-c163-49ab-b345-d6454a431c36 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.333 2 DEBUG oslo_concurrency.lockutils [req-dc57c314-b860-4900-aab7-e0b68f680308 req-d92c87c4-c163-49ab-b345-d6454a431c36 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.333 2 DEBUG nova.compute.manager [req-dc57c314-b860-4900-aab7-e0b68f680308 req-d92c87c4-c163-49ab-b345-d6454a431c36 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Processing event network-vif-plugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.334 2 DEBUG nova.compute.manager [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:19:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:28.382 162357 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct  2 04:19:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:28.383 162357 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp8qcrtc38/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct  2 04:19:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:28.276 276572 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  2 04:19:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:28.281 276572 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  2 04:19:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:28.283 276572 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct  2 04:19:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:28.284 276572 INFO oslo.privsep.daemon [-] privsep daemon running as pid 276572
Oct  2 04:19:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:28.385 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[da4a3e3b-2430-417f-8e2c-69bc8b570acd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.408 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.409 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.415 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.415 2 INFO nova.compute.claims [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.519 2 DEBUG nova.network.neutron [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.538 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 278 op/s
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.765 2 DEBUG nova.network.neutron [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.782 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Releasing lock "refresh_cache-f3cc4273-e776-4d2f-aadc-005590b6cd01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:19:28 np0005465604 nova_compute[260603]: 2025-10-02 08:19:28.784 2 DEBUG nova.compute.manager [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:19:28 np0005465604 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct  2 04:19:28 np0005465604 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 3.396s CPU time.
Oct  2 04:19:28 np0005465604 systemd-machined[214636]: Machine qemu-3-instance-00000004 terminated.
Oct  2 04:19:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155907496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.006 2 INFO nova.virt.libvirt.driver [-] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Instance destroyed successfully.#033[00m
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.008 2 DEBUG nova.objects.instance [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lazy-loading 'resources' on Instance uuid f3cc4273-e776-4d2f-aadc-005590b6cd01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.010 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.016 2 DEBUG nova.compute.provider_tree [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.060 2 DEBUG nova.scheduler.client.report [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.090 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.092 2 DEBUG nova.compute.manager [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:19:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:29.129 276572 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:29.129 276572 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:29.129 276572 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.138 2 DEBUG nova.compute.manager [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.142 2 DEBUG nova.network.neutron [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.158 2 INFO nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.179 2 DEBUG nova.compute.manager [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.263 2 DEBUG nova.compute.manager [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.266 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.266 2 INFO nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Creating image(s)
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.291 2 DEBUG nova.storage.rbd_utils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] rbd image 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.319 2 DEBUG nova.storage.rbd_utils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] rbd image 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.342 2 DEBUG nova.storage.rbd_utils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] rbd image 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.347 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.402 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.403 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.403 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.403 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.418 2 DEBUG nova.storage.rbd_utils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] rbd image 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.420 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.439 2 INFO nova.virt.libvirt.driver [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Deleting instance files /var/lib/nova/instances/f3cc4273-e776-4d2f-aadc-005590b6cd01_del
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.439 2 INFO nova.virt.libvirt.driver [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Deletion of /var/lib/nova/instances/f3cc4273-e776-4d2f-aadc-005590b6cd01_del complete
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.470 2 DEBUG nova.network.neutron [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.471 2 DEBUG nova.compute.manager [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.483 2 INFO nova.compute.manager [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Took 0.70 seconds to destroy the instance on the hypervisor.
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.483 2 DEBUG oslo.service.loopingcall [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.484 2 DEBUG nova.compute.manager [-] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.484 2 DEBUG nova.network.neutron [-] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.621 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.643 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393169.6434598, 2cf00828-84e1-410d-8acb-d94a197cc8ea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.644 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] VM Started (Lifecycle Event)
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.646 2 DEBUG nova.compute.manager [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.672 2 DEBUG nova.network.neutron [-] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.674 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.675 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.686 2 DEBUG nova.storage.rbd_utils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] resizing rbd image 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.710 2 DEBUG nova.network.neutron [-] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.712 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.714 2 INFO nova.virt.libvirt.driver [-] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Instance spawned successfully.
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.714 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.767 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.767 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393169.6456609, 2cf00828-84e1-410d-8acb-d94a197cc8ea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.767 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] VM Paused (Lifecycle Event)
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.769 2 INFO nova.compute.manager [-] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Took 0.28 seconds to deallocate network for instance.
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.780 2 DEBUG nova.objects.instance [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lazy-loading 'migration_context' on Instance uuid 2d021418-a2ed-45c1-8a5a-7aa794929fa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.784 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.784 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.784 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.785 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.785 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.786 2 DEBUG nova.virt.libvirt.driver [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.792 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.794 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393169.6482089, 2cf00828-84e1-410d-8acb-d94a197cc8ea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.795 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] VM Resumed (Lifecycle Event)
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.821 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.822 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Ensure instance console log exists: /var/lib/nova/instances/2d021418-a2ed-45c1-8a5a-7aa794929fa8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.822 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.822 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.822 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.824 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.827 2 WARNING nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.832 2 DEBUG nova.virt.libvirt.host [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.833 2 DEBUG nova.virt.libvirt.host [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.837 2 DEBUG nova.virt.libvirt.host [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.838 2 DEBUG nova.virt.libvirt.host [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.838 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.838 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.839 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.839 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.839 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.839 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.840 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.840 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.840 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.840 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.841 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.841 2 DEBUG nova.virt.hardware [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.844 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.866 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.867 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.868 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.869 2 INFO nova.compute.manager [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Took 9.60 seconds to spawn the instance on the hypervisor.
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.870 2 DEBUG nova.compute.manager [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.879 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.917 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.955 2 INFO nova.compute.manager [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Took 10.46 seconds to build instance.
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.971 2 DEBUG oslo_concurrency.lockutils [None req-7e721b5b-a789-4ac5-8f28-daed6ec63f29 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:19:29 np0005465604 nova_compute[260603]: 2025-10-02 08:19:29.978 2 DEBUG oslo_concurrency.processutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:30 np0005465604 podman[276831]: 2025-10-02 08:19:30.044929902 +0000 UTC m=+0.101049333 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:19:30 np0005465604 podman[276830]: 2025-10-02 08:19:30.073113245 +0000 UTC m=+0.133448268 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.148 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[456540af-ba85-4344-a796-09c6b7e8af42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.149 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9847725b-01 in ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.151 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9847725b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.151 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4da342d3-e702-4255-893f-7f257df61114]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.155 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[676c046d-6491-41e9-a918-b325e82e30f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.176 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d47fde-2f16-484a-bdca-97aa0795626f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.209 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1def92cc-ce9d-4d61-afb5-d255260e3057]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.211 162357 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmprb78s14o/privsep.sock']#033[00m
Oct  2 04:19:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651586273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.296 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.322 2 DEBUG nova.storage.rbd_utils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] rbd image 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.326 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3180148262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.423 2 DEBUG oslo_concurrency.processutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.428 2 DEBUG nova.compute.provider_tree [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.444 2 DEBUG nova.scheduler.client.report [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.457 2 DEBUG nova.compute.manager [req-cb88c380-190d-4a63-9a42-c6e4aaf35958 req-9e44bd12-78b0-49fb-bbd4-45ccce2b912c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Received event network-vif-plugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.458 2 DEBUG oslo_concurrency.lockutils [req-cb88c380-190d-4a63-9a42-c6e4aaf35958 req-9e44bd12-78b0-49fb-bbd4-45ccce2b912c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.458 2 DEBUG oslo_concurrency.lockutils [req-cb88c380-190d-4a63-9a42-c6e4aaf35958 req-9e44bd12-78b0-49fb-bbd4-45ccce2b912c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.458 2 DEBUG oslo_concurrency.lockutils [req-cb88c380-190d-4a63-9a42-c6e4aaf35958 req-9e44bd12-78b0-49fb-bbd4-45ccce2b912c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.458 2 DEBUG nova.compute.manager [req-cb88c380-190d-4a63-9a42-c6e4aaf35958 req-9e44bd12-78b0-49fb-bbd4-45ccce2b912c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] No waiting events found dispatching network-vif-plugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.458 2 WARNING nova.compute.manager [req-cb88c380-190d-4a63-9a42-c6e4aaf35958 req-9e44bd12-78b0-49fb-bbd4-45ccce2b912c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Received unexpected event network-vif-plugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.472 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.509 2 INFO nova.scheduler.client.report [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Deleted allocations for instance f3cc4273-e776-4d2f-aadc-005590b6cd01#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.577 2 DEBUG oslo_concurrency.lockutils [None req-5e2ed72d-e2da-436b-9217-705e63a496c5 2bbb3d2fe7dc4d0cbce173e3b3d890ef 475fe8421bdb460d970e52b4c47a8120 - - default default] Lock "f3cc4273-e776-4d2f-aadc-005590b6cd01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 134 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Oct  2 04:19:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1914225184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.774 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.776 2 DEBUG nova.objects.instance [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d021418-a2ed-45c1-8a5a-7aa794929fa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.796 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  <uuid>2d021418-a2ed-45c1-8a5a-7aa794929fa8</uuid>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  <name>instance-00000005</name>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerExternalEventsTest-server-913167669</nova:name>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:19:29</nova:creationTime>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:        <nova:user uuid="d0a62ef6f53d46d0b4bf5aa6a9eb083a">tempest-ServerExternalEventsTest-851797414-project-member</nova:user>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:        <nova:project uuid="bf26aadd4a3f4119866947e0be744a63">tempest-ServerExternalEventsTest-851797414</nova:project>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <entry name="serial">2d021418-a2ed-45c1-8a5a-7aa794929fa8</entry>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <entry name="uuid">2d021418-a2ed-45c1-8a5a-7aa794929fa8</entry>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk.config">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/2d021418-a2ed-45c1-8a5a-7aa794929fa8/console.log" append="off"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:19:30 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:19:30 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:19:30 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:19:30 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.858 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.859 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.859 2 INFO nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Using config drive#033[00m
Oct  2 04:19:30 np0005465604 nova_compute[260603]: 2025-10-02 08:19:30.883 2 DEBUG nova.storage.rbd_utils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] rbd image 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.973 162357 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.974 162357 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmprb78s14o/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.865 276965 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.868 276965 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.871 276965 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.871 276965 INFO oslo.privsep.daemon [-] privsep daemon running as pid 276965#033[00m
Oct  2 04:19:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:30.992 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8f59e7ea-5e02-4f64-ad91-33e91ce9110d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.094 2 INFO nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Creating config drive at /var/lib/nova/instances/2d021418-a2ed-45c1-8a5a-7aa794929fa8/disk.config#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.098 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d021418-a2ed-45c1-8a5a-7aa794929fa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbiig0ezq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.248 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d021418-a2ed-45c1-8a5a-7aa794929fa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbiig0ezq" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.286 2 DEBUG nova.storage.rbd_utils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] rbd image 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.291 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d021418-a2ed-45c1-8a5a-7aa794929fa8/disk.config 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:31 np0005465604 NetworkManager[45129]: <info>  [1759393171.4404] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Oct  2 04:19:31 np0005465604 NetworkManager[45129]: <info>  [1759393171.4410] device (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 04:19:31 np0005465604 NetworkManager[45129]: <info>  [1759393171.4420] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Oct  2 04:19:31 np0005465604 NetworkManager[45129]: <info>  [1759393171.4423] device (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 04:19:31 np0005465604 NetworkManager[45129]: <info>  [1759393171.4432] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Oct  2 04:19:31 np0005465604 NetworkManager[45129]: <info>  [1759393171.4441] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct  2 04:19:31 np0005465604 NetworkManager[45129]: <info>  [1759393171.4447] device (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 04:19:31 np0005465604 NetworkManager[45129]: <info>  [1759393171.4451] device (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.464 2 DEBUG oslo_concurrency.processutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d021418-a2ed-45c1-8a5a-7aa794929fa8/disk.config 2d021418-a2ed-45c1-8a5a-7aa794929fa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.464 2 INFO nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Deleting local config drive /var/lib/nova/instances/2d021418-a2ed-45c1-8a5a-7aa794929fa8/disk.config because it was imported into RBD.#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:31.493 276965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:31.493 276965 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:31.493 276965 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:31 np0005465604 systemd-machined[214636]: New machine qemu-5-instance-00000005.
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:31 np0005465604 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.566 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393156.56518, 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.567 2 INFO nova.compute.manager [-] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:19:31 np0005465604 nova_compute[260603]: 2025-10-02 08:19:31.583 2 DEBUG nova.compute.manager [None req-30548ecc-5b0d-4b36-bef1-6ec1e1855793 - - - - - -] [instance: 5cd7c3a3-f245-42b2-ac5a-3bccbf76b6b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.051 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3766dd2c-a8a7-42b4-afdd-644032f55491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:32 np0005465604 NetworkManager[45129]: <info>  [1759393172.0600] manager: (tap9847725b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Oct  2 04:19:32 np0005465604 systemd-udevd[277087]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.058 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[35e1c870-9015-4fbe-9487-44790dd31baf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.096 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[07cb2d8d-efe9-4546-982f-d3847d610a35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.099 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a03fd960-e11f-4096-8f30-de6466136157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:32 np0005465604 NetworkManager[45129]: <info>  [1759393172.1213] device (tap9847725b-00): carrier: link connected
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.126 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[98f67eb6-198a-42b3-8861-b7d30328629b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.143 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b8454897-cff0-490c-8500-643736534ffc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9847725b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:c7:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405072, 'reachable_time': 29970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277109, 'error': None, 'target': 'ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.158 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[404ba9c4-b47e-41da-9cd6-88344fda9adc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:c797'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405072, 'tstamp': 405072}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277110, 'error': None, 'target': 'ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.173 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc62c25-d7d6-47c1-96b4-cd212b475fb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9847725b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:c7:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405072, 'reachable_time': 29970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277111, 'error': None, 'target': 'ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.201 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[570f226d-6e8f-4cf3-b238-474303dc38ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.257 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7d4634-0df4-4042-9733-c16d2482f4ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.259 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9847725b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.259 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.259 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9847725b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:32 np0005465604 NetworkManager[45129]: <info>  [1759393172.2884] manager: (tap9847725b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct  2 04:19:32 np0005465604 kernel: tap9847725b-00: entered promiscuous mode
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.291 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9847725b-00, col_values=(('external_ids', {'iface-id': '2eaa21e0-aae3-431e-8f86-4bd51c7d127c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:32 np0005465604 ovn_controller[152344]: 2025-10-02T08:19:32Z|00031|binding|INFO|Releasing lport 2eaa21e0-aae3-431e-8f86-4bd51c7d127c from this chassis (sb_readonly=0)
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.322 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9847725b-0398-4587-bb8e-45200e1bbb6a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9847725b-0398-4587-bb8e-45200e1bbb6a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.323 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[17748b73-75a9-4af1-a0e3-1a6705e74b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.324 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-9847725b-0398-4587-bb8e-45200e1bbb6a
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/9847725b-0398-4587-bb8e-45200e1bbb6a.pid.haproxy
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 9847725b-0398-4587-bb8e-45200e1bbb6a
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:19:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:32.324 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a', 'env', 'PROCESS_TAG=haproxy-9847725b-0398-4587-bb8e-45200e1bbb6a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9847725b-0398-4587-bb8e-45200e1bbb6a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.392 2 DEBUG nova.compute.manager [req-fda65d25-abc4-4d20-a8ee-952eb00b3dd1 req-f78dd312-2e62-46a0-bba5-69b9ab288886 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Received event network-changed-71dc92e6-edbd-4240-ae33-d3cc5b361002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.393 2 DEBUG nova.compute.manager [req-fda65d25-abc4-4d20-a8ee-952eb00b3dd1 req-f78dd312-2e62-46a0-bba5-69b9ab288886 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Refreshing instance network info cache due to event network-changed-71dc92e6-edbd-4240-ae33-d3cc5b361002. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.393 2 DEBUG oslo_concurrency.lockutils [req-fda65d25-abc4-4d20-a8ee-952eb00b3dd1 req-f78dd312-2e62-46a0-bba5-69b9ab288886 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-2cf00828-84e1-410d-8acb-d94a197cc8ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.393 2 DEBUG oslo_concurrency.lockutils [req-fda65d25-abc4-4d20-a8ee-952eb00b3dd1 req-f78dd312-2e62-46a0-bba5-69b9ab288886 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-2cf00828-84e1-410d-8acb-d94a197cc8ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.394 2 DEBUG nova.network.neutron [req-fda65d25-abc4-4d20-a8ee-952eb00b3dd1 req-f78dd312-2e62-46a0-bba5-69b9ab288886 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Refreshing network info cache for port 71dc92e6-edbd-4240-ae33-d3cc5b361002 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.572 2 DEBUG nova.compute.manager [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.573 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.573 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393172.5720158, 2d021418-a2ed-45c1-8a5a-7aa794929fa8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.574 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.582 2 INFO nova.virt.libvirt.driver [-] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Instance spawned successfully.#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.583 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.598 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.607 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.611 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.611 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.614 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.615 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.615 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.616 2 DEBUG nova.virt.libvirt.driver [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.646 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.647 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393172.572312, 2d021418-a2ed-45c1-8a5a-7aa794929fa8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.647 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] VM Started (Lifecycle Event)#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.700 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.704 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:19:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 135 MiB data, 242 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.1 MiB/s wr, 276 op/s
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.727 2 INFO nova.compute.manager [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Took 3.46 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.727 2 DEBUG nova.compute.manager [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.730 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:19:32 np0005465604 podman[277143]: 2025-10-02 08:19:32.779829889 +0000 UTC m=+0.091316409 container create 0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.800 2 INFO nova.compute.manager [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Took 4.41 seconds to build instance.#033[00m
Oct  2 04:19:32 np0005465604 nova_compute[260603]: 2025-10-02 08:19:32.821 2 DEBUG oslo_concurrency.lockutils [None req-b3af0bb6-d7dd-4263-94e1-2ea5d2604415 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "2d021418-a2ed-45c1-8a5a-7aa794929fa8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:32 np0005465604 podman[277143]: 2025-10-02 08:19:32.741630304 +0000 UTC m=+0.053116914 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:19:32 np0005465604 systemd[1]: Started libpod-conmon-0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e.scope.
Oct  2 04:19:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:19:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67d5a1e0423db810507479b051f6832e88f90e33679f204993a2e224c5ccb80/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:19:32 np0005465604 podman[277143]: 2025-10-02 08:19:32.902057264 +0000 UTC m=+0.213543844 container init 0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 04:19:32 np0005465604 podman[277143]: 2025-10-02 08:19:32.911235841 +0000 UTC m=+0.222722361 container start 0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:19:32 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[277158]: [NOTICE]   (277162) : New worker (277164) forked
Oct  2 04:19:32 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[277158]: [NOTICE]   (277162) : Loading success.
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.168 2 DEBUG nova.compute.manager [None req-070059a8-49fd-45ad-b267-99da97dd9ad0 d07b1d5b1add4c3db3ed4156c7b56ff7 1c553d93014746a1a036511ab72b5e99 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.168 2 DEBUG nova.compute.manager [None req-070059a8-49fd-45ad-b267-99da97dd9ad0 d07b1d5b1add4c3db3ed4156c7b56ff7 1c553d93014746a1a036511ab72b5e99 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.169 2 DEBUG oslo_concurrency.lockutils [None req-070059a8-49fd-45ad-b267-99da97dd9ad0 d07b1d5b1add4c3db3ed4156c7b56ff7 1c553d93014746a1a036511ab72b5e99 - - default default] Acquiring lock "refresh_cache-2d021418-a2ed-45c1-8a5a-7aa794929fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.169 2 DEBUG oslo_concurrency.lockutils [None req-070059a8-49fd-45ad-b267-99da97dd9ad0 d07b1d5b1add4c3db3ed4156c7b56ff7 1c553d93014746a1a036511ab72b5e99 - - default default] Acquired lock "refresh_cache-2d021418-a2ed-45c1-8a5a-7aa794929fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.170 2 DEBUG nova.network.neutron [None req-070059a8-49fd-45ad-b267-99da97dd9ad0 d07b1d5b1add4c3db3ed4156c7b56ff7 1c553d93014746a1a036511ab72b5e99 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.310 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Acquiring lock "2d021418-a2ed-45c1-8a5a-7aa794929fa8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.311 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "2d021418-a2ed-45c1-8a5a-7aa794929fa8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.312 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Acquiring lock "2d021418-a2ed-45c1-8a5a-7aa794929fa8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.312 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "2d021418-a2ed-45c1-8a5a-7aa794929fa8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.312 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "2d021418-a2ed-45c1-8a5a-7aa794929fa8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.314 2 INFO nova.compute.manager [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Terminating instance#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.316 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Acquiring lock "refresh_cache-2d021418-a2ed-45c1-8a5a-7aa794929fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.548 2 DEBUG nova.network.neutron [None req-070059a8-49fd-45ad-b267-99da97dd9ad0 d07b1d5b1add4c3db3ed4156c7b56ff7 1c553d93014746a1a036511ab72b5e99 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.602 2 DEBUG nova.network.neutron [req-fda65d25-abc4-4d20-a8ee-952eb00b3dd1 req-f78dd312-2e62-46a0-bba5-69b9ab288886 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Updated VIF entry in instance network info cache for port 71dc92e6-edbd-4240-ae33-d3cc5b361002. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.603 2 DEBUG nova.network.neutron [req-fda65d25-abc4-4d20-a8ee-952eb00b3dd1 req-f78dd312-2e62-46a0-bba5-69b9ab288886 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Updating instance_info_cache with network_info: [{"id": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "address": "fa:16:3e:74:76:a0", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71dc92e6-ed", "ovs_interfaceid": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.664 2 DEBUG oslo_concurrency.lockutils [req-fda65d25-abc4-4d20-a8ee-952eb00b3dd1 req-f78dd312-2e62-46a0-bba5-69b9ab288886 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-2cf00828-84e1-410d-8acb-d94a197cc8ea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:19:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 134 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.7 MiB/s wr, 315 op/s
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.767 2 DEBUG nova.network.neutron [None req-070059a8-49fd-45ad-b267-99da97dd9ad0 d07b1d5b1add4c3db3ed4156c7b56ff7 1c553d93014746a1a036511ab72b5e99 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.781 2 DEBUG oslo_concurrency.lockutils [None req-070059a8-49fd-45ad-b267-99da97dd9ad0 d07b1d5b1add4c3db3ed4156c7b56ff7 1c553d93014746a1a036511ab72b5e99 - - default default] Releasing lock "refresh_cache-2d021418-a2ed-45c1-8a5a-7aa794929fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.781 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Acquired lock "refresh_cache-2d021418-a2ed-45c1-8a5a-7aa794929fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.782 2 DEBUG nova.network.neutron [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:19:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:34.806 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:34.806 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:34.807 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:34 np0005465604 nova_compute[260603]: 2025-10-02 08:19:34.892 2 DEBUG nova.network.neutron [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:19:35 np0005465604 nova_compute[260603]: 2025-10-02 08:19:35.254 2 DEBUG nova.network.neutron [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:19:35 np0005465604 nova_compute[260603]: 2025-10-02 08:19:35.271 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Releasing lock "refresh_cache-2d021418-a2ed-45c1-8a5a-7aa794929fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:19:35 np0005465604 nova_compute[260603]: 2025-10-02 08:19:35.272 2 DEBUG nova.compute.manager [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:19:35 np0005465604 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct  2 04:19:35 np0005465604 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 3.663s CPU time.
Oct  2 04:19:35 np0005465604 systemd-machined[214636]: Machine qemu-5-instance-00000005 terminated.
Oct  2 04:19:35 np0005465604 podman[277173]: 2025-10-02 08:19:35.422138277 +0000 UTC m=+0.075957998 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct  2 04:19:35 np0005465604 nova_compute[260603]: 2025-10-02 08:19:35.497 2 INFO nova.virt.libvirt.driver [-] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Instance destroyed successfully.#033[00m
Oct  2 04:19:35 np0005465604 nova_compute[260603]: 2025-10-02 08:19:35.498 2 DEBUG nova.objects.instance [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lazy-loading 'resources' on Instance uuid 2d021418-a2ed-45c1-8a5a-7aa794929fa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:35 np0005465604 nova_compute[260603]: 2025-10-02 08:19:35.877 2 INFO nova.virt.libvirt.driver [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Deleting instance files /var/lib/nova/instances/2d021418-a2ed-45c1-8a5a-7aa794929fa8_del#033[00m
Oct  2 04:19:35 np0005465604 nova_compute[260603]: 2025-10-02 08:19:35.878 2 INFO nova.virt.libvirt.driver [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Deletion of /var/lib/nova/instances/2d021418-a2ed-45c1-8a5a-7aa794929fa8_del complete#033[00m
Oct  2 04:19:35 np0005465604 nova_compute[260603]: 2025-10-02 08:19:35.949 2 INFO nova.compute.manager [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:19:35 np0005465604 nova_compute[260603]: 2025-10-02 08:19:35.950 2 DEBUG oslo.service.loopingcall [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:19:35 np0005465604 nova_compute[260603]: 2025-10-02 08:19:35.951 2 DEBUG nova.compute.manager [-] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:19:35 np0005465604 nova_compute[260603]: 2025-10-02 08:19:35.952 2 DEBUG nova.network.neutron [-] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.253 2 DEBUG nova.network.neutron [-] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.266 2 DEBUG nova.network.neutron [-] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.278 2 INFO nova.compute.manager [-] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Took 0.33 seconds to deallocate network for instance.#033[00m
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.318 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.319 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.412 2 DEBUG oslo_concurrency.processutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 134 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 301 op/s
Oct  2 04:19:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256508886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.884 2 DEBUG oslo_concurrency.processutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.889 2 DEBUG nova.compute.provider_tree [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.908 2 DEBUG nova.scheduler.client.report [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:19:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.934 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:36 np0005465604 nova_compute[260603]: 2025-10-02 08:19:36.980 2 INFO nova.scheduler.client.report [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Deleted allocations for instance 2d021418-a2ed-45c1-8a5a-7aa794929fa8#033[00m
Oct  2 04:19:37 np0005465604 nova_compute[260603]: 2025-10-02 08:19:37.058 2 DEBUG oslo_concurrency.lockutils [None req-0eb775a1-6ea6-4d81-aa5d-6deb0874a3f7 d0a62ef6f53d46d0b4bf5aa6a9eb083a bf26aadd4a3f4119866947e0be744a63 - - default default] Lock "2d021418-a2ed-45c1-8a5a-7aa794929fa8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006971449298365566 of space, bias 1.0, pg target 0.209143478950967 quantized to 32 (current 32)
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:19:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 387 op/s
Oct  2 04:19:40 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  2 04:19:40 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  2 04:19:40 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  2 04:19:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 235 op/s
Oct  2 04:19:41 np0005465604 nova_compute[260603]: 2025-10-02 08:19:41.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:41 np0005465604 nova_compute[260603]: 2025-10-02 08:19:41.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:19:41Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:74:76:a0 10.100.0.3
Oct  2 04:19:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:19:41Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:76:a0 10.100.0.3
Oct  2 04:19:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:42 np0005465604 podman[277239]: 2025-10-02 08:19:42.053553696 +0000 UTC m=+0.116985652 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:19:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 109 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 275 op/s
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.005 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393169.0034971, f3cc4273-e776-4d2f-aadc-005590b6cd01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.006 2 INFO nova.compute.manager [-] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.025 2 DEBUG nova.compute.manager [None req-7f267de0-3920-4aad-a6af-b22d1950839e - - - - - -] [instance: f3cc4273-e776-4d2f-aadc-005590b6cd01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:44.046 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:19:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:44.047 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.563 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquiring lock "19ea7528-6b08-4fe7-8c5b-4d96247bd50e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.564 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "19ea7528-6b08-4fe7-8c5b-4d96247bd50e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.578 2 DEBUG nova.compute.manager [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.645 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.645 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.656 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.657 2 INFO nova.compute.claims [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:19:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 121 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 200 op/s
Oct  2 04:19:44 np0005465604 nova_compute[260603]: 2025-10-02 08:19:44.772 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:45.050 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/792721046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.230 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.239 2 DEBUG nova.compute.provider_tree [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.258 2 DEBUG nova.scheduler.client.report [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.282 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.283 2 DEBUG nova.compute.manager [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.331 2 DEBUG nova.compute.manager [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.351 2 INFO nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.370 2 DEBUG nova.compute.manager [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.453 2 DEBUG nova.compute.manager [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.456 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.457 2 INFO nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Creating image(s)#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.491 2 DEBUG nova.storage.rbd_utils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.528 2 DEBUG nova.storage.rbd_utils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.559 2 DEBUG nova.storage.rbd_utils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.564 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.657 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.658 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.659 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.660 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.686 2 DEBUG nova.storage.rbd_utils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.690 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:45 np0005465604 nova_compute[260603]: 2025-10-02 08:19:45.979 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.060 2 DEBUG nova.storage.rbd_utils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] resizing rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.164 2 DEBUG nova.objects.instance [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lazy-loading 'migration_context' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.178 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.178 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Ensure instance console log exists: /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.178 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.179 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.179 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.180 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.183 2 WARNING nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.188 2 DEBUG nova.virt.libvirt.host [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.189 2 DEBUG nova.virt.libvirt.host [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.195 2 DEBUG nova.virt.libvirt.host [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.195 2 DEBUG nova.virt.libvirt.host [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.195 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.196 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.196 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.196 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.196 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.196 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.197 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.197 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.197 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.197 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.197 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.197 2 DEBUG nova.virt.hardware [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.200 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:19:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1359692750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.636 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.670 2 DEBUG nova.storage.rbd_utils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:46 np0005465604 nova_compute[260603]: 2025-10-02 08:19:46.676 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 121 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Oct  2 04:19:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/432714664' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.111 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.113 2 DEBUG nova.objects.instance [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lazy-loading 'pci_devices' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.127 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  <uuid>19ea7528-6b08-4fe7-8c5b-4d96247bd50e</uuid>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  <name>instance-00000006</name>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAdmin275Test-server-1855671180</nova:name>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:19:46</nova:creationTime>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:        <nova:user uuid="7646e9aa5e134fc7b0621d170e0ced6f">tempest-ServersAdmin275Test-824814869-project-member</nova:user>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:        <nova:project uuid="8d66eb05f1fd4827b705307a038d1157">tempest-ServersAdmin275Test-824814869</nova:project>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <entry name="serial">19ea7528-6b08-4fe7-8c5b-4d96247bd50e</entry>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <entry name="uuid">19ea7528-6b08-4fe7-8c5b-4d96247bd50e</entry>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/console.log" append="off"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:19:47 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:19:47 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:19:47 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:19:47 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.175 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.176 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.177 2 INFO nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Using config drive
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.198 2 DEBUG nova.storage.rbd_utils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.493 2 INFO nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Creating config drive at /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.499 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyotuqckb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.629 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyotuqckb" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.653 2 DEBUG nova.storage.rbd_utils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.656 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.821 2 DEBUG oslo_concurrency.processutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:19:47 np0005465604 nova_compute[260603]: 2025-10-02 08:19:47.823 2 INFO nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Deleting local config drive /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config because it was imported into RBD.
Oct  2 04:19:47 np0005465604 systemd-machined[214636]: New machine qemu-6-instance-00000006.
Oct  2 04:19:47 np0005465604 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct  2 04:19:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 167 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.751 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393188.7507823, 19ea7528-6b08-4fe7-8c5b-4d96247bd50e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.752 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] VM Resumed (Lifecycle Event)
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.755 2 DEBUG nova.compute.manager [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.755 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.760 2 INFO nova.virt.libvirt.driver [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance spawned successfully.
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.761 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.778 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.783 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.784 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.785 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.785 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.786 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.787 2 DEBUG nova.virt.libvirt.driver [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.793 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.819 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.819 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393188.752459, 19ea7528-6b08-4fe7-8c5b-4d96247bd50e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.820 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] VM Started (Lifecycle Event)#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.840 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.844 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.853 2 INFO nova.compute.manager [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Took 3.40 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.853 2 DEBUG nova.compute.manager [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.862 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.903 2 DEBUG oslo_concurrency.lockutils [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "2cf00828-84e1-410d-8acb-d94a197cc8ea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.903 2 DEBUG oslo_concurrency.lockutils [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.904 2 DEBUG oslo_concurrency.lockutils [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.904 2 DEBUG oslo_concurrency.lockutils [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.905 2 DEBUG oslo_concurrency.lockutils [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.907 2 INFO nova.compute.manager [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Terminating instance#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.908 2 DEBUG nova.compute.manager [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.919 2 INFO nova.compute.manager [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Took 4.30 seconds to build instance.#033[00m
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.933 2 DEBUG oslo_concurrency.lockutils [None req-d6e95381-3675-483a-91f9-627e466bef75 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "19ea7528-6b08-4fe7-8c5b-4d96247bd50e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:48 np0005465604 kernel: tap71dc92e6-ed (unregistering): left promiscuous mode
Oct  2 04:19:48 np0005465604 NetworkManager[45129]: <info>  [1759393188.9722] device (tap71dc92e6-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:19:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:19:48Z|00032|binding|INFO|Releasing lport 71dc92e6-edbd-4240-ae33-d3cc5b361002 from this chassis (sb_readonly=0)
Oct  2 04:19:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:19:48Z|00033|binding|INFO|Setting lport 71dc92e6-edbd-4240-ae33-d3cc5b361002 down in Southbound
Oct  2 04:19:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:19:48Z|00034|binding|INFO|Removing iface tap71dc92e6-ed ovn-installed in OVS
Oct  2 04:19:48 np0005465604 nova_compute[260603]: 2025-10-02 08:19:48.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:48.993 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:76:a0 10.100.0.3'], port_security=['fa:16:3e:74:76:a0 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2cf00828-84e1-410d-8acb-d94a197cc8ea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9847725b-0398-4587-bb8e-45200e1bbb6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ab0927c5a0e0424fbcde0133feab6f16', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ed81274-9cd4-4204-8473-93dd6f65ff51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9128983a-ebb5-4d5e-81a5-171931805db2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=71dc92e6-edbd-4240-ae33-d3cc5b361002) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:19:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:48.994 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 71dc92e6-edbd-4240-ae33-d3cc5b361002 in datapath 9847725b-0398-4587-bb8e-45200e1bbb6a unbound from our chassis#033[00m
Oct  2 04:19:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:48.996 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9847725b-0398-4587-bb8e-45200e1bbb6a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:19:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:48.998 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5f17d5d6-49e4-4bfd-a90f-6f2f990df3ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:48.999 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a namespace which is not needed anymore#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:49 np0005465604 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct  2 04:19:49 np0005465604 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000003.scope: Consumed 13.968s CPU time.
Oct  2 04:19:49 np0005465604 systemd-machined[214636]: Machine qemu-4-instance-00000003 terminated.
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.158 2 INFO nova.virt.libvirt.driver [-] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Instance destroyed successfully.#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.159 2 DEBUG nova.objects.instance [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lazy-loading 'resources' on Instance uuid 2cf00828-84e1-410d-8acb-d94a197cc8ea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.173 2 DEBUG nova.virt.libvirt.vif [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:19:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1607704631',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1607704631',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(26),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1607704631',id=3,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=26,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKyfJeOY45nkQsot9BemY9U/BkbEGNqNZqr3YNUHFqg2f7Al6V+K+e19H2MuiEK+wcXfUJg+H8qbMpV33FaRnqJABUBtXxvPVSB7W9ittbX2JkYChVSSogHDOqGnTjFf2A==',key_name='tempest-keypair-67388201',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:19:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ab0927c5a0e0424fbcde0133feab6f16',ramdisk_id='',reservation_id='r-aso5e8pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:19:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1037493f2f4c4e768c46e477c6183cd4',uuid=2cf00828-84e1-410d-8acb-d94a197cc8ea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "address": "fa:16:3e:74:76:a0", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71dc92e6-ed", "ovs_interfaceid": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.174 2 DEBUG nova.network.os_vif_util [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converting VIF {"id": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "address": "fa:16:3e:74:76:a0", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71dc92e6-ed", "ovs_interfaceid": "71dc92e6-edbd-4240-ae33-d3cc5b361002", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.176 2 DEBUG nova.network.os_vif_util [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:76:a0,bridge_name='br-int',has_traffic_filtering=True,id=71dc92e6-edbd-4240-ae33-d3cc5b361002,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71dc92e6-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.177 2 DEBUG os_vif [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:76:a0,bridge_name='br-int',has_traffic_filtering=True,id=71dc92e6-edbd-4240-ae33-d3cc5b361002,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71dc92e6-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:19:49 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[277158]: [NOTICE]   (277162) : haproxy version is 2.8.14-c23fe91
Oct  2 04:19:49 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[277158]: [NOTICE]   (277162) : path to executable is /usr/sbin/haproxy
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:49 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[277158]: [WARNING]  (277162) : Exiting Master process...
Oct  2 04:19:49 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[277158]: [WARNING]  (277162) : Exiting Master process...
Oct  2 04:19:49 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[277158]: [ALERT]    (277162) : Current worker (277164) exited with code 143 (Terminated)
Oct  2 04:19:49 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[277158]: [WARNING]  (277162) : All workers exited. Exiting... (0)
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71dc92e6-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:49 np0005465604 systemd[1]: libpod-0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e.scope: Deactivated successfully.
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.195 2 INFO os_vif [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:76:a0,bridge_name='br-int',has_traffic_filtering=True,id=71dc92e6-edbd-4240-ae33-d3cc5b361002,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71dc92e6-ed')#033[00m
Oct  2 04:19:49 np0005465604 podman[277649]: 2025-10-02 08:19:49.201018045 +0000 UTC m=+0.087807169 container died 0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 04:19:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e-userdata-shm.mount: Deactivated successfully.
Oct  2 04:19:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b67d5a1e0423db810507479b051f6832e88f90e33679f204993a2e224c5ccb80-merged.mount: Deactivated successfully.
Oct  2 04:19:49 np0005465604 podman[277649]: 2025-10-02 08:19:49.253021713 +0000 UTC m=+0.139810857 container cleanup 0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:19:49 np0005465604 systemd[1]: libpod-conmon-0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e.scope: Deactivated successfully.
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.287 2 DEBUG nova.compute.manager [req-fffbe86d-938a-4b98-8efc-f46293c0bb64 req-2f7d634c-1324-48f7-b9ad-8959f2ef0a39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Received event network-vif-unplugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.287 2 DEBUG oslo_concurrency.lockutils [req-fffbe86d-938a-4b98-8efc-f46293c0bb64 req-2f7d634c-1324-48f7-b9ad-8959f2ef0a39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.287 2 DEBUG oslo_concurrency.lockutils [req-fffbe86d-938a-4b98-8efc-f46293c0bb64 req-2f7d634c-1324-48f7-b9ad-8959f2ef0a39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.288 2 DEBUG oslo_concurrency.lockutils [req-fffbe86d-938a-4b98-8efc-f46293c0bb64 req-2f7d634c-1324-48f7-b9ad-8959f2ef0a39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.288 2 DEBUG nova.compute.manager [req-fffbe86d-938a-4b98-8efc-f46293c0bb64 req-2f7d634c-1324-48f7-b9ad-8959f2ef0a39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] No waiting events found dispatching network-vif-unplugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.288 2 DEBUG nova.compute.manager [req-fffbe86d-938a-4b98-8efc-f46293c0bb64 req-2f7d634c-1324-48f7-b9ad-8959f2ef0a39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Received event network-vif-unplugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:19:49 np0005465604 podman[277703]: 2025-10-02 08:19:49.34558747 +0000 UTC m=+0.059636208 container remove 0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:19:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:49.362 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d8704794-5570-4202-8be4-a28838bb59cb]: (4, ('Thu Oct  2 08:19:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a (0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e)\n0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e\nThu Oct  2 08:19:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a (0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e)\n0d709c658420bd74874ba6d6075efa896b48e618d345fe997ce78422e91f858e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:49.366 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[58da35fe-21bb-4078-874d-6ae096d3ed7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:49.367 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9847725b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:49 np0005465604 kernel: tap9847725b-00: left promiscuous mode
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:49.378 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[28a40913-2406-4cb7-9f0e-0557281c8245]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:49.409 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd2c5b6-47f0-41ac-a85a-8551f9ec6cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:49.410 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c25e16c-e868-4d68-8e3a-849f4d28912e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:49.437 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf394150-63d5-4d39-a470-224758cf1ad4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405064, 'reachable_time': 27362, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277718, 'error': None, 'target': 'ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:49 np0005465604 systemd[1]: run-netns-ovnmeta\x2d9847725b\x2d0398\x2d4587\x2dbb8e\x2d45200e1bbb6a.mount: Deactivated successfully.
Oct  2 04:19:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:49.458 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:19:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:19:49.459 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[33e8d597-cd26-4c8a-8da6-ce121ce3757d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.588 2 INFO nova.virt.libvirt.driver [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Deleting instance files /var/lib/nova/instances/2cf00828-84e1-410d-8acb-d94a197cc8ea_del#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.589 2 INFO nova.virt.libvirt.driver [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Deletion of /var/lib/nova/instances/2cf00828-84e1-410d-8acb-d94a197cc8ea_del complete#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.639 2 INFO nova.compute.manager [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.639 2 DEBUG oslo.service.loopingcall [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.640 2 DEBUG nova.compute.manager [-] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:19:49 np0005465604 nova_compute[260603]: 2025-10-02 08:19:49.640 2 DEBUG nova.network.neutron [-] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:19:50 np0005465604 nova_compute[260603]: 2025-10-02 08:19:50.495 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393175.4939303, 2d021418-a2ed-45c1-8a5a-7aa794929fa8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:19:50 np0005465604 nova_compute[260603]: 2025-10-02 08:19:50.496 2 INFO nova.compute.manager [-] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:19:50 np0005465604 nova_compute[260603]: 2025-10-02 08:19:50.521 2 DEBUG nova.compute.manager [None req-45a7cb16-0bbf-46b1-b70f-96c8e5474bcc - - - - - -] [instance: 2d021418-a2ed-45c1-8a5a-7aa794929fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 167 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct  2 04:19:50 np0005465604 nova_compute[260603]: 2025-10-02 08:19:50.856 2 INFO nova.compute.manager [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Rebuilding instance#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.096 2 DEBUG nova.objects.instance [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.115 2 DEBUG nova.compute.manager [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.176 2 DEBUG nova.objects.instance [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lazy-loading 'pci_requests' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.192 2 DEBUG nova.objects.instance [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lazy-loading 'pci_devices' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.204 2 DEBUG nova.objects.instance [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lazy-loading 'resources' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.216 2 DEBUG nova.objects.instance [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lazy-loading 'migration_context' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.230 2 DEBUG nova.objects.instance [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.234 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.252 2 DEBUG nova.network.neutron [-] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.272 2 INFO nova.compute.manager [-] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Took 1.63 seconds to deallocate network for instance.#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.311 2 DEBUG oslo_concurrency.lockutils [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.312 2 DEBUG oslo_concurrency.lockutils [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.400 2 DEBUG oslo_concurrency.processutils [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.455 2 DEBUG nova.compute.manager [req-b2ce57b1-f63f-4fa7-b3d1-9af45b585931 req-a2e6ed2e-0d77-4566-b90f-9831ada906bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Received event network-vif-plugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.456 2 DEBUG oslo_concurrency.lockutils [req-b2ce57b1-f63f-4fa7-b3d1-9af45b585931 req-a2e6ed2e-0d77-4566-b90f-9831ada906bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.456 2 DEBUG oslo_concurrency.lockutils [req-b2ce57b1-f63f-4fa7-b3d1-9af45b585931 req-a2e6ed2e-0d77-4566-b90f-9831ada906bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.457 2 DEBUG oslo_concurrency.lockutils [req-b2ce57b1-f63f-4fa7-b3d1-9af45b585931 req-a2e6ed2e-0d77-4566-b90f-9831ada906bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.457 2 DEBUG nova.compute.manager [req-b2ce57b1-f63f-4fa7-b3d1-9af45b585931 req-a2e6ed2e-0d77-4566-b90f-9831ada906bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] No waiting events found dispatching network-vif-plugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.457 2 WARNING nova.compute.manager [req-b2ce57b1-f63f-4fa7-b3d1-9af45b585931 req-a2e6ed2e-0d77-4566-b90f-9831ada906bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Received unexpected event network-vif-plugged-71dc92e6-edbd-4240-ae33-d3cc5b361002 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.458 2 DEBUG nova.compute.manager [req-b2ce57b1-f63f-4fa7-b3d1-9af45b585931 req-a2e6ed2e-0d77-4566-b90f-9831ada906bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Received event network-vif-deleted-71dc92e6-edbd-4240-ae33-d3cc5b361002 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:19:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3473288453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.881 2 DEBUG oslo_concurrency.processutils [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.891 2 DEBUG nova.compute.provider_tree [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:19:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.929 2 DEBUG nova.scheduler.client.report [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.962 2 DEBUG oslo_concurrency.lockutils [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:51 np0005465604 nova_compute[260603]: 2025-10-02 08:19:51.989 2 INFO nova.scheduler.client.report [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Deleted allocations for instance 2cf00828-84e1-410d-8acb-d94a197cc8ea#033[00m
Oct  2 04:19:52 np0005465604 nova_compute[260603]: 2025-10-02 08:19:52.077 2 DEBUG oslo_concurrency.lockutils [None req-2b5b10a9-1928-430c-a0a4-062f2bee656a 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "2cf00828-84e1-410d-8acb-d94a197cc8ea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:52 np0005465604 nova_compute[260603]: 2025-10-02 08:19:52.527 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "7df7d893-6345-4445-8743-8d552b61a221" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:52 np0005465604 nova_compute[260603]: 2025-10-02 08:19:52.527 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:52 np0005465604 nova_compute[260603]: 2025-10-02 08:19:52.539 2 DEBUG nova.compute.manager [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:19:52 np0005465604 nova_compute[260603]: 2025-10-02 08:19:52.602 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:52 np0005465604 nova_compute[260603]: 2025-10-02 08:19:52.603 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:52 np0005465604 nova_compute[260603]: 2025-10-02 08:19:52.608 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:19:52 np0005465604 nova_compute[260603]: 2025-10-02 08:19:52.609 2 INFO nova.compute.claims [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:19:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 114 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Oct  2 04:19:52 np0005465604 nova_compute[260603]: 2025-10-02 08:19:52.826 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:19:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/566938408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.244 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.250 2 DEBUG nova.compute.provider_tree [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.269 2 DEBUG nova.scheduler.client.report [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.311 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.312 2 DEBUG nova.compute.manager [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.362 2 DEBUG nova.compute.manager [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.362 2 DEBUG nova.network.neutron [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.387 2 INFO nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.428 2 DEBUG nova.compute.manager [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.527 2 DEBUG nova.compute.manager [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.529 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.529 2 INFO nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Creating image(s)#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.549 2 DEBUG nova.storage.rbd_utils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 7df7d893-6345-4445-8743-8d552b61a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.580 2 DEBUG nova.storage.rbd_utils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 7df7d893-6345-4445-8743-8d552b61a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.607 2 DEBUG nova.storage.rbd_utils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 7df7d893-6345-4445-8743-8d552b61a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.611 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.638 2 DEBUG nova.policy [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1037493f2f4c4e768c46e477c6183cd4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ab0927c5a0e0424fbcde0133feab6f16', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.697 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.698 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.699 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.700 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.732 2 DEBUG nova.storage.rbd_utils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 7df7d893-6345-4445-8743-8d552b61a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:53 np0005465604 nova_compute[260603]: 2025-10-02 08:19:53.736 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7df7d893-6345-4445-8743-8d552b61a221_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.054 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7df7d893-6345-4445-8743-8d552b61a221_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.154 2 DEBUG nova.storage.rbd_utils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] resizing rbd image 7df7d893-6345-4445-8743-8d552b61a221_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.289 2 DEBUG nova.objects.instance [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lazy-loading 'migration_context' on Instance uuid 7df7d893-6345-4445-8743-8d552b61a221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.339 2 DEBUG nova.storage.rbd_utils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 7df7d893-6345-4445-8743-8d552b61a221_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.379 2 DEBUG nova.storage.rbd_utils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 7df7d893-6345-4445-8743-8d552b61a221_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.386 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.387 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.388 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.436 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.439 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.513 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.515 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.535 2 DEBUG nova.storage.rbd_utils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 7df7d893-6345-4445-8743-8d552b61a221_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.539 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 7df7d893-6345-4445-8743-8d552b61a221_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 152 op/s
Oct  2 04:19:54 np0005465604 nova_compute[260603]: 2025-10-02 08:19:54.918 2 DEBUG nova.network.neutron [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Successfully created port: 49cabeb7-5aaf-4b9a-a489-57603e5c2441 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.460 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 7df7d893-6345-4445-8743-8d552b61a221_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.921s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.571 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.572 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Ensure instance console log exists: /var/lib/nova/instances/7df7d893-6345-4445-8743-8d552b61a221/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.572 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.573 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.573 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.656 2 DEBUG nova.network.neutron [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Successfully updated port: 49cabeb7-5aaf-4b9a-a489-57603e5c2441 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.674 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "refresh_cache-7df7d893-6345-4445-8743-8d552b61a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.674 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquired lock "refresh_cache-7df7d893-6345-4445-8743-8d552b61a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.674 2 DEBUG nova.network.neutron [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.860 2 DEBUG nova.compute.manager [req-bc13b484-ad5c-4eae-99db-c14a0e6c0b21 req-4e16e438-15b8-41ad-981c-c063b9f9bd7a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Received event network-changed-49cabeb7-5aaf-4b9a-a489-57603e5c2441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.860 2 DEBUG nova.compute.manager [req-bc13b484-ad5c-4eae-99db-c14a0e6c0b21 req-4e16e438-15b8-41ad-981c-c063b9f9bd7a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Refreshing instance network info cache due to event network-changed-49cabeb7-5aaf-4b9a-a489-57603e5c2441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:19:55 np0005465604 nova_compute[260603]: 2025-10-02 08:19:55.860 2 DEBUG oslo_concurrency.lockutils [req-bc13b484-ad5c-4eae-99db-c14a0e6c0b21 req-4e16e438-15b8-41ad-981c-c063b9f9bd7a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7df7d893-6345-4445-8743-8d552b61a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:19:56 np0005465604 nova_compute[260603]: 2025-10-02 08:19:56.084 2 DEBUG nova.network.neutron [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:19:56 np0005465604 nova_compute[260603]: 2025-10-02 08:19:56.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct  2 04:19:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.393 2 DEBUG nova.network.neutron [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Updating instance_info_cache with network_info: [{"id": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "address": "fa:16:3e:47:41:3d", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49cabeb7-5a", "ovs_interfaceid": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.415 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Releasing lock "refresh_cache-7df7d893-6345-4445-8743-8d552b61a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.415 2 DEBUG nova.compute.manager [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Instance network_info: |[{"id": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "address": "fa:16:3e:47:41:3d", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49cabeb7-5a", "ovs_interfaceid": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.416 2 DEBUG oslo_concurrency.lockutils [req-bc13b484-ad5c-4eae-99db-c14a0e6c0b21 req-4e16e438-15b8-41ad-981c-c063b9f9bd7a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7df7d893-6345-4445-8743-8d552b61a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.417 2 DEBUG nova.network.neutron [req-bc13b484-ad5c-4eae-99db-c14a0e6c0b21 req-4e16e438-15b8-41ad-981c-c063b9f9bd7a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Refreshing network info cache for port 49cabeb7-5aaf-4b9a-a489-57603e5c2441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.422 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Start _get_guest_xml network_info=[{"id": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "address": "fa:16:3e:47:41:3d", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49cabeb7-5a", "ovs_interfaceid": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [{'size': 1, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.428 2 WARNING nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.433 2 DEBUG nova.virt.libvirt.host [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.434 2 DEBUG nova.virt.libvirt.host [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.445 2 DEBUG nova.virt.libvirt.host [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.446 2 DEBUG nova.virt.libvirt.host [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.446 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.447 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:19:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='223786332',id=25,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-1489482703',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.448 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.448 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.449 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.449 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.449 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.450 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.450 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.451 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.451 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.451 2 DEBUG nova.virt.hardware [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.457 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:19:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2060739331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.945 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:57 np0005465604 nova_compute[260603]: 2025-10-02 08:19:57.952 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2064014865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.415 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.436 2 DEBUG nova.storage.rbd_utils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 7df7d893-6345-4445-8743-8d552b61a221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.439 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 136 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 168 op/s
Oct  2 04:19:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:19:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280768081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.847 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.850 2 DEBUG nova.virt.libvirt.vif [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1365874515',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1365874515',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1365874515',id=7,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKyfJeOY45nkQsot9BemY9U/BkbEGNqNZqr3YNUHFqg2f7Al6V+K+e19H2MuiEK+wcXfUJg+H8qbMpV33FaRnqJABUBtXxvPVSB7W9ittbX2JkYChVSSogHDOqGnTjFf2A==',key_name='tempest-keypair-67388201',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ab0927c5a0e0424fbcde0133feab6f16',ramdisk_id='',reservation_id='r-9hr5yju6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:19:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1037493f2f4c4e768c46e477c6183cd4',uuid=7df7d893-6345-4445-8743-8d552b61a221,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "address": "fa:16:3e:47:41:3d", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49cabeb7-5a", "ovs_interfaceid": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.851 2 DEBUG nova.network.os_vif_util [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converting VIF {"id": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "address": "fa:16:3e:47:41:3d", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49cabeb7-5a", "ovs_interfaceid": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.852 2 DEBUG nova.network.os_vif_util [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:41:3d,bridge_name='br-int',has_traffic_filtering=True,id=49cabeb7-5aaf-4b9a-a489-57603e5c2441,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49cabeb7-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.854 2 DEBUG nova.objects.instance [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7df7d893-6345-4445-8743-8d552b61a221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.871 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  <uuid>7df7d893-6345-4445-8743-8d552b61a221</uuid>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  <name>instance-00000007</name>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1365874515</nova:name>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:19:57</nova:creationTime>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-1489482703">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <nova:ephemeral>1</nova:ephemeral>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <nova:user uuid="1037493f2f4c4e768c46e477c6183cd4">tempest-ServersWithSpecificFlavorTestJSON-1074733883-project-member</nova:user>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <nova:project uuid="ab0927c5a0e0424fbcde0133feab6f16">tempest-ServersWithSpecificFlavorTestJSON-1074733883</nova:project>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <nova:port uuid="49cabeb7-5aaf-4b9a-a489-57603e5c2441">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <entry name="serial">7df7d893-6345-4445-8743-8d552b61a221</entry>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <entry name="uuid">7df7d893-6345-4445-8743-8d552b61a221</entry>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7df7d893-6345-4445-8743-8d552b61a221_disk">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7df7d893-6345-4445-8743-8d552b61a221_disk.eph0">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <target dev="vdb" bus="virtio"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7df7d893-6345-4445-8743-8d552b61a221_disk.config">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:47:41:3d"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <target dev="tap49cabeb7-5a"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/7df7d893-6345-4445-8743-8d552b61a221/console.log" append="off"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:19:58 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:19:58 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:19:58 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:19:58 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.873 2 DEBUG nova.compute.manager [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Preparing to wait for external event network-vif-plugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.874 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "7df7d893-6345-4445-8743-8d552b61a221-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.874 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.874 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.875 2 DEBUG nova.virt.libvirt.vif [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1365874515',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1365874515',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1365874515',id=7,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKyfJeOY45nkQsot9BemY9U/BkbEGNqNZqr3YNUHFqg2f7Al6V+K+e19H2MuiEK+wcXfUJg+H8qbMpV33FaRnqJABUBtXxvPVSB7W9ittbX2JkYChVSSogHDOqGnTjFf2A==',key_name='tempest-keypair-67388201',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ab0927c5a0e0424fbcde0133feab6f16',ramdisk_id='',reservation_id='r-9hr5yju6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:19:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1037493f2f4c4e768c46e477c6183cd4',uuid=7df7d893-6345-4445-8743-8d552b61a221,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "address": "fa:16:3e:47:41:3d", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49cabeb7-5a", "ovs_interfaceid": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.875 2 DEBUG nova.network.os_vif_util [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converting VIF {"id": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "address": "fa:16:3e:47:41:3d", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49cabeb7-5a", "ovs_interfaceid": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.876 2 DEBUG nova.network.os_vif_util [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:41:3d,bridge_name='br-int',has_traffic_filtering=True,id=49cabeb7-5aaf-4b9a-a489-57603e5c2441,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49cabeb7-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.876 2 DEBUG os_vif [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:41:3d,bridge_name='br-int',has_traffic_filtering=True,id=49cabeb7-5aaf-4b9a-a489-57603e5c2441,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49cabeb7-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.878 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.881 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49cabeb7-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.882 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap49cabeb7-5a, col_values=(('external_ids', {'iface-id': '49cabeb7-5aaf-4b9a-a489-57603e5c2441', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:41:3d', 'vm-uuid': '7df7d893-6345-4445-8743-8d552b61a221'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:58 np0005465604 NetworkManager[45129]: <info>  [1759393198.8848] manager: (tap49cabeb7-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.893 2 INFO os_vif [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:41:3d,bridge_name='br-int',has_traffic_filtering=True,id=49cabeb7-5aaf-4b9a-a489-57603e5c2441,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49cabeb7-5a')#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.941 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.942 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.942 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.942 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] No VIF found with MAC fa:16:3e:47:41:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.943 2 INFO nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Using config drive#033[00m
Oct  2 04:19:58 np0005465604 nova_compute[260603]: 2025-10-02 08:19:58.966 2 DEBUG nova.storage.rbd_utils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 7df7d893-6345-4445-8743-8d552b61a221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:59 np0005465604 nova_compute[260603]: 2025-10-02 08:19:59.685 2 INFO nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Creating config drive at /var/lib/nova/instances/7df7d893-6345-4445-8743-8d552b61a221/disk.config#033[00m
Oct  2 04:19:59 np0005465604 nova_compute[260603]: 2025-10-02 08:19:59.694 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7df7d893-6345-4445-8743-8d552b61a221/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps_a3rd1m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:19:59 np0005465604 nova_compute[260603]: 2025-10-02 08:19:59.825 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7df7d893-6345-4445-8743-8d552b61a221/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps_a3rd1m" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:19:59 np0005465604 nova_compute[260603]: 2025-10-02 08:19:59.859 2 DEBUG nova.storage.rbd_utils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] rbd image 7df7d893-6345-4445-8743-8d552b61a221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:19:59 np0005465604 nova_compute[260603]: 2025-10-02 08:19:59.864 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7df7d893-6345-4445-8743-8d552b61a221/disk.config 7df7d893-6345-4445-8743-8d552b61a221_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.018 2 DEBUG oslo_concurrency.processutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7df7d893-6345-4445-8743-8d552b61a221/disk.config 7df7d893-6345-4445-8743-8d552b61a221_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.020 2 INFO nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Deleting local config drive /var/lib/nova/instances/7df7d893-6345-4445-8743-8d552b61a221/disk.config because it was imported into RBD.#033[00m
Oct  2 04:20:00 np0005465604 NetworkManager[45129]: <info>  [1759393200.0802] manager: (tap49cabeb7-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Oct  2 04:20:00 np0005465604 kernel: tap49cabeb7-5a: entered promiscuous mode
Oct  2 04:20:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:20:00Z|00035|binding|INFO|Claiming lport 49cabeb7-5aaf-4b9a-a489-57603e5c2441 for this chassis.
Oct  2 04:20:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:20:00Z|00036|binding|INFO|49cabeb7-5aaf-4b9a-a489-57603e5c2441: Claiming fa:16:3e:47:41:3d 10.100.0.12
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.095 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:41:3d 10.100.0.12'], port_security=['fa:16:3e:47:41:3d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7df7d893-6345-4445-8743-8d552b61a221', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9847725b-0398-4587-bb8e-45200e1bbb6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ab0927c5a0e0424fbcde0133feab6f16', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ed81274-9cd4-4204-8473-93dd6f65ff51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9128983a-ebb5-4d5e-81a5-171931805db2, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=49cabeb7-5aaf-4b9a-a489-57603e5c2441) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.097 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 49cabeb7-5aaf-4b9a-a489-57603e5c2441 in datapath 9847725b-0398-4587-bb8e-45200e1bbb6a bound to our chassis#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.098 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9847725b-0398-4587-bb8e-45200e1bbb6a#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.115 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "e4ce9040-1e20-4d58-b967-21b17e817aea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.115 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "e4ce9040-1e20-4d58-b967-21b17e817aea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.115 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3e7274-0329-4cf1-ae9a-6d237c1ee2ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.116 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9847725b-01 in ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.117 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9847725b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.117 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b144291d-9d1e-478f-b606-7761adfb4958]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.118 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[111588c4-a6e8-4da6-b167-be97844f3a45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:20:00Z|00037|binding|INFO|Setting lport 49cabeb7-5aaf-4b9a-a489-57603e5c2441 ovn-installed in OVS
Oct  2 04:20:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:20:00Z|00038|binding|INFO|Setting lport 49cabeb7-5aaf-4b9a-a489-57603e5c2441 up in Southbound
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.133 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[67944263-adcf-4e03-9a96-8db284162f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.136 2 DEBUG nova.compute.manager [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:20:00 np0005465604 systemd-machined[214636]: New machine qemu-7-instance-00000007.
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.158 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab57ccd-8a1b-4b33-ad26-ee5e4f89f88d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct  2 04:20:00 np0005465604 systemd-udevd[278248]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.193 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffb39fa-4142-4aea-b37e-589ad6441832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 NetworkManager[45129]: <info>  [1759393200.1944] device (tap49cabeb7-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:20:00 np0005465604 NetworkManager[45129]: <info>  [1759393200.1955] device (tap49cabeb7-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:20:00 np0005465604 podman[278219]: 2025-10-02 08:20:00.196626383 +0000 UTC m=+0.079798008 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.203 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac79918-0941-4480-b3d3-11dcbc9e9ad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 NetworkManager[45129]: <info>  [1759393200.2053] manager: (tap9847725b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.210 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.210 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.224 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.224 2 INFO nova.compute.claims [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.244 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7c8181bc-7d85-4ee9-882b-11a8232f4c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.247 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4c48a2-e21e-4cff-ad95-fde89ae5ed6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 podman[278217]: 2025-10-02 08:20:00.264098765 +0000 UTC m=+0.142269414 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:20:00 np0005465604 NetworkManager[45129]: <info>  [1759393200.2719] device (tap9847725b-00): carrier: link connected
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.279 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[91aa6851-5234-4c29-bb96-4fc2bf0cdcb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.298 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b02d168-0439-4f95-a5f7-71458e1d1261]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9847725b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:c7:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407887, 'reachable_time': 44487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278297, 'error': None, 'target': 'ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.315 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd89be47-fd9a-4c46-b28b-e21abae0bcdc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:c797'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407887, 'tstamp': 407887}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278298, 'error': None, 'target': 'ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.319 2 DEBUG nova.network.neutron [req-bc13b484-ad5c-4eae-99db-c14a0e6c0b21 req-4e16e438-15b8-41ad-981c-c063b9f9bd7a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Updated VIF entry in instance network info cache for port 49cabeb7-5aaf-4b9a-a489-57603e5c2441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.319 2 DEBUG nova.network.neutron [req-bc13b484-ad5c-4eae-99db-c14a0e6c0b21 req-4e16e438-15b8-41ad-981c-c063b9f9bd7a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Updating instance_info_cache with network_info: [{"id": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "address": "fa:16:3e:47:41:3d", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49cabeb7-5a", "ovs_interfaceid": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.329 2 DEBUG nova.compute.manager [req-7198b147-d01c-411f-9699-248a294a3eb7 req-c7a74d4e-f757-43ce-a74a-c825c39cc444 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Received event network-vif-plugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.329 2 DEBUG oslo_concurrency.lockutils [req-7198b147-d01c-411f-9699-248a294a3eb7 req-c7a74d4e-f757-43ce-a74a-c825c39cc444 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7df7d893-6345-4445-8743-8d552b61a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.329 2 DEBUG oslo_concurrency.lockutils [req-7198b147-d01c-411f-9699-248a294a3eb7 req-c7a74d4e-f757-43ce-a74a-c825c39cc444 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.329 2 DEBUG oslo_concurrency.lockutils [req-7198b147-d01c-411f-9699-248a294a3eb7 req-c7a74d4e-f757-43ce-a74a-c825c39cc444 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.330 2 DEBUG nova.compute.manager [req-7198b147-d01c-411f-9699-248a294a3eb7 req-c7a74d4e-f757-43ce-a74a-c825c39cc444 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Processing event network-vif-plugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.333 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1061fd80-b8e0-4fc2-9465-92876829cab1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9847725b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:c7:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407887, 'reachable_time': 44487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278299, 'error': None, 'target': 'ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.342 2 DEBUG oslo_concurrency.lockutils [req-bc13b484-ad5c-4eae-99db-c14a0e6c0b21 req-4e16e438-15b8-41ad-981c-c063b9f9bd7a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7df7d893-6345-4445-8743-8d552b61a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.357 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.370 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[002c5d1a-7b1b-4947-85b4-f4b35191207b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.461 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df1b5ec0-bab3-4fe7-87fb-9933ef0503c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.463 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9847725b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.463 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.464 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9847725b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:20:00 np0005465604 NetworkManager[45129]: <info>  [1759393200.4673] manager: (tap9847725b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:00 np0005465604 kernel: tap9847725b-00: entered promiscuous mode
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.472 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9847725b-00, col_values=(('external_ids', {'iface-id': '2eaa21e0-aae3-431e-8f86-4bd51c7d127c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:20:00Z|00039|binding|INFO|Releasing lport 2eaa21e0-aae3-431e-8f86-4bd51c7d127c from this chassis (sb_readonly=0)
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.495 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9847725b-0398-4587-bb8e-45200e1bbb6a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9847725b-0398-4587-bb8e-45200e1bbb6a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.496 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e8f644-f013-47d6-aea1-e19c3949eac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.497 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-9847725b-0398-4587-bb8e-45200e1bbb6a
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/9847725b-0398-4587-bb8e-45200e1bbb6a.pid.haproxy
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 9847725b-0398-4587-bb8e-45200e1bbb6a
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:20:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:00.502 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a', 'env', 'PROCESS_TAG=haproxy-9847725b-0398-4587-bb8e-45200e1bbb6a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9847725b-0398-4587-bb8e-45200e1bbb6a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:20:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 136 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Oct  2 04:20:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:20:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717339095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.803 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.814 2 DEBUG nova.compute.provider_tree [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.879 2 DEBUG nova.scheduler.client.report [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.901 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.902 2 DEBUG nova.compute.manager [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.947 2 DEBUG nova.compute.manager [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.948 2 DEBUG nova.network.neutron [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:20:00 np0005465604 podman[278413]: 2025-10-02 08:20:00.952564143 +0000 UTC m=+0.068246668 container create eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.971 2 INFO nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:20:00 np0005465604 nova_compute[260603]: 2025-10-02 08:20:00.990 2 DEBUG nova.compute.manager [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:20:01 np0005465604 systemd[1]: Started libpod-conmon-eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b.scope.
Oct  2 04:20:01 np0005465604 podman[278413]: 2025-10-02 08:20:00.915573995 +0000 UTC m=+0.031256550 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:20:01 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:20:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79550457c293babdba5779e3bb9331ebe245c1e6f905e5264f7aaa8e95f6587e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:01 np0005465604 podman[278413]: 2025-10-02 08:20:01.066645323 +0000 UTC m=+0.182327868 container init eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 04:20:01 np0005465604 podman[278413]: 2025-10-02 08:20:01.074478668 +0000 UTC m=+0.190161193 container start eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.095 2 DEBUG nova.compute.manager [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.096 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.097 2 INFO nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Creating image(s)#033[00m
Oct  2 04:20:01 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[278429]: [NOTICE]   (278433) : New worker (278435) forked
Oct  2 04:20:01 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[278429]: [NOTICE]   (278433) : Loading success.
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.115 2 DEBUG nova.storage.rbd_utils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image e4ce9040-1e20-4d58-b967-21b17e817aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.146 2 DEBUG nova.storage.rbd_utils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image e4ce9040-1e20-4d58-b967-21b17e817aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.193 2 DEBUG nova.storage.rbd_utils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image e4ce9040-1e20-4d58-b967-21b17e817aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.200 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.273 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.275 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.276 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.276 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.300 2 DEBUG nova.storage.rbd_utils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image e4ce9040-1e20-4d58-b967-21b17e817aea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.306 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e4ce9040-1e20-4d58-b967-21b17e817aea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.348 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.385 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393201.3851862, 7df7d893-6345-4445-8743-8d552b61a221 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.386 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7df7d893-6345-4445-8743-8d552b61a221] VM Started (Lifecycle Event)#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.389 2 DEBUG nova.compute.manager [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.394 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.398 2 INFO nova.virt.libvirt.driver [-] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Instance spawned successfully.#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.398 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.413 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.422 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.428 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.429 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.429 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.430 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.430 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.431 2 DEBUG nova.virt.libvirt.driver [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.454 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7df7d893-6345-4445-8743-8d552b61a221] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.455 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393201.3880928, 7df7d893-6345-4445-8743-8d552b61a221 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.455 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7df7d893-6345-4445-8743-8d552b61a221] VM Paused (Lifecycle Event)
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.476 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.480 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393201.394015, 7df7d893-6345-4445-8743-8d552b61a221 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.480 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7df7d893-6345-4445-8743-8d552b61a221] VM Resumed (Lifecycle Event)
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.504 2 INFO nova.compute.manager [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Took 7.98 seconds to spawn the instance on the hypervisor.
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.505 2 DEBUG nova.compute.manager [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.506 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.512 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.545 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7df7d893-6345-4445-8743-8d552b61a221] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.572 2 INFO nova.compute.manager [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Took 9.00 seconds to build instance.
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.581 2 DEBUG nova.network.neutron [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.582 2 DEBUG nova.compute.manager [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.591 2 DEBUG oslo_concurrency.lockutils [None req-d99dad34-4a67-4eb3-acbe-76bfb24da0a0 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.621 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e4ce9040-1e20-4d58-b967-21b17e817aea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.698 2 DEBUG nova.storage.rbd_utils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] resizing rbd image e4ce9040-1e20-4d58-b967-21b17e817aea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.810 2 DEBUG nova.objects.instance [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lazy-loading 'migration_context' on Instance uuid e4ce9040-1e20-4d58-b967-21b17e817aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.822 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.822 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Ensure instance console log exists: /var/lib/nova/instances/e4ce9040-1e20-4d58-b967-21b17e817aea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.822 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.823 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.823 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.824 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.828 2 WARNING nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.835 2 DEBUG nova.virt.libvirt.host [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.835 2 DEBUG nova.virt.libvirt.host [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.838 2 DEBUG nova.virt.libvirt.host [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.839 2 DEBUG nova.virt.libvirt.host [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.839 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.839 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.839 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.839 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.840 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.840 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.840 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.840 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.840 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.840 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.840 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.841 2 DEBUG nova.virt.hardware [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:20:01 np0005465604 nova_compute[260603]: 2025-10-02 08:20:01.843 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:20:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:20:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/70580515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.345 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.391 2 DEBUG nova.storage.rbd_utils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image e4ce9040-1e20-4d58-b967-21b17e817aea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.397 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.503 2 DEBUG nova.compute.manager [req-dc553487-61be-4206-b748-7d49cf2dc253 req-b66c2c46-6250-4023-9f8c-dd9509cae355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Received event network-vif-plugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.504 2 DEBUG oslo_concurrency.lockutils [req-dc553487-61be-4206-b748-7d49cf2dc253 req-b66c2c46-6250-4023-9f8c-dd9509cae355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7df7d893-6345-4445-8743-8d552b61a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.505 2 DEBUG oslo_concurrency.lockutils [req-dc553487-61be-4206-b748-7d49cf2dc253 req-b66c2c46-6250-4023-9f8c-dd9509cae355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.505 2 DEBUG oslo_concurrency.lockutils [req-dc553487-61be-4206-b748-7d49cf2dc253 req-b66c2c46-6250-4023-9f8c-dd9509cae355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.506 2 DEBUG nova.compute.manager [req-dc553487-61be-4206-b748-7d49cf2dc253 req-b66c2c46-6250-4023-9f8c-dd9509cae355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] No waiting events found dispatching network-vif-plugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.506 2 WARNING nova.compute.manager [req-dc553487-61be-4206-b748-7d49cf2dc253 req-b66c2c46-6250-4023-9f8c-dd9509cae355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Received unexpected event network-vif-plugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 for instance with vm_state active and task_state None.
Oct  2 04:20:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 183 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 223 op/s
Oct  2 04:20:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:20:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651854345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.861 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.863 2 DEBUG nova.objects.instance [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lazy-loading 'pci_devices' on Instance uuid e4ce9040-1e20-4d58-b967-21b17e817aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.883 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  <uuid>e4ce9040-1e20-4d58-b967-21b17e817aea</uuid>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  <name>instance-00000008</name>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <nova:name>tempest-LiveMigrationNegativeTest-server-525531889</nova:name>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:20:01</nova:creationTime>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:        <nova:user uuid="cfaa38a10d3a475b89a320f1949ed5f0">tempest-LiveMigrationNegativeTest-1366594485-project-member</nova:user>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:        <nova:project uuid="fce4003fcf6d49468ebb0fe22806ab23">tempest-LiveMigrationNegativeTest-1366594485</nova:project>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <entry name="serial">e4ce9040-1e20-4d58-b967-21b17e817aea</entry>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <entry name="uuid">e4ce9040-1e20-4d58-b967-21b17e817aea</entry>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/e4ce9040-1e20-4d58-b967-21b17e817aea_disk">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/e4ce9040-1e20-4d58-b967-21b17e817aea_disk.config">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/e4ce9040-1e20-4d58-b967-21b17e817aea/console.log" append="off"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:20:02 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:20:02 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:20:02 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:20:02 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.948 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.948 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.949 2 INFO nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Using config drive#033[00m
Oct  2 04:20:02 np0005465604 nova_compute[260603]: 2025-10-02 08:20:02.972 2 DEBUG nova.storage.rbd_utils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image e4ce9040-1e20-4d58-b967-21b17e817aea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:03 np0005465604 nova_compute[260603]: 2025-10-02 08:20:03.352 2 INFO nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Creating config drive at /var/lib/nova/instances/e4ce9040-1e20-4d58-b967-21b17e817aea/disk.config#033[00m
Oct  2 04:20:03 np0005465604 nova_compute[260603]: 2025-10-02 08:20:03.360 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4ce9040-1e20-4d58-b967-21b17e817aea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi4hy2r48 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:03 np0005465604 nova_compute[260603]: 2025-10-02 08:20:03.518 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4ce9040-1e20-4d58-b967-21b17e817aea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi4hy2r48" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:03 np0005465604 nova_compute[260603]: 2025-10-02 08:20:03.599 2 DEBUG nova.storage.rbd_utils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image e4ce9040-1e20-4d58-b967-21b17e817aea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:03 np0005465604 nova_compute[260603]: 2025-10-02 08:20:03.606 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4ce9040-1e20-4d58-b967-21b17e817aea/disk.config e4ce9040-1e20-4d58-b967-21b17e817aea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:03 np0005465604 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct  2 04:20:03 np0005465604 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 12.499s CPU time.
Oct  2 04:20:03 np0005465604 systemd-machined[214636]: Machine qemu-6-instance-00000006 terminated.
Oct  2 04:20:03 np0005465604 nova_compute[260603]: 2025-10-02 08:20:03.792 2 DEBUG oslo_concurrency.processutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4ce9040-1e20-4d58-b967-21b17e817aea/disk.config e4ce9040-1e20-4d58-b967-21b17e817aea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:03 np0005465604 nova_compute[260603]: 2025-10-02 08:20:03.793 2 INFO nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Deleting local config drive /var/lib/nova/instances/e4ce9040-1e20-4d58-b967-21b17e817aea/disk.config because it was imported into RBD.#033[00m
Oct  2 04:20:03 np0005465604 nova_compute[260603]: 2025-10-02 08:20:03.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:03 np0005465604 systemd-machined[214636]: New machine qemu-8-instance-00000008.
Oct  2 04:20:03 np0005465604 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.154 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393189.153328, 2cf00828-84e1-410d-8acb-d94a197cc8ea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.155 2 INFO nova.compute.manager [-] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.174 2 DEBUG nova.compute.manager [None req-a63cffd8-d60e-428a-9ae2-201dcb94e7c0 - - - - - -] [instance: 2cf00828-84e1-410d-8acb-d94a197cc8ea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.441 2 INFO nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.446 2 INFO nova.virt.libvirt.driver [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance destroyed successfully.#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.450 2 INFO nova.virt.libvirt.driver [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance destroyed successfully.#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.542 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-19ea7528-6b08-4fe7-8c5b-4d96247bd50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-19ea7528-6b08-4fe7-8c5b-4d96247bd50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.543 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.544 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 215 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 240 op/s
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.811 2 INFO nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Deleting instance files /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_del#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.812 2 INFO nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Deletion of /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_del complete#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.943 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.944 2 INFO nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Creating image(s)#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.968 2 DEBUG nova.storage.rbd_utils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:04 np0005465604 nova_compute[260603]: 2025-10-02 08:20:04.997 2 DEBUG nova.storage.rbd_utils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.025 2 DEBUG nova.storage.rbd_utils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.030 2 DEBUG oslo_concurrency.lockutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.031 2 DEBUG oslo_concurrency.lockutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.036 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.192 2 DEBUG nova.compute.manager [req-aafdfd8a-83aa-4c12-8e6a-18ed05945f37 req-2d193c71-291c-4ae7-881e-5f7277df8931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Received event network-changed-49cabeb7-5aaf-4b9a-a489-57603e5c2441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.192 2 DEBUG nova.compute.manager [req-aafdfd8a-83aa-4c12-8e6a-18ed05945f37 req-2d193c71-291c-4ae7-881e-5f7277df8931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Refreshing instance network info cache due to event network-changed-49cabeb7-5aaf-4b9a-a489-57603e5c2441. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.192 2 DEBUG oslo_concurrency.lockutils [req-aafdfd8a-83aa-4c12-8e6a-18ed05945f37 req-2d193c71-291c-4ae7-881e-5f7277df8931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7df7d893-6345-4445-8743-8d552b61a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.193 2 DEBUG oslo_concurrency.lockutils [req-aafdfd8a-83aa-4c12-8e6a-18ed05945f37 req-2d193c71-291c-4ae7-881e-5f7277df8931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7df7d893-6345-4445-8743-8d552b61a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.193 2 DEBUG nova.network.neutron [req-aafdfd8a-83aa-4c12-8e6a-18ed05945f37 req-2d193c71-291c-4ae7-881e-5f7277df8931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Refreshing network info cache for port 49cabeb7-5aaf-4b9a-a489-57603e5c2441 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.232 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393205.23182, e4ce9040-1e20-4d58-b967-21b17e817aea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.233 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.234 2 DEBUG nova.compute.manager [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.234 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.237 2 INFO nova.virt.libvirt.driver [-] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Instance spawned successfully.#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.237 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.253 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.256 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.256 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.257 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.257 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.258 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.258 2 DEBUG nova.virt.libvirt.driver [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.261 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.288 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.288 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393205.232196, e4ce9040-1e20-4d58-b967-21b17e817aea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.288 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] VM Started (Lifecycle Event)#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.312 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.315 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.325 2 DEBUG nova.virt.libvirt.imagebackend [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/eeb8c9a4-e143-4b44-a997-e04d544bc537/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/eeb8c9a4-e143-4b44-a997-e04d544bc537/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.331 2 INFO nova.compute.manager [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Took 4.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.331 2 DEBUG nova.compute.manager [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.334 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.394 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.398 2 INFO nova.compute.manager [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Took 5.22 seconds to build instance.#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.409 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-19ea7528-6b08-4fe7-8c5b-4d96247bd50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.409 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:20:05 np0005465604 nova_compute[260603]: 2025-10-02 08:20:05.418 2 DEBUG oslo_concurrency.lockutils [None req-3571973e-30c8-4573-a21b-375f0379cd58 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "e4ce9040-1e20-4d58-b967-21b17e817aea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:06 np0005465604 podman[278864]: 2025-10-02 08:20:06.058345682 +0000 UTC m=+0.123169686 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.471 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.546 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28.part --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.548 2 DEBUG nova.virt.images [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] eeb8c9a4-e143-4b44-a997-e04d544bc537 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.550 2 DEBUG nova.privsep.utils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.551 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28.part /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 215 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.7 MiB/s wr, 187 op/s
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.745 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28.part /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28.converted" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.753 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.786430) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393206786489, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2089, "num_deletes": 251, "total_data_size": 3372797, "memory_usage": 3429904, "flush_reason": "Manual Compaction"}
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393206800641, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3284200, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20980, "largest_seqno": 23068, "table_properties": {"data_size": 3274857, "index_size": 5837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19406, "raw_average_key_size": 20, "raw_value_size": 3255908, "raw_average_value_size": 3384, "num_data_blocks": 264, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759392997, "oldest_key_time": 1759392997, "file_creation_time": 1759393206, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 14241 microseconds, and 6476 cpu microseconds.
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.800680) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3284200 bytes OK
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.800699) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.802052) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.802064) EVENT_LOG_v1 {"time_micros": 1759393206802061, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.802080) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3364007, prev total WAL file size 3364007, number of live WAL files 2.
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.802909) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3207KB)], [50(7482KB)]
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393206802933, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10945980, "oldest_snapshot_seqno": -1}
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.833 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28.converted --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.835 2 DEBUG oslo_concurrency.lockutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4773 keys, 9199439 bytes, temperature: kUnknown
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393206838904, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9199439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9164850, "index_size": 21551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 116902, "raw_average_key_size": 24, "raw_value_size": 9075972, "raw_average_value_size": 1901, "num_data_blocks": 906, "num_entries": 4773, "num_filter_entries": 4773, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759393206, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.839067) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9199439 bytes
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.840381) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 303.9 rd, 255.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.3 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5291, records dropped: 518 output_compression: NoCompression
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.840394) EVENT_LOG_v1 {"time_micros": 1759393206840388, "job": 26, "event": "compaction_finished", "compaction_time_micros": 36022, "compaction_time_cpu_micros": 18028, "output_level": 6, "num_output_files": 1, "total_output_size": 9199439, "num_input_records": 5291, "num_output_records": 4773, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393206840902, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393206841956, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.802833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.842008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.842015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.842018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.842022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:20:06.842025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.871 2 DEBUG nova.storage.rbd_utils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:06 np0005465604 nova_compute[260603]: 2025-10-02 08:20:06.876 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.145 2 DEBUG nova.network.neutron [req-aafdfd8a-83aa-4c12-8e6a-18ed05945f37 req-2d193c71-291c-4ae7-881e-5f7277df8931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Updated VIF entry in instance network info cache for port 49cabeb7-5aaf-4b9a-a489-57603e5c2441. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.147 2 DEBUG nova.network.neutron [req-aafdfd8a-83aa-4c12-8e6a-18ed05945f37 req-2d193c71-291c-4ae7-881e-5f7277df8931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Updating instance_info_cache with network_info: [{"id": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "address": "fa:16:3e:47:41:3d", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49cabeb7-5a", "ovs_interfaceid": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.174 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.211 2 DEBUG oslo_concurrency.lockutils [req-aafdfd8a-83aa-4c12-8e6a-18ed05945f37 req-2d193c71-291c-4ae7-881e-5f7277df8931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7df7d893-6345-4445-8743-8d552b61a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.252 2 DEBUG nova.storage.rbd_utils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] resizing rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.374 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.375 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Ensure instance console log exists: /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.375 2 DEBUG oslo_concurrency.lockutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.376 2 DEBUG oslo_concurrency.lockutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.376 2 DEBUG oslo_concurrency.lockutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.379 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.383 2 WARNING nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.390 2 DEBUG nova.virt.libvirt.host [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.390 2 DEBUG nova.virt.libvirt.host [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.395 2 DEBUG nova.virt.libvirt.host [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.396 2 DEBUG nova.virt.libvirt.host [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.396 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.397 2 DEBUG nova.virt.hardware [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.398 2 DEBUG nova.virt.hardware [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.398 2 DEBUG nova.virt.hardware [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.399 2 DEBUG nova.virt.hardware [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.399 2 DEBUG nova.virt.hardware [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.399 2 DEBUG nova.virt.hardware [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.400 2 DEBUG nova.virt.hardware [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.400 2 DEBUG nova.virt.hardware [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.401 2 DEBUG nova.virt.hardware [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.401 2 DEBUG nova.virt.hardware [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.402 2 DEBUG nova.virt.hardware [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.402 2 DEBUG nova.objects.instance [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.421 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:20:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:20:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4054680747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.861 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.898 2 DEBUG nova.storage.rbd_utils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:07 np0005465604 nova_compute[260603]: 2025-10-02 08:20:07.911 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:20:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3106131176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.387 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.392 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  <uuid>19ea7528-6b08-4fe7-8c5b-4d96247bd50e</uuid>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  <name>instance-00000006</name>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAdmin275Test-server-1855671180</nova:name>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:20:07</nova:creationTime>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:        <nova:user uuid="7646e9aa5e134fc7b0621d170e0ced6f">tempest-ServersAdmin275Test-824814869-project-member</nova:user>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:        <nova:project uuid="8d66eb05f1fd4827b705307a038d1157">tempest-ServersAdmin275Test-824814869</nova:project>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <entry name="serial">19ea7528-6b08-4fe7-8c5b-4d96247bd50e</entry>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <entry name="uuid">19ea7528-6b08-4fe7-8c5b-4d96247bd50e</entry>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/console.log" append="off"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:20:08 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:20:08 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:20:08 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:20:08 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.458 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.459 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.460 2 INFO nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Using config drive#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.484 2 DEBUG nova.storage.rbd_utils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.504 2 DEBUG nova.objects.instance [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.563 2 DEBUG nova.objects.instance [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lazy-loading 'keypairs' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 7.4 MiB/s wr, 340 op/s
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.802 2 INFO nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Creating config drive at /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.807 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpipfhk5sv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.948 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpipfhk5sv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.976 2 DEBUG nova.storage.rbd_utils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:08 np0005465604 nova_compute[260603]: 2025-10-02 08:20:08.980 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.005 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "c2f0bab0-8cd9-431b-8592-8a9ce9d5f973" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.006 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "c2f0bab0-8cd9-431b-8592-8a9ce9d5f973" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.024 2 DEBUG nova.compute.manager [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.112 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.113 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.124 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.125 2 INFO nova.compute.claims [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.131 2 DEBUG oslo_concurrency.processutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.132 2 INFO nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Deleting local config drive /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config because it was imported into RBD.#033[00m
Oct  2 04:20:09 np0005465604 systemd-machined[214636]: New machine qemu-9-instance-00000006.
Oct  2 04:20:09 np0005465604 systemd[1]: Started Virtual Machine qemu-9-instance-00000006.
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.281 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:20:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657741928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.756 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.791 2 DEBUG nova.compute.provider_tree [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.807 2 DEBUG nova.scheduler.client.report [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.830 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.831 2 DEBUG nova.compute.manager [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.872 2 DEBUG nova.compute.manager [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.872 2 DEBUG nova.network.neutron [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.893 2 INFO nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:20:09 np0005465604 nova_compute[260603]: 2025-10-02 08:20:09.911 2 DEBUG nova.compute.manager [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.015 2 DEBUG nova.compute.manager [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.017 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.017 2 INFO nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Creating image(s)#033[00m
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.036 2 DEBUG nova.storage.rbd_utils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.058 2 DEBUG nova.storage.rbd_utils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.079 2 DEBUG nova.storage.rbd_utils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.084 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.139 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.141 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.141 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.141 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.159 2 DEBUG nova.storage.rbd_utils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.161 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.247 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 19ea7528-6b08-4fe7-8c5b-4d96247bd50e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.248 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393210.246054, 19ea7528-6b08-4fe7-8c5b-4d96247bd50e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.249 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] VM Resumed (Lifecycle Event)
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.258 2 DEBUG nova.compute.manager [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.260 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.266 2 INFO nova.virt.libvirt.driver [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance spawned successfully.
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.267 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.272 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.291 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.308 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.309 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.310 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.310 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.311 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.312 2 DEBUG nova.virt.libvirt.driver [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.316 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.318 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393210.2544296, 19ea7528-6b08-4fe7-8c5b-4d96247bd50e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.318 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] VM Started (Lifecycle Event)
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.349 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.354 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.376 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.380 2 DEBUG nova.compute.manager [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.401 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.456 2 DEBUG oslo_concurrency.lockutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.457 2 DEBUG oslo_concurrency.lockutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.458 2 DEBUG nova.objects.instance [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.508 2 DEBUG nova.network.neutron [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.508 2 DEBUG nova.compute.manager [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.516 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.525 2 DEBUG nova.storage.rbd_utils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] resizing rbd image c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.578 2 DEBUG oslo_concurrency.lockutils [None req-5b9f745e-dd75-48ea-a817-434de9a005c3 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.582 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.583 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.604 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.604 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.605 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.605 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.605 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.695 2 DEBUG nova.objects.instance [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lazy-loading 'migration_context' on Instance uuid c2f0bab0-8cd9-431b-8592-8a9ce9d5f973 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.708 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.709 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Ensure instance console log exists: /var/lib/nova/instances/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.710 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.710 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.711 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.712 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.717 2 WARNING nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.721 2 DEBUG nova.virt.libvirt.host [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.722 2 DEBUG nova.virt.libvirt.host [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:20:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.6 MiB/s wr, 301 op/s
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.726 2 DEBUG nova.virt.libvirt.host [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.726 2 DEBUG nova.virt.libvirt.host [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.727 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.727 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.728 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.728 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.729 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.729 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.730 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.730 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.731 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.731 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.731 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.732 2 DEBUG nova.virt.hardware [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:20:10 np0005465604 nova_compute[260603]: 2025-10-02 08:20:10.735 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:20:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:20:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1872583085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.080 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.207 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.208 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.210 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.211 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.213 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.214 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.214 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 04:20:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:20:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640820196' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.276 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.300 2 DEBUG nova.storage.rbd_utils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.305 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.463 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.465 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4318MB free_disk=59.92568588256836GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.465 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.465 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.538 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 19ea7528-6b08-4fe7-8c5b-4d96247bd50e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.538 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 7df7d893-6345-4445-8743-8d552b61a221 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.538 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance e4ce9040-1e20-4d58-b967-21b17e817aea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.538 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance c2f0bab0-8cd9-431b-8592-8a9ce9d5f973 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.538 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.538 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.622 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:20:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790555229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.826 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.828 2 DEBUG nova.objects.instance [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2f0bab0-8cd9-431b-8592-8a9ce9d5f973 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.841 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  <uuid>c2f0bab0-8cd9-431b-8592-8a9ce9d5f973</uuid>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  <name>instance-00000009</name>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1930508610</nova:name>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:20:10</nova:creationTime>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:        <nova:user uuid="cfaa38a10d3a475b89a320f1949ed5f0">tempest-LiveMigrationNegativeTest-1366594485-project-member</nova:user>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:        <nova:project uuid="fce4003fcf6d49468ebb0fe22806ab23">tempest-LiveMigrationNegativeTest-1366594485</nova:project>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <entry name="serial">c2f0bab0-8cd9-431b-8592-8a9ce9d5f973</entry>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <entry name="uuid">c2f0bab0-8cd9-431b-8592-8a9ce9d5f973</entry>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk.config">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973/console.log" append="off"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:20:11 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:20:11 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:20:11 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:20:11 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.891 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.892 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.892 2 INFO nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Using config drive#033[00m
Oct  2 04:20:11 np0005465604 nova_compute[260603]: 2025-10-02 08:20:11.911 2 DEBUG nova.storage.rbd_utils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:20:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898894839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.107 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.118 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.137 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.149 2 INFO nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Creating config drive at /var/lib/nova/instances/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973/disk.config#033[00m
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.157 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplumvfssz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.204 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.205 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.285 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplumvfssz" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.318 2 DEBUG nova.storage.rbd_utils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] rbd image c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.323 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973/disk.config c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.525 2 DEBUG oslo_concurrency.processutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973/disk.config c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:12 np0005465604 nova_compute[260603]: 2025-10-02 08:20:12.526 2 INFO nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Deleting local config drive /var/lib/nova/instances/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973/disk.config because it was imported into RBD.#033[00m
Oct  2 04:20:12 np0005465604 systemd-machined[214636]: New machine qemu-10-instance-00000009.
Oct  2 04:20:12 np0005465604 systemd[1]: Started Virtual Machine qemu-10-instance-00000009.
Oct  2 04:20:12 np0005465604 podman[279542]: 2025-10-02 08:20:12.717817109 +0000 UTC m=+0.122400872 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:20:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 210 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.1 MiB/s wr, 350 op/s
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.141 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.142 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.142 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.154 2 INFO nova.compute.manager [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Rebuilding instance#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.544 2 DEBUG nova.objects.instance [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.558 2 DEBUG nova.compute.manager [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.589 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393213.5891528, c2f0bab0-8cd9-431b-8592-8a9ce9d5f973 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.589 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.591 2 DEBUG nova.compute.manager [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.591 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.594 2 INFO nova.virt.libvirt.driver [-] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Instance spawned successfully.#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.594 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.598 2 DEBUG nova.objects.instance [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lazy-loading 'pci_requests' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.612 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.615 2 DEBUG nova.objects.instance [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lazy-loading 'pci_devices' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.617 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.620 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.620 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.620 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.621 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.621 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.621 2 DEBUG nova.virt.libvirt.driver [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.632 2 DEBUG nova.objects.instance [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lazy-loading 'resources' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.650 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.652 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393213.589293, c2f0bab0-8cd9-431b-8592-8a9ce9d5f973 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.653 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] VM Started (Lifecycle Event)#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.660 2 DEBUG nova.objects.instance [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lazy-loading 'migration_context' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.676 2 DEBUG nova.objects.instance [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.678 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.681 2 INFO nova.compute.manager [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Took 3.67 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.682 2 DEBUG nova.compute.manager [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.682 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.685 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.715 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.747 2 INFO nova.compute.manager [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Took 4.68 seconds to build instance.#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.763 2 DEBUG oslo_concurrency.lockutils [None req-1cac1c08-b117-4431-863a-f491749c05b2 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "c2f0bab0-8cd9-431b-8592-8a9ce9d5f973" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:13 np0005465604 nova_compute[260603]: 2025-10-02 08:20:13.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:20:14Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:41:3d 10.100.0.12
Oct  2 04:20:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:20:14Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:41:3d 10.100.0.12
Oct  2 04:20:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 236 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 5.8 MiB/s wr, 337 op/s
Oct  2 04:20:15 np0005465604 nova_compute[260603]: 2025-10-02 08:20:15.622 2 DEBUG nova.objects.instance [None req-42528f93-482b-4ee3-ba41-fa6ed7f9746f 02eab4dc921f455dac990476b428331c 82ed5b4a38ee4a068f7871c126b51ca3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2f0bab0-8cd9-431b-8592-8a9ce9d5f973 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:15 np0005465604 nova_compute[260603]: 2025-10-02 08:20:15.656 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393215.655931, c2f0bab0-8cd9-431b-8592-8a9ce9d5f973 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:15 np0005465604 nova_compute[260603]: 2025-10-02 08:20:15.656 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:20:15 np0005465604 nova_compute[260603]: 2025-10-02 08:20:15.694 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:15 np0005465604 nova_compute[260603]: 2025-10-02 08:20:15.698 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:20:15 np0005465604 nova_compute[260603]: 2025-10-02 08:20:15.723 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 04:20:16 np0005465604 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct  2 04:20:16 np0005465604 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Consumed 3.023s CPU time.
Oct  2 04:20:16 np0005465604 systemd-machined[214636]: Machine qemu-10-instance-00000009 terminated.
Oct  2 04:20:16 np0005465604 nova_compute[260603]: 2025-10-02 08:20:16.300 2 DEBUG nova.compute.manager [None req-42528f93-482b-4ee3-ba41-fa6ed7f9746f 02eab4dc921f455dac990476b428331c 82ed5b4a38ee4a068f7871c126b51ca3 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:16 np0005465604 nova_compute[260603]: 2025-10-02 08:20:16.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 236 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 4.3 MiB/s wr, 275 op/s
Oct  2 04:20:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:18 np0005465604 nova_compute[260603]: 2025-10-02 08:20:18.724 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "c2f0bab0-8cd9-431b-8592-8a9ce9d5f973" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:18 np0005465604 nova_compute[260603]: 2025-10-02 08:20:18.725 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "c2f0bab0-8cd9-431b-8592-8a9ce9d5f973" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:18 np0005465604 nova_compute[260603]: 2025-10-02 08:20:18.726 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "c2f0bab0-8cd9-431b-8592-8a9ce9d5f973-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:18 np0005465604 nova_compute[260603]: 2025-10-02 08:20:18.726 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "c2f0bab0-8cd9-431b-8592-8a9ce9d5f973-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:18 np0005465604 nova_compute[260603]: 2025-10-02 08:20:18.727 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "c2f0bab0-8cd9-431b-8592-8a9ce9d5f973-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 287 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 8.7 MiB/s rd, 7.8 MiB/s wr, 451 op/s
Oct  2 04:20:18 np0005465604 nova_compute[260603]: 2025-10-02 08:20:18.729 2 INFO nova.compute.manager [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Terminating instance#033[00m
Oct  2 04:20:18 np0005465604 nova_compute[260603]: 2025-10-02 08:20:18.730 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "refresh_cache-c2f0bab0-8cd9-431b-8592-8a9ce9d5f973" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:20:18 np0005465604 nova_compute[260603]: 2025-10-02 08:20:18.731 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquired lock "refresh_cache-c2f0bab0-8cd9-431b-8592-8a9ce9d5f973" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:20:18 np0005465604 nova_compute[260603]: 2025-10-02 08:20:18.731 2 DEBUG nova.network.neutron [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:20:18 np0005465604 nova_compute[260603]: 2025-10-02 08:20:18.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:18 np0005465604 nova_compute[260603]: 2025-10-02 08:20:18.955 2 DEBUG nova.network.neutron [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:20:19 np0005465604 nova_compute[260603]: 2025-10-02 08:20:19.292 2 DEBUG nova.network.neutron [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:20:19 np0005465604 nova_compute[260603]: 2025-10-02 08:20:19.373 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Releasing lock "refresh_cache-c2f0bab0-8cd9-431b-8592-8a9ce9d5f973" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:20:19 np0005465604 nova_compute[260603]: 2025-10-02 08:20:19.374 2 DEBUG nova.compute.manager [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:20:19 np0005465604 nova_compute[260603]: 2025-10-02 08:20:19.382 2 INFO nova.virt.libvirt.driver [-] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Instance destroyed successfully.#033[00m
Oct  2 04:20:19 np0005465604 nova_compute[260603]: 2025-10-02 08:20:19.383 2 DEBUG nova.objects.instance [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lazy-loading 'resources' on Instance uuid c2f0bab0-8cd9-431b-8592-8a9ce9d5f973 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:19 np0005465604 nova_compute[260603]: 2025-10-02 08:20:19.740 2 INFO nova.virt.libvirt.driver [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Deleting instance files /var/lib/nova/instances/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_del#033[00m
Oct  2 04:20:19 np0005465604 nova_compute[260603]: 2025-10-02 08:20:19.742 2 INFO nova.virt.libvirt.driver [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Deletion of /var/lib/nova/instances/c2f0bab0-8cd9-431b-8592-8a9ce9d5f973_del complete#033[00m
Oct  2 04:20:19 np0005465604 nova_compute[260603]: 2025-10-02 08:20:19.813 2 INFO nova.compute.manager [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:20:19 np0005465604 nova_compute[260603]: 2025-10-02 08:20:19.813 2 DEBUG oslo.service.loopingcall [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:20:19 np0005465604 nova_compute[260603]: 2025-10-02 08:20:19.813 2 DEBUG nova.compute.manager [-] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:20:19 np0005465604 nova_compute[260603]: 2025-10-02 08:20:19.814 2 DEBUG nova.network.neutron [-] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:20:19 np0005465604 podman[279806]: 2025-10-02 08:20:19.894668737 +0000 UTC m=+0.063177917 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:20:20 np0005465604 podman[279806]: 2025-10-02 08:20:20.003246926 +0000 UTC m=+0.171756086 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:20:20 np0005465604 nova_compute[260603]: 2025-10-02 08:20:20.384 2 DEBUG nova.network.neutron [-] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:20:20 np0005465604 nova_compute[260603]: 2025-10-02 08:20:20.410 2 DEBUG nova.network.neutron [-] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:20:20 np0005465604 nova_compute[260603]: 2025-10-02 08:20:20.427 2 INFO nova.compute.manager [-] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Took 0.61 seconds to deallocate network for instance.#033[00m
Oct  2 04:20:20 np0005465604 nova_compute[260603]: 2025-10-02 08:20:20.478 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:20 np0005465604 nova_compute[260603]: 2025-10-02 08:20:20.479 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:20 np0005465604 nova_compute[260603]: 2025-10-02 08:20:20.575 2 DEBUG oslo_concurrency.processutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 287 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.2 MiB/s wr, 298 op/s
Oct  2 04:20:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:20:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:20:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:20:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:20:20 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:20:20 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032841679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.064 2 DEBUG oslo_concurrency.processutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.074 2 DEBUG nova.compute.provider_tree [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.092 2 DEBUG nova.scheduler.client.report [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.098 2 DEBUG oslo_concurrency.lockutils [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "7df7d893-6345-4445-8743-8d552b61a221" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.099 2 DEBUG oslo_concurrency.lockutils [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.099 2 DEBUG oslo_concurrency.lockutils [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "7df7d893-6345-4445-8743-8d552b61a221-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.100 2 DEBUG oslo_concurrency.lockutils [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.101 2 DEBUG oslo_concurrency.lockutils [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.104 2 INFO nova.compute.manager [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Terminating instance#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.106 2 DEBUG nova.compute.manager [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.122 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.147 2 INFO nova.scheduler.client.report [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Deleted allocations for instance c2f0bab0-8cd9-431b-8592-8a9ce9d5f973#033[00m
Oct  2 04:20:21 np0005465604 kernel: tap49cabeb7-5a (unregistering): left promiscuous mode
Oct  2 04:20:21 np0005465604 NetworkManager[45129]: <info>  [1759393221.1949] device (tap49cabeb7-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:20:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:20:21Z|00040|binding|INFO|Releasing lport 49cabeb7-5aaf-4b9a-a489-57603e5c2441 from this chassis (sb_readonly=0)
Oct  2 04:20:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:20:21Z|00041|binding|INFO|Setting lport 49cabeb7-5aaf-4b9a-a489-57603e5c2441 down in Southbound
Oct  2 04:20:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:20:21Z|00042|binding|INFO|Removing iface tap49cabeb7-5a ovn-installed in OVS
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.256 2 DEBUG oslo_concurrency.lockutils [None req-a7087288-42bc-47dd-9736-ad96aafde2df cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "c2f0bab0-8cd9-431b-8592-8a9ce9d5f973" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.261 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:41:3d 10.100.0.12'], port_security=['fa:16:3e:47:41:3d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7df7d893-6345-4445-8743-8d552b61a221', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9847725b-0398-4587-bb8e-45200e1bbb6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ab0927c5a0e0424fbcde0133feab6f16', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ed81274-9cd4-4204-8473-93dd6f65ff51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9128983a-ebb5-4d5e-81a5-171931805db2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=49cabeb7-5aaf-4b9a-a489-57603e5c2441) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.262 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 49cabeb7-5aaf-4b9a-a489-57603e5c2441 in datapath 9847725b-0398-4587-bb8e-45200e1bbb6a unbound from our chassis#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.263 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9847725b-0398-4587-bb8e-45200e1bbb6a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.264 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4f68ed8f-c336-4d20-a85a-de924835f5f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.264 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a namespace which is not needed anymore#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:21 np0005465604 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct  2 04:20:21 np0005465604 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 14.095s CPU time.
Oct  2 04:20:21 np0005465604 systemd-machined[214636]: Machine qemu-7-instance-00000007 terminated.
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.356 2 INFO nova.virt.libvirt.driver [-] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Instance destroyed successfully.#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.357 2 DEBUG nova.objects.instance [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lazy-loading 'resources' on Instance uuid 7df7d893-6345-4445-8743-8d552b61a221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.390 2 DEBUG nova.virt.libvirt.vif [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:19:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1365874515',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1365874515',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(25),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1365874515',id=7,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=25,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKyfJeOY45nkQsot9BemY9U/BkbEGNqNZqr3YNUHFqg2f7Al6V+K+e19H2MuiEK+wcXfUJg+H8qbMpV33FaRnqJABUBtXxvPVSB7W9ittbX2JkYChVSSogHDOqGnTjFf2A==',key_name='tempest-keypair-67388201',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:20:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ab0927c5a0e0424fbcde0133feab6f16',ramdisk_id='',reservation_id='r-9hr5yju6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1074733883-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1037493f2f4c4e768c46e477c6183cd4',uuid=7df7d893-6345-4445-8743-8d552b61a221,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "address": "fa:16:3e:47:41:3d", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49cabeb7-5a", "ovs_interfaceid": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.390 2 DEBUG nova.network.os_vif_util [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converting VIF {"id": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "address": "fa:16:3e:47:41:3d", "network": {"id": "9847725b-0398-4587-bb8e-45200e1bbb6a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1486023463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ab0927c5a0e0424fbcde0133feab6f16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49cabeb7-5a", "ovs_interfaceid": "49cabeb7-5aaf-4b9a-a489-57603e5c2441", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.391 2 DEBUG nova.network.os_vif_util [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:41:3d,bridge_name='br-int',has_traffic_filtering=True,id=49cabeb7-5aaf-4b9a-a489-57603e5c2441,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49cabeb7-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.391 2 DEBUG os_vif [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:41:3d,bridge_name='br-int',has_traffic_filtering=True,id=49cabeb7-5aaf-4b9a-a489-57603e5c2441,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49cabeb7-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49cabeb7-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.400 2 INFO os_vif [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:41:3d,bridge_name='br-int',has_traffic_filtering=True,id=49cabeb7-5aaf-4b9a-a489-57603e5c2441,network=Network(9847725b-0398-4587-bb8e-45200e1bbb6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49cabeb7-5a')#033[00m
Oct  2 04:20:21 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[278429]: [NOTICE]   (278433) : haproxy version is 2.8.14-c23fe91
Oct  2 04:20:21 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[278429]: [NOTICE]   (278433) : path to executable is /usr/sbin/haproxy
Oct  2 04:20:21 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[278429]: [WARNING]  (278433) : Exiting Master process...
Oct  2 04:20:21 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[278429]: [WARNING]  (278433) : Exiting Master process...
Oct  2 04:20:21 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[278429]: [ALERT]    (278433) : Current worker (278435) exited with code 143 (Terminated)
Oct  2 04:20:21 np0005465604 neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a[278429]: [WARNING]  (278433) : All workers exited. Exiting... (0)
Oct  2 04:20:21 np0005465604 systemd[1]: libpod-eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b.scope: Deactivated successfully.
Oct  2 04:20:21 np0005465604 podman[280129]: 2025-10-02 08:20:21.439083925 +0000 UTC m=+0.058644827 container died eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:20:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b-userdata-shm.mount: Deactivated successfully.
Oct  2 04:20:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-79550457c293babdba5779e3bb9331ebe245c1e6f905e5264f7aaa8e95f6587e-merged.mount: Deactivated successfully.
Oct  2 04:20:21 np0005465604 podman[280129]: 2025-10-02 08:20:21.486406876 +0000 UTC m=+0.105967788 container cleanup eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:20:21 np0005465604 systemd[1]: libpod-conmon-eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b.scope: Deactivated successfully.
Oct  2 04:20:21 np0005465604 podman[280181]: 2025-10-02 08:20:21.572047686 +0000 UTC m=+0.053834436 container remove eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.582 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b4331db9-68be-4a10-b77c-c8ef0b4a7c49]: (4, ('Thu Oct  2 08:20:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a (eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b)\neb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b\nThu Oct  2 08:20:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a (eb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b)\neb67637ac8f3e32bc49b5b6ca38a631e5b051a04ceba6104ee56f3de2588e70b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.584 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a6e795-6e2c-41a2-8d5c-2c8a68b8bbe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.585 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9847725b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:21 np0005465604 kernel: tap9847725b-00: left promiscuous mode
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.611 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cd83635d-1487-41f1-838a-5d33e127eece]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.623 2 DEBUG nova.compute.manager [req-4dc61903-b2e7-42f3-9328-a3d60bd10283 req-c8e17b20-0360-4b16-a1eb-504edd71984b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Received event network-vif-unplugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.623 2 DEBUG oslo_concurrency.lockutils [req-4dc61903-b2e7-42f3-9328-a3d60bd10283 req-c8e17b20-0360-4b16-a1eb-504edd71984b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7df7d893-6345-4445-8743-8d552b61a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.623 2 DEBUG oslo_concurrency.lockutils [req-4dc61903-b2e7-42f3-9328-a3d60bd10283 req-c8e17b20-0360-4b16-a1eb-504edd71984b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.624 2 DEBUG oslo_concurrency.lockutils [req-4dc61903-b2e7-42f3-9328-a3d60bd10283 req-c8e17b20-0360-4b16-a1eb-504edd71984b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.624 2 DEBUG nova.compute.manager [req-4dc61903-b2e7-42f3-9328-a3d60bd10283 req-c8e17b20-0360-4b16-a1eb-504edd71984b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] No waiting events found dispatching network-vif-unplugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.624 2 DEBUG nova.compute.manager [req-4dc61903-b2e7-42f3-9328-a3d60bd10283 req-c8e17b20-0360-4b16-a1eb-504edd71984b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Received event network-vif-unplugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.636 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cac9ce08-b079-4fdf-910e-e3216cacfb7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.638 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1db5b3-e2d7-4c29-953b-2f6c691a5dd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.654 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[55ff2dc0-6601-4dd8-acb8-fd64286f023c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407878, 'reachable_time': 28663, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280209, 'error': None, 'target': 'ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.657 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9847725b-0398-4587-bb8e-45200e1bbb6a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:20:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:21.657 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[bf49b944-40ce-4b99-8c53-ec2c4a5465c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:20:21 np0005465604 systemd[1]: run-netns-ovnmeta\x2d9847725b\x2d0398\x2d4587\x2dbb8e\x2d45200e1bbb6a.mount: Deactivated successfully.
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:20:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev fe7cdb95-161d-4b81-9e76-d9c4483c40a2 does not exist
Oct  2 04:20:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 1411f0a7-e720-44f3-833f-720216a368eb does not exist
Oct  2 04:20:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 94ac40c1-62d5-4663-9f08-e7cfc2e592fc does not exist
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.759 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "e4ce9040-1e20-4d58-b967-21b17e817aea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.760 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "e4ce9040-1e20-4d58-b967-21b17e817aea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.760 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "e4ce9040-1e20-4d58-b967-21b17e817aea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.760 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "e4ce9040-1e20-4d58-b967-21b17e817aea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.760 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "e4ce9040-1e20-4d58-b967-21b17e817aea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.761 2 INFO nova.compute.manager [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Terminating instance#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.762 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "refresh_cache-e4ce9040-1e20-4d58-b967-21b17e817aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.762 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquired lock "refresh_cache-e4ce9040-1e20-4d58-b967-21b17e817aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.762 2 DEBUG nova.network.neutron [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.877 2 DEBUG nova.network.neutron [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:20:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.950 2 INFO nova.virt.libvirt.driver [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Deleting instance files /var/lib/nova/instances/7df7d893-6345-4445-8743-8d552b61a221_del#033[00m
Oct  2 04:20:21 np0005465604 nova_compute[260603]: 2025-10-02 08:20:21.951 2 INFO nova.virt.libvirt.driver [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Deletion of /var/lib/nova/instances/7df7d893-6345-4445-8743-8d552b61a221_del complete#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.027 2 INFO nova.compute.manager [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.027 2 DEBUG oslo.service.loopingcall [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.028 2 DEBUG nova.compute.manager [-] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.028 2 DEBUG nova.network.neutron [-] [instance: 7df7d893-6345-4445-8743-8d552b61a221] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:20:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:20:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671821606' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:20:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:20:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671821606' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.168 2 DEBUG nova.network.neutron [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.193 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Releasing lock "refresh_cache-e4ce9040-1e20-4d58-b967-21b17e817aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.194 2 DEBUG nova.compute.manager [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:20:22 np0005465604 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct  2 04:20:22 np0005465604 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 12.975s CPU time.
Oct  2 04:20:22 np0005465604 systemd-machined[214636]: Machine qemu-8-instance-00000008 terminated.
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.422 2 INFO nova.virt.libvirt.driver [-] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Instance destroyed successfully.#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.423 2 DEBUG nova.objects.instance [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lazy-loading 'resources' on Instance uuid e4ce9040-1e20-4d58-b967-21b17e817aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:22 np0005465604 podman[280371]: 2025-10-02 08:20:22.558324634 +0000 UTC m=+0.053898528 container create 97589db17626b6c5de30dd01db31a6693081db6474dcc3d1fb2b352101ef4a58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:20:22 np0005465604 systemd[1]: Started libpod-conmon-97589db17626b6c5de30dd01db31a6693081db6474dcc3d1fb2b352101ef4a58.scope.
Oct  2 04:20:22 np0005465604 podman[280371]: 2025-10-02 08:20:22.537513392 +0000 UTC m=+0.033087316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:20:22 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:20:22 np0005465604 podman[280371]: 2025-10-02 08:20:22.671695903 +0000 UTC m=+0.167269817 container init 97589db17626b6c5de30dd01db31a6693081db6474dcc3d1fb2b352101ef4a58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 04:20:22 np0005465604 podman[280371]: 2025-10-02 08:20:22.680329983 +0000 UTC m=+0.175903887 container start 97589db17626b6c5de30dd01db31a6693081db6474dcc3d1fb2b352101ef4a58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_proskuriakova, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 04:20:22 np0005465604 zealous_proskuriakova[280387]: 167 167
Oct  2 04:20:22 np0005465604 systemd[1]: libpod-97589db17626b6c5de30dd01db31a6693081db6474dcc3d1fb2b352101ef4a58.scope: Deactivated successfully.
Oct  2 04:20:22 np0005465604 podman[280371]: 2025-10-02 08:20:22.691611765 +0000 UTC m=+0.187185689 container attach 97589db17626b6c5de30dd01db31a6693081db6474dcc3d1fb2b352101ef4a58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_proskuriakova, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:20:22 np0005465604 podman[280371]: 2025-10-02 08:20:22.69236495 +0000 UTC m=+0.187938864 container died 97589db17626b6c5de30dd01db31a6693081db6474dcc3d1fb2b352101ef4a58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 04:20:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-47f1e3d18f7612ea240fda5be5b80359d45d52e6a852ca2e8bd6812fc5aa4900-merged.mount: Deactivated successfully.
Oct  2 04:20:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 223 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.2 MiB/s wr, 342 op/s
Oct  2 04:20:22 np0005465604 podman[280371]: 2025-10-02 08:20:22.743543971 +0000 UTC m=+0.239117875 container remove 97589db17626b6c5de30dd01db31a6693081db6474dcc3d1fb2b352101ef4a58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  2 04:20:22 np0005465604 systemd[1]: libpod-conmon-97589db17626b6c5de30dd01db31a6693081db6474dcc3d1fb2b352101ef4a58.scope: Deactivated successfully.
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.869 2 DEBUG nova.network.neutron [-] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.889 2 INFO nova.compute.manager [-] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Took 0.86 seconds to deallocate network for instance.#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.932 2 DEBUG oslo_concurrency.lockutils [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.933 2 DEBUG oslo_concurrency.lockutils [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:22 np0005465604 podman[280412]: 2025-10-02 08:20:22.954002008 +0000 UTC m=+0.058617986 container create 20bd1a1199e5bf6279e7085304c22afd90470372757df35642e40792ddf60e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.984 2 INFO nova.virt.libvirt.driver [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Deleting instance files /var/lib/nova/instances/e4ce9040-1e20-4d58-b967-21b17e817aea_del#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.985 2 INFO nova.virt.libvirt.driver [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Deletion of /var/lib/nova/instances/e4ce9040-1e20-4d58-b967-21b17e817aea_del complete#033[00m
Oct  2 04:20:22 np0005465604 nova_compute[260603]: 2025-10-02 08:20:22.994 2 DEBUG nova.compute.manager [req-01492cb1-8541-4acd-acfb-23eb582951db req-997368c1-c69b-48fe-ae8e-6b2ba3cf732b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Received event network-vif-deleted-49cabeb7-5aaf-4b9a-a489-57603e5c2441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:20:23 np0005465604 systemd[1]: Started libpod-conmon-20bd1a1199e5bf6279e7085304c22afd90470372757df35642e40792ddf60e56.scope.
Oct  2 04:20:23 np0005465604 podman[280412]: 2025-10-02 08:20:22.928876952 +0000 UTC m=+0.033492950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:20:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.042 2 INFO nova.compute.manager [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.043 2 DEBUG oslo.service.loopingcall [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.043 2 DEBUG nova.compute.manager [-] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.043 2 DEBUG nova.network.neutron [-] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:20:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73698b426ed22b376382970187db1dffef7147410f963b9371d42683997f7ebb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73698b426ed22b376382970187db1dffef7147410f963b9371d42683997f7ebb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73698b426ed22b376382970187db1dffef7147410f963b9371d42683997f7ebb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73698b426ed22b376382970187db1dffef7147410f963b9371d42683997f7ebb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73698b426ed22b376382970187db1dffef7147410f963b9371d42683997f7ebb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.055 2 DEBUG oslo_concurrency.processutils [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:23 np0005465604 podman[280412]: 2025-10-02 08:20:23.064772885 +0000 UTC m=+0.169388883 container init 20bd1a1199e5bf6279e7085304c22afd90470372757df35642e40792ddf60e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 04:20:23 np0005465604 podman[280412]: 2025-10-02 08:20:23.07483835 +0000 UTC m=+0.179454318 container start 20bd1a1199e5bf6279e7085304c22afd90470372757df35642e40792ddf60e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 04:20:23 np0005465604 podman[280412]: 2025-10-02 08:20:23.078184944 +0000 UTC m=+0.182800952 container attach 20bd1a1199e5bf6279e7085304c22afd90470372757df35642e40792ddf60e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.184 2 DEBUG nova.network.neutron [-] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.201 2 DEBUG nova.network.neutron [-] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.220 2 INFO nova.compute.manager [-] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Took 0.18 seconds to deallocate network for instance.#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.276 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:20:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/786633860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.551 2 DEBUG oslo_concurrency.processutils [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.558 2 DEBUG nova.compute.provider_tree [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.579 2 DEBUG nova.scheduler.client.report [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.606 2 DEBUG oslo_concurrency.lockutils [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.610 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.634 2 INFO nova.scheduler.client.report [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Deleted allocations for instance 7df7d893-6345-4445-8743-8d552b61a221#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.706 2 DEBUG oslo_concurrency.lockutils [None req-d614c11c-9164-4ed5-983c-3d77016c3e28 1037493f2f4c4e768c46e477c6183cd4 ab0927c5a0e0424fbcde0133feab6f16 - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.710 2 DEBUG oslo_concurrency.processutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.750 2 DEBUG nova.compute.manager [req-ca2094da-4ba8-4ab3-a292-d43f30dcef9a req-8d457ba1-2d8f-4504-990f-e62dc4634758 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Received event network-vif-plugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.751 2 DEBUG oslo_concurrency.lockutils [req-ca2094da-4ba8-4ab3-a292-d43f30dcef9a req-8d457ba1-2d8f-4504-990f-e62dc4634758 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7df7d893-6345-4445-8743-8d552b61a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.751 2 DEBUG oslo_concurrency.lockutils [req-ca2094da-4ba8-4ab3-a292-d43f30dcef9a req-8d457ba1-2d8f-4504-990f-e62dc4634758 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.752 2 DEBUG oslo_concurrency.lockutils [req-ca2094da-4ba8-4ab3-a292-d43f30dcef9a req-8d457ba1-2d8f-4504-990f-e62dc4634758 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7df7d893-6345-4445-8743-8d552b61a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.752 2 DEBUG nova.compute.manager [req-ca2094da-4ba8-4ab3-a292-d43f30dcef9a req-8d457ba1-2d8f-4504-990f-e62dc4634758 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] No waiting events found dispatching network-vif-plugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.753 2 WARNING nova.compute.manager [req-ca2094da-4ba8-4ab3-a292-d43f30dcef9a req-8d457ba1-2d8f-4504-990f-e62dc4634758 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Received unexpected event network-vif-plugged-49cabeb7-5aaf-4b9a-a489-57603e5c2441 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:20:23 np0005465604 nova_compute[260603]: 2025-10-02 08:20:23.758 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:20:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:20:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816256732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:20:24 np0005465604 nova_compute[260603]: 2025-10-02 08:20:24.262 2 DEBUG oslo_concurrency.processutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:24 np0005465604 nova_compute[260603]: 2025-10-02 08:20:24.268 2 DEBUG nova.compute.provider_tree [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:20:24 np0005465604 dazzling_shirley[280428]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:20:24 np0005465604 dazzling_shirley[280428]: --> relative data size: 1.0
Oct  2 04:20:24 np0005465604 dazzling_shirley[280428]: --> All data devices are unavailable
Oct  2 04:20:24 np0005465604 nova_compute[260603]: 2025-10-02 08:20:24.291 2 DEBUG nova.scheduler.client.report [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:20:24 np0005465604 nova_compute[260603]: 2025-10-02 08:20:24.322 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:24 np0005465604 systemd[1]: libpod-20bd1a1199e5bf6279e7085304c22afd90470372757df35642e40792ddf60e56.scope: Deactivated successfully.
Oct  2 04:20:24 np0005465604 podman[280412]: 2025-10-02 08:20:24.325858144 +0000 UTC m=+1.430474112 container died 20bd1a1199e5bf6279e7085304c22afd90470372757df35642e40792ddf60e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 04:20:24 np0005465604 systemd[1]: libpod-20bd1a1199e5bf6279e7085304c22afd90470372757df35642e40792ddf60e56.scope: Consumed 1.152s CPU time.
Oct  2 04:20:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-73698b426ed22b376382970187db1dffef7147410f963b9371d42683997f7ebb-merged.mount: Deactivated successfully.
Oct  2 04:20:24 np0005465604 nova_compute[260603]: 2025-10-02 08:20:24.352 2 INFO nova.scheduler.client.report [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Deleted allocations for instance e4ce9040-1e20-4d58-b967-21b17e817aea#033[00m
Oct  2 04:20:24 np0005465604 podman[280412]: 2025-10-02 08:20:24.377613164 +0000 UTC m=+1.482229132 container remove 20bd1a1199e5bf6279e7085304c22afd90470372757df35642e40792ddf60e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 04:20:24 np0005465604 systemd[1]: libpod-conmon-20bd1a1199e5bf6279e7085304c22afd90470372757df35642e40792ddf60e56.scope: Deactivated successfully.
Oct  2 04:20:24 np0005465604 nova_compute[260603]: 2025-10-02 08:20:24.428 2 DEBUG oslo_concurrency.lockutils [None req-73d4aae4-1767-4384-b3ee-257676c58c91 cfaa38a10d3a475b89a320f1949ed5f0 fce4003fcf6d49468ebb0fe22806ab23 - - default default] Lock "e4ce9040-1e20-4d58-b967-21b17e817aea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 144 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.6 MiB/s wr, 356 op/s
Oct  2 04:20:25 np0005465604 podman[280652]: 2025-10-02 08:20:25.107183728 +0000 UTC m=+0.063109827 container create 3f9066aaa2196b9c170de5aae5fb4c4aabc487b28b45478a3b822a690cc6b877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:20:25 np0005465604 systemd[1]: Started libpod-conmon-3f9066aaa2196b9c170de5aae5fb4c4aabc487b28b45478a3b822a690cc6b877.scope.
Oct  2 04:20:25 np0005465604 podman[280652]: 2025-10-02 08:20:25.076078144 +0000 UTC m=+0.032004303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:20:25 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:20:25 np0005465604 podman[280652]: 2025-10-02 08:20:25.201339275 +0000 UTC m=+0.157265384 container init 3f9066aaa2196b9c170de5aae5fb4c4aabc487b28b45478a3b822a690cc6b877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:20:25 np0005465604 podman[280652]: 2025-10-02 08:20:25.213470275 +0000 UTC m=+0.169396374 container start 3f9066aaa2196b9c170de5aae5fb4c4aabc487b28b45478a3b822a690cc6b877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:20:25 np0005465604 podman[280652]: 2025-10-02 08:20:25.217926963 +0000 UTC m=+0.173853072 container attach 3f9066aaa2196b9c170de5aae5fb4c4aabc487b28b45478a3b822a690cc6b877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 04:20:25 np0005465604 epic_meitner[280669]: 167 167
Oct  2 04:20:25 np0005465604 systemd[1]: libpod-3f9066aaa2196b9c170de5aae5fb4c4aabc487b28b45478a3b822a690cc6b877.scope: Deactivated successfully.
Oct  2 04:20:25 np0005465604 podman[280652]: 2025-10-02 08:20:25.221065322 +0000 UTC m=+0.176991391 container died 3f9066aaa2196b9c170de5aae5fb4c4aabc487b28b45478a3b822a690cc6b877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:20:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-25ee51398d87205b5c95a9d3b65469e0c9fe89a6d9ea81702877353b5cb06645-merged.mount: Deactivated successfully.
Oct  2 04:20:25 np0005465604 podman[280652]: 2025-10-02 08:20:25.271330385 +0000 UTC m=+0.227256454 container remove 3f9066aaa2196b9c170de5aae5fb4c4aabc487b28b45478a3b822a690cc6b877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:20:25 np0005465604 systemd[1]: libpod-conmon-3f9066aaa2196b9c170de5aae5fb4c4aabc487b28b45478a3b822a690cc6b877.scope: Deactivated successfully.
Oct  2 04:20:25 np0005465604 podman[280692]: 2025-10-02 08:20:25.51881402 +0000 UTC m=+0.074388799 container create c25c0609dfcdbd2466854d9a8a576863796663c46affde7a33094309eaebe503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:20:25 np0005465604 systemd[1]: Started libpod-conmon-c25c0609dfcdbd2466854d9a8a576863796663c46affde7a33094309eaebe503.scope.
Oct  2 04:20:25 np0005465604 podman[280692]: 2025-10-02 08:20:25.49130678 +0000 UTC m=+0.046881619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:20:25 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:20:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902dd5a4503bab242fa72eb415d6b583ed38f96e3a8a719a2652bb761fb7d7a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902dd5a4503bab242fa72eb415d6b583ed38f96e3a8a719a2652bb761fb7d7a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902dd5a4503bab242fa72eb415d6b583ed38f96e3a8a719a2652bb761fb7d7a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902dd5a4503bab242fa72eb415d6b583ed38f96e3a8a719a2652bb761fb7d7a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:25 np0005465604 podman[280692]: 2025-10-02 08:20:25.627894445 +0000 UTC m=+0.183469234 container init c25c0609dfcdbd2466854d9a8a576863796663c46affde7a33094309eaebe503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:20:25 np0005465604 podman[280692]: 2025-10-02 08:20:25.637808115 +0000 UTC m=+0.193382864 container start c25c0609dfcdbd2466854d9a8a576863796663c46affde7a33094309eaebe503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:20:25 np0005465604 podman[280692]: 2025-10-02 08:20:25.640880972 +0000 UTC m=+0.196455771 container attach c25c0609dfcdbd2466854d9a8a576863796663c46affde7a33094309eaebe503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]: {
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:    "0": [
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:        {
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "devices": [
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "/dev/loop3"
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            ],
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_name": "ceph_lv0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_size": "21470642176",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "name": "ceph_lv0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "tags": {
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.cluster_name": "ceph",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.crush_device_class": "",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.encrypted": "0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.osd_id": "0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.type": "block",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.vdo": "0"
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            },
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "type": "block",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "vg_name": "ceph_vg0"
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:        }
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:    ],
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:    "1": [
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:        {
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "devices": [
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "/dev/loop4"
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            ],
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_name": "ceph_lv1",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_size": "21470642176",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "name": "ceph_lv1",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "tags": {
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.cluster_name": "ceph",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.crush_device_class": "",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.encrypted": "0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.osd_id": "1",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.type": "block",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.vdo": "0"
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            },
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "type": "block",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "vg_name": "ceph_vg1"
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:        }
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:    ],
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:    "2": [
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:        {
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "devices": [
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "/dev/loop5"
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            ],
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_name": "ceph_lv2",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_size": "21470642176",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "name": "ceph_lv2",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "tags": {
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.cluster_name": "ceph",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.crush_device_class": "",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.encrypted": "0",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.osd_id": "2",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.type": "block",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:                "ceph.vdo": "0"
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            },
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "type": "block",
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:            "vg_name": "ceph_vg2"
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:        }
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]:    ]
Oct  2 04:20:26 np0005465604 priceless_montalcini[280709]: }
Oct  2 04:20:26 np0005465604 systemd[1]: libpod-c25c0609dfcdbd2466854d9a8a576863796663c46affde7a33094309eaebe503.scope: Deactivated successfully.
Oct  2 04:20:26 np0005465604 podman[280692]: 2025-10-02 08:20:26.384281528 +0000 UTC m=+0.939856287 container died c25c0609dfcdbd2466854d9a8a576863796663c46affde7a33094309eaebe503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 04:20:26 np0005465604 nova_compute[260603]: 2025-10-02 08:20:26.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay-902dd5a4503bab242fa72eb415d6b583ed38f96e3a8a719a2652bb761fb7d7a2-merged.mount: Deactivated successfully.
Oct  2 04:20:26 np0005465604 podman[280692]: 2025-10-02 08:20:26.465540622 +0000 UTC m=+1.021115371 container remove c25c0609dfcdbd2466854d9a8a576863796663c46affde7a33094309eaebe503 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 04:20:26 np0005465604 systemd[1]: libpod-conmon-c25c0609dfcdbd2466854d9a8a576863796663c46affde7a33094309eaebe503.scope: Deactivated successfully.
Oct  2 04:20:26 np0005465604 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct  2 04:20:26 np0005465604 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000006.scope: Consumed 14.038s CPU time.
Oct  2 04:20:26 np0005465604 systemd-machined[214636]: Machine qemu-9-instance-00000006 terminated.
Oct  2 04:20:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 144 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.5 MiB/s wr, 283 op/s
Oct  2 04:20:26 np0005465604 nova_compute[260603]: 2025-10-02 08:20:26.774 2 INFO nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 04:20:26 np0005465604 nova_compute[260603]: 2025-10-02 08:20:26.778 2 INFO nova.virt.libvirt.driver [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance destroyed successfully.#033[00m
Oct  2 04:20:26 np0005465604 nova_compute[260603]: 2025-10-02 08:20:26.782 2 INFO nova.virt.libvirt.driver [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance destroyed successfully.#033[00m
Oct  2 04:20:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.064 2 INFO nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Deleting instance files /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_del#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.065 2 INFO nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Deletion of /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_del complete#033[00m
Oct  2 04:20:27 np0005465604 podman[280889]: 2025-10-02 08:20:27.145373479 +0000 UTC m=+0.052706971 container create 0b678d83ecf7a3e76f78061e86317b34b70e8d7e616d1a2b1f67bf341ba56e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 04:20:27 np0005465604 systemd[1]: Started libpod-conmon-0b678d83ecf7a3e76f78061e86317b34b70e8d7e616d1a2b1f67bf341ba56e99.scope.
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.191 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.191 2 INFO nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Creating image(s)#033[00m
Oct  2 04:20:27 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.217 2 DEBUG nova.storage.rbd_utils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:27 np0005465604 podman[280889]: 2025-10-02 08:20:27.128332515 +0000 UTC m=+0.035666037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:20:27 np0005465604 podman[280889]: 2025-10-02 08:20:27.232865967 +0000 UTC m=+0.140199489 container init 0b678d83ecf7a3e76f78061e86317b34b70e8d7e616d1a2b1f67bf341ba56e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:20:27 np0005465604 podman[280889]: 2025-10-02 08:20:27.241167527 +0000 UTC m=+0.148501019 container start 0b678d83ecf7a3e76f78061e86317b34b70e8d7e616d1a2b1f67bf341ba56e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:20:27 np0005465604 podman[280889]: 2025-10-02 08:20:27.244768199 +0000 UTC m=+0.152101691 container attach 0b678d83ecf7a3e76f78061e86317b34b70e8d7e616d1a2b1f67bf341ba56e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.244 2 DEBUG nova.storage.rbd_utils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:27 np0005465604 laughing_mestorf[280905]: 167 167
Oct  2 04:20:27 np0005465604 systemd[1]: libpod-0b678d83ecf7a3e76f78061e86317b34b70e8d7e616d1a2b1f67bf341ba56e99.scope: Deactivated successfully.
Oct  2 04:20:27 np0005465604 podman[280889]: 2025-10-02 08:20:27.25117585 +0000 UTC m=+0.158509402 container died 0b678d83ecf7a3e76f78061e86317b34b70e8d7e616d1a2b1f67bf341ba56e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:20:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0263a20137207b5ef3aefa6f568ab593730aff10ed66aa3785fa69f4cd8f8ebd-merged.mount: Deactivated successfully.
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.275 2 DEBUG nova.storage.rbd_utils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.284 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:27 np0005465604 podman[280889]: 2025-10-02 08:20:27.291314227 +0000 UTC m=+0.198647719 container remove 0b678d83ecf7a3e76f78061e86317b34b70e8d7e616d1a2b1f67bf341ba56e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:20:27 np0005465604 systemd[1]: libpod-conmon-0b678d83ecf7a3e76f78061e86317b34b70e8d7e616d1a2b1f67bf341ba56e99.scope: Deactivated successfully.
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.356 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.358 2 DEBUG oslo_concurrency.lockutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.359 2 DEBUG oslo_concurrency.lockutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.359 2 DEBUG oslo_concurrency.lockutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.378 2 DEBUG nova.storage.rbd_utils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.381 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:27 np0005465604 podman[281005]: 2025-10-02 08:20:27.467397027 +0000 UTC m=+0.038057612 container create f3a354e9c5090df65d6752de8fdd6795c9bb0b1efb74cc6f958bedc79c812203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:20:27 np0005465604 systemd[1]: Started libpod-conmon-f3a354e9c5090df65d6752de8fdd6795c9bb0b1efb74cc6f958bedc79c812203.scope.
Oct  2 04:20:27 np0005465604 podman[281005]: 2025-10-02 08:20:27.450782467 +0000 UTC m=+0.021443072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:20:27 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:20:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f5e907227e0a06d8a39aea8e49aa7fe896d4e0fc9d06848b8c8fe494c49770/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f5e907227e0a06d8a39aea8e49aa7fe896d4e0fc9d06848b8c8fe494c49770/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f5e907227e0a06d8a39aea8e49aa7fe896d4e0fc9d06848b8c8fe494c49770/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f5e907227e0a06d8a39aea8e49aa7fe896d4e0fc9d06848b8c8fe494c49770/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:20:27 np0005465604 podman[281005]: 2025-10-02 08:20:27.598625624 +0000 UTC m=+0.169286229 container init f3a354e9c5090df65d6752de8fdd6795c9bb0b1efb74cc6f958bedc79c812203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 04:20:27 np0005465604 podman[281005]: 2025-10-02 08:20:27.604595311 +0000 UTC m=+0.175255896 container start f3a354e9c5090df65d6752de8fdd6795c9bb0b1efb74cc6f958bedc79c812203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 04:20:27 np0005465604 podman[281005]: 2025-10-02 08:20:27.607268725 +0000 UTC m=+0.177929310 container attach f3a354e9c5090df65d6752de8fdd6795c9bb0b1efb74cc6f958bedc79c812203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.625 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.685 2 DEBUG nova.storage.rbd_utils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] resizing rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.764 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.764 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Ensure instance console log exists: /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.765 2 DEBUG oslo_concurrency.lockutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.765 2 DEBUG oslo_concurrency.lockutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.765 2 DEBUG oslo_concurrency.lockutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.766 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.770 2 WARNING nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.775 2 DEBUG nova.virt.libvirt.host [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.775 2 DEBUG nova.virt.libvirt.host [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.778 2 DEBUG nova.virt.libvirt.host [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.778 2 DEBUG nova.virt.libvirt.host [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.779 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.779 2 DEBUG nova.virt.hardware [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.779 2 DEBUG nova.virt.hardware [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.779 2 DEBUG nova.virt.hardware [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.779 2 DEBUG nova.virt.hardware [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.780 2 DEBUG nova.virt.hardware [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.780 2 DEBUG nova.virt.hardware [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.780 2 DEBUG nova.virt.hardware [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.780 2 DEBUG nova.virt.hardware [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.780 2 DEBUG nova.virt.hardware [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.780 2 DEBUG nova.virt.hardware [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.781 2 DEBUG nova.virt.hardware [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.781 2 DEBUG nova.objects.instance [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:27 np0005465604 nova_compute[260603]: 2025-10-02 08:20:27.796 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:20:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:20:27
Oct  2 04:20:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:20:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:20:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'vms', 'images']
Oct  2 04:20:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:20:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:20:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367048034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.264 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.317 2 DEBUG nova.storage.rbd_utils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.321 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]: {
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "osd_id": 2,
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "type": "bluestore"
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:    },
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "osd_id": 1,
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "type": "bluestore"
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:    },
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "osd_id": 0,
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:        "type": "bluestore"
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]:    }
Oct  2 04:20:28 np0005465604 stupefied_wilbur[281038]: }
Oct  2 04:20:28 np0005465604 systemd[1]: libpod-f3a354e9c5090df65d6752de8fdd6795c9bb0b1efb74cc6f958bedc79c812203.scope: Deactivated successfully.
Oct  2 04:20:28 np0005465604 systemd[1]: libpod-f3a354e9c5090df65d6752de8fdd6795c9bb0b1efb74cc6f958bedc79c812203.scope: Consumed 1.009s CPU time.
Oct  2 04:20:28 np0005465604 conmon[281038]: conmon f3a354e9c5090df65d67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f3a354e9c5090df65d6752de8fdd6795c9bb0b1efb74cc6f958bedc79c812203.scope/container/memory.events
Oct  2 04:20:28 np0005465604 podman[281005]: 2025-10-02 08:20:28.620485226 +0000 UTC m=+1.191145841 container died f3a354e9c5090df65d6752de8fdd6795c9bb0b1efb74cc6f958bedc79c812203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilbur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:20:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-49f5e907227e0a06d8a39aea8e49aa7fe896d4e0fc9d06848b8c8fe494c49770-merged.mount: Deactivated successfully.
Oct  2 04:20:28 np0005465604 podman[281005]: 2025-10-02 08:20:28.687969599 +0000 UTC m=+1.258630194 container remove f3a354e9c5090df65d6752de8fdd6795c9bb0b1efb74cc6f958bedc79c812203 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 04:20:28 np0005465604 systemd[1]: libpod-conmon-f3a354e9c5090df65d6752de8fdd6795c9bb0b1efb74cc6f958bedc79c812203.scope: Deactivated successfully.
Oct  2 04:20:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 52 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.4 MiB/s wr, 375 op/s
Oct  2 04:20:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:20:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:20:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:20:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890620329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:20:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c4995cd6-affc-433e-a8f2-277f0c51da10 does not exist
Oct  2 04:20:28 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 575002fc-2777-4615-b38c-fd0f649060de does not exist
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.760 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.762 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  <uuid>19ea7528-6b08-4fe7-8c5b-4d96247bd50e</uuid>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  <name>instance-00000006</name>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAdmin275Test-server-1855671180</nova:name>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:20:27</nova:creationTime>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:        <nova:user uuid="7646e9aa5e134fc7b0621d170e0ced6f">tempest-ServersAdmin275Test-824814869-project-member</nova:user>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:        <nova:project uuid="8d66eb05f1fd4827b705307a038d1157">tempest-ServersAdmin275Test-824814869</nova:project>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <entry name="serial">19ea7528-6b08-4fe7-8c5b-4d96247bd50e</entry>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <entry name="uuid">19ea7528-6b08-4fe7-8c5b-4d96247bd50e</entry>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/console.log" append="off"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:20:28 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:20:28 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:20:28 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:20:28 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.843 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.844 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.844 2 INFO nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Using config drive#033[00m
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.870 2 DEBUG nova.storage.rbd_utils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.892 2 DEBUG nova.objects.instance [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lazy-loading 'ec2_ids' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:28 np0005465604 nova_compute[260603]: 2025-10-02 08:20:28.948 2 DEBUG nova.objects.instance [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lazy-loading 'keypairs' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:29 np0005465604 nova_compute[260603]: 2025-10-02 08:20:29.379 2 INFO nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Creating config drive at /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config#033[00m
Oct  2 04:20:29 np0005465604 nova_compute[260603]: 2025-10-02 08:20:29.388 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9p4u8sk8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:29 np0005465604 nova_compute[260603]: 2025-10-02 08:20:29.525 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9p4u8sk8" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:29 np0005465604 nova_compute[260603]: 2025-10-02 08:20:29.565 2 DEBUG nova.storage.rbd_utils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] rbd image 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:29 np0005465604 nova_compute[260603]: 2025-10-02 08:20:29.570 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:29 np0005465604 nova_compute[260603]: 2025-10-02 08:20:29.723 2 DEBUG oslo_concurrency.processutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config 19ea7528-6b08-4fe7-8c5b-4d96247bd50e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:29 np0005465604 nova_compute[260603]: 2025-10-02 08:20:29.724 2 INFO nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Deleting local config drive /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e/disk.config because it was imported into RBD.#033[00m
Oct  2 04:20:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:20:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:20:29 np0005465604 systemd-machined[214636]: New machine qemu-11-instance-00000006.
Oct  2 04:20:29 np0005465604 systemd[1]: Started Virtual Machine qemu-11-instance-00000006.
Oct  2 04:20:29 np0005465604 nova_compute[260603]: 2025-10-02 08:20:29.917 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Acquiring lock "00e2c1b2-e94c-4824-8b74-54e2ded3fad4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:29 np0005465604 nova_compute[260603]: 2025-10-02 08:20:29.917 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "00e2c1b2-e94c-4824-8b74-54e2ded3fad4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:29 np0005465604 nova_compute[260603]: 2025-10-02 08:20:29.936 2 DEBUG nova.compute.manager [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.021 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.022 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.033 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.033 2 INFO nova.compute.claims [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.165 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:20:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3917501366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.544 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 19ea7528-6b08-4fe7-8c5b-4d96247bd50e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.545 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393230.5437145, 19ea7528-6b08-4fe7-8c5b-4d96247bd50e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.545 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.549 2 DEBUG nova.compute.manager [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.550 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.553 2 INFO nova.virt.libvirt.driver [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance spawned successfully.#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.554 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.555 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.559 2 DEBUG nova.compute.provider_tree [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.570 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.575 2 DEBUG nova.scheduler.client.report [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.578 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.582 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.582 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.582 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.583 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.583 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.584 2 DEBUG nova.virt.libvirt.driver [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.606 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.606 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393230.5484846, 19ea7528-6b08-4fe7-8c5b-4d96247bd50e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.606 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] VM Started (Lifecycle Event)#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.610 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.610 2 DEBUG nova.compute.manager [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.659 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.663 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.701 2 DEBUG nova.compute.manager [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.705 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:20:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 52 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 428 KiB/s rd, 2.9 MiB/s wr, 199 op/s
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.734 2 DEBUG nova.compute.manager [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.735 2 INFO nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.764 2 DEBUG nova.compute.manager [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.807 2 DEBUG oslo_concurrency.lockutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.808 2 DEBUG oslo_concurrency.lockutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.808 2 DEBUG nova.objects.instance [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.896 2 DEBUG oslo_concurrency.lockutils [None req-2ce0bfdf-663b-47ba-a066-c90c1791360b 6b7dd4e9bbee485694449f70e4c1e3f1 a38e0ac5a7ac454c9b9fb54c2bc78f6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.902 2 DEBUG nova.compute.manager [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.904 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.904 2 INFO nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Creating image(s)#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.934 2 DEBUG nova.storage.rbd_utils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] rbd image 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:30 np0005465604 nova_compute[260603]: 2025-10-02 08:20:30.975 2 DEBUG nova.storage.rbd_utils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] rbd image 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.008 2 DEBUG nova.storage.rbd_utils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] rbd image 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:31 np0005465604 podman[281424]: 2025-10-02 08:20:31.016060763 +0000 UTC m=+0.080698307 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.062 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:31 np0005465604 podman[281407]: 2025-10-02 08:20:31.073582002 +0000 UTC m=+0.138865566 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.117 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.118 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.118 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.119 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.140 2 DEBUG nova.storage.rbd_utils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] rbd image 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.144 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.302 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393216.3008795, c2f0bab0-8cd9-431b-8592-8a9ce9d5f973 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.303 2 INFO nova.compute.manager [-] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.323 2 DEBUG nova.compute.manager [None req-f4353b86-c69f-4920-af71-84021b92dda6 - - - - - -] [instance: c2f0bab0-8cd9-431b-8592-8a9ce9d5f973] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.412 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.484 2 DEBUG nova.storage.rbd_utils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] resizing rbd image 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.573 2 DEBUG nova.objects.instance [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lazy-loading 'migration_context' on Instance uuid 00e2c1b2-e94c-4824-8b74-54e2ded3fad4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.591 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.591 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Ensure instance console log exists: /var/lib/nova/instances/00e2c1b2-e94c-4824-8b74-54e2ded3fad4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.591 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.592 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.592 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.593 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.597 2 WARNING nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.601 2 DEBUG nova.virt.libvirt.host [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.602 2 DEBUG nova.virt.libvirt.host [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.605 2 DEBUG nova.virt.libvirt.host [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.605 2 DEBUG nova.virt.libvirt.host [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.606 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.606 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.606 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.607 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.607 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.607 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.607 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.608 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.608 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.608 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.609 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.609 2 DEBUG nova.virt.hardware [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.612 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.803 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquiring lock "19ea7528-6b08-4fe7-8c5b-4d96247bd50e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.803 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "19ea7528-6b08-4fe7-8c5b-4d96247bd50e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.804 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquiring lock "19ea7528-6b08-4fe7-8c5b-4d96247bd50e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.804 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "19ea7528-6b08-4fe7-8c5b-4d96247bd50e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.804 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "19ea7528-6b08-4fe7-8c5b-4d96247bd50e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.806 2 INFO nova.compute.manager [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Terminating instance
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.806 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquiring lock "refresh_cache-19ea7528-6b08-4fe7-8c5b-4d96247bd50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.807 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquired lock "refresh_cache-19ea7528-6b08-4fe7-8c5b-4d96247bd50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:20:31 np0005465604 nova_compute[260603]: 2025-10-02 08:20:31.807 2 DEBUG nova.network.neutron [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:20:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:20:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737537257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.055 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.092 2 DEBUG nova.storage.rbd_utils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] rbd image 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.098 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.194 2 DEBUG nova.network.neutron [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.522 2 DEBUG nova.network.neutron [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:20:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:20:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2279883804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.540 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Releasing lock "refresh_cache-19ea7528-6b08-4fe7-8c5b-4d96247bd50e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.542 2 DEBUG nova.compute.manager [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.553 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.555 2 DEBUG nova.objects.instance [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 00e2c1b2-e94c-4824-8b74-54e2ded3fad4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.575 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  <uuid>00e2c1b2-e94c-4824-8b74-54e2ded3fad4</uuid>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  <name>instance-0000000a</name>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-616507960</nova:name>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:20:31</nova:creationTime>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:        <nova:user uuid="36108a1cfc1044718131e9ca9303ddc7">tempest-ServerDiagnosticsV248Test-1904839291-project-member</nova:user>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:        <nova:project uuid="f32007e2bace4f1aa41722adfeef40bd">tempest-ServerDiagnosticsV248Test-1904839291</nova:project>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <entry name="serial">00e2c1b2-e94c-4824-8b74-54e2ded3fad4</entry>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <entry name="uuid">00e2c1b2-e94c-4824-8b74-54e2ded3fad4</entry>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk.config">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/00e2c1b2-e94c-4824-8b74-54e2ded3fad4/console.log" append="off"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:20:32 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:20:32 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:20:32 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:20:32 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 04:20:32 np0005465604 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct  2 04:20:32 np0005465604 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000006.scope: Consumed 2.746s CPU time.
Oct  2 04:20:32 np0005465604 systemd-machined[214636]: Machine qemu-11-instance-00000006 terminated.
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.630 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.631 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.632 2 INFO nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Using config drive
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.649 2 DEBUG nova.storage.rbd_utils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] rbd image 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:20:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 116 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.0 MiB/s wr, 282 op/s
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.764 2 INFO nova.virt.libvirt.driver [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance destroyed successfully.
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.764 2 DEBUG nova.objects.instance [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lazy-loading 'resources' on Instance uuid 19ea7528-6b08-4fe7-8c5b-4d96247bd50e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:20:32 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.990 2 INFO nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Creating config drive at /var/lib/nova/instances/00e2c1b2-e94c-4824-8b74-54e2ded3fad4/disk.config
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:32.999 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/00e2c1b2-e94c-4824-8b74-54e2ded3fad4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7znft8j6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.148 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/00e2c1b2-e94c-4824-8b74-54e2ded3fad4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7znft8j6" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.182 2 DEBUG nova.storage.rbd_utils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] rbd image 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.186 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/00e2c1b2-e94c-4824-8b74-54e2ded3fad4/disk.config 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.212 2 INFO nova.virt.libvirt.driver [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Deleting instance files /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_del#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.214 2 INFO nova.virt.libvirt.driver [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Deletion of /var/lib/nova/instances/19ea7528-6b08-4fe7-8c5b-4d96247bd50e_del complete#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.288 2 INFO nova.compute.manager [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.288 2 DEBUG oslo.service.loopingcall [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.289 2 DEBUG nova.compute.manager [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.289 2 DEBUG nova.network.neutron [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.353 2 DEBUG oslo_concurrency.processutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/00e2c1b2-e94c-4824-8b74-54e2ded3fad4/disk.config 00e2c1b2-e94c-4824-8b74-54e2ded3fad4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.355 2 INFO nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Deleting local config drive /var/lib/nova/instances/00e2c1b2-e94c-4824-8b74-54e2ded3fad4/disk.config because it was imported into RBD.#033[00m
Oct  2 04:20:33 np0005465604 systemd-machined[214636]: New machine qemu-12-instance-0000000a.
Oct  2 04:20:33 np0005465604 systemd[1]: Started Virtual Machine qemu-12-instance-0000000a.
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.585 2 DEBUG nova.network.neutron [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.616 2 DEBUG nova.network.neutron [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.634 2 INFO nova.compute.manager [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Took 0.34 seconds to deallocate network for instance.#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.687 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.688 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:33 np0005465604 nova_compute[260603]: 2025-10-02 08:20:33.773 2 DEBUG oslo_concurrency.processutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:20:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1806690907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.202 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393234.202211, 00e2c1b2-e94c-4824-8b74-54e2ded3fad4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.203 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.205 2 DEBUG nova.compute.manager [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.206 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.211 2 INFO nova.virt.libvirt.driver [-] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Instance spawned successfully.#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.211 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.218 2 DEBUG oslo_concurrency.processutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.222 2 DEBUG nova.compute.provider_tree [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.225 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.229 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.232 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.232 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.232 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.233 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.233 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.233 2 DEBUG nova.virt.libvirt.driver [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.271 2 DEBUG nova.scheduler.client.report [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.278 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.278 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393234.2032535, 00e2c1b2-e94c-4824-8b74-54e2ded3fad4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.279 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] VM Started (Lifecycle Event)#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.324 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.326 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.335 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.347 2 INFO nova.compute.manager [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Took 3.44 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.347 2 DEBUG nova.compute.manager [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.358 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.383 2 INFO nova.scheduler.client.report [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Deleted allocations for instance 19ea7528-6b08-4fe7-8c5b-4d96247bd50e#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.414 2 INFO nova.compute.manager [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Took 4.43 seconds to build instance.#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.437 2 DEBUG oslo_concurrency.lockutils [None req-33c1bbfd-49c7-47bd-9ba7-638a3ea935fa 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "00e2c1b2-e94c-4824-8b74-54e2ded3fad4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:34 np0005465604 nova_compute[260603]: 2025-10-02 08:20:34.450 2 DEBUG oslo_concurrency.lockutils [None req-efe5b691-27f8-4e1d-b0ac-b7c738842fad 7646e9aa5e134fc7b0621d170e0ced6f 8d66eb05f1fd4827b705307a038d1157 - - default default] Lock "19ea7528-6b08-4fe7-8c5b-4d96247bd50e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 123 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 290 op/s
Oct  2 04:20:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:34.807 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:34.807 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:34.808 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:35 np0005465604 nova_compute[260603]: 2025-10-02 08:20:35.945 2 DEBUG nova.compute.manager [None req-fce22ebf-0c33-4d26-9f2b-125a0b9968cb 55075d95a10d495a8486e28b2f042419 641a311cb82a4144ae48d7c227d01909 - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:35 np0005465604 nova_compute[260603]: 2025-10-02 08:20:35.948 2 INFO nova.compute.manager [None req-fce22ebf-0c33-4d26-9f2b-125a0b9968cb 55075d95a10d495a8486e28b2f042419 641a311cb82a4144ae48d7c227d01909 - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Retrieving diagnostics#033[00m
Oct  2 04:20:36 np0005465604 nova_compute[260603]: 2025-10-02 08:20:36.350 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393221.3490505, 7df7d893-6345-4445-8743-8d552b61a221 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:36 np0005465604 nova_compute[260603]: 2025-10-02 08:20:36.351 2 INFO nova.compute.manager [-] [instance: 7df7d893-6345-4445-8743-8d552b61a221] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:20:36 np0005465604 nova_compute[260603]: 2025-10-02 08:20:36.370 2 DEBUG nova.compute.manager [None req-a9f8a1b3-1bed-43ed-a215-2f0cfc664252 - - - - - -] [instance: 7df7d893-6345-4445-8743-8d552b61a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:36 np0005465604 nova_compute[260603]: 2025-10-02 08:20:36.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:36 np0005465604 nova_compute[260603]: 2025-10-02 08:20:36.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 123 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.8 MiB/s wr, 227 op/s
Oct  2 04:20:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:37 np0005465604 podman[281841]: 2025-10-02 08:20:37.051292871 +0000 UTC m=+0.104991797 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  2 04:20:37 np0005465604 nova_compute[260603]: 2025-10-02 08:20:37.421 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393222.4199142, e4ce9040-1e20-4d58-b967-21b17e817aea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:37 np0005465604 nova_compute[260603]: 2025-10-02 08:20:37.421 2 INFO nova.compute.manager [-] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:20:37 np0005465604 nova_compute[260603]: 2025-10-02 08:20:37.448 2 DEBUG nova.compute.manager [None req-fc94fc74-a722-46bb-a657-d749bbc6c2c6 - - - - - -] [instance: e4ce9040-1e20-4d58-b967-21b17e817aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005501868040632936 of space, bias 1.0, pg target 0.1650560412189881 quantized to 32 (current 32)
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:20:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.8 MiB/s wr, 317 op/s
Oct  2 04:20:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.8 MiB/s wr, 225 op/s
Oct  2 04:20:41 np0005465604 nova_compute[260603]: 2025-10-02 08:20:41.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:41 np0005465604 nova_compute[260603]: 2025-10-02 08:20:41.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.8 MiB/s wr, 225 op/s
Oct  2 04:20:43 np0005465604 podman[281862]: 2025-10-02 08:20:43.039975594 +0000 UTC m=+0.097142932 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Oct  2 04:20:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:44.251 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:20:44 np0005465604 nova_compute[260603]: 2025-10-02 08:20:44.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:44.253 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:20:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 804 KiB/s wr, 142 op/s
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.217 2 DEBUG nova.compute.manager [None req-ded4eb8d-9f34-4224-b84c-7b8f8889638c 55075d95a10d495a8486e28b2f042419 641a311cb82a4144ae48d7c227d01909 - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.223 2 INFO nova.compute.manager [None req-ded4eb8d-9f34-4224-b84c-7b8f8889638c 55075d95a10d495a8486e28b2f042419 641a311cb82a4144ae48d7c227d01909 - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Retrieving diagnostics#033[00m
Oct  2 04:20:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:20:46.257 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.440 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Acquiring lock "00e2c1b2-e94c-4824-8b74-54e2ded3fad4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.441 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "00e2c1b2-e94c-4824-8b74-54e2ded3fad4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.442 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Acquiring lock "00e2c1b2-e94c-4824-8b74-54e2ded3fad4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.442 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "00e2c1b2-e94c-4824-8b74-54e2ded3fad4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.442 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "00e2c1b2-e94c-4824-8b74-54e2ded3fad4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.444 2 INFO nova.compute.manager [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Terminating instance#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.445 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Acquiring lock "refresh_cache-00e2c1b2-e94c-4824-8b74-54e2ded3fad4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.445 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Acquired lock "refresh_cache-00e2c1b2-e94c-4824-8b74-54e2ded3fad4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.445 2 DEBUG nova.network.neutron [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:46 np0005465604 nova_compute[260603]: 2025-10-02 08:20:46.734 2 DEBUG nova.network.neutron [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:20:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 90 op/s
Oct  2 04:20:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.064 2 DEBUG nova.network.neutron [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.082 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Releasing lock "refresh_cache-00e2c1b2-e94c-4824-8b74-54e2ded3fad4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.083 2 DEBUG nova.compute.manager [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:20:47 np0005465604 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct  2 04:20:47 np0005465604 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000a.scope: Consumed 12.049s CPU time.
Oct  2 04:20:47 np0005465604 systemd-machined[214636]: Machine qemu-12-instance-0000000a terminated.
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.311 2 INFO nova.virt.libvirt.driver [-] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Instance destroyed successfully.#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.312 2 DEBUG nova.objects.instance [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lazy-loading 'resources' on Instance uuid 00e2c1b2-e94c-4824-8b74-54e2ded3fad4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.746 2 INFO nova.virt.libvirt.driver [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Deleting instance files /var/lib/nova/instances/00e2c1b2-e94c-4824-8b74-54e2ded3fad4_del#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.746 2 INFO nova.virt.libvirt.driver [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Deletion of /var/lib/nova/instances/00e2c1b2-e94c-4824-8b74-54e2ded3fad4_del complete#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.762 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393232.7616897, 19ea7528-6b08-4fe7-8c5b-4d96247bd50e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.762 2 INFO nova.compute.manager [-] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.800 2 DEBUG nova.compute.manager [None req-aa85a51b-db9a-431d-bdee-d99b4c5e1da8 - - - - - -] [instance: 19ea7528-6b08-4fe7-8c5b-4d96247bd50e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.809 2 INFO nova.compute.manager [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.809 2 DEBUG oslo.service.loopingcall [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.809 2 DEBUG nova.compute.manager [-] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:20:47 np0005465604 nova_compute[260603]: 2025-10-02 08:20:47.810 2 DEBUG nova.network.neutron [-] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:20:48 np0005465604 nova_compute[260603]: 2025-10-02 08:20:48.150 2 DEBUG nova.network.neutron [-] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:20:48 np0005465604 nova_compute[260603]: 2025-10-02 08:20:48.166 2 DEBUG nova.network.neutron [-] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:20:48 np0005465604 nova_compute[260603]: 2025-10-02 08:20:48.230 2 INFO nova.compute.manager [-] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Took 0.42 seconds to deallocate network for instance.#033[00m
Oct  2 04:20:48 np0005465604 nova_compute[260603]: 2025-10-02 08:20:48.323 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:48 np0005465604 nova_compute[260603]: 2025-10-02 08:20:48.324 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:48 np0005465604 nova_compute[260603]: 2025-10-02 08:20:48.398 2 DEBUG oslo_concurrency.processutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 94 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 164 op/s
Oct  2 04:20:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:20:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3071724723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:20:48 np0005465604 nova_compute[260603]: 2025-10-02 08:20:48.856 2 DEBUG oslo_concurrency.processutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:48 np0005465604 nova_compute[260603]: 2025-10-02 08:20:48.864 2 DEBUG nova.compute.provider_tree [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:20:48 np0005465604 nova_compute[260603]: 2025-10-02 08:20:48.882 2 DEBUG nova.scheduler.client.report [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:20:48 np0005465604 nova_compute[260603]: 2025-10-02 08:20:48.920 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:48 np0005465604 nova_compute[260603]: 2025-10-02 08:20:48.975 2 INFO nova.scheduler.client.report [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Deleted allocations for instance 00e2c1b2-e94c-4824-8b74-54e2ded3fad4#033[00m
Oct  2 04:20:49 np0005465604 nova_compute[260603]: 2025-10-02 08:20:49.043 2 DEBUG oslo_concurrency.lockutils [None req-1bab6f90-2865-4ab4-9612-3300011a3708 36108a1cfc1044718131e9ca9303ddc7 f32007e2bace4f1aa41722adfeef40bd - - default default] Lock "00e2c1b2-e94c-4824-8b74-54e2ded3fad4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 94 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct  2 04:20:51 np0005465604 nova_compute[260603]: 2025-10-02 08:20:51.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:20:51 np0005465604 nova_compute[260603]: 2025-10-02 08:20:51.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:20:51 np0005465604 nova_compute[260603]: 2025-10-02 08:20:51.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 04:20:51 np0005465604 nova_compute[260603]: 2025-10-02 08:20:51.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 04:20:51 np0005465604 nova_compute[260603]: 2025-10-02 08:20:51.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:51 np0005465604 nova_compute[260603]: 2025-10-02 08:20:51.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 04:20:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct  2 04:20:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct  2 04:20:56 np0005465604 nova_compute[260603]: 2025-10-02 08:20:56.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:20:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct  2 04:20:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:20:57 np0005465604 nova_compute[260603]: 2025-10-02 08:20:57.932 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Acquiring lock "f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:57 np0005465604 nova_compute[260603]: 2025-10-02 08:20:57.933 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:57 np0005465604 nova_compute[260603]: 2025-10-02 08:20:57.951 2 DEBUG nova.compute.manager [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.031 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.032 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.041 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.041 2 INFO nova.compute.claims [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.160 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:20:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854039446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.630 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.639 2 DEBUG nova.compute.provider_tree [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.656 2 DEBUG nova.scheduler.client.report [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.681 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.683 2 DEBUG nova.compute.manager [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.737 2 DEBUG nova.compute.manager [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.737 2 DEBUG nova.network.neutron [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:20:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.761 2 INFO nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.782 2 DEBUG nova.compute.manager [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.889 2 DEBUG nova.compute.manager [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.891 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.892 2 INFO nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Creating image(s)#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.924 2 DEBUG nova.storage.rbd_utils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] rbd image f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.960 2 DEBUG nova.storage.rbd_utils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] rbd image f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.991 2 DEBUG nova.storage.rbd_utils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] rbd image f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:58 np0005465604 nova_compute[260603]: 2025-10-02 08:20:58.996 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.084 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.086 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.088 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.089 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.119 2 DEBUG nova.storage.rbd_utils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] rbd image f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.124 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.421 2 DEBUG nova.network.neutron [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.421 2 DEBUG nova.compute.manager [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.422 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.477 2 DEBUG nova.storage.rbd_utils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] resizing rbd image f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.571 2 DEBUG nova.objects.instance [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lazy-loading 'migration_context' on Instance uuid f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.593 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.593 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Ensure instance console log exists: /var/lib/nova/instances/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.594 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.595 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.595 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.597 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.601 2 WARNING nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.607 2 DEBUG nova.virt.libvirt.host [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.608 2 DEBUG nova.virt.libvirt.host [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.613 2 DEBUG nova.virt.libvirt.host [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.613 2 DEBUG nova.virt.libvirt.host [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.614 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.614 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.615 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.615 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.616 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.616 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.616 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.617 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.617 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.617 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.618 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.618 2 DEBUG nova.virt.hardware [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:20:59 np0005465604 nova_compute[260603]: 2025-10-02 08:20:59.621 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261326549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.051 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.072 2 DEBUG nova.storage.rbd_utils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] rbd image f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.077 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.254 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.256 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.281 2 DEBUG nova.compute.manager [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.352 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.353 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.359 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.360 2 INFO nova.compute.claims [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:21:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766610789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.485 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.487 2 DEBUG nova.objects.instance [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lazy-loading 'pci_devices' on Instance uuid f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.489 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.526 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  <uuid>f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63</uuid>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  <name>instance-0000000b</name>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1149523939</nova:name>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:20:59</nova:creationTime>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:        <nova:user uuid="32c06977868e4dcbb10bba02a1d94488">tempest-ServerDiagnosticsNegativeTest-1657948836-project-member</nova:user>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:        <nova:project uuid="8cbab7914fc64d04bf413669eb13570a">tempest-ServerDiagnosticsNegativeTest-1657948836</nova:project>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <entry name="serial">f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63</entry>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <entry name="uuid">f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63</entry>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk.config">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63/console.log" append="off"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:21:00 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:21:00 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:21:00 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:21:00 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.591 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.592 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.593 2 INFO nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Using config drive#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.617 2 DEBUG nova.storage.rbd_utils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] rbd image f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.786 2 INFO nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Creating config drive at /var/lib/nova/instances/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63/disk.config#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.796 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp32ed7hfh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3839356547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.915 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.923 2 DEBUG nova.compute.provider_tree [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.927 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp32ed7hfh" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.966 2 DEBUG nova.storage.rbd_utils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] rbd image f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.971 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63/disk.config f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:00 np0005465604 nova_compute[260603]: 2025-10-02 08:21:00.994 2 DEBUG nova.scheduler.client.report [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.024 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.025 2 DEBUG nova.compute.manager [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.111 2 DEBUG nova.compute.manager [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.113 2 DEBUG nova.network.neutron [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.120 2 DEBUG oslo_concurrency.processutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63/disk.config f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.122 2 INFO nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Deleting local config drive /var/lib/nova/instances/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63/disk.config because it was imported into RBD.#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.139 2 INFO nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.163 2 DEBUG nova.compute.manager [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:21:01 np0005465604 systemd-machined[214636]: New machine qemu-13-instance-0000000b.
Oct  2 04:21:01 np0005465604 systemd[1]: Started Virtual Machine qemu-13-instance-0000000b.
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.264 2 DEBUG nova.compute.manager [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.267 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.267 2 INFO nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Creating image(s)#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.316 2 DEBUG nova.storage.rbd_utils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:01 np0005465604 podman[282262]: 2025-10-02 08:21:01.322983651 +0000 UTC m=+0.097749340 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 04:21:01 np0005465604 podman[282261]: 2025-10-02 08:21:01.352484045 +0000 UTC m=+0.134237702 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.362 2 DEBUG nova.storage.rbd_utils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.392 2 DEBUG nova.storage.rbd_utils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.397 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.431 2 DEBUG nova.policy [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b82955fab174d8aac325e64068908f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '13ecc6dea7a8465394379400d84a053e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.488 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.490 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.491 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.491 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.513 2 DEBUG nova.storage.rbd_utils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.517 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.685 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.690 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.709 2 DEBUG nova.compute.manager [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.758 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.785 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.786 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.818 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.819 2 INFO nova.compute.claims [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.825 2 DEBUG nova.storage.rbd_utils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] resizing rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.917 2 DEBUG nova.objects.instance [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'migration_context' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.928 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.929 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Ensure instance console log exists: /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.929 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.930 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.930 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:01 np0005465604 nova_compute[260603]: 2025-10-02 08:21:01.954 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.097 2 DEBUG nova.compute.manager [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.098 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.099 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393262.0980082, f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.099 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.106 2 INFO nova.virt.libvirt.driver [-] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Instance spawned successfully.#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.107 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.133 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.138 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.138 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.139 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.140 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.140 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.141 2 DEBUG nova.virt.libvirt.driver [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.145 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.183 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.183 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393262.0985255, f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.183 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] VM Started (Lifecycle Event)#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.203 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.208 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.211 2 INFO nova.compute.manager [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Took 3.32 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.212 2 DEBUG nova.compute.manager [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.213 2 DEBUG nova.network.neutron [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Successfully created port: 205e575c-4af3-4a6a-af77-fd96af608b0a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.223 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.266 2 INFO nova.compute.manager [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Took 4.27 seconds to build instance.#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.281 2 DEBUG oslo_concurrency.lockutils [None req-d1102ff7-8443-4be8-9177-9817b896402a 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.308 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393247.307016, 00e2c1b2-e94c-4824-8b74-54e2ded3fad4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.308 2 INFO nova.compute.manager [-] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.325 2 DEBUG nova.compute.manager [None req-a5629ab5-30ff-4902-9eab-369fd62529f6 - - - - - -] [instance: 00e2c1b2-e94c-4824-8b74-54e2ded3fad4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659179510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.434 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.439 2 DEBUG nova.compute.provider_tree [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.456 2 DEBUG nova.scheduler.client.report [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.485 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.486 2 DEBUG nova.compute.manager [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.538 2 DEBUG nova.compute.manager [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.539 2 DEBUG nova.network.neutron [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.558 2 INFO nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.574 2 DEBUG nova.compute.manager [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.657 2 DEBUG nova.compute.manager [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.659 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.659 2 INFO nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Creating image(s)#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.686 2 DEBUG nova.storage.rbd_utils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.717 2 DEBUG nova.storage.rbd_utils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 112 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.9 MiB/s wr, 43 op/s
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.754 2 DEBUG nova.storage.rbd_utils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.758 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.846 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.848 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.850 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.851 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.890 2 DEBUG nova.storage.rbd_utils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:02 np0005465604 nova_compute[260603]: 2025-10-02 08:21:02.895 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.018 2 DEBUG nova.policy [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b82955fab174d8aac325e64068908f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '13ecc6dea7a8465394379400d84a053e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.085 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Acquiring lock "f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.087 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.088 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Acquiring lock "f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.089 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.090 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.092 2 INFO nova.compute.manager [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Terminating instance#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.093 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Acquiring lock "refresh_cache-f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.093 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Acquired lock "refresh_cache-f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.094 2 DEBUG nova.network.neutron [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.148 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.214 2 DEBUG nova.storage.rbd_utils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] resizing rbd image efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.311 2 DEBUG nova.objects.instance [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'migration_context' on Instance uuid efe21c23-e200-4841-be3e-2b3e7c8b5c38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.332 2 DEBUG nova.network.neutron [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.335 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.335 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Ensure instance console log exists: /var/lib/nova/instances/efe21c23-e200-4841-be3e-2b3e7c8b5c38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.336 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.336 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.336 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.606 2 DEBUG nova.network.neutron [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Successfully updated port: 205e575c-4af3-4a6a-af77-fd96af608b0a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.623 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "refresh_cache-09c17c12-7dac-4fc8-917e-cb2efa1d4607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.624 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquired lock "refresh_cache-09c17c12-7dac-4fc8-917e-cb2efa1d4607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.624 2 DEBUG nova.network.neutron [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.750 2 DEBUG nova.compute.manager [req-af2697b7-5878-4b6b-83ee-88afb07b9ac1 req-711fa848-bbf9-43e7-9cad-bbfd86b9b6bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-changed-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.751 2 DEBUG nova.compute.manager [req-af2697b7-5878-4b6b-83ee-88afb07b9ac1 req-711fa848-bbf9-43e7-9cad-bbfd86b9b6bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Refreshing instance network info cache due to event network-changed-205e575c-4af3-4a6a-af77-fd96af608b0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.751 2 DEBUG oslo_concurrency.lockutils [req-af2697b7-5878-4b6b-83ee-88afb07b9ac1 req-711fa848-bbf9-43e7-9cad-bbfd86b9b6bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-09c17c12-7dac-4fc8-917e-cb2efa1d4607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.790 2 DEBUG nova.network.neutron [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.809 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Releasing lock "refresh_cache-f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.810 2 DEBUG nova.compute.manager [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.847 2 DEBUG nova.network.neutron [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:21:03 np0005465604 nova_compute[260603]: 2025-10-02 08:21:03.877 2 DEBUG nova.network.neutron [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Successfully created port: 906b5985-d9e3-442e-a67d-085faadd4d3e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:21:03 np0005465604 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct  2 04:21:03 np0005465604 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000b.scope: Consumed 2.608s CPU time.
Oct  2 04:21:03 np0005465604 systemd-machined[214636]: Machine qemu-13-instance-0000000b terminated.
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.035 2 INFO nova.virt.libvirt.driver [-] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Instance destroyed successfully.
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.036 2 DEBUG nova.objects.instance [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lazy-loading 'resources' on Instance uuid f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.447 2 INFO nova.virt.libvirt.driver [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Deleting instance files /var/lib/nova/instances/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_del
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.448 2 INFO nova.virt.libvirt.driver [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Deletion of /var/lib/nova/instances/f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63_del complete
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.503 2 INFO nova.compute.manager [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.503 2 DEBUG oslo.service.loopingcall [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.503 2 DEBUG nova.compute.manager [-] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.504 2 DEBUG nova.network.neutron [-] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.698 2 DEBUG nova.network.neutron [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Successfully updated port: 906b5985-d9e3-442e-a67d-085faadd4d3e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.718 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "refresh_cache-efe21c23-e200-4841-be3e-2b3e7c8b5c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.719 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquired lock "refresh_cache-efe21c23-e200-4841-be3e-2b3e7c8b5c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.719 2 DEBUG nova.network.neutron [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.738 2 DEBUG nova.network.neutron [-] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:21:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 142 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 4.0 MiB/s wr, 75 op/s
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.752 2 DEBUG nova.network.neutron [-] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.767 2 INFO nova.compute.manager [-] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Took 0.26 seconds to deallocate network for instance.
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.819 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.820 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.873 2 DEBUG nova.network.neutron [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.905 2 DEBUG oslo_concurrency.processutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:04 np0005465604 nova_compute[260603]: 2025-10-02 08:21:04.993 2 DEBUG nova.network.neutron [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Updating instance_info_cache with network_info: [{"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.021 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Releasing lock "refresh_cache-09c17c12-7dac-4fc8-917e-cb2efa1d4607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.021 2 DEBUG nova.compute.manager [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance network_info: |[{"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.022 2 DEBUG oslo_concurrency.lockutils [req-af2697b7-5878-4b6b-83ee-88afb07b9ac1 req-711fa848-bbf9-43e7-9cad-bbfd86b9b6bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-09c17c12-7dac-4fc8-917e-cb2efa1d4607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.022 2 DEBUG nova.network.neutron [req-af2697b7-5878-4b6b-83ee-88afb07b9ac1 req-711fa848-bbf9-43e7-9cad-bbfd86b9b6bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Refreshing network info cache for port 205e575c-4af3-4a6a-af77-fd96af608b0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.028 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Start _get_guest_xml network_info=[{"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.034 2 WARNING nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.047 2 DEBUG nova.virt.libvirt.host [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.048 2 DEBUG nova.virt.libvirt.host [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.052 2 DEBUG nova.virt.libvirt.host [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.053 2 DEBUG nova.virt.libvirt.host [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.054 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.054 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.055 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.055 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.056 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.056 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.056 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.057 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.058 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.058 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.058 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.059 2 DEBUG nova.virt.hardware [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.064 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1458170417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.351 2 DEBUG oslo_concurrency.processutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.360 2 DEBUG nova.compute.provider_tree [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.382 2 DEBUG nova.scheduler.client.report [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.408 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.449 2 INFO nova.scheduler.client.report [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Deleted allocations for instance f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63
Oct  2 04:21:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/10217304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.491 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.522 2 DEBUG nova.storage.rbd_utils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.527 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.570 2 DEBUG oslo_concurrency.lockutils [None req-a6ea24cc-0a1e-4666-ac32-8947faafabe6 32c06977868e4dcbb10bba02a1d94488 8cbab7914fc64d04bf413669eb13570a - - default default] Lock "f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043813430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.967 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.969 2 DEBUG nova.virt.libvirt.vif [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:20:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1588661618',display_name='tempest-ServersAdminTestJSON-server-1588661618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1588661618',id=12,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-v30mm0vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:01Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=09c17c12-7dac-4fc8-917e-cb2efa1d4607,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.969 2 DEBUG nova.network.os_vif_util [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.970 2 DEBUG nova.network.os_vif_util [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:21:05 np0005465604 nova_compute[260603]: 2025-10-02 08:21:05.972 2 DEBUG nova.objects.instance [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'pci_devices' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.002 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  <uuid>09c17c12-7dac-4fc8-917e-cb2efa1d4607</uuid>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  <name>instance-0000000c</name>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAdminTestJSON-server-1588661618</nova:name>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:21:05</nova:creationTime>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <nova:user uuid="2b82955fab174d8aac325e64068908f5">tempest-ServersAdminTestJSON-83453142-project-member</nova:user>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <nova:project uuid="13ecc6dea7a8465394379400d84a053e">tempest-ServersAdminTestJSON-83453142</nova:project>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <nova:port uuid="205e575c-4af3-4a6a-af77-fd96af608b0a">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <entry name="serial">09c17c12-7dac-4fc8-917e-cb2efa1d4607</entry>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <entry name="uuid">09c17c12-7dac-4fc8-917e-cb2efa1d4607</entry>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:24:2c:3a"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <target dev="tap205e575c-4a"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/console.log" append="off"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:21:06 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:21:06 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:21:06 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:21:06 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.003 2 DEBUG nova.compute.manager [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Preparing to wait for external event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.003 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.004 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.004 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.006 2 DEBUG nova.virt.libvirt.vif [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:20:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1588661618',display_name='tempest-ServersAdminTestJSON-server-1588661618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1588661618',id=12,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-v30mm0vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:01Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=09c17c12-7dac-4fc8-917e-cb2efa1d4607,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.006 2 DEBUG nova.network.os_vif_util [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.007 2 DEBUG nova.network.os_vif_util [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.008 2 DEBUG os_vif [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.011 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap205e575c-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.016 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap205e575c-4a, col_values=(('external_ids', {'iface-id': '205e575c-4af3-4a6a-af77-fd96af608b0a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:2c:3a', 'vm-uuid': '09c17c12-7dac-4fc8-917e-cb2efa1d4607'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:06 np0005465604 NetworkManager[45129]: <info>  [1759393266.0205] manager: (tap205e575c-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.030 2 DEBUG nova.network.neutron [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Updating instance_info_cache with network_info: [{"id": "906b5985-d9e3-442e-a67d-085faadd4d3e", "address": "fa:16:3e:c1:c3:98", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906b5985-d9", "ovs_interfaceid": "906b5985-d9e3-442e-a67d-085faadd4d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.036 2 INFO os_vif [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a')#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.079 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Releasing lock "refresh_cache-efe21c23-e200-4841-be3e-2b3e7c8b5c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.079 2 DEBUG nova.compute.manager [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Instance network_info: |[{"id": "906b5985-d9e3-442e-a67d-085faadd4d3e", "address": "fa:16:3e:c1:c3:98", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906b5985-d9", "ovs_interfaceid": "906b5985-d9e3-442e-a67d-085faadd4d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.082 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Start _get_guest_xml network_info=[{"id": "906b5985-d9e3-442e-a67d-085faadd4d3e", "address": "fa:16:3e:c1:c3:98", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906b5985-d9", "ovs_interfaceid": "906b5985-d9e3-442e-a67d-085faadd4d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.089 2 WARNING nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.104 2 DEBUG nova.virt.libvirt.host [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.105 2 DEBUG nova.virt.libvirt.host [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.109 2 DEBUG nova.virt.libvirt.host [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.109 2 DEBUG nova.virt.libvirt.host [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.110 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.110 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.111 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.111 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.111 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.111 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.112 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.112 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.112 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.113 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.113 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.113 2 DEBUG nova.virt.hardware [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.117 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.163 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.164 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.165 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No VIF found with MAC fa:16:3e:24:2c:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.166 2 INFO nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Using config drive#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.198 2 DEBUG nova.storage.rbd_utils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.516 2 DEBUG nova.compute.manager [req-0e2c6ab5-3b6e-46d1-baab-ffab3e1cc851 req-5d77d4a3-cae5-45d7-9fb8-add5eb613b82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Received event network-changed-906b5985-d9e3-442e-a67d-085faadd4d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.516 2 DEBUG nova.compute.manager [req-0e2c6ab5-3b6e-46d1-baab-ffab3e1cc851 req-5d77d4a3-cae5-45d7-9fb8-add5eb613b82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Refreshing instance network info cache due to event network-changed-906b5985-d9e3-442e-a67d-085faadd4d3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.517 2 DEBUG oslo_concurrency.lockutils [req-0e2c6ab5-3b6e-46d1-baab-ffab3e1cc851 req-5d77d4a3-cae5-45d7-9fb8-add5eb613b82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-efe21c23-e200-4841-be3e-2b3e7c8b5c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.518 2 DEBUG oslo_concurrency.lockutils [req-0e2c6ab5-3b6e-46d1-baab-ffab3e1cc851 req-5d77d4a3-cae5-45d7-9fb8-add5eb613b82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-efe21c23-e200-4841-be3e-2b3e7c8b5c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.518 2 DEBUG nova.network.neutron [req-0e2c6ab5-3b6e-46d1-baab-ffab3e1cc851 req-5d77d4a3-cae5-45d7-9fb8-add5eb613b82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Refreshing network info cache for port 906b5985-d9e3-442e-a67d-085faadd4d3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1961611911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.576 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.605 2 DEBUG nova.storage.rbd_utils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.609 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.632 2 INFO nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Creating config drive at /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.638 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphyz75j4z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.662 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.664 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.664 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:21:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 142 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 4.0 MiB/s wr, 75 op/s
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.769 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphyz75j4z" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.796 2 DEBUG nova.storage.rbd_utils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.800 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.829 2 DEBUG nova.network.neutron [req-af2697b7-5878-4b6b-83ee-88afb07b9ac1 req-711fa848-bbf9-43e7-9cad-bbfd86b9b6bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Updated VIF entry in instance network info cache for port 205e575c-4af3-4a6a-af77-fd96af608b0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.830 2 DEBUG nova.network.neutron [req-af2697b7-5878-4b6b-83ee-88afb07b9ac1 req-711fa848-bbf9-43e7-9cad-bbfd86b9b6bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Updating instance_info_cache with network_info: [{"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.834 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "45e20d73-3f71-4e90-b1b5-f30fcb043922" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.835 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "45e20d73-3f71-4e90-b1b5-f30fcb043922" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.853 2 DEBUG oslo_concurrency.lockutils [req-af2697b7-5878-4b6b-83ee-88afb07b9ac1 req-711fa848-bbf9-43e7-9cad-bbfd86b9b6bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-09c17c12-7dac-4fc8-917e-cb2efa1d4607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.857 2 DEBUG nova.compute.manager [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.929 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.929 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.934 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.935 2 INFO nova.compute.claims [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:21:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.983 2 DEBUG oslo_concurrency.processutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:06 np0005465604 nova_compute[260603]: 2025-10-02 08:21:06.984 2 INFO nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Deleting local config drive /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config because it was imported into RBD.#033[00m
Oct  2 04:21:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289450582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.035 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.037 2 DEBUG nova.virt.libvirt.vif [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:21:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1997142708',display_name='tempest-ServersAdminTestJSON-server-1997142708',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1997142708',id=13,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-85fyi1fx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-
project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:02Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=efe21c23-e200-4841-be3e-2b3e7c8b5c38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "906b5985-d9e3-442e-a67d-085faadd4d3e", "address": "fa:16:3e:c1:c3:98", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906b5985-d9", "ovs_interfaceid": "906b5985-d9e3-442e-a67d-085faadd4d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.037 2 DEBUG nova.network.os_vif_util [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "906b5985-d9e3-442e-a67d-085faadd4d3e", "address": "fa:16:3e:c1:c3:98", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906b5985-d9", "ovs_interfaceid": "906b5985-d9e3-442e-a67d-085faadd4d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.038 2 DEBUG nova.network.os_vif_util [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c3:98,bridge_name='br-int',has_traffic_filtering=True,id=906b5985-d9e3-442e-a67d-085faadd4d3e,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906b5985-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.039 2 DEBUG nova.objects.instance [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'pci_devices' on Instance uuid efe21c23-e200-4841-be3e-2b3e7c8b5c38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:07 np0005465604 kernel: tap205e575c-4a: entered promiscuous mode
Oct  2 04:21:07 np0005465604 NetworkManager[45129]: <info>  [1759393267.0463] manager: (tap205e575c-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:07Z|00043|binding|INFO|Claiming lport 205e575c-4af3-4a6a-af77-fd96af608b0a for this chassis.
Oct  2 04:21:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:07Z|00044|binding|INFO|205e575c-4af3-4a6a-af77-fd96af608b0a: Claiming fa:16:3e:24:2c:3a 10.100.0.12
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.054 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  <uuid>efe21c23-e200-4841-be3e-2b3e7c8b5c38</uuid>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  <name>instance-0000000d</name>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAdminTestJSON-server-1997142708</nova:name>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:21:06</nova:creationTime>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <nova:user uuid="2b82955fab174d8aac325e64068908f5">tempest-ServersAdminTestJSON-83453142-project-member</nova:user>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <nova:project uuid="13ecc6dea7a8465394379400d84a053e">tempest-ServersAdminTestJSON-83453142</nova:project>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <nova:port uuid="906b5985-d9e3-442e-a67d-085faadd4d3e">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <entry name="serial">efe21c23-e200-4841-be3e-2b3e7c8b5c38</entry>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <entry name="uuid">efe21c23-e200-4841-be3e-2b3e7c8b5c38</entry>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk.config">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:c1:c3:98"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <target dev="tap906b5985-d9"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/efe21c23-e200-4841-be3e-2b3e7c8b5c38/console.log" append="off"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:21:07 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:21:07 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:21:07 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:21:07 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.054 2 DEBUG nova.compute.manager [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Preparing to wait for external event network-vif-plugged-906b5985-d9e3-442e-a67d-085faadd4d3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.055 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.055 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.055 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.056 2 DEBUG nova.virt.libvirt.vif [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:21:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1997142708',display_name='tempest-ServersAdminTestJSON-server-1997142708',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1997142708',id=13,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-85fyi1fx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON
-83453142-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:02Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=efe21c23-e200-4841-be3e-2b3e7c8b5c38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "906b5985-d9e3-442e-a67d-085faadd4d3e", "address": "fa:16:3e:c1:c3:98", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906b5985-d9", "ovs_interfaceid": "906b5985-d9e3-442e-a67d-085faadd4d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.056 2 DEBUG nova.network.os_vif_util [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "906b5985-d9e3-442e-a67d-085faadd4d3e", "address": "fa:16:3e:c1:c3:98", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906b5985-d9", "ovs_interfaceid": "906b5985-d9e3-442e-a67d-085faadd4d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.057 2 DEBUG nova.network.os_vif_util [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c3:98,bridge_name='br-int',has_traffic_filtering=True,id=906b5985-d9e3-442e-a67d-085faadd4d3e,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906b5985-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.057 2 DEBUG os_vif [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c3:98,bridge_name='br-int',has_traffic_filtering=True,id=906b5985-d9e3-442e-a67d-085faadd4d3e,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906b5985-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.062 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap906b5985-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.063 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap906b5985-d9, col_values=(('external_ids', {'iface-id': '906b5985-d9e3-442e-a67d-085faadd4d3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:c3:98', 'vm-uuid': 'efe21c23-e200-4841-be3e-2b3e7c8b5c38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:21:07 np0005465604 NetworkManager[45129]: <info>  [1759393267.0676] manager: (tap906b5985-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.072 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:2c:3a 10.100.0.12'], port_security=['fa:16:3e:24:2c:3a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '09c17c12-7dac-4fc8-917e-cb2efa1d4607', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=205e575c-4af3-4a6a-af77-fd96af608b0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.073 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 205e575c-4af3-4a6a-af77-fd96af608b0a in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e bound to our chassis#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.075 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.086 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cf10ff3b-b900-4cc6-88c1-19cd6e5ad35a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.087 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a12cdd8-51 in ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.089 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a12cdd8-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.089 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[45f111df-fbb4-40e6-8236-16d222d24b96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.090 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[15148929-e966-4e37-9ab8-d23dd5b31e16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.100 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.104 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9c783e-0290-4db1-ab34-0c5827778704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:07 np0005465604 systemd-machined[214636]: New machine qemu-14-instance-0000000c.
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.142 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebe4707-3716-42c5-941d-ddb020938636]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:07 np0005465604 systemd[1]: Started Virtual Machine qemu-14-instance-0000000c.
Oct  2 04:21:07 np0005465604 systemd-udevd[282974]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:21:07 np0005465604 NetworkManager[45129]: <info>  [1759393267.1790] device (tap205e575c-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:21:07 np0005465604 NetworkManager[45129]: <info>  [1759393267.1805] device (tap205e575c-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.186 2 INFO os_vif [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c3:98,bridge_name='br-int',has_traffic_filtering=True,id=906b5985-d9e3-442e-a67d-085faadd4d3e,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906b5985-d9')#033[00m
Oct  2 04:21:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:07Z|00045|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a ovn-installed in OVS
Oct  2 04:21:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:07Z|00046|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a up in Southbound
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.199 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[126825af-c3a4-46cc-85e0-9b8ed9bbd06a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:07 np0005465604 NetworkManager[45129]: <info>  [1759393267.2114] manager: (tap6a12cdd8-50): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.209 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[45a6f446-e329-4e77-ad89-fa3a3bd51150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:07 np0005465604 podman[282952]: 2025-10-02 08:21:07.213775159 +0000 UTC m=+0.105233413 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.244 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2cb348-a655-4a04-9847-988f6c01a786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.247 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ded562e7-b73f-40b9-960b-1e03efec158c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.257 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.258 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.258 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No VIF found with MAC fa:16:3e:c1:c3:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.258 2 INFO nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Using config drive#033[00m
Oct  2 04:21:07 np0005465604 NetworkManager[45129]: <info>  [1759393267.2783] device (tap6a12cdd8-50): carrier: link connected
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.278 2 DEBUG nova.storage.rbd_utils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.285 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cd72c0a5-115a-4a8b-8008-f25d442f10d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.300 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[95bc7a4b-bfae-42b4-9a9d-e93f66071bfc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283050, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.314 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e9a3b7-bb3b-45ef-a573-e8ff9dc7d180]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:e883'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414588, 'tstamp': 414588}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283051, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.337 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f13f684e-770e-460e-99eb-efbff5fa0421]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283052, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.361 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[85cc6032-3f4d-401f-9416-07db8cf2de01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.408 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[537eded6-6292-42f1-b321-f5d2d4202ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.410 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.410 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.411 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:07 np0005465604 NetworkManager[45129]: <info>  [1759393267.4129] manager: (tap6a12cdd8-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct  2 04:21:07 np0005465604 kernel: tap6a12cdd8-50: entered promiscuous mode
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.418 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:07Z|00047|binding|INFO|Releasing lport 9a1d90c9-45f7-468d-bd6f-f1cc59f0309a from this chassis (sb_readonly=0)
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.442 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.443 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[deaa5fe0-71a9-481f-977f-7cf813d44ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.444 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e.pid.haproxy
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 04:21:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:07.445 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'env', 'PROCESS_TAG=haproxy-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:21:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3984066589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.552 2 DEBUG nova.compute.manager [req-48af0504-97d6-45f3-961b-3ea6f6ddf204 req-9a874203-6332-4457-9e34-2ccf166c6c8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.553 2 DEBUG oslo_concurrency.lockutils [req-48af0504-97d6-45f3-961b-3ea6f6ddf204 req-9a874203-6332-4457-9e34-2ccf166c6c8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.553 2 DEBUG oslo_concurrency.lockutils [req-48af0504-97d6-45f3-961b-3ea6f6ddf204 req-9a874203-6332-4457-9e34-2ccf166c6c8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.553 2 DEBUG oslo_concurrency.lockutils [req-48af0504-97d6-45f3-961b-3ea6f6ddf204 req-9a874203-6332-4457-9e34-2ccf166c6c8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.553 2 DEBUG nova.compute.manager [req-48af0504-97d6-45f3-961b-3ea6f6ddf204 req-9a874203-6332-4457-9e34-2ccf166c6c8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Processing event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.580 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.585 2 DEBUG nova.compute.provider_tree [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.602 2 DEBUG nova.scheduler.client.report [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.625 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.625 2 DEBUG nova.compute.manager [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.667 2 DEBUG nova.compute.manager [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.667 2 DEBUG nova.network.neutron [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.684 2 INFO nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.700 2 DEBUG nova.compute.manager [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.780 2 DEBUG nova.compute.manager [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.781 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.781 2 INFO nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Creating image(s)
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.805 2 DEBUG nova.storage.rbd_utils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.831 2 DEBUG nova.storage.rbd_utils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.851 2 DEBUG nova.storage.rbd_utils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.856 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:07 np0005465604 podman[283149]: 2025-10-02 08:21:07.89818189 +0000 UTC m=+0.064090886 container create d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:21:07 np0005465604 systemd[1]: Started libpod-conmon-d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300.scope.
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.952 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.952 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.953 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.953 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:07 np0005465604 podman[283149]: 2025-10-02 08:21:07.870934418 +0000 UTC m=+0.036843444 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:21:07 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.972 2 DEBUG nova.storage.rbd_utils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f09c4a2bc6aa7f236bb71bc796a7131d7427d3d53932b9efacf4cd4670237c4f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:07 np0005465604 nova_compute[260603]: 2025-10-02 08:21:07.980 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:07 np0005465604 podman[283149]: 2025-10-02 08:21:07.991950254 +0000 UTC m=+0.157859280 container init d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:21:08 np0005465604 podman[283149]: 2025-10-02 08:21:08.001524185 +0000 UTC m=+0.167433191 container start d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.009 2 INFO nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Creating config drive at /var/lib/nova/instances/efe21c23-e200-4841-be3e-2b3e7c8b5c38/disk.config#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.018 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/efe21c23-e200-4841-be3e-2b3e7c8b5c38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2z1iae2w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:08 np0005465604 neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e[283202]: [NOTICE]   (283225) : New worker (283228) forked
Oct  2 04:21:08 np0005465604 neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e[283202]: [NOTICE]   (283225) : Loading success.
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.082 2 DEBUG nova.network.neutron [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.083 2 DEBUG nova.compute.manager [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.084 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393268.0836089, 09c17c12-7dac-4fc8-917e-cb2efa1d4607 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.084 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] VM Started (Lifecycle Event)#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.087 2 DEBUG nova.compute.manager [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.091 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.097 2 INFO nova.virt.libvirt.driver [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance spawned successfully.#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.097 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.103 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.107 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.120 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.121 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393268.084095, 09c17c12-7dac-4fc8-917e-cb2efa1d4607 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.121 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.123 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.124 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.124 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.125 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.125 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.125 2 DEBUG nova.virt.libvirt.driver [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.149 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/efe21c23-e200-4841-be3e-2b3e7c8b5c38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2z1iae2w" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.177 2 DEBUG nova.storage.rbd_utils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.179 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/efe21c23-e200-4841-be3e-2b3e7c8b5c38/disk.config efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.201 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.203 2 INFO nova.compute.manager [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Took 6.94 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.204 2 DEBUG nova.compute.manager [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.212 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393268.0895963, 09c17c12-7dac-4fc8-917e-cb2efa1d4607 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.213 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.216 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.242 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.278 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.282 2 INFO nova.compute.manager [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Took 7.96 seconds to build instance.#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.286 2 DEBUG nova.storage.rbd_utils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] resizing rbd image 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.315 2 DEBUG oslo_concurrency.lockutils [None req-ab891dcd-0d2f-4d93-b6bb-b45a7eae22a7 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.316 2 DEBUG oslo_concurrency.processutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/efe21c23-e200-4841-be3e-2b3e7c8b5c38/disk.config efe21c23-e200-4841-be3e-2b3e7c8b5c38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.316 2 INFO nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Deleting local config drive /var/lib/nova/instances/efe21c23-e200-4841-be3e-2b3e7c8b5c38/disk.config because it was imported into RBD.#033[00m
Oct  2 04:21:08 np0005465604 NetworkManager[45129]: <info>  [1759393268.3754] manager: (tap906b5985-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct  2 04:21:08 np0005465604 kernel: tap906b5985-d9: entered promiscuous mode
Oct  2 04:21:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:08Z|00048|binding|INFO|Claiming lport 906b5985-d9e3-442e-a67d-085faadd4d3e for this chassis.
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:08Z|00049|binding|INFO|906b5985-d9e3-442e-a67d-085faadd4d3e: Claiming fa:16:3e:c1:c3:98 10.100.0.14
Oct  2 04:21:08 np0005465604 systemd-udevd[283012]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.391 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c3:98 10.100.0.14'], port_security=['fa:16:3e:c1:c3:98 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'efe21c23-e200-4841-be3e-2b3e7c8b5c38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=906b5985-d9e3-442e-a67d-085faadd4d3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.393 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 906b5985-d9e3-442e-a67d-085faadd4d3e in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e bound to our chassis#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.395 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:21:08 np0005465604 NetworkManager[45129]: <info>  [1759393268.3974] device (tap906b5985-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:21:08 np0005465604 NetworkManager[45129]: <info>  [1759393268.3981] device (tap906b5985-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:21:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:08Z|00050|binding|INFO|Setting lport 906b5985-d9e3-442e-a67d-085faadd4d3e ovn-installed in OVS
Oct  2 04:21:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:08Z|00051|binding|INFO|Setting lport 906b5985-d9e3-442e-a67d-085faadd4d3e up in Southbound
Oct  2 04:21:08 np0005465604 systemd-machined[214636]: New machine qemu-15-instance-0000000d.
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.415 2 DEBUG nova.objects.instance [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lazy-loading 'migration_context' on Instance uuid 45e20d73-3f71-4e90-b1b5-f30fcb043922 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.416 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f6866ecf-3835-4be5-bdc3-c050cb228a41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:08 np0005465604 systemd[1]: Started Virtual Machine qemu-15-instance-0000000d.
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.433 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.433 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Ensure instance console log exists: /var/lib/nova/instances/45e20d73-3f71-4e90-b1b5-f30fcb043922/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.434 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.434 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.434 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.438 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.445 2 WARNING nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.450 2 DEBUG nova.virt.libvirt.host [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.451 2 DEBUG nova.virt.libvirt.host [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.461 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b65ef482-2388-4467-aa50-d2319caebc17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.463 2 DEBUG nova.virt.libvirt.host [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.463 2 DEBUG nova.virt.libvirt.host [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.464 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.464 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.465 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.465 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.465 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.465 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.465 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.466 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.466 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.466 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[efdcfe05-79b5-4d5b-b51e-2e59a343007c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.466 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.466 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.466 2 DEBUG nova.virt.hardware [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.469 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.492 2 DEBUG nova.network.neutron [req-0e2c6ab5-3b6e-46d1-baab-ffab3e1cc851 req-5d77d4a3-cae5-45d7-9fb8-add5eb613b82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Updated VIF entry in instance network info cache for port 906b5985-d9e3-442e-a67d-085faadd4d3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.493 2 DEBUG nova.network.neutron [req-0e2c6ab5-3b6e-46d1-baab-ffab3e1cc851 req-5d77d4a3-cae5-45d7-9fb8-add5eb613b82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Updating instance_info_cache with network_info: [{"id": "906b5985-d9e3-442e-a67d-085faadd4d3e", "address": "fa:16:3e:c1:c3:98", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906b5985-d9", "ovs_interfaceid": "906b5985-d9e3-442e-a67d-085faadd4d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.497 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[42a52dac-34ac-4360-8296-af7266af3ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.509 2 DEBUG oslo_concurrency.lockutils [req-0e2c6ab5-3b6e-46d1-baab-ffab3e1cc851 req-5d77d4a3-cae5-45d7-9fb8-add5eb613b82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-efe21c23-e200-4841-be3e-2b3e7c8b5c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.520 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9412b0b0-6f51-42ea-ac5a-48a0e2819a09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 306, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 306, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283391, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.538 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0c337d-cfc4-4a10-9fb3-ed5d7c143036]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283393, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283393, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.540 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.543 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.543 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.543 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:08.544 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 143 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.5 MiB/s wr, 200 op/s
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.780 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Acquiring lock "b4e47139-8bff-466c-b453-5bafefeaad62" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.781 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "b4e47139-8bff-466c-b453-5bafefeaad62" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.796 2 DEBUG nova.compute.manager [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.866 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.867 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.877 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.878 2 INFO nova.compute.claims [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:21:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633294621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.940 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.973 2 DEBUG nova.storage.rbd_utils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:08 np0005465604 nova_compute[260603]: 2025-10-02 08:21:08.978 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.110 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.335 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393269.3351297, efe21c23-e200-4841-be3e-2b3e7c8b5c38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.336 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] VM Started (Lifecycle Event)#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.368 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.372 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393269.336233, efe21c23-e200-4841-be3e-2b3e7c8b5c38 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.372 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.391 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.393 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.411 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919230396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.503 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.505 2 DEBUG nova.objects.instance [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lazy-loading 'pci_devices' on Instance uuid 45e20d73-3f71-4e90-b1b5-f30fcb043922 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.520 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  <uuid>45e20d73-3f71-4e90-b1b5-f30fcb043922</uuid>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  <name>instance-0000000e</name>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-480287458</nova:name>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:21:08</nova:creationTime>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:        <nova:user uuid="5da29605949d4f0abb43aa8f0801e6b7">tempest-ServersAdminNegativeTestJSON-384838271-project-member</nova:user>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:        <nova:project uuid="cb733b60e33d491c9c8c0cd574145cce">tempest-ServersAdminNegativeTestJSON-384838271</nova:project>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <entry name="serial">45e20d73-3f71-4e90-b1b5-f30fcb043922</entry>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <entry name="uuid">45e20d73-3f71-4e90-b1b5-f30fcb043922</entry>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/45e20d73-3f71-4e90-b1b5-f30fcb043922_disk">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/45e20d73-3f71-4e90-b1b5-f30fcb043922_disk.config">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/45e20d73-3f71-4e90-b1b5-f30fcb043922/console.log" append="off"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:21:09 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:21:09 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:21:09 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:21:09 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:21:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3186361663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.579 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.580 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.580 2 INFO nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Using config drive#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.597 2 DEBUG nova.storage.rbd_utils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.601 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.605 2 DEBUG nova.compute.provider_tree [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.622 2 DEBUG nova.scheduler.client.report [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.653 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.654 2 DEBUG nova.compute.manager [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.673 2 DEBUG nova.compute.manager [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.674 2 DEBUG oslo_concurrency.lockutils [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.674 2 DEBUG oslo_concurrency.lockutils [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.674 2 DEBUG oslo_concurrency.lockutils [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.674 2 DEBUG nova.compute.manager [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] No waiting events found dispatching network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.674 2 WARNING nova.compute.manager [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received unexpected event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a for instance with vm_state active and task_state None.#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.675 2 DEBUG nova.compute.manager [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Received event network-vif-plugged-906b5985-d9e3-442e-a67d-085faadd4d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.675 2 DEBUG oslo_concurrency.lockutils [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.675 2 DEBUG oslo_concurrency.lockutils [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.675 2 DEBUG oslo_concurrency.lockutils [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.675 2 DEBUG nova.compute.manager [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Processing event network-vif-plugged-906b5985-d9e3-442e-a67d-085faadd4d3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.676 2 DEBUG nova.compute.manager [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Received event network-vif-plugged-906b5985-d9e3-442e-a67d-085faadd4d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.676 2 DEBUG oslo_concurrency.lockutils [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.676 2 DEBUG oslo_concurrency.lockutils [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.676 2 DEBUG oslo_concurrency.lockutils [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.676 2 DEBUG nova.compute.manager [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] No waiting events found dispatching network-vif-plugged-906b5985-d9e3-442e-a67d-085faadd4d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.677 2 WARNING nova.compute.manager [req-b2ec18af-1e3c-4502-a17c-7b54e1861554 req-21a868ef-5d4b-45b7-a34f-bf21780ec3ca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Received unexpected event network-vif-plugged-906b5985-d9e3-442e-a67d-085faadd4d3e for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.677 2 DEBUG nova.compute.manager [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.681 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393269.6806512, efe21c23-e200-4841-be3e-2b3e7c8b5c38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.681 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.683 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.692 2 INFO nova.virt.libvirt.driver [-] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Instance spawned successfully.#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.692 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.701 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.705 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.711 2 DEBUG nova.compute.manager [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.712 2 DEBUG nova.network.neutron [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.722 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.722 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.722 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.723 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.723 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.724 2 DEBUG nova.virt.libvirt.driver [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.727 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.731 2 INFO nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.748 2 DEBUG nova.compute.manager [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.756 2 INFO nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Creating config drive at /var/lib/nova/instances/45e20d73-3f71-4e90-b1b5-f30fcb043922/disk.config#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.761 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45e20d73-3f71-4e90-b1b5-f30fcb043922/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjrur9grq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.784 2 INFO nova.compute.manager [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Took 7.13 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.785 2 DEBUG nova.compute.manager [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.847 2 DEBUG nova.compute.manager [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.849 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.850 2 INFO nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Creating image(s)#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.871 2 DEBUG nova.storage.rbd_utils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] rbd image b4e47139-8bff-466c-b453-5bafefeaad62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.904 2 DEBUG nova.storage.rbd_utils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] rbd image b4e47139-8bff-466c-b453-5bafefeaad62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.928 2 DEBUG nova.storage.rbd_utils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] rbd image b4e47139-8bff-466c-b453-5bafefeaad62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.931 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:09 np0005465604 nova_compute[260603]: 2025-10-02 08:21:09.968 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45e20d73-3f71-4e90-b1b5-f30fcb043922/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjrur9grq" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.016 2 DEBUG nova.storage.rbd_utils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.022 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45e20d73-3f71-4e90-b1b5-f30fcb043922/disk.config 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.061 2 INFO nova.compute.manager [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Took 8.31 seconds to build instance.#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.069 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.072 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.074 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.074 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.110 2 DEBUG nova.storage.rbd_utils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] rbd image b4e47139-8bff-466c-b453-5bafefeaad62_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.120 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 b4e47139-8bff-466c-b453-5bafefeaad62_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.150 2 DEBUG oslo_concurrency.lockutils [None req-a8353598-1d21-4578-84f1-51775c1e7160 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.195 2 DEBUG oslo_concurrency.processutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45e20d73-3f71-4e90-b1b5-f30fcb043922/disk.config 45e20d73-3f71-4e90-b1b5-f30fcb043922_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.196 2 INFO nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Deleting local config drive /var/lib/nova/instances/45e20d73-3f71-4e90-b1b5-f30fcb043922/disk.config because it was imported into RBD.#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.246 2 DEBUG nova.network.neutron [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.247 2 DEBUG nova.compute.manager [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:21:10 np0005465604 systemd-machined[214636]: New machine qemu-16-instance-0000000e.
Oct  2 04:21:10 np0005465604 systemd[1]: Started Virtual Machine qemu-16-instance-0000000e.
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.362 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 b4e47139-8bff-466c-b453-5bafefeaad62_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.422 2 DEBUG nova.storage.rbd_utils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] resizing rbd image b4e47139-8bff-466c-b453-5bafefeaad62_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.519 2 DEBUG nova.objects.instance [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lazy-loading 'migration_context' on Instance uuid b4e47139-8bff-466c-b453-5bafefeaad62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.522 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.535 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.536 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Ensure instance console log exists: /var/lib/nova/instances/b4e47139-8bff-466c-b453-5bafefeaad62/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.536 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.537 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.537 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.538 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.542 2 WARNING nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.546 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.546 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.574 2 DEBUG nova.virt.libvirt.host [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.576 2 DEBUG nova.virt.libvirt.host [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.579 2 DEBUG nova.virt.libvirt.host [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.580 2 DEBUG nova.virt.libvirt.host [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.581 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.581 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.583 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.583 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.584 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.584 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.584 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.585 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.585 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.586 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.586 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.587 2 DEBUG nova.virt.hardware [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:21:10 np0005465604 nova_compute[260603]: 2025-10-02 08:21:10.591 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 143 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.5 MiB/s wr, 200 op/s
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2566366642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109227237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.054 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.092 2 DEBUG nova.storage.rbd_utils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] rbd image b4e47139-8bff-466c-b453-5bafefeaad62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.098 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.122 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.123 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393271.0611262, 45e20d73-3f71-4e90-b1b5-f30fcb043922 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.125 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.129 2 DEBUG nova.compute.manager [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.130 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.136 2 INFO nova.virt.libvirt.driver [-] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Instance spawned successfully.#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.137 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.156 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.161 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.170 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.170 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.171 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.172 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.172 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.173 2 DEBUG nova.virt.libvirt.driver [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.179 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.179 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393271.0612185, 45e20d73-3f71-4e90-b1b5-f30fcb043922 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.179 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] VM Started (Lifecycle Event)#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.214 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.217 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.243 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.252 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.253 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.257 2 INFO nova.compute.manager [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Took 3.48 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.257 2 DEBUG nova.compute.manager [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.259 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.259 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.267 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.268 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.322 2 INFO nova.compute.manager [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Took 4.42 seconds to build instance.#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.341 2 DEBUG oslo_concurrency.lockutils [None req-e5f7d412-86ef-4e70-b5af-5daaa1d95449 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "45e20d73-3f71-4e90-b1b5-f30fcb043922" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.482 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.483 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4282MB free_disk=59.94078063964844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.483 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.484 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.552 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 09c17c12-7dac-4fc8-917e-cb2efa1d4607 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.552 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance efe21c23-e200-4841-be3e-2b3e7c8b5c38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.552 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 45e20d73-3f71-4e90-b1b5-f30fcb043922 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.552 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance b4e47139-8bff-466c-b453-5bafefeaad62 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.553 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.553 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769726223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.581 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.584 2 DEBUG nova.objects.instance [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4e47139-8bff-466c-b453-5bafefeaad62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.598 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  <uuid>b4e47139-8bff-466c-b453-5bafefeaad62</uuid>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  <name>instance-0000000f</name>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerDiagnosticsTest-server-680778402</nova:name>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:21:10</nova:creationTime>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:        <nova:user uuid="d9da402800074a7b978cd9c934c75c38">tempest-ServerDiagnosticsTest-1341509492-project-member</nova:user>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:        <nova:project uuid="640875b0d9404b19815be09cf9b518f6">tempest-ServerDiagnosticsTest-1341509492</nova:project>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <entry name="serial">b4e47139-8bff-466c-b453-5bafefeaad62</entry>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <entry name="uuid">b4e47139-8bff-466c-b453-5bafefeaad62</entry>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/b4e47139-8bff-466c-b453-5bafefeaad62_disk">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/b4e47139-8bff-466c-b453-5bafefeaad62_disk.config">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/b4e47139-8bff-466c-b453-5bafefeaad62/console.log" append="off"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:21:11 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:21:11 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:21:11 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:21:11 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.641 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.679 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.680 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.680 2 INFO nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Using config drive#033[00m
Oct  2 04:21:11 np0005465604 nova_compute[260603]: 2025-10-02 08:21:11.703 2 DEBUG nova.storage.rbd_utils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] rbd image b4e47139-8bff-466c-b453-5bafefeaad62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:11.948811) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393271948858, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 896, "num_deletes": 256, "total_data_size": 1056014, "memory_usage": 1085008, "flush_reason": "Manual Compaction"}
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393271954628, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1033905, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23069, "largest_seqno": 23964, "table_properties": {"data_size": 1029594, "index_size": 1899, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9840, "raw_average_key_size": 18, "raw_value_size": 1020651, "raw_average_value_size": 1955, "num_data_blocks": 86, "num_entries": 522, "num_filter_entries": 522, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759393206, "oldest_key_time": 1759393206, "file_creation_time": 1759393271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 5855 microseconds, and 3195 cpu microseconds.
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:11.954668) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1033905 bytes OK
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:11.954687) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:11.956547) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:11.956560) EVENT_LOG_v1 {"time_micros": 1759393271956556, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:11.956577) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1051574, prev total WAL file size 1051574, number of live WAL files 2.
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:11.957147) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1009KB)], [53(8983KB)]
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393271957176, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10233344, "oldest_snapshot_seqno": -1}
Oct  2 04:21:11 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4771 keys, 10138323 bytes, temperature: kUnknown
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393271999505, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10138323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10102198, "index_size": 23087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 118110, "raw_average_key_size": 24, "raw_value_size": 10011874, "raw_average_value_size": 2098, "num_data_blocks": 971, "num_entries": 4771, "num_filter_entries": 4771, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759393271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:11.999740) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10138323 bytes
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:12.001283) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.3 rd, 239.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.8 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(19.7) write-amplify(9.8) OK, records in: 5295, records dropped: 524 output_compression: NoCompression
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:12.001299) EVENT_LOG_v1 {"time_micros": 1759393272001292, "job": 28, "event": "compaction_finished", "compaction_time_micros": 42405, "compaction_time_cpu_micros": 21460, "output_level": 6, "num_output_files": 1, "total_output_size": 10138323, "num_input_records": 5295, "num_output_records": 4771, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393272001713, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393272003255, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:11.957091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:12.003905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:12.003913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:12.003918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:12.003922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:21:12.003926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.022 2 INFO nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Creating config drive at /var/lib/nova/instances/b4e47139-8bff-466c-b453-5bafefeaad62/disk.config#033[00m
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.028 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4e47139-8bff-466c-b453-5bafefeaad62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_b5gkn3t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258848789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.153 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.158 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4e47139-8bff-466c-b453-5bafefeaad62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_b5gkn3t" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.181 2 DEBUG nova.storage.rbd_utils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] rbd image b4e47139-8bff-466c-b453-5bafefeaad62_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.185 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4e47139-8bff-466c-b453-5bafefeaad62/disk.config b4e47139-8bff-466c-b453-5bafefeaad62_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.207 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.227 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.248 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.249 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.319 2 DEBUG oslo_concurrency.processutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4e47139-8bff-466c-b453-5bafefeaad62/disk.config b4e47139-8bff-466c-b453-5bafefeaad62_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:12 np0005465604 nova_compute[260603]: 2025-10-02 08:21:12.320 2 INFO nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Deleting local config drive /var/lib/nova/instances/b4e47139-8bff-466c-b453-5bafefeaad62/disk.config because it was imported into RBD.
Oct  2 04:21:12 np0005465604 systemd-machined[214636]: New machine qemu-17-instance-0000000f.
Oct  2 04:21:12 np0005465604 systemd[1]: Started Virtual Machine qemu-17-instance-0000000f.
Oct  2 04:21:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 198 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.6 MiB/s wr, 377 op/s
Oct  2 04:21:13 np0005465604 podman[284020]: 2025-10-02 08:21:13.802840753 +0000 UTC m=+0.099194145 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.228 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.228 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.235 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393274.2345052, b4e47139-8bff-466c-b453-5bafefeaad62 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.235 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] VM Resumed (Lifecycle Event)
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.238 2 DEBUG nova.compute.manager [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.239 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.242 2 DEBUG nova.compute.manager [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.246 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.247 2 INFO nova.virt.libvirt.driver [-] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Instance spawned successfully.
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.250 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.252 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.255 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.268 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Acquiring lock "e500afcd-1008-4a02-8b21-b564db9bd367" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.268 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "e500afcd-1008-4a02-8b21-b564db9bd367" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.273 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.274 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393274.2369525, b4e47139-8bff-466c-b453-5bafefeaad62 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.274 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] VM Started (Lifecycle Event)
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.279 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.279 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.280 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.281 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.281 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.282 2 DEBUG nova.virt.libvirt.driver [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.306 2 DEBUG nova.compute.manager [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.313 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.314 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.315 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.318 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.321 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.321 2 INFO nova.compute.claims [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.353 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.486 2 INFO nova.compute.manager [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Took 4.64 seconds to spawn the instance on the hypervisor.
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.486 2 DEBUG nova.compute.manager [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.515 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.527 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.529 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.562 2 INFO nova.compute.manager [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Took 5.71 seconds to build instance.
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.568 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "d968b5b6-ba9b-488d-b0bf-4bc79ecfe839" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.570 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "d968b5b6-ba9b-488d-b0bf-4bc79ecfe839" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.580 2 DEBUG oslo_concurrency.lockutils [None req-a92f3645-a73e-461c-8884-1a243a3a0f93 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "b4e47139-8bff-466c-b453-5bafefeaad62" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.584 2 DEBUG nova.compute.manager [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.632 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:14 np0005465604 nova_compute[260603]: 2025-10-02 08:21:14.668 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 227 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 7.4 MiB/s rd, 6.0 MiB/s wr, 421 op/s
Oct  2 04:21:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359631497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.130 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.142 2 DEBUG nova.compute.provider_tree [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.160 2 DEBUG nova.scheduler.client.report [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.184 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.192 2 DEBUG nova.compute.manager [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.195 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.207 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.207 2 INFO nova.compute.claims [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.271 2 DEBUG nova.compute.manager [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.274 2 DEBUG nova.network.neutron [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.296 2 INFO nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.320 2 DEBUG nova.compute.manager [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.424 2 DEBUG nova.compute.manager [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.426 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.427 2 INFO nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Creating image(s)
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.478 2 DEBUG nova.storage.rbd_utils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.502 2 DEBUG nova.storage.rbd_utils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.523 2 DEBUG nova.storage.rbd_utils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.527 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.552 2 DEBUG nova.policy [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b82955fab174d8aac325e64068908f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '13ecc6dea7a8465394379400d84a053e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.589 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.614 2 DEBUG nova.compute.manager [None req-81d49dcc-26a9-41c9-bc48-f0d3accda0f7 3b59ec431da34252989cb1a891bd222d 9e619c2e263c4f039b0369b55d01d2a3 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.615 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.616 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.617 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.618 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.640 2 DEBUG nova.storage.rbd_utils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.644 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.670 2 INFO nova.compute.manager [None req-81d49dcc-26a9-41c9-bc48-f0d3accda0f7 3b59ec431da34252989cb1a891bd222d 9e619c2e263c4f039b0369b55d01d2a3 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Retrieving diagnostics
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.850 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Acquiring lock "b4e47139-8bff-466c-b453-5bafefeaad62" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.851 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "b4e47139-8bff-466c-b453-5bafefeaad62" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.851 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Acquiring lock "b4e47139-8bff-466c-b453-5bafefeaad62-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.852 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "b4e47139-8bff-466c-b453-5bafefeaad62-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.852 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "b4e47139-8bff-466c-b453-5bafefeaad62-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.855 2 INFO nova.compute.manager [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Terminating instance#033[00m
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.856 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Acquiring lock "refresh_cache-b4e47139-8bff-466c-b453-5bafefeaad62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.856 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Acquired lock "refresh_cache-b4e47139-8bff-466c-b453-5bafefeaad62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.857 2 DEBUG nova.network.neutron [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.866 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:15 np0005465604 nova_compute[260603]: 2025-10-02 08:21:15.950 2 DEBUG nova.storage.rbd_utils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] resizing rbd image c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.038 2 DEBUG nova.objects.instance [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'migration_context' on Instance uuid c0ec8879-e818-40a5-88ec-c6d89b8ddea4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3309922161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.058 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.058 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Ensure instance console log exists: /var/lib/nova/instances/c0ec8879-e818-40a5-88ec-c6d89b8ddea4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.059 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.059 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.060 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.061 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.066 2 DEBUG nova.compute.provider_tree [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.081 2 DEBUG nova.network.neutron [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.123 2 DEBUG nova.scheduler.client.report [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.133 2 DEBUG nova.network.neutron [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Successfully created port: 580fd822-3f5a-44c0-aff1-96cab7531d57 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.151 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.152 2 DEBUG nova.compute.manager [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.155 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.161 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.161 2 INFO nova.compute.claims [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.214 2 DEBUG nova.compute.manager [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.215 2 DEBUG nova.network.neutron [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.237 2 INFO nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.254 2 DEBUG nova.compute.manager [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.333 2 DEBUG nova.compute.manager [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.335 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.336 2 INFO nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Creating image(s)
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.363 2 DEBUG nova.storage.rbd_utils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] rbd image e500afcd-1008-4a02-8b21-b564db9bd367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.390 2 DEBUG nova.storage.rbd_utils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] rbd image e500afcd-1008-4a02-8b21-b564db9bd367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.414 2 DEBUG nova.storage.rbd_utils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] rbd image e500afcd-1008-4a02-8b21-b564db9bd367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.418 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.439 2 DEBUG nova.network.neutron [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.462 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Releasing lock "refresh_cache-b4e47139-8bff-466c-b453-5bafefeaad62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.463 2 DEBUG nova.compute.manager [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.478 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.480 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.481 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.482 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.502 2 DEBUG nova.storage.rbd_utils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] rbd image e500afcd-1008-4a02-8b21-b564db9bd367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.506 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e500afcd-1008-4a02-8b21-b564db9bd367_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:16 np0005465604 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct  2 04:21:16 np0005465604 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Consumed 3.992s CPU time.
Oct  2 04:21:16 np0005465604 systemd-machined[214636]: Machine qemu-17-instance-0000000f terminated.
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.524 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.555 2 DEBUG nova.network.neutron [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.556 2 DEBUG nova.compute.manager [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.680 2 INFO nova.virt.libvirt.driver [-] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Instance destroyed successfully.
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.681 2 DEBUG nova.objects.instance [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lazy-loading 'resources' on Instance uuid b4e47139-8bff-466c-b453-5bafefeaad62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.717 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e500afcd-1008-4a02-8b21-b564db9bd367_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 227 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 4.9 MiB/s wr, 372 op/s
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.770 2 DEBUG nova.storage.rbd_utils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] resizing rbd image e500afcd-1008-4a02-8b21-b564db9bd367_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.869 2 DEBUG nova.objects.instance [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lazy-loading 'migration_context' on Instance uuid e500afcd-1008-4a02-8b21-b564db9bd367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.885 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.886 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Ensure instance console log exists: /var/lib/nova/instances/e500afcd-1008-4a02-8b21-b564db9bd367/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.886 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.887 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.887 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.888 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.892 2 WARNING nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.897 2 DEBUG nova.virt.libvirt.host [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.897 2 DEBUG nova.virt.libvirt.host [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.901 2 DEBUG nova.virt.libvirt.host [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.902 2 DEBUG nova.virt.libvirt.host [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.902 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.903 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.903 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.903 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.904 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.904 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.904 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.905 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.905 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.905 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.906 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.906 2 DEBUG nova.virt.hardware [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.908 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1126706055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.955 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.963 2 DEBUG nova.compute.provider_tree [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.969 2 DEBUG nova.network.neutron [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Successfully updated port: 580fd822-3f5a-44c0-aff1-96cab7531d57 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.986 2 DEBUG nova.scheduler.client.report [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.990 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "refresh_cache-c0ec8879-e818-40a5-88ec-c6d89b8ddea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.990 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquired lock "refresh_cache-c0ec8879-e818-40a5-88ec-c6d89b8ddea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:21:16 np0005465604 nova_compute[260603]: 2025-10-02 08:21:16.991 2 DEBUG nova.network.neutron [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.010 2 INFO nova.virt.libvirt.driver [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Deleting instance files /var/lib/nova/instances/b4e47139-8bff-466c-b453-5bafefeaad62_del#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.010 2 INFO nova.virt.libvirt.driver [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Deletion of /var/lib/nova/instances/b4e47139-8bff-466c-b453-5bafefeaad62_del complete#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.016 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.017 2 DEBUG nova.compute.manager [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.055 2 DEBUG nova.compute.manager [req-c33a72d7-2ee4-47cb-bd1b-a3061d5dcd43 req-64c916df-d81c-45cf-a80b-c6a06cc8cff6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received event network-changed-580fd822-3f5a-44c0-aff1-96cab7531d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.056 2 DEBUG nova.compute.manager [req-c33a72d7-2ee4-47cb-bd1b-a3061d5dcd43 req-64c916df-d81c-45cf-a80b-c6a06cc8cff6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Refreshing instance network info cache due to event network-changed-580fd822-3f5a-44c0-aff1-96cab7531d57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.056 2 DEBUG oslo_concurrency.lockutils [req-c33a72d7-2ee4-47cb-bd1b-a3061d5dcd43 req-64c916df-d81c-45cf-a80b-c6a06cc8cff6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c0ec8879-e818-40a5-88ec-c6d89b8ddea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.069 2 DEBUG nova.compute.manager [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.070 2 DEBUG nova.network.neutron [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.073 2 INFO nova.compute.manager [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.073 2 DEBUG oslo.service.loopingcall [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.075 2 DEBUG nova.compute.manager [-] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.076 2 DEBUG nova.network.neutron [-] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.098 2 INFO nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.142 2 DEBUG nova.compute.manager [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.145 2 DEBUG nova.network.neutron [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.200 2 DEBUG nova.network.neutron [-] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.225 2 DEBUG nova.network.neutron [-] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.243 2 INFO nova.compute.manager [-] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Took 0.17 seconds to deallocate network for instance.#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.250 2 DEBUG nova.compute.manager [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.252 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.252 2 INFO nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Creating image(s)#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.272 2 DEBUG nova.storage.rbd_utils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.294 2 DEBUG nova.storage.rbd_utils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.315 2 DEBUG nova.storage.rbd_utils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.319 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3980475561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.343 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.344 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.348 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.363 2 DEBUG nova.storage.rbd_utils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] rbd image e500afcd-1008-4a02-8b21-b564db9bd367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.367 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.390 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.391 2 DEBUG nova.network.neutron [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.391 2 DEBUG nova.compute.manager [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.392 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.392 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.393 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.411 2 DEBUG nova.storage.rbd_utils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.413 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.606 2 DEBUG oslo_concurrency.processutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.658 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.718 2 DEBUG nova.storage.rbd_utils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] resizing rbd image d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.810 2 DEBUG nova.objects.instance [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lazy-loading 'migration_context' on Instance uuid d968b5b6-ba9b-488d-b0bf-4bc79ecfe839 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687878504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.836 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.837 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Ensure instance console log exists: /var/lib/nova/instances/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.837 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.838 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.838 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.840 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.841 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.842 2 DEBUG nova.objects.instance [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lazy-loading 'pci_devices' on Instance uuid e500afcd-1008-4a02-8b21-b564db9bd367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.847 2 WARNING nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.851 2 DEBUG nova.virt.libvirt.host [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.851 2 DEBUG nova.virt.libvirt.host [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.854 2 DEBUG nova.virt.libvirt.host [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.854 2 DEBUG nova.virt.libvirt.host [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.855 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.856 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.856 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.857 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.857 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.859 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.860 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.860 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.860 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.860 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.861 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.861 2 DEBUG nova.virt.hardware [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.864 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.886 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  <uuid>e500afcd-1008-4a02-8b21-b564db9bd367</uuid>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  <name>instance-00000011</name>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <nova:name>tempest-TenantUsagesTestJSON-server-1889245731</nova:name>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:21:16</nova:creationTime>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:        <nova:user uuid="4a2ef26eb3dc4174bd92f70f77e44d3a">tempest-TenantUsagesTestJSON-158841037-project-member</nova:user>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:        <nova:project uuid="504acd832d284781b00969cd97758303">tempest-TenantUsagesTestJSON-158841037</nova:project>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <entry name="serial">e500afcd-1008-4a02-8b21-b564db9bd367</entry>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <entry name="uuid">e500afcd-1008-4a02-8b21-b564db9bd367</entry>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/e500afcd-1008-4a02-8b21-b564db9bd367_disk">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/e500afcd-1008-4a02-8b21-b564db9bd367_disk.config">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/e500afcd-1008-4a02-8b21-b564db9bd367/console.log" append="off"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:21:17 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:21:17 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:21:17 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:21:17 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.894 2 DEBUG nova.network.neutron [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Updating instance_info_cache with network_info: [{"id": "580fd822-3f5a-44c0-aff1-96cab7531d57", "address": "fa:16:3e:4c:78:6e", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580fd822-3f", "ovs_interfaceid": "580fd822-3f5a-44c0-aff1-96cab7531d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.928 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Releasing lock "refresh_cache-c0ec8879-e818-40a5-88ec-c6d89b8ddea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.928 2 DEBUG nova.compute.manager [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Instance network_info: |[{"id": "580fd822-3f5a-44c0-aff1-96cab7531d57", "address": "fa:16:3e:4c:78:6e", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580fd822-3f", "ovs_interfaceid": "580fd822-3f5a-44c0-aff1-96cab7531d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.929 2 DEBUG oslo_concurrency.lockutils [req-c33a72d7-2ee4-47cb-bd1b-a3061d5dcd43 req-64c916df-d81c-45cf-a80b-c6a06cc8cff6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c0ec8879-e818-40a5-88ec-c6d89b8ddea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.929 2 DEBUG nova.network.neutron [req-c33a72d7-2ee4-47cb-bd1b-a3061d5dcd43 req-64c916df-d81c-45cf-a80b-c6a06cc8cff6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Refreshing network info cache for port 580fd822-3f5a-44c0-aff1-96cab7531d57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.933 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Start _get_guest_xml network_info=[{"id": "580fd822-3f5a-44c0-aff1-96cab7531d57", "address": "fa:16:3e:4c:78:6e", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580fd822-3f", "ovs_interfaceid": "580fd822-3f5a-44c0-aff1-96cab7531d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.938 2 WARNING nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.941 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.941 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.941 2 INFO nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Using config drive#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.963 2 DEBUG nova.storage.rbd_utils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] rbd image e500afcd-1008-4a02-8b21-b564db9bd367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.973 2 DEBUG nova.virt.libvirt.host [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.974 2 DEBUG nova.virt.libvirt.host [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.985 2 DEBUG nova.virt.libvirt.host [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.985 2 DEBUG nova.virt.libvirt.host [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.985 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.986 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.986 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.986 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.987 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.987 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.987 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.987 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.988 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.988 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.988 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.989 2 DEBUG nova.virt.hardware [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:21:17 np0005465604 nova_compute[260603]: 2025-10-02 08:21:17.991 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1234447297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.054 2 DEBUG oslo_concurrency.processutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.059 2 DEBUG nova.compute.provider_tree [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.075 2 DEBUG nova.scheduler.client.report [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.098 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.131 2 INFO nova.scheduler.client.report [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Deleted allocations for instance b4e47139-8bff-466c-b453-5bafefeaad62#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.175 2 INFO nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Creating config drive at /var/lib/nova/instances/e500afcd-1008-4a02-8b21-b564db9bd367/disk.config#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.179 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e500afcd-1008-4a02-8b21-b564db9bd367/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsgag6767 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.200 2 DEBUG oslo_concurrency.lockutils [None req-54deabc3-45dc-4a41-a9ac-29a5ad264b80 d9da402800074a7b978cd9c934c75c38 640875b0d9404b19815be09cf9b518f6 - - default default] Lock "b4e47139-8bff-466c-b453-5bafefeaad62" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.302 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e500afcd-1008-4a02-8b21-b564db9bd367/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsgag6767" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486461184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.321 2 DEBUG nova.storage.rbd_utils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] rbd image e500afcd-1008-4a02-8b21-b564db9bd367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.324 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e500afcd-1008-4a02-8b21-b564db9bd367/disk.config e500afcd-1008-4a02-8b21-b564db9bd367_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.345 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.394 2 DEBUG nova.storage.rbd_utils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.401 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657730251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.427 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.455 2 DEBUG nova.storage.rbd_utils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.473 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.504 2 DEBUG oslo_concurrency.processutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e500afcd-1008-4a02-8b21-b564db9bd367/disk.config e500afcd-1008-4a02-8b21-b564db9bd367_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.505 2 INFO nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Deleting local config drive /var/lib/nova/instances/e500afcd-1008-4a02-8b21-b564db9bd367/disk.config because it was imported into RBD.#033[00m
Oct  2 04:21:18 np0005465604 systemd-machined[214636]: New machine qemu-18-instance-00000011.
Oct  2 04:21:18 np0005465604 systemd[1]: Started Virtual Machine qemu-18-instance-00000011.
Oct  2 04:21:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 288 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 9.4 MiB/s rd, 9.0 MiB/s wr, 548 op/s
Oct  2 04:21:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609158573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.924 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.927 2 DEBUG nova.objects.instance [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lazy-loading 'pci_devices' on Instance uuid d968b5b6-ba9b-488d-b0bf-4bc79ecfe839 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1030422946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.949 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <uuid>d968b5b6-ba9b-488d-b0bf-4bc79ecfe839</uuid>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <name>instance-00000012</name>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1168498907</nova:name>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:21:17</nova:creationTime>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:user uuid="5da29605949d4f0abb43aa8f0801e6b7">tempest-ServersAdminNegativeTestJSON-384838271-project-member</nova:user>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:project uuid="cb733b60e33d491c9c8c0cd574145cce">tempest-ServersAdminNegativeTestJSON-384838271</nova:project>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="serial">d968b5b6-ba9b-488d-b0bf-4bc79ecfe839</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="uuid">d968b5b6-ba9b-488d-b0bf-4bc79ecfe839</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk.config">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839/console.log" append="off"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:21:18 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:21:18 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.960 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.966 2 DEBUG nova.virt.libvirt.vif [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-525372049',display_name='tempest-ServersAdminTestJSON-server-525372049',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-525372049',id=16,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-b1n5b2x0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:15Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=c0ec8879-e818-40a5-88ec-c6d89b8ddea4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "580fd822-3f5a-44c0-aff1-96cab7531d57", "address": "fa:16:3e:4c:78:6e", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580fd822-3f", "ovs_interfaceid": "580fd822-3f5a-44c0-aff1-96cab7531d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.966 2 DEBUG nova.network.os_vif_util [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "580fd822-3f5a-44c0-aff1-96cab7531d57", "address": "fa:16:3e:4c:78:6e", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580fd822-3f", "ovs_interfaceid": "580fd822-3f5a-44c0-aff1-96cab7531d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.967 2 DEBUG nova.network.os_vif_util [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:6e,bridge_name='br-int',has_traffic_filtering=True,id=580fd822-3f5a-44c0-aff1-96cab7531d57,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580fd822-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.967 2 DEBUG nova.objects.instance [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'pci_devices' on Instance uuid c0ec8879-e818-40a5-88ec-c6d89b8ddea4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.987 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <uuid>c0ec8879-e818-40a5-88ec-c6d89b8ddea4</uuid>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <name>instance-00000010</name>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAdminTestJSON-server-525372049</nova:name>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:21:17</nova:creationTime>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:user uuid="2b82955fab174d8aac325e64068908f5">tempest-ServersAdminTestJSON-83453142-project-member</nova:user>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:project uuid="13ecc6dea7a8465394379400d84a053e">tempest-ServersAdminTestJSON-83453142</nova:project>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <nova:port uuid="580fd822-3f5a-44c0-aff1-96cab7531d57">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="serial">c0ec8879-e818-40a5-88ec-c6d89b8ddea4</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="uuid">c0ec8879-e818-40a5-88ec-c6d89b8ddea4</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk.config">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:4c:78:6e"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <target dev="tap580fd822-3f"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/c0ec8879-e818-40a5-88ec-c6d89b8ddea4/console.log" append="off"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:21:18 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:21:18 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:21:18 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:21:18 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.987 2 DEBUG nova.compute.manager [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Preparing to wait for external event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.988 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.988 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.988 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.989 2 DEBUG nova.virt.libvirt.vif [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-525372049',display_name='tempest-ServersAdminTestJSON-server-525372049',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-525372049',id=16,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-b1n5b2x0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:15Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=c0ec8879-e818-40a5-88ec-c6d89b8ddea4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "580fd822-3f5a-44c0-aff1-96cab7531d57", "address": "fa:16:3e:4c:78:6e", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580fd822-3f", "ovs_interfaceid": "580fd822-3f5a-44c0-aff1-96cab7531d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.989 2 DEBUG nova.network.os_vif_util [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "580fd822-3f5a-44c0-aff1-96cab7531d57", "address": "fa:16:3e:4c:78:6e", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580fd822-3f", "ovs_interfaceid": "580fd822-3f5a-44c0-aff1-96cab7531d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.989 2 DEBUG nova.network.os_vif_util [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:6e,bridge_name='br-int',has_traffic_filtering=True,id=580fd822-3f5a-44c0-aff1-96cab7531d57,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580fd822-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.990 2 DEBUG os_vif [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:6e,bridge_name='br-int',has_traffic_filtering=True,id=580fd822-3f5a-44c0-aff1-96cab7531d57,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580fd822-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.996 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580fd822-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:18 np0005465604 nova_compute[260603]: 2025-10-02 08:21:18.997 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap580fd822-3f, col_values=(('external_ids', {'iface-id': '580fd822-3f5a-44c0-aff1-96cab7531d57', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:78:6e', 'vm-uuid': 'c0ec8879-e818-40a5-88ec-c6d89b8ddea4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.031 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393264.0311944, f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.032 2 INFO nova.compute.manager [-] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:21:19 np0005465604 NetworkManager[45129]: <info>  [1759393279.0368] manager: (tap580fd822-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.052 2 INFO os_vif [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:6e,bridge_name='br-int',has_traffic_filtering=True,id=580fd822-3f5a-44c0-aff1-96cab7531d57,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580fd822-3f')#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.099 2 DEBUG nova.compute.manager [None req-f639a5ef-3b55-4810-a240-d337a0fdd691 - - - - - -] [instance: f2e9ed65-1ddf-4f7a-b5cf-67c122ceae63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.101 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.101 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.102 2 INFO nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Using config drive#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.122 2 DEBUG nova.storage.rbd_utils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.135 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.135 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.135 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No VIF found with MAC fa:16:3e:4c:78:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.136 2 INFO nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Using config drive#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.153 2 DEBUG nova.storage.rbd_utils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.395 2 INFO nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Creating config drive at /var/lib/nova/instances/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839/disk.config#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.400 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw4nde776 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.428 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393279.42686, e500afcd-1008-4a02-8b21-b564db9bd367 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.428 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.431 2 DEBUG nova.compute.manager [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.431 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.435 2 INFO nova.virt.libvirt.driver [-] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Instance spawned successfully.#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.435 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.460 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.464 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.465 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.465 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.465 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.465 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.466 2 DEBUG nova.virt.libvirt.driver [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.470 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.503 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.503 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393279.4279897, e500afcd-1008-4a02-8b21-b564db9bd367 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.503 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] VM Started (Lifecycle Event)#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.529 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.532 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.534 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw4nde776" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.555 2 DEBUG nova.storage.rbd_utils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] rbd image d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.557 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839/disk.config d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.579 2 INFO nova.compute.manager [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Took 3.25 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.580 2 DEBUG nova.compute.manager [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.580 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.642 2 INFO nova.compute.manager [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Took 5.15 seconds to build instance.#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.656 2 DEBUG oslo_concurrency.lockutils [None req-20940de4-e8a6-4969-b40f-2149dd648a53 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "e500afcd-1008-4a02-8b21-b564db9bd367" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.739 2 DEBUG oslo_concurrency.processutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839/disk.config d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:19 np0005465604 nova_compute[260603]: 2025-10-02 08:21:19.740 2 INFO nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Deleting local config drive /var/lib/nova/instances/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839/disk.config because it was imported into RBD.#033[00m
Oct  2 04:21:19 np0005465604 systemd-machined[214636]: New machine qemu-19-instance-00000012.
Oct  2 04:21:19 np0005465604 systemd[1]: Started Virtual Machine qemu-19-instance-00000012.
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.468 2 INFO nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Creating config drive at /var/lib/nova/instances/c0ec8879-e818-40a5-88ec-c6d89b8ddea4/disk.config#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.476 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0ec8879-e818-40a5-88ec-c6d89b8ddea4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8u3_4rbl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.639 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0ec8879-e818-40a5-88ec-c6d89b8ddea4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8u3_4rbl" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.663 2 DEBUG nova.storage.rbd_utils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.668 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0ec8879-e818-40a5-88ec-c6d89b8ddea4/disk.config c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.703 2 DEBUG nova.network.neutron [req-c33a72d7-2ee4-47cb-bd1b-a3061d5dcd43 req-64c916df-d81c-45cf-a80b-c6a06cc8cff6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Updated VIF entry in instance network info cache for port 580fd822-3f5a-44c0-aff1-96cab7531d57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.704 2 DEBUG nova.network.neutron [req-c33a72d7-2ee4-47cb-bd1b-a3061d5dcd43 req-64c916df-d81c-45cf-a80b-c6a06cc8cff6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Updating instance_info_cache with network_info: [{"id": "580fd822-3f5a-44c0-aff1-96cab7531d57", "address": "fa:16:3e:4c:78:6e", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580fd822-3f", "ovs_interfaceid": "580fd822-3f5a-44c0-aff1-96cab7531d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.720 2 DEBUG oslo_concurrency.lockutils [req-c33a72d7-2ee4-47cb-bd1b-a3061d5dcd43 req-64c916df-d81c-45cf-a80b-c6a06cc8cff6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c0ec8879-e818-40a5-88ec-c6d89b8ddea4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:21:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 288 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 7.5 MiB/s wr, 423 op/s
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.813 2 DEBUG nova.compute.manager [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.813 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.814 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393280.8135352, d968b5b6-ba9b-488d-b0bf-4bc79ecfe839 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.814 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.820 2 INFO nova.virt.libvirt.driver [-] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Instance spawned successfully.#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.821 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.831 2 DEBUG oslo_concurrency.processutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0ec8879-e818-40a5-88ec-c6d89b8ddea4/disk.config c0ec8879-e818-40a5-88ec-c6d89b8ddea4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.832 2 INFO nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Deleting local config drive /var/lib/nova/instances/c0ec8879-e818-40a5-88ec-c6d89b8ddea4/disk.config because it was imported into RBD.#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.838 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.857 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.862 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.862 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.862 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.863 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.863 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.863 2 DEBUG nova.virt.libvirt.driver [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:20 np0005465604 kernel: tap580fd822-3f: entered promiscuous mode
Oct  2 04:21:20 np0005465604 NetworkManager[45129]: <info>  [1759393280.8840] manager: (tap580fd822-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Oct  2 04:21:20 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:20Z|00052|binding|INFO|Claiming lport 580fd822-3f5a-44c0-aff1-96cab7531d57 for this chassis.
Oct  2 04:21:20 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:20Z|00053|binding|INFO|580fd822-3f5a-44c0-aff1-96cab7531d57: Claiming fa:16:3e:4c:78:6e 10.100.0.11
Oct  2 04:21:20 np0005465604 systemd-udevd[284946]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.885 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.885 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393280.8136482, d968b5b6-ba9b-488d-b0bf-4bc79ecfe839 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.886 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] VM Started (Lifecycle Event)#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:20.891 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:6e 10.100.0.11'], port_security=['fa:16:3e:4c:78:6e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0ec8879-e818-40a5-88ec-c6d89b8ddea4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=580fd822-3f5a-44c0-aff1-96cab7531d57) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:21:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:20.892 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 580fd822-3f5a-44c0-aff1-96cab7531d57 in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e bound to our chassis#033[00m
Oct  2 04:21:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:20.893 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:21:20 np0005465604 NetworkManager[45129]: <info>  [1759393280.9029] device (tap580fd822-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:21:20 np0005465604 NetworkManager[45129]: <info>  [1759393280.9035] device (tap580fd822-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.907 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:20.914 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf97f336-77bf-4932-bb89-63739ecb07c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.915 2 INFO nova.compute.manager [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Took 3.66 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.916 2 DEBUG nova.compute.manager [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:20 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:20Z|00054|binding|INFO|Setting lport 580fd822-3f5a-44c0-aff1-96cab7531d57 up in Southbound
Oct  2 04:21:20 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:20Z|00055|binding|INFO|Setting lport 580fd822-3f5a-44c0-aff1-96cab7531d57 ovn-installed in OVS
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.924 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:20 np0005465604 systemd-machined[214636]: New machine qemu-20-instance-00000010.
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.940 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:20 np0005465604 systemd[1]: Started Virtual Machine qemu-20-instance-00000010.
Oct  2 04:21:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:20.947 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c5131535-3ddd-47e8-8217-1768acb07c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:20.949 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3f0b28-83f5-48f0-8e9d-5e8cb568231a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:20.978 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ae21f234-75eb-4fcd-94bb-41dd491632b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:20 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:20Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:2c:3a 10.100.0.12
Oct  2 04:21:20 np0005465604 nova_compute[260603]: 2025-10-02 08:21:20.989 2 INFO nova.compute.manager [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Took 6.37 seconds to build instance.#033[00m
Oct  2 04:21:20 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:20Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:2c:3a 10.100.0.12
Oct  2 04:21:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:20.997 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cda4465c-b792-4609-a590-efccfb4e2b93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285157, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:21:21 np0005465604 nova_compute[260603]: 2025-10-02 08:21:21.005 2 DEBUG oslo_concurrency.lockutils [None req-69329b4a-4549-4473-8872-33be8b173bb6 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "d968b5b6-ba9b-488d-b0bf-4bc79ecfe839" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:21.016 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8da83d32-65e2-4119-bf6e-62f6a98855d9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285159, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285159, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:21:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:21.018 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:21:21 np0005465604 nova_compute[260603]: 2025-10-02 08:21:21.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:21 np0005465604 nova_compute[260603]: 2025-10-02 08:21:21.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:21.022 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:21:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:21.022 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:21:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:21.022 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:21:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:21.022 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:21:21 np0005465604 nova_compute[260603]: 2025-10-02 08:21:21.261 2 DEBUG nova.compute.manager [req-2a441134-fece-4f39-9034-181d51eb13cc req-cf74c205-c202-4875-9705-29c732b7c56f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:21:21 np0005465604 nova_compute[260603]: 2025-10-02 08:21:21.261 2 DEBUG oslo_concurrency.lockutils [req-2a441134-fece-4f39-9034-181d51eb13cc req-cf74c205-c202-4875-9705-29c732b7c56f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:21 np0005465604 nova_compute[260603]: 2025-10-02 08:21:21.262 2 DEBUG oslo_concurrency.lockutils [req-2a441134-fece-4f39-9034-181d51eb13cc req-cf74c205-c202-4875-9705-29c732b7c56f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:21 np0005465604 nova_compute[260603]: 2025-10-02 08:21:21.262 2 DEBUG oslo_concurrency.lockutils [req-2a441134-fece-4f39-9034-181d51eb13cc req-cf74c205-c202-4875-9705-29c732b7c56f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:21 np0005465604 nova_compute[260603]: 2025-10-02 08:21:21.262 2 DEBUG nova.compute.manager [req-2a441134-fece-4f39-9034-181d51eb13cc req-cf74c205-c202-4875-9705-29c732b7c56f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Processing event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 04:21:21 np0005465604 nova_compute[260603]: 2025-10-02 08:21:21.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:21Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:c3:98 10.100.0.14
Oct  2 04:21:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:21Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:c3:98 10.100.0.14
Oct  2 04:21:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:21:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4281132284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:21:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:21:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4281132284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.269 2 DEBUG nova.compute.manager [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.270 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393282.2687824, c0ec8879-e818-40a5-88ec-c6d89b8ddea4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.270 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] VM Started (Lifecycle Event)
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.273 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.276 2 INFO nova.virt.libvirt.driver [-] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Instance spawned successfully.
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.276 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.302 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.304 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.312 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.313 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.313 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.314 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.314 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.314 2 DEBUG nova.virt.libvirt.driver [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.326 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.326 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393282.2697027, c0ec8879-e818-40a5-88ec-c6d89b8ddea4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.326 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] VM Paused (Lifecycle Event)
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.370 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.373 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393282.2730598, c0ec8879-e818-40a5-88ec-c6d89b8ddea4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.373 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] VM Resumed (Lifecycle Event)
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.409 2 INFO nova.compute.manager [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Took 6.98 seconds to spawn the instance on the hypervisor.
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.410 2 DEBUG nova.compute.manager [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.424 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.427 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.462 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.511 2 INFO nova.compute.manager [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Took 8.21 seconds to build instance.
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.555 2 DEBUG oslo_concurrency.lockutils [None req-d912b7cd-00a3-4aa6-b145-e5fb43ebb94a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.656 2 DEBUG nova.objects.instance [None req-61b92f5f-a4c3-464a-96f6-34bafa8f888d 7da0cb59d8da4b6980ea2c10241ce7b1 0f72775a9b5d4d38b5af47703231f1ac - - default default] Lazy-loading 'pci_devices' on Instance uuid d968b5b6-ba9b-488d-b0bf-4bc79ecfe839 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.688 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393282.6878319, d968b5b6-ba9b-488d-b0bf-4bc79ecfe839 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.689 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] VM Paused (Lifecycle Event)
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.708 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.717 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.739 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] During sync_power_state the instance has a pending task (suspending). Skip.
Oct  2 04:21:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 378 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 13 MiB/s wr, 675 op/s
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.971 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Acquiring lock "e500afcd-1008-4a02-8b21-b564db9bd367" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.971 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "e500afcd-1008-4a02-8b21-b564db9bd367" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.972 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Acquiring lock "e500afcd-1008-4a02-8b21-b564db9bd367-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.972 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "e500afcd-1008-4a02-8b21-b564db9bd367-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.972 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "e500afcd-1008-4a02-8b21-b564db9bd367-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.974 2 INFO nova.compute.manager [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Terminating instance
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.975 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Acquiring lock "refresh_cache-e500afcd-1008-4a02-8b21-b564db9bd367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.975 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Acquired lock "refresh_cache-e500afcd-1008-4a02-8b21-b564db9bd367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:21:22 np0005465604 nova_compute[260603]: 2025-10-02 08:21:22.975 2 DEBUG nova.network.neutron [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:21:23 np0005465604 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct  2 04:21:23 np0005465604 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000012.scope: Consumed 2.713s CPU time.
Oct  2 04:21:23 np0005465604 systemd-machined[214636]: Machine qemu-19-instance-00000012 terminated.
Oct  2 04:21:23 np0005465604 nova_compute[260603]: 2025-10-02 08:21:23.161 2 DEBUG nova.network.neutron [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:21:23 np0005465604 nova_compute[260603]: 2025-10-02 08:21:23.198 2 DEBUG nova.compute.manager [None req-61b92f5f-a4c3-464a-96f6-34bafa8f888d 7da0cb59d8da4b6980ea2c10241ce7b1 0f72775a9b5d4d38b5af47703231f1ac - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:21:23 np0005465604 nova_compute[260603]: 2025-10-02 08:21:23.421 2 DEBUG nova.compute.manager [req-f3f023c7-74ef-4c53-b8e6-0260c06b6f04 req-3ac10f7f-3737-4ae2-97cb-64cb221cbfd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:21:23 np0005465604 nova_compute[260603]: 2025-10-02 08:21:23.421 2 DEBUG oslo_concurrency.lockutils [req-f3f023c7-74ef-4c53-b8e6-0260c06b6f04 req-3ac10f7f-3737-4ae2-97cb-64cb221cbfd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:23 np0005465604 nova_compute[260603]: 2025-10-02 08:21:23.422 2 DEBUG oslo_concurrency.lockutils [req-f3f023c7-74ef-4c53-b8e6-0260c06b6f04 req-3ac10f7f-3737-4ae2-97cb-64cb221cbfd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:23 np0005465604 nova_compute[260603]: 2025-10-02 08:21:23.422 2 DEBUG oslo_concurrency.lockutils [req-f3f023c7-74ef-4c53-b8e6-0260c06b6f04 req-3ac10f7f-3737-4ae2-97cb-64cb221cbfd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:23 np0005465604 nova_compute[260603]: 2025-10-02 08:21:23.422 2 DEBUG nova.compute.manager [req-f3f023c7-74ef-4c53-b8e6-0260c06b6f04 req-3ac10f7f-3737-4ae2-97cb-64cb221cbfd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] No waiting events found dispatching network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:21:23 np0005465604 nova_compute[260603]: 2025-10-02 08:21:23.422 2 WARNING nova.compute.manager [req-f3f023c7-74ef-4c53-b8e6-0260c06b6f04 req-3ac10f7f-3737-4ae2-97cb-64cb221cbfd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received unexpected event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 for instance with vm_state active and task_state None.
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.009 2 DEBUG nova.network.neutron [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.023 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Releasing lock "refresh_cache-e500afcd-1008-4a02-8b21-b564db9bd367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.024 2 DEBUG nova.compute.manager [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:24 np0005465604 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct  2 04:21:24 np0005465604 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000011.scope: Consumed 5.159s CPU time.
Oct  2 04:21:24 np0005465604 systemd-machined[214636]: Machine qemu-18-instance-00000011 terminated.
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.242 2 INFO nova.virt.libvirt.driver [-] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Instance destroyed successfully.
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.243 2 DEBUG nova.objects.instance [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lazy-loading 'resources' on Instance uuid e500afcd-1008-4a02-8b21-b564db9bd367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.642 2 INFO nova.virt.libvirt.driver [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Deleting instance files /var/lib/nova/instances/e500afcd-1008-4a02-8b21-b564db9bd367_del
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.643 2 INFO nova.virt.libvirt.driver [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Deletion of /var/lib/nova/instances/e500afcd-1008-4a02-8b21-b564db9bd367_del complete
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.697 2 INFO nova.compute.manager [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Took 0.67 seconds to destroy the instance on the hypervisor.
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.697 2 DEBUG oslo.service.loopingcall [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.698 2 DEBUG nova.compute.manager [-] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.699 2 DEBUG nova.network.neutron [-] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:21:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 409 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 8.4 MiB/s rd, 13 MiB/s wr, 602 op/s
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.925 2 DEBUG nova.network.neutron [-] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.942 2 DEBUG nova.network.neutron [-] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:21:24 np0005465604 nova_compute[260603]: 2025-10-02 08:21:24.966 2 INFO nova.compute.manager [-] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Took 0.27 seconds to deallocate network for instance.
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.008 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.009 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.157 2 DEBUG oslo_concurrency.processutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.417 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "d968b5b6-ba9b-488d-b0bf-4bc79ecfe839" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.418 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "d968b5b6-ba9b-488d-b0bf-4bc79ecfe839" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.418 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "d968b5b6-ba9b-488d-b0bf-4bc79ecfe839-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.418 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "d968b5b6-ba9b-488d-b0bf-4bc79ecfe839-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.419 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "d968b5b6-ba9b-488d-b0bf-4bc79ecfe839-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.420 2 INFO nova.compute.manager [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Terminating instance
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.421 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "refresh_cache-d968b5b6-ba9b-488d-b0bf-4bc79ecfe839" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.421 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquired lock "refresh_cache-d968b5b6-ba9b-488d-b0bf-4bc79ecfe839" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.421 2 DEBUG nova.network.neutron [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:21:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2858943581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.619 2 DEBUG oslo_concurrency.processutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.623 2 DEBUG nova.compute.provider_tree [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.644 2 DEBUG nova.scheduler.client.report [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.662 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.685 2 INFO nova.scheduler.client.report [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Deleted allocations for instance e500afcd-1008-4a02-8b21-b564db9bd367
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.740 2 DEBUG oslo_concurrency.lockutils [None req-b17baaa3-c88b-4fac-aca1-14a7a2fbb6e2 4a2ef26eb3dc4174bd92f70f77e44d3a 504acd832d284781b00969cd97758303 - - default default] Lock "e500afcd-1008-4a02-8b21-b564db9bd367" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:25 np0005465604 nova_compute[260603]: 2025-10-02 08:21:25.746 2 DEBUG nova.network.neutron [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.148 2 DEBUG nova.network.neutron [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.170 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Releasing lock "refresh_cache-d968b5b6-ba9b-488d-b0bf-4bc79ecfe839" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.171 2 DEBUG nova.compute.manager [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.184 2 INFO nova.virt.libvirt.driver [-] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Instance destroyed successfully.
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.185 2 DEBUG nova.objects.instance [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lazy-loading 'resources' on Instance uuid d968b5b6-ba9b-488d-b0bf-4bc79ecfe839 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.662 2 INFO nova.virt.libvirt.driver [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Deleting instance files /var/lib/nova/instances/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_del
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.664 2 INFO nova.virt.libvirt.driver [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Deletion of /var/lib/nova/instances/d968b5b6-ba9b-488d-b0bf-4bc79ecfe839_del complete
Oct  2 04:21:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 409 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 12 MiB/s wr, 532 op/s
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.767 2 INFO nova.compute.manager [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Took 0.59 seconds to destroy the instance on the hypervisor.
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.768 2 DEBUG oslo.service.loopingcall [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.768 2 DEBUG nova.compute.manager [-] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:21:26 np0005465604 nova_compute[260603]: 2025-10-02 08:21:26.769 2 DEBUG nova.network.neutron [-] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:21:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.125 2 DEBUG nova.network.neutron [-] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.140 2 DEBUG nova.network.neutron [-] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.156 2 INFO nova.compute.manager [-] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Took 0.39 seconds to deallocate network for instance.
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.212 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.213 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.355 2 DEBUG oslo_concurrency.processutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.794 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.794 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364413093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.812 2 DEBUG nova.compute.manager [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.822 2 DEBUG oslo_concurrency.processutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.827 2 DEBUG nova.compute.provider_tree [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.850 2 DEBUG nova.scheduler.client.report [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.879 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.880 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.882 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.888 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.888 2 INFO nova.compute.claims [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.901 2 INFO nova.scheduler.client.report [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Deleted allocations for instance d968b5b6-ba9b-488d-b0bf-4bc79ecfe839
Oct  2 04:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:21:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:21:27
Oct  2 04:21:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:21:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:21:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.control', 'images', '.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes']
Oct  2 04:21:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:21:27 np0005465604 nova_compute[260603]: 2025-10-02 08:21:27.995 2 DEBUG oslo_concurrency.lockutils [None req-cd6590db-1c0a-4b58-9876-cdf4397d220b 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "d968b5b6-ba9b-488d-b0bf-4bc79ecfe839" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.081 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.399 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "45e20d73-3f71-4e90-b1b5-f30fcb043922" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.400 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "45e20d73-3f71-4e90-b1b5-f30fcb043922" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.401 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "45e20d73-3f71-4e90-b1b5-f30fcb043922-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.401 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "45e20d73-3f71-4e90-b1b5-f30fcb043922-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.402 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "45e20d73-3f71-4e90-b1b5-f30fcb043922-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.403 2 INFO nova.compute.manager [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Terminating instance
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.405 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "refresh_cache-45e20d73-3f71-4e90-b1b5-f30fcb043922" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.405 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquired lock "refresh_cache-45e20d73-3f71-4e90-b1b5-f30fcb043922" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.405 2 DEBUG nova.network.neutron [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:21:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2669854872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.516 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.525 2 DEBUG nova.compute.provider_tree [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.553 2 DEBUG nova.scheduler.client.report [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.612 2 DEBUG nova.network.neutron [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.620 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.621 2 DEBUG nova.compute.manager [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.691 2 DEBUG nova.compute.manager [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.692 2 DEBUG nova.network.neutron [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.715 2 INFO nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.741 2 DEBUG nova.compute.manager [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:21:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 326 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 9.1 MiB/s rd, 12 MiB/s wr, 652 op/s
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.858 2 DEBUG nova.compute.manager [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.860 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.861 2 INFO nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Creating image(s)
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.910 2 DEBUG nova.storage.rbd_utils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:28 np0005465604 nova_compute[260603]: 2025-10-02 08:21:28.942 2 DEBUG nova.storage.rbd_utils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.007 2 DEBUG nova.storage.rbd_utils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.012 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.044 2 DEBUG nova.network.neutron [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.048 2 DEBUG nova.policy [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b82955fab174d8aac325e64068908f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '13ecc6dea7a8465394379400d84a053e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.086 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Releasing lock "refresh_cache-45e20d73-3f71-4e90-b1b5-f30fcb043922" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.088 2 DEBUG nova.compute.manager [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.122 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.123 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.123 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.124 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.162 2 DEBUG nova.storage.rbd_utils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.169 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:29 np0005465604 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct  2 04:21:29 np0005465604 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Consumed 11.971s CPU time.
Oct  2 04:21:29 np0005465604 systemd-machined[214636]: Machine qemu-16-instance-0000000e terminated.
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.312 2 INFO nova.virt.libvirt.driver [-] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Instance destroyed successfully.
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.313 2 DEBUG nova.objects.instance [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lazy-loading 'resources' on Instance uuid 45e20d73-3f71-4e90-b1b5-f30fcb043922 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.470 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.527 2 DEBUG nova.storage.rbd_utils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] resizing rbd image 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.638 2 DEBUG nova.objects.instance [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'migration_context' on Instance uuid 01296dff-20d5-49d6-b582-f9ec1d6b0af8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.657 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.657 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Ensure instance console log exists: /var/lib/nova/instances/01296dff-20d5-49d6-b582-f9ec1d6b0af8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.658 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.658 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.659 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.774 2 INFO nova.virt.libvirt.driver [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Deleting instance files /var/lib/nova/instances/45e20d73-3f71-4e90-b1b5-f30fcb043922_del
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.775 2 INFO nova.virt.libvirt.driver [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Deletion of /var/lib/nova/instances/45e20d73-3f71-4e90-b1b5-f30fcb043922_del complete
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.827 2 INFO nova.compute.manager [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.828 2 DEBUG oslo.service.loopingcall [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.829 2 DEBUG nova.compute.manager [-] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.829 2 DEBUG nova.network.neutron [-] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:21:29 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8859591e-13b3-495e-9a73-e4f0671c418f does not exist
Oct  2 04:21:29 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2c8196fc-6524-4449-8375-19135228f7ac does not exist
Oct  2 04:21:29 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 45bf8648-51e3-4718-85e9-2b35a5014de0 does not exist
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:21:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:21:29 np0005465604 nova_compute[260603]: 2025-10-02 08:21:29.913 2 DEBUG nova.network.neutron [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Successfully created port: 3979d203-cacc-40b7-9662-efc009354ac2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.114 2 DEBUG nova.network.neutron [-] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.131 2 DEBUG nova.network.neutron [-] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.144 2 INFO nova.compute.manager [-] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Took 0.32 seconds to deallocate network for instance.
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.189 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.190 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.325 2 DEBUG oslo_concurrency.processutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:21:30 np0005465604 podman[285791]: 2025-10-02 08:21:30.555482263 +0000 UTC m=+0.042424959 container create 83849e4eb55a8fb1a114f8ba77c7a15e30fcf0af1d0f944456d63ed61c3f0246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:21:30 np0005465604 systemd[1]: Started libpod-conmon-83849e4eb55a8fb1a114f8ba77c7a15e30fcf0af1d0f944456d63ed61c3f0246.scope.
Oct  2 04:21:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:21:30 np0005465604 podman[285791]: 2025-10-02 08:21:30.537300674 +0000 UTC m=+0.024243370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:21:30 np0005465604 podman[285791]: 2025-10-02 08:21:30.653540182 +0000 UTC m=+0.140482868 container init 83849e4eb55a8fb1a114f8ba77c7a15e30fcf0af1d0f944456d63ed61c3f0246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hawking, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:21:30 np0005465604 podman[285791]: 2025-10-02 08:21:30.661156771 +0000 UTC m=+0.148099457 container start 83849e4eb55a8fb1a114f8ba77c7a15e30fcf0af1d0f944456d63ed61c3f0246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hawking, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:21:30 np0005465604 podman[285791]: 2025-10-02 08:21:30.664228787 +0000 UTC m=+0.151171473 container attach 83849e4eb55a8fb1a114f8ba77c7a15e30fcf0af1d0f944456d63ed61c3f0246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 04:21:30 np0005465604 vigorous_hawking[285807]: 167 167
Oct  2 04:21:30 np0005465604 systemd[1]: libpod-83849e4eb55a8fb1a114f8ba77c7a15e30fcf0af1d0f944456d63ed61c3f0246.scope: Deactivated successfully.
Oct  2 04:21:30 np0005465604 podman[285812]: 2025-10-02 08:21:30.705309743 +0000 UTC m=+0.025407737 container died 83849e4eb55a8fb1a114f8ba77c7a15e30fcf0af1d0f944456d63ed61c3f0246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hawking, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:21:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-39965f96a396e2a740a0d6d54e402675e39d1e0e252384f4a856968e430272d2-merged.mount: Deactivated successfully.
Oct  2 04:21:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174065333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:30 np0005465604 podman[285812]: 2025-10-02 08:21:30.751362733 +0000 UTC m=+0.071460697 container remove 83849e4eb55a8fb1a114f8ba77c7a15e30fcf0af1d0f944456d63ed61c3f0246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:21:30 np0005465604 systemd[1]: libpod-conmon-83849e4eb55a8fb1a114f8ba77c7a15e30fcf0af1d0f944456d63ed61c3f0246.scope: Deactivated successfully.
Oct  2 04:21:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 326 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 6.7 MiB/s rd, 7.7 MiB/s wr, 475 op/s
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.767 2 DEBUG oslo_concurrency.processutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.779 2 DEBUG nova.compute.provider_tree [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.804 2 DEBUG nova.scheduler.client.report [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.836 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.864 2 INFO nova.scheduler.client.report [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Deleted allocations for instance 45e20d73-3f71-4e90-b1b5-f30fcb043922#033[00m
Oct  2 04:21:30 np0005465604 podman[285836]: 2025-10-02 08:21:30.93879556 +0000 UTC m=+0.048250682 container create 1db98e8a4550ccde3d2c06b00d1a303d2f68381f4062c80825ca5b3d04cec490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 04:21:30 np0005465604 systemd[1]: Started libpod-conmon-1db98e8a4550ccde3d2c06b00d1a303d2f68381f4062c80825ca5b3d04cec490.scope.
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.994 2 DEBUG oslo_concurrency.lockutils [None req-46514740-e512-499a-952b-8f8b83631aec 5da29605949d4f0abb43aa8f0801e6b7 cb733b60e33d491c9c8c0cd574145cce - - default default] Lock "45e20d73-3f71-4e90-b1b5-f30fcb043922" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:30 np0005465604 nova_compute[260603]: 2025-10-02 08:21:30.998 2 DEBUG nova.network.neutron [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Successfully updated port: 3979d203-cacc-40b7-9662-efc009354ac2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:21:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:21:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d1526bceea8f8327270827f746268c3483e9e2893e9051bf515c804c1f787d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:31 np0005465604 nova_compute[260603]: 2025-10-02 08:21:31.014 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "refresh_cache-01296dff-20d5-49d6-b582-f9ec1d6b0af8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:21:31 np0005465604 nova_compute[260603]: 2025-10-02 08:21:31.015 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquired lock "refresh_cache-01296dff-20d5-49d6-b582-f9ec1d6b0af8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:21:31 np0005465604 nova_compute[260603]: 2025-10-02 08:21:31.015 2 DEBUG nova.network.neutron [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:21:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d1526bceea8f8327270827f746268c3483e9e2893e9051bf515c804c1f787d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d1526bceea8f8327270827f746268c3483e9e2893e9051bf515c804c1f787d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d1526bceea8f8327270827f746268c3483e9e2893e9051bf515c804c1f787d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d1526bceea8f8327270827f746268c3483e9e2893e9051bf515c804c1f787d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:31 np0005465604 podman[285836]: 2025-10-02 08:21:30.919935949 +0000 UTC m=+0.029391071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:21:31 np0005465604 podman[285836]: 2025-10-02 08:21:31.04233736 +0000 UTC m=+0.151792502 container init 1db98e8a4550ccde3d2c06b00d1a303d2f68381f4062c80825ca5b3d04cec490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:21:31 np0005465604 podman[285836]: 2025-10-02 08:21:31.050037492 +0000 UTC m=+0.159492604 container start 1db98e8a4550ccde3d2c06b00d1a303d2f68381f4062c80825ca5b3d04cec490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:21:31 np0005465604 podman[285836]: 2025-10-02 08:21:31.053869382 +0000 UTC m=+0.163324504 container attach 1db98e8a4550ccde3d2c06b00d1a303d2f68381f4062c80825ca5b3d04cec490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:21:31 np0005465604 nova_compute[260603]: 2025-10-02 08:21:31.131 2 DEBUG nova.compute.manager [req-03c5a31f-87ba-4174-814f-fd4d1b715053 req-02209168-b37d-4b27-8026-ab78f9bf0e3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Received event network-changed-3979d203-cacc-40b7-9662-efc009354ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:31 np0005465604 nova_compute[260603]: 2025-10-02 08:21:31.132 2 DEBUG nova.compute.manager [req-03c5a31f-87ba-4174-814f-fd4d1b715053 req-02209168-b37d-4b27-8026-ab78f9bf0e3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Refreshing instance network info cache due to event network-changed-3979d203-cacc-40b7-9662-efc009354ac2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:21:31 np0005465604 nova_compute[260603]: 2025-10-02 08:21:31.132 2 DEBUG oslo_concurrency.lockutils [req-03c5a31f-87ba-4174-814f-fd4d1b715053 req-02209168-b37d-4b27-8026-ab78f9bf0e3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-01296dff-20d5-49d6-b582-f9ec1d6b0af8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:21:31 np0005465604 nova_compute[260603]: 2025-10-02 08:21:31.179 2 DEBUG nova.network.neutron [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:21:31 np0005465604 nova_compute[260603]: 2025-10-02 08:21:31.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:31 np0005465604 nova_compute[260603]: 2025-10-02 08:21:31.678 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393276.6771266, b4e47139-8bff-466c-b453-5bafefeaad62 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:31 np0005465604 nova_compute[260603]: 2025-10-02 08:21:31.680 2 INFO nova.compute.manager [-] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:21:31 np0005465604 nova_compute[260603]: 2025-10-02 08:21:31.708 2 DEBUG nova.compute.manager [None req-0f3ec4db-e817-4bc4-bb0a-ae257882ce2d - - - - - -] [instance: b4e47139-8bff-466c-b453-5bafefeaad62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:32 np0005465604 podman[285878]: 2025-10-02 08:21:32.040778369 +0000 UTC m=+0.090065519 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:21:32 np0005465604 epic_allen[285853]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:21:32 np0005465604 epic_allen[285853]: --> relative data size: 1.0
Oct  2 04:21:32 np0005465604 epic_allen[285853]: --> All data devices are unavailable
Oct  2 04:21:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct  2 04:21:32 np0005465604 systemd[1]: libpod-1db98e8a4550ccde3d2c06b00d1a303d2f68381f4062c80825ca5b3d04cec490.scope: Deactivated successfully.
Oct  2 04:21:32 np0005465604 podman[285836]: 2025-10-02 08:21:32.109104198 +0000 UTC m=+1.218559300 container died 1db98e8a4550ccde3d2c06b00d1a303d2f68381f4062c80825ca5b3d04cec490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:21:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct  2 04:21:32 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct  2 04:21:32 np0005465604 podman[285876]: 2025-10-02 08:21:32.136965239 +0000 UTC m=+0.183889785 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:21:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-93d1526bceea8f8327270827f746268c3483e9e2893e9051bf515c804c1f787d-merged.mount: Deactivated successfully.
Oct  2 04:21:32 np0005465604 podman[285836]: 2025-10-02 08:21:32.184303062 +0000 UTC m=+1.293758154 container remove 1db98e8a4550ccde3d2c06b00d1a303d2f68381f4062c80825ca5b3d04cec490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_allen, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:21:32 np0005465604 systemd[1]: libpod-conmon-1db98e8a4550ccde3d2c06b00d1a303d2f68381f4062c80825ca5b3d04cec490.scope: Deactivated successfully.
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.283 2 DEBUG nova.network.neutron [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Updating instance_info_cache with network_info: [{"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.301 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Releasing lock "refresh_cache-01296dff-20d5-49d6-b582-f9ec1d6b0af8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.301 2 DEBUG nova.compute.manager [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Instance network_info: |[{"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.301 2 DEBUG oslo_concurrency.lockutils [req-03c5a31f-87ba-4174-814f-fd4d1b715053 req-02209168-b37d-4b27-8026-ab78f9bf0e3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-01296dff-20d5-49d6-b582-f9ec1d6b0af8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.301 2 DEBUG nova.network.neutron [req-03c5a31f-87ba-4174-814f-fd4d1b715053 req-02209168-b37d-4b27-8026-ab78f9bf0e3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Refreshing network info cache for port 3979d203-cacc-40b7-9662-efc009354ac2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.303 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Start _get_guest_xml network_info=[{"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.307 2 WARNING nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.312 2 DEBUG nova.virt.libvirt.host [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.313 2 DEBUG nova.virt.libvirt.host [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.315 2 DEBUG nova.virt.libvirt.host [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.315 2 DEBUG nova.virt.libvirt.host [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.315 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.315 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.316 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.316 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.316 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.316 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.316 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.317 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.317 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.317 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.317 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.317 2 DEBUG nova.virt.hardware [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.319 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 305 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 310 op/s
Oct  2 04:21:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2206627987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:32 np0005465604 podman[286095]: 2025-10-02 08:21:32.814842566 +0000 UTC m=+0.056600353 container create 902eee3c3dad602bfc8ba9114f7498d3d767c5bdeb3779b8c4d3ec5d0fedb155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_booth, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.817 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.838 2 DEBUG nova.storage.rbd_utils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:32 np0005465604 nova_compute[260603]: 2025-10-02 08:21:32.847 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:32 np0005465604 systemd[1]: Started libpod-conmon-902eee3c3dad602bfc8ba9114f7498d3d767c5bdeb3779b8c4d3ec5d0fedb155.scope.
Oct  2 04:21:32 np0005465604 podman[286095]: 2025-10-02 08:21:32.786689544 +0000 UTC m=+0.028447351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:21:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:21:32 np0005465604 podman[286095]: 2025-10-02 08:21:32.925741867 +0000 UTC m=+0.167499654 container init 902eee3c3dad602bfc8ba9114f7498d3d767c5bdeb3779b8c4d3ec5d0fedb155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:21:32 np0005465604 podman[286095]: 2025-10-02 08:21:32.936688329 +0000 UTC m=+0.178446096 container start 902eee3c3dad602bfc8ba9114f7498d3d767c5bdeb3779b8c4d3ec5d0fedb155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_booth, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 04:21:32 np0005465604 podman[286095]: 2025-10-02 08:21:32.941302644 +0000 UTC m=+0.183060441 container attach 902eee3c3dad602bfc8ba9114f7498d3d767c5bdeb3779b8c4d3ec5d0fedb155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_booth, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:21:32 np0005465604 inspiring_booth[286133]: 167 167
Oct  2 04:21:32 np0005465604 systemd[1]: libpod-902eee3c3dad602bfc8ba9114f7498d3d767c5bdeb3779b8c4d3ec5d0fedb155.scope: Deactivated successfully.
Oct  2 04:21:33 np0005465604 podman[286138]: 2025-10-02 08:21:33.013304897 +0000 UTC m=+0.046692352 container died 902eee3c3dad602bfc8ba9114f7498d3d767c5bdeb3779b8c4d3ec5d0fedb155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_booth, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:21:33 np0005465604 systemd[1]: var-lib-containers-storage-overlay-db73656753f6b4f0ee8a2acc4b7f0c1f526c47494f1a6d9001dc5630d8ffafdf-merged.mount: Deactivated successfully.
Oct  2 04:21:33 np0005465604 podman[286138]: 2025-10-02 08:21:33.050840772 +0000 UTC m=+0.084228197 container remove 902eee3c3dad602bfc8ba9114f7498d3d767c5bdeb3779b8c4d3ec5d0fedb155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_booth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 04:21:33 np0005465604 systemd[1]: libpod-conmon-902eee3c3dad602bfc8ba9114f7498d3d767c5bdeb3779b8c4d3ec5d0fedb155.scope: Deactivated successfully.
Oct  2 04:21:33 np0005465604 podman[286177]: 2025-10-02 08:21:33.225349313 +0000 UTC m=+0.043936166 container create 2030fef60cc7ced1a54e15a4b60cb9a3380141835ea761b615d77e596ed41a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:21:33 np0005465604 systemd[1]: Started libpod-conmon-2030fef60cc7ced1a54e15a4b60cb9a3380141835ea761b615d77e596ed41a71.scope.
Oct  2 04:21:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:21:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6f82b166cf75e32bfd7841639ff6ce574e7cabc6f3835f8daa6ff5967aae19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:33 np0005465604 podman[286177]: 2025-10-02 08:21:33.209462946 +0000 UTC m=+0.028049799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:21:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6f82b166cf75e32bfd7841639ff6ce574e7cabc6f3835f8daa6ff5967aae19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6f82b166cf75e32bfd7841639ff6ce574e7cabc6f3835f8daa6ff5967aae19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6f82b166cf75e32bfd7841639ff6ce574e7cabc6f3835f8daa6ff5967aae19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:33 np0005465604 podman[286177]: 2025-10-02 08:21:33.317625651 +0000 UTC m=+0.136212514 container init 2030fef60cc7ced1a54e15a4b60cb9a3380141835ea761b615d77e596ed41a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 04:21:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/903197141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:33 np0005465604 podman[286177]: 2025-10-02 08:21:33.326494169 +0000 UTC m=+0.145081032 container start 2030fef60cc7ced1a54e15a4b60cb9a3380141835ea761b615d77e596ed41a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 04:21:33 np0005465604 podman[286177]: 2025-10-02 08:21:33.329912056 +0000 UTC m=+0.148498939 container attach 2030fef60cc7ced1a54e15a4b60cb9a3380141835ea761b615d77e596ed41a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.339 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.343 2 DEBUG nova.virt.libvirt.vif [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:21:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1319481695',display_name='tempest-ServersAdminTestJSON-server-1319481695',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1319481695',id=19,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-hf9n33mn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-
project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:28Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=01296dff-20d5-49d6-b582-f9ec1d6b0af8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.343 2 DEBUG nova.network.os_vif_util [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.344 2 DEBUG nova.network.os_vif_util [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e1:60,bridge_name='br-int',has_traffic_filtering=True,id=3979d203-cacc-40b7-9662-efc009354ac2,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3979d203-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.346 2 DEBUG nova.objects.instance [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'pci_devices' on Instance uuid 01296dff-20d5-49d6-b582-f9ec1d6b0af8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.381 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  <uuid>01296dff-20d5-49d6-b582-f9ec1d6b0af8</uuid>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  <name>instance-00000013</name>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAdminTestJSON-server-1319481695</nova:name>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:21:32</nova:creationTime>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <nova:user uuid="2b82955fab174d8aac325e64068908f5">tempest-ServersAdminTestJSON-83453142-project-member</nova:user>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <nova:project uuid="13ecc6dea7a8465394379400d84a053e">tempest-ServersAdminTestJSON-83453142</nova:project>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <nova:port uuid="3979d203-cacc-40b7-9662-efc009354ac2">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <entry name="serial">01296dff-20d5-49d6-b582-f9ec1d6b0af8</entry>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <entry name="uuid">01296dff-20d5-49d6-b582-f9ec1d6b0af8</entry>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk.config">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:9d:e1:60"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <target dev="tap3979d203-ca"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/01296dff-20d5-49d6-b582-f9ec1d6b0af8/console.log" append="off"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:21:33 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:21:33 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:21:33 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:21:33 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.381 2 DEBUG nova.compute.manager [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Preparing to wait for external event network-vif-plugged-3979d203-cacc-40b7-9662-efc009354ac2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.381 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.382 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.382 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.383 2 DEBUG nova.virt.libvirt.vif [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:21:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1319481695',display_name='tempest-ServersAdminTestJSON-server-1319481695',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1319481695',id=19,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-hf9n33mn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON
-83453142-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:28Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=01296dff-20d5-49d6-b582-f9ec1d6b0af8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.383 2 DEBUG nova.network.os_vif_util [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.384 2 DEBUG nova.network.os_vif_util [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e1:60,bridge_name='br-int',has_traffic_filtering=True,id=3979d203-cacc-40b7-9662-efc009354ac2,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3979d203-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.384 2 DEBUG os_vif [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e1:60,bridge_name='br-int',has_traffic_filtering=True,id=3979d203-cacc-40b7-9662-efc009354ac2,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3979d203-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3979d203-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3979d203-ca, col_values=(('external_ids', {'iface-id': '3979d203-cacc-40b7-9662-efc009354ac2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:e1:60', 'vm-uuid': '01296dff-20d5-49d6-b582-f9ec1d6b0af8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:33 np0005465604 NetworkManager[45129]: <info>  [1759393293.3931] manager: (tap3979d203-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.402 2 INFO os_vif [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e1:60,bridge_name='br-int',has_traffic_filtering=True,id=3979d203-cacc-40b7-9662-efc009354ac2,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3979d203-ca')#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.468 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.469 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.469 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No VIF found with MAC fa:16:3e:9d:e1:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.469 2 INFO nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Using config drive#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.492 2 DEBUG nova.storage.rbd_utils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.775 2 INFO nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Creating config drive at /var/lib/nova/instances/01296dff-20d5-49d6-b582-f9ec1d6b0af8/disk.config#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.780 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01296dff-20d5-49d6-b582-f9ec1d6b0af8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2079_saz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.911 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01296dff-20d5-49d6-b582-f9ec1d6b0af8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2079_saz" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.933 2 DEBUG nova.storage.rbd_utils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:33 np0005465604 nova_compute[260603]: 2025-10-02 08:21:33.936 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01296dff-20d5-49d6-b582-f9ec1d6b0af8/disk.config 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.093 2 DEBUG oslo_concurrency.processutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01296dff-20d5-49d6-b582-f9ec1d6b0af8/disk.config 01296dff-20d5-49d6-b582-f9ec1d6b0af8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.094 2 INFO nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Deleting local config drive /var/lib/nova/instances/01296dff-20d5-49d6-b582-f9ec1d6b0af8/disk.config because it was imported into RBD.#033[00m
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]: {
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:    "0": [
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:        {
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "devices": [
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "/dev/loop3"
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            ],
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_name": "ceph_lv0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_size": "21470642176",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "name": "ceph_lv0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "tags": {
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.cluster_name": "ceph",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.crush_device_class": "",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.encrypted": "0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.osd_id": "0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.type": "block",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.vdo": "0"
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            },
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "type": "block",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "vg_name": "ceph_vg0"
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:        }
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:    ],
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:    "1": [
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:        {
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "devices": [
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "/dev/loop4"
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            ],
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_name": "ceph_lv1",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_size": "21470642176",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "name": "ceph_lv1",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "tags": {
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.cluster_name": "ceph",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.crush_device_class": "",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.encrypted": "0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.osd_id": "1",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.type": "block",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.vdo": "0"
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            },
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "type": "block",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "vg_name": "ceph_vg1"
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:        }
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:    ],
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:    "2": [
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:        {
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "devices": [
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "/dev/loop5"
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            ],
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_name": "ceph_lv2",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_size": "21470642176",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "name": "ceph_lv2",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "tags": {
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.cluster_name": "ceph",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.crush_device_class": "",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.encrypted": "0",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.osd_id": "2",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.type": "block",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:                "ceph.vdo": "0"
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            },
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "type": "block",
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:            "vg_name": "ceph_vg2"
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:        }
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]:    ]
Oct  2 04:21:34 np0005465604 condescending_diffie[286194]: }
Oct  2 04:21:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct  2 04:21:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct  2 04:21:34 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct  2 04:21:34 np0005465604 systemd[1]: libpod-2030fef60cc7ced1a54e15a4b60cb9a3380141835ea761b615d77e596ed41a71.scope: Deactivated successfully.
Oct  2 04:21:34 np0005465604 podman[286177]: 2025-10-02 08:21:34.154219705 +0000 UTC m=+0.972806568 container died 2030fef60cc7ced1a54e15a4b60cb9a3380141835ea761b615d77e596ed41a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 04:21:34 np0005465604 kernel: tap3979d203-ca: entered promiscuous mode
Oct  2 04:21:34 np0005465604 NetworkManager[45129]: <info>  [1759393294.1726] manager: (tap3979d203-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct  2 04:21:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:34Z|00056|binding|INFO|Claiming lport 3979d203-cacc-40b7-9662-efc009354ac2 for this chassis.
Oct  2 04:21:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:34Z|00057|binding|INFO|3979d203-cacc-40b7-9662-efc009354ac2: Claiming fa:16:3e:9d:e1:60 10.100.0.6
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.183 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e1:60 10.100.0.6'], port_security=['fa:16:3e:9d:e1:60 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '01296dff-20d5-49d6-b582-f9ec1d6b0af8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3979d203-cacc-40b7-9662-efc009354ac2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.185 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3979d203-cacc-40b7-9662-efc009354ac2 in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e bound to our chassis#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.186 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:21:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7d6f82b166cf75e32bfd7841639ff6ce574e7cabc6f3835f8daa6ff5967aae19-merged.mount: Deactivated successfully.
Oct  2 04:21:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:34Z|00058|binding|INFO|Setting lport 3979d203-cacc-40b7-9662-efc009354ac2 ovn-installed in OVS
Oct  2 04:21:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:34Z|00059|binding|INFO|Setting lport 3979d203-cacc-40b7-9662-efc009354ac2 up in Southbound
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:34 np0005465604 systemd-udevd[286284]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.205 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dac6fb2b-1114-4737-a4f5-27e6720fe6cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:34 np0005465604 NetworkManager[45129]: <info>  [1759393294.2243] device (tap3979d203-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:21:34 np0005465604 NetworkManager[45129]: <info>  [1759393294.2250] device (tap3979d203-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:21:34 np0005465604 podman[286177]: 2025-10-02 08:21:34.225389652 +0000 UTC m=+1.043976505 container remove 2030fef60cc7ced1a54e15a4b60cb9a3380141835ea761b615d77e596ed41a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:21:34 np0005465604 systemd-machined[214636]: New machine qemu-21-instance-00000013.
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.248 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e00e24-7057-4763-bfd0-03fad7dd320c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:34 np0005465604 systemd[1]: Started Virtual Machine qemu-21-instance-00000013.
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.250 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[17cc26c1-515e-47cf-aaad-2213de91e171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:34 np0005465604 systemd[1]: libpod-conmon-2030fef60cc7ced1a54e15a4b60cb9a3380141835ea761b615d77e596ed41a71.scope: Deactivated successfully.
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.291 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[69b23dc3-60b4-47d5-b533-6d7f68c8cf66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.307 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa225db7-50ee-4164-95ff-b657e88bd46c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286321, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.322 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[93d60395-4ba2-455c-bdc4-d743800f80ed]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286325, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286325, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.324 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.327 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.327 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.335 2 DEBUG nova.network.neutron [req-03c5a31f-87ba-4174-814f-fd4d1b715053 req-02209168-b37d-4b27-8026-ab78f9bf0e3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Updated VIF entry in instance network info cache for port 3979d203-cacc-40b7-9662-efc009354ac2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.335 2 DEBUG nova.network.neutron [req-03c5a31f-87ba-4174-814f-fd4d1b715053 req-02209168-b37d-4b27-8026-ab78f9bf0e3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Updating instance_info_cache with network_info: [{"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.350 2 DEBUG oslo_concurrency.lockutils [req-03c5a31f-87ba-4174-814f-fd4d1b715053 req-02209168-b37d-4b27-8026-ab78f9bf0e3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-01296dff-20d5-49d6-b582-f9ec1d6b0af8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.399 2 DEBUG nova.compute.manager [req-b53642e7-04b7-435f-ae53-6ecd69214160 req-cc49c3b2-2937-49bd-8fcd-ee59a3b3759b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Received event network-vif-plugged-3979d203-cacc-40b7-9662-efc009354ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.399 2 DEBUG oslo_concurrency.lockutils [req-b53642e7-04b7-435f-ae53-6ecd69214160 req-cc49c3b2-2937-49bd-8fcd-ee59a3b3759b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.400 2 DEBUG oslo_concurrency.lockutils [req-b53642e7-04b7-435f-ae53-6ecd69214160 req-cc49c3b2-2937-49bd-8fcd-ee59a3b3759b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.400 2 DEBUG oslo_concurrency.lockutils [req-b53642e7-04b7-435f-ae53-6ecd69214160 req-cc49c3b2-2937-49bd-8fcd-ee59a3b3759b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:34 np0005465604 nova_compute[260603]: 2025-10-02 08:21:34.400 2 DEBUG nova.compute.manager [req-b53642e7-04b7-435f-ae53-6ecd69214160 req-cc49c3b2-2937-49bd-8fcd-ee59a3b3759b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Processing event network-vif-plugged-3979d203-cacc-40b7-9662-efc009354ac2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:21:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 299 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 287 op/s
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.808 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.809 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:34.810 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:34 np0005465604 podman[286487]: 2025-10-02 08:21:34.880918241 +0000 UTC m=+0.035065182 container create 4ef776f2c87b399f375024ef245bae50311519aa24508ec09b6d49b226f0424a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ramanujan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 04:21:34 np0005465604 systemd[1]: Started libpod-conmon-4ef776f2c87b399f375024ef245bae50311519aa24508ec09b6d49b226f0424a.scope.
Oct  2 04:21:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:21:34 np0005465604 podman[286487]: 2025-10-02 08:21:34.866976201 +0000 UTC m=+0.021123162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:21:34 np0005465604 podman[286487]: 2025-10-02 08:21:34.968738202 +0000 UTC m=+0.122885193 container init 4ef776f2c87b399f375024ef245bae50311519aa24508ec09b6d49b226f0424a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:21:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:34Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4c:78:6e 10.100.0.11
Oct  2 04:21:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:34Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4c:78:6e 10.100.0.11
Oct  2 04:21:34 np0005465604 podman[286487]: 2025-10-02 08:21:34.977897217 +0000 UTC m=+0.132044178 container start 4ef776f2c87b399f375024ef245bae50311519aa24508ec09b6d49b226f0424a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ramanujan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 04:21:34 np0005465604 podman[286487]: 2025-10-02 08:21:34.98108975 +0000 UTC m=+0.135236751 container attach 4ef776f2c87b399f375024ef245bae50311519aa24508ec09b6d49b226f0424a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ramanujan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 04:21:34 np0005465604 peaceful_ramanujan[286504]: 167 167
Oct  2 04:21:34 np0005465604 systemd[1]: libpod-4ef776f2c87b399f375024ef245bae50311519aa24508ec09b6d49b226f0424a.scope: Deactivated successfully.
Oct  2 04:21:34 np0005465604 conmon[286504]: conmon 4ef776f2c87b399f3750 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ef776f2c87b399f375024ef245bae50311519aa24508ec09b6d49b226f0424a.scope/container/memory.events
Oct  2 04:21:34 np0005465604 podman[286487]: 2025-10-02 08:21:34.983107545 +0000 UTC m=+0.137254496 container died 4ef776f2c87b399f375024ef245bae50311519aa24508ec09b6d49b226f0424a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:21:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b0c672f22fd800aafe4d737df698b80eacd1900fc1b2e073a44e3b1d2f1b67ea-merged.mount: Deactivated successfully.
Oct  2 04:21:35 np0005465604 podman[286487]: 2025-10-02 08:21:35.012845484 +0000 UTC m=+0.166992425 container remove 4ef776f2c87b399f375024ef245bae50311519aa24508ec09b6d49b226f0424a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ramanujan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:21:35 np0005465604 systemd[1]: libpod-conmon-4ef776f2c87b399f375024ef245bae50311519aa24508ec09b6d49b226f0424a.scope: Deactivated successfully.
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.185 2 DEBUG nova.compute.manager [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.188 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393295.1865194, 01296dff-20d5-49d6-b582-f9ec1d6b0af8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.188 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] VM Started (Lifecycle Event)#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.193 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.201 2 INFO nova.virt.libvirt.driver [-] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Instance spawned successfully.#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.202 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:21:35 np0005465604 podman[286527]: 2025-10-02 08:21:35.223583199 +0000 UTC m=+0.056144421 container create dd3ddc538967a3208a15d4f800bca1b7ec5476648d76148fc28eda79d97813e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:21:35 np0005465604 systemd[1]: Started libpod-conmon-dd3ddc538967a3208a15d4f800bca1b7ec5476648d76148fc28eda79d97813e9.scope.
Oct  2 04:21:35 np0005465604 podman[286527]: 2025-10-02 08:21:35.202958463 +0000 UTC m=+0.035519765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:21:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:21:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b94f4d9a030d373647d76e299702450a510a9daa315ec5d78c70dffe2aa41c82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b94f4d9a030d373647d76e299702450a510a9daa315ec5d78c70dffe2aa41c82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b94f4d9a030d373647d76e299702450a510a9daa315ec5d78c70dffe2aa41c82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b94f4d9a030d373647d76e299702450a510a9daa315ec5d78c70dffe2aa41c82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:35 np0005465604 podman[286527]: 2025-10-02 08:21:35.320064319 +0000 UTC m=+0.152625581 container init dd3ddc538967a3208a15d4f800bca1b7ec5476648d76148fc28eda79d97813e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 04:21:35 np0005465604 podman[286527]: 2025-10-02 08:21:35.330050351 +0000 UTC m=+0.162611593 container start dd3ddc538967a3208a15d4f800bca1b7ec5476648d76148fc28eda79d97813e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:21:35 np0005465604 podman[286527]: 2025-10-02 08:21:35.333476662 +0000 UTC m=+0.166037904 container attach dd3ddc538967a3208a15d4f800bca1b7ec5476648d76148fc28eda79d97813e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.346 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.351 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.351 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.351 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.352 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.352 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.353 2 DEBUG nova.virt.libvirt.driver [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.358 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.384 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.385 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393295.1866078, 01296dff-20d5-49d6-b582-f9ec1d6b0af8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.385 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.408 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.412 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393295.1909273, 01296dff-20d5-49d6-b582-f9ec1d6b0af8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.412 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.418 2 INFO nova.compute.manager [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Took 6.56 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.419 2 DEBUG nova.compute.manager [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.460 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.463 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.491 2 INFO nova.compute.manager [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Took 7.63 seconds to build instance.#033[00m
Oct  2 04:21:35 np0005465604 nova_compute[260603]: 2025-10-02 08:21:35.511 2 DEBUG oslo_concurrency.lockutils [None req-fdf7b34e-2062-49e9-8c2e-3181c57168c3 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:36 np0005465604 sad_galileo[286544]: {
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "osd_id": 2,
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "type": "bluestore"
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:    },
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "osd_id": 1,
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "type": "bluestore"
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:    },
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "osd_id": 0,
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:        "type": "bluestore"
Oct  2 04:21:36 np0005465604 sad_galileo[286544]:    }
Oct  2 04:21:36 np0005465604 sad_galileo[286544]: }
Oct  2 04:21:36 np0005465604 systemd[1]: libpod-dd3ddc538967a3208a15d4f800bca1b7ec5476648d76148fc28eda79d97813e9.scope: Deactivated successfully.
Oct  2 04:21:36 np0005465604 podman[286527]: 2025-10-02 08:21:36.322028866 +0000 UTC m=+1.154590098 container died dd3ddc538967a3208a15d4f800bca1b7ec5476648d76148fc28eda79d97813e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:21:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b94f4d9a030d373647d76e299702450a510a9daa315ec5d78c70dffe2aa41c82-merged.mount: Deactivated successfully.
Oct  2 04:21:36 np0005465604 podman[286527]: 2025-10-02 08:21:36.372560495 +0000 UTC m=+1.205121717 container remove dd3ddc538967a3208a15d4f800bca1b7ec5476648d76148fc28eda79d97813e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_galileo, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 04:21:36 np0005465604 systemd[1]: libpod-conmon-dd3ddc538967a3208a15d4f800bca1b7ec5476648d76148fc28eda79d97813e9.scope: Deactivated successfully.
Oct  2 04:21:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:21:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:21:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:21:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:21:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b56f2635-fd53-4da9-bb69-df245f414514 does not exist
Oct  2 04:21:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 17fed47d-6394-4a8b-968c-2d7d4bf19823 does not exist
Oct  2 04:21:36 np0005465604 nova_compute[260603]: 2025-10-02 08:21:36.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:36 np0005465604 nova_compute[260603]: 2025-10-02 08:21:36.610 2 DEBUG nova.compute.manager [req-5b5e7007-ae62-4d9d-a232-907a403d5fcc req-c34c93a1-324f-4481-a28a-2d0830808f99 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Received event network-vif-plugged-3979d203-cacc-40b7-9662-efc009354ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:36 np0005465604 nova_compute[260603]: 2025-10-02 08:21:36.611 2 DEBUG oslo_concurrency.lockutils [req-5b5e7007-ae62-4d9d-a232-907a403d5fcc req-c34c93a1-324f-4481-a28a-2d0830808f99 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:36 np0005465604 nova_compute[260603]: 2025-10-02 08:21:36.611 2 DEBUG oslo_concurrency.lockutils [req-5b5e7007-ae62-4d9d-a232-907a403d5fcc req-c34c93a1-324f-4481-a28a-2d0830808f99 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:36 np0005465604 nova_compute[260603]: 2025-10-02 08:21:36.611 2 DEBUG oslo_concurrency.lockutils [req-5b5e7007-ae62-4d9d-a232-907a403d5fcc req-c34c93a1-324f-4481-a28a-2d0830808f99 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:36 np0005465604 nova_compute[260603]: 2025-10-02 08:21:36.612 2 DEBUG nova.compute.manager [req-5b5e7007-ae62-4d9d-a232-907a403d5fcc req-c34c93a1-324f-4481-a28a-2d0830808f99 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] No waiting events found dispatching network-vif-plugged-3979d203-cacc-40b7-9662-efc009354ac2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:21:36 np0005465604 nova_compute[260603]: 2025-10-02 08:21:36.612 2 WARNING nova.compute.manager [req-5b5e7007-ae62-4d9d-a232-907a403d5fcc req-c34c93a1-324f-4481-a28a-2d0830808f99 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Received unexpected event network-vif-plugged-3979d203-cacc-40b7-9662-efc009354ac2 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:21:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 299 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 200 KiB/s rd, 3.5 MiB/s wr, 107 op/s
Oct  2 04:21:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:21:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:21:38 np0005465604 podman[286640]: 2025-10-02 08:21:38.065064956 +0000 UTC m=+0.122143900 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 04:21:38 np0005465604 nova_compute[260603]: 2025-10-02 08:21:38.178 2 DEBUG oslo_concurrency.lockutils [None req-fb3939c5-772b-4db5-988b-c3c0120b9053 1a47d72370d54e908da4738c6c49be70 784ead8e69974a7b919072bfe6728770 - - default default] Acquiring lock "refresh_cache-01296dff-20d5-49d6-b582-f9ec1d6b0af8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:21:38 np0005465604 nova_compute[260603]: 2025-10-02 08:21:38.178 2 DEBUG oslo_concurrency.lockutils [None req-fb3939c5-772b-4db5-988b-c3c0120b9053 1a47d72370d54e908da4738c6c49be70 784ead8e69974a7b919072bfe6728770 - - default default] Acquired lock "refresh_cache-01296dff-20d5-49d6-b582-f9ec1d6b0af8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:21:38 np0005465604 nova_compute[260603]: 2025-10-02 08:21:38.178 2 DEBUG nova.network.neutron [None req-fb3939c5-772b-4db5-988b-c3c0120b9053 1a47d72370d54e908da4738c6c49be70 784ead8e69974a7b919072bfe6728770 - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:21:38 np0005465604 nova_compute[260603]: 2025-10-02 08:21:38.199 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393283.1992745, d968b5b6-ba9b-488d-b0bf-4bc79ecfe839 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:38 np0005465604 nova_compute[260603]: 2025-10-02 08:21:38.199 2 INFO nova.compute.manager [-] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:21:38 np0005465604 nova_compute[260603]: 2025-10-02 08:21:38.215 2 DEBUG nova.compute.manager [None req-95ca25fc-f554-4899-b205-98e4584f78c1 - - - - - -] [instance: d968b5b6-ba9b-488d-b0bf-4bc79ecfe839] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002323095738099889 of space, bias 1.0, pg target 0.6969287214299668 quantized to 32 (current 32)
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.19991010817542487 quantized to 32 (current 32)
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:21:38 np0005465604 nova_compute[260603]: 2025-10-02 08:21:38.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.9 MiB/s wr, 338 op/s
Oct  2 04:21:39 np0005465604 nova_compute[260603]: 2025-10-02 08:21:39.241 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393284.2406702, e500afcd-1008-4a02-8b21-b564db9bd367 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:39 np0005465604 nova_compute[260603]: 2025-10-02 08:21:39.242 2 INFO nova.compute.manager [-] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:21:39 np0005465604 nova_compute[260603]: 2025-10-02 08:21:39.262 2 DEBUG nova.compute.manager [None req-31103cfa-944a-4498-a53d-5dbedd735a2e - - - - - -] [instance: e500afcd-1008-4a02-8b21-b564db9bd367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:39 np0005465604 nova_compute[260603]: 2025-10-02 08:21:39.420 2 DEBUG nova.network.neutron [None req-fb3939c5-772b-4db5-988b-c3c0120b9053 1a47d72370d54e908da4738c6c49be70 784ead8e69974a7b919072bfe6728770 - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Updating instance_info_cache with network_info: [{"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:21:39 np0005465604 nova_compute[260603]: 2025-10-02 08:21:39.437 2 DEBUG oslo_concurrency.lockutils [None req-fb3939c5-772b-4db5-988b-c3c0120b9053 1a47d72370d54e908da4738c6c49be70 784ead8e69974a7b919072bfe6728770 - - default default] Releasing lock "refresh_cache-01296dff-20d5-49d6-b582-f9ec1d6b0af8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:21:39 np0005465604 nova_compute[260603]: 2025-10-02 08:21:39.437 2 DEBUG nova.compute.manager [None req-fb3939c5-772b-4db5-988b-c3c0120b9053 1a47d72370d54e908da4738c6c49be70 784ead8e69974a7b919072bfe6728770 - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Oct  2 04:21:39 np0005465604 nova_compute[260603]: 2025-10-02 08:21:39.438 2 DEBUG nova.compute.manager [None req-fb3939c5-772b-4db5-988b-c3c0120b9053 1a47d72370d54e908da4738c6c49be70 784ead8e69974a7b919072bfe6728770 - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] network_info to inject: |[{"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Oct  2 04:21:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.5 MiB/s wr, 313 op/s
Oct  2 04:21:41 np0005465604 nova_compute[260603]: 2025-10-02 08:21:41.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct  2 04:21:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct  2 04:21:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct  2 04:21:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 215 op/s
Oct  2 04:21:43 np0005465604 nova_compute[260603]: 2025-10-02 08:21:43.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:43 np0005465604 nova_compute[260603]: 2025-10-02 08:21:43.845 2 INFO nova.compute.manager [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Rebuilding instance#033[00m
Oct  2 04:21:44 np0005465604 podman[286659]: 2025-10-02 08:21:44.018266342 +0000 UTC m=+0.082873873 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  2 04:21:44 np0005465604 nova_compute[260603]: 2025-10-02 08:21:44.253 2 DEBUG nova.objects.instance [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:44 np0005465604 nova_compute[260603]: 2025-10-02 08:21:44.269 2 DEBUG nova.compute.manager [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:44 np0005465604 nova_compute[260603]: 2025-10-02 08:21:44.311 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393289.3103373, 45e20d73-3f71-4e90-b1b5-f30fcb043922 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:44 np0005465604 nova_compute[260603]: 2025-10-02 08:21:44.311 2 INFO nova.compute.manager [-] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:21:44 np0005465604 nova_compute[260603]: 2025-10-02 08:21:44.317 2 DEBUG nova.objects.instance [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'pci_requests' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:44 np0005465604 nova_compute[260603]: 2025-10-02 08:21:44.346 2 DEBUG nova.compute.manager [None req-4988b9e3-63fb-4957-8629-24b6f6d6d8b3 - - - - - -] [instance: 45e20d73-3f71-4e90-b1b5-f30fcb043922] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:44 np0005465604 nova_compute[260603]: 2025-10-02 08:21:44.347 2 DEBUG nova.objects.instance [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'pci_devices' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:44 np0005465604 nova_compute[260603]: 2025-10-02 08:21:44.357 2 DEBUG nova.objects.instance [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'resources' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:44 np0005465604 nova_compute[260603]: 2025-10-02 08:21:44.368 2 DEBUG nova.objects.instance [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'migration_context' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:44 np0005465604 nova_compute[260603]: 2025-10-02 08:21:44.380 2 DEBUG nova.objects.instance [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:21:44 np0005465604 nova_compute[260603]: 2025-10-02 08:21:44.385 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:21:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 185 op/s
Oct  2 04:21:45 np0005465604 nova_compute[260603]: 2025-10-02 08:21:45.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:45.021 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:21:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:45.023 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:21:46 np0005465604 nova_compute[260603]: 2025-10-02 08:21:46.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:46 np0005465604 kernel: tap205e575c-4a (unregistering): left promiscuous mode
Oct  2 04:21:46 np0005465604 NetworkManager[45129]: <info>  [1759393306.6619] device (tap205e575c-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:21:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:46Z|00060|binding|INFO|Releasing lport 205e575c-4af3-4a6a-af77-fd96af608b0a from this chassis (sb_readonly=0)
Oct  2 04:21:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:46Z|00061|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a down in Southbound
Oct  2 04:21:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:46Z|00062|binding|INFO|Removing iface tap205e575c-4a ovn-installed in OVS
Oct  2 04:21:46 np0005465604 nova_compute[260603]: 2025-10-02 08:21:46.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.679 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:2c:3a 10.100.0.12'], port_security=['fa:16:3e:24:2c:3a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '09c17c12-7dac-4fc8-917e-cb2efa1d4607', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=205e575c-4af3-4a6a-af77-fd96af608b0a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.681 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 205e575c-4af3-4a6a-af77-fd96af608b0a in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e unbound from our chassis#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.684 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:21:46 np0005465604 nova_compute[260603]: 2025-10-02 08:21:46.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.708 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c167621b-7689-429b-95ce-3b74d65e9adf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:46 np0005465604 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct  2 04:21:46 np0005465604 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Consumed 13.604s CPU time.
Oct  2 04:21:46 np0005465604 systemd-machined[214636]: Machine qemu-14-instance-0000000c terminated.
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.741 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7aeff15d-c104-4b11-80c3-e64c546af582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.744 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0db8a0a9-2899-47e3-b2ad-c66d641acf44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.770 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe7c490-2a79-46cb-88c0-0abb2fe13ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 326 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 185 op/s
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.792 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[907a6728-16c5-4fa2-a4dc-a2baf1de8dde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286691, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.810 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1b869b7f-523c-4333-a10a-c93662e4ec98]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286692, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286692, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.812 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:46 np0005465604 nova_compute[260603]: 2025-10-02 08:21:46.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:46 np0005465604 nova_compute[260603]: 2025-10-02 08:21:46.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.817 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.817 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.818 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:46.818 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:46Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:e1:60 10.100.0.6
Oct  2 04:21:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:46Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:e1:60 10.100.0.6
Oct  2 04:21:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.404 2 INFO nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.410 2 INFO nova.virt.libvirt.driver [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance destroyed successfully.#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.416 2 INFO nova.virt.libvirt.driver [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance destroyed successfully.#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.417 2 DEBUG nova.virt.libvirt.vif [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:20:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1588661618',display_name='tempest-ServersAdminTestJSON-server-1588661618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1588661618',id=12,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:21:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-v30mm0vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-project-member'},tags=<?>,task_stat
e='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:43Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=09c17c12-7dac-4fc8-917e-cb2efa1d4607,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.418 2 DEBUG nova.network.os_vif_util [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.419 2 DEBUG nova.network.os_vif_util [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.420 2 DEBUG os_vif [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap205e575c-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.432 2 INFO os_vif [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a')#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.850 2 INFO nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Deleting instance files /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607_del#033[00m
Oct  2 04:21:47 np0005465604 nova_compute[260603]: 2025-10-02 08:21:47.852 2 INFO nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Deletion of /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607_del complete#033[00m
Oct  2 04:21:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:48.027 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.043 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.044 2 INFO nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Creating image(s)#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.069 2 DEBUG nova.storage.rbd_utils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.097 2 DEBUG nova.storage.rbd_utils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.124 2 DEBUG nova.storage.rbd_utils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.129 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.218 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.219 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.220 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.220 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.246 2 DEBUG nova.storage.rbd_utils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.250 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.523 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.608 2 DEBUG nova.storage.rbd_utils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] resizing rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.738 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.740 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Ensure instance console log exists: /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.740 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.741 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.741 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.746 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Start _get_guest_xml network_info=[{"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.752 2 WARNING nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.761 2 DEBUG nova.virt.libvirt.host [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.762 2 DEBUG nova.virt.libvirt.host [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.766 2 DEBUG nova.virt.libvirt.host [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.767 2 DEBUG nova.virt.libvirt.host [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.767 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.768 2 DEBUG nova.virt.hardware [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.769 2 DEBUG nova.virt.hardware [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.769 2 DEBUG nova.virt.hardware [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.770 2 DEBUG nova.virt.hardware [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.770 2 DEBUG nova.virt.hardware [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.770 2 DEBUG nova.virt.hardware [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.771 2 DEBUG nova.virt.hardware [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.771 2 DEBUG nova.virt.hardware [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.772 2 DEBUG nova.virt.hardware [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.772 2 DEBUG nova.virt.hardware [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:21:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 343 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.6 MiB/s wr, 86 op/s
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.773 2 DEBUG nova.virt.hardware [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.774 2 DEBUG nova.objects.instance [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:48 np0005465604 nova_compute[260603]: 2025-10-02 08:21:48.794 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433879750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.245 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.280 2 DEBUG nova.storage.rbd_utils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.287 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689416164' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.687 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.691 2 DEBUG nova.virt.libvirt.vif [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:20:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1588661618',display_name='tempest-ServersAdminTestJSON-server-1588661618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1588661618',id=12,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:21:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-v30mm0vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSO
N-83453142-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:47Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=09c17c12-7dac-4fc8-917e-cb2efa1d4607,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.693 2 DEBUG nova.network.os_vif_util [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.695 2 DEBUG nova.network.os_vif_util [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.701 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  <uuid>09c17c12-7dac-4fc8-917e-cb2efa1d4607</uuid>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  <name>instance-0000000c</name>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAdminTestJSON-server-1588661618</nova:name>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:21:48</nova:creationTime>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <nova:user uuid="2b82955fab174d8aac325e64068908f5">tempest-ServersAdminTestJSON-83453142-project-member</nova:user>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <nova:project uuid="13ecc6dea7a8465394379400d84a053e">tempest-ServersAdminTestJSON-83453142</nova:project>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <nova:port uuid="205e575c-4af3-4a6a-af77-fd96af608b0a">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <entry name="serial">09c17c12-7dac-4fc8-917e-cb2efa1d4607</entry>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <entry name="uuid">09c17c12-7dac-4fc8-917e-cb2efa1d4607</entry>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:24:2c:3a"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <target dev="tap205e575c-4a"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/console.log" append="off"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:21:49 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:21:49 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:21:49 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:21:49 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.703 2 DEBUG nova.compute.manager [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Preparing to wait for external event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.704 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.704 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.705 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.706 2 DEBUG nova.virt.libvirt.vif [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:20:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1588661618',display_name='tempest-ServersAdminTestJSON-server-1588661618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1588661618',id=12,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:21:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-v30mm0vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSO
N-83453142-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:47Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=09c17c12-7dac-4fc8-917e-cb2efa1d4607,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.707 2 DEBUG nova.network.os_vif_util [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.708 2 DEBUG nova.network.os_vif_util [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.709 2 DEBUG os_vif [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.716 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap205e575c-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap205e575c-4a, col_values=(('external_ids', {'iface-id': '205e575c-4af3-4a6a-af77-fd96af608b0a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:2c:3a', 'vm-uuid': '09c17c12-7dac-4fc8-917e-cb2efa1d4607'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:49 np0005465604 NetworkManager[45129]: <info>  [1759393309.7205] manager: (tap205e575c-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.725 2 INFO os_vif [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a')#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.813 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.814 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.815 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No VIF found with MAC fa:16:3e:24:2c:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.816 2 INFO nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Using config drive#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.856 2 DEBUG nova.storage.rbd_utils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.888 2 DEBUG nova.objects.instance [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:49 np0005465604 nova_compute[260603]: 2025-10-02 08:21:49.930 2 DEBUG nova.objects.instance [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'keypairs' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.433 2 INFO nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Creating config drive at /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.442 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpky_21mxq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.591 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpky_21mxq" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.634 2 DEBUG nova.storage.rbd_utils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.638 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 343 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.6 MiB/s wr, 86 op/s
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.779 2 DEBUG oslo_concurrency.processutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.781 2 INFO nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Deleting local config drive /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config because it was imported into RBD.#033[00m
Oct  2 04:21:50 np0005465604 kernel: tap205e575c-4a: entered promiscuous mode
Oct  2 04:21:50 np0005465604 NetworkManager[45129]: <info>  [1759393310.8481] manager: (tap205e575c-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Oct  2 04:21:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:50Z|00063|binding|INFO|Claiming lport 205e575c-4af3-4a6a-af77-fd96af608b0a for this chassis.
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:50Z|00064|binding|INFO|205e575c-4af3-4a6a-af77-fd96af608b0a: Claiming fa:16:3e:24:2c:3a 10.100.0.12
Oct  2 04:21:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:50.856 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:2c:3a 10.100.0.12'], port_security=['fa:16:3e:24:2c:3a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '09c17c12-7dac-4fc8-917e-cb2efa1d4607', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=205e575c-4af3-4a6a-af77-fd96af608b0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:21:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:50.857 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 205e575c-4af3-4a6a-af77-fd96af608b0a in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e bound to our chassis#033[00m
Oct  2 04:21:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:50.860 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:21:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:50.884 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f64686b5-8462-479a-abb0-222cb6eda5ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:50Z|00065|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a up in Southbound
Oct  2 04:21:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:50Z|00066|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a ovn-installed in OVS
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.891 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "ce67c5cc-b413-4871-8bcb-677c171ce721" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.892 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:50 np0005465604 systemd-udevd[287027]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:21:50 np0005465604 systemd-machined[214636]: New machine qemu-22-instance-0000000c.
Oct  2 04:21:50 np0005465604 NetworkManager[45129]: <info>  [1759393310.9137] device (tap205e575c-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:21:50 np0005465604 NetworkManager[45129]: <info>  [1759393310.9145] device (tap205e575c-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.915 2 DEBUG nova.compute.manager [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:21:50 np0005465604 systemd[1]: Started Virtual Machine qemu-22-instance-0000000c.
Oct  2 04:21:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:50.943 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[47a08f2b-851d-425e-883f-0ab3ea5321b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:50.947 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e015c8f0-e420-4dc3-9be4-9cc64c4d7d17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:50.982 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[96e074fe-4894-40fa-93c4-ad80b96bf00b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.986 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.987 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.995 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:21:50 np0005465604 nova_compute[260603]: 2025-10-02 08:21:50.995 2 INFO nova.compute.claims [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:21:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:51.010 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b31428-8ea0-41ab-a691-06c1527a3337]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287038, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:51.029 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb812b6-b35d-403b-a55b-9f06f2c2447c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287041, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287041, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:51.030 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:51.034 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:51.035 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:51.035 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:51.035 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.177 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:21:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1844038548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.640 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.649 2 DEBUG nova.compute.provider_tree [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.672 2 DEBUG nova.scheduler.client.report [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.717 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.718 2 DEBUG nova.compute.manager [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.734 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 09c17c12-7dac-4fc8-917e-cb2efa1d4607 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.735 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393311.7332752, 09c17c12-7dac-4fc8-917e-cb2efa1d4607 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.735 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] VM Started (Lifecycle Event)#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.765 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.769 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393311.733463, 09c17c12-7dac-4fc8-917e-cb2efa1d4607 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.770 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.794 2 DEBUG nova.compute.manager [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.794 2 DEBUG nova.network.neutron [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.798 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.807 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.812 2 INFO nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.836 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.837 2 DEBUG nova.compute.manager [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.928 2 DEBUG nova.compute.manager [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.930 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.931 2 INFO nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Creating image(s)#033[00m
Oct  2 04:21:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:51 np0005465604 nova_compute[260603]: 2025-10-02 08:21:51.962 2 DEBUG nova.storage.rbd_utils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image ce67c5cc-b413-4871-8bcb-677c171ce721_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.000 2 DEBUG nova.storage.rbd_utils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image ce67c5cc-b413-4871-8bcb-677c171ce721_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.035 2 DEBUG nova.storage.rbd_utils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image ce67c5cc-b413-4871-8bcb-677c171ce721_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.041 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.079 2 DEBUG nova.policy [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be83392e7e8847878b199e35d03663f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2a46df342cad4b62ae3f9af2fd10ed84', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.146 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.147 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.148 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.148 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.182 2 DEBUG nova.storage.rbd_utils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image ce67c5cc-b413-4871-8bcb-677c171ce721_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.187 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ce67c5cc-b413-4871-8bcb-677c171ce721_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.488 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ce67c5cc-b413-4871-8bcb-677c171ce721_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.557 2 DEBUG nova.storage.rbd_utils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] resizing rbd image ce67c5cc-b413-4871-8bcb-677c171ce721_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.668 2 DEBUG nova.objects.instance [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lazy-loading 'migration_context' on Instance uuid ce67c5cc-b413-4871-8bcb-677c171ce721 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 315 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 261 KiB/s rd, 4.2 MiB/s wr, 117 op/s
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.802 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.803 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Ensure instance console log exists: /var/lib/nova/instances/ce67c5cc-b413-4871-8bcb-677c171ce721/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.803 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.803 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.804 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:52 np0005465604 nova_compute[260603]: 2025-10-02 08:21:52.836 2 DEBUG nova.network.neutron [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Successfully created port: 3aa7d14a-ae6f-4363-b28d-0dadba762d1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:21:53 np0005465604 nova_compute[260603]: 2025-10-02 08:21:53.988 2 DEBUG nova.network.neutron [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Successfully updated port: 3aa7d14a-ae6f-4363-b28d-0dadba762d1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:21:54 np0005465604 nova_compute[260603]: 2025-10-02 08:21:54.007 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:21:54 np0005465604 nova_compute[260603]: 2025-10-02 08:21:54.008 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquired lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:21:54 np0005465604 nova_compute[260603]: 2025-10-02 08:21:54.008 2 DEBUG nova.network.neutron [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:21:54 np0005465604 nova_compute[260603]: 2025-10-02 08:21:54.155 2 DEBUG nova.compute.manager [req-c4dc9273-cfad-4862-b177-8f59ac97c801 req-f1db4b57-9380-4843-aec0-d867b661d3a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Received event network-changed-3aa7d14a-ae6f-4363-b28d-0dadba762d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:54 np0005465604 nova_compute[260603]: 2025-10-02 08:21:54.156 2 DEBUG nova.compute.manager [req-c4dc9273-cfad-4862-b177-8f59ac97c801 req-f1db4b57-9380-4843-aec0-d867b661d3a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Refreshing instance network info cache due to event network-changed-3aa7d14a-ae6f-4363-b28d-0dadba762d1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:21:54 np0005465604 nova_compute[260603]: 2025-10-02 08:21:54.156 2 DEBUG oslo_concurrency.lockutils [req-c4dc9273-cfad-4862-b177-8f59ac97c801 req-f1db4b57-9380-4843-aec0-d867b661d3a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:21:54 np0005465604 nova_compute[260603]: 2025-10-02 08:21:54.256 2 DEBUG nova.network.neutron [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:21:54 np0005465604 nova_compute[260603]: 2025-10-02 08:21:54.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 345 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 261 KiB/s rd, 4.9 MiB/s wr, 139 op/s
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.627 2 DEBUG nova.network.neutron [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updating instance_info_cache with network_info: [{"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.646 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Releasing lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.647 2 DEBUG nova.compute.manager [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Instance network_info: |[{"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.647 2 DEBUG oslo_concurrency.lockutils [req-c4dc9273-cfad-4862-b177-8f59ac97c801 req-f1db4b57-9380-4843-aec0-d867b661d3a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.648 2 DEBUG nova.network.neutron [req-c4dc9273-cfad-4862-b177-8f59ac97c801 req-f1db4b57-9380-4843-aec0-d867b661d3a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Refreshing network info cache for port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.654 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Start _get_guest_xml network_info=[{"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.660 2 WARNING nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.674 2 DEBUG nova.virt.libvirt.host [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.675 2 DEBUG nova.virt.libvirt.host [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.682 2 DEBUG nova.virt.libvirt.host [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.682 2 DEBUG nova.virt.libvirt.host [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.683 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.683 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.684 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.684 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.684 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.685 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.685 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.685 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.686 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.686 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.686 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.686 2 DEBUG nova.virt.hardware [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.689 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.835 2 DEBUG nova.compute.manager [req-40fbda36-8b28-44bd-91d0-3fbe2f9559a9 req-c2cbf464-ffd8-4eec-abe4-dbb5a0d897d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-unplugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.838 2 DEBUG oslo_concurrency.lockutils [req-40fbda36-8b28-44bd-91d0-3fbe2f9559a9 req-c2cbf464-ffd8-4eec-abe4-dbb5a0d897d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.838 2 DEBUG oslo_concurrency.lockutils [req-40fbda36-8b28-44bd-91d0-3fbe2f9559a9 req-c2cbf464-ffd8-4eec-abe4-dbb5a0d897d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.839 2 DEBUG oslo_concurrency.lockutils [req-40fbda36-8b28-44bd-91d0-3fbe2f9559a9 req-c2cbf464-ffd8-4eec-abe4-dbb5a0d897d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.839 2 DEBUG nova.compute.manager [req-40fbda36-8b28-44bd-91d0-3fbe2f9559a9 req-c2cbf464-ffd8-4eec-abe4-dbb5a0d897d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] No event matching network-vif-unplugged-205e575c-4af3-4a6a-af77-fd96af608b0a in dict_keys([('network-vif-plugged', '205e575c-4af3-4a6a-af77-fd96af608b0a')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 04:21:55 np0005465604 nova_compute[260603]: 2025-10-02 08:21:55.840 2 WARNING nova.compute.manager [req-40fbda36-8b28-44bd-91d0-3fbe2f9559a9 req-c2cbf464-ffd8-4eec-abe4-dbb5a0d897d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received unexpected event network-vif-unplugged-205e575c-4af3-4a6a-af77-fd96af608b0a for instance with vm_state error and task_state rebuild_spawning.#033[00m
Oct  2 04:21:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/224717800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.137 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.170 2 DEBUG nova.storage.rbd_utils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image ce67c5cc-b413-4871-8bcb-677c171ce721_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.177 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:21:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2566040293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.642 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.644 2 DEBUG nova.virt.libvirt.vif [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:21:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-553934205',display_name='tempest-FloatingIPsAssociationTestJSON-server-553934205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-553934205',id=20,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a46df342cad4b62ae3f9af2fd10ed84',ramdisk_id='',reservation_id='r-cho0oknm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1024699440',owner_user_name=
'tempest-FloatingIPsAssociationTestJSON-1024699440-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:51Z,user_data=None,user_id='be83392e7e8847878b199e35d03663f9',uuid=ce67c5cc-b413-4871-8bcb-677c171ce721,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.645 2 DEBUG nova.network.os_vif_util [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converting VIF {"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.646 2 DEBUG nova.network.os_vif_util [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:75:4f,bridge_name='br-int',has_traffic_filtering=True,id=3aa7d14a-ae6f-4363-b28d-0dadba762d1f,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa7d14a-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.648 2 DEBUG nova.objects.instance [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce67c5cc-b413-4871-8bcb-677c171ce721 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.674 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  <uuid>ce67c5cc-b413-4871-8bcb-677c171ce721</uuid>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  <name>instance-00000014</name>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-553934205</nova:name>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:21:55</nova:creationTime>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <nova:user uuid="be83392e7e8847878b199e35d03663f9">tempest-FloatingIPsAssociationTestJSON-1024699440-project-member</nova:user>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <nova:project uuid="2a46df342cad4b62ae3f9af2fd10ed84">tempest-FloatingIPsAssociationTestJSON-1024699440</nova:project>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <nova:port uuid="3aa7d14a-ae6f-4363-b28d-0dadba762d1f">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <entry name="serial">ce67c5cc-b413-4871-8bcb-677c171ce721</entry>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <entry name="uuid">ce67c5cc-b413-4871-8bcb-677c171ce721</entry>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ce67c5cc-b413-4871-8bcb-677c171ce721_disk">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ce67c5cc-b413-4871-8bcb-677c171ce721_disk.config">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:99:75:4f"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <target dev="tap3aa7d14a-ae"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/ce67c5cc-b413-4871-8bcb-677c171ce721/console.log" append="off"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:21:56 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:21:56 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:21:56 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:21:56 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.676 2 DEBUG nova.compute.manager [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Preparing to wait for external event network-vif-plugged-3aa7d14a-ae6f-4363-b28d-0dadba762d1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.677 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.677 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.678 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.679 2 DEBUG nova.virt.libvirt.vif [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:21:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-553934205',display_name='tempest-FloatingIPsAssociationTestJSON-server-553934205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-553934205',id=20,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a46df342cad4b62ae3f9af2fd10ed84',ramdisk_id='',reservation_id='r-cho0oknm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1024699440',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1024699440-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:21:51Z,user_data=None,user_id='be83392e7e8847878b199e35d03663f9',uuid=ce67c5cc-b413-4871-8bcb-677c171ce721,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.680 2 DEBUG nova.network.os_vif_util [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converting VIF {"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.681 2 DEBUG nova.network.os_vif_util [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:75:4f,bridge_name='br-int',has_traffic_filtering=True,id=3aa7d14a-ae6f-4363-b28d-0dadba762d1f,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa7d14a-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.682 2 DEBUG os_vif [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:75:4f,bridge_name='br-int',has_traffic_filtering=True,id=3aa7d14a-ae6f-4363-b28d-0dadba762d1f,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa7d14a-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.685 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3aa7d14a-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3aa7d14a-ae, col_values=(('external_ids', {'iface-id': '3aa7d14a-ae6f-4363-b28d-0dadba762d1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:75:4f', 'vm-uuid': 'ce67c5cc-b413-4871-8bcb-677c171ce721'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:56 np0005465604 NetworkManager[45129]: <info>  [1759393316.6949] manager: (tap3aa7d14a-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.703 2 INFO os_vif [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:75:4f,bridge_name='br-int',has_traffic_filtering=True,id=3aa7d14a-ae6f-4363-b28d-0dadba762d1f,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa7d14a-ae')#033[00m
Oct  2 04:21:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 345 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 261 KiB/s rd, 4.9 MiB/s wr, 139 op/s
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.776 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.777 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.778 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] No VIF found with MAC fa:16:3e:99:75:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.778 2 INFO nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Using config drive#033[00m
Oct  2 04:21:56 np0005465604 nova_compute[260603]: 2025-10-02 08:21:56.811 2 DEBUG nova.storage.rbd_utils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image ce67c5cc-b413-4871-8bcb-677c171ce721_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.771 2 INFO nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Creating config drive at /var/lib/nova/instances/ce67c5cc-b413-4871-8bcb-677c171ce721/disk.config#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.781 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce67c5cc-b413-4871-8bcb-677c171ce721/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4uqxpz4n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.815 2 DEBUG nova.network.neutron [req-c4dc9273-cfad-4862-b177-8f59ac97c801 req-f1db4b57-9380-4843-aec0-d867b661d3a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updated VIF entry in instance network info cache for port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.816 2 DEBUG nova.network.neutron [req-c4dc9273-cfad-4862-b177-8f59ac97c801 req-f1db4b57-9380-4843-aec0-d867b661d3a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updating instance_info_cache with network_info: [{"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.842 2 DEBUG oslo_concurrency.lockutils [req-c4dc9273-cfad-4862-b177-8f59ac97c801 req-f1db4b57-9380-4843-aec0-d867b661d3a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.926 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce67c5cc-b413-4871-8bcb-677c171ce721/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4uqxpz4n" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.953 2 DEBUG nova.storage.rbd_utils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image ce67c5cc-b413-4871-8bcb-677c171ce721_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.957 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce67c5cc-b413-4871-8bcb-677c171ce721/disk.config ce67c5cc-b413-4871-8bcb-677c171ce721_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.991 2 DEBUG nova.compute.manager [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.992 2 DEBUG oslo_concurrency.lockutils [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.993 2 DEBUG oslo_concurrency.lockutils [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.993 2 DEBUG oslo_concurrency.lockutils [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.993 2 DEBUG nova.compute.manager [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Processing event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.994 2 DEBUG nova.compute.manager [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.994 2 DEBUG oslo_concurrency.lockutils [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.994 2 DEBUG oslo_concurrency.lockutils [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.995 2 DEBUG oslo_concurrency.lockutils [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.995 2 DEBUG nova.compute.manager [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] No waiting events found dispatching network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.995 2 WARNING nova.compute.manager [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received unexpected event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a for instance with vm_state error and task_state rebuild_spawning.#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.996 2 DEBUG nova.compute.manager [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.996 2 DEBUG oslo_concurrency.lockutils [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.997 2 DEBUG oslo_concurrency.lockutils [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.997 2 DEBUG oslo_concurrency.lockutils [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.997 2 DEBUG nova.compute.manager [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] No waiting events found dispatching network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.998 2 WARNING nova.compute.manager [req-a0230b23-2236-4418-80d8-d0a835bbf37d req-d05e8814-ca62-421f-9c0c-0c3b3285daf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received unexpected event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a for instance with vm_state error and task_state rebuild_spawning.#033[00m
Oct  2 04:21:57 np0005465604 nova_compute[260603]: 2025-10-02 08:21:57.999 2 DEBUG nova.compute.manager [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.002 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393318.0024865, 09c17c12-7dac-4fc8-917e-cb2efa1d4607 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.003 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.006 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.008 2 INFO nova.virt.libvirt.driver [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance spawned successfully.#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.009 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.029 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.035 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.038 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.038 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.038 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.039 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.039 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.039 2 DEBUG nova.virt.libvirt.driver [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.075 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.085 2 DEBUG oslo_concurrency.processutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce67c5cc-b413-4871-8bcb-677c171ce721/disk.config ce67c5cc-b413-4871-8bcb-677c171ce721_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.085 2 INFO nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Deleting local config drive /var/lib/nova/instances/ce67c5cc-b413-4871-8bcb-677c171ce721/disk.config because it was imported into RBD.#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.113 2 DEBUG nova.compute.manager [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:58 np0005465604 NetworkManager[45129]: <info>  [1759393318.1410] manager: (tap3aa7d14a-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Oct  2 04:21:58 np0005465604 kernel: tap3aa7d14a-ae: entered promiscuous mode
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:58Z|00067|binding|INFO|Claiming lport 3aa7d14a-ae6f-4363-b28d-0dadba762d1f for this chassis.
Oct  2 04:21:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:58Z|00068|binding|INFO|3aa7d14a-ae6f-4363-b28d-0dadba762d1f: Claiming fa:16:3e:99:75:4f 10.100.0.14
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.168 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:75:4f 10.100.0.14'], port_security=['fa:16:3e:99:75:4f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ce67c5cc-b413-4871-8bcb-677c171ce721', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a46df342cad4b62ae3f9af2fd10ed84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572f4f54-3bbb-4e63-86a3-6fb75389fa5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b31b756-7999-4de8-b356-acd4a5e98929, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3aa7d14a-ae6f-4363-b28d-0dadba762d1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.169 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f in datapath 239ca39f-5677-4c2d-87f6-45d4404e4ead bound to our chassis#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.171 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 239ca39f-5677-4c2d-87f6-45d4404e4ead#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.182 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b9fd36-aaba-4e04-a345-be656bd7b32c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.184 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap239ca39f-51 in ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.188 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap239ca39f-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.188 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb9d01f-024f-429a-8398-0e5c1e373681]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.189 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[59871b64-d2d8-4f65-88e2-6363a111a7ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 systemd-machined[214636]: New machine qemu-23-instance-00000014.
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.197 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.197 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.198 2 DEBUG nova.objects.instance [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.212 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1c700caf-af21-43ca-b9ef-43d4c50ce3cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 systemd[1]: Started Virtual Machine qemu-23-instance-00000014.
Oct  2 04:21:58 np0005465604 systemd-udevd[287411]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.238 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cddc28e4-15b3-4315-bee4-c88c3c862e07]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 NetworkManager[45129]: <info>  [1759393318.2516] device (tap3aa7d14a-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:21:58 np0005465604 NetworkManager[45129]: <info>  [1759393318.2525] device (tap3aa7d14a-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:21:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:58Z|00069|binding|INFO|Setting lport 3aa7d14a-ae6f-4363-b28d-0dadba762d1f ovn-installed in OVS
Oct  2 04:21:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:58Z|00070|binding|INFO|Setting lport 3aa7d14a-ae6f-4363-b28d-0dadba762d1f up in Southbound
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.261 2 DEBUG oslo_concurrency.lockutils [None req-fcee12d7-3d21-43a0-b029-79ae817be08a 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.283 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[840c3ba3-90de-46c2-b649-973e5fd7616b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 NetworkManager[45129]: <info>  [1759393318.2942] manager: (tap239ca39f-50): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.293 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[47aa93db-2ead-4fe2-9474-d8bbae5d8863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.337 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce3b664-768f-4a9f-b423-f5d34ac95926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.340 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e56051-2641-4f9e-a219-7e41deb9ebea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 NetworkManager[45129]: <info>  [1759393318.3716] device (tap239ca39f-50): carrier: link connected
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.378 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5550e89d-0808-4e27-9ced-32dc2eceb526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a39db91-f3b1-4af3-bdef-a0d8b5c15f39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap239ca39f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:74:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419697, 'reachable_time': 34603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287441, 'error': None, 'target': 'ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.429 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c0ee8d-6cd1-4c0f-a1f4-b54193fa91e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:74d7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419697, 'tstamp': 419697}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287442, 'error': None, 'target': 'ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.454 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c056737a-d26d-4bf3-91a3-29c801977fb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap239ca39f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:74:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419697, 'reachable_time': 34603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287443, 'error': None, 'target': 'ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.494 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f7dd98b3-5c2f-43ee-a3f1-b6b6ba6077da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.569 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b3aa02-912e-447c-b740-0e6293e11264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.571 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap239ca39f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.571 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.572 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap239ca39f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:58 np0005465604 kernel: tap239ca39f-50: entered promiscuous mode
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.576 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap239ca39f-50, col_values=(('external_ids', {'iface-id': '029a4135-d57a-4306-8fd9-eb98a54d6046'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:21:58 np0005465604 NetworkManager[45129]: <info>  [1759393318.5775] manager: (tap239ca39f-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:21:58Z|00071|binding|INFO|Releasing lport 029a4135-d57a-4306-8fd9-eb98a54d6046 from this chassis (sb_readonly=0)
Oct  2 04:21:58 np0005465604 nova_compute[260603]: 2025-10-02 08:21:58.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.602 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/239ca39f-5677-4c2d-87f6-45d4404e4ead.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/239ca39f-5677-4c2d-87f6-45d4404e4ead.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.603 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7997123a-ce2e-43b6-b82f-2dec14f51acd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.604 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-239ca39f-5677-4c2d-87f6-45d4404e4ead
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/239ca39f-5677-4c2d-87f6-45d4404e4ead.pid.haproxy
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 239ca39f-5677-4c2d-87f6-45d4404e4ead
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:21:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:21:58.606 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'env', 'PROCESS_TAG=haproxy-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/239ca39f-5677-4c2d-87f6-45d4404e4ead.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:21:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 372 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 5.7 MiB/s wr, 154 op/s
Oct  2 04:21:59 np0005465604 podman[287475]: 2025-10-02 08:21:59.077433199 +0000 UTC m=+0.066186245 container create 6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 04:21:59 np0005465604 systemd[1]: Started libpod-conmon-6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a.scope.
Oct  2 04:21:59 np0005465604 podman[287475]: 2025-10-02 08:21:59.044148646 +0000 UTC m=+0.032901712 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:21:59 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:21:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dfb4b15594cd11b271094fbb34417efbcab33f08f2808988096e78fd11f557/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:21:59 np0005465604 podman[287475]: 2025-10-02 08:21:59.172309358 +0000 UTC m=+0.161062484 container init 6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:21:59 np0005465604 podman[287475]: 2025-10-02 08:21:59.177602769 +0000 UTC m=+0.166355845 container start 6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 04:21:59 np0005465604 neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead[287523]: [NOTICE]   (287535) : New worker (287537) forked
Oct  2 04:21:59 np0005465604 neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead[287523]: [NOTICE]   (287535) : Loading success.
Oct  2 04:21:59 np0005465604 nova_compute[260603]: 2025-10-02 08:21:59.652 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393319.6519504, ce67c5cc-b413-4871-8bcb-677c171ce721 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:59 np0005465604 nova_compute[260603]: 2025-10-02 08:21:59.653 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] VM Started (Lifecycle Event)#033[00m
Oct  2 04:21:59 np0005465604 nova_compute[260603]: 2025-10-02 08:21:59.673 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:59 np0005465604 nova_compute[260603]: 2025-10-02 08:21:59.676 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393319.6529648, ce67c5cc-b413-4871-8bcb-677c171ce721 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:21:59 np0005465604 nova_compute[260603]: 2025-10-02 08:21:59.676 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:21:59 np0005465604 nova_compute[260603]: 2025-10-02 08:21:59.693 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:21:59 np0005465604 nova_compute[260603]: 2025-10-02 08:21:59.696 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:21:59 np0005465604 nova_compute[260603]: 2025-10-02 08:21:59.717 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.059 2 DEBUG nova.compute.manager [req-80681676-e773-404c-8d98-2bfb99a5fe44 req-6dc4e6b5-813f-4241-a728-bb19ccf5d1e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Received event network-vif-plugged-3aa7d14a-ae6f-4363-b28d-0dadba762d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.060 2 DEBUG oslo_concurrency.lockutils [req-80681676-e773-404c-8d98-2bfb99a5fe44 req-6dc4e6b5-813f-4241-a728-bb19ccf5d1e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.061 2 DEBUG oslo_concurrency.lockutils [req-80681676-e773-404c-8d98-2bfb99a5fe44 req-6dc4e6b5-813f-4241-a728-bb19ccf5d1e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.061 2 DEBUG oslo_concurrency.lockutils [req-80681676-e773-404c-8d98-2bfb99a5fe44 req-6dc4e6b5-813f-4241-a728-bb19ccf5d1e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.062 2 DEBUG nova.compute.manager [req-80681676-e773-404c-8d98-2bfb99a5fe44 req-6dc4e6b5-813f-4241-a728-bb19ccf5d1e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Processing event network-vif-plugged-3aa7d14a-ae6f-4363-b28d-0dadba762d1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.063 2 DEBUG nova.compute.manager [req-80681676-e773-404c-8d98-2bfb99a5fe44 req-6dc4e6b5-813f-4241-a728-bb19ccf5d1e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Received event network-vif-plugged-3aa7d14a-ae6f-4363-b28d-0dadba762d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.064 2 DEBUG oslo_concurrency.lockutils [req-80681676-e773-404c-8d98-2bfb99a5fe44 req-6dc4e6b5-813f-4241-a728-bb19ccf5d1e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.065 2 DEBUG oslo_concurrency.lockutils [req-80681676-e773-404c-8d98-2bfb99a5fe44 req-6dc4e6b5-813f-4241-a728-bb19ccf5d1e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.066 2 DEBUG oslo_concurrency.lockutils [req-80681676-e773-404c-8d98-2bfb99a5fe44 req-6dc4e6b5-813f-4241-a728-bb19ccf5d1e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.066 2 DEBUG nova.compute.manager [req-80681676-e773-404c-8d98-2bfb99a5fe44 req-6dc4e6b5-813f-4241-a728-bb19ccf5d1e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] No waiting events found dispatching network-vif-plugged-3aa7d14a-ae6f-4363-b28d-0dadba762d1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.067 2 WARNING nova.compute.manager [req-80681676-e773-404c-8d98-2bfb99a5fe44 req-6dc4e6b5-813f-4241-a728-bb19ccf5d1e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Received unexpected event network-vif-plugged-3aa7d14a-ae6f-4363-b28d-0dadba762d1f for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.069 2 DEBUG nova.compute.manager [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.073 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393320.0727162, ce67c5cc-b413-4871-8bcb-677c171ce721 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.073 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.077 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.080 2 INFO nova.virt.libvirt.driver [-] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Instance spawned successfully.#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.081 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.097 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.108 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.112 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.112 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.113 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.113 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.114 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.115 2 DEBUG nova.virt.libvirt.driver [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.138 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.175 2 INFO nova.compute.manager [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Took 8.25 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.176 2 DEBUG nova.compute.manager [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.236 2 INFO nova.compute.manager [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Took 9.27 seconds to build instance.#033[00m
Oct  2 04:22:00 np0005465604 nova_compute[260603]: 2025-10-02 08:22:00.252 2 DEBUG oslo_concurrency.lockutils [None req-ab529f10-e878-4062-98e7-6ce18430b180 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 372 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Oct  2 04:22:01 np0005465604 nova_compute[260603]: 2025-10-02 08:22:01.501 2 INFO nova.compute.manager [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Rebuilding instance#033[00m
Oct  2 04:22:01 np0005465604 nova_compute[260603]: 2025-10-02 08:22:01.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:01 np0005465604 nova_compute[260603]: 2025-10-02 08:22:01.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:01 np0005465604 nova_compute[260603]: 2025-10-02 08:22:01.737 2 DEBUG nova.objects.instance [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:01 np0005465604 nova_compute[260603]: 2025-10-02 08:22:01.755 2 DEBUG nova.compute.manager [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:01 np0005465604 nova_compute[260603]: 2025-10-02 08:22:01.809 2 DEBUG nova.objects.instance [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'pci_requests' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:01 np0005465604 nova_compute[260603]: 2025-10-02 08:22:01.821 2 DEBUG nova.objects.instance [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'pci_devices' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:01 np0005465604 nova_compute[260603]: 2025-10-02 08:22:01.831 2 DEBUG nova.objects.instance [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'resources' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:01 np0005465604 nova_compute[260603]: 2025-10-02 08:22:01.845 2 DEBUG nova.objects.instance [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'migration_context' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:01 np0005465604 nova_compute[260603]: 2025-10-02 08:22:01.854 2 DEBUG nova.objects.instance [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:22:01 np0005465604 nova_compute[260603]: 2025-10-02 08:22:01.856 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:22:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 372 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 176 op/s
Oct  2 04:22:03 np0005465604 podman[287548]: 2025-10-02 08:22:03.014571532 +0000 UTC m=+0.072646793 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 04:22:03 np0005465604 podman[287547]: 2025-10-02 08:22:03.091883775 +0000 UTC m=+0.150241575 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 04:22:03 np0005465604 nova_compute[260603]: 2025-10-02 08:22:03.601 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Acquiring lock "8de608c8-0b81-4798-adee-a9364d230016" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:03 np0005465604 nova_compute[260603]: 2025-10-02 08:22:03.601 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:03 np0005465604 nova_compute[260603]: 2025-10-02 08:22:03.618 2 DEBUG nova.compute.manager [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:22:03 np0005465604 nova_compute[260603]: 2025-10-02 08:22:03.669 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:03 np0005465604 nova_compute[260603]: 2025-10-02 08:22:03.670 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:03 np0005465604 nova_compute[260603]: 2025-10-02 08:22:03.676 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:22:03 np0005465604 nova_compute[260603]: 2025-10-02 08:22:03.676 2 INFO nova.compute.claims [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:22:03 np0005465604 nova_compute[260603]: 2025-10-02 08:22:03.864 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1071541503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.366 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.374 2 DEBUG nova.compute.provider_tree [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.404 2 DEBUG nova.scheduler.client.report [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.439 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.440 2 DEBUG nova.compute.manager [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.492 2 DEBUG nova.compute.manager [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.492 2 DEBUG nova.network.neutron [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.515 2 INFO nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.540 2 DEBUG nova.compute.manager [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.662 2 DEBUG nova.compute.manager [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.664 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.664 2 INFO nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Creating image(s)#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.685 2 DEBUG nova.storage.rbd_utils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] rbd image 8de608c8-0b81-4798-adee-a9364d230016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.708 2 DEBUG nova.storage.rbd_utils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] rbd image 8de608c8-0b81-4798-adee-a9364d230016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.735 2 DEBUG nova.storage.rbd_utils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] rbd image 8de608c8-0b81-4798-adee-a9364d230016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.738 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 372 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 186 op/s
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.823 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.824 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.824 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.825 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.846 2 DEBUG nova.storage.rbd_utils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] rbd image 8de608c8-0b81-4798-adee-a9364d230016_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:04 np0005465604 nova_compute[260603]: 2025-10-02 08:22:04.850 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 8de608c8-0b81-4798-adee-a9364d230016_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:05 np0005465604 nova_compute[260603]: 2025-10-02 08:22:05.103 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 8de608c8-0b81-4798-adee-a9364d230016_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:05 np0005465604 nova_compute[260603]: 2025-10-02 08:22:05.128 2 DEBUG nova.policy [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1752f22132cf468e92bd8857bcc0ed93', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6a5588e45d1439d8a754fdd24946192', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:22:05 np0005465604 nova_compute[260603]: 2025-10-02 08:22:05.161 2 DEBUG nova.storage.rbd_utils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] resizing rbd image 8de608c8-0b81-4798-adee-a9364d230016_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:22:05 np0005465604 nova_compute[260603]: 2025-10-02 08:22:05.234 2 DEBUG nova.objects.instance [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lazy-loading 'migration_context' on Instance uuid 8de608c8-0b81-4798-adee-a9364d230016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:05 np0005465604 nova_compute[260603]: 2025-10-02 08:22:05.249 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:22:05 np0005465604 nova_compute[260603]: 2025-10-02 08:22:05.249 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Ensure instance console log exists: /var/lib/nova/instances/8de608c8-0b81-4798-adee-a9364d230016/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:22:05 np0005465604 nova_compute[260603]: 2025-10-02 08:22:05.249 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:05 np0005465604 nova_compute[260603]: 2025-10-02 08:22:05.250 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:05 np0005465604 nova_compute[260603]: 2025-10-02 08:22:05.250 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:05 np0005465604 nova_compute[260603]: 2025-10-02 08:22:05.829 2 DEBUG nova.network.neutron [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Successfully created port: 3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.232 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.232 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.253 2 DEBUG nova.compute.manager [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.334 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.334 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.344 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.345 2 INFO nova.compute.claims [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.521 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 372 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 847 KiB/s wr, 152 op/s
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.786 2 DEBUG nova.network.neutron [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Successfully updated port: 3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.804 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Acquiring lock "refresh_cache-8de608c8-0b81-4798-adee-a9364d230016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.805 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Acquired lock "refresh_cache-8de608c8-0b81-4798-adee-a9364d230016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.805 2 DEBUG nova.network.neutron [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.893 2 DEBUG nova.compute.manager [req-93522cda-c6e0-48ec-95c4-5659e33cf1e4 req-749b2403-f5f2-4df6-b035-9a928681e6dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Received event network-changed-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.894 2 DEBUG nova.compute.manager [req-93522cda-c6e0-48ec-95c4-5659e33cf1e4 req-749b2403-f5f2-4df6-b035-9a928681e6dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Refreshing instance network info cache due to event network-changed-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.895 2 DEBUG oslo_concurrency.lockutils [req-93522cda-c6e0-48ec-95c4-5659e33cf1e4 req-749b2403-f5f2-4df6-b035-9a928681e6dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-8de608c8-0b81-4798-adee-a9364d230016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.951 2 DEBUG nova.network.neutron [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:22:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/27552224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.978 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:06 np0005465604 nova_compute[260603]: 2025-10-02 08:22:06.983 2 DEBUG nova.compute.provider_tree [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.008 2 DEBUG nova.scheduler.client.report [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.035 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.036 2 DEBUG nova.compute.manager [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.103 2 DEBUG nova.compute.manager [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.104 2 DEBUG nova.network.neutron [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.124 2 INFO nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.143 2 DEBUG nova.compute.manager [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.222 2 DEBUG nova.compute.manager [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.224 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.224 2 INFO nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Creating image(s)#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.247 2 DEBUG nova.storage.rbd_utils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.273 2 DEBUG nova.storage.rbd_utils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.297 2 DEBUG nova.storage.rbd_utils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.301 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.330 2 DEBUG nova.policy [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be83392e7e8847878b199e35d03663f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2a46df342cad4b62ae3f9af2fd10ed84', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.389 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.390 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.391 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.391 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.412 2 DEBUG nova.storage.rbd_utils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.416 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.643 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.690 2 DEBUG nova.storage.rbd_utils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] resizing rbd image 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.769 2 DEBUG nova.objects.instance [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lazy-loading 'migration_context' on Instance uuid 4f7a36cf-fd1b-42f2-94be-5e9483b5f941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.784 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.784 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Ensure instance console log exists: /var/lib/nova/instances/4f7a36cf-fd1b-42f2-94be-5e9483b5f941/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.785 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.785 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:07 np0005465604 nova_compute[260603]: 2025-10-02 08:22:07.786 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:08 np0005465604 nova_compute[260603]: 2025-10-02 08:22:08.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:22:08 np0005465604 nova_compute[260603]: 2025-10-02 08:22:08.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:22:08 np0005465604 nova_compute[260603]: 2025-10-02 08:22:08.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:22:08 np0005465604 nova_compute[260603]: 2025-10-02 08:22:08.549 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 04:22:08 np0005465604 nova_compute[260603]: 2025-10-02 08:22:08.549 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 04:22:08 np0005465604 nova_compute[260603]: 2025-10-02 08:22:08.550 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-09c17c12-7dac-4fc8-917e-cb2efa1d4607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:08 np0005465604 nova_compute[260603]: 2025-10-02 08:22:08.550 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-09c17c12-7dac-4fc8-917e-cb2efa1d4607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:08 np0005465604 nova_compute[260603]: 2025-10-02 08:22:08.550 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:22:08 np0005465604 nova_compute[260603]: 2025-10-02 08:22:08.550 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 447 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 183 op/s
Oct  2 04:22:09 np0005465604 podman[287964]: 2025-10-02 08:22:09.033009532 +0000 UTC m=+0.089384053 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.189 2 DEBUG nova.network.neutron [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Successfully created port: 5385a348-2508-4677-aebb-d79c82f4fc36 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.202 2 DEBUG nova.network.neutron [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Updating instance_info_cache with network_info: [{"id": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "address": "fa:16:3e:65:4b:3b", "network": {"id": "f846e0f0-6ed8-446e-b721-3fec877f7fe7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-2045972521-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6a5588e45d1439d8a754fdd24946192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b55cad4-91", "ovs_interfaceid": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.227 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Releasing lock "refresh_cache-8de608c8-0b81-4798-adee-a9364d230016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.227 2 DEBUG nova.compute.manager [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Instance network_info: |[{"id": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "address": "fa:16:3e:65:4b:3b", "network": {"id": "f846e0f0-6ed8-446e-b721-3fec877f7fe7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-2045972521-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6a5588e45d1439d8a754fdd24946192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b55cad4-91", "ovs_interfaceid": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.228 2 DEBUG oslo_concurrency.lockutils [req-93522cda-c6e0-48ec-95c4-5659e33cf1e4 req-749b2403-f5f2-4df6-b035-9a928681e6dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-8de608c8-0b81-4798-adee-a9364d230016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.228 2 DEBUG nova.network.neutron [req-93522cda-c6e0-48ec-95c4-5659e33cf1e4 req-749b2403-f5f2-4df6-b035-9a928681e6dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Refreshing network info cache for port 3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.235 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Start _get_guest_xml network_info=[{"id": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "address": "fa:16:3e:65:4b:3b", "network": {"id": "f846e0f0-6ed8-446e-b721-3fec877f7fe7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-2045972521-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6a5588e45d1439d8a754fdd24946192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b55cad4-91", "ovs_interfaceid": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.245 2 WARNING nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.252 2 DEBUG nova.virt.libvirt.host [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.253 2 DEBUG nova.virt.libvirt.host [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.273 2 DEBUG nova.virt.libvirt.host [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.274 2 DEBUG nova.virt.libvirt.host [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.275 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.275 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.276 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.277 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.277 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.278 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.278 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.279 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.279 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.280 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.280 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.281 2 DEBUG nova.virt.hardware [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.288 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.673 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Updating instance_info_cache with network_info: [{"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.690 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-09c17c12-7dac-4fc8-917e-cb2efa1d4607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.690 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.691 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.691 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.691 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:22:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:22:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/881336081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.735 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.755 2 DEBUG nova.storage.rbd_utils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] rbd image 8de608c8-0b81-4798-adee-a9364d230016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:09 np0005465604 nova_compute[260603]: 2025-10-02 08:22:09.759 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:22:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557397075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.191 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.192 2 DEBUG nova.virt.libvirt.vif [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-150678686',display_name='tempest-ImagesNegativeTestJSON-server-150678686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-150678686',id=21,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6a5588e45d1439d8a754fdd24946192',ramdisk_id='',reservation_id='r-a60c0pql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-100096828',owner_user_name='tempest-ImagesNegativeTestJSON-1
00096828-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:22:04Z,user_data=None,user_id='1752f22132cf468e92bd8857bcc0ed93',uuid=8de608c8-0b81-4798-adee-a9364d230016,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "address": "fa:16:3e:65:4b:3b", "network": {"id": "f846e0f0-6ed8-446e-b721-3fec877f7fe7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-2045972521-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6a5588e45d1439d8a754fdd24946192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b55cad4-91", "ovs_interfaceid": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.193 2 DEBUG nova.network.os_vif_util [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Converting VIF {"id": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "address": "fa:16:3e:65:4b:3b", "network": {"id": "f846e0f0-6ed8-446e-b721-3fec877f7fe7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-2045972521-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6a5588e45d1439d8a754fdd24946192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b55cad4-91", "ovs_interfaceid": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.194 2 DEBUG nova.network.os_vif_util [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=3b55cad4-91dd-4e72-a02e-ddfa3a801fe1,network=Network(f846e0f0-6ed8-446e-b721-3fec877f7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b55cad4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.195 2 DEBUG nova.objects.instance [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8de608c8-0b81-4798-adee-a9364d230016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.209 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  <uuid>8de608c8-0b81-4798-adee-a9364d230016</uuid>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  <name>instance-00000015</name>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <nova:name>tempest-ImagesNegativeTestJSON-server-150678686</nova:name>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:22:09</nova:creationTime>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <nova:user uuid="1752f22132cf468e92bd8857bcc0ed93">tempest-ImagesNegativeTestJSON-100096828-project-member</nova:user>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <nova:project uuid="f6a5588e45d1439d8a754fdd24946192">tempest-ImagesNegativeTestJSON-100096828</nova:project>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <nova:port uuid="3b55cad4-91dd-4e72-a02e-ddfa3a801fe1">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <entry name="serial">8de608c8-0b81-4798-adee-a9364d230016</entry>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <entry name="uuid">8de608c8-0b81-4798-adee-a9364d230016</entry>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/8de608c8-0b81-4798-adee-a9364d230016_disk">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/8de608c8-0b81-4798-adee-a9364d230016_disk.config">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:65:4b:3b"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <target dev="tap3b55cad4-91"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/8de608c8-0b81-4798-adee-a9364d230016/console.log" append="off"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:22:10 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:22:10 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:22:10 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:22:10 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.210 2 DEBUG nova.compute.manager [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Preparing to wait for external event network-vif-plugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.210 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Acquiring lock "8de608c8-0b81-4798-adee-a9364d230016-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.210 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.211 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.211 2 DEBUG nova.virt.libvirt.vif [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-150678686',display_name='tempest-ImagesNegativeTestJSON-server-150678686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-150678686',id=21,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6a5588e45d1439d8a754fdd24946192',ramdisk_id='',reservation_id='r-a60c0pql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-100096828',owner_user_name='tempest-ImagesNegativeTestJSON-100096828-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:22:04Z,user_data=None,user_id='1752f22132cf468e92bd8857bcc0ed93',uuid=8de608c8-0b81-4798-adee-a9364d230016,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "address": "fa:16:3e:65:4b:3b", "network": {"id": "f846e0f0-6ed8-446e-b721-3fec877f7fe7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-2045972521-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6a5588e45d1439d8a754fdd24946192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b55cad4-91", "ovs_interfaceid": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.212 2 DEBUG nova.network.os_vif_util [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Converting VIF {"id": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "address": "fa:16:3e:65:4b:3b", "network": {"id": "f846e0f0-6ed8-446e-b721-3fec877f7fe7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-2045972521-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6a5588e45d1439d8a754fdd24946192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b55cad4-91", "ovs_interfaceid": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.212 2 DEBUG nova.network.os_vif_util [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=3b55cad4-91dd-4e72-a02e-ddfa3a801fe1,network=Network(f846e0f0-6ed8-446e-b721-3fec877f7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b55cad4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.213 2 DEBUG os_vif [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=3b55cad4-91dd-4e72-a02e-ddfa3a801fe1,network=Network(f846e0f0-6ed8-446e-b721-3fec877f7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b55cad4-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.214 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.214 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b55cad4-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b55cad4-91, col_values=(('external_ids', {'iface-id': '3b55cad4-91dd-4e72-a02e-ddfa3a801fe1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:4b:3b', 'vm-uuid': '8de608c8-0b81-4798-adee-a9364d230016'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:10 np0005465604 NetworkManager[45129]: <info>  [1759393330.2202] manager: (tap3b55cad4-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.227 2 INFO os_vif [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=3b55cad4-91dd-4e72-a02e-ddfa3a801fe1,network=Network(f846e0f0-6ed8-446e-b721-3fec877f7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b55cad4-91')#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.288 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.289 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.289 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] No VIF found with MAC fa:16:3e:65:4b:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.290 2 INFO nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Using config drive#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.323 2 DEBUG nova.storage.rbd_utils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] rbd image 8de608c8-0b81-4798-adee-a9364d230016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:22:10 np0005465604 nova_compute[260603]: 2025-10-02 08:22:10.552 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:22:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:10Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:2c:3a 10.100.0.12
Oct  2 04:22:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:10Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:2c:3a 10.100.0.12
Oct  2 04:22:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 447 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.9 MiB/s wr, 167 op/s
Oct  2 04:22:11 np0005465604 nova_compute[260603]: 2025-10-02 08:22:11.415 2 INFO nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Creating config drive at /var/lib/nova/instances/8de608c8-0b81-4798-adee-a9364d230016/disk.config#033[00m
Oct  2 04:22:11 np0005465604 nova_compute[260603]: 2025-10-02 08:22:11.420 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8de608c8-0b81-4798-adee-a9364d230016/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvdsj0gpq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:11Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:75:4f 10.100.0.14
Oct  2 04:22:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:11Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:75:4f 10.100.0.14
Oct  2 04:22:11 np0005465604 nova_compute[260603]: 2025-10-02 08:22:11.564 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8de608c8-0b81-4798-adee-a9364d230016/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvdsj0gpq" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:11 np0005465604 nova_compute[260603]: 2025-10-02 08:22:11.597 2 DEBUG nova.storage.rbd_utils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] rbd image 8de608c8-0b81-4798-adee-a9364d230016_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:11 np0005465604 nova_compute[260603]: 2025-10-02 08:22:11.602 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8de608c8-0b81-4798-adee-a9364d230016/disk.config 8de608c8-0b81-4798-adee-a9364d230016_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:11 np0005465604 nova_compute[260603]: 2025-10-02 08:22:11.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:11 np0005465604 nova_compute[260603]: 2025-10-02 08:22:11.798 2 DEBUG oslo_concurrency.processutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8de608c8-0b81-4798-adee-a9364d230016/disk.config 8de608c8-0b81-4798-adee-a9364d230016_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:11 np0005465604 nova_compute[260603]: 2025-10-02 08:22:11.798 2 INFO nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Deleting local config drive /var/lib/nova/instances/8de608c8-0b81-4798-adee-a9364d230016/disk.config because it was imported into RBD.#033[00m
Oct  2 04:22:11 np0005465604 NetworkManager[45129]: <info>  [1759393331.8723] manager: (tap3b55cad4-91): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Oct  2 04:22:11 np0005465604 kernel: tap3b55cad4-91: entered promiscuous mode
Oct  2 04:22:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:11Z|00072|binding|INFO|Claiming lport 3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 for this chassis.
Oct  2 04:22:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:11Z|00073|binding|INFO|3b55cad4-91dd-4e72-a02e-ddfa3a801fe1: Claiming fa:16:3e:65:4b:3b 10.100.0.5
Oct  2 04:22:11 np0005465604 nova_compute[260603]: 2025-10-02 08:22:11.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:11.889 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:4b:3b 10.100.0.5'], port_security=['fa:16:3e:65:4b:3b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8de608c8-0b81-4798-adee-a9364d230016', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f846e0f0-6ed8-446e-b721-3fec877f7fe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6a5588e45d1439d8a754fdd24946192', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b2b6e16-dfc7-42dc-a3d4-af720a27689a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=faffee4a-6858-4e90-8ff8-bd6d709d10d9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3b55cad4-91dd-4e72-a02e-ddfa3a801fe1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:11.892 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 in datapath f846e0f0-6ed8-446e-b721-3fec877f7fe7 bound to our chassis#033[00m
Oct  2 04:22:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:11.896 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f846e0f0-6ed8-446e-b721-3fec877f7fe7#033[00m
Oct  2 04:22:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:11.909 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7fc5966-97cd-4b0a-98d8-7a4244ea1dd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:11.910 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf846e0f0-61 in ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:22:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:11.913 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf846e0f0-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:22:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:11.913 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50bb85a2-38e0-4dae-8b84-abb58a65a91a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:11.914 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e88fb512-c9d1-481a-87a6-00359304ffd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:11 np0005465604 systemd-udevd[288121]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:22:11 np0005465604 nova_compute[260603]: 2025-10-02 08:22:11.932 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:22:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:11.932 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[dbcbe12e-1d3b-47dd-8cc1-4370e60778a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:11 np0005465604 systemd-machined[214636]: New machine qemu-24-instance-00000015.
Oct  2 04:22:11 np0005465604 NetworkManager[45129]: <info>  [1759393331.9444] device (tap3b55cad4-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:22:11 np0005465604 NetworkManager[45129]: <info>  [1759393331.9460] device (tap3b55cad4-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:22:11 np0005465604 systemd[1]: Started Virtual Machine qemu-24-instance-00000015.
Oct  2 04:22:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:11.965 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e757152-2fe8-4555-8898-a091c8ea0d13]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:11 np0005465604 nova_compute[260603]: 2025-10-02 08:22:11.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:11Z|00074|binding|INFO|Setting lport 3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 ovn-installed in OVS
Oct  2 04:22:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:11Z|00075|binding|INFO|Setting lport 3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 up in Southbound
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.017 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a317be0-465a-4eca-86aa-0d9952a06723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.030 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[08fcc00e-e111-4989-a833-36a31faf4374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:12 np0005465604 NetworkManager[45129]: <info>  [1759393332.0310] manager: (tapf846e0f0-60): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.062 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[343da9e8-8e57-4c12-a94d-02550e52db7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.065 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[703944fa-aeb1-4759-a10c-2db4cfc0441c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:12 np0005465604 NetworkManager[45129]: <info>  [1759393332.0960] device (tapf846e0f0-60): carrier: link connected
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.107 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[97b00ae7-4f96-4154-96d7-b338d0e187ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.130 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd54759c-8258-4885-906d-885e49cfd5cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf846e0f0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:6b:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421069, 'reachable_time': 44185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288154, 'error': None, 'target': 'ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.142 2 DEBUG nova.network.neutron [req-93522cda-c6e0-48ec-95c4-5659e33cf1e4 req-749b2403-f5f2-4df6-b035-9a928681e6dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Updated VIF entry in instance network info cache for port 3b55cad4-91dd-4e72-a02e-ddfa3a801fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.143 2 DEBUG nova.network.neutron [req-93522cda-c6e0-48ec-95c4-5659e33cf1e4 req-749b2403-f5f2-4df6-b035-9a928681e6dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Updating instance_info_cache with network_info: [{"id": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "address": "fa:16:3e:65:4b:3b", "network": {"id": "f846e0f0-6ed8-446e-b721-3fec877f7fe7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-2045972521-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6a5588e45d1439d8a754fdd24946192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b55cad4-91", "ovs_interfaceid": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.156 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5b32fc-ee59-4a56-85cd-6e9b619ec582]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:6b09'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 421069, 'tstamp': 421069}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288155, 'error': None, 'target': 'ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.160 2 DEBUG oslo_concurrency.lockutils [req-93522cda-c6e0-48ec-95c4-5659e33cf1e4 req-749b2403-f5f2-4df6-b035-9a928681e6dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-8de608c8-0b81-4798-adee-a9364d230016" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.183 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d98f1a17-5aa6-47c4-981e-7282d5c1627b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf846e0f0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:6b:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421069, 'reachable_time': 44185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288156, 'error': None, 'target': 'ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.214 2 DEBUG nova.network.neutron [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Successfully updated port: 5385a348-2508-4677-aebb-d79c82f4fc36 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.216 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e24f204d-5d5b-40e0-af62-00c454a09556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.227 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.228 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquired lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.228 2 DEBUG nova.network.neutron [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.293 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18f15b03-b02c-4a28-9894-ab91c936eed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.294 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf846e0f0-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.294 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.295 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf846e0f0-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:12 np0005465604 NetworkManager[45129]: <info>  [1759393332.2970] manager: (tapf846e0f0-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct  2 04:22:12 np0005465604 kernel: tapf846e0f0-60: entered promiscuous mode
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.300 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf846e0f0-60, col_values=(('external_ids', {'iface-id': 'e5d836ca-9b23-4a5d-b3f8-af524cc90fb3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:12Z|00076|binding|INFO|Releasing lport e5d836ca-9b23-4a5d-b3f8-af524cc90fb3 from this chassis (sb_readonly=0)
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.320 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f846e0f0-6ed8-446e-b721-3fec877f7fe7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f846e0f0-6ed8-446e-b721-3fec877f7fe7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.320 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[964ccb8d-3273-467a-aa01-3527428cefae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.321 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-f846e0f0-6ed8-446e-b721-3fec877f7fe7
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/f846e0f0-6ed8-446e-b721-3fec877f7fe7.pid.haproxy
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID f846e0f0-6ed8-446e-b721-3fec877f7fe7
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:22:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:12.323 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7', 'env', 'PROCESS_TAG=haproxy-f846e0f0-6ed8-446e-b721-3fec877f7fe7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f846e0f0-6ed8-446e-b721-3fec877f7fe7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.517 2 DEBUG nova.compute.manager [req-14346480-bd3b-4936-9b7f-0ca7794573cd req-8e1dff88-a662-43a0-99b3-8916023ff657 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Received event network-changed-5385a348-2508-4677-aebb-d79c82f4fc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.517 2 DEBUG nova.compute.manager [req-14346480-bd3b-4936-9b7f-0ca7794573cd req-8e1dff88-a662-43a0-99b3-8916023ff657 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Refreshing instance network info cache due to event network-changed-5385a348-2508-4677-aebb-d79c82f4fc36. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.518 2 DEBUG oslo_concurrency.lockutils [req-14346480-bd3b-4936-9b7f-0ca7794573cd req-8e1dff88-a662-43a0-99b3-8916023ff657 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.544 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.545 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:12 np0005465604 nova_compute[260603]: 2025-10-02 08:22:12.630 2 DEBUG nova.network.neutron [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:22:12 np0005465604 podman[288186]: 2025-10-02 08:22:12.71008086 +0000 UTC m=+0.052301367 container create c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:22:12 np0005465604 systemd[1]: Started libpod-conmon-c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8.scope.
Oct  2 04:22:12 np0005465604 podman[288186]: 2025-10-02 08:22:12.680360522 +0000 UTC m=+0.022581049 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:22:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:22:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 503 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.2 MiB/s wr, 274 op/s
Oct  2 04:22:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10275287b8e9cfeb3186ab6dacb71ee005a3d822bdee53d9a5579ccada823be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:12 np0005465604 podman[288186]: 2025-10-02 08:22:12.824056865 +0000 UTC m=+0.166277452 container init c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:22:12 np0005465604 podman[288186]: 2025-10-02 08:22:12.833616843 +0000 UTC m=+0.175837380 container start c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:22:12 np0005465604 neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7[288220]: [NOTICE]   (288224) : New worker (288226) forked
Oct  2 04:22:12 np0005465604 neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7[288220]: [NOTICE]   (288224) : Loading success.
Oct  2 04:22:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084224322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.004 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.112 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.112 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.116 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.116 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.120 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.120 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.124 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.124 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.127 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.128 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.131 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.131 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.436 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.437 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3577MB free_disk=59.737403869628906GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.438 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.438 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.523 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393333.522846, 8de608c8-0b81-4798-adee-a9364d230016 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.524 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] VM Started (Lifecycle Event)#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.530 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 09c17c12-7dac-4fc8-917e-cb2efa1d4607 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.530 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance efe21c23-e200-4841-be3e-2b3e7c8b5c38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.530 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance c0ec8879-e818-40a5-88ec-c6d89b8ddea4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.531 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 01296dff-20d5-49d6-b582-f9ec1d6b0af8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.531 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ce67c5cc-b413-4871-8bcb-677c171ce721 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.531 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 8de608c8-0b81-4798-adee-a9364d230016 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.531 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 4f7a36cf-fd1b-42f2-94be-5e9483b5f941 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.531 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.532 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.540 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.544 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393333.5230281, 8de608c8-0b81-4798-adee-a9364d230016 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.544 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.561 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.570 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.590 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:22:13 np0005465604 nova_compute[260603]: 2025-10-02 08:22:13.678 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1061938416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.164 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.171 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.189 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.212 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.213 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:14 np0005465604 kernel: tap205e575c-4a (unregistering): left promiscuous mode
Oct  2 04:22:14 np0005465604 NetworkManager[45129]: <info>  [1759393334.3126] device (tap205e575c-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00077|binding|INFO|Releasing lport 205e575c-4af3-4a6a-af77-fd96af608b0a from this chassis (sb_readonly=0)
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00078|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a down in Southbound
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00079|binding|INFO|Removing iface tap205e575c-4a ovn-installed in OVS
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.333 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:2c:3a 10.100.0.12'], port_security=['fa:16:3e:24:2c:3a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '09c17c12-7dac-4fc8-917e-cb2efa1d4607', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=205e575c-4af3-4a6a-af77-fd96af608b0a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.335 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 205e575c-4af3-4a6a-af77-fd96af608b0a in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e unbound from our chassis#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.337 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.356 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6179d9d3-99fa-4853-8178-65e5799cefe1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000000c.scope: Consumed 12.455s CPU time.
Oct  2 04:22:14 np0005465604 systemd-machined[214636]: Machine qemu-22-instance-0000000c terminated.
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.396 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a34de6bb-1bf0-4670-a384-bcfcf9566d33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.400 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0736e208-6090-46f4-9fc5-c15e6427b09c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.427 2 DEBUG nova.network.neutron [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Updating instance_info_cache with network_info: [{"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.442 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e859fe67-e319-423b-85a3-4ca2d06fca58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 podman[288302]: 2025-10-02 08:22:14.446732455 +0000 UTC m=+0.097851066 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.455 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Releasing lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.455 2 DEBUG nova.compute.manager [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Instance network_info: |[{"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.456 2 DEBUG oslo_concurrency.lockutils [req-14346480-bd3b-4936-9b7f-0ca7794573cd req-8e1dff88-a662-43a0-99b3-8916023ff657 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.457 2 DEBUG nova.network.neutron [req-14346480-bd3b-4936-9b7f-0ca7794573cd req-8e1dff88-a662-43a0-99b3-8916023ff657 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Refreshing network info cache for port 5385a348-2508-4677-aebb-d79c82f4fc36 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.462 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Start _get_guest_xml network_info=[{"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.469 2 WARNING nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.472 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1976598b-7f2e-40df-8d4b-063b05b9c8a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 16, 'rx_bytes': 1168, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 16, 'rx_bytes': 1168, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288331, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.474 2 DEBUG nova.virt.libvirt.host [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.475 2 DEBUG nova.virt.libvirt.host [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.490 2 DEBUG nova.virt.libvirt.host [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.491 2 DEBUG nova.virt.libvirt.host [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.491 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.492 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.493 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.493 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.493 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.494 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.494 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.494 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.495 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.495 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.495 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.496 2 DEBUG nova.virt.hardware [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.496 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9777bd70-2be4-4a0a-a07a-caf72d58567c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288332, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288332, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.498 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.501 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.506 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.506 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.506 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.507 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.532 2 DEBUG nova.compute.manager [req-175d71bb-3f19-4b00-ada2-6eee12edeabc req-b5b4ba96-fddd-46a1-bede-e1050b70c3cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-unplugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.533 2 DEBUG oslo_concurrency.lockutils [req-175d71bb-3f19-4b00-ada2-6eee12edeabc req-b5b4ba96-fddd-46a1-bede-e1050b70c3cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.533 2 DEBUG oslo_concurrency.lockutils [req-175d71bb-3f19-4b00-ada2-6eee12edeabc req-b5b4ba96-fddd-46a1-bede-e1050b70c3cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.533 2 DEBUG oslo_concurrency.lockutils [req-175d71bb-3f19-4b00-ada2-6eee12edeabc req-b5b4ba96-fddd-46a1-bede-e1050b70c3cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.534 2 DEBUG nova.compute.manager [req-175d71bb-3f19-4b00-ada2-6eee12edeabc req-b5b4ba96-fddd-46a1-bede-e1050b70c3cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] No waiting events found dispatching network-vif-unplugged-205e575c-4af3-4a6a-af77-fd96af608b0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.534 2 WARNING nova.compute.manager [req-175d71bb-3f19-4b00-ada2-6eee12edeabc req-b5b4ba96-fddd-46a1-bede-e1050b70c3cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received unexpected event network-vif-unplugged-205e575c-4af3-4a6a-af77-fd96af608b0a for instance with vm_state active and task_state rebuilding.#033[00m
Oct  2 04:22:14 np0005465604 kernel: tap205e575c-4a: entered promiscuous mode
Oct  2 04:22:14 np0005465604 systemd-udevd[288143]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:22:14 np0005465604 kernel: tap205e575c-4a (unregistering): left promiscuous mode
Oct  2 04:22:14 np0005465604 NetworkManager[45129]: <info>  [1759393334.5505] manager: (tap205e575c-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00080|binding|INFO|Claiming lport 205e575c-4af3-4a6a-af77-fd96af608b0a for this chassis.
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00081|binding|INFO|205e575c-4af3-4a6a-af77-fd96af608b0a: Claiming fa:16:3e:24:2c:3a 10.100.0.12
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.566 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:2c:3a 10.100.0.12'], port_security=['fa:16:3e:24:2c:3a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '09c17c12-7dac-4fc8-917e-cb2efa1d4607', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=205e575c-4af3-4a6a-af77-fd96af608b0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.568 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 205e575c-4af3-4a6a-af77-fd96af608b0a in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e bound to our chassis#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.574 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.600 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0745946b-145c-4afa-9730-b4df4051f473]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00082|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a ovn-installed in OVS
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00083|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a up in Southbound
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00084|binding|INFO|Releasing lport 205e575c-4af3-4a6a-af77-fd96af608b0a from this chassis (sb_readonly=1)
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00085|if_status|INFO|Not setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a down as sb is readonly
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00086|binding|INFO|Removing iface tap205e575c-4a ovn-installed in OVS
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00087|binding|INFO|Releasing lport 205e575c-4af3-4a6a-af77-fd96af608b0a from this chassis (sb_readonly=0)
Oct  2 04:22:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:14Z|00088|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a down in Southbound
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.625 2 DEBUG nova.compute.manager [req-353e80b2-f574-4e90-a9b2-cc6d558b0ac5 req-89af0ec1-3833-44c5-bf18-1c2808d9d17a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Received event network-vif-plugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.625 2 DEBUG oslo_concurrency.lockutils [req-353e80b2-f574-4e90-a9b2-cc6d558b0ac5 req-89af0ec1-3833-44c5-bf18-1c2808d9d17a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8de608c8-0b81-4798-adee-a9364d230016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.625 2 DEBUG oslo_concurrency.lockutils [req-353e80b2-f574-4e90-a9b2-cc6d558b0ac5 req-89af0ec1-3833-44c5-bf18-1c2808d9d17a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.626 2 DEBUG oslo_concurrency.lockutils [req-353e80b2-f574-4e90-a9b2-cc6d558b0ac5 req-89af0ec1-3833-44c5-bf18-1c2808d9d17a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.626 2 DEBUG nova.compute.manager [req-353e80b2-f574-4e90-a9b2-cc6d558b0ac5 req-89af0ec1-3833-44c5-bf18-1c2808d9d17a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Processing event network-vif-plugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.626 2 DEBUG nova.compute.manager [req-353e80b2-f574-4e90-a9b2-cc6d558b0ac5 req-89af0ec1-3833-44c5-bf18-1c2808d9d17a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Received event network-vif-plugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.626 2 DEBUG oslo_concurrency.lockutils [req-353e80b2-f574-4e90-a9b2-cc6d558b0ac5 req-89af0ec1-3833-44c5-bf18-1c2808d9d17a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8de608c8-0b81-4798-adee-a9364d230016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.626 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:2c:3a 10.100.0.12'], port_security=['fa:16:3e:24:2c:3a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '09c17c12-7dac-4fc8-917e-cb2efa1d4607', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=205e575c-4af3-4a6a-af77-fd96af608b0a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.626 2 DEBUG oslo_concurrency.lockutils [req-353e80b2-f574-4e90-a9b2-cc6d558b0ac5 req-89af0ec1-3833-44c5-bf18-1c2808d9d17a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.627 2 DEBUG oslo_concurrency.lockutils [req-353e80b2-f574-4e90-a9b2-cc6d558b0ac5 req-89af0ec1-3833-44c5-bf18-1c2808d9d17a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.627 2 DEBUG nova.compute.manager [req-353e80b2-f574-4e90-a9b2-cc6d558b0ac5 req-89af0ec1-3833-44c5-bf18-1c2808d9d17a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] No waiting events found dispatching network-vif-plugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.627 2 WARNING nova.compute.manager [req-353e80b2-f574-4e90-a9b2-cc6d558b0ac5 req-89af0ec1-3833-44c5-bf18-1c2808d9d17a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Received unexpected event network-vif-plugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.628 2 DEBUG nova.compute.manager [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.635 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393334.6351247, 8de608c8-0b81-4798-adee-a9364d230016 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.635 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.638 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.644 2 INFO nova.virt.libvirt.driver [-] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Instance spawned successfully.#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.645 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.658 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8ea9b0-d498-4b73-9e91-00b5a8f0f834]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.663 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[af072573-11a2-422b-be21-4d8b51efb6cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.669 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.674 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.678 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.678 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.679 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.679 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.680 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.680 2 DEBUG nova.virt.libvirt.driver [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.701 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.704 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d845269b-a354-48e6-973f-0ae555bb59c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.724 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52df06b9-41e6-4337-aa51-51d93afd8981]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 18, 'rx_bytes': 1168, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 18, 'rx_bytes': 1168, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288363, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.732 2 INFO nova.compute.manager [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Took 10.07 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.732 2 DEBUG nova.compute.manager [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.747 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[455f0aa4-566a-429d-b7d3-b433267ae609]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288364, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288364, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.749 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.756 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.756 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.757 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.757 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.759 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 205e575c-4af3-4a6a-af77-fd96af608b0a in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e unbound from our chassis#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.761 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:22:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 526 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.8 MiB/s wr, 232 op/s
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.782 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0639da09-546d-455e-b20c-2038b2ccdd9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.784 2 INFO nova.compute.manager [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Took 11.13 seconds to build instance.#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.801 2 DEBUG oslo_concurrency.lockutils [None req-fe8128ff-09c0-448e-b2f5-2f6d90b960be 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.814 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[250656f8-70d6-45c9-b8fe-6f19dc0015c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.817 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b00f07d6-1c09-44c5-a3fc-09e0e9f0fc99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.846 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[db545bdb-9ed6-4055-8863-62c4f1637772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.868 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5edc6078-3d55-44be-b75f-7bab652bc448]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 20, 'rx_bytes': 1168, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 20, 'rx_bytes': 1168, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288371, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.885 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eec9d811-cd66-458c-bc72-184746fc9186]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288372, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288372, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.887 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.893 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.893 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.894 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:14.894 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.952 2 INFO nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.962 2 INFO nova.virt.libvirt.driver [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance destroyed successfully.#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.966 2 INFO nova.virt.libvirt.driver [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance destroyed successfully.#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.967 2 DEBUG nova.virt.libvirt.vif [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:20:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1588661618',display_name='tempest-ServersAdminTestJSON-server-1588661618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1588661618',id=12,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:21:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-v30mm0vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-project-member'},
tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:22:01Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=09c17c12-7dac-4fc8-917e-cb2efa1d4607,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.968 2 DEBUG nova.network.os_vif_util [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.968 2 DEBUG nova.network.os_vif_util [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.969 2 DEBUG os_vif [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.971 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap205e575c-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:14 np0005465604 nova_compute[260603]: 2025-10-02 08:22:14.976 2 INFO os_vif [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a')#033[00m
Oct  2 04:22:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:22:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737275224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.001 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.024 2 DEBUG nova.storage.rbd_utils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.029 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.213 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.218 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.321 2 INFO nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Deleting instance files /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607_del#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.322 2 INFO nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Deletion of /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607_del complete#033[00m
Oct  2 04:22:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:22:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705696371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.484 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.485 2 INFO nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Creating image(s)#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.511 2 DEBUG nova.storage.rbd_utils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.533 2 DEBUG nova.storage.rbd_utils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.559 2 DEBUG nova.storage.rbd_utils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.562 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.583 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.586 2 DEBUG nova.virt.libvirt.vif [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-784686834',display_name='tempest-FloatingIPsAssociationTestJSON-server-784686834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-784686834',id=22,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a46df342cad4b62ae3f9af2fd10ed84',ramdisk_id='',reservation_id='r-edy6oj90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1024699440',owner_user_name=
'tempest-FloatingIPsAssociationTestJSON-1024699440-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:22:07Z,user_data=None,user_id='be83392e7e8847878b199e35d03663f9',uuid=4f7a36cf-fd1b-42f2-94be-5e9483b5f941,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.587 2 DEBUG nova.network.os_vif_util [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converting VIF {"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.588 2 DEBUG nova.network.os_vif_util [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5385a348-2508-4677-aebb-d79c82f4fc36,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5385a348-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.589 2 DEBUG nova.objects.instance [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7a36cf-fd1b-42f2-94be-5e9483b5f941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.604 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  <uuid>4f7a36cf-fd1b-42f2-94be-5e9483b5f941</uuid>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  <name>instance-00000016</name>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-784686834</nova:name>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:22:14</nova:creationTime>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <nova:user uuid="be83392e7e8847878b199e35d03663f9">tempest-FloatingIPsAssociationTestJSON-1024699440-project-member</nova:user>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <nova:project uuid="2a46df342cad4b62ae3f9af2fd10ed84">tempest-FloatingIPsAssociationTestJSON-1024699440</nova:project>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <nova:port uuid="5385a348-2508-4677-aebb-d79c82f4fc36">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <entry name="serial">4f7a36cf-fd1b-42f2-94be-5e9483b5f941</entry>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <entry name="uuid">4f7a36cf-fd1b-42f2-94be-5e9483b5f941</entry>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk.config">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:4f:0a:92"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <target dev="tap5385a348-25"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/4f7a36cf-fd1b-42f2-94be-5e9483b5f941/console.log" append="off"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:22:15 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:22:15 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:22:15 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:22:15 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.606 2 DEBUG nova.compute.manager [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Preparing to wait for external event network-vif-plugged-5385a348-2508-4677-aebb-d79c82f4fc36 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.606 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.607 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.607 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.608 2 DEBUG nova.virt.libvirt.vif [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-784686834',display_name='tempest-FloatingIPsAssociationTestJSON-server-784686834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-784686834',id=22,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a46df342cad4b62ae3f9af2fd10ed84',ramdisk_id='',reservation_id='r-edy6oj90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1024699440',owner_
user_name='tempest-FloatingIPsAssociationTestJSON-1024699440-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:22:07Z,user_data=None,user_id='be83392e7e8847878b199e35d03663f9',uuid=4f7a36cf-fd1b-42f2-94be-5e9483b5f941,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.608 2 DEBUG nova.network.os_vif_util [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converting VIF {"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.609 2 DEBUG nova.network.os_vif_util [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5385a348-2508-4677-aebb-d79c82f4fc36,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5385a348-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.612 2 DEBUG os_vif [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5385a348-2508-4677-aebb-d79c82f4fc36,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5385a348-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.613 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.618 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5385a348-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.618 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5385a348-25, col_values=(('external_ids', {'iface-id': '5385a348-2508-4677-aebb-d79c82f4fc36', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:0a:92', 'vm-uuid': '4f7a36cf-fd1b-42f2-94be-5e9483b5f941'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:15 np0005465604 NetworkManager[45129]: <info>  [1759393335.6213] manager: (tap5385a348-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.624 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.625 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.626 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.626 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.648 2 DEBUG nova.storage.rbd_utils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.652 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.679 2 INFO os_vif [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5385a348-2508-4677-aebb-d79c82f4fc36,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5385a348-25')#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.742 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.742 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.754 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] No VIF found with MAC fa:16:3e:4f:0a:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.754 2 INFO nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Using config drive#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.776 2 DEBUG nova.storage.rbd_utils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.899 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:15 np0005465604 nova_compute[260603]: 2025-10-02 08:22:15.952 2 DEBUG nova.storage.rbd_utils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] resizing rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.023 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.024 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Ensure instance console log exists: /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.024 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.025 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.025 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.027 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Start _get_guest_xml network_info=[{"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.030 2 WARNING nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.033 2 DEBUG nova.virt.libvirt.host [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.034 2 DEBUG nova.virt.libvirt.host [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.037 2 DEBUG nova.virt.libvirt.host [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.037 2 DEBUG nova.virt.libvirt.host [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.038 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.038 2 DEBUG nova.virt.hardware [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.038 2 DEBUG nova.virt.hardware [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.038 2 DEBUG nova.virt.hardware [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.038 2 DEBUG nova.virt.hardware [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.039 2 DEBUG nova.virt.hardware [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.039 2 DEBUG nova.virt.hardware [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.039 2 DEBUG nova.virt.hardware [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.039 2 DEBUG nova.virt.hardware [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.039 2 DEBUG nova.virt.hardware [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.040 2 DEBUG nova.virt.hardware [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.040 2 DEBUG nova.virt.hardware [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.040 2 DEBUG nova.objects.instance [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.058 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:22:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2048696687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.447 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.464 2 DEBUG nova.storage.rbd_utils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.467 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.640 2 DEBUG nova.compute.manager [req-0816ba19-d83e-4386-a338-83fa30b477fc req-036b566b-eee8-425b-a538-a31729fbedec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.641 2 DEBUG oslo_concurrency.lockutils [req-0816ba19-d83e-4386-a338-83fa30b477fc req-036b566b-eee8-425b-a538-a31729fbedec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.641 2 DEBUG oslo_concurrency.lockutils [req-0816ba19-d83e-4386-a338-83fa30b477fc req-036b566b-eee8-425b-a538-a31729fbedec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.641 2 DEBUG oslo_concurrency.lockutils [req-0816ba19-d83e-4386-a338-83fa30b477fc req-036b566b-eee8-425b-a538-a31729fbedec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.642 2 DEBUG nova.compute.manager [req-0816ba19-d83e-4386-a338-83fa30b477fc req-036b566b-eee8-425b-a538-a31729fbedec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] No waiting events found dispatching network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.642 2 WARNING nova.compute.manager [req-0816ba19-d83e-4386-a338-83fa30b477fc req-036b566b-eee8-425b-a538-a31729fbedec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received unexpected event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.770 2 DEBUG oslo_concurrency.lockutils [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Acquiring lock "8de608c8-0b81-4798-adee-a9364d230016" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.772 2 DEBUG oslo_concurrency.lockutils [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.773 2 DEBUG oslo_concurrency.lockutils [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Acquiring lock "8de608c8-0b81-4798-adee-a9364d230016-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.773 2 DEBUG oslo_concurrency.lockutils [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.773 2 DEBUG oslo_concurrency.lockutils [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.774 2 INFO nova.compute.manager [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Terminating instance#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.775 2 DEBUG nova.compute.manager [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:22:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 526 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 779 KiB/s rd, 7.8 MiB/s wr, 189 op/s
Oct  2 04:22:16 np0005465604 kernel: tap3b55cad4-91 (unregistering): left promiscuous mode
Oct  2 04:22:16 np0005465604 NetworkManager[45129]: <info>  [1759393336.8246] device (tap3b55cad4-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:22:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:16Z|00089|binding|INFO|Releasing lport 3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 from this chassis (sb_readonly=0)
Oct  2 04:22:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:16Z|00090|binding|INFO|Setting lport 3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 down in Southbound
Oct  2 04:22:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:16Z|00091|binding|INFO|Removing iface tap3b55cad4-91 ovn-installed in OVS
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:16.846 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:4b:3b 10.100.0.5'], port_security=['fa:16:3e:65:4b:3b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8de608c8-0b81-4798-adee-a9364d230016', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f846e0f0-6ed8-446e-b721-3fec877f7fe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6a5588e45d1439d8a754fdd24946192', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b2b6e16-dfc7-42dc-a3d4-af720a27689a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=faffee4a-6858-4e90-8ff8-bd6d709d10d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3b55cad4-91dd-4e72-a02e-ddfa3a801fe1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:16.847 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 in datapath f846e0f0-6ed8-446e-b721-3fec877f7fe7 unbound from our chassis#033[00m
Oct  2 04:22:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:16.849 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f846e0f0-6ed8-446e-b721-3fec877f7fe7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:22:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:16.850 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83b49339-2ca1-4491-ab02-63e55386d918]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:16.854 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7 namespace which is not needed anymore#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.875 2 INFO nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Creating config drive at /var/lib/nova/instances/4f7a36cf-fd1b-42f2-94be-5e9483b5f941/disk.config#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.880 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4f7a36cf-fd1b-42f2-94be-5e9483b5f941/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzvc9v8b4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:16 np0005465604 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct  2 04:22:16 np0005465604 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000015.scope: Consumed 3.696s CPU time.
Oct  2 04:22:16 np0005465604 systemd-machined[214636]: Machine qemu-24-instance-00000015 terminated.
Oct  2 04:22:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:22:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589067850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.935 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.937 2 DEBUG nova.virt.libvirt.vif [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:20:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1588661618',display_name='tempest-ServersAdminTestJSON-server-1588661618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1588661618',id=12,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:21:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-v30mm0vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSO
N-83453142-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:22:15Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=09c17c12-7dac-4fc8-917e-cb2efa1d4607,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.937 2 DEBUG nova.network.os_vif_util [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.938 2 DEBUG nova.network.os_vif_util [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.940 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  <uuid>09c17c12-7dac-4fc8-917e-cb2efa1d4607</uuid>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  <name>instance-0000000c</name>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAdminTestJSON-server-1588661618</nova:name>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:22:16</nova:creationTime>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <nova:user uuid="2b82955fab174d8aac325e64068908f5">tempest-ServersAdminTestJSON-83453142-project-member</nova:user>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <nova:project uuid="13ecc6dea7a8465394379400d84a053e">tempest-ServersAdminTestJSON-83453142</nova:project>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <nova:port uuid="205e575c-4af3-4a6a-af77-fd96af608b0a">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <entry name="serial">09c17c12-7dac-4fc8-917e-cb2efa1d4607</entry>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <entry name="uuid">09c17c12-7dac-4fc8-917e-cb2efa1d4607</entry>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:24:2c:3a"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <target dev="tap205e575c-4a"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/console.log" append="off"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:22:16 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:22:16 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:22:16 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:22:16 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.941 2 DEBUG nova.compute.manager [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Preparing to wait for external event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.942 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.942 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.942 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.943 2 DEBUG nova.virt.libvirt.vif [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:20:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1588661618',display_name='tempest-ServersAdminTestJSON-server-1588661618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1588661618',id=12,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:21:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-v30mm0vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:22:15Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=09c17c12-7dac-4fc8-917e-cb2efa1d4607,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.943 2 DEBUG nova.network.os_vif_util [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.944 2 DEBUG nova.network.os_vif_util [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.944 2 DEBUG os_vif [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.945 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.946 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap205e575c-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap205e575c-4a, col_values=(('external_ids', {'iface-id': '205e575c-4af3-4a6a-af77-fd96af608b0a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:2c:3a', 'vm-uuid': '09c17c12-7dac-4fc8-917e-cb2efa1d4607'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:16 np0005465604 NetworkManager[45129]: <info>  [1759393336.9586] manager: (tap205e575c-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 04:22:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:16 np0005465604 neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7[288220]: [NOTICE]   (288224) : haproxy version is 2.8.14-c23fe91
Oct  2 04:22:16 np0005465604 neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7[288220]: [NOTICE]   (288224) : path to executable is /usr/sbin/haproxy
Oct  2 04:22:16 np0005465604 neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7[288220]: [WARNING]  (288224) : Exiting Master process...
Oct  2 04:22:16 np0005465604 neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7[288220]: [ALERT]    (288224) : Current worker (288226) exited with code 143 (Terminated)
Oct  2 04:22:16 np0005465604 neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7[288220]: [WARNING]  (288224) : All workers exited. Exiting... (0)
Oct  2 04:22:16 np0005465604 systemd[1]: libpod-c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8.scope: Deactivated successfully.
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:16 np0005465604 nova_compute[260603]: 2025-10-02 08:22:16.972 2 INFO os_vif [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a')
Oct  2 04:22:16 np0005465604 podman[288710]: 2025-10-02 08:22:16.974956261 +0000 UTC m=+0.046031095 container died c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.005 2 INFO nova.virt.libvirt.driver [-] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Instance destroyed successfully.
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.005 2 DEBUG nova.objects.instance [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lazy-loading 'resources' on Instance uuid 8de608c8-0b81-4798-adee-a9364d230016 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:22:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8-userdata-shm.mount: Deactivated successfully.
Oct  2 04:22:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f10275287b8e9cfeb3186ab6dacb71ee005a3d822bdee53d9a5579ccada823be-merged.mount: Deactivated successfully.
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.014 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4f7a36cf-fd1b-42f2-94be-5e9483b5f941/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzvc9v8b4" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:22:17 np0005465604 podman[288710]: 2025-10-02 08:22:17.023849677 +0000 UTC m=+0.094924511 container cleanup c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:22:17 np0005465604 systemd[1]: libpod-conmon-c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8.scope: Deactivated successfully.
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.043 2 DEBUG nova.storage.rbd_utils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] rbd image 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.047 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4f7a36cf-fd1b-42f2-94be-5e9483b5f941/disk.config 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.078 2 DEBUG nova.virt.libvirt.vif [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-150678686',display_name='tempest-ImagesNegativeTestJSON-server-150678686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-150678686',id=21,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:22:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6a5588e45d1439d8a754fdd24946192',ramdisk_id='',reservation_id='r-a60c0pql',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-100096828',owner_user_name='tempest-ImagesNegativeTestJSON-100096828-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:22:14Z,user_data=None,user_id='1752f22132cf468e92bd8857bcc0ed93',uuid=8de608c8-0b81-4798-adee-a9364d230016,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "address": "fa:16:3e:65:4b:3b", "network": {"id": "f846e0f0-6ed8-446e-b721-3fec877f7fe7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-2045972521-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6a5588e45d1439d8a754fdd24946192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b55cad4-91", "ovs_interfaceid": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.079 2 DEBUG nova.network.os_vif_util [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Converting VIF {"id": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "address": "fa:16:3e:65:4b:3b", "network": {"id": "f846e0f0-6ed8-446e-b721-3fec877f7fe7", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-2045972521-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6a5588e45d1439d8a754fdd24946192", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b55cad4-91", "ovs_interfaceid": "3b55cad4-91dd-4e72-a02e-ddfa3a801fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.080 2 DEBUG nova.network.os_vif_util [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=3b55cad4-91dd-4e72-a02e-ddfa3a801fe1,network=Network(f846e0f0-6ed8-446e-b721-3fec877f7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b55cad4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.080 2 DEBUG os_vif [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=3b55cad4-91dd-4e72-a02e-ddfa3a801fe1,network=Network(f846e0f0-6ed8-446e-b721-3fec877f7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b55cad4-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b55cad4-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 04:22:17 np0005465604 podman[288776]: 2025-10-02 08:22:17.089034649 +0000 UTC m=+0.039368920 container remove c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.090 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.090 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.091 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] No VIF found with MAC fa:16:3e:24:2c:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.091 2 INFO nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Using config drive
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.094 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79421290-7541-4bd5-9dbd-9b02bca4b740]: (4, ('Thu Oct  2 08:22:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7 (c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8)\nc523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8\nThu Oct  2 08:22:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7 (c523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8)\nc523a2f0e1650ea21db50bfbd43fef296788076338008b9fb9367993d1852ee8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.095 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[81ecedf1-0da4-4f29-bec1-8aa6400ab0e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.096 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf846e0f0-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:22:17 np0005465604 kernel: tapf846e0f0-60: left promiscuous mode
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.116 2 DEBUG nova.storage.rbd_utils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.130 2 INFO os_vif [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=3b55cad4-91dd-4e72-a02e-ddfa3a801fe1,network=Network(f846e0f0-6ed8-446e-b721-3fec877f7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b55cad4-91')
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.134 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7996d56b-98cf-47de-9a8e-7427251e9f9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.154 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ce71c9f7-c744-474e-a88a-a0271c4c27e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.154 2 DEBUG nova.objects.instance [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.155 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5f443a91-7ca4-4510-a1b9-8a514edf28a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.175 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50e43a37-4169-4d21-99a8-4d125f643a3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421060, 'reachable_time': 43019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288853, 'error': None, 'target': 'ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:22:17 np0005465604 systemd[1]: run-netns-ovnmeta\x2df846e0f0\x2d6ed8\x2d446e\x2db721\x2d3fec877f7fe7.mount: Deactivated successfully.
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.180 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f846e0f0-6ed8-446e-b721-3fec877f7fe7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.180 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[85a8fe86-81eb-4ded-b79c-23bd470d8ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.181 2 DEBUG nova.objects.instance [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'keypairs' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.217 2 DEBUG oslo_concurrency.processutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4f7a36cf-fd1b-42f2-94be-5e9483b5f941/disk.config 4f7a36cf-fd1b-42f2-94be-5e9483b5f941_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.217 2 INFO nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Deleting local config drive /var/lib/nova/instances/4f7a36cf-fd1b-42f2-94be-5e9483b5f941/disk.config because it was imported into RBD.
Oct  2 04:22:17 np0005465604 NetworkManager[45129]: <info>  [1759393337.2871] manager: (tap5385a348-25): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Oct  2 04:22:17 np0005465604 systemd-udevd[288685]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:22:17 np0005465604 kernel: tap5385a348-25: entered promiscuous mode
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:17Z|00092|binding|INFO|Claiming lport 5385a348-2508-4677-aebb-d79c82f4fc36 for this chassis.
Oct  2 04:22:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:17Z|00093|binding|INFO|5385a348-2508-4677-aebb-d79c82f4fc36: Claiming fa:16:3e:4f:0a:92 10.100.0.12
Oct  2 04:22:17 np0005465604 NetworkManager[45129]: <info>  [1759393337.3026] device (tap5385a348-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:22:17 np0005465604 NetworkManager[45129]: <info>  [1759393337.3038] device (tap5385a348-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.313 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:0a:92 10.100.0.12'], port_security=['fa:16:3e:4f:0a:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4f7a36cf-fd1b-42f2-94be-5e9483b5f941', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a46df342cad4b62ae3f9af2fd10ed84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572f4f54-3bbb-4e63-86a3-6fb75389fa5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b31b756-7999-4de8-b356-acd4a5e98929, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5385a348-2508-4677-aebb-d79c82f4fc36) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.314 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5385a348-2508-4677-aebb-d79c82f4fc36 in datapath 239ca39f-5677-4c2d-87f6-45d4404e4ead bound to our chassis#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.316 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 239ca39f-5677-4c2d-87f6-45d4404e4ead#033[00m
Oct  2 04:22:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:17Z|00094|binding|INFO|Setting lport 5385a348-2508-4677-aebb-d79c82f4fc36 ovn-installed in OVS
Oct  2 04:22:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:17Z|00095|binding|INFO|Setting lport 5385a348-2508-4677-aebb-d79c82f4fc36 up in Southbound
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.332 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3ba664-8886-48b2-9649-eb70d4c68e0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:17 np0005465604 systemd-machined[214636]: New machine qemu-25-instance-00000016.
Oct  2 04:22:17 np0005465604 systemd[1]: Started Virtual Machine qemu-25-instance-00000016.
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.365 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e06073-dd99-4272-8b97-0e61cc10d257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.370 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b001edd7-9bd2-4759-a11f-a576832f6c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.403 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf51147-6303-40bc-8028-5b29c2c51963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.424 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[664eb4fe-03b5-4fe4-83ab-ccfd0ff8fa36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap239ca39f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:74:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419697, 'reachable_time': 34603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288887, 'error': None, 'target': 'ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.441 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c63ad29a-c4ef-43b9-b636-d331ec141c29]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap239ca39f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419712, 'tstamp': 419712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288888, 'error': None, 'target': 'ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap239ca39f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419716, 'tstamp': 419716}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288888, 'error': None, 'target': 'ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.443 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap239ca39f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.450 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap239ca39f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.450 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.451 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap239ca39f-50, col_values=(('external_ids', {'iface-id': '029a4135-d57a-4306-8fd9-eb98a54d6046'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:17.451 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.474 2 DEBUG nova.network.neutron [req-14346480-bd3b-4936-9b7f-0ca7794573cd req-8e1dff88-a662-43a0-99b3-8916023ff657 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Updated VIF entry in instance network info cache for port 5385a348-2508-4677-aebb-d79c82f4fc36. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.475 2 DEBUG nova.network.neutron [req-14346480-bd3b-4936-9b7f-0ca7794573cd req-8e1dff88-a662-43a0-99b3-8916023ff657 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Updating instance_info_cache with network_info: [{"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.489 2 DEBUG oslo_concurrency.lockutils [req-14346480-bd3b-4936-9b7f-0ca7794573cd req-8e1dff88-a662-43a0-99b3-8916023ff657 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.526 2 INFO nova.virt.libvirt.driver [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Deleting instance files /var/lib/nova/instances/8de608c8-0b81-4798-adee-a9364d230016_del#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.527 2 INFO nova.virt.libvirt.driver [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Deletion of /var/lib/nova/instances/8de608c8-0b81-4798-adee-a9364d230016_del complete#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.580 2 INFO nova.compute.manager [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.581 2 DEBUG oslo.service.loopingcall [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.581 2 DEBUG nova.compute.manager [-] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.581 2 DEBUG nova.network.neutron [-] [instance: 8de608c8-0b81-4798-adee-a9364d230016] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.663 2 DEBUG nova.compute.manager [req-2602de83-f155-4c23-86a4-d72dc83feace req-2851f4d5-8589-4113-adde-88829e09ae10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Received event network-vif-unplugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.664 2 DEBUG oslo_concurrency.lockutils [req-2602de83-f155-4c23-86a4-d72dc83feace req-2851f4d5-8589-4113-adde-88829e09ae10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8de608c8-0b81-4798-adee-a9364d230016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.664 2 DEBUG oslo_concurrency.lockutils [req-2602de83-f155-4c23-86a4-d72dc83feace req-2851f4d5-8589-4113-adde-88829e09ae10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.664 2 DEBUG oslo_concurrency.lockutils [req-2602de83-f155-4c23-86a4-d72dc83feace req-2851f4d5-8589-4113-adde-88829e09ae10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.664 2 DEBUG nova.compute.manager [req-2602de83-f155-4c23-86a4-d72dc83feace req-2851f4d5-8589-4113-adde-88829e09ae10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] No waiting events found dispatching network-vif-unplugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:17 np0005465604 nova_compute[260603]: 2025-10-02 08:22:17.664 2 DEBUG nova.compute.manager [req-2602de83-f155-4c23-86a4-d72dc83feace req-2851f4d5-8589-4113-adde-88829e09ae10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Received event network-vif-unplugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.525 2 INFO nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Creating config drive at /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.535 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc67zq98k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.663 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc67zq98k" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.704 2 DEBUG nova.storage.rbd_utils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] rbd image 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.708 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.773 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393338.7723305, 4f7a36cf-fd1b-42f2-94be-5e9483b5f941 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.774 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] VM Started (Lifecycle Event)#033[00m
Oct  2 04:22:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 478 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.6 MiB/s wr, 322 op/s
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.806 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.810 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393338.7725475, 4f7a36cf-fd1b-42f2-94be-5e9483b5f941 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.810 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.830 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.834 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.853 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.971 2 DEBUG oslo_concurrency.processutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config 09c17c12-7dac-4fc8-917e-cb2efa1d4607_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:18 np0005465604 nova_compute[260603]: 2025-10-02 08:22:18.972 2 INFO nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Deleting local config drive /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607/disk.config because it was imported into RBD.#033[00m
Oct  2 04:22:19 np0005465604 kernel: tap205e575c-4a: entered promiscuous mode
Oct  2 04:22:19 np0005465604 NetworkManager[45129]: <info>  [1759393339.0439] manager: (tap205e575c-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:19Z|00096|binding|INFO|Claiming lport 205e575c-4af3-4a6a-af77-fd96af608b0a for this chassis.
Oct  2 04:22:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:19Z|00097|binding|INFO|205e575c-4af3-4a6a-af77-fd96af608b0a: Claiming fa:16:3e:24:2c:3a 10.100.0.12
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.058 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:2c:3a 10.100.0.12'], port_security=['fa:16:3e:24:2c:3a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '09c17c12-7dac-4fc8-917e-cb2efa1d4607', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=205e575c-4af3-4a6a-af77-fd96af608b0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.060 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 205e575c-4af3-4a6a-af77-fd96af608b0a in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e bound to our chassis#033[00m
Oct  2 04:22:19 np0005465604 NetworkManager[45129]: <info>  [1759393339.0653] device (tap205e575c-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.065 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:22:19 np0005465604 NetworkManager[45129]: <info>  [1759393339.0660] device (tap205e575c-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:22:19 np0005465604 systemd-machined[214636]: New machine qemu-26-instance-0000000c.
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.090 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42552d84-30cc-4135-944d-7b0410553294]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:19Z|00098|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a ovn-installed in OVS
Oct  2 04:22:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:19Z|00099|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a up in Southbound
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:19 np0005465604 systemd[1]: Started Virtual Machine qemu-26-instance-0000000c.
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.125 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[24f9bbfd-837b-4600-afd5-155c5bfa10ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.127 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[39e11072-5798-414e-a363-aa9e78b0fa33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.160 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0decf9-400a-4ea0-bb6b-da205bad6bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.190 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[19b54705-08c2-4218-af45-d003c9fe588c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 22, 'rx_bytes': 1168, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 22, 'rx_bytes': 1168, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288999, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.212 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aeaf6f34-fd69-4228-addf-69413e9493fb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289001, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289001, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.213 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.217 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.218 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.218 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:19.218 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.877 2 DEBUG nova.compute.manager [req-f0fa5d45-0a9b-472d-a8d3-3c6f3dee4900 req-accd6ca5-0328-4bbd-bca4-0de1a2d0a535 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Received event network-vif-plugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.878 2 DEBUG oslo_concurrency.lockutils [req-f0fa5d45-0a9b-472d-a8d3-3c6f3dee4900 req-accd6ca5-0328-4bbd-bca4-0de1a2d0a535 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8de608c8-0b81-4798-adee-a9364d230016-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.878 2 DEBUG oslo_concurrency.lockutils [req-f0fa5d45-0a9b-472d-a8d3-3c6f3dee4900 req-accd6ca5-0328-4bbd-bca4-0de1a2d0a535 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.878 2 DEBUG oslo_concurrency.lockutils [req-f0fa5d45-0a9b-472d-a8d3-3c6f3dee4900 req-accd6ca5-0328-4bbd-bca4-0de1a2d0a535 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.878 2 DEBUG nova.compute.manager [req-f0fa5d45-0a9b-472d-a8d3-3c6f3dee4900 req-accd6ca5-0328-4bbd-bca4-0de1a2d0a535 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] No waiting events found dispatching network-vif-plugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.879 2 WARNING nova.compute.manager [req-f0fa5d45-0a9b-472d-a8d3-3c6f3dee4900 req-accd6ca5-0328-4bbd-bca4-0de1a2d0a535 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Received unexpected event network-vif-plugged-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.879 2 DEBUG nova.compute.manager [req-f0fa5d45-0a9b-472d-a8d3-3c6f3dee4900 req-accd6ca5-0328-4bbd-bca4-0de1a2d0a535 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Received event network-vif-plugged-5385a348-2508-4677-aebb-d79c82f4fc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.879 2 DEBUG oslo_concurrency.lockutils [req-f0fa5d45-0a9b-472d-a8d3-3c6f3dee4900 req-accd6ca5-0328-4bbd-bca4-0de1a2d0a535 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.879 2 DEBUG oslo_concurrency.lockutils [req-f0fa5d45-0a9b-472d-a8d3-3c6f3dee4900 req-accd6ca5-0328-4bbd-bca4-0de1a2d0a535 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.880 2 DEBUG oslo_concurrency.lockutils [req-f0fa5d45-0a9b-472d-a8d3-3c6f3dee4900 req-accd6ca5-0328-4bbd-bca4-0de1a2d0a535 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.880 2 DEBUG nova.compute.manager [req-f0fa5d45-0a9b-472d-a8d3-3c6f3dee4900 req-accd6ca5-0328-4bbd-bca4-0de1a2d0a535 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Processing event network-vif-plugged-5385a348-2508-4677-aebb-d79c82f4fc36 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.881 2 DEBUG nova.compute.manager [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.885 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393339.8847427, 4f7a36cf-fd1b-42f2-94be-5e9483b5f941 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.885 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.892 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.895 2 INFO nova.virt.libvirt.driver [-] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Instance spawned successfully.#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.896 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.933 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.940 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.947 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.948 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.949 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.950 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.950 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.951 2 DEBUG nova.virt.libvirt.driver [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:19 np0005465604 nova_compute[260603]: 2025-10-02 08:22:19.979 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.017 2 DEBUG nova.network.neutron [-] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.029 2 INFO nova.compute.manager [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Took 12.81 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.030 2 DEBUG nova.compute.manager [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.039 2 INFO nova.compute.manager [-] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Took 2.46 seconds to deallocate network for instance.#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.105 2 DEBUG oslo_concurrency.lockutils [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.105 2 DEBUG oslo_concurrency.lockutils [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.120 2 INFO nova.compute.manager [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Took 13.83 seconds to build instance.#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.139 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 09c17c12-7dac-4fc8-917e-cb2efa1d4607 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.140 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393340.138287, 09c17c12-7dac-4fc8-917e-cb2efa1d4607 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.140 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] VM Started (Lifecycle Event)#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.143 2 DEBUG oslo_concurrency.lockutils [None req-e3d31c07-3d4e-4df7-bd7e-618e2bee5dfb be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.157 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.163 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393340.1383736, 09c17c12-7dac-4fc8-917e-cb2efa1d4607 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.164 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.182 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.187 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.206 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.310 2 DEBUG oslo_concurrency.processutils [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 478 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.7 MiB/s wr, 291 op/s
Oct  2 04:22:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/488866196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.825 2 DEBUG oslo_concurrency.processutils [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.829 2 DEBUG nova.compute.provider_tree [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.853 2 DEBUG nova.scheduler.client.report [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.872 2 DEBUG oslo_concurrency.lockutils [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.906 2 INFO nova.scheduler.client.report [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Deleted allocations for instance 8de608c8-0b81-4798-adee-a9364d230016#033[00m
Oct  2 04:22:20 np0005465604 nova_compute[260603]: 2025-10-02 08:22:20.982 2 DEBUG oslo_concurrency.lockutils [None req-b7c03032-1c5c-49ed-b89c-92c491484415 1752f22132cf468e92bd8857bcc0ed93 f6a5588e45d1439d8a754fdd24946192 - - default default] Lock "8de608c8-0b81-4798-adee-a9364d230016" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:21 np0005465604 nova_compute[260603]: 2025-10-02 08:22:21.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:22:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/306972384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:22:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:22:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/306972384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:22:22 np0005465604 nova_compute[260603]: 2025-10-02 08:22:22.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:22 np0005465604 nova_compute[260603]: 2025-10-02 08:22:22.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:22 np0005465604 NetworkManager[45129]: <info>  [1759393342.2386] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct  2 04:22:22 np0005465604 NetworkManager[45129]: <info>  [1759393342.2403] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct  2 04:22:22 np0005465604 nova_compute[260603]: 2025-10-02 08:22:22.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:22Z|00100|binding|INFO|Releasing lport 029a4135-d57a-4306-8fd9-eb98a54d6046 from this chassis (sb_readonly=0)
Oct  2 04:22:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:22Z|00101|binding|INFO|Releasing lport 9a1d90c9-45f7-468d-bd6f-f1cc59f0309a from this chassis (sb_readonly=0)
Oct  2 04:22:22 np0005465604 nova_compute[260603]: 2025-10-02 08:22:22.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 451 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.7 MiB/s wr, 348 op/s
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.536 2 DEBUG nova.compute.manager [req-fe035f9a-b440-4b91-810d-163ae9df2def req-6709c650-7565-44b3-aac5-cc832cc6237e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.537 2 DEBUG oslo_concurrency.lockutils [req-fe035f9a-b440-4b91-810d-163ae9df2def req-6709c650-7565-44b3-aac5-cc832cc6237e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.538 2 DEBUG oslo_concurrency.lockutils [req-fe035f9a-b440-4b91-810d-163ae9df2def req-6709c650-7565-44b3-aac5-cc832cc6237e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.538 2 DEBUG oslo_concurrency.lockutils [req-fe035f9a-b440-4b91-810d-163ae9df2def req-6709c650-7565-44b3-aac5-cc832cc6237e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.539 2 DEBUG nova.compute.manager [req-fe035f9a-b440-4b91-810d-163ae9df2def req-6709c650-7565-44b3-aac5-cc832cc6237e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Processing event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.540 2 DEBUG nova.compute.manager [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.545 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393344.5454104, 09c17c12-7dac-4fc8-917e-cb2efa1d4607 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.546 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] VM Resumed (Lifecycle Event)
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.550 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.554 2 INFO nova.virt.libvirt.driver [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance spawned successfully.
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.555 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.599 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.607 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.608 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.609 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.610 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.612 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.613 2 DEBUG nova.virt.libvirt.driver [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.620 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.671 2 DEBUG nova.compute.manager [req-e314435d-3d7c-4bdf-a3b6-d6abde0d156e req-0aef2176-d647-4913-93c2-8fbb47208421 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Received event network-vif-plugged-5385a348-2508-4677-aebb-d79c82f4fc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.672 2 DEBUG oslo_concurrency.lockutils [req-e314435d-3d7c-4bdf-a3b6-d6abde0d156e req-0aef2176-d647-4913-93c2-8fbb47208421 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.673 2 DEBUG oslo_concurrency.lockutils [req-e314435d-3d7c-4bdf-a3b6-d6abde0d156e req-0aef2176-d647-4913-93c2-8fbb47208421 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.673 2 DEBUG oslo_concurrency.lockutils [req-e314435d-3d7c-4bdf-a3b6-d6abde0d156e req-0aef2176-d647-4913-93c2-8fbb47208421 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.674 2 DEBUG nova.compute.manager [req-e314435d-3d7c-4bdf-a3b6-d6abde0d156e req-0aef2176-d647-4913-93c2-8fbb47208421 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] No waiting events found dispatching network-vif-plugged-5385a348-2508-4677-aebb-d79c82f4fc36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.674 2 WARNING nova.compute.manager [req-e314435d-3d7c-4bdf-a3b6-d6abde0d156e req-0aef2176-d647-4913-93c2-8fbb47208421 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Received unexpected event network-vif-plugged-5385a348-2508-4677-aebb-d79c82f4fc36 for instance with vm_state active and task_state None.
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.675 2 DEBUG nova.compute.manager [req-e314435d-3d7c-4bdf-a3b6-d6abde0d156e req-0aef2176-d647-4913-93c2-8fbb47208421 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Received event network-vif-deleted-3b55cad4-91dd-4e72-a02e-ddfa3a801fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.728 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.763 2 DEBUG nova.compute.manager [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:22:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 451 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 288 op/s
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.821 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.822 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.822 2 DEBUG nova.objects.instance [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  2 04:22:24 np0005465604 nova_compute[260603]: 2025-10-02 08:22:24.884 2 DEBUG oslo_concurrency.lockutils [None req-0145d0e1-0b72-4c92-8312-b50aa230e12b 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:22:25 np0005465604 nova_compute[260603]: 2025-10-02 08:22:25.494 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Acquiring lock "477099c6-d178-40d2-9d1e-a28466596e94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:25 np0005465604 nova_compute[260603]: 2025-10-02 08:22:25.496 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:25 np0005465604 nova_compute[260603]: 2025-10-02 08:22:25.545 2 DEBUG nova.compute.manager [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:22:25 np0005465604 nova_compute[260603]: 2025-10-02 08:22:25.748 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:25 np0005465604 nova_compute[260603]: 2025-10-02 08:22:25.749 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:25 np0005465604 nova_compute[260603]: 2025-10-02 08:22:25.759 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:22:25 np0005465604 nova_compute[260603]: 2025-10-02 08:22:25.760 2 INFO nova.compute.claims [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:22:25 np0005465604 nova_compute[260603]: 2025-10-02 08:22:25.967 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:22:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3339619223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.371 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.380 2 DEBUG nova.compute.provider_tree [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.398 2 DEBUG nova.scheduler.client.report [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.438 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.439 2 DEBUG nova.compute.manager [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.500 2 DEBUG nova.compute.manager [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.501 2 DEBUG nova.network.neutron [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.523 2 INFO nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.556 2 DEBUG nova.compute.manager [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.701 2 DEBUG nova.compute.manager [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.702 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.702 2 INFO nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Creating image(s)
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.725 2 DEBUG nova.storage.rbd_utils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] rbd image 477099c6-d178-40d2-9d1e-a28466596e94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.765 2 DEBUG nova.storage.rbd_utils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] rbd image 477099c6-d178-40d2-9d1e-a28466596e94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:22:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 451 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 237 op/s
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.794 2 DEBUG nova.storage.rbd_utils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] rbd image 477099c6-d178-40d2-9d1e-a28466596e94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.799 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.820 2 DEBUG nova.policy [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ddec4f2f47d416bb59b0a823af0027e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '578d4e0b1a1a46e893d547de9bd7f6e1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.854 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.855 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.857 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.857 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.888 2 DEBUG nova.storage.rbd_utils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] rbd image 477099c6-d178-40d2-9d1e-a28466596e94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.894 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 477099c6-d178-40d2-9d1e-a28466596e94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.922 2 DEBUG nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.923 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.924 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.924 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.924 2 DEBUG nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] No waiting events found dispatching network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.925 2 WARNING nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received unexpected event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a for instance with vm_state active and task_state None.#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.925 2 DEBUG nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-unplugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.925 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.925 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.926 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.926 2 DEBUG nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] No waiting events found dispatching network-vif-unplugged-205e575c-4af3-4a6a-af77-fd96af608b0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.926 2 WARNING nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received unexpected event network-vif-unplugged-205e575c-4af3-4a6a-af77-fd96af608b0a for instance with vm_state active and task_state None.#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.927 2 DEBUG nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.927 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.927 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.927 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.928 2 DEBUG nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] No waiting events found dispatching network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.928 2 WARNING nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received unexpected event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a for instance with vm_state active and task_state None.#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.928 2 DEBUG nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.929 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.929 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.929 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.930 2 DEBUG nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] No waiting events found dispatching network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.930 2 WARNING nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received unexpected event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a for instance with vm_state active and task_state None.#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.930 2 DEBUG nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.930 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.931 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.931 2 DEBUG oslo_concurrency.lockutils [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.931 2 DEBUG nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] No waiting events found dispatching network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:26 np0005465604 nova_compute[260603]: 2025-10-02 08:22:26.931 2 WARNING nova.compute.manager [req-a4fd3a9f-160b-42a1-ae7f-3d4ef3126a85 req-8aa20d0a-e817-46b8-8aa6-96fdd5eca3bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received unexpected event network-vif-plugged-205e575c-4af3-4a6a-af77-fd96af608b0a for instance with vm_state active and task_state None.#033[00m
Oct  2 04:22:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.032 2 DEBUG nova.compute.manager [req-34cd537f-e00b-4498-a425-0e07af6aaac7 req-7d2bd592-74fe-410d-96fb-b93266e20ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Received event network-changed-3aa7d14a-ae6f-4363-b28d-0dadba762d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.033 2 DEBUG nova.compute.manager [req-34cd537f-e00b-4498-a425-0e07af6aaac7 req-7d2bd592-74fe-410d-96fb-b93266e20ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Refreshing instance network info cache due to event network-changed-3aa7d14a-ae6f-4363-b28d-0dadba762d1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.033 2 DEBUG oslo_concurrency.lockutils [req-34cd537f-e00b-4498-a425-0e07af6aaac7 req-7d2bd592-74fe-410d-96fb-b93266e20ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.034 2 DEBUG oslo_concurrency.lockutils [req-34cd537f-e00b-4498-a425-0e07af6aaac7 req-7d2bd592-74fe-410d-96fb-b93266e20ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.034 2 DEBUG nova.network.neutron [req-34cd537f-e00b-4498-a425-0e07af6aaac7 req-7d2bd592-74fe-410d-96fb-b93266e20ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Refreshing network info cache for port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.140 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 477099c6-d178-40d2-9d1e-a28466596e94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.195 2 DEBUG nova.storage.rbd_utils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] resizing rbd image 477099c6-d178-40d2-9d1e-a28466596e94_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.277 2 DEBUG nova.objects.instance [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lazy-loading 'migration_context' on Instance uuid 477099c6-d178-40d2-9d1e-a28466596e94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.297 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.298 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Ensure instance console log exists: /var/lib/nova/instances/477099c6-d178-40d2-9d1e-a28466596e94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.298 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.299 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:27 np0005465604 nova_compute[260603]: 2025-10-02 08:22:27.299 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:22:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:22:27
Oct  2 04:22:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:22:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:22:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'vms', 'volumes', '.rgw.root']
Oct  2 04:22:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:22:28 np0005465604 nova_compute[260603]: 2025-10-02 08:22:28.450 2 DEBUG nova.network.neutron [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Successfully created port: 2135b96a-be2a-4f01-bcde-d21548009ea5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:22:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 498 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 326 op/s
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.288 2 DEBUG nova.network.neutron [req-34cd537f-e00b-4498-a425-0e07af6aaac7 req-7d2bd592-74fe-410d-96fb-b93266e20ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updated VIF entry in instance network info cache for port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.288 2 DEBUG nova.network.neutron [req-34cd537f-e00b-4498-a425-0e07af6aaac7 req-7d2bd592-74fe-410d-96fb-b93266e20ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updating instance_info_cache with network_info: [{"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.309 2 DEBUG oslo_concurrency.lockutils [req-34cd537f-e00b-4498-a425-0e07af6aaac7 req-7d2bd592-74fe-410d-96fb-b93266e20ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.870 2 DEBUG nova.compute.manager [req-9c468310-89a4-4ff9-adb6-b3a0184f269e req-6f683c23-71e8-48e5-9e4d-cc8027ab488e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Received event network-changed-3aa7d14a-ae6f-4363-b28d-0dadba762d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.871 2 DEBUG nova.compute.manager [req-9c468310-89a4-4ff9-adb6-b3a0184f269e req-6f683c23-71e8-48e5-9e4d-cc8027ab488e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Refreshing instance network info cache due to event network-changed-3aa7d14a-ae6f-4363-b28d-0dadba762d1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.871 2 DEBUG oslo_concurrency.lockutils [req-9c468310-89a4-4ff9-adb6-b3a0184f269e req-6f683c23-71e8-48e5-9e4d-cc8027ab488e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.872 2 DEBUG oslo_concurrency.lockutils [req-9c468310-89a4-4ff9-adb6-b3a0184f269e req-6f683c23-71e8-48e5-9e4d-cc8027ab488e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.872 2 DEBUG nova.network.neutron [req-9c468310-89a4-4ff9-adb6-b3a0184f269e req-6f683c23-71e8-48e5-9e4d-cc8027ab488e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Refreshing network info cache for port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.946 2 DEBUG nova.network.neutron [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Successfully updated port: 2135b96a-be2a-4f01-bcde-d21548009ea5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.974 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Acquiring lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.975 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Acquired lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:22:29 np0005465604 nova_compute[260603]: 2025-10-02 08:22:29.975 2 DEBUG nova.network.neutron [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:22:30 np0005465604 nova_compute[260603]: 2025-10-02 08:22:30.499 2 DEBUG nova.network.neutron [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:22:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 498 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Oct  2 04:22:31 np0005465604 nova_compute[260603]: 2025-10-02 08:22:31.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:31 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:31Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:0a:92 10.100.0.12
Oct  2 04:22:31 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:31Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:0a:92 10.100.0.12
Oct  2 04:22:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.003 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393337.0030007, 8de608c8-0b81-4798-adee-a9364d230016 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.004 2 INFO nova.compute.manager [-] [instance: 8de608c8-0b81-4798-adee-a9364d230016] VM Stopped (Lifecycle Event)
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.009 2 DEBUG nova.network.neutron [req-9c468310-89a4-4ff9-adb6-b3a0184f269e req-6f683c23-71e8-48e5-9e4d-cc8027ab488e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updated VIF entry in instance network info cache for port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.009 2 DEBUG nova.network.neutron [req-9c468310-89a4-4ff9-adb6-b3a0184f269e req-6f683c23-71e8-48e5-9e4d-cc8027ab488e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updating instance_info_cache with network_info: [{"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.030 2 DEBUG nova.compute.manager [None req-c53fd676-10a4-4054-835b-15b7575fd188 - - - - - -] [instance: 8de608c8-0b81-4798-adee-a9364d230016] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.032 2 DEBUG oslo_concurrency.lockutils [req-9c468310-89a4-4ff9-adb6-b3a0184f269e req-6f683c23-71e8-48e5-9e4d-cc8027ab488e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.070 2 DEBUG nova.network.neutron [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Updating instance_info_cache with network_info: [{"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.086 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Releasing lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.086 2 DEBUG nova.compute.manager [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Instance network_info: |[{"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.089 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Start _get_guest_xml network_info=[{"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.094 2 WARNING nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.099 2 DEBUG nova.virt.libvirt.host [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.100 2 DEBUG nova.virt.libvirt.host [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.102 2 DEBUG nova.virt.libvirt.host [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.103 2 DEBUG nova.virt.libvirt.host [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.104 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.104 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.105 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.105 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.106 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.106 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.106 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.107 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.107 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.108 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.108 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.109 2 DEBUG nova.virt.hardware [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.112 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.482 2 DEBUG oslo_concurrency.lockutils [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.483 2 DEBUG oslo_concurrency.lockutils [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.483 2 DEBUG oslo_concurrency.lockutils [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.484 2 DEBUG oslo_concurrency.lockutils [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.484 2 DEBUG oslo_concurrency.lockutils [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.486 2 INFO nova.compute.manager [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Terminating instance#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.487 2 DEBUG nova.compute.manager [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:22:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:22:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940178606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:22:32 np0005465604 kernel: tap3979d203-ca (unregistering): left promiscuous mode
Oct  2 04:22:32 np0005465604 NetworkManager[45129]: <info>  [1759393352.5575] device (tap3979d203-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.558 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.588 2 DEBUG nova.storage.rbd_utils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] rbd image 477099c6-d178-40d2-9d1e-a28466596e94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:32 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:32Z|00102|binding|INFO|Releasing lport 3979d203-cacc-40b7-9662-efc009354ac2 from this chassis (sb_readonly=0)
Oct  2 04:22:32 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:32Z|00103|binding|INFO|Setting lport 3979d203-cacc-40b7-9662-efc009354ac2 down in Southbound
Oct  2 04:22:32 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:32Z|00104|binding|INFO|Removing iface tap3979d203-ca ovn-installed in OVS
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.601 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e1:60 10.100.0.6'], port_security=['fa:16:3e:9d:e1:60 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '01296dff-20d5-49d6-b582-f9ec1d6b0af8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3979d203-cacc-40b7-9662-efc009354ac2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.599 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.604 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3979d203-cacc-40b7-9662-efc009354ac2 in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e unbound from our chassis#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.607 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.628 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a83db3dc-6be9-416b-8e4a-8214a54e738c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.634 2 DEBUG nova.compute.manager [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Received event network-changed-5385a348-2508-4677-aebb-d79c82f4fc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.635 2 DEBUG nova.compute.manager [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Refreshing instance network info cache due to event network-changed-5385a348-2508-4677-aebb-d79c82f4fc36. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.635 2 DEBUG oslo_concurrency.lockutils [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.636 2 DEBUG oslo_concurrency.lockutils [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.636 2 DEBUG nova.network.neutron [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Refreshing network info cache for port 5385a348-2508-4677-aebb-d79c82f4fc36 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:22:32 np0005465604 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct  2 04:22:32 np0005465604 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Consumed 13.884s CPU time.
Oct  2 04:22:32 np0005465604 systemd-machined[214636]: Machine qemu-21-instance-00000013 terminated.
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.661 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[80beb2c6-9939-4ea0-90ad-30ddb63e838a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.664 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1448b4-0291-4a6a-bc6c-c476098aa5a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.689 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[008b31f3-ca35-4855-bcd4-d8f913625158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.706 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[95a7e331-798d-4117-8048-10e2eca8fed4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 24, 'rx_bytes': 1168, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 24, 'rx_bytes': 1168, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 43029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289307, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.721 2 INFO nova.virt.libvirt.driver [-] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Instance destroyed successfully.#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.721 2 DEBUG nova.objects.instance [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'resources' on Instance uuid 01296dff-20d5-49d6-b582-f9ec1d6b0af8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.723 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e94f62-badc-4a23-9bef-75cfaa572366]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289315, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289315, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.724 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.731 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.731 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.732 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:32.732 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.755 2 DEBUG nova.virt.libvirt.vif [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:21:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1319481695',display_name='tempest-ServersAdminTestJSON-server-1319481695',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1319481695',id=19,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:21:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-hf9n33mn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:21:35Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=01296dff-20d5-49d6-b582-f9ec1d6b0af8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.756 2 DEBUG nova.network.os_vif_util [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "3979d203-cacc-40b7-9662-efc009354ac2", "address": "fa:16:3e:9d:e1:60", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3979d203-ca", "ovs_interfaceid": "3979d203-cacc-40b7-9662-efc009354ac2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.758 2 DEBUG nova.network.os_vif_util [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:e1:60,bridge_name='br-int',has_traffic_filtering=True,id=3979d203-cacc-40b7-9662-efc009354ac2,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3979d203-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.758 2 DEBUG os_vif [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:e1:60,bridge_name='br-int',has_traffic_filtering=True,id=3979d203-cacc-40b7-9662-efc009354ac2,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3979d203-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.761 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3979d203-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:32 np0005465604 nova_compute[260603]: 2025-10-02 08:22:32.766 2 INFO os_vif [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:e1:60,bridge_name='br-int',has_traffic_filtering=True,id=3979d203-cacc-40b7-9662-efc009354ac2,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3979d203-ca')#033[00m
Oct  2 04:22:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 516 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.9 MiB/s wr, 240 op/s
Oct  2 04:22:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:22:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3921415186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.099 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.101 2 DEBUG nova.virt.libvirt.vif [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:22:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1367720284',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1367720284',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-136772028',id=23,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='578d4e0b1a1a46e893d547de9bd7f6e1',ramdisk_id='',reservation_id='r-5rmvyxw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTes
tJSON-1136012513',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1136012513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:22:26Z,user_data=None,user_id='7ddec4f2f47d416bb59b0a823af0027e',uuid=477099c6-d178-40d2-9d1e-a28466596e94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.102 2 DEBUG nova.network.os_vif_util [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Converting VIF {"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.103 2 DEBUG nova.network.os_vif_util [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:78:ad,bridge_name='br-int',has_traffic_filtering=True,id=2135b96a-be2a-4f01-bcde-d21548009ea5,network=Network(e81bbc6c-74d9-4326-b101-3222258ed127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2135b96a-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.106 2 DEBUG nova.objects.instance [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 477099c6-d178-40d2-9d1e-a28466596e94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.112 2 INFO nova.virt.libvirt.driver [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Deleting instance files /var/lib/nova/instances/01296dff-20d5-49d6-b582-f9ec1d6b0af8_del#033[00m
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.113 2 INFO nova.virt.libvirt.driver [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Deletion of /var/lib/nova/instances/01296dff-20d5-49d6-b582-f9ec1d6b0af8_del complete#033[00m
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.144 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  <uuid>477099c6-d178-40d2-9d1e-a28466596e94</uuid>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  <name>instance-00000017</name>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1367720284</nova:name>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:22:32</nova:creationTime>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <nova:user uuid="7ddec4f2f47d416bb59b0a823af0027e">tempest-FloatingIPsAssociationNegativeTestJSON-1136012513-project-member</nova:user>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <nova:project uuid="578d4e0b1a1a46e893d547de9bd7f6e1">tempest-FloatingIPsAssociationNegativeTestJSON-1136012513</nova:project>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <nova:port uuid="2135b96a-be2a-4f01-bcde-d21548009ea5">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <entry name="serial">477099c6-d178-40d2-9d1e-a28466596e94</entry>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <entry name="uuid">477099c6-d178-40d2-9d1e-a28466596e94</entry>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/477099c6-d178-40d2-9d1e-a28466596e94_disk">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/477099c6-d178-40d2-9d1e-a28466596e94_disk.config">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:92:78:ad"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <target dev="tap2135b96a-be"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/477099c6-d178-40d2-9d1e-a28466596e94/console.log" append="off"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:22:33 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:22:33 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:22:33 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:22:33 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.157 2 DEBUG nova.compute.manager [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Preparing to wait for external event network-vif-plugged-2135b96a-be2a-4f01-bcde-d21548009ea5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.157 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Acquiring lock "477099c6-d178-40d2-9d1e-a28466596e94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.158 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.158 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.159 2 DEBUG nova.virt.libvirt.vif [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:22:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1367720284',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1367720284',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-136772028',id=23,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='578d4e0b1a1a46e893d547de9bd7f6e1',ramdisk_id='',reservation_id='r-5rmvyxw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationN
egativeTestJSON-1136012513',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1136012513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:22:26Z,user_data=None,user_id='7ddec4f2f47d416bb59b0a823af0027e',uuid=477099c6-d178-40d2-9d1e-a28466596e94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.160 2 DEBUG nova.network.os_vif_util [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Converting VIF {"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.161 2 DEBUG nova.network.os_vif_util [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:78:ad,bridge_name='br-int',has_traffic_filtering=True,id=2135b96a-be2a-4f01-bcde-d21548009ea5,network=Network(e81bbc6c-74d9-4326-b101-3222258ed127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2135b96a-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.162 2 DEBUG os_vif [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:78:ad,bridge_name='br-int',has_traffic_filtering=True,id=2135b96a-be2a-4f01-bcde-d21548009ea5,network=Network(e81bbc6c-74d9-4326-b101-3222258ed127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2135b96a-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.163 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.168 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2135b96a-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.169 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2135b96a-be, col_values=(('external_ids', {'iface-id': '2135b96a-be2a-4f01-bcde-d21548009ea5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:92:78:ad', 'vm-uuid': '477099c6-d178-40d2-9d1e-a28466596e94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:33 np0005465604 NetworkManager[45129]: <info>  [1759393353.1728] manager: (tap2135b96a-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.179 2 INFO os_vif [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:78:ad,bridge_name='br-int',has_traffic_filtering=True,id=2135b96a-be2a-4f01-bcde-d21548009ea5,network=Network(e81bbc6c-74d9-4326-b101-3222258ed127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2135b96a-be')
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.194 2 INFO nova.compute.manager [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.195 2 DEBUG oslo.service.loopingcall [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.195 2 DEBUG nova.compute.manager [-] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.195 2 DEBUG nova.network.neutron [-] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.267 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.268 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.268 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] No VIF found with MAC fa:16:3e:92:78:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.268 2 INFO nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Using config drive
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.300 2 DEBUG nova.storage.rbd_utils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] rbd image 477099c6-d178-40d2-9d1e-a28466596e94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:22:33 np0005465604 podman[289360]: 2025-10-02 08:22:33.336890601 +0000 UTC m=+0.111802986 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:22:33 np0005465604 podman[289361]: 2025-10-02 08:22:33.341079886 +0000 UTC m=+0.109381887 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.915 2 DEBUG nova.compute.manager [req-dbd5641c-2848-40ef-888e-9ac74c58f78e req-416341e6-25b3-4a75-9edf-ce5d1450875b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Received event network-vif-unplugged-3979d203-cacc-40b7-9662-efc009354ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.915 2 DEBUG oslo_concurrency.lockutils [req-dbd5641c-2848-40ef-888e-9ac74c58f78e req-416341e6-25b3-4a75-9edf-ce5d1450875b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.916 2 DEBUG oslo_concurrency.lockutils [req-dbd5641c-2848-40ef-888e-9ac74c58f78e req-416341e6-25b3-4a75-9edf-ce5d1450875b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.916 2 DEBUG oslo_concurrency.lockutils [req-dbd5641c-2848-40ef-888e-9ac74c58f78e req-416341e6-25b3-4a75-9edf-ce5d1450875b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.916 2 DEBUG nova.compute.manager [req-dbd5641c-2848-40ef-888e-9ac74c58f78e req-416341e6-25b3-4a75-9edf-ce5d1450875b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] No waiting events found dispatching network-vif-unplugged-3979d203-cacc-40b7-9662-efc009354ac2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:22:33 np0005465604 nova_compute[260603]: 2025-10-02 08:22:33.916 2 DEBUG nova.compute.manager [req-dbd5641c-2848-40ef-888e-9ac74c58f78e req-416341e6-25b3-4a75-9edf-ce5d1450875b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Received event network-vif-unplugged-3979d203-cacc-40b7-9662-efc009354ac2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 04:22:34 np0005465604 nova_compute[260603]: 2025-10-02 08:22:34.514 2 INFO nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Creating config drive at /var/lib/nova/instances/477099c6-d178-40d2-9d1e-a28466596e94/disk.config
Oct  2 04:22:34 np0005465604 nova_compute[260603]: 2025-10-02 08:22:34.523 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/477099c6-d178-40d2-9d1e-a28466596e94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy4g_mfxi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:22:34 np0005465604 nova_compute[260603]: 2025-10-02 08:22:34.657 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/477099c6-d178-40d2-9d1e-a28466596e94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy4g_mfxi" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:22:34 np0005465604 nova_compute[260603]: 2025-10-02 08:22:34.694 2 DEBUG nova.storage.rbd_utils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] rbd image 477099c6-d178-40d2-9d1e-a28466596e94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:22:34 np0005465604 nova_compute[260603]: 2025-10-02 08:22:34.699 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/477099c6-d178-40d2-9d1e-a28466596e94/disk.config 477099c6-d178-40d2-9d1e-a28466596e94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:22:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 499 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 213 op/s
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.809 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.810 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.811 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:22:34 np0005465604 nova_compute[260603]: 2025-10-02 08:22:34.859 2 DEBUG oslo_concurrency.processutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/477099c6-d178-40d2-9d1e-a28466596e94/disk.config 477099c6-d178-40d2-9d1e-a28466596e94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:22:34 np0005465604 nova_compute[260603]: 2025-10-02 08:22:34.860 2 INFO nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Deleting local config drive /var/lib/nova/instances/477099c6-d178-40d2-9d1e-a28466596e94/disk.config because it was imported into RBD.
Oct  2 04:22:34 np0005465604 kernel: tap2135b96a-be: entered promiscuous mode
Oct  2 04:22:34 np0005465604 NetworkManager[45129]: <info>  [1759393354.9307] manager: (tap2135b96a-be): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Oct  2 04:22:34 np0005465604 systemd-udevd[289298]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:22:34 np0005465604 nova_compute[260603]: 2025-10-02 08:22:34.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:34Z|00105|binding|INFO|Claiming lport 2135b96a-be2a-4f01-bcde-d21548009ea5 for this chassis.
Oct  2 04:22:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:34Z|00106|binding|INFO|2135b96a-be2a-4f01-bcde-d21548009ea5: Claiming fa:16:3e:92:78:ad 10.100.0.13
Oct  2 04:22:34 np0005465604 NetworkManager[45129]: <info>  [1759393354.9465] device (tap2135b96a-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:22:34 np0005465604 NetworkManager[45129]: <info>  [1759393354.9477] device (tap2135b96a-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.948 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:78:ad 10.100.0.13'], port_security=['fa:16:3e:92:78:ad 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '477099c6-d178-40d2-9d1e-a28466596e94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e81bbc6c-74d9-4326-b101-3222258ed127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '578d4e0b1a1a46e893d547de9bd7f6e1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1188a9ba-9de5-4d24-b6be-c688853535eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974a924d-67c3-4b58-b2b3-f5d189f19e5b, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2135b96a-be2a-4f01-bcde-d21548009ea5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.949 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2135b96a-be2a-4f01-bcde-d21548009ea5 in datapath e81bbc6c-74d9-4326-b101-3222258ed127 bound to our chassis
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.951 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e81bbc6c-74d9-4326-b101-3222258ed127
Oct  2 04:22:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:34Z|00107|binding|INFO|Setting lport 2135b96a-be2a-4f01-bcde-d21548009ea5 ovn-installed in OVS
Oct  2 04:22:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:34Z|00108|binding|INFO|Setting lport 2135b96a-be2a-4f01-bcde-d21548009ea5 up in Southbound
Oct  2 04:22:34 np0005465604 nova_compute[260603]: 2025-10-02 08:22:34.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.963 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8fee8f0c-ca6a-4a39-a558-2bd95e38d990]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.965 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape81bbc6c-71 in ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 04:22:34 np0005465604 nova_compute[260603]: 2025-10-02 08:22:34.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.970 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape81bbc6c-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.971 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce0cbe4-9e53-4e00-8741-b9a44be71a3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.971 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f522755-bf19-47ae-bdd3-a1ca50108528]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:34.981 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[353696e4-5ff7-463b-adf6-8efe37caf4f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:22:34 np0005465604 systemd-machined[214636]: New machine qemu-27-instance-00000017.
Oct  2 04:22:34 np0005465604 systemd[1]: Started Virtual Machine qemu-27-instance-00000017.
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.013 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c859f1cd-c516-43e9-964f-3df7c82e4ac3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.040 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[18e7cf61-a1ee-4b15-a413-9475f320def0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 NetworkManager[45129]: <info>  [1759393355.0456] manager: (tape81bbc6c-70): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.045 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7f7e63-a4f3-44b8-b978-51fb77eeb5ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.079 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6e98f017-ca95-4ccc-9ad5-016924a85a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.082 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6719fc-23a7-400f-880a-0e95a1b70fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.095 2 DEBUG nova.network.neutron [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Updated VIF entry in instance network info cache for port 5385a348-2508-4677-aebb-d79c82f4fc36. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.096 2 DEBUG nova.network.neutron [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Updating instance_info_cache with network_info: [{"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:35 np0005465604 NetworkManager[45129]: <info>  [1759393355.1123] device (tape81bbc6c-70): carrier: link connected
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.113 2 DEBUG oslo_concurrency.lockutils [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.114 2 DEBUG nova.compute.manager [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Received event network-changed-2135b96a-be2a-4f01-bcde-d21548009ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.114 2 DEBUG nova.compute.manager [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Refreshing instance network info cache due to event network-changed-2135b96a-be2a-4f01-bcde-d21548009ea5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.115 2 DEBUG oslo_concurrency.lockutils [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.115 2 DEBUG oslo_concurrency.lockutils [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.115 2 DEBUG nova.network.neutron [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Refreshing network info cache for port 2135b96a-be2a-4f01-bcde-d21548009ea5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.122 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[10ab1240-4774-4508-9fc2-f6bc22da898c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.140 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd6c1a6-4b47-4686-949a-71d3b10c78ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape81bbc6c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:b4:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423371, 'reachable_time': 23079, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289506, 'error': None, 'target': 'ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.158 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a5266567-97ce-4bda-b173-7160c4ca687f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:b41f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423371, 'tstamp': 423371}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289507, 'error': None, 'target': 'ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.175 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[17cb97df-2b19-4dcd-bfcf-83038a3c40cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape81bbc6c-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:b4:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423371, 'reachable_time': 23079, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289508, 'error': None, 'target': 'ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.206 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[729a0a6d-655f-413a-8c60-37cd41dd77e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.269 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb09f9f-f980-4be5-a79a-2b0033a86425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.270 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape81bbc6c-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.270 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.270 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape81bbc6c-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:35 np0005465604 NetworkManager[45129]: <info>  [1759393355.2725] manager: (tape81bbc6c-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct  2 04:22:35 np0005465604 kernel: tape81bbc6c-70: entered promiscuous mode
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.278 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape81bbc6c-70, col_values=(('external_ids', {'iface-id': '1efe6d32-c189-400d-8557-a9473dddae81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:35Z|00109|binding|INFO|Releasing lport 1efe6d32-c189-400d-8557-a9473dddae81 from this chassis (sb_readonly=0)
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.283 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e81bbc6c-74d9-4326-b101-3222258ed127.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e81bbc6c-74d9-4326-b101-3222258ed127.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.283 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a6037ad7-3d0d-4cd6-a783-90d3bcc6eaf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.284 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-e81bbc6c-74d9-4326-b101-3222258ed127
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/e81bbc6c-74d9-4326-b101-3222258ed127.pid.haproxy
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID e81bbc6c-74d9-4326-b101-3222258ed127
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:22:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:35.284 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127', 'env', 'PROCESS_TAG=haproxy-e81bbc6c-74d9-4326-b101-3222258ed127', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e81bbc6c-74d9-4326-b101-3222258ed127.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.300 2 DEBUG nova.network.neutron [-] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.316 2 INFO nova.compute.manager [-] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Took 2.12 seconds to deallocate network for instance.#033[00m
Oct  2 04:22:35 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.367 2 DEBUG nova.compute.manager [req-d6ed46e2-d27f-448e-9dcb-937d6ff3dbcd req-ac2f5a8a-8ccb-4940-8b26-c1dee7ef18fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Received event network-vif-deleted-3979d203-cacc-40b7-9662-efc009354ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.372 2 DEBUG oslo_concurrency.lockutils [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.372 2 DEBUG oslo_concurrency.lockutils [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:35Z|00110|binding|INFO|Releasing lport 1efe6d32-c189-400d-8557-a9473dddae81 from this chassis (sb_readonly=0)
Oct  2 04:22:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:35Z|00111|binding|INFO|Releasing lport 029a4135-d57a-4306-8fd9-eb98a54d6046 from this chassis (sb_readonly=0)
Oct  2 04:22:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:35Z|00112|binding|INFO|Releasing lport 9a1d90c9-45f7-468d-bd6f-f1cc59f0309a from this chassis (sb_readonly=0)
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:35 np0005465604 nova_compute[260603]: 2025-10-02 08:22:35.600 2 DEBUG oslo_concurrency.processutils [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:35 np0005465604 podman[289582]: 2025-10-02 08:22:35.697149992 +0000 UTC m=+0.059664764 container create 73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:22:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:35Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:2c:3a 10.100.0.12
Oct  2 04:22:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:35Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:2c:3a 10.100.0.12
Oct  2 04:22:35 np0005465604 systemd[1]: Started libpod-conmon-73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff.scope.
Oct  2 04:22:35 np0005465604 podman[289582]: 2025-10-02 08:22:35.670758752 +0000 UTC m=+0.033273574 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:22:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:22:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93c19e3c2c2870214fd1b92795ec466e075a11f39c04d8cf768340ca05f92bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:35 np0005465604 podman[289582]: 2025-10-02 08:22:35.797199059 +0000 UTC m=+0.159713841 container init 73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:22:35 np0005465604 podman[289582]: 2025-10-02 08:22:35.802276661 +0000 UTC m=+0.164791443 container start 73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:22:35 np0005465604 neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127[289616]: [NOTICE]   (289620) : New worker (289622) forked
Oct  2 04:22:35 np0005465604 neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127[289616]: [NOTICE]   (289620) : Loading success.
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.045 2 DEBUG nova.compute.manager [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Received event network-vif-plugged-3979d203-cacc-40b7-9662-efc009354ac2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.047 2 DEBUG oslo_concurrency.lockutils [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.047 2 DEBUG oslo_concurrency.lockutils [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.048 2 DEBUG oslo_concurrency.lockutils [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.048 2 DEBUG nova.compute.manager [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] No waiting events found dispatching network-vif-plugged-3979d203-cacc-40b7-9662-efc009354ac2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.049 2 WARNING nova.compute.manager [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Received unexpected event network-vif-plugged-3979d203-cacc-40b7-9662-efc009354ac2 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.050 2 DEBUG nova.compute.manager [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Received event network-vif-plugged-2135b96a-be2a-4f01-bcde-d21548009ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.050 2 DEBUG oslo_concurrency.lockutils [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "477099c6-d178-40d2-9d1e-a28466596e94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.051 2 DEBUG oslo_concurrency.lockutils [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.051 2 DEBUG oslo_concurrency.lockutils [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.052 2 DEBUG nova.compute.manager [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Processing event network-vif-plugged-2135b96a-be2a-4f01-bcde-d21548009ea5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.052 2 DEBUG nova.compute.manager [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Received event network-vif-plugged-2135b96a-be2a-4f01-bcde-d21548009ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.053 2 DEBUG oslo_concurrency.lockutils [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "477099c6-d178-40d2-9d1e-a28466596e94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.053 2 DEBUG oslo_concurrency.lockutils [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.054 2 DEBUG oslo_concurrency.lockutils [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.055 2 DEBUG nova.compute.manager [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] No waiting events found dispatching network-vif-plugged-2135b96a-be2a-4f01-bcde-d21548009ea5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.055 2 WARNING nova.compute.manager [req-ac73c3ce-ff52-41a5-a98f-e5068ec0dafb req-a0664046-1f48-46fd-b6ac-c281844089c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Received unexpected event network-vif-plugged-2135b96a-be2a-4f01-bcde-d21548009ea5 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.056 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393356.0545065, 477099c6-d178-40d2-9d1e-a28466596e94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.057 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] VM Started (Lifecycle Event)#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.063 2 DEBUG nova.compute.manager [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.067 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.072 2 INFO nova.virt.libvirt.driver [-] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Instance spawned successfully.#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.072 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:22:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1527433875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.093 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.102 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.110 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.110 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.111 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.112 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.113 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.113 2 DEBUG nova.virt.libvirt.driver [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.120 2 DEBUG oslo_concurrency.processutils [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.129 2 DEBUG nova.compute.provider_tree [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.147 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.147 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393356.0612593, 477099c6-d178-40d2-9d1e-a28466596e94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.148 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.152 2 DEBUG nova.scheduler.client.report [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.183 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.186 2 DEBUG oslo_concurrency.lockutils [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.192 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393356.0658815, 477099c6-d178-40d2-9d1e-a28466596e94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.192 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.197 2 INFO nova.compute.manager [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Took 9.50 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.198 2 DEBUG nova.compute.manager [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.211 2 INFO nova.scheduler.client.report [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Deleted allocations for instance 01296dff-20d5-49d6-b582-f9ec1d6b0af8#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.230 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.243 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.294 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.317 2 INFO nova.compute.manager [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Took 10.59 seconds to build instance.#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.329 2 DEBUG oslo_concurrency.lockutils [None req-36f3b30c-0bce-41a0-8a6b-9e1d3c35429e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "01296dff-20d5-49d6-b582-f9ec1d6b0af8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.345 2 DEBUG oslo_concurrency.lockutils [None req-f0d62544-29b6-416c-849b-4fed57533347 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.651 2 DEBUG nova.network.neutron [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Updated VIF entry in instance network info cache for port 2135b96a-be2a-4f01-bcde-d21548009ea5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.652 2 DEBUG nova.network.neutron [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Updating instance_info_cache with network_info: [{"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:36 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.682 2 DEBUG oslo_concurrency.lockutils [req-358e5b5a-48d9-485e-b19b-dac8963af39f req-0dd2d485-d4c6-4ae0-84a3-6f252dc7ccb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 499 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct  2 04:22:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.983 2 DEBUG oslo_concurrency.lockutils [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.983 2 DEBUG oslo_concurrency.lockutils [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.983 2 DEBUG oslo_concurrency.lockutils [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.983 2 DEBUG oslo_concurrency.lockutils [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.984 2 DEBUG oslo_concurrency.lockutils [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.984 2 INFO nova.compute.manager [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Terminating instance#033[00m
Oct  2 04:22:36 np0005465604 nova_compute[260603]: 2025-10-02 08:22:36.985 2 DEBUG nova.compute.manager [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:22:37 np0005465604 kernel: tap580fd822-3f (unregistering): left promiscuous mode
Oct  2 04:22:37 np0005465604 NetworkManager[45129]: <info>  [1759393357.0378] device (tap580fd822-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00113|binding|INFO|Releasing lport 580fd822-3f5a-44c0-aff1-96cab7531d57 from this chassis (sb_readonly=0)
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00114|binding|INFO|Setting lport 580fd822-3f5a-44c0-aff1-96cab7531d57 down in Southbound
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00115|binding|INFO|Removing iface tap580fd822-3f ovn-installed in OVS
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.077 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:6e 10.100.0.11'], port_security=['fa:16:3e:4c:78:6e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0ec8879-e818-40a5-88ec-c6d89b8ddea4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=580fd822-3f5a-44c0-aff1-96cab7531d57) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.078 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 580fd822-3f5a-44c0-aff1-96cab7531d57 in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e unbound from our chassis#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.079 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.107 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f7d0aa-9eb8-4c43-87b8-7a0449f31c3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct  2 04:22:37 np0005465604 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000010.scope: Consumed 14.786s CPU time.
Oct  2 04:22:37 np0005465604 systemd-machined[214636]: Machine qemu-20-instance-00000010 terminated.
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.154 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6f872f38-cab9-4422-962f-67c0f821f4c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.159 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[155e9109-ba77-44bd-808e-418976fb073b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.198 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[335bbe4f-b79f-40b9-af49-00ca31e08199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 NetworkManager[45129]: <info>  [1759393357.2064] manager: (tap580fd822-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Oct  2 04:22:37 np0005465604 kernel: tap580fd822-3f: entered promiscuous mode
Oct  2 04:22:37 np0005465604 kernel: tap580fd822-3f (unregistering): left promiscuous mode
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00116|binding|INFO|Claiming lport 580fd822-3f5a-44c0-aff1-96cab7531d57 for this chassis.
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00117|binding|INFO|580fd822-3f5a-44c0-aff1-96cab7531d57: Claiming fa:16:3e:4c:78:6e 10.100.0.11
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.221 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:6e 10.100.0.11'], port_security=['fa:16:3e:4c:78:6e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0ec8879-e818-40a5-88ec-c6d89b8ddea4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=580fd822-3f5a-44c0-aff1-96cab7531d57) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.226 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ac3348-fa2d-4ae6-960d-d988c7589958]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 26, 'rx_bytes': 1168, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 26, 'rx_bytes': 1168, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 40389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289759, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.236 2 INFO nova.virt.libvirt.driver [-] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Instance destroyed successfully.#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.238 2 DEBUG nova.objects.instance [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'resources' on Instance uuid c0ec8879-e818-40a5-88ec-c6d89b8ddea4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00118|binding|INFO|Setting lport 580fd822-3f5a-44c0-aff1-96cab7531d57 ovn-installed in OVS
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00119|binding|INFO|Setting lport 580fd822-3f5a-44c0-aff1-96cab7531d57 up in Southbound
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00120|binding|INFO|Releasing lport 580fd822-3f5a-44c0-aff1-96cab7531d57 from this chassis (sb_readonly=1)
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00121|if_status|INFO|Dropped 2 log messages in last 22 seconds (most recently, 22 seconds ago) due to excessive rate
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00122|if_status|INFO|Not setting lport 580fd822-3f5a-44c0-aff1-96cab7531d57 down as sb is readonly
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00123|binding|INFO|Removing iface tap580fd822-3f ovn-installed in OVS
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00124|binding|INFO|Releasing lport 580fd822-3f5a-44c0-aff1-96cab7531d57 from this chassis (sb_readonly=0)
Oct  2 04:22:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:37Z|00125|binding|INFO|Setting lport 580fd822-3f5a-44c0-aff1-96cab7531d57 down in Southbound
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.252 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:6e 10.100.0.11'], port_security=['fa:16:3e:4c:78:6e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c0ec8879-e818-40a5-88ec-c6d89b8ddea4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=580fd822-3f5a-44c0-aff1-96cab7531d57) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.257 2 DEBUG nova.virt.libvirt.vif [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-525372049',display_name='tempest-ServersAdminTestJSON-server-525372049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-525372049',id=16,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-b1n5b2x0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:21:22Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=c0ec8879-e818-40a5-88ec-c6d89b8ddea4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "580fd822-3f5a-44c0-aff1-96cab7531d57", "address": "fa:16:3e:4c:78:6e", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580fd822-3f", "ovs_interfaceid": "580fd822-3f5a-44c0-aff1-96cab7531d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.258 2 DEBUG nova.network.os_vif_util [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "580fd822-3f5a-44c0-aff1-96cab7531d57", "address": "fa:16:3e:4c:78:6e", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap580fd822-3f", "ovs_interfaceid": "580fd822-3f5a-44c0-aff1-96cab7531d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.259 2 DEBUG nova.network.os_vif_util [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:6e,bridge_name='br-int',has_traffic_filtering=True,id=580fd822-3f5a-44c0-aff1-96cab7531d57,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580fd822-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.259 2 DEBUG os_vif [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:6e,bridge_name='br-int',has_traffic_filtering=True,id=580fd822-3f5a-44c0-aff1-96cab7531d57,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580fd822-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.256 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1503b0e0-2011-48a0-b329-fededff0ec51]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289762, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289762, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580fd822-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.262 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.265 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.265 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.266 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.266 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.267 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 580fd822-3f5a-44c0-aff1-96cab7531d57 in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e unbound from our chassis#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.267 2 INFO os_vif [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:6e,bridge_name='br-int',has_traffic_filtering=True,id=580fd822-3f5a-44c0-aff1-96cab7531d57,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap580fd822-3f')#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.269 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.289 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3bd441-79b0-43a1-9696-f0ff9ae827d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.328 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6fccf07c-d1c8-43be-b726-dec4cc4c4603]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.333 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed15124-7966-4264-aa2e-0b3f66a618c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.364 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[87b1d186-03a6-4676-aae0-1324325a54f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.396 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c070b89-1ef2-457a-bbcb-d7b664f3cb9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 28, 'rx_bytes': 1168, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 28, 'rx_bytes': 1168, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 40389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289789, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.419 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8914b39-4034-4195-ba31-b33a8867960a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289792, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289792, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.421 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.475 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.475 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.476 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.476 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.477 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 580fd822-3f5a-44c0-aff1-96cab7531d57 in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e unbound from our chassis#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.479 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.498 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[03553780-6834-4997-a322-1d790e59b9e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.560 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[234ea6bd-07a7-44df-ab97-c44a30d31a19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.564 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[31c5384d-1426-44d7-b62d-6dbaa5ae29b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.595 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ce80b6-f8a6-40d4-bd7e-b3f3df11c4d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[93a4085c-66fa-407a-8f46-659782af7d5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 30, 'rx_bytes': 1168, 'tx_bytes': 1448, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 30, 'rx_bytes': 1168, 'tx_bytes': 1448, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 40389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289811, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.633 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d5baf623-ddbd-43c8-b17f-fd4a5525fefa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289812, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289812, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.638 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:22:37 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9910fb6d-2e07-4af0-bde9-52f6e4b96311 does not exist
Oct  2 04:22:37 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev dba8dcf3-81f8-4c7f-8f73-8098a7516d00 does not exist
Oct  2 04:22:37 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 666f7302-85a0-4073-a71d-240f35b3c844 does not exist
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.652 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:22:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.655 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.656 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:37.656 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.716 2 INFO nova.virt.libvirt.driver [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Deleting instance files /var/lib/nova/instances/c0ec8879-e818-40a5-88ec-c6d89b8ddea4_del#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.717 2 INFO nova.virt.libvirt.driver [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Deletion of /var/lib/nova/instances/c0ec8879-e818-40a5-88ec-c6d89b8ddea4_del complete#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.800 2 INFO nova.compute.manager [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.801 2 DEBUG oslo.service.loopingcall [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.801 2 DEBUG nova.compute.manager [-] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:22:37 np0005465604 nova_compute[260603]: 2025-10-02 08:22:37.801 2 DEBUG nova.network.neutron [-] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:22:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:22:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:22:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:22:38 np0005465604 podman[289954]: 2025-10-02 08:22:38.280056952 +0000 UTC m=+0.050421297 container create 52865bba9e5d1df45c2db03676eb7bc7a47ccc3eacbb2fe93c11cccf78db4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 04:22:38 np0005465604 systemd[1]: Started libpod-conmon-52865bba9e5d1df45c2db03676eb7bc7a47ccc3eacbb2fe93c11cccf78db4b61.scope.
Oct  2 04:22:38 np0005465604 podman[289954]: 2025-10-02 08:22:38.254001652 +0000 UTC m=+0.024366047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:22:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:22:38 np0005465604 podman[289954]: 2025-10-02 08:22:38.383495927 +0000 UTC m=+0.153860292 container init 52865bba9e5d1df45c2db03676eb7bc7a47ccc3eacbb2fe93c11cccf78db4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_einstein, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 04:22:38 np0005465604 podman[289954]: 2025-10-02 08:22:38.391003739 +0000 UTC m=+0.161368084 container start 52865bba9e5d1df45c2db03676eb7bc7a47ccc3eacbb2fe93c11cccf78db4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_einstein, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:22:38 np0005465604 podman[289954]: 2025-10-02 08:22:38.396426584 +0000 UTC m=+0.166790949 container attach 52865bba9e5d1df45c2db03676eb7bc7a47ccc3eacbb2fe93c11cccf78db4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_einstein, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:22:38 np0005465604 sweet_einstein[289970]: 167 167
Oct  2 04:22:38 np0005465604 systemd[1]: libpod-52865bba9e5d1df45c2db03676eb7bc7a47ccc3eacbb2fe93c11cccf78db4b61.scope: Deactivated successfully.
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:22:38 np0005465604 podman[289954]: 2025-10-02 08:22:38.398730938 +0000 UTC m=+0.169095323 container died 52865bba9e5d1df45c2db03676eb7bc7a47ccc3eacbb2fe93c11cccf78db4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_einstein, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0041898009661717785 of space, bias 1.0, pg target 1.2569402898515336 quantized to 32 (current 32)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.1991676866616201 quantized to 32 (current 32)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct  2 04:22:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d53cc3c301e6e3481f5c1f383b4338bcfe5d8bc2a24c39e62e9a1dd5af8b5ab7-merged.mount: Deactivated successfully.
Oct  2 04:22:38 np0005465604 podman[289954]: 2025-10-02 08:22:38.449392072 +0000 UTC m=+0.219756417 container remove 52865bba9e5d1df45c2db03676eb7bc7a47ccc3eacbb2fe93c11cccf78db4b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_einstein, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  2 04:22:38 np0005465604 systemd[1]: libpod-conmon-52865bba9e5d1df45c2db03676eb7bc7a47ccc3eacbb2fe93c11cccf78db4b61.scope: Deactivated successfully.
Oct  2 04:22:38 np0005465604 podman[289996]: 2025-10-02 08:22:38.634009294 +0000 UTC m=+0.040621250 container create 895ef7dc06874d959f6737923328e8e66ddc9b685b28f4f58c60ba19bee74cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 04:22:38 np0005465604 systemd[1]: Started libpod-conmon-895ef7dc06874d959f6737923328e8e66ddc9b685b28f4f58c60ba19bee74cf8.scope.
Oct  2 04:22:38 np0005465604 podman[289996]: 2025-10-02 08:22:38.616696227 +0000 UTC m=+0.023308193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:22:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:22:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393b4aa9072c26e9f4be5e0dc508a4654303e93b8a7b9c005e020fe8bb6fac9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393b4aa9072c26e9f4be5e0dc508a4654303e93b8a7b9c005e020fe8bb6fac9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393b4aa9072c26e9f4be5e0dc508a4654303e93b8a7b9c005e020fe8bb6fac9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393b4aa9072c26e9f4be5e0dc508a4654303e93b8a7b9c005e020fe8bb6fac9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393b4aa9072c26e9f4be5e0dc508a4654303e93b8a7b9c005e020fe8bb6fac9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:38 np0005465604 podman[289996]: 2025-10-02 08:22:38.754927213 +0000 UTC m=+0.161539189 container init 895ef7dc06874d959f6737923328e8e66ddc9b685b28f4f58c60ba19bee74cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shannon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:22:38 np0005465604 podman[289996]: 2025-10-02 08:22:38.763142328 +0000 UTC m=+0.169754284 container start 895ef7dc06874d959f6737923328e8e66ddc9b685b28f4f58c60ba19bee74cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:22:38 np0005465604 podman[289996]: 2025-10-02 08:22:38.766436564 +0000 UTC m=+0.173048520 container attach 895ef7dc06874d959f6737923328e8e66ddc9b685b28f4f58c60ba19bee74cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:22:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 454 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.1 MiB/s wr, 302 op/s
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.149 2 DEBUG nova.compute.manager [req-eb8b00d8-0afc-433d-9817-dc2c660a2593 req-5e9b1152-7e30-4c75-9e9b-ec275faa543c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Received event network-changed-5385a348-2508-4677-aebb-d79c82f4fc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.150 2 DEBUG nova.compute.manager [req-eb8b00d8-0afc-433d-9817-dc2c660a2593 req-5e9b1152-7e30-4c75-9e9b-ec275faa543c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Refreshing instance network info cache due to event network-changed-5385a348-2508-4677-aebb-d79c82f4fc36. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.150 2 DEBUG oslo_concurrency.lockutils [req-eb8b00d8-0afc-433d-9817-dc2c660a2593 req-5e9b1152-7e30-4c75-9e9b-ec275faa543c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.153 2 DEBUG oslo_concurrency.lockutils [req-eb8b00d8-0afc-433d-9817-dc2c660a2593 req-5e9b1152-7e30-4c75-9e9b-ec275faa543c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.153 2 DEBUG nova.network.neutron [req-eb8b00d8-0afc-433d-9817-dc2c660a2593 req-5e9b1152-7e30-4c75-9e9b-ec275faa543c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Refreshing network info cache for port 5385a348-2508-4677-aebb-d79c82f4fc36 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.339 2 DEBUG nova.network.neutron [-] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.347 2 DEBUG nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received event network-vif-unplugged-580fd822-3f5a-44c0-aff1-96cab7531d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.347 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.347 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.348 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.348 2 DEBUG nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] No waiting events found dispatching network-vif-unplugged-580fd822-3f5a-44c0-aff1-96cab7531d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.349 2 DEBUG nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received event network-vif-unplugged-580fd822-3f5a-44c0-aff1-96cab7531d57 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.349 2 DEBUG nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.349 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.350 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.350 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.351 2 DEBUG nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] No waiting events found dispatching network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.351 2 WARNING nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received unexpected event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.351 2 DEBUG nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.352 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.352 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.352 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.353 2 DEBUG nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] No waiting events found dispatching network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.353 2 WARNING nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received unexpected event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.354 2 DEBUG nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.354 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.355 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.355 2 DEBUG oslo_concurrency.lockutils [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.355 2 DEBUG nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] No waiting events found dispatching network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.356 2 WARNING nova.compute.manager [req-b0869f15-5d34-449c-9088-f849f71aa503 req-f5851dfd-ba06-4549-8555-9c9984805dd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received unexpected event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.368 2 INFO nova.compute.manager [-] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Took 1.57 seconds to deallocate network for instance.#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.424 2 DEBUG oslo_concurrency.lockutils [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.424 2 DEBUG oslo_concurrency.lockutils [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:39 np0005465604 nova_compute[260603]: 2025-10-02 08:22:39.574 2 DEBUG oslo_concurrency.processutils [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:39 np0005465604 zen_shannon[290013]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:22:39 np0005465604 zen_shannon[290013]: --> relative data size: 1.0
Oct  2 04:22:39 np0005465604 zen_shannon[290013]: --> All data devices are unavailable
Oct  2 04:22:39 np0005465604 systemd[1]: libpod-895ef7dc06874d959f6737923328e8e66ddc9b685b28f4f58c60ba19bee74cf8.scope: Deactivated successfully.
Oct  2 04:22:39 np0005465604 podman[289996]: 2025-10-02 08:22:39.792361902 +0000 UTC m=+1.198973848 container died 895ef7dc06874d959f6737923328e8e66ddc9b685b28f4f58c60ba19bee74cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shannon, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 04:22:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-393b4aa9072c26e9f4be5e0dc508a4654303e93b8a7b9c005e020fe8bb6fac9f-merged.mount: Deactivated successfully.
Oct  2 04:22:39 np0005465604 podman[289996]: 2025-10-02 08:22:39.859013672 +0000 UTC m=+1.265625628 container remove 895ef7dc06874d959f6737923328e8e66ddc9b685b28f4f58c60ba19bee74cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shannon, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:22:39 np0005465604 systemd[1]: libpod-conmon-895ef7dc06874d959f6737923328e8e66ddc9b685b28f4f58c60ba19bee74cf8.scope: Deactivated successfully.
Oct  2 04:22:39 np0005465604 podman[290063]: 2025-10-02 08:22:39.892526512 +0000 UTC m=+0.073798760 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  2 04:22:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466693363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.052 2 DEBUG oslo_concurrency.processutils [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.061 2 DEBUG nova.compute.provider_tree [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.076 2 DEBUG nova.scheduler.client.report [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.099 2 DEBUG oslo_concurrency.lockutils [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.121 2 INFO nova.scheduler.client.report [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Deleted allocations for instance c0ec8879-e818-40a5-88ec-c6d89b8ddea4#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.195 2 DEBUG oslo_concurrency.lockutils [None req-b0dc5051-cca2-4cbe-9694-3a1ef2f09960 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:40 np0005465604 podman[290233]: 2025-10-02 08:22:40.588963017 +0000 UTC m=+0.058426265 container create a6a58613910f6f35882d0f68f85f2c678333cd965c5139e87cc603c54a4e1315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:22:40 np0005465604 systemd[1]: Started libpod-conmon-a6a58613910f6f35882d0f68f85f2c678333cd965c5139e87cc603c54a4e1315.scope.
Oct  2 04:22:40 np0005465604 podman[290233]: 2025-10-02 08:22:40.556720587 +0000 UTC m=+0.026183925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:22:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:22:40 np0005465604 podman[290233]: 2025-10-02 08:22:40.698961764 +0000 UTC m=+0.168425062 container init a6a58613910f6f35882d0f68f85f2c678333cd965c5139e87cc603c54a4e1315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:22:40 np0005465604 podman[290233]: 2025-10-02 08:22:40.711063384 +0000 UTC m=+0.180526672 container start a6a58613910f6f35882d0f68f85f2c678333cd965c5139e87cc603c54a4e1315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 04:22:40 np0005465604 podman[290233]: 2025-10-02 08:22:40.715934711 +0000 UTC m=+0.185397979 container attach a6a58613910f6f35882d0f68f85f2c678333cd965c5139e87cc603c54a4e1315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 04:22:40 np0005465604 angry_hermann[290249]: 167 167
Oct  2 04:22:40 np0005465604 systemd[1]: libpod-a6a58613910f6f35882d0f68f85f2c678333cd965c5139e87cc603c54a4e1315.scope: Deactivated successfully.
Oct  2 04:22:40 np0005465604 podman[290233]: 2025-10-02 08:22:40.720409606 +0000 UTC m=+0.189872884 container died a6a58613910f6f35882d0f68f85f2c678333cd965c5139e87cc603c54a4e1315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:22:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-02685615db2f3fe279e59aff28a1ebb3a745c43539d8e1c814884f604a7bfb54-merged.mount: Deactivated successfully.
Oct  2 04:22:40 np0005465604 podman[290233]: 2025-10-02 08:22:40.770230101 +0000 UTC m=+0.239693349 container remove a6a58613910f6f35882d0f68f85f2c678333cd965c5139e87cc603c54a4e1315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hermann, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 04:22:40 np0005465604 systemd[1]: libpod-conmon-a6a58613910f6f35882d0f68f85f2c678333cd965c5139e87cc603c54a4e1315.scope: Deactivated successfully.
Oct  2 04:22:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 454 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 214 op/s
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.796 2 DEBUG oslo_concurrency.lockutils [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.797 2 DEBUG oslo_concurrency.lockutils [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.798 2 DEBUG oslo_concurrency.lockutils [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.798 2 DEBUG oslo_concurrency.lockutils [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.798 2 DEBUG oslo_concurrency.lockutils [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.799 2 INFO nova.compute.manager [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Terminating instance#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.800 2 DEBUG nova.compute.manager [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:22:40 np0005465604 kernel: tap5385a348-25 (unregistering): left promiscuous mode
Oct  2 04:22:40 np0005465604 NetworkManager[45129]: <info>  [1759393360.8407] device (tap5385a348-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:40Z|00126|binding|INFO|Releasing lport 5385a348-2508-4677-aebb-d79c82f4fc36 from this chassis (sb_readonly=0)
Oct  2 04:22:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:40Z|00127|binding|INFO|Setting lport 5385a348-2508-4677-aebb-d79c82f4fc36 down in Southbound
Oct  2 04:22:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:40Z|00128|binding|INFO|Removing iface tap5385a348-25 ovn-installed in OVS
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.855 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:0a:92 10.100.0.12'], port_security=['fa:16:3e:4f:0a:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4f7a36cf-fd1b-42f2-94be-5e9483b5f941', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a46df342cad4b62ae3f9af2fd10ed84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f4f54-3bbb-4e63-86a3-6fb75389fa5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b31b756-7999-4de8-b356-acd4a5e98929, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5385a348-2508-4677-aebb-d79c82f4fc36) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.856 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5385a348-2508-4677-aebb-d79c82f4fc36 in datapath 239ca39f-5677-4c2d-87f6-45d4404e4ead unbound from our chassis#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.857 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 239ca39f-5677-4c2d-87f6-45d4404e4ead#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.876 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfd7349-2130-4b0f-9f85-05ecdcbed845]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.906 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[422d6e16-d017-4eee-b993-63e75af18f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.909 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[de6794a4-0259-4b9b-811e-4e1b04edad10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:40 np0005465604 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct  2 04:22:40 np0005465604 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000016.scope: Consumed 12.533s CPU time.
Oct  2 04:22:40 np0005465604 systemd-machined[214636]: Machine qemu-25-instance-00000016 terminated.
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.952 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2500ab-0a7d-4cb2-8b8b-c34d3d01d3f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.967 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b83b232a-a6f3-43a6-95ea-581f0cd976dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap239ca39f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:74:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419697, 'reachable_time': 16294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290292, 'error': None, 'target': 'ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.981 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[872b31a1-0626-4ad6-bce7-87aab78dab8a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap239ca39f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419712, 'tstamp': 419712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290300, 'error': None, 'target': 'ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap239ca39f-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419716, 'tstamp': 419716}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290300, 'error': None, 'target': 'ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.982 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap239ca39f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:40 np0005465604 nova_compute[260603]: 2025-10-02 08:22:40.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.989 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap239ca39f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.989 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.990 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap239ca39f-50, col_values=(('external_ids', {'iface-id': '029a4135-d57a-4306-8fd9-eb98a54d6046'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:40.990 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:40 np0005465604 podman[290285]: 2025-10-02 08:22:40.990316868 +0000 UTC m=+0.044056342 container create 622e88aa6a751bd2619bc06b6653e9480af96d59f07ac7c82e2494d0bebc86ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ganguly, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:22:41 np0005465604 systemd[1]: Started libpod-conmon-622e88aa6a751bd2619bc06b6653e9480af96d59f07ac7c82e2494d0bebc86ec.scope.
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.041 2 INFO nova.virt.libvirt.driver [-] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Instance destroyed successfully.#033[00m
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.042 2 DEBUG nova.objects.instance [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lazy-loading 'resources' on Instance uuid 4f7a36cf-fd1b-42f2-94be-5e9483b5f941 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.058 2 DEBUG nova.virt.libvirt.vif [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-784686834',display_name='tempest-FloatingIPsAssociationTestJSON-server-784686834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-784686834',id=22,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:22:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a46df342cad4b62ae3f9af2fd10ed84',ramdisk_id='',reservation_id='r-edy6oj90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1024699440',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1024699440-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:22:20Z,user_data=None,user_id='be83392e7e8847878b199e35d03663f9',uuid=4f7a36cf-fd1b-42f2-94be-5e9483b5f941,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.058 2 DEBUG nova.network.os_vif_util [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converting VIF {"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.059 2 DEBUG nova.network.os_vif_util [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5385a348-2508-4677-aebb-d79c82f4fc36,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5385a348-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.059 2 DEBUG os_vif [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5385a348-2508-4677-aebb-d79c82f4fc36,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5385a348-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.060 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5385a348-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:22:41 np0005465604 podman[290285]: 2025-10-02 08:22:40.971592854 +0000 UTC m=+0.025332348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:22:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4fab439ee55de593973e242069ab4632a106af623c813d626f73f8d72358021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4fab439ee55de593973e242069ab4632a106af623c813d626f73f8d72358021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4fab439ee55de593973e242069ab4632a106af623c813d626f73f8d72358021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4fab439ee55de593973e242069ab4632a106af623c813d626f73f8d72358021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.068 2 INFO os_vif [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:0a:92,bridge_name='br-int',has_traffic_filtering=True,id=5385a348-2508-4677-aebb-d79c82f4fc36,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5385a348-25')#033[00m
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.088 2 DEBUG nova.network.neutron [req-eb8b00d8-0afc-433d-9817-dc2c660a2593 req-5e9b1152-7e30-4c75-9e9b-ec275faa543c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Updated VIF entry in instance network info cache for port 5385a348-2508-4677-aebb-d79c82f4fc36. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.088 2 DEBUG nova.network.neutron [req-eb8b00d8-0afc-433d-9817-dc2c660a2593 req-5e9b1152-7e30-4c75-9e9b-ec275faa543c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Updating instance_info_cache with network_info: [{"id": "5385a348-2508-4677-aebb-d79c82f4fc36", "address": "fa:16:3e:4f:0a:92", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5385a348-25", "ovs_interfaceid": "5385a348-2508-4677-aebb-d79c82f4fc36", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:41 np0005465604 podman[290285]: 2025-10-02 08:22:41.090368934 +0000 UTC m=+0.144108408 container init 622e88aa6a751bd2619bc06b6653e9480af96d59f07ac7c82e2494d0bebc86ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  2 04:22:41 np0005465604 podman[290285]: 2025-10-02 08:22:41.099460947 +0000 UTC m=+0.153200421 container start 622e88aa6a751bd2619bc06b6653e9480af96d59f07ac7c82e2494d0bebc86ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:22:41 np0005465604 podman[290285]: 2025-10-02 08:22:41.104598022 +0000 UTC m=+0.158337496 container attach 622e88aa6a751bd2619bc06b6653e9480af96d59f07ac7c82e2494d0bebc86ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ganguly, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.104 2 DEBUG oslo_concurrency.lockutils [req-eb8b00d8-0afc-433d-9817-dc2c660a2593 req-5e9b1152-7e30-4c75-9e9b-ec275faa543c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4f7a36cf-fd1b-42f2-94be-5e9483b5f941" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.384 2 INFO nova.virt.libvirt.driver [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Deleting instance files /var/lib/nova/instances/4f7a36cf-fd1b-42f2-94be-5e9483b5f941_del
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.385 2 INFO nova.virt.libvirt.driver [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Deletion of /var/lib/nova/instances/4f7a36cf-fd1b-42f2-94be-5e9483b5f941_del complete
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.452 2 INFO nova.compute.manager [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Took 0.65 seconds to destroy the instance on the hypervisor.
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.452 2 DEBUG oslo.service.loopingcall [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.453 2 DEBUG nova.compute.manager [-] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.453 2 DEBUG nova.network.neutron [-] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:22:41 np0005465604 nova_compute[260603]: 2025-10-02 08:22:41.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]: {
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:    "0": [
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:        {
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "devices": [
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "/dev/loop3"
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            ],
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_name": "ceph_lv0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_size": "21470642176",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "name": "ceph_lv0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "tags": {
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.cluster_name": "ceph",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.crush_device_class": "",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.encrypted": "0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.osd_id": "0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.type": "block",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.vdo": "0"
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            },
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "type": "block",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "vg_name": "ceph_vg0"
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:        }
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:    ],
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:    "1": [
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:        {
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "devices": [
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "/dev/loop4"
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            ],
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_name": "ceph_lv1",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_size": "21470642176",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "name": "ceph_lv1",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "tags": {
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.cluster_name": "ceph",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.crush_device_class": "",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.encrypted": "0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.osd_id": "1",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.type": "block",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.vdo": "0"
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            },
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "type": "block",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "vg_name": "ceph_vg1"
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:        }
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:    ],
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:    "2": [
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:        {
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "devices": [
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "/dev/loop5"
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            ],
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_name": "ceph_lv2",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_size": "21470642176",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "name": "ceph_lv2",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "tags": {
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.cluster_name": "ceph",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.crush_device_class": "",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.encrypted": "0",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.osd_id": "2",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.type": "block",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:                "ceph.vdo": "0"
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            },
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "type": "block",
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:            "vg_name": "ceph_vg2"
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:        }
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]:    ]
Oct  2 04:22:41 np0005465604 quirky_ganguly[290315]: }
Oct  2 04:22:41 np0005465604 podman[290285]: 2025-10-02 08:22:41.876982597 +0000 UTC m=+0.930722091 container died 622e88aa6a751bd2619bc06b6653e9480af96d59f07ac7c82e2494d0bebc86ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 04:22:41 np0005465604 systemd[1]: libpod-622e88aa6a751bd2619bc06b6653e9480af96d59f07ac7c82e2494d0bebc86ec.scope: Deactivated successfully.
Oct  2 04:22:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c4fab439ee55de593973e242069ab4632a106af623c813d626f73f8d72358021-merged.mount: Deactivated successfully.
Oct  2 04:22:41 np0005465604 podman[290285]: 2025-10-02 08:22:41.959144005 +0000 UTC m=+1.012883509 container remove 622e88aa6a751bd2619bc06b6653e9480af96d59f07ac7c82e2494d0bebc86ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 04:22:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:41 np0005465604 systemd[1]: libpod-conmon-622e88aa6a751bd2619bc06b6653e9480af96d59f07ac7c82e2494d0bebc86ec.scope: Deactivated successfully.
Oct  2 04:22:42 np0005465604 podman[290496]: 2025-10-02 08:22:42.676522466 +0000 UTC m=+0.067123196 container create 3276a1204703b94744252e9a14e4a4d5f6824cd50542cfcb925f3a5733a2ea12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 04:22:42 np0005465604 systemd[1]: Started libpod-conmon-3276a1204703b94744252e9a14e4a4d5f6824cd50542cfcb925f3a5733a2ea12.scope.
Oct  2 04:22:42 np0005465604 podman[290496]: 2025-10-02 08:22:42.65060765 +0000 UTC m=+0.041208420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:22:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:22:42 np0005465604 podman[290496]: 2025-10-02 08:22:42.758825129 +0000 UTC m=+0.149425839 container init 3276a1204703b94744252e9a14e4a4d5f6824cd50542cfcb925f3a5733a2ea12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 04:22:42 np0005465604 podman[290496]: 2025-10-02 08:22:42.77033093 +0000 UTC m=+0.160931610 container start 3276a1204703b94744252e9a14e4a4d5f6824cd50542cfcb925f3a5733a2ea12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:22:42 np0005465604 podman[290496]: 2025-10-02 08:22:42.773893225 +0000 UTC m=+0.164493925 container attach 3276a1204703b94744252e9a14e4a4d5f6824cd50542cfcb925f3a5733a2ea12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:22:42 np0005465604 bold_poincare[290512]: 167 167
Oct  2 04:22:42 np0005465604 systemd[1]: libpod-3276a1204703b94744252e9a14e4a4d5f6824cd50542cfcb925f3a5733a2ea12.scope: Deactivated successfully.
Oct  2 04:22:42 np0005465604 podman[290496]: 2025-10-02 08:22:42.779035401 +0000 UTC m=+0.169636161 container died 3276a1204703b94744252e9a14e4a4d5f6824cd50542cfcb925f3a5733a2ea12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:22:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 368 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 279 op/s
Oct  2 04:22:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d134bfde86a5f1917140622af86a4ebe46337aad51e6dd671fbfe0f7125db126-merged.mount: Deactivated successfully.
Oct  2 04:22:42 np0005465604 nova_compute[260603]: 2025-10-02 08:22:42.820 2 DEBUG nova.network.neutron [-] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:22:42 np0005465604 podman[290496]: 2025-10-02 08:22:42.825451887 +0000 UTC m=+0.216052607 container remove 3276a1204703b94744252e9a14e4a4d5f6824cd50542cfcb925f3a5733a2ea12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 04:22:42 np0005465604 nova_compute[260603]: 2025-10-02 08:22:42.847 2 INFO nova.compute.manager [-] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Took 1.39 seconds to deallocate network for instance.
Oct  2 04:22:42 np0005465604 systemd[1]: libpod-conmon-3276a1204703b94744252e9a14e4a4d5f6824cd50542cfcb925f3a5733a2ea12.scope: Deactivated successfully.
Oct  2 04:22:42 np0005465604 nova_compute[260603]: 2025-10-02 08:22:42.887 2 DEBUG oslo_concurrency.lockutils [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:42 np0005465604 nova_compute[260603]: 2025-10-02 08:22:42.887 2 DEBUG oslo_concurrency.lockutils [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.015 2 DEBUG oslo_concurrency.processutils [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:22:43 np0005465604 podman[290535]: 2025-10-02 08:22:43.067097508 +0000 UTC m=+0.052194343 container create 1881e110f09bc5a47069f077a367a7ba7276abc681d8227a6832f56c46352dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pasteur, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:22:43 np0005465604 systemd[1]: Started libpod-conmon-1881e110f09bc5a47069f077a367a7ba7276abc681d8227a6832f56c46352dff.scope.
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.123 2 DEBUG oslo_concurrency.lockutils [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.124 2 DEBUG oslo_concurrency.lockutils [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.125 2 DEBUG oslo_concurrency.lockutils [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.125 2 DEBUG oslo_concurrency.lockutils [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.126 2 DEBUG oslo_concurrency.lockutils [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.128 2 INFO nova.compute.manager [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Terminating instance
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.130 2 DEBUG nova.compute.manager [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:22:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:22:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ffc39d921d244f703cc4659bf49024adc40c8db36a5808f3bdbc5b0d7cf262e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ffc39d921d244f703cc4659bf49024adc40c8db36a5808f3bdbc5b0d7cf262e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ffc39d921d244f703cc4659bf49024adc40c8db36a5808f3bdbc5b0d7cf262e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ffc39d921d244f703cc4659bf49024adc40c8db36a5808f3bdbc5b0d7cf262e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:22:43 np0005465604 podman[290535]: 2025-10-02 08:22:43.048152718 +0000 UTC m=+0.033249593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:22:43 np0005465604 podman[290535]: 2025-10-02 08:22:43.165378498 +0000 UTC m=+0.150475353 container init 1881e110f09bc5a47069f077a367a7ba7276abc681d8227a6832f56c46352dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.171 2 DEBUG nova.compute.manager [req-a508508b-67df-45dd-9336-97f41a6a3ce3 req-5fd2b900-c10b-4d60-8ff8-bc7b18d5ddf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received event network-vif-deleted-580fd822-3f5a-44c0-aff1-96cab7531d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:22:43 np0005465604 podman[290535]: 2025-10-02 08:22:43.173646344 +0000 UTC m=+0.158743179 container start 1881e110f09bc5a47069f077a367a7ba7276abc681d8227a6832f56c46352dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pasteur, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 04:22:43 np0005465604 podman[290535]: 2025-10-02 08:22:43.177713065 +0000 UTC m=+0.162809930 container attach 1881e110f09bc5a47069f077a367a7ba7276abc681d8227a6832f56c46352dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:22:43 np0005465604 kernel: tap906b5985-d9 (unregistering): left promiscuous mode
Oct  2 04:22:43 np0005465604 NetworkManager[45129]: <info>  [1759393363.2151] device (tap906b5985-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:22:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:43Z|00129|binding|INFO|Releasing lport 906b5985-d9e3-442e-a67d-085faadd4d3e from this chassis (sb_readonly=0)
Oct  2 04:22:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:43Z|00130|binding|INFO|Setting lport 906b5985-d9e3-442e-a67d-085faadd4d3e down in Southbound
Oct  2 04:22:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:43Z|00131|binding|INFO|Removing iface tap906b5985-d9 ovn-installed in OVS
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.234 2 DEBUG nova.compute.manager [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received event network-vif-unplugged-580fd822-3f5a-44c0-aff1-96cab7531d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.234 2 DEBUG oslo_concurrency.lockutils [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.234 2 DEBUG oslo_concurrency.lockutils [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.235 2 DEBUG oslo_concurrency.lockutils [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.235 2 DEBUG nova.compute.manager [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] No waiting events found dispatching network-vif-unplugged-580fd822-3f5a-44c0-aff1-96cab7531d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.235 2 WARNING nova.compute.manager [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received unexpected event network-vif-unplugged-580fd822-3f5a-44c0-aff1-96cab7531d57 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.235 2 DEBUG nova.compute.manager [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.236 2 DEBUG oslo_concurrency.lockutils [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.236 2 DEBUG oslo_concurrency.lockutils [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.237 2 DEBUG oslo_concurrency.lockutils [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c0ec8879-e818-40a5-88ec-c6d89b8ddea4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.237 2 DEBUG nova.compute.manager [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] No waiting events found dispatching network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.237 2 WARNING nova.compute.manager [req-ac9f82e7-9001-45af-94b8-f81acc1846bb req-56327a75-e814-41e6-9d99-11f13e3fb675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Received unexpected event network-vif-plugged-580fd822-3f5a-44c0-aff1-96cab7531d57 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.238 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c3:98 10.100.0.14'], port_security=['fa:16:3e:c1:c3:98 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'efe21c23-e200-4841-be3e-2b3e7c8b5c38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=906b5985-d9e3-442e-a67d-085faadd4d3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.240 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 906b5985-d9e3-442e-a67d-085faadd4d3e in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e unbound from our chassis#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.242 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.259 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78329dfd-00cc-4131-9557-1fb122e44b9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:43 np0005465604 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct  2 04:22:43 np0005465604 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Consumed 15.503s CPU time.
Oct  2 04:22:43 np0005465604 systemd-machined[214636]: Machine qemu-15-instance-0000000d terminated.
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.293 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0de31239-84d1-4bb3-bb4a-af51481cfe4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.295 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d503a54-23f7-4ed8-be0a-0f5ed77755e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.323 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[22aefe34-564c-4563-90f8-7f1ddc5b0ac3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.342 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5a4dd2a7-7499-4b48-96c1-96feacaf88ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a12cdd8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:e8:83'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 32, 'rx_bytes': 1252, 'tx_bytes': 1532, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 32, 'rx_bytes': 1252, 'tx_bytes': 1532, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414588, 'reachable_time': 40389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290585, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.373 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e35dc29-1174-4583-924f-32142badf718]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414598, 'tstamp': 414598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290591, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a12cdd8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414600, 'tstamp': 414600}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290591, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.380 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.379 2 INFO nova.virt.libvirt.driver [-] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Instance destroyed successfully.#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.380 2 DEBUG nova.objects.instance [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'resources' on Instance uuid efe21c23-e200-4841-be3e-2b3e7c8b5c38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.386 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a12cdd8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.386 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.386 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a12cdd8-50, col_values=(('external_ids', {'iface-id': '9a1d90c9-45f7-468d-bd6f-f1cc59f0309a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:43.386 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.406 2 DEBUG nova.virt.libvirt.vif [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:21:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1997142708',display_name='tempest-ServersAdminTestJSON-server-1997142708',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1997142708',id=13,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:21:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-85fyi1fx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:21:09Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=efe21c23-e200-4841-be3e-2b3e7c8b5c38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "906b5985-d9e3-442e-a67d-085faadd4d3e", "address": "fa:16:3e:c1:c3:98", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906b5985-d9", "ovs_interfaceid": "906b5985-d9e3-442e-a67d-085faadd4d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.407 2 DEBUG nova.network.os_vif_util [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "906b5985-d9e3-442e-a67d-085faadd4d3e", "address": "fa:16:3e:c1:c3:98", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906b5985-d9", "ovs_interfaceid": "906b5985-d9e3-442e-a67d-085faadd4d3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.408 2 DEBUG nova.network.os_vif_util [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c3:98,bridge_name='br-int',has_traffic_filtering=True,id=906b5985-d9e3-442e-a67d-085faadd4d3e,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906b5985-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.408 2 DEBUG os_vif [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c3:98,bridge_name='br-int',has_traffic_filtering=True,id=906b5985-d9e3-442e-a67d-085faadd4d3e,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906b5985-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap906b5985-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.416 2 INFO os_vif [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c3:98,bridge_name='br-int',has_traffic_filtering=True,id=906b5985-d9e3-442e-a67d-085faadd4d3e,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906b5985-d9')#033[00m
Oct  2 04:22:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966113687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.501 2 DEBUG oslo_concurrency.processutils [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.507 2 DEBUG nova.compute.provider_tree [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.523 2 DEBUG nova.scheduler.client.report [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.549 2 DEBUG oslo_concurrency.lockutils [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.586 2 INFO nova.scheduler.client.report [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Deleted allocations for instance 4f7a36cf-fd1b-42f2-94be-5e9483b5f941#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.671 2 DEBUG oslo_concurrency.lockutils [None req-d64c72c6-eb7c-4e8b-9468-11d63c764009 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.736 2 INFO nova.virt.libvirt.driver [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Deleting instance files /var/lib/nova/instances/efe21c23-e200-4841-be3e-2b3e7c8b5c38_del#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.737 2 INFO nova.virt.libvirt.driver [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Deletion of /var/lib/nova/instances/efe21c23-e200-4841-be3e-2b3e7c8b5c38_del complete#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.786 2 INFO nova.compute.manager [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.787 2 DEBUG oslo.service.loopingcall [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.788 2 DEBUG nova.compute.manager [-] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:22:43 np0005465604 nova_compute[260603]: 2025-10-02 08:22:43.788 2 DEBUG nova.network.neutron [-] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]: {
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "osd_id": 2,
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "type": "bluestore"
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:    },
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "osd_id": 1,
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "type": "bluestore"
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:    },
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "osd_id": 0,
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:        "type": "bluestore"
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]:    }
Oct  2 04:22:44 np0005465604 hardcore_pasteur[290552]: }
Oct  2 04:22:44 np0005465604 systemd[1]: libpod-1881e110f09bc5a47069f077a367a7ba7276abc681d8227a6832f56c46352dff.scope: Deactivated successfully.
Oct  2 04:22:44 np0005465604 systemd[1]: libpod-1881e110f09bc5a47069f077a367a7ba7276abc681d8227a6832f56c46352dff.scope: Consumed 1.085s CPU time.
Oct  2 04:22:44 np0005465604 podman[290535]: 2025-10-02 08:22:44.307619676 +0000 UTC m=+1.292716541 container died 1881e110f09bc5a47069f077a367a7ba7276abc681d8227a6832f56c46352dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pasteur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:22:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4ffc39d921d244f703cc4659bf49024adc40c8db36a5808f3bdbc5b0d7cf262e-merged.mount: Deactivated successfully.
Oct  2 04:22:44 np0005465604 podman[290535]: 2025-10-02 08:22:44.422947285 +0000 UTC m=+1.408044130 container remove 1881e110f09bc5a47069f077a367a7ba7276abc681d8227a6832f56c46352dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:22:44 np0005465604 systemd[1]: libpod-conmon-1881e110f09bc5a47069f077a367a7ba7276abc681d8227a6832f56c46352dff.scope: Deactivated successfully.
Oct  2 04:22:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:22:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:22:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:22:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:22:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 7205784d-63ab-4130-a962-901f7ab5d506 does not exist
Oct  2 04:22:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 653cd284-70f8-4a5f-acb0-b6c8e9999b67 does not exist
Oct  2 04:22:44 np0005465604 podman[290685]: 2025-10-02 08:22:44.731469673 +0000 UTC m=+0.101502165 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct  2 04:22:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 326 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 242 op/s
Oct  2 04:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:45.335 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:45.337 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:22:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.768 2 DEBUG nova.network.neutron [-] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.789 2 INFO nova.compute.manager [-] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Took 2.00 seconds to deallocate network for instance.#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.843 2 DEBUG oslo_concurrency.lockutils [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.843 2 DEBUG oslo_concurrency.lockutils [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.934 2 DEBUG oslo_concurrency.processutils [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.988 2 DEBUG nova.compute.manager [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Received event network-vif-unplugged-5385a348-2508-4677-aebb-d79c82f4fc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.988 2 DEBUG oslo_concurrency.lockutils [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.989 2 DEBUG oslo_concurrency.lockutils [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.989 2 DEBUG oslo_concurrency.lockutils [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.990 2 DEBUG nova.compute.manager [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] No waiting events found dispatching network-vif-unplugged-5385a348-2508-4677-aebb-d79c82f4fc36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.990 2 WARNING nova.compute.manager [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Received unexpected event network-vif-unplugged-5385a348-2508-4677-aebb-d79c82f4fc36 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.991 2 DEBUG nova.compute.manager [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Received event network-vif-plugged-5385a348-2508-4677-aebb-d79c82f4fc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.991 2 DEBUG oslo_concurrency.lockutils [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.991 2 DEBUG oslo_concurrency.lockutils [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.992 2 DEBUG oslo_concurrency.lockutils [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4f7a36cf-fd1b-42f2-94be-5e9483b5f941-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.992 2 DEBUG nova.compute.manager [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] No waiting events found dispatching network-vif-plugged-5385a348-2508-4677-aebb-d79c82f4fc36 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.993 2 WARNING nova.compute.manager [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Received unexpected event network-vif-plugged-5385a348-2508-4677-aebb-d79c82f4fc36 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.993 2 DEBUG nova.compute.manager [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Received event network-vif-deleted-5385a348-2508-4677-aebb-d79c82f4fc36 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.994 2 DEBUG nova.compute.manager [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Received event network-changed-3aa7d14a-ae6f-4363-b28d-0dadba762d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.994 2 DEBUG nova.compute.manager [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Refreshing instance network info cache due to event network-changed-3aa7d14a-ae6f-4363-b28d-0dadba762d1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.994 2 DEBUG oslo_concurrency.lockutils [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.995 2 DEBUG oslo_concurrency.lockutils [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:45 np0005465604 nova_compute[260603]: 2025-10-02 08:22:45.995 2 DEBUG nova.network.neutron [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Refreshing network info cache for port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.162 2 DEBUG nova.compute.manager [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Received event network-vif-unplugged-906b5985-d9e3-442e-a67d-085faadd4d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.163 2 DEBUG oslo_concurrency.lockutils [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.163 2 DEBUG oslo_concurrency.lockutils [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.164 2 DEBUG oslo_concurrency.lockutils [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.164 2 DEBUG nova.compute.manager [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] No waiting events found dispatching network-vif-unplugged-906b5985-d9e3-442e-a67d-085faadd4d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.164 2 WARNING nova.compute.manager [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Received unexpected event network-vif-unplugged-906b5985-d9e3-442e-a67d-085faadd4d3e for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.165 2 DEBUG nova.compute.manager [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Received event network-vif-plugged-906b5985-d9e3-442e-a67d-085faadd4d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.165 2 DEBUG oslo_concurrency.lockutils [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.166 2 DEBUG oslo_concurrency.lockutils [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.166 2 DEBUG oslo_concurrency.lockutils [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.166 2 DEBUG nova.compute.manager [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] No waiting events found dispatching network-vif-plugged-906b5985-d9e3-442e-a67d-085faadd4d3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.167 2 WARNING nova.compute.manager [req-7a45a0ee-d3c0-40eb-82f9-d05850816fe5 req-ea5fb73f-c625-4fbc-80cf-dd2474ee3c61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Received unexpected event network-vif-plugged-906b5985-d9e3-442e-a67d-085faadd4d3e for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:22:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972269364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.416 2 DEBUG oslo_concurrency.processutils [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.423 2 DEBUG nova.compute.provider_tree [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.444 2 DEBUG nova.scheduler.client.report [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.468 2 DEBUG oslo_concurrency.lockutils [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.518 2 INFO nova.scheduler.client.report [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Deleted allocations for instance efe21c23-e200-4841-be3e-2b3e7c8b5c38#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.613 2 DEBUG oslo_concurrency.lockutils [None req-284b40d4-c76c-4b6e-b36a-72420067145e 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "efe21c23-e200-4841-be3e-2b3e7c8b5c38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:46 np0005465604 nova_compute[260603]: 2025-10-02 08:22:46.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 326 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Oct  2 04:22:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:47 np0005465604 nova_compute[260603]: 2025-10-02 08:22:47.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:47 np0005465604 nova_compute[260603]: 2025-10-02 08:22:47.715 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393352.7143419, 01296dff-20d5-49d6-b582-f9ec1d6b0af8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:47 np0005465604 nova_compute[260603]: 2025-10-02 08:22:47.715 2 INFO nova.compute.manager [-] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:22:47 np0005465604 nova_compute[260603]: 2025-10-02 08:22:47.735 2 DEBUG nova.compute.manager [None req-1276c83e-0625-4c60-b9ee-758153b18225 - - - - - -] [instance: 01296dff-20d5-49d6-b582-f9ec1d6b0af8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.047 2 DEBUG nova.network.neutron [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updated VIF entry in instance network info cache for port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.048 2 DEBUG nova.network.neutron [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updating instance_info_cache with network_info: [{"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.061 2 DEBUG oslo_concurrency.lockutils [req-98c53c6e-3aed-4d5a-b2d2-3eabedcdb325 req-b8bbfe38-97de-4531-9c54-8041f9248358 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:48Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:92:78:ad 10.100.0.13
Oct  2 04:22:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:48Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:92:78:ad 10.100.0.13
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.706 2 DEBUG oslo_concurrency.lockutils [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.706 2 DEBUG oslo_concurrency.lockutils [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.707 2 DEBUG oslo_concurrency.lockutils [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.707 2 DEBUG oslo_concurrency.lockutils [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.707 2 DEBUG oslo_concurrency.lockutils [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.708 2 INFO nova.compute.manager [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Terminating instance#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.709 2 DEBUG nova.compute.manager [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:22:48 np0005465604 kernel: tap205e575c-4a (unregistering): left promiscuous mode
Oct  2 04:22:48 np0005465604 NetworkManager[45129]: <info>  [1759393368.7804] device (tap205e575c-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:22:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:48Z|00132|binding|INFO|Releasing lport 205e575c-4af3-4a6a-af77-fd96af608b0a from this chassis (sb_readonly=0)
Oct  2 04:22:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:48Z|00133|binding|INFO|Setting lport 205e575c-4af3-4a6a-af77-fd96af608b0a down in Southbound
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:48Z|00134|binding|INFO|Removing iface tap205e575c-4a ovn-installed in OVS
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 256 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 259 op/s
Oct  2 04:22:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:48.802 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:2c:3a 10.100.0.12'], port_security=['fa:16:3e:24:2c:3a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '09c17c12-7dac-4fc8-917e-cb2efa1d4607', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '13ecc6dea7a8465394379400d84a053e', 'neutron:revision_number': '10', 'neutron:security_group_ids': '62446364-91d2-42bd-8360-1c220db2c85a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a031d7e-2b71-4bad-bd63-24b87ef28e88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=205e575c-4af3-4a6a-af77-fd96af608b0a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:48.804 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 205e575c-4af3-4a6a-af77-fd96af608b0a in datapath 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e unbound from our chassis#033[00m
Oct  2 04:22:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:48.806 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:22:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:48.808 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9c16c9-7baa-4c59-9873-e104383d2f6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:48.809 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e namespace which is not needed anymore#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:48 np0005465604 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct  2 04:22:48 np0005465604 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000000c.scope: Consumed 13.165s CPU time.
Oct  2 04:22:48 np0005465604 systemd-machined[214636]: Machine qemu-26-instance-0000000c terminated.
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.915 2 DEBUG nova.compute.manager [req-ab7c79c2-b8f1-4d04-9acb-82ef43bc83d0 req-ec89f8f3-afce-4d47-8b13-a90b6024b1d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Received event network-vif-deleted-906b5985-d9e3-442e-a67d-085faadd4d3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.917 2 DEBUG nova.compute.manager [req-ab7c79c2-b8f1-4d04-9acb-82ef43bc83d0 req-ec89f8f3-afce-4d47-8b13-a90b6024b1d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Received event network-changed-2135b96a-be2a-4f01-bcde-d21548009ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.918 2 DEBUG nova.compute.manager [req-ab7c79c2-b8f1-4d04-9acb-82ef43bc83d0 req-ec89f8f3-afce-4d47-8b13-a90b6024b1d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Refreshing instance network info cache due to event network-changed-2135b96a-be2a-4f01-bcde-d21548009ea5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.919 2 DEBUG oslo_concurrency.lockutils [req-ab7c79c2-b8f1-4d04-9acb-82ef43bc83d0 req-ec89f8f3-afce-4d47-8b13-a90b6024b1d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.919 2 DEBUG oslo_concurrency.lockutils [req-ab7c79c2-b8f1-4d04-9acb-82ef43bc83d0 req-ec89f8f3-afce-4d47-8b13-a90b6024b1d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.920 2 DEBUG nova.network.neutron [req-ab7c79c2-b8f1-4d04-9acb-82ef43bc83d0 req-ec89f8f3-afce-4d47-8b13-a90b6024b1d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Refreshing network info cache for port 2135b96a-be2a-4f01-bcde-d21548009ea5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.952 2 INFO nova.virt.libvirt.driver [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Instance destroyed successfully.#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.953 2 DEBUG nova.objects.instance [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lazy-loading 'resources' on Instance uuid 09c17c12-7dac-4fc8-917e-cb2efa1d4607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.970 2 DEBUG nova.virt.libvirt.vif [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:20:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1588661618',display_name='tempest-ServersAdminTestJSON-server-1588661618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1588661618',id=12,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:22:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='13ecc6dea7a8465394379400d84a053e',ramdisk_id='',reservation_id='r-v30mm0vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virt
io',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-83453142',owner_user_name='tempest-ServersAdminTestJSON-83453142-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:22:29Z,user_data=None,user_id='2b82955fab174d8aac325e64068908f5',uuid=09c17c12-7dac-4fc8-917e-cb2efa1d4607,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.971 2 DEBUG nova.network.os_vif_util [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converting VIF {"id": "205e575c-4af3-4a6a-af77-fd96af608b0a", "address": "fa:16:3e:24:2c:3a", "network": {"id": "6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1068560885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "13ecc6dea7a8465394379400d84a053e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205e575c-4a", "ovs_interfaceid": "205e575c-4af3-4a6a-af77-fd96af608b0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.972 2 DEBUG nova.network.os_vif_util [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.972 2 DEBUG os_vif [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.976 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap205e575c-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:48 np0005465604 nova_compute[260603]: 2025-10-02 08:22:48.982 2 INFO os_vif [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:2c:3a,bridge_name='br-int',has_traffic_filtering=True,id=205e575c-4af3-4a6a-af77-fd96af608b0a,network=Network(6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap205e575c-4a')#033[00m
Oct  2 04:22:48 np0005465604 neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e[283202]: [NOTICE]   (283225) : haproxy version is 2.8.14-c23fe91
Oct  2 04:22:48 np0005465604 neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e[283202]: [NOTICE]   (283225) : path to executable is /usr/sbin/haproxy
Oct  2 04:22:48 np0005465604 neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e[283202]: [WARNING]  (283225) : Exiting Master process...
Oct  2 04:22:48 np0005465604 neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e[283202]: [ALERT]    (283225) : Current worker (283228) exited with code 143 (Terminated)
Oct  2 04:22:48 np0005465604 neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e[283202]: [WARNING]  (283225) : All workers exited. Exiting... (0)
Oct  2 04:22:48 np0005465604 systemd[1]: libpod-d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300.scope: Deactivated successfully.
Oct  2 04:22:48 np0005465604 podman[290773]: 2025-10-02 08:22:48.996373663 +0000 UTC m=+0.064802040 container died d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:22:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300-userdata-shm.mount: Deactivated successfully.
Oct  2 04:22:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f09c4a2bc6aa7f236bb71bc796a7131d7427d3d53932b9efacf4cd4670237c4f-merged.mount: Deactivated successfully.
Oct  2 04:22:49 np0005465604 podman[290773]: 2025-10-02 08:22:49.032706064 +0000 UTC m=+0.101134431 container cleanup d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:22:49 np0005465604 systemd[1]: libpod-conmon-d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300.scope: Deactivated successfully.
Oct  2 04:22:49 np0005465604 podman[290825]: 2025-10-02 08:22:49.106568686 +0000 UTC m=+0.052290437 container remove d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 04:22:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:49.113 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22abfbe0-23de-4b48-9205-b9e780c56557]: (4, ('Thu Oct  2 08:22:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e (d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300)\nd1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300\nThu Oct  2 08:22:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e (d1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300)\nd1ffce217b799bdbd82a906aee5b1dfbc65daa20423e3a9aada203326c63e300\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:49.115 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[53f461e8-968d-4876-a54c-1ff74e11182b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:49.115 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a12cdd8-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:49 np0005465604 nova_compute[260603]: 2025-10-02 08:22:49.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:49 np0005465604 kernel: tap6a12cdd8-50: left promiscuous mode
Oct  2 04:22:49 np0005465604 nova_compute[260603]: 2025-10-02 08:22:49.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:49 np0005465604 nova_compute[260603]: 2025-10-02 08:22:49.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:49.137 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea3db7e-6f27-4a2a-b7dc-d1b7d805f4a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:49.159 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3562e0ad-7f70-4373-88ca-c563ad4b68ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:49.160 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dd456ad0-88b8-4660-8561-bc739adbc03b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:49.177 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[57d63d13-480c-4477-9393-c2d4d8740a3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414578, 'reachable_time': 34552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290840, 'error': None, 'target': 'ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:49 np0005465604 systemd[1]: run-netns-ovnmeta\x2d6a12cdd8\x2d59b9\x2d45fd\x2dac0f\x2d871aba9c7f3e.mount: Deactivated successfully.
Oct  2 04:22:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:49.182 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a12cdd8-59b9-45fd-ac0f-871aba9c7f3e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:22:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:49.182 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5258b260-8219-44c7-adda-2af015adaa5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:49 np0005465604 nova_compute[260603]: 2025-10-02 08:22:49.362 2 INFO nova.virt.libvirt.driver [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Deleting instance files /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607_del#033[00m
Oct  2 04:22:49 np0005465604 nova_compute[260603]: 2025-10-02 08:22:49.363 2 INFO nova.virt.libvirt.driver [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Deletion of /var/lib/nova/instances/09c17c12-7dac-4fc8-917e-cb2efa1d4607_del complete#033[00m
Oct  2 04:22:49 np0005465604 nova_compute[260603]: 2025-10-02 08:22:49.423 2 INFO nova.compute.manager [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:22:49 np0005465604 nova_compute[260603]: 2025-10-02 08:22:49.424 2 DEBUG oslo.service.loopingcall [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:22:49 np0005465604 nova_compute[260603]: 2025-10-02 08:22:49.424 2 DEBUG nova.compute.manager [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:22:49 np0005465604 nova_compute[260603]: 2025-10-02 08:22:49.425 2 DEBUG nova.network.neutron [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:22:49 np0005465604 nova_compute[260603]: 2025-10-02 08:22:49.983 2 DEBUG nova.network.neutron [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:50 np0005465604 nova_compute[260603]: 2025-10-02 08:22:50.003 2 INFO nova.compute.manager [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Took 0.58 seconds to deallocate network for instance.#033[00m
Oct  2 04:22:50 np0005465604 nova_compute[260603]: 2025-10-02 08:22:50.063 2 DEBUG oslo_concurrency.lockutils [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:50 np0005465604 nova_compute[260603]: 2025-10-02 08:22:50.064 2 DEBUG oslo_concurrency.lockutils [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:50 np0005465604 nova_compute[260603]: 2025-10-02 08:22:50.141 2 DEBUG oslo_concurrency.processutils [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448908702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:50 np0005465604 nova_compute[260603]: 2025-10-02 08:22:50.561 2 DEBUG oslo_concurrency.processutils [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:50 np0005465604 nova_compute[260603]: 2025-10-02 08:22:50.566 2 DEBUG nova.compute.provider_tree [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:22:50 np0005465604 nova_compute[260603]: 2025-10-02 08:22:50.580 2 DEBUG nova.scheduler.client.report [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:22:50 np0005465604 nova_compute[260603]: 2025-10-02 08:22:50.598 2 DEBUG oslo_concurrency.lockutils [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:50 np0005465604 nova_compute[260603]: 2025-10-02 08:22:50.624 2 INFO nova.scheduler.client.report [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Deleted allocations for instance 09c17c12-7dac-4fc8-917e-cb2efa1d4607#033[00m
Oct  2 04:22:50 np0005465604 nova_compute[260603]: 2025-10-02 08:22:50.681 2 DEBUG oslo_concurrency.lockutils [None req-fb153a42-080d-4630-95cf-62d516efcb14 2b82955fab174d8aac325e64068908f5 13ecc6dea7a8465394379400d84a053e - - default default] Lock "09c17c12-7dac-4fc8-917e-cb2efa1d4607" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 256 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 651 KiB/s wr, 122 op/s
Oct  2 04:22:51 np0005465604 nova_compute[260603]: 2025-10-02 08:22:51.220 2 DEBUG nova.network.neutron [req-ab7c79c2-b8f1-4d04-9acb-82ef43bc83d0 req-ec89f8f3-afce-4d47-8b13-a90b6024b1d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Updated VIF entry in instance network info cache for port 2135b96a-be2a-4f01-bcde-d21548009ea5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:22:51 np0005465604 nova_compute[260603]: 2025-10-02 08:22:51.221 2 DEBUG nova.network.neutron [req-ab7c79c2-b8f1-4d04-9acb-82ef43bc83d0 req-ec89f8f3-afce-4d47-8b13-a90b6024b1d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Updating instance_info_cache with network_info: [{"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:51 np0005465604 nova_compute[260603]: 2025-10-02 08:22:51.237 2 DEBUG oslo_concurrency.lockutils [req-ab7c79c2-b8f1-4d04-9acb-82ef43bc83d0 req-ec89f8f3-afce-4d47-8b13-a90b6024b1d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:51 np0005465604 nova_compute[260603]: 2025-10-02 08:22:51.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:51 np0005465604 nova_compute[260603]: 2025-10-02 08:22:51.737 2 DEBUG nova.compute.manager [req-cb432580-3171-4262-9a67-250539412d69 req-b9ddb00d-a12c-4f7b-a6e1-f23564250118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Received event network-vif-deleted-205e575c-4af3-4a6a-af77-fd96af608b0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:51 np0005465604 nova_compute[260603]: 2025-10-02 08:22:51.800 2 DEBUG nova.compute.manager [req-413c9767-35ae-4e79-a643-77730d6cc0e7 req-24c36e73-73ae-4e60-afd1-baff03a616c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Received event network-changed-3aa7d14a-ae6f-4363-b28d-0dadba762d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:51 np0005465604 nova_compute[260603]: 2025-10-02 08:22:51.801 2 DEBUG nova.compute.manager [req-413c9767-35ae-4e79-a643-77730d6cc0e7 req-24c36e73-73ae-4e60-afd1-baff03a616c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Refreshing instance network info cache due to event network-changed-3aa7d14a-ae6f-4363-b28d-0dadba762d1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:22:51 np0005465604 nova_compute[260603]: 2025-10-02 08:22:51.801 2 DEBUG oslo_concurrency.lockutils [req-413c9767-35ae-4e79-a643-77730d6cc0e7 req-24c36e73-73ae-4e60-afd1-baff03a616c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:51 np0005465604 nova_compute[260603]: 2025-10-02 08:22:51.802 2 DEBUG oslo_concurrency.lockutils [req-413c9767-35ae-4e79-a643-77730d6cc0e7 req-24c36e73-73ae-4e60-afd1-baff03a616c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:51 np0005465604 nova_compute[260603]: 2025-10-02 08:22:51.802 2 DEBUG nova.network.neutron [req-413c9767-35ae-4e79-a643-77730d6cc0e7 req-24c36e73-73ae-4e60-afd1-baff03a616c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Refreshing network info cache for port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:22:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:52 np0005465604 nova_compute[260603]: 2025-10-02 08:22:52.232 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393357.2308698, c0ec8879-e818-40a5-88ec-c6d89b8ddea4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:52 np0005465604 nova_compute[260603]: 2025-10-02 08:22:52.233 2 INFO nova.compute.manager [-] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:22:52 np0005465604 nova_compute[260603]: 2025-10-02 08:22:52.253 2 DEBUG nova.compute.manager [None req-9b61d284-1d22-4903-9780-98804573efd6 - - - - - -] [instance: c0ec8879-e818-40a5-88ec-c6d89b8ddea4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 220 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 183 op/s
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.622 2 DEBUG oslo_concurrency.lockutils [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "ce67c5cc-b413-4871-8bcb-677c171ce721" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.623 2 DEBUG oslo_concurrency.lockutils [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.623 2 DEBUG oslo_concurrency.lockutils [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.624 2 DEBUG oslo_concurrency.lockutils [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.625 2 DEBUG oslo_concurrency.lockutils [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.627 2 INFO nova.compute.manager [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Terminating instance#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.629 2 DEBUG nova.compute.manager [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:22:53 np0005465604 kernel: tap3aa7d14a-ae (unregistering): left promiscuous mode
Oct  2 04:22:53 np0005465604 NetworkManager[45129]: <info>  [1759393373.7005] device (tap3aa7d14a-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:53Z|00135|binding|INFO|Releasing lport 3aa7d14a-ae6f-4363-b28d-0dadba762d1f from this chassis (sb_readonly=0)
Oct  2 04:22:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:53Z|00136|binding|INFO|Setting lport 3aa7d14a-ae6f-4363-b28d-0dadba762d1f down in Southbound
Oct  2 04:22:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:53Z|00137|binding|INFO|Removing iface tap3aa7d14a-ae ovn-installed in OVS
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:53.725 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:75:4f 10.100.0.14'], port_security=['fa:16:3e:99:75:4f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ce67c5cc-b413-4871-8bcb-677c171ce721', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a46df342cad4b62ae3f9af2fd10ed84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f4f54-3bbb-4e63-86a3-6fb75389fa5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b31b756-7999-4de8-b356-acd4a5e98929, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3aa7d14a-ae6f-4363-b28d-0dadba762d1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:22:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:53.727 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f in datapath 239ca39f-5677-4c2d-87f6-45d4404e4ead unbound from our chassis#033[00m
Oct  2 04:22:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:53.730 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 239ca39f-5677-4c2d-87f6-45d4404e4ead, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:22:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:53.731 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f09b12ae-2e30-4971-ac27-4f66018f16ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:53.732 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead namespace which is not needed anymore#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:53 np0005465604 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct  2 04:22:53 np0005465604 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000014.scope: Consumed 14.540s CPU time.
Oct  2 04:22:53 np0005465604 systemd-machined[214636]: Machine qemu-23-instance-00000014 terminated.
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.870 2 INFO nova.virt.libvirt.driver [-] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Instance destroyed successfully.#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.870 2 DEBUG nova.objects.instance [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lazy-loading 'resources' on Instance uuid ce67c5cc-b413-4871-8bcb-677c171ce721 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.896 2 DEBUG nova.virt.libvirt.vif [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:21:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-553934205',display_name='tempest-FloatingIPsAssociationTestJSON-server-553934205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-553934205',id=20,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:22:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a46df342cad4b62ae3f9af2fd10ed84',ramdisk_id='',reservation_id='r-cho0oknm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1024699440',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1024699440-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:22:00Z,user_data=None,user_id='be83392e7e8847878b199e35d03663f9',uuid=ce67c5cc-b413-4871-8bcb-677c171ce721,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.896 2 DEBUG nova.network.os_vif_util [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converting VIF {"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.899 2 DEBUG nova.network.os_vif_util [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:75:4f,bridge_name='br-int',has_traffic_filtering=True,id=3aa7d14a-ae6f-4363-b28d-0dadba762d1f,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa7d14a-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.899 2 DEBUG os_vif [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:75:4f,bridge_name='br-int',has_traffic_filtering=True,id=3aa7d14a-ae6f-4363-b28d-0dadba762d1f,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa7d14a-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3aa7d14a-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:53 np0005465604 nova_compute[260603]: 2025-10-02 08:22:53.909 2 INFO os_vif [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:75:4f,bridge_name='br-int',has_traffic_filtering=True,id=3aa7d14a-ae6f-4363-b28d-0dadba762d1f,network=Network(239ca39f-5677-4c2d-87f6-45d4404e4ead),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa7d14a-ae')#033[00m
Oct  2 04:22:53 np0005465604 neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead[287523]: [NOTICE]   (287535) : haproxy version is 2.8.14-c23fe91
Oct  2 04:22:53 np0005465604 neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead[287523]: [NOTICE]   (287535) : path to executable is /usr/sbin/haproxy
Oct  2 04:22:53 np0005465604 neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead[287523]: [WARNING]  (287535) : Exiting Master process...
Oct  2 04:22:53 np0005465604 neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead[287523]: [ALERT]    (287535) : Current worker (287537) exited with code 143 (Terminated)
Oct  2 04:22:53 np0005465604 neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead[287523]: [WARNING]  (287535) : All workers exited. Exiting... (0)
Oct  2 04:22:53 np0005465604 systemd[1]: libpod-6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a.scope: Deactivated successfully.
Oct  2 04:22:53 np0005465604 podman[290890]: 2025-10-02 08:22:53.93760212 +0000 UTC m=+0.071195556 container died 6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:22:53 np0005465604 systemd[1]: var-lib-containers-storage-overlay-62dfb4b15594cd11b271094fbb34417efbcab33f08f2808988096e78fd11f557-merged.mount: Deactivated successfully.
Oct  2 04:22:53 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a-userdata-shm.mount: Deactivated successfully.
Oct  2 04:22:53 np0005465604 podman[290890]: 2025-10-02 08:22:53.984822603 +0000 UTC m=+0.118416079 container cleanup 6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.011 2 DEBUG nova.network.neutron [req-413c9767-35ae-4e79-a643-77730d6cc0e7 req-24c36e73-73ae-4e60-afd1-baff03a616c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updated VIF entry in instance network info cache for port 3aa7d14a-ae6f-4363-b28d-0dadba762d1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.013 2 DEBUG nova.network.neutron [req-413c9767-35ae-4e79-a643-77730d6cc0e7 req-24c36e73-73ae-4e60-afd1-baff03a616c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updating instance_info_cache with network_info: [{"id": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "address": "fa:16:3e:99:75:4f", "network": {"id": "239ca39f-5677-4c2d-87f6-45d4404e4ead", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-261160198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a46df342cad4b62ae3f9af2fd10ed84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa7d14a-ae", "ovs_interfaceid": "3aa7d14a-ae6f-4363-b28d-0dadba762d1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:54 np0005465604 systemd[1]: libpod-conmon-6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a.scope: Deactivated successfully.
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.028 2 DEBUG oslo_concurrency.lockutils [req-413c9767-35ae-4e79-a643-77730d6cc0e7 req-24c36e73-73ae-4e60-afd1-baff03a616c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ce67c5cc-b413-4871-8bcb-677c171ce721" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:22:54 np0005465604 podman[290946]: 2025-10-02 08:22:54.074625458 +0000 UTC m=+0.047291975 container remove 6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:54.084 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bea30015-6689-4349-b3b5-93799f8bae2b]: (4, ('Thu Oct  2 08:22:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead (6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a)\n6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a\nThu Oct  2 08:22:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead (6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a)\n6283c34130552abc7817dde031b5b65653f5f16bec53daa751b15b9f78a21c9a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:54.086 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd64b6d-2d0d-4dd2-ba3a-10e1392d1f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:54.089 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap239ca39f-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:54 np0005465604 kernel: tap239ca39f-50: left promiscuous mode
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:54.113 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[64b4235b-82c7-441a-b533-bb686b104116]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:54.136 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f777c459-8760-4223-8087-4415d0a6b48e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:54.137 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6c269b0b-0e46-4134-8673-4c8dc22932dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:54.162 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[88f5d34e-ca29-4974-bf92-60ba4e1d267a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419688, 'reachable_time': 19471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290963, 'error': None, 'target': 'ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:54 np0005465604 systemd[1]: run-netns-ovnmeta\x2d239ca39f\x2d5677\x2d4c2d\x2d87f6\x2d45d4404e4ead.mount: Deactivated successfully.
Oct  2 04:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:54.164 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-239ca39f-5677-4c2d-87f6-45d4404e4ead deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:54.164 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[62efeae1-b8ba-4bd5-a511-25c116fe7f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.312 2 INFO nova.virt.libvirt.driver [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Deleting instance files /var/lib/nova/instances/ce67c5cc-b413-4871-8bcb-677c171ce721_del#033[00m
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.313 2 INFO nova.virt.libvirt.driver [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Deletion of /var/lib/nova/instances/ce67c5cc-b413-4871-8bcb-677c171ce721_del complete#033[00m
Oct  2 04:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:22:54.339 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.390 2 INFO nova.compute.manager [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.390 2 DEBUG oslo.service.loopingcall [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.391 2 DEBUG nova.compute.manager [-] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.391 2 DEBUG nova.network.neutron [-] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:22:54 np0005465604 nova_compute[260603]: 2025-10-02 08:22:54.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 200 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct  2 04:22:55 np0005465604 nova_compute[260603]: 2025-10-02 08:22:55.361 2 DEBUG nova.network.neutron [-] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:22:55 np0005465604 nova_compute[260603]: 2025-10-02 08:22:55.382 2 INFO nova.compute.manager [-] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Took 0.99 seconds to deallocate network for instance.#033[00m
Oct  2 04:22:55 np0005465604 nova_compute[260603]: 2025-10-02 08:22:55.440 2 DEBUG oslo_concurrency.lockutils [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:55 np0005465604 nova_compute[260603]: 2025-10-02 08:22:55.440 2 DEBUG oslo_concurrency.lockutils [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:55 np0005465604 nova_compute[260603]: 2025-10-02 08:22:55.506 2 DEBUG oslo_concurrency.processutils [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022689378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:55 np0005465604 nova_compute[260603]: 2025-10-02 08:22:55.961 2 DEBUG oslo_concurrency.processutils [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:55 np0005465604 nova_compute[260603]: 2025-10-02 08:22:55.968 2 DEBUG nova.compute.provider_tree [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:22:55 np0005465604 nova_compute[260603]: 2025-10-02 08:22:55.992 2 DEBUG nova.scheduler.client.report [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:22:56 np0005465604 nova_compute[260603]: 2025-10-02 08:22:56.038 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393361.0366013, 4f7a36cf-fd1b-42f2-94be-5e9483b5f941 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:56 np0005465604 nova_compute[260603]: 2025-10-02 08:22:56.038 2 INFO nova.compute.manager [-] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:22:56 np0005465604 nova_compute[260603]: 2025-10-02 08:22:56.046 2 DEBUG oslo_concurrency.lockutils [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:56 np0005465604 nova_compute[260603]: 2025-10-02 08:22:56.078 2 DEBUG nova.compute.manager [None req-94c014c2-4c62-4630-af4b-a0d467b68182 - - - - - -] [instance: 4f7a36cf-fd1b-42f2-94be-5e9483b5f941] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:56 np0005465604 nova_compute[260603]: 2025-10-02 08:22:56.126 2 INFO nova.scheduler.client.report [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Deleted allocations for instance ce67c5cc-b413-4871-8bcb-677c171ce721#033[00m
Oct  2 04:22:56 np0005465604 nova_compute[260603]: 2025-10-02 08:22:56.196 2 DEBUG oslo_concurrency.lockutils [None req-e1951ec8-a659-4cab-8f88-f4e8601ab310 be83392e7e8847878b199e35d03663f9 2a46df342cad4b62ae3f9af2fd10ed84 - - default default] Lock "ce67c5cc-b413-4871-8bcb-677c171ce721" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:56 np0005465604 nova_compute[260603]: 2025-10-02 08:22:56.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:56 np0005465604 nova_compute[260603]: 2025-10-02 08:22:56.722 2 DEBUG nova.compute.manager [req-053285d4-ebcf-4d0d-af72-4c4f5f4dc597 req-75ecdda1-577f-4866-884e-df6943b37a5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Received event network-vif-deleted-3aa7d14a-ae6f-4363-b28d-0dadba762d1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 200 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Oct  2 04:22:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:56Z|00138|binding|INFO|Releasing lport 1efe6d32-c189-400d-8557-a9473dddae81 from this chassis (sb_readonly=0)
Oct  2 04:22:56 np0005465604 nova_compute[260603]: 2025-10-02 08:22:56.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:22:57 np0005465604 ovn_controller[152344]: 2025-10-02T08:22:57Z|00139|binding|INFO|Releasing lport 1efe6d32-c189-400d-8557-a9473dddae81 from this chassis (sb_readonly=0)
Oct  2 04:22:57 np0005465604 nova_compute[260603]: 2025-10-02 08:22:57.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:57 np0005465604 nova_compute[260603]: 2025-10-02 08:22:57.324 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "a5536a83-4bc6-4b26-b686-817581c4ea34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:57 np0005465604 nova_compute[260603]: 2025-10-02 08:22:57.324 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:57 np0005465604 nova_compute[260603]: 2025-10-02 08:22:57.345 2 DEBUG nova.compute.manager [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:22:57 np0005465604 nova_compute[260603]: 2025-10-02 08:22:57.444 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:57 np0005465604 nova_compute[260603]: 2025-10-02 08:22:57.445 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:57 np0005465604 nova_compute[260603]: 2025-10-02 08:22:57.452 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:22:57 np0005465604 nova_compute[260603]: 2025-10-02 08:22:57.452 2 INFO nova.compute.claims [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:22:57 np0005465604 nova_compute[260603]: 2025-10-02 08:22:57.574 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:22:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:22:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/155563246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.091 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.101 2 DEBUG nova.compute.provider_tree [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.119 2 DEBUG nova.scheduler.client.report [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.152 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.153 2 DEBUG nova.compute.manager [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.224 2 DEBUG nova.compute.manager [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.225 2 DEBUG nova.network.neutron [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.253 2 INFO nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.279 2 DEBUG nova.compute.manager [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.364 2 DEBUG nova.compute.manager [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.367 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.367 2 INFO nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Creating image(s)#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.404 2 DEBUG nova.storage.rbd_utils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image a5536a83-4bc6-4b26-b686-817581c4ea34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.441 2 DEBUG nova.storage.rbd_utils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image a5536a83-4bc6-4b26-b686-817581c4ea34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.477 2 DEBUG nova.storage.rbd_utils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image a5536a83-4bc6-4b26-b686-817581c4ea34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.484 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.520 2 DEBUG nova.policy [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a117ecad98d493d8782539545db5ac9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff4066c489424391bd4a75b195bd5011', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.525 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393363.3644593, efe21c23-e200-4841-be3e-2b3e7c8b5c38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.526 2 INFO nova.compute.manager [-] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.547 2 DEBUG nova.compute.manager [None req-b61522e0-b74d-40c7-981e-30946f4b8962 - - - - - -] [instance: efe21c23-e200-4841-be3e-2b3e7c8b5c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.577 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.577 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.578 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.579 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.611 2 DEBUG nova.storage.rbd_utils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image a5536a83-4bc6-4b26-b686-817581c4ea34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.616 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 a5536a83-4bc6-4b26-b686-817581c4ea34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.795 2 DEBUG nova.compute.manager [req-4c505891-89d9-4467-9619-74d032507834 req-1a04f96d-6a70-42ef-8e70-f2eac2bdd29f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Received event network-changed-2135b96a-be2a-4f01-bcde-d21548009ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.795 2 DEBUG nova.compute.manager [req-4c505891-89d9-4467-9619-74d032507834 req-1a04f96d-6a70-42ef-8e70-f2eac2bdd29f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Refreshing instance network info cache due to event network-changed-2135b96a-be2a-4f01-bcde-d21548009ea5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.796 2 DEBUG oslo_concurrency.lockutils [req-4c505891-89d9-4467-9619-74d032507834 req-1a04f96d-6a70-42ef-8e70-f2eac2bdd29f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.796 2 DEBUG oslo_concurrency.lockutils [req-4c505891-89d9-4467-9619-74d032507834 req-1a04f96d-6a70-42ef-8e70-f2eac2bdd29f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.796 2 DEBUG nova.network.neutron [req-4c505891-89d9-4467-9619-74d032507834 req-1a04f96d-6a70-42ef-8e70-f2eac2bdd29f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Refreshing network info cache for port 2135b96a-be2a-4f01-bcde-d21548009ea5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:22:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 121 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.2 MiB/s wr, 147 op/s
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.868 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 a5536a83-4bc6-4b26-b686-817581c4ea34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:22:58 np0005465604 nova_compute[260603]: 2025-10-02 08:22:58.988 2 DEBUG nova.storage.rbd_utils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] resizing rbd image a5536a83-4bc6-4b26-b686-817581c4ea34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:22:59 np0005465604 nova_compute[260603]: 2025-10-02 08:22:59.097 2 DEBUG nova.objects.instance [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'migration_context' on Instance uuid a5536a83-4bc6-4b26-b686-817581c4ea34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:22:59 np0005465604 nova_compute[260603]: 2025-10-02 08:22:59.115 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:22:59 np0005465604 nova_compute[260603]: 2025-10-02 08:22:59.116 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Ensure instance console log exists: /var/lib/nova/instances/a5536a83-4bc6-4b26-b686-817581c4ea34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:22:59 np0005465604 nova_compute[260603]: 2025-10-02 08:22:59.116 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:22:59 np0005465604 nova_compute[260603]: 2025-10-02 08:22:59.116 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:22:59 np0005465604 nova_compute[260603]: 2025-10-02 08:22:59.117 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 121 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 1.5 MiB/s wr, 101 op/s
Oct  2 04:23:01 np0005465604 nova_compute[260603]: 2025-10-02 08:23:01.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:01 np0005465604 nova_compute[260603]: 2025-10-02 08:23:01.956 2 DEBUG oslo_concurrency.lockutils [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Acquiring lock "477099c6-d178-40d2-9d1e-a28466596e94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:01 np0005465604 nova_compute[260603]: 2025-10-02 08:23:01.957 2 DEBUG oslo_concurrency.lockutils [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:01 np0005465604 nova_compute[260603]: 2025-10-02 08:23:01.957 2 DEBUG oslo_concurrency.lockutils [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Acquiring lock "477099c6-d178-40d2-9d1e-a28466596e94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:01 np0005465604 nova_compute[260603]: 2025-10-02 08:23:01.957 2 DEBUG oslo_concurrency.lockutils [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:01 np0005465604 nova_compute[260603]: 2025-10-02 08:23:01.958 2 DEBUG oslo_concurrency.lockutils [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:01 np0005465604 nova_compute[260603]: 2025-10-02 08:23:01.959 2 INFO nova.compute.manager [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Terminating instance#033[00m
Oct  2 04:23:01 np0005465604 nova_compute[260603]: 2025-10-02 08:23:01.960 2 DEBUG nova.compute.manager [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:23:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:02 np0005465604 kernel: tap2135b96a-be (unregistering): left promiscuous mode
Oct  2 04:23:02 np0005465604 NetworkManager[45129]: <info>  [1759393382.0273] device (tap2135b96a-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:23:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:02Z|00140|binding|INFO|Releasing lport 2135b96a-be2a-4f01-bcde-d21548009ea5 from this chassis (sb_readonly=0)
Oct  2 04:23:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:02Z|00141|binding|INFO|Setting lport 2135b96a-be2a-4f01-bcde-d21548009ea5 down in Southbound
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:02Z|00142|binding|INFO|Removing iface tap2135b96a-be ovn-installed in OVS
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.040 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:78:ad 10.100.0.13'], port_security=['fa:16:3e:92:78:ad 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '477099c6-d178-40d2-9d1e-a28466596e94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e81bbc6c-74d9-4326-b101-3222258ed127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '578d4e0b1a1a46e893d547de9bd7f6e1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1188a9ba-9de5-4d24-b6be-c688853535eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974a924d-67c3-4b58-b2b3-f5d189f19e5b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2135b96a-be2a-4f01-bcde-d21548009ea5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.042 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2135b96a-be2a-4f01-bcde-d21548009ea5 in datapath e81bbc6c-74d9-4326-b101-3222258ed127 unbound from our chassis#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.044 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e81bbc6c-74d9-4326-b101-3222258ed127, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.045 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aefd617c-2ea7-4beb-91c9-48feed7c7d8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.046 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127 namespace which is not needed anymore#033[00m
Oct  2 04:23:02 np0005465604 neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127[289616]: [NOTICE]   (289620) : haproxy version is 2.8.14-c23fe91
Oct  2 04:23:02 np0005465604 neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127[289616]: [NOTICE]   (289620) : path to executable is /usr/sbin/haproxy
Oct  2 04:23:02 np0005465604 neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127[289616]: [WARNING]  (289620) : Exiting Master process...
Oct  2 04:23:02 np0005465604 neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127[289616]: [WARNING]  (289620) : Exiting Master process...
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:02 np0005465604 neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127[289616]: [ALERT]    (289620) : Current worker (289622) exited with code 143 (Terminated)
Oct  2 04:23:02 np0005465604 neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127[289616]: [WARNING]  (289620) : All workers exited. Exiting... (0)
Oct  2 04:23:02 np0005465604 systemd[1]: libpod-73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff.scope: Deactivated successfully.
Oct  2 04:23:02 np0005465604 podman[291200]: 2025-10-02 08:23:02.22507341 +0000 UTC m=+0.050729207 container died 73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 04:23:02 np0005465604 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct  2 04:23:02 np0005465604 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000017.scope: Consumed 13.679s CPU time.
Oct  2 04:23:02 np0005465604 systemd-machined[214636]: Machine qemu-27-instance-00000017 terminated.
Oct  2 04:23:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff-userdata-shm.mount: Deactivated successfully.
Oct  2 04:23:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f93c19e3c2c2870214fd1b92795ec466e075a11f39c04d8cf768340ca05f92bf-merged.mount: Deactivated successfully.
Oct  2 04:23:02 np0005465604 podman[291200]: 2025-10-02 08:23:02.261392651 +0000 UTC m=+0.087048438 container cleanup 73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.261 2 DEBUG nova.compute.manager [req-51fa5241-9feb-4aec-8088-ace82535c9dd req-8c9490e4-a9fe-48b9-9ecb-bbb67140d6a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Received event network-vif-unplugged-2135b96a-be2a-4f01-bcde-d21548009ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.261 2 DEBUG oslo_concurrency.lockutils [req-51fa5241-9feb-4aec-8088-ace82535c9dd req-8c9490e4-a9fe-48b9-9ecb-bbb67140d6a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "477099c6-d178-40d2-9d1e-a28466596e94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.262 2 DEBUG oslo_concurrency.lockutils [req-51fa5241-9feb-4aec-8088-ace82535c9dd req-8c9490e4-a9fe-48b9-9ecb-bbb67140d6a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.262 2 DEBUG oslo_concurrency.lockutils [req-51fa5241-9feb-4aec-8088-ace82535c9dd req-8c9490e4-a9fe-48b9-9ecb-bbb67140d6a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.262 2 DEBUG nova.compute.manager [req-51fa5241-9feb-4aec-8088-ace82535c9dd req-8c9490e4-a9fe-48b9-9ecb-bbb67140d6a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] No waiting events found dispatching network-vif-unplugged-2135b96a-be2a-4f01-bcde-d21548009ea5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.262 2 DEBUG nova.compute.manager [req-51fa5241-9feb-4aec-8088-ace82535c9dd req-8c9490e4-a9fe-48b9-9ecb-bbb67140d6a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Received event network-vif-unplugged-2135b96a-be2a-4f01-bcde-d21548009ea5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:23:02 np0005465604 systemd[1]: libpod-conmon-73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff.scope: Deactivated successfully.
Oct  2 04:23:02 np0005465604 podman[291230]: 2025-10-02 08:23:02.340288575 +0000 UTC m=+0.051669577 container remove 73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.346 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[95d7670e-42b1-441c-ac10-96256d359e43]: (4, ('Thu Oct  2 08:23:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127 (73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff)\n73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff\nThu Oct  2 08:23:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127 (73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff)\n73c4d8d495afea895fe436d1f3f72b5c3123efe0549ed62e71f3ce2916450aff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.347 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3d75f9ae-2341-4d1c-b8b9-869947febc60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.348 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape81bbc6c-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:02 np0005465604 kernel: tape81bbc6c-70: left promiscuous mode
Oct  2 04:23:02 np0005465604 NetworkManager[45129]: <info>  [1759393382.3903] manager: (tap2135b96a-be): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.392 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa0dab3-4786-4ad4-8784-3e8dd9de7e20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.401 2 DEBUG nova.network.neutron [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Successfully created port: 69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.405 2 INFO nova.virt.libvirt.driver [-] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Instance destroyed successfully.#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.406 2 DEBUG nova.objects.instance [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lazy-loading 'resources' on Instance uuid 477099c6-d178-40d2-9d1e-a28466596e94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.426 2 DEBUG nova.virt.libvirt.vif [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:22:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1367720284',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1367720284',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-136772028',id=23,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:22:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='578d4e0b1a1a46e893d547de9bd7f6e1',ramdisk_id='',reservation_id='r-5rmvyxw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_mode
l='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1136012513',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1136012513-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:22:36Z,user_data=None,user_id='7ddec4f2f47d416bb59b0a823af0027e',uuid=477099c6-d178-40d2-9d1e-a28466596e94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.426 2 DEBUG nova.network.os_vif_util [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Converting VIF {"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.427 2 DEBUG nova.network.os_vif_util [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:92:78:ad,bridge_name='br-int',has_traffic_filtering=True,id=2135b96a-be2a-4f01-bcde-d21548009ea5,network=Network(e81bbc6c-74d9-4326-b101-3222258ed127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2135b96a-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.427 2 DEBUG os_vif [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:92:78:ad,bridge_name='br-int',has_traffic_filtering=True,id=2135b96a-be2a-4f01-bcde-d21548009ea5,network=Network(e81bbc6c-74d9-4326-b101-3222258ed127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2135b96a-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2135b96a-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.429 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ae5387-cb40-418a-b068-7660448cf719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.431 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[634166ef-454c-4364-bfd7-90b67aea343b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.435 2 INFO os_vif [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:92:78:ad,bridge_name='br-int',has_traffic_filtering=True,id=2135b96a-be2a-4f01-bcde-d21548009ea5,network=Network(e81bbc6c-74d9-4326-b101-3222258ed127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2135b96a-be')#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.449 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b3993f42-2f50-4633-a04f-30c38f2e0449]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423363, 'reachable_time': 43249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291255, 'error': None, 'target': 'ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:02 np0005465604 systemd[1]: run-netns-ovnmeta\x2de81bbc6c\x2d74d9\x2d4326\x2db101\x2d3222258ed127.mount: Deactivated successfully.
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.452 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e81bbc6c-74d9-4326-b101-3222258ed127 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:23:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:02.452 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[c2333c75-1810-4ade-b7fd-e72976d3446b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 152 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 258 KiB/s rd, 2.8 MiB/s wr, 126 op/s
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.853 2 INFO nova.virt.libvirt.driver [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Deleting instance files /var/lib/nova/instances/477099c6-d178-40d2-9d1e-a28466596e94_del#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.854 2 INFO nova.virt.libvirt.driver [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Deletion of /var/lib/nova/instances/477099c6-d178-40d2-9d1e-a28466596e94_del complete#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.914 2 INFO nova.compute.manager [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.915 2 DEBUG oslo.service.loopingcall [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.915 2 DEBUG nova.compute.manager [-] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:23:02 np0005465604 nova_compute[260603]: 2025-10-02 08:23:02.916 2 DEBUG nova.network.neutron [-] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:23:03 np0005465604 nova_compute[260603]: 2025-10-02 08:23:03.700 2 DEBUG nova.network.neutron [req-4c505891-89d9-4467-9619-74d032507834 req-1a04f96d-6a70-42ef-8e70-f2eac2bdd29f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Updated VIF entry in instance network info cache for port 2135b96a-be2a-4f01-bcde-d21548009ea5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:23:03 np0005465604 nova_compute[260603]: 2025-10-02 08:23:03.701 2 DEBUG nova.network.neutron [req-4c505891-89d9-4467-9619-74d032507834 req-1a04f96d-6a70-42ef-8e70-f2eac2bdd29f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Updating instance_info_cache with network_info: [{"id": "2135b96a-be2a-4f01-bcde-d21548009ea5", "address": "fa:16:3e:92:78:ad", "network": {"id": "e81bbc6c-74d9-4326-b101-3222258ed127", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1133616281-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "578d4e0b1a1a46e893d547de9bd7f6e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2135b96a-be", "ovs_interfaceid": "2135b96a-be2a-4f01-bcde-d21548009ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:03 np0005465604 nova_compute[260603]: 2025-10-02 08:23:03.731 2 DEBUG oslo_concurrency.lockutils [req-4c505891-89d9-4467-9619-74d032507834 req-1a04f96d-6a70-42ef-8e70-f2eac2bdd29f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-477099c6-d178-40d2-9d1e-a28466596e94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:23:03 np0005465604 nova_compute[260603]: 2025-10-02 08:23:03.948 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393368.9461951, 09c17c12-7dac-4fc8-917e-cb2efa1d4607 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:03 np0005465604 nova_compute[260603]: 2025-10-02 08:23:03.949 2 INFO nova.compute.manager [-] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:23:03 np0005465604 nova_compute[260603]: 2025-10-02 08:23:03.967 2 DEBUG nova.compute.manager [None req-ce1d6300-75d3-41b7-8281-3964f5201932 - - - - - -] [instance: 09c17c12-7dac-4fc8-917e-cb2efa1d4607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:04 np0005465604 podman[291276]: 2025-10-02 08:23:04.014590399 +0000 UTC m=+0.072764777 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 04:23:04 np0005465604 podman[291275]: 2025-10-02 08:23:04.051587341 +0000 UTC m=+0.123426330 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 04:23:04 np0005465604 nova_compute[260603]: 2025-10-02 08:23:04.362 2 DEBUG nova.compute.manager [req-19e800ed-0987-4c58-b9a6-4fc8cda8f7fc req-eb7155e5-c7bd-41c8-84ea-e5fd1921b6f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Received event network-vif-plugged-2135b96a-be2a-4f01-bcde-d21548009ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:04 np0005465604 nova_compute[260603]: 2025-10-02 08:23:04.363 2 DEBUG oslo_concurrency.lockutils [req-19e800ed-0987-4c58-b9a6-4fc8cda8f7fc req-eb7155e5-c7bd-41c8-84ea-e5fd1921b6f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "477099c6-d178-40d2-9d1e-a28466596e94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:04 np0005465604 nova_compute[260603]: 2025-10-02 08:23:04.364 2 DEBUG oslo_concurrency.lockutils [req-19e800ed-0987-4c58-b9a6-4fc8cda8f7fc req-eb7155e5-c7bd-41c8-84ea-e5fd1921b6f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:04 np0005465604 nova_compute[260603]: 2025-10-02 08:23:04.365 2 DEBUG oslo_concurrency.lockutils [req-19e800ed-0987-4c58-b9a6-4fc8cda8f7fc req-eb7155e5-c7bd-41c8-84ea-e5fd1921b6f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:04 np0005465604 nova_compute[260603]: 2025-10-02 08:23:04.365 2 DEBUG nova.compute.manager [req-19e800ed-0987-4c58-b9a6-4fc8cda8f7fc req-eb7155e5-c7bd-41c8-84ea-e5fd1921b6f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] No waiting events found dispatching network-vif-plugged-2135b96a-be2a-4f01-bcde-d21548009ea5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:23:04 np0005465604 nova_compute[260603]: 2025-10-02 08:23:04.366 2 WARNING nova.compute.manager [req-19e800ed-0987-4c58-b9a6-4fc8cda8f7fc req-eb7155e5-c7bd-41c8-84ea-e5fd1921b6f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Received unexpected event network-vif-plugged-2135b96a-be2a-4f01-bcde-d21548009ea5 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:23:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 151 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.107 2 DEBUG nova.network.neutron [-] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.121 2 INFO nova.compute.manager [-] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Took 2.20 seconds to deallocate network for instance.#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.161 2 DEBUG oslo_concurrency.lockutils [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.161 2 DEBUG oslo_concurrency.lockutils [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.264 2 DEBUG oslo_concurrency.processutils [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.302 2 DEBUG nova.compute.manager [req-09764258-f63b-44f3-90f1-5cc7edf753e8 req-da9c1c60-bedb-450a-bf89-43446bbcf163 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Received event network-vif-deleted-2135b96a-be2a-4f01-bcde-d21548009ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:23:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/709797508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.734 2 DEBUG oslo_concurrency.processutils [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.743 2 DEBUG nova.compute.provider_tree [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.755 2 DEBUG nova.network.neutron [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Successfully updated port: 69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.774 2 DEBUG nova.scheduler.client.report [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.782 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "refresh_cache-a5536a83-4bc6-4b26-b686-817581c4ea34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.783 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquired lock "refresh_cache-a5536a83-4bc6-4b26-b686-817581c4ea34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.783 2 DEBUG nova.network.neutron [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.808 2 DEBUG oslo_concurrency.lockutils [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.839 2 INFO nova.scheduler.client.report [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Deleted allocations for instance 477099c6-d178-40d2-9d1e-a28466596e94#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.908 2 DEBUG oslo_concurrency.lockutils [None req-8e6ef792-19fe-47c7-8cb5-307344c98b79 7ddec4f2f47d416bb59b0a823af0027e 578d4e0b1a1a46e893d547de9bd7f6e1 - - default default] Lock "477099c6-d178-40d2-9d1e-a28466596e94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:05 np0005465604 nova_compute[260603]: 2025-10-02 08:23:05.990 2 DEBUG nova.network.neutron [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:23:06 np0005465604 nova_compute[260603]: 2025-10-02 08:23:06.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 151 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct  2 04:23:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.133 2 DEBUG nova.network.neutron [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Updating instance_info_cache with network_info: [{"id": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "address": "fa:16:3e:78:c4:69", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69cdfaa2-3e", "ovs_interfaceid": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.177 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Releasing lock "refresh_cache-a5536a83-4bc6-4b26-b686-817581c4ea34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.178 2 DEBUG nova.compute.manager [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Instance network_info: |[{"id": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "address": "fa:16:3e:78:c4:69", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69cdfaa2-3e", "ovs_interfaceid": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.183 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Start _get_guest_xml network_info=[{"id": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "address": "fa:16:3e:78:c4:69", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69cdfaa2-3e", "ovs_interfaceid": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.191 2 WARNING nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.199 2 DEBUG nova.virt.libvirt.host [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.200 2 DEBUG nova.virt.libvirt.host [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.204 2 DEBUG nova.virt.libvirt.host [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.205 2 DEBUG nova.virt.libvirt.host [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.206 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.206 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.207 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.208 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.208 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.209 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.209 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.210 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.210 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.211 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.212 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.212 2 DEBUG nova.virt.hardware [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.218 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.362 2 DEBUG nova.compute.manager [req-28b6f5cb-e1d2-4676-9a2a-28fe8eaf7fa9 req-fecf6f98-b363-4db3-bbf7-c5990258c346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Received event network-changed-69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.362 2 DEBUG nova.compute.manager [req-28b6f5cb-e1d2-4676-9a2a-28fe8eaf7fa9 req-fecf6f98-b363-4db3-bbf7-c5990258c346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Refreshing instance network info cache due to event network-changed-69cdfaa2-3e13-4b0c-ba0a-944bb53264b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.364 2 DEBUG oslo_concurrency.lockutils [req-28b6f5cb-e1d2-4676-9a2a-28fe8eaf7fa9 req-fecf6f98-b363-4db3-bbf7-c5990258c346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a5536a83-4bc6-4b26-b686-817581c4ea34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.364 2 DEBUG oslo_concurrency.lockutils [req-28b6f5cb-e1d2-4676-9a2a-28fe8eaf7fa9 req-fecf6f98-b363-4db3-bbf7-c5990258c346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a5536a83-4bc6-4b26-b686-817581c4ea34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.364 2 DEBUG nova.network.neutron [req-28b6f5cb-e1d2-4676-9a2a-28fe8eaf7fa9 req-fecf6f98-b363-4db3-bbf7-c5990258c346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Refreshing network info cache for port 69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 04:23:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:23:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/774239836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.686 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.725 2 DEBUG nova.storage.rbd_utils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image a5536a83-4bc6-4b26-b686-817581c4ea34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:07 np0005465604 nova_compute[260603]: 2025-10-02 08:23:07.732 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:23:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22616785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.197 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.199 2 DEBUG nova.virt.libvirt.vif [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:22:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1134731974',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1134731974',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1134731974',id=24,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-jvfy9343',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_use
r_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:22:58Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=a5536a83-4bc6-4b26-b686-817581c4ea34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "address": "fa:16:3e:78:c4:69", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69cdfaa2-3e", "ovs_interfaceid": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.200 2 DEBUG nova.network.os_vif_util [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "address": "fa:16:3e:78:c4:69", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69cdfaa2-3e", "ovs_interfaceid": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.202 2 DEBUG nova.network.os_vif_util [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:c4:69,bridge_name='br-int',has_traffic_filtering=True,id=69cdfaa2-3e13-4b0c-ba0a-944bb53264b4,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69cdfaa2-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.204 2 DEBUG nova.objects.instance [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5536a83-4bc6-4b26-b686-817581c4ea34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.219 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  <uuid>a5536a83-4bc6-4b26-b686-817581c4ea34</uuid>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  <name>instance-00000018</name>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1134731974</nova:name>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:23:07</nova:creationTime>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <nova:user uuid="3a117ecad98d493d8782539545db5ac9">tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member</nova:user>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <nova:project uuid="ff4066c489424391bd4a75b195bd5011">tempest-ImagesOneServerNegativeTestJSON-1267478522</nova:project>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <nova:port uuid="69cdfaa2-3e13-4b0c-ba0a-944bb53264b4">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <entry name="serial">a5536a83-4bc6-4b26-b686-817581c4ea34</entry>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <entry name="uuid">a5536a83-4bc6-4b26-b686-817581c4ea34</entry>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/a5536a83-4bc6-4b26-b686-817581c4ea34_disk">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/a5536a83-4bc6-4b26-b686-817581c4ea34_disk.config">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:78:c4:69"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <target dev="tap69cdfaa2-3e"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/a5536a83-4bc6-4b26-b686-817581c4ea34/console.log" append="off"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:23:08 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:23:08 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:23:08 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:23:08 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.221 2 DEBUG nova.compute.manager [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Preparing to wait for external event network-vif-plugged-69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.222 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.222 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.223 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.224 2 DEBUG nova.virt.libvirt.vif [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:22:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1134731974',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1134731974',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1134731974',id=24,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-jvfy9343',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522'
,owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:22:58Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=a5536a83-4bc6-4b26-b686-817581c4ea34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "address": "fa:16:3e:78:c4:69", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69cdfaa2-3e", "ovs_interfaceid": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.225 2 DEBUG nova.network.os_vif_util [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "address": "fa:16:3e:78:c4:69", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69cdfaa2-3e", "ovs_interfaceid": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.226 2 DEBUG nova.network.os_vif_util [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:c4:69,bridge_name='br-int',has_traffic_filtering=True,id=69cdfaa2-3e13-4b0c-ba0a-944bb53264b4,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69cdfaa2-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.226 2 DEBUG os_vif [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:c4:69,bridge_name='br-int',has_traffic_filtering=True,id=69cdfaa2-3e13-4b0c-ba0a-944bb53264b4,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69cdfaa2-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69cdfaa2-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69cdfaa2-3e, col_values=(('external_ids', {'iface-id': '69cdfaa2-3e13-4b0c-ba0a-944bb53264b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:c4:69', 'vm-uuid': 'a5536a83-4bc6-4b26-b686-817581c4ea34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:08 np0005465604 NetworkManager[45129]: <info>  [1759393388.2370] manager: (tap69cdfaa2-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.245 2 INFO os_vif [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:c4:69,bridge_name='br-int',has_traffic_filtering=True,id=69cdfaa2-3e13-4b0c-ba0a-944bb53264b4,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69cdfaa2-3e')#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.304 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.304 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.305 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No VIF found with MAC fa:16:3e:78:c4:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.306 2 INFO nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Using config drive#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.337 2 DEBUG nova.storage.rbd_utils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image a5536a83-4bc6-4b26-b686-817581c4ea34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.534 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.537 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.763 2 INFO nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Creating config drive at /var/lib/nova/instances/a5536a83-4bc6-4b26-b686-817581c4ea34/disk.config#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.767 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5536a83-4bc6-4b26-b686-817581c4ea34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbu3s8udo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 88 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.868 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393373.8663814, ce67c5cc-b413-4871-8bcb-677c171ce721 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.868 2 INFO nova.compute.manager [-] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.885 2 DEBUG nova.compute.manager [None req-e2f15fa6-c87c-4de3-8985-15262cd16a18 - - - - - -] [instance: ce67c5cc-b413-4871-8bcb-677c171ce721] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.892 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5536a83-4bc6-4b26-b686-817581c4ea34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbu3s8udo" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.919 2 DEBUG nova.storage.rbd_utils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image a5536a83-4bc6-4b26-b686-817581c4ea34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:08 np0005465604 nova_compute[260603]: 2025-10-02 08:23:08.922 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5536a83-4bc6-4b26-b686-817581c4ea34/disk.config a5536a83-4bc6-4b26-b686-817581c4ea34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.054 2 DEBUG oslo_concurrency.processutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5536a83-4bc6-4b26-b686-817581c4ea34/disk.config a5536a83-4bc6-4b26-b686-817581c4ea34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.055 2 INFO nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Deleting local config drive /var/lib/nova/instances/a5536a83-4bc6-4b26-b686-817581c4ea34/disk.config because it was imported into RBD.#033[00m
Oct  2 04:23:09 np0005465604 kernel: tap69cdfaa2-3e: entered promiscuous mode
Oct  2 04:23:09 np0005465604 NetworkManager[45129]: <info>  [1759393389.1083] manager: (tap69cdfaa2-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:09Z|00143|binding|INFO|Claiming lport 69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 for this chassis.
Oct  2 04:23:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:09Z|00144|binding|INFO|69cdfaa2-3e13-4b0c-ba0a-944bb53264b4: Claiming fa:16:3e:78:c4:69 10.100.0.12
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.124 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:c4:69 10.100.0.12'], port_security=['fa:16:3e:78:c4:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a5536a83-4bc6-4b26-b686-817581c4ea34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4066c489424391bd4a75b195bd5011', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1aba9f6e-efc2-4ae1-83f0-6308a1293c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f5b9ac6-d3ea-442b-a1a3-0f4eb2329dfd, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=69cdfaa2-3e13-4b0c-ba0a-944bb53264b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.125 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 in datapath c8aeabca-6b5c-477a-9156-9f9592c20b93 bound to our chassis#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.126 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8aeabca-6b5c-477a-9156-9f9592c20b93#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.139 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3363913c-b326-473b-8cbe-3e7ff270e0d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.140 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc8aeabca-61 in ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.141 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc8aeabca-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.141 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e60d7071-1058-4646-ac14-2870a070b167]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.142 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[865288d1-ae15-4b1c-9279-f95aaef3cc7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 systemd-udevd[291475]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:23:09 np0005465604 systemd-machined[214636]: New machine qemu-28-instance-00000018.
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.159 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f7dd4b64-9dfc-4955-9e3e-8f912128cfc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 NetworkManager[45129]: <info>  [1759393389.1622] device (tap69cdfaa2-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:23:09 np0005465604 NetworkManager[45129]: <info>  [1759393389.1632] device (tap69cdfaa2-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:23:09 np0005465604 systemd[1]: Started Virtual Machine qemu-28-instance-00000018.
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.188 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[96994596-98bc-4239-aef9-7c081ac68ffa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:09Z|00145|binding|INFO|Setting lport 69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 ovn-installed in OVS
Oct  2 04:23:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:09Z|00146|binding|INFO|Setting lport 69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 up in Southbound
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.227 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[419ee134-6824-48af-84df-435e45c8b01f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 NetworkManager[45129]: <info>  [1759393389.2373] manager: (tapc8aeabca-60): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.239 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[977f6909-ed3b-4ccc-89ec-aad99a0f0102]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.284 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[53fbdcee-190b-4ecc-bcdf-1e6c173b68f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.287 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4ffb6d-9ea5-4652-8e44-2c03f0599afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.314 2 DEBUG nova.network.neutron [req-28b6f5cb-e1d2-4676-9a2a-28fe8eaf7fa9 req-fecf6f98-b363-4db3-bbf7-c5990258c346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Updated VIF entry in instance network info cache for port 69cdfaa2-3e13-4b0c-ba0a-944bb53264b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.314 2 DEBUG nova.network.neutron [req-28b6f5cb-e1d2-4676-9a2a-28fe8eaf7fa9 req-fecf6f98-b363-4db3-bbf7-c5990258c346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Updating instance_info_cache with network_info: [{"id": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "address": "fa:16:3e:78:c4:69", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69cdfaa2-3e", "ovs_interfaceid": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.336 2 DEBUG oslo_concurrency.lockutils [req-28b6f5cb-e1d2-4676-9a2a-28fe8eaf7fa9 req-fecf6f98-b363-4db3-bbf7-c5990258c346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a5536a83-4bc6-4b26-b686-817581c4ea34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:09 np0005465604 NetworkManager[45129]: <info>  [1759393389.3609] device (tapc8aeabca-60): carrier: link connected
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.366 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a35735ea-bf5c-40ff-ae21-d8f10b696af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.382 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1a2b0c-4a42-4e80-820f-0c0f0d4516ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8aeabca-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:61:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426796, 'reachable_time': 19414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291506, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.395 2 DEBUG nova.compute.manager [req-11accc99-281b-4eb6-8492-d9191979678c req-f7e2a245-44b2-465d-a264-10e8bb3a77f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Received event network-vif-plugged-69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.398 2 DEBUG oslo_concurrency.lockutils [req-11accc99-281b-4eb6-8492-d9191979678c req-f7e2a245-44b2-465d-a264-10e8bb3a77f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.398 2 DEBUG oslo_concurrency.lockutils [req-11accc99-281b-4eb6-8492-d9191979678c req-f7e2a245-44b2-465d-a264-10e8bb3a77f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.399 2 DEBUG oslo_concurrency.lockutils [req-11accc99-281b-4eb6-8492-d9191979678c req-f7e2a245-44b2-465d-a264-10e8bb3a77f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.399 2 DEBUG nova.compute.manager [req-11accc99-281b-4eb6-8492-d9191979678c req-f7e2a245-44b2-465d-a264-10e8bb3a77f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Processing event network-vif-plugged-69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.399 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f868ec70-80eb-4307-b89f-9d23cd882f40]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:61ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 426796, 'tstamp': 426796}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291507, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.414 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83a4e476-1e9f-494f-8f72-d319d7108d29]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8aeabca-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:61:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426796, 'reachable_time': 19414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291508, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.441 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb442c71-4361-4d07-9446-427b2b8ca0b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.513 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7ff100-20d0-4f6f-a0b0-b7f2334f1022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.515 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8aeabca-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.515 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.517 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8aeabca-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:09 np0005465604 NetworkManager[45129]: <info>  [1759393389.5211] manager: (tapc8aeabca-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Oct  2 04:23:09 np0005465604 kernel: tapc8aeabca-60: entered promiscuous mode
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.526 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8aeabca-60, col_values=(('external_ids', {'iface-id': '4db7ab1a-8260-4cf1-9ae1-c00a351cfe04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:09Z|00147|binding|INFO|Releasing lport 4db7ab1a-8260-4cf1-9ae1-c00a351cfe04 from this chassis (sb_readonly=0)
Oct  2 04:23:09 np0005465604 nova_compute[260603]: 2025-10-02 08:23:09.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.565 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.566 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3825d5e-c51f-404b-9a7e-9fa41257ebb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.568 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-c8aeabca-6b5c-477a-9156-9f9592c20b93
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID c8aeabca-6b5c-477a-9156-9f9592c20b93
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:23:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:09.569 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'env', 'PROCESS_TAG=haproxy-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c8aeabca-6b5c-477a-9156-9f9592c20b93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:23:10 np0005465604 podman[291586]: 2025-10-02 08:23:10.023823782 +0000 UTC m=+0.057480835 container create 589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 04:23:10 np0005465604 podman[291573]: 2025-10-02 08:23:10.037885655 +0000 UTC m=+0.100148870 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 04:23:10 np0005465604 systemd[1]: Started libpod-conmon-589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db.scope.
Oct  2 04:23:10 np0005465604 podman[291586]: 2025-10-02 08:23:09.993154283 +0000 UTC m=+0.026811326 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:23:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:23:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38718b429bd669eda92a59596555dcdf9d31f9c9dd935dc3c3112325f0d84115/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:10 np0005465604 podman[291586]: 2025-10-02 08:23:10.119968091 +0000 UTC m=+0.153625194 container init 589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:23:10 np0005465604 podman[291586]: 2025-10-02 08:23:10.130592954 +0000 UTC m=+0.164250007 container start 589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 04:23:10 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[291613]: [NOTICE]   (291617) : New worker (291619) forked
Oct  2 04:23:10 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[291613]: [NOTICE]   (291617) : Loading success.
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.245 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393390.2449336, a5536a83-4bc6-4b26-b686-817581c4ea34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.246 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] VM Started (Lifecycle Event)#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.249 2 DEBUG nova.compute.manager [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.253 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.256 2 INFO nova.virt.libvirt.driver [-] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Instance spawned successfully.#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.256 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.268 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.272 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.276 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.277 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.277 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.277 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.278 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.278 2 DEBUG nova.virt.libvirt.driver [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.296 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.297 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393390.2451508, a5536a83-4bc6-4b26-b686-817581c4ea34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.297 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.318 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.321 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393390.2521327, a5536a83-4bc6-4b26-b686-817581c4ea34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.321 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.327 2 INFO nova.compute.manager [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Took 11.96 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.327 2 DEBUG nova.compute.manager [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.336 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.339 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.361 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.380 2 INFO nova.compute.manager [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Took 12.96 seconds to build instance.#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.395 2 DEBUG oslo_concurrency.lockutils [None req-a193681e-1baf-40f4-8fa8-f7b6bbeac032 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:23:10 np0005465604 nova_compute[260603]: 2025-10-02 08:23:10.598 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:23:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 88 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct  2 04:23:11 np0005465604 nova_compute[260603]: 2025-10-02 08:23:11.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:11 np0005465604 nova_compute[260603]: 2025-10-02 08:23:11.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 04:23:11 np0005465604 nova_compute[260603]: 2025-10-02 08:23:11.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 04:23:11 np0005465604 nova_compute[260603]: 2025-10-02 08:23:11.577 2 DEBUG nova.compute.manager [req-5e511ccf-1b1f-43eb-a9a2-89304a8e9f7b req-a129344e-24b9-4510-a171-9dd8ed7ad18f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Received event network-vif-plugged-69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:11 np0005465604 nova_compute[260603]: 2025-10-02 08:23:11.578 2 DEBUG oslo_concurrency.lockutils [req-5e511ccf-1b1f-43eb-a9a2-89304a8e9f7b req-a129344e-24b9-4510-a171-9dd8ed7ad18f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:11 np0005465604 nova_compute[260603]: 2025-10-02 08:23:11.578 2 DEBUG oslo_concurrency.lockutils [req-5e511ccf-1b1f-43eb-a9a2-89304a8e9f7b req-a129344e-24b9-4510-a171-9dd8ed7ad18f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:11 np0005465604 nova_compute[260603]: 2025-10-02 08:23:11.578 2 DEBUG oslo_concurrency.lockutils [req-5e511ccf-1b1f-43eb-a9a2-89304a8e9f7b req-a129344e-24b9-4510-a171-9dd8ed7ad18f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:11 np0005465604 nova_compute[260603]: 2025-10-02 08:23:11.578 2 DEBUG nova.compute.manager [req-5e511ccf-1b1f-43eb-a9a2-89304a8e9f7b req-a129344e-24b9-4510-a171-9dd8ed7ad18f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] No waiting events found dispatching network-vif-plugged-69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:23:11 np0005465604 nova_compute[260603]: 2025-10-02 08:23:11.578 2 WARNING nova.compute.manager [req-5e511ccf-1b1f-43eb-a9a2-89304a8e9f7b req-a129344e-24b9-4510-a171-9dd8ed7ad18f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Received unexpected event network-vif-plugged-69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:23:11 np0005465604 nova_compute[260603]: 2025-10-02 08:23:11.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:12 np0005465604 nova_compute[260603]: 2025-10-02 08:23:12.536 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 88 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct  2 04:23:13 np0005465604 nova_compute[260603]: 2025-10-02 08:23:13.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:13 np0005465604 nova_compute[260603]: 2025-10-02 08:23:13.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:13 np0005465604 nova_compute[260603]: 2025-10-02 08:23:13.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:13 np0005465604 nova_compute[260603]: 2025-10-02 08:23:13.552 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:13 np0005465604 nova_compute[260603]: 2025-10-02 08:23:13.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:13 np0005465604 nova_compute[260603]: 2025-10-02 08:23:13.553 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:23:13 np0005465604 nova_compute[260603]: 2025-10-02 08:23:13.554 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.935364) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393393935443, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1699, "num_deletes": 501, "total_data_size": 1951807, "memory_usage": 1985200, "flush_reason": "Manual Compaction"}
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393393949018, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1916975, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23965, "largest_seqno": 25663, "table_properties": {"data_size": 1909883, "index_size": 3526, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19509, "raw_average_key_size": 19, "raw_value_size": 1893134, "raw_average_value_size": 1929, "num_data_blocks": 157, "num_entries": 981, "num_filter_entries": 981, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759393272, "oldest_key_time": 1759393272, "file_creation_time": 1759393393, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 13707 microseconds, and 8889 cpu microseconds.
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.949081) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1916975 bytes OK
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.949107) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.950454) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.950474) EVENT_LOG_v1 {"time_micros": 1759393393950467, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.950497) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1943385, prev total WAL file size 1943385, number of live WAL files 2.
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.951441) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1872KB)], [56(9900KB)]
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393393951480, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12055298, "oldest_snapshot_seqno": -1}
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3371788154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4733 keys, 7013636 bytes, temperature: kUnknown
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393393983613, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7013636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6982630, "index_size": 18096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118551, "raw_average_key_size": 25, "raw_value_size": 6897818, "raw_average_value_size": 1457, "num_data_blocks": 746, "num_entries": 4733, "num_filter_entries": 4733, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759393393, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.983856) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7013636 bytes
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.984959) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 374.5 rd, 217.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.7 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(9.9) write-amplify(3.7) OK, records in: 5752, records dropped: 1019 output_compression: NoCompression
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.984976) EVENT_LOG_v1 {"time_micros": 1759393393984967, "job": 30, "event": "compaction_finished", "compaction_time_micros": 32190, "compaction_time_cpu_micros": 18249, "output_level": 6, "num_output_files": 1, "total_output_size": 7013636, "num_input_records": 5752, "num_output_records": 4733, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393393985438, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393393986986, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.951389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.987017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.987021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.987023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.987025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:23:13 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:23:13.987027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:23:14 np0005465604 nova_compute[260603]: 2025-10-02 08:23:14.000 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:14 np0005465604 nova_compute[260603]: 2025-10-02 08:23:14.097 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:23:14 np0005465604 nova_compute[260603]: 2025-10-02 08:23:14.098 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:23:14 np0005465604 nova_compute[260603]: 2025-10-02 08:23:14.335 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:23:14 np0005465604 nova_compute[260603]: 2025-10-02 08:23:14.337 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4427MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:23:14 np0005465604 nova_compute[260603]: 2025-10-02 08:23:14.337 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:14 np0005465604 nova_compute[260603]: 2025-10-02 08:23:14.338 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:14 np0005465604 nova_compute[260603]: 2025-10-02 08:23:14.577 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance a5536a83-4bc6-4b26-b686-817581c4ea34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:23:14 np0005465604 nova_compute[260603]: 2025-10-02 08:23:14.578 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:23:14 np0005465604 nova_compute[260603]: 2025-10-02 08:23:14.578 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:23:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 88 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 563 KiB/s wr, 102 op/s
Oct  2 04:23:14 np0005465604 nova_compute[260603]: 2025-10-02 08:23:14.909 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:15 np0005465604 podman[291651]: 2025-10-02 08:23:15.01405813 +0000 UTC m=+0.080602870 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  2 04:23:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:23:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140834150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:23:15 np0005465604 nova_compute[260603]: 2025-10-02 08:23:15.359 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:15 np0005465604 nova_compute[260603]: 2025-10-02 08:23:15.366 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:23:15 np0005465604 nova_compute[260603]: 2025-10-02 08:23:15.387 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:23:15 np0005465604 nova_compute[260603]: 2025-10-02 08:23:15.441 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:23:15 np0005465604 nova_compute[260603]: 2025-10-02 08:23:15.441 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:16 np0005465604 nova_compute[260603]: 2025-10-02 08:23:16.442 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:16 np0005465604 nova_compute[260603]: 2025-10-02 08:23:16.443 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:16 np0005465604 nova_compute[260603]: 2025-10-02 08:23:16.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:16 np0005465604 nova_compute[260603]: 2025-10-02 08:23:16.553 2 DEBUG nova.compute.manager [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:16 np0005465604 nova_compute[260603]: 2025-10-02 08:23:16.597 2 INFO nova.compute.manager [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] instance snapshotting#033[00m
Oct  2 04:23:16 np0005465604 nova_compute[260603]: 2025-10-02 08:23:16.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 88 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Oct  2 04:23:16 np0005465604 nova_compute[260603]: 2025-10-02 08:23:16.881 2 INFO nova.virt.libvirt.driver [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Beginning live snapshot process#033[00m
Oct  2 04:23:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:17 np0005465604 nova_compute[260603]: 2025-10-02 08:23:17.069 2 DEBUG nova.virt.libvirt.imagebackend [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:23:17 np0005465604 nova_compute[260603]: 2025-10-02 08:23:17.274 2 DEBUG nova.storage.rbd_utils [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] creating snapshot(34565631bb2749cca89dae51959f9cdc) on rbd image(a5536a83-4bc6-4b26-b686-817581c4ea34_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:23:17 np0005465604 nova_compute[260603]: 2025-10-02 08:23:17.400 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393382.3992977, 477099c6-d178-40d2-9d1e-a28466596e94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:17 np0005465604 nova_compute[260603]: 2025-10-02 08:23:17.401 2 INFO nova.compute.manager [-] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:23:17 np0005465604 nova_compute[260603]: 2025-10-02 08:23:17.425 2 DEBUG nova.compute.manager [None req-380cac7c-4ff9-44c0-9000-0e986e82719f - - - - - -] [instance: 477099c6-d178-40d2-9d1e-a28466596e94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct  2 04:23:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct  2 04:23:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct  2 04:23:18 np0005465604 nova_compute[260603]: 2025-10-02 08:23:18.018 2 DEBUG nova.storage.rbd_utils [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] cloning vms/a5536a83-4bc6-4b26-b686-817581c4ea34_disk@34565631bb2749cca89dae51959f9cdc to images/36922f8e-d36e-49a4-87e6-a3230bebbde9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:23:18 np0005465604 nova_compute[260603]: 2025-10-02 08:23:18.125 2 DEBUG nova.storage.rbd_utils [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] flattening images/36922f8e-d36e-49a4-87e6-a3230bebbde9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:23:18 np0005465604 nova_compute[260603]: 2025-10-02 08:23:18.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:18 np0005465604 nova_compute[260603]: 2025-10-02 08:23:18.350 2 DEBUG nova.storage.rbd_utils [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] removing snapshot(34565631bb2749cca89dae51959f9cdc) on rbd image(a5536a83-4bc6-4b26-b686-817581c4ea34_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:23:18 np0005465604 nova_compute[260603]: 2025-10-02 08:23:18.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:23:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 92 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 32 KiB/s wr, 102 op/s
Oct  2 04:23:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct  2 04:23:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct  2 04:23:18 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct  2 04:23:19 np0005465604 nova_compute[260603]: 2025-10-02 08:23:19.036 2 DEBUG nova.storage.rbd_utils [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] creating snapshot(snap) on rbd image(36922f8e-d36e-49a4-87e6-a3230bebbde9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:23:19 np0005465604 nova_compute[260603]: 2025-10-02 08:23:19.897 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:19 np0005465604 nova_compute[260603]: 2025-10-02 08:23:19.897 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:19 np0005465604 nova_compute[260603]: 2025-10-02 08:23:19.925 2 DEBUG nova.compute.manager [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:23:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct  2 04:23:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct  2 04:23:19 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.027 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.028 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.039 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.041 2 INFO nova.compute.claims [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.161 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 36922f8e-d36e-49a4-87e6-a3230bebbde9 could not be found.
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 36922f8e-d36e-49a4-87e6-a3230bebbde9
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver 
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver 
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 36922f8e-d36e-49a4-87e6-a3230bebbde9 could not be found.
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.393 2 ERROR nova.virt.libvirt.driver #033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.464 2 DEBUG nova.storage.rbd_utils [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] removing snapshot(snap) on rbd image(36922f8e-d36e-49a4-87e6-a3230bebbde9) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:23:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:23:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693498441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.615 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.629 2 DEBUG nova.compute.provider_tree [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.658 2 DEBUG nova.scheduler.client.report [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.693 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.694 2 DEBUG nova.compute.manager [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.755 2 DEBUG nova.compute.manager [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.756 2 DEBUG nova.network.neutron [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.786 2 INFO nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.808 2 DEBUG nova.compute.manager [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:23:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 92 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 28 KiB/s wr, 22 op/s
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.898 2 DEBUG nova.compute.manager [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.899 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.900 2 INFO nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Creating image(s)#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.925 2 DEBUG nova.storage.rbd_utils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 284aaef3-b320-49e2-a541-b17eb4eb208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.954 2 DEBUG nova.storage.rbd_utils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 284aaef3-b320-49e2-a541-b17eb4eb208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.978 2 DEBUG nova.storage.rbd_utils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 284aaef3-b320-49e2-a541-b17eb4eb208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:20 np0005465604 nova_compute[260603]: 2025-10-02 08:23:20.981 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct  2 04:23:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.004 2 DEBUG nova.policy [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6747651cfdcc4f868c43b9d78f5846c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b1e1170f2e4a73aaf396476bc82261', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:23:21 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.060 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.061 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.061 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.062 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.084 2 DEBUG nova.storage.rbd_utils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 284aaef3-b320-49e2-a541-b17eb4eb208f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.087 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 284aaef3-b320-49e2-a541-b17eb4eb208f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.307 2 WARNING nova.compute.manager [None req-ba714b6d-f53e-4ce6-a1ff-0f8c0a75e454 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Image not found during snapshot: nova.exception.ImageNotFound: Image 36922f8e-d36e-49a4-87e6-a3230bebbde9 could not be found.#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.349 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 284aaef3-b320-49e2-a541-b17eb4eb208f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.397 2 DEBUG nova.storage.rbd_utils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] resizing rbd image 284aaef3-b320-49e2-a541-b17eb4eb208f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.477 2 DEBUG nova.objects.instance [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'migration_context' on Instance uuid 284aaef3-b320-49e2-a541-b17eb4eb208f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.498 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.498 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Ensure instance console log exists: /var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.499 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.499 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.499 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:21 np0005465604 nova_compute[260603]: 2025-10-02 08:23:21.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:23:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861971661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:23:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:23:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861971661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.447 2 DEBUG nova.network.neutron [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Successfully created port: 368af35e-e91a-4677-ac5f-ee6169dbd8f0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.638 2 DEBUG oslo_concurrency.lockutils [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "a5536a83-4bc6-4b26-b686-817581c4ea34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.639 2 DEBUG oslo_concurrency.lockutils [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.640 2 DEBUG oslo_concurrency.lockutils [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.640 2 DEBUG oslo_concurrency.lockutils [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.641 2 DEBUG oslo_concurrency.lockutils [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.643 2 INFO nova.compute.manager [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Terminating instance#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.645 2 DEBUG nova.compute.manager [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:23:22 np0005465604 kernel: tap69cdfaa2-3e (unregistering): left promiscuous mode
Oct  2 04:23:22 np0005465604 NetworkManager[45129]: <info>  [1759393402.7130] device (tap69cdfaa2-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:23:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:22Z|00148|binding|INFO|Releasing lport 69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 from this chassis (sb_readonly=0)
Oct  2 04:23:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:22Z|00149|binding|INFO|Setting lport 69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 down in Southbound
Oct  2 04:23:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:22Z|00150|binding|INFO|Removing iface tap69cdfaa2-3e ovn-installed in OVS
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:22.731 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:c4:69 10.100.0.12'], port_security=['fa:16:3e:78:c4:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a5536a83-4bc6-4b26-b686-817581c4ea34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4066c489424391bd4a75b195bd5011', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1aba9f6e-efc2-4ae1-83f0-6308a1293c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f5b9ac6-d3ea-442b-a1a3-0f4eb2329dfd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=69cdfaa2-3e13-4b0c-ba0a-944bb53264b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:23:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:22.733 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 in datapath c8aeabca-6b5c-477a-9156-9f9592c20b93 unbound from our chassis#033[00m
Oct  2 04:23:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:22.735 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c8aeabca-6b5c-477a-9156-9f9592c20b93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:23:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:22.737 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2f072af4-7f6b-4d1d-bd8b-28dc21178263]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:22.738 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 namespace which is not needed anymore#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:22 np0005465604 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct  2 04:23:22 np0005465604 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000018.scope: Consumed 12.437s CPU time.
Oct  2 04:23:22 np0005465604 systemd-machined[214636]: Machine qemu-28-instance-00000018 terminated.
Oct  2 04:23:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 143 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 10 MiB/s wr, 317 op/s
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.885 2 INFO nova.virt.libvirt.driver [-] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Instance destroyed successfully.#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.886 2 DEBUG nova.objects.instance [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'resources' on Instance uuid a5536a83-4bc6-4b26-b686-817581c4ea34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.910 2 DEBUG nova.virt.libvirt.vif [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:22:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1134731974',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1134731974',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1134731974',id=24,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:23:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-jvfy9343',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:23:21Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=a5536a83-4bc6-4b26-b686-817581c4ea34,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "address": "fa:16:3e:78:c4:69", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69cdfaa2-3e", "ovs_interfaceid": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.910 2 DEBUG nova.network.os_vif_util [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "address": "fa:16:3e:78:c4:69", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69cdfaa2-3e", "ovs_interfaceid": "69cdfaa2-3e13-4b0c-ba0a-944bb53264b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.911 2 DEBUG nova.network.os_vif_util [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:c4:69,bridge_name='br-int',has_traffic_filtering=True,id=69cdfaa2-3e13-4b0c-ba0a-944bb53264b4,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69cdfaa2-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.911 2 DEBUG os_vif [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:c4:69,bridge_name='br-int',has_traffic_filtering=True,id=69cdfaa2-3e13-4b0c-ba0a-944bb53264b4,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69cdfaa2-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69cdfaa2-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:22 np0005465604 nova_compute[260603]: 2025-10-02 08:23:22.919 2 INFO os_vif [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:c4:69,bridge_name='br-int',has_traffic_filtering=True,id=69cdfaa2-3e13-4b0c-ba0a-944bb53264b4,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69cdfaa2-3e')#033[00m
Oct  2 04:23:22 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[291613]: [NOTICE]   (291617) : haproxy version is 2.8.14-c23fe91
Oct  2 04:23:22 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[291613]: [NOTICE]   (291617) : path to executable is /usr/sbin/haproxy
Oct  2 04:23:22 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[291613]: [WARNING]  (291617) : Exiting Master process...
Oct  2 04:23:22 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[291613]: [WARNING]  (291617) : Exiting Master process...
Oct  2 04:23:22 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[291613]: [ALERT]    (291617) : Current worker (291619) exited with code 143 (Terminated)
Oct  2 04:23:22 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[291613]: [WARNING]  (291617) : All workers exited. Exiting... (0)
Oct  2 04:23:22 np0005465604 systemd[1]: libpod-589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db.scope: Deactivated successfully.
Oct  2 04:23:22 np0005465604 podman[292084]: 2025-10-02 08:23:22.938945668 +0000 UTC m=+0.068482529 container died 589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:23:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db-userdata-shm.mount: Deactivated successfully.
Oct  2 04:23:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-38718b429bd669eda92a59596555dcdf9d31f9c9dd935dc3c3112325f0d84115-merged.mount: Deactivated successfully.
Oct  2 04:23:22 np0005465604 podman[292084]: 2025-10-02 08:23:22.994274542 +0000 UTC m=+0.123811393 container cleanup 589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:23:23 np0005465604 systemd[1]: libpod-conmon-589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db.scope: Deactivated successfully.
Oct  2 04:23:23 np0005465604 podman[292142]: 2025-10-02 08:23:23.065428746 +0000 UTC m=+0.041748297 container remove 589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 04:23:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:23.070 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4320af-73af-4c59-a73b-8b2636fe9444]: (4, ('Thu Oct  2 08:23:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 (589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db)\n589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db\nThu Oct  2 08:23:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 (589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db)\n589ec77bf69ca71a2052e6a80b4695364e032fb1bec94782874c096cb40205db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:23.073 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[155b0549-3cd8-4f62-b799-1f7c66f881e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:23.074 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8aeabca-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:23 np0005465604 kernel: tapc8aeabca-60: left promiscuous mode
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:23.109 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb8f302-14ba-47e8-86b0-500fe83eeed2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:23.133 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ad38cde3-3b6e-4d6e-8556-8dadbc5645a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:23.137 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4175d977-83bd-42ce-8b31-3a1cfd956f36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:23.153 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9144f7a-d74f-4ae1-b8ee-b7eb3f2a9f3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426781, 'reachable_time': 20052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292158, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:23 np0005465604 systemd[1]: run-netns-ovnmeta\x2dc8aeabca\x2d6b5c\x2d477a\x2d9156\x2d9f9592c20b93.mount: Deactivated successfully.
Oct  2 04:23:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:23.156 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:23:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:23.156 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3e0e02-11f8-43fd-b995-f83b26dffbe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.275 2 INFO nova.virt.libvirt.driver [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Deleting instance files /var/lib/nova/instances/a5536a83-4bc6-4b26-b686-817581c4ea34_del#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.276 2 INFO nova.virt.libvirt.driver [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Deletion of /var/lib/nova/instances/a5536a83-4bc6-4b26-b686-817581c4ea34_del complete#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.346 2 INFO nova.compute.manager [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.346 2 DEBUG oslo.service.loopingcall [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.347 2 DEBUG nova.compute.manager [-] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.347 2 DEBUG nova.network.neutron [-] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.893 2 DEBUG nova.network.neutron [-] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.911 2 INFO nova.compute.manager [-] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Took 0.56 seconds to deallocate network for instance.#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.969 2 DEBUG oslo_concurrency.lockutils [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.970 2 DEBUG oslo_concurrency.lockutils [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:23 np0005465604 nova_compute[260603]: 2025-10-02 08:23:23.985 2 DEBUG nova.compute.manager [req-ff30d2a4-5d7b-4f76-bcaf-a965d445589d req-f3fba5ba-5eab-4781-8e74-9c9e5ca2d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Received event network-vif-deleted-69cdfaa2-3e13-4b0c-ba0a-944bb53264b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.071 2 DEBUG oslo_concurrency.processutils [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.142 2 DEBUG nova.network.neutron [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Successfully updated port: 368af35e-e91a-4677-ac5f-ee6169dbd8f0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.160 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "refresh_cache-284aaef3-b320-49e2-a541-b17eb4eb208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.160 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquired lock "refresh_cache-284aaef3-b320-49e2-a541-b17eb4eb208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.160 2 DEBUG nova.network.neutron [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.365 2 DEBUG nova.network.neutron [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.492 2 DEBUG nova.compute.manager [req-23eda9af-2194-400d-bff9-b1ed627bcfdb req-1289aa88-af00-4c72-a387-96a3dc2fe2ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received event network-changed-368af35e-e91a-4677-ac5f-ee6169dbd8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.493 2 DEBUG nova.compute.manager [req-23eda9af-2194-400d-bff9-b1ed627bcfdb req-1289aa88-af00-4c72-a387-96a3dc2fe2ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Refreshing instance network info cache due to event network-changed-368af35e-e91a-4677-ac5f-ee6169dbd8f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.494 2 DEBUG oslo_concurrency.lockutils [req-23eda9af-2194-400d-bff9-b1ed627bcfdb req-1289aa88-af00-4c72-a387-96a3dc2fe2ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-284aaef3-b320-49e2-a541-b17eb4eb208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:23:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:23:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614599946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.581 2 DEBUG oslo_concurrency.processutils [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.586 2 DEBUG nova.compute.provider_tree [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.603 2 DEBUG nova.scheduler.client.report [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.621 2 DEBUG oslo_concurrency.lockutils [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.648 2 INFO nova.scheduler.client.report [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Deleted allocations for instance a5536a83-4bc6-4b26-b686-817581c4ea34#033[00m
Oct  2 04:23:24 np0005465604 nova_compute[260603]: 2025-10-02 08:23:24.719 2 DEBUG oslo_concurrency.lockutils [None req-2d899b2d-cd35-4f87-936b-d89241c1f88f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "a5536a83-4bc6-4b26-b686-817581c4ea34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 133 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 11 MiB/s wr, 330 op/s
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.083 2 DEBUG nova.network.neutron [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Updating instance_info_cache with network_info: [{"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.103 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Releasing lock "refresh_cache-284aaef3-b320-49e2-a541-b17eb4eb208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.103 2 DEBUG nova.compute.manager [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Instance network_info: |[{"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.104 2 DEBUG oslo_concurrency.lockutils [req-23eda9af-2194-400d-bff9-b1ed627bcfdb req-1289aa88-af00-4c72-a387-96a3dc2fe2ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-284aaef3-b320-49e2-a541-b17eb4eb208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.104 2 DEBUG nova.network.neutron [req-23eda9af-2194-400d-bff9-b1ed627bcfdb req-1289aa88-af00-4c72-a387-96a3dc2fe2ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Refreshing network info cache for port 368af35e-e91a-4677-ac5f-ee6169dbd8f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.107 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Start _get_guest_xml network_info=[{"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.112 2 WARNING nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.118 2 DEBUG nova.virt.libvirt.host [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.118 2 DEBUG nova.virt.libvirt.host [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.125 2 DEBUG nova.virt.libvirt.host [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.126 2 DEBUG nova.virt.libvirt.host [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.127 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.127 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.127 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.128 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.128 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.128 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.129 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.129 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.129 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.129 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.130 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.130 2 DEBUG nova.virt.hardware [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.133 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:23:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186862933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.589 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.617 2 DEBUG nova.storage.rbd_utils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 284aaef3-b320-49e2-a541-b17eb4eb208f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:25 np0005465604 nova_compute[260603]: 2025-10-02 08:23:25.621 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:23:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/828271813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.018 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.019 2 DEBUG nova.virt.libvirt.vif [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-924493590',display_name='tempest-ImagesTestJSON-server-924493590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-924493590',id=25,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-f9z2qxia',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:20Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=284aaef3-b320-49e2-a541-b17eb4eb208f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.020 2 DEBUG nova.network.os_vif_util [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.020 2 DEBUG nova.network.os_vif_util [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:03:c3,bridge_name='br-int',has_traffic_filtering=True,id=368af35e-e91a-4677-ac5f-ee6169dbd8f0,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap368af35e-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.022 2 DEBUG nova.objects.instance [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 284aaef3-b320-49e2-a541-b17eb4eb208f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.044 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  <uuid>284aaef3-b320-49e2-a541-b17eb4eb208f</uuid>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  <name>instance-00000019</name>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <nova:name>tempest-ImagesTestJSON-server-924493590</nova:name>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:23:25</nova:creationTime>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <nova:user uuid="6747651cfdcc4f868c43b9d78f5846c2">tempest-ImagesTestJSON-1188243509-project-member</nova:user>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <nova:project uuid="56b1e1170f2e4a73aaf396476bc82261">tempest-ImagesTestJSON-1188243509</nova:project>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <nova:port uuid="368af35e-e91a-4677-ac5f-ee6169dbd8f0">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <entry name="serial">284aaef3-b320-49e2-a541-b17eb4eb208f</entry>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <entry name="uuid">284aaef3-b320-49e2-a541-b17eb4eb208f</entry>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/284aaef3-b320-49e2-a541-b17eb4eb208f_disk">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/284aaef3-b320-49e2-a541-b17eb4eb208f_disk.config">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:36:03:c3"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <target dev="tap368af35e-e9"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f/console.log" append="off"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:23:26 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:23:26 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:23:26 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:23:26 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.045 2 DEBUG nova.compute.manager [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Preparing to wait for external event network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.045 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.046 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.046 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.046 2 DEBUG nova.virt.libvirt.vif [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-924493590',display_name='tempest-ImagesTestJSON-server-924493590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-924493590',id=25,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-f9z2qxia',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:20Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=284aaef3-b320-49e2-a541-b17eb4eb208f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.047 2 DEBUG nova.network.os_vif_util [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.047 2 DEBUG nova.network.os_vif_util [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:03:c3,bridge_name='br-int',has_traffic_filtering=True,id=368af35e-e91a-4677-ac5f-ee6169dbd8f0,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap368af35e-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.047 2 DEBUG os_vif [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:03:c3,bridge_name='br-int',has_traffic_filtering=True,id=368af35e-e91a-4677-ac5f-ee6169dbd8f0,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap368af35e-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.051 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap368af35e-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.051 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap368af35e-e9, col_values=(('external_ids', {'iface-id': '368af35e-e91a-4677-ac5f-ee6169dbd8f0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:03:c3', 'vm-uuid': '284aaef3-b320-49e2-a541-b17eb4eb208f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:26 np0005465604 NetworkManager[45129]: <info>  [1759393406.0547] manager: (tap368af35e-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.060 2 INFO os_vif [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:03:c3,bridge_name='br-int',has_traffic_filtering=True,id=368af35e-e91a-4677-ac5f-ee6169dbd8f0,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap368af35e-e9')#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.133 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.134 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.134 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No VIF found with MAC fa:16:3e:36:03:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.134 2 INFO nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Using config drive#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.154 2 DEBUG nova.storage.rbd_utils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 284aaef3-b320-49e2-a541-b17eb4eb208f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.260 2 DEBUG nova.network.neutron [req-23eda9af-2194-400d-bff9-b1ed627bcfdb req-1289aa88-af00-4c72-a387-96a3dc2fe2ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Updated VIF entry in instance network info cache for port 368af35e-e91a-4677-ac5f-ee6169dbd8f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.261 2 DEBUG nova.network.neutron [req-23eda9af-2194-400d-bff9-b1ed627bcfdb req-1289aa88-af00-4c72-a387-96a3dc2fe2ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Updating instance_info_cache with network_info: [{"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.280 2 DEBUG oslo_concurrency.lockutils [req-23eda9af-2194-400d-bff9-b1ed627bcfdb req-1289aa88-af00-4c72-a387-96a3dc2fe2ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-284aaef3-b320-49e2-a541-b17eb4eb208f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.703 2 INFO nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Creating config drive at /var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f/disk.config#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.708 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ac81s8o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 133 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 8.7 MiB/s wr, 253 op/s
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.834 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ac81s8o" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.855 2 DEBUG nova.storage.rbd_utils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 284aaef3-b320-49e2-a541-b17eb4eb208f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.859 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f/disk.config 284aaef3-b320-49e2-a541-b17eb4eb208f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct  2 04:23:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct  2 04:23:26 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.995 2 DEBUG oslo_concurrency.processutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f/disk.config 284aaef3-b320-49e2-a541-b17eb4eb208f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:26 np0005465604 nova_compute[260603]: 2025-10-02 08:23:26.996 2 INFO nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Deleting local config drive /var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f/disk.config because it was imported into RBD.#033[00m
Oct  2 04:23:27 np0005465604 kernel: tap368af35e-e9: entered promiscuous mode
Oct  2 04:23:27 np0005465604 NetworkManager[45129]: <info>  [1759393407.0691] manager: (tap368af35e-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Oct  2 04:23:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:27Z|00151|binding|INFO|Claiming lport 368af35e-e91a-4677-ac5f-ee6169dbd8f0 for this chassis.
Oct  2 04:23:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:27Z|00152|binding|INFO|368af35e-e91a-4677-ac5f-ee6169dbd8f0: Claiming fa:16:3e:36:03:c3 10.100.0.5
Oct  2 04:23:27 np0005465604 nova_compute[260603]: 2025-10-02 08:23:27.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.084 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:03:c3 10.100.0.5'], port_security=['fa:16:3e:36:03:c3 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '284aaef3-b320-49e2-a541-b17eb4eb208f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '2', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=368af35e-e91a-4677-ac5f-ee6169dbd8f0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.086 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 368af35e-e91a-4677-ac5f-ee6169dbd8f0 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 bound to our chassis#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.087 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360#033[00m
Oct  2 04:23:27 np0005465604 systemd-udevd[292314]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.103 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6b781e8f-731f-4a7c-afc3-42b21ff1be0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.105 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap897d7abf-91 in ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.108 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap897d7abf-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.108 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f05d45fa-f612-4f7e-9ad9-e99ec7d09fcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.109 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a02539e0-2148-4941-85b2-0adeed85c66c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 systemd-machined[214636]: New machine qemu-29-instance-00000019.
Oct  2 04:23:27 np0005465604 NetworkManager[45129]: <info>  [1759393407.1158] device (tap368af35e-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:23:27 np0005465604 NetworkManager[45129]: <info>  [1759393407.1171] device (tap368af35e-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.127 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a2933758-f42b-484e-a4d0-cd04e7b68730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 systemd[1]: Started Virtual Machine qemu-29-instance-00000019.
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.149 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e05a0cb2-a39a-46ec-bbbe-cab9954f9f27]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:27Z|00153|binding|INFO|Setting lport 368af35e-e91a-4677-ac5f-ee6169dbd8f0 ovn-installed in OVS
Oct  2 04:23:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:27Z|00154|binding|INFO|Setting lport 368af35e-e91a-4677-ac5f-ee6169dbd8f0 up in Southbound
Oct  2 04:23:27 np0005465604 nova_compute[260603]: 2025-10-02 08:23:27.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:27 np0005465604 nova_compute[260603]: 2025-10-02 08:23:27.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.185 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e94bba-b162-476a-90df-1a02a489b0c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.190 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0f350984-190e-451e-a731-3b95038fdb34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 NetworkManager[45129]: <info>  [1759393407.1926] manager: (tap897d7abf-90): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Oct  2 04:23:27 np0005465604 systemd-udevd[292321]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.235 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[73bfdf13-5a7e-4c83-b508-925532105486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.237 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[61c4a629-d52d-4558-b548-86ee435d31f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 NetworkManager[45129]: <info>  [1759393407.2563] device (tap897d7abf-90): carrier: link connected
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.260 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1c407b4c-5d5d-41ce-bd56-28c304501eeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.275 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[77872026-a3a9-4d07-9211-39f32e904513]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428585, 'reachable_time': 43233, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292351, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.293 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e89fe8-c34c-42e6-a0da-86ef823e9463]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:18ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428585, 'tstamp': 428585}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292352, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.306 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d47c4b64-513c-4e86-817f-a3417d517e79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428585, 'reachable_time': 43233, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292353, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.345 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7767d268-8979-4ca6-8a53-88f88d29be10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[41824666-1c60-484c-a42c-5049dd743385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.406 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.407 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.408 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:27 np0005465604 nova_compute[260603]: 2025-10-02 08:23:27.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:27 np0005465604 NetworkManager[45129]: <info>  [1759393407.4115] manager: (tap897d7abf-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Oct  2 04:23:27 np0005465604 kernel: tap897d7abf-90: entered promiscuous mode
Oct  2 04:23:27 np0005465604 nova_compute[260603]: 2025-10-02 08:23:27.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.415 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:27 np0005465604 nova_compute[260603]: 2025-10-02 08:23:27.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:27Z|00155|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct  2 04:23:27 np0005465604 nova_compute[260603]: 2025-10-02 08:23:27.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.439 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.440 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf3bc86-e286-4057-971b-7483b97997cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.441 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-897d7abf-9e23-43cd-8f60-7156792a4360
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 897d7abf-9e23-43cd-8f60-7156792a4360
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:23:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:27.442 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'env', 'PROCESS_TAG=haproxy-897d7abf-9e23-43cd-8f60-7156792a4360', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/897d7abf-9e23-43cd-8f60-7156792a4360.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:23:27 np0005465604 podman[292427]: 2025-10-02 08:23:27.822873539 +0000 UTC m=+0.069948897 container create f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 04:23:27 np0005465604 podman[292427]: 2025-10-02 08:23:27.779255882 +0000 UTC m=+0.026331280 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:23:27 np0005465604 systemd[1]: Started libpod-conmon-f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204.scope.
Oct  2 04:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:23:27 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:23:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bff6eb4a3f402d26d136c7162ad878ef5efad0cc6f70e47e24c61d9d48c5b98/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:23:27
Oct  2 04:23:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:23:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:23:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'vms', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', '.mgr']
Oct  2 04:23:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:23:27 np0005465604 podman[292427]: 2025-10-02 08:23:27.945330307 +0000 UTC m=+0.192405705 container init f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 04:23:27 np0005465604 podman[292427]: 2025-10-02 08:23:27.950611737 +0000 UTC m=+0.197687095 container start f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:23:27 np0005465604 nova_compute[260603]: 2025-10-02 08:23:27.975 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393407.9747105, 284aaef3-b320-49e2-a541-b17eb4eb208f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:27 np0005465604 nova_compute[260603]: 2025-10-02 08:23:27.976 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] VM Started (Lifecycle Event)#033[00m
Oct  2 04:23:27 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [NOTICE]   (292447) : New worker (292449) forked
Oct  2 04:23:27 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [NOTICE]   (292447) : Loading success.
Oct  2 04:23:28 np0005465604 nova_compute[260603]: 2025-10-02 08:23:28.007 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:28 np0005465604 nova_compute[260603]: 2025-10-02 08:23:28.012 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393407.9748244, 284aaef3-b320-49e2-a541-b17eb4eb208f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:28 np0005465604 nova_compute[260603]: 2025-10-02 08:23:28.012 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:23:28 np0005465604 nova_compute[260603]: 2025-10-02 08:23:28.031 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:28 np0005465604 nova_compute[260603]: 2025-10-02 08:23:28.034 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:23:28 np0005465604 nova_compute[260603]: 2025-10-02 08:23:28.057 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:23:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 8.5 MiB/s wr, 289 op/s
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.308 2 DEBUG nova.compute.manager [req-55f2d344-4489-48b1-8d68-cd482d3d2470 req-b9b261b5-282e-476d-8898-2e8ce9c47e5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received event network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.308 2 DEBUG oslo_concurrency.lockutils [req-55f2d344-4489-48b1-8d68-cd482d3d2470 req-b9b261b5-282e-476d-8898-2e8ce9c47e5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.309 2 DEBUG oslo_concurrency.lockutils [req-55f2d344-4489-48b1-8d68-cd482d3d2470 req-b9b261b5-282e-476d-8898-2e8ce9c47e5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.309 2 DEBUG oslo_concurrency.lockutils [req-55f2d344-4489-48b1-8d68-cd482d3d2470 req-b9b261b5-282e-476d-8898-2e8ce9c47e5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.310 2 DEBUG nova.compute.manager [req-55f2d344-4489-48b1-8d68-cd482d3d2470 req-b9b261b5-282e-476d-8898-2e8ce9c47e5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Processing event network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.311 2 DEBUG nova.compute.manager [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.315 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393409.3150928, 284aaef3-b320-49e2-a541-b17eb4eb208f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.315 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.320 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.324 2 INFO nova.virt.libvirt.driver [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Instance spawned successfully.#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.325 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.343 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.353 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.362 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.363 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.365 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.366 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.367 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.368 2 DEBUG nova.virt.libvirt.driver [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.374 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.426 2 INFO nova.compute.manager [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Took 8.53 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.426 2 DEBUG nova.compute.manager [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.501 2 INFO nova.compute.manager [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Took 9.51 seconds to build instance.#033[00m
Oct  2 04:23:29 np0005465604 nova_compute[260603]: 2025-10-02 08:23:29.524 2 DEBUG oslo_concurrency.lockutils [None req-a754d38f-29d8-4d58-8b2a-2cbef927211e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.9 MiB/s wr, 235 op/s
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.365 2 INFO nova.compute.manager [None req-574c314a-01ad-451f-8764-1551b488f0ca 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Pausing#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.366 2 DEBUG nova.objects.instance [None req-574c314a-01ad-451f-8764-1551b488f0ca 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'flavor' on Instance uuid 284aaef3-b320-49e2-a541-b17eb4eb208f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.377 2 DEBUG nova.compute.manager [req-6955d7b7-1912-4eeb-9329-5f770cf79691 req-37654497-f6cd-4bfc-aab9-0fd9b2d635bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received event network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.377 2 DEBUG oslo_concurrency.lockutils [req-6955d7b7-1912-4eeb-9329-5f770cf79691 req-37654497-f6cd-4bfc-aab9-0fd9b2d635bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.378 2 DEBUG oslo_concurrency.lockutils [req-6955d7b7-1912-4eeb-9329-5f770cf79691 req-37654497-f6cd-4bfc-aab9-0fd9b2d635bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.378 2 DEBUG oslo_concurrency.lockutils [req-6955d7b7-1912-4eeb-9329-5f770cf79691 req-37654497-f6cd-4bfc-aab9-0fd9b2d635bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.379 2 DEBUG nova.compute.manager [req-6955d7b7-1912-4eeb-9329-5f770cf79691 req-37654497-f6cd-4bfc-aab9-0fd9b2d635bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] No waiting events found dispatching network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.379 2 WARNING nova.compute.manager [req-6955d7b7-1912-4eeb-9329-5f770cf79691 req-37654497-f6cd-4bfc-aab9-0fd9b2d635bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received unexpected event network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 for instance with vm_state active and task_state pausing.#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.416 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393411.4155118, 284aaef3-b320-49e2-a541-b17eb4eb208f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.416 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.419 2 DEBUG nova.compute.manager [None req-574c314a-01ad-451f-8764-1551b488f0ca 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.443 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.448 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.481 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.517 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.518 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.534 2 DEBUG nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.614 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.615 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.625 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.625 2 INFO nova.compute.claims [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:31 np0005465604 nova_compute[260603]: 2025-10-02 08:23:31.742 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:23:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2336519720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.210 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.215 2 DEBUG nova.compute.provider_tree [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.228 2 DEBUG nova.scheduler.client.report [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.245 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.245 2 DEBUG nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.286 2 DEBUG nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.286 2 DEBUG nova.network.neutron [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.306 2 INFO nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.320 2 DEBUG nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.391 2 DEBUG nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.392 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.392 2 INFO nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Creating image(s)#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.411 2 DEBUG nova.storage.rbd_utils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.430 2 DEBUG nova.storage.rbd_utils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.448 2 DEBUG nova.storage.rbd_utils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.451 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.537 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.538 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.538 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.539 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.562 2 DEBUG nova.storage.rbd_utils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.566 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.588 2 DEBUG nova.policy [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a117ecad98d493d8782539545db5ac9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff4066c489424391bd4a75b195bd5011', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.788 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 88 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.845 2 DEBUG nova.storage.rbd_utils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] resizing rbd image 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.935 2 DEBUG nova.objects.instance [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'migration_context' on Instance uuid 7c5d0818-0647-43a7-aaa1-b875b8b8424d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.951 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.951 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Ensure instance console log exists: /var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.952 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.952 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:32 np0005465604 nova_compute[260603]: 2025-10-02 08:23:32.952 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:33 np0005465604 nova_compute[260603]: 2025-10-02 08:23:33.905 2 DEBUG nova.compute.manager [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:33 np0005465604 nova_compute[260603]: 2025-10-02 08:23:33.973 2 INFO nova.compute.manager [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] instance snapshotting#033[00m
Oct  2 04:23:33 np0005465604 nova_compute[260603]: 2025-10-02 08:23:33.974 2 WARNING nova.compute.manager [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Oct  2 04:23:33 np0005465604 nova_compute[260603]: 2025-10-02 08:23:33.991 2 DEBUG nova.network.neutron [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Successfully created port: 6cf06b33-eb42-471e-9c4b-9119daeb78c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:23:34 np0005465604 nova_compute[260603]: 2025-10-02 08:23:34.269 2 INFO nova.virt.libvirt.driver [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Beginning live snapshot process#033[00m
Oct  2 04:23:34 np0005465604 nova_compute[260603]: 2025-10-02 08:23:34.393 2 DEBUG nova.virt.libvirt.imagebackend [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:23:34 np0005465604 nova_compute[260603]: 2025-10-02 08:23:34.590 2 DEBUG nova.storage.rbd_utils [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(dc23a8c8093443ea9914944c205c7269) on rbd image(284aaef3-b320-49e2-a541-b17eb4eb208f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:23:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:34.810 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:34.811 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:34.812 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 109 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.0 MiB/s wr, 143 op/s
Oct  2 04:23:34 np0005465604 podman[292698]: 2025-10-02 08:23:34.992078332 +0000 UTC m=+0.053632921 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 04:23:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct  2 04:23:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct  2 04:23:35 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct  2 04:23:35 np0005465604 podman[292697]: 2025-10-02 08:23:35.059136433 +0000 UTC m=+0.115842906 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:23:35 np0005465604 nova_compute[260603]: 2025-10-02 08:23:35.080 2 DEBUG nova.storage.rbd_utils [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] cloning vms/284aaef3-b320-49e2-a541-b17eb4eb208f_disk@dc23a8c8093443ea9914944c205c7269 to images/fc52751b-ec51-4856-9e3b-0c09afc7a635 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:23:35 np0005465604 nova_compute[260603]: 2025-10-02 08:23:35.168 2 DEBUG nova.storage.rbd_utils [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] flattening images/fc52751b-ec51-4856-9e3b-0c09afc7a635 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:23:35 np0005465604 nova_compute[260603]: 2025-10-02 08:23:35.216 2 DEBUG nova.network.neutron [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Successfully updated port: 6cf06b33-eb42-471e-9c4b-9119daeb78c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:23:35 np0005465604 nova_compute[260603]: 2025-10-02 08:23:35.230 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "refresh_cache-7c5d0818-0647-43a7-aaa1-b875b8b8424d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:23:35 np0005465604 nova_compute[260603]: 2025-10-02 08:23:35.231 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquired lock "refresh_cache-7c5d0818-0647-43a7-aaa1-b875b8b8424d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:23:35 np0005465604 nova_compute[260603]: 2025-10-02 08:23:35.231 2 DEBUG nova.network.neutron [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:23:35 np0005465604 nova_compute[260603]: 2025-10-02 08:23:35.360 2 DEBUG nova.compute.manager [req-dfcc976f-d711-411f-8487-d79ca9e98a3f req-41b55e4b-b172-4620-a5f1-340e45a53a40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-changed-6cf06b33-eb42-471e-9c4b-9119daeb78c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:35 np0005465604 nova_compute[260603]: 2025-10-02 08:23:35.361 2 DEBUG nova.compute.manager [req-dfcc976f-d711-411f-8487-d79ca9e98a3f req-41b55e4b-b172-4620-a5f1-340e45a53a40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Refreshing instance network info cache due to event network-changed-6cf06b33-eb42-471e-9c4b-9119daeb78c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:23:35 np0005465604 nova_compute[260603]: 2025-10-02 08:23:35.363 2 DEBUG oslo_concurrency.lockutils [req-dfcc976f-d711-411f-8487-d79ca9e98a3f req-41b55e4b-b172-4620-a5f1-340e45a53a40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7c5d0818-0647-43a7-aaa1-b875b8b8424d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:23:35 np0005465604 nova_compute[260603]: 2025-10-02 08:23:35.364 2 DEBUG nova.storage.rbd_utils [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] removing snapshot(dc23a8c8093443ea9914944c205c7269) on rbd image(284aaef3-b320-49e2-a541-b17eb4eb208f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:23:35 np0005465604 nova_compute[260603]: 2025-10-02 08:23:35.491 2 DEBUG nova.network.neutron [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:23:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct  2 04:23:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct  2 04:23:36 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.075 2 DEBUG nova.storage.rbd_utils [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(snap) on rbd image(fc52751b-ec51-4856-9e3b-0c09afc7a635) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.494 2 DEBUG nova.network.neutron [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Updating instance_info_cache with network_info: [{"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.518 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Releasing lock "refresh_cache-7c5d0818-0647-43a7-aaa1-b875b8b8424d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.518 2 DEBUG nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Instance network_info: |[{"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.519 2 DEBUG oslo_concurrency.lockutils [req-dfcc976f-d711-411f-8487-d79ca9e98a3f req-41b55e4b-b172-4620-a5f1-340e45a53a40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7c5d0818-0647-43a7-aaa1-b875b8b8424d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.520 2 DEBUG nova.network.neutron [req-dfcc976f-d711-411f-8487-d79ca9e98a3f req-41b55e4b-b172-4620-a5f1-340e45a53a40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Refreshing network info cache for port 6cf06b33-eb42-471e-9c4b-9119daeb78c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.525 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Start _get_guest_xml network_info=[{"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.532 2 WARNING nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.539 2 DEBUG nova.virt.libvirt.host [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.540 2 DEBUG nova.virt.libvirt.host [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.552 2 DEBUG nova.virt.libvirt.host [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.553 2 DEBUG nova.virt.libvirt.host [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.554 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.554 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.555 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.555 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.556 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.556 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.557 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.557 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.558 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.558 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.558 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.559 2 DEBUG nova.virt.hardware [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.563 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:36 np0005465604 nova_compute[260603]: 2025-10-02 08:23:36.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 109 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Oct  2 04:23:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:23:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2830748730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.020 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct  2 04:23:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct  2 04:23:37 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  2 04:23:37 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.078 2 DEBUG nova.storage.rbd_utils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.085 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:23:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1118928799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.565 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.567 2 DEBUG nova.virt.libvirt.vif [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-571471940',display_name='tempest-ImagesOneServerNegativeTestJSON-server-571471940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-571471940',id=26,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-ntyvpu68',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_user_n
ame='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:32Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=7c5d0818-0647-43a7-aaa1-b875b8b8424d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.568 2 DEBUG nova.network.os_vif_util [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.568 2 DEBUG nova.network.os_vif_util [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:42:57,bridge_name='br-int',has_traffic_filtering=True,id=6cf06b33-eb42-471e-9c4b-9119daeb78c1,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cf06b33-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.569 2 DEBUG nova.objects.instance [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7c5d0818-0647-43a7-aaa1-b875b8b8424d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.582 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  <uuid>7c5d0818-0647-43a7-aaa1-b875b8b8424d</uuid>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  <name>instance-0000001a</name>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-571471940</nova:name>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:23:36</nova:creationTime>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <nova:user uuid="3a117ecad98d493d8782539545db5ac9">tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member</nova:user>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <nova:project uuid="ff4066c489424391bd4a75b195bd5011">tempest-ImagesOneServerNegativeTestJSON-1267478522</nova:project>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <nova:port uuid="6cf06b33-eb42-471e-9c4b-9119daeb78c1">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <entry name="serial">7c5d0818-0647-43a7-aaa1-b875b8b8424d</entry>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <entry name="uuid">7c5d0818-0647-43a7-aaa1-b875b8b8424d</entry>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk.config">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:78:42:57"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <target dev="tap6cf06b33-eb"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d/console.log" append="off"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:23:37 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:23:37 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:23:37 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:23:37 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.583 2 DEBUG nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Preparing to wait for external event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.583 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.584 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.584 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.584 2 DEBUG nova.virt.libvirt.vif [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-571471940',display_name='tempest-ImagesOneServerNegativeTestJSON-server-571471940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-571471940',id=26,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-ntyvpu68',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:32Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=7c5d0818-0647-43a7-aaa1-b875b8b8424d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.585 2 DEBUG nova.network.os_vif_util [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.585 2 DEBUG nova.network.os_vif_util [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:42:57,bridge_name='br-int',has_traffic_filtering=True,id=6cf06b33-eb42-471e-9c4b-9119daeb78c1,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cf06b33-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.586 2 DEBUG os_vif [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:42:57,bridge_name='br-int',has_traffic_filtering=True,id=6cf06b33-eb42-471e-9c4b-9119daeb78c1,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cf06b33-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.590 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6cf06b33-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.590 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6cf06b33-eb, col_values=(('external_ids', {'iface-id': '6cf06b33-eb42-471e-9c4b-9119daeb78c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:42:57', 'vm-uuid': '7c5d0818-0647-43a7-aaa1-b875b8b8424d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:37 np0005465604 NetworkManager[45129]: <info>  [1759393417.5937] manager: (tap6cf06b33-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.606 2 INFO os_vif [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:42:57,bridge_name='br-int',has_traffic_filtering=True,id=6cf06b33-eb42-471e-9c4b-9119daeb78c1,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cf06b33-eb')#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.686 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.688 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.689 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No VIF found with MAC fa:16:3e:78:42:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.690 2 INFO nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Using config drive#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.723 2 DEBUG nova.storage.rbd_utils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.885 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393402.8836029, a5536a83-4bc6-4b26-b686-817581c4ea34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.885 2 INFO nova.compute.manager [-] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.915 2 DEBUG nova.compute.manager [None req-66964927-8ace-431f-9c0d-d927df3bd583 - - - - - -] [instance: a5536a83-4bc6-4b26-b686-817581c4ea34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.995 2 DEBUG nova.network.neutron [req-dfcc976f-d711-411f-8487-d79ca9e98a3f req-41b55e4b-b172-4620-a5f1-340e45a53a40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Updated VIF entry in instance network info cache for port 6cf06b33-eb42-471e-9c4b-9119daeb78c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:23:37 np0005465604 nova_compute[260603]: 2025-10-02 08:23:37.996 2 DEBUG nova.network.neutron [req-dfcc976f-d711-411f-8487-d79ca9e98a3f req-41b55e4b-b172-4620-a5f1-340e45a53a40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Updating instance_info_cache with network_info: [{"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.011 2 DEBUG oslo_concurrency.lockutils [req-dfcc976f-d711-411f-8487-d79ca9e98a3f req-41b55e4b-b172-4620-a5f1-340e45a53a40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7c5d0818-0647-43a7-aaa1-b875b8b8424d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.092 2 INFO nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Creating config drive at /var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d/disk.config#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.102 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz90fpok7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.255 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz90fpok7" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.285 2 DEBUG nova.storage.rbd_utils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.288 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d/disk.config 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005087256625643029 of space, bias 1.0, pg target 0.15261769876929088 quantized to 32 (current 32)
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.464 2 DEBUG oslo_concurrency.processutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d/disk.config 7c5d0818-0647-43a7-aaa1-b875b8b8424d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.466 2 INFO nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Deleting local config drive /var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d/disk.config because it was imported into RBD.#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.484 2 INFO nova.virt.libvirt.driver [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Snapshot image upload complete#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.485 2 INFO nova.compute.manager [None req-0da5e771-6dde-43b2-b571-f470f29448a5 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Took 4.51 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 04:23:38 np0005465604 kernel: tap6cf06b33-eb: entered promiscuous mode
Oct  2 04:23:38 np0005465604 NetworkManager[45129]: <info>  [1759393418.5329] manager: (tap6cf06b33-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Oct  2 04:23:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:38Z|00156|binding|INFO|Claiming lport 6cf06b33-eb42-471e-9c4b-9119daeb78c1 for this chassis.
Oct  2 04:23:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:38Z|00157|binding|INFO|6cf06b33-eb42-471e-9c4b-9119daeb78c1: Claiming fa:16:3e:78:42:57 10.100.0.5
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.542 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:42:57 10.100.0.5'], port_security=['fa:16:3e:78:42:57 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7c5d0818-0647-43a7-aaa1-b875b8b8424d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4066c489424391bd4a75b195bd5011', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1aba9f6e-efc2-4ae1-83f0-6308a1293c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f5b9ac6-d3ea-442b-a1a3-0f4eb2329dfd, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6cf06b33-eb42-471e-9c4b-9119daeb78c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.543 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6cf06b33-eb42-471e-9c4b-9119daeb78c1 in datapath c8aeabca-6b5c-477a-9156-9f9592c20b93 bound to our chassis#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.545 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8aeabca-6b5c-477a-9156-9f9592c20b93#033[00m
Oct  2 04:23:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:38Z|00158|binding|INFO|Setting lport 6cf06b33-eb42-471e-9c4b-9119daeb78c1 ovn-installed in OVS
Oct  2 04:23:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:38Z|00159|binding|INFO|Setting lport 6cf06b33-eb42-471e-9c4b-9119daeb78c1 up in Southbound
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.565 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b2b5b0-cfaf-4956-a869-1d655c85c910]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.567 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc8aeabca-61 in ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.571 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc8aeabca-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.571 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4d59c77a-70bd-42fe-b10e-4b5a58f24b50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.572 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f31a7981-7265-4c2a-8b77-a2ea8f250228]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 systemd-udevd[292964]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.593 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae87791-b0f5-4737-8c77-77c0eadeb425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 systemd-machined[214636]: New machine qemu-30-instance-0000001a.
Oct  2 04:23:38 np0005465604 NetworkManager[45129]: <info>  [1759393418.6093] device (tap6cf06b33-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:23:38 np0005465604 NetworkManager[45129]: <info>  [1759393418.6114] device (tap6cf06b33-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:23:38 np0005465604 systemd[1]: Started Virtual Machine qemu-30-instance-0000001a.
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.627 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[729ea7f4-7dea-4054-871c-b158cadd7d54]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.669 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f49cc8-bf5f-470f-8482-3e208aec8a6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.678 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa363e21-ad72-47f1-8c82-b9378b3d2d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 NetworkManager[45129]: <info>  [1759393418.6803] manager: (tapc8aeabca-60): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.735 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[567690bd-9f2a-4b6b-8e52-6f6650f10b1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.739 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e9936c-813d-40d0-876f-cc3b0f4fae9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 NetworkManager[45129]: <info>  [1759393418.7787] device (tapc8aeabca-60): carrier: link connected
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.789 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[62cc48ff-49d8-4c7b-ac09-15c8185827f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 180 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.1 MiB/s wr, 185 op/s
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.821 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[da3139e0-5e44-4f73-935b-cbc54db61d51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8aeabca-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:61:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429738, 'reachable_time': 36342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292997, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.842 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b623126-6401-4132-8a29-0a72438c973e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:61ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429738, 'tstamp': 429738}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292998, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.865 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd486ea0-08e4-4fb6-abae-f0f51af5f678]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8aeabca-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:61:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429738, 'reachable_time': 36342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292999, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.896 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e5e72a-0d6e-4f30-90e1-c5475702d3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.965 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[20cd38ca-8fa3-4c4c-8284-b55ac52df7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.967 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8aeabca-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.967 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.967 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8aeabca-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:38 np0005465604 kernel: tapc8aeabca-60: entered promiscuous mode
Oct  2 04:23:38 np0005465604 NetworkManager[45129]: <info>  [1759393418.9707] manager: (tapc8aeabca-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.974 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8aeabca-60, col_values=(('external_ids', {'iface-id': '4db7ab1a-8260-4cf1-9ae1-c00a351cfe04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:38Z|00160|binding|INFO|Releasing lport 4db7ab1a-8260-4cf1-9ae1-c00a351cfe04 from this chassis (sb_readonly=0)
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.977 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.977 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d52c74b6-8c4e-45f8-b68c-cc373b293319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.978 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-c8aeabca-6b5c-477a-9156-9f9592c20b93
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:23:38 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:38 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID c8aeabca-6b5c-477a-9156-9f9592c20b93
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:23:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:38.986 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'env', 'PROCESS_TAG=haproxy-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c8aeabca-6b5c-477a-9156-9f9592c20b93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:23:38 np0005465604 nova_compute[260603]: 2025-10-02 08:23:38.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:39 np0005465604 podman[293032]: 2025-10-02 08:23:39.382697377 +0000 UTC m=+0.066013641 container create 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 04:23:39 np0005465604 systemd[1]: Started libpod-conmon-002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952.scope.
Oct  2 04:23:39 np0005465604 podman[293032]: 2025-10-02 08:23:39.34278165 +0000 UTC m=+0.026097924 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:23:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:23:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc34d9a9787cb780b9ef5d6332cf0d5803f416d2f51f980b7daa992b30f3c9a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:39 np0005465604 podman[293032]: 2025-10-02 08:23:39.483126665 +0000 UTC m=+0.166442929 container init 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:23:39 np0005465604 podman[293032]: 2025-10-02 08:23:39.488784847 +0000 UTC m=+0.172101111 container start 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:23:39 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [NOTICE]   (293051) : New worker (293053) forked
Oct  2 04:23:39 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [NOTICE]   (293051) : Loading success.
Oct  2 04:23:39 np0005465604 nova_compute[260603]: 2025-10-02 08:23:39.695 2 DEBUG nova.compute.manager [req-943a2eec-30d5-4932-9b8c-b2684e890213 req-ef3f309c-4bf7-4e1a-8258-acc005115153 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:39 np0005465604 nova_compute[260603]: 2025-10-02 08:23:39.695 2 DEBUG oslo_concurrency.lockutils [req-943a2eec-30d5-4932-9b8c-b2684e890213 req-ef3f309c-4bf7-4e1a-8258-acc005115153 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:39 np0005465604 nova_compute[260603]: 2025-10-02 08:23:39.696 2 DEBUG oslo_concurrency.lockutils [req-943a2eec-30d5-4932-9b8c-b2684e890213 req-ef3f309c-4bf7-4e1a-8258-acc005115153 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:39 np0005465604 nova_compute[260603]: 2025-10-02 08:23:39.696 2 DEBUG oslo_concurrency.lockutils [req-943a2eec-30d5-4932-9b8c-b2684e890213 req-ef3f309c-4bf7-4e1a-8258-acc005115153 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:39 np0005465604 nova_compute[260603]: 2025-10-02 08:23:39.696 2 DEBUG nova.compute.manager [req-943a2eec-30d5-4932-9b8c-b2684e890213 req-ef3f309c-4bf7-4e1a-8258-acc005115153 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Processing event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:23:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct  2 04:23:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct  2 04:23:40 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.508 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393420.5076969, 7c5d0818-0647-43a7-aaa1-b875b8b8424d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.509 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] VM Started (Lifecycle Event)#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.511 2 DEBUG nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.516 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.521 2 INFO nova.virt.libvirt.driver [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Instance spawned successfully.#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.521 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.546 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.551 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.561 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.562 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.562 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.563 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.564 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.565 2 DEBUG nova.virt.libvirt.driver [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.601 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.601 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393420.5089803, 7c5d0818-0647-43a7-aaa1-b875b8b8424d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.602 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.633 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.637 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393420.5157433, 7c5d0818-0647-43a7-aaa1-b875b8b8424d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.638 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.649 2 INFO nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Took 8.26 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.649 2 DEBUG nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.660 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.664 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.695 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.720 2 INFO nova.compute.manager [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Took 9.14 seconds to build instance.#033[00m
Oct  2 04:23:40 np0005465604 nova_compute[260603]: 2025-10-02 08:23:40.737 2 DEBUG oslo_concurrency.lockutils [None req-6360bb84-982f-4623-9e39-e7c9d522373f 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 180 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.6 MiB/s wr, 123 op/s
Oct  2 04:23:41 np0005465604 podman[293104]: 2025-10-02 08:23:41.010225622 +0000 UTC m=+0.077749128 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.142 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.142 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.143 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.143 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.143 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.145 2 INFO nova.compute.manager [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Terminating instance#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.146 2 DEBUG nova.compute.manager [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:23:41 np0005465604 kernel: tap368af35e-e9 (unregistering): left promiscuous mode
Oct  2 04:23:41 np0005465604 NetworkManager[45129]: <info>  [1759393421.1931] device (tap368af35e-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:41Z|00161|binding|INFO|Releasing lport 368af35e-e91a-4677-ac5f-ee6169dbd8f0 from this chassis (sb_readonly=0)
Oct  2 04:23:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:41Z|00162|binding|INFO|Setting lport 368af35e-e91a-4677-ac5f-ee6169dbd8f0 down in Southbound
Oct  2 04:23:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:41Z|00163|binding|INFO|Removing iface tap368af35e-e9 ovn-installed in OVS
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.227 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:03:c3 10.100.0.5'], port_security=['fa:16:3e:36:03:c3 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '284aaef3-b320-49e2-a541-b17eb4eb208f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=368af35e-e91a-4677-ac5f-ee6169dbd8f0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.229 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 368af35e-e91a-4677-ac5f-ee6169dbd8f0 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.230 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.231 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc3e46d-c877-4576-9973-1f4a78d0a471]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.231 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace which is not needed anymore#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:41 np0005465604 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct  2 04:23:41 np0005465604 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Consumed 3.051s CPU time.
Oct  2 04:23:41 np0005465604 systemd-machined[214636]: Machine qemu-29-instance-00000019 terminated.
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.392 2 INFO nova.virt.libvirt.driver [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Instance destroyed successfully.#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.393 2 DEBUG nova.objects.instance [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'resources' on Instance uuid 284aaef3-b320-49e2-a541-b17eb4eb208f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.409 2 DEBUG nova.virt.libvirt.vif [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:23:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-924493590',display_name='tempest-ImagesTestJSON-server-924493590',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-924493590',id=25,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:23:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-f9z2qxia',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:23:38Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=284aaef3-b320-49e2-a541-b17eb4eb208f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.411 2 DEBUG nova.network.os_vif_util [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "address": "fa:16:3e:36:03:c3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap368af35e-e9", "ovs_interfaceid": "368af35e-e91a-4677-ac5f-ee6169dbd8f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.413 2 DEBUG nova.network.os_vif_util [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:03:c3,bridge_name='br-int',has_traffic_filtering=True,id=368af35e-e91a-4677-ac5f-ee6169dbd8f0,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap368af35e-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.414 2 DEBUG os_vif [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:03:c3,bridge_name='br-int',has_traffic_filtering=True,id=368af35e-e91a-4677-ac5f-ee6169dbd8f0,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap368af35e-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap368af35e-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:23:41 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [NOTICE]   (292447) : haproxy version is 2.8.14-c23fe91
Oct  2 04:23:41 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [NOTICE]   (292447) : path to executable is /usr/sbin/haproxy
Oct  2 04:23:41 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [WARNING]  (292447) : Exiting Master process...
Oct  2 04:23:41 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [WARNING]  (292447) : Exiting Master process...
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.430 2 INFO os_vif [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:03:c3,bridge_name='br-int',has_traffic_filtering=True,id=368af35e-e91a-4677-ac5f-ee6169dbd8f0,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap368af35e-e9')#033[00m
Oct  2 04:23:41 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [ALERT]    (292447) : Current worker (292449) exited with code 143 (Terminated)
Oct  2 04:23:41 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[292443]: [WARNING]  (292447) : All workers exited. Exiting... (0)
Oct  2 04:23:41 np0005465604 systemd[1]: libpod-f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204.scope: Deactivated successfully.
Oct  2 04:23:41 np0005465604 conmon[292443]: conmon f66990656dc36436cbcb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204.scope/container/memory.events
Oct  2 04:23:41 np0005465604 podman[293146]: 2025-10-02 08:23:41.441519928 +0000 UTC m=+0.060932075 container died f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:23:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204-userdata-shm.mount: Deactivated successfully.
Oct  2 04:23:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2bff6eb4a3f402d26d136c7162ad878ef5efad0cc6f70e47e24c61d9d48c5b98-merged.mount: Deactivated successfully.
Oct  2 04:23:41 np0005465604 podman[293146]: 2025-10-02 08:23:41.49058513 +0000 UTC m=+0.109997277 container cleanup f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:23:41 np0005465604 systemd[1]: libpod-conmon-f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204.scope: Deactivated successfully.
Oct  2 04:23:41 np0005465604 podman[293198]: 2025-10-02 08:23:41.580026204 +0000 UTC m=+0.061081020 container remove f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.585 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[919b852c-af2e-4fb3-ad58-06f4eb2f98eb]: (4, ('Thu Oct  2 08:23:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204)\nf66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204\nThu Oct  2 08:23:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (f66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204)\nf66990656dc36436cbcb4b6faf00b10522bab5bfc0f9e20badbb82355bf76204\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.587 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a10aa023-f855-4bdb-9313-74fb5450408f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.588 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:41 np0005465604 kernel: tap897d7abf-90: left promiscuous mode
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.640 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c37a31c9-aeac-49f9-9086-62ab3dbb0747]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.665 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[508f8864-7e64-4841-8d39-4653876047a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.666 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9049c0-55b6-4c31-92c8-0a1167fc5d7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.690 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc10053-b537-4fe8-af08-bb64db0510f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428578, 'reachable_time': 15769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293212, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:41 np0005465604 systemd[1]: run-netns-ovnmeta\x2d897d7abf\x2d9e23\x2d43cd\x2d8f60\x2d7156792a4360.mount: Deactivated successfully.
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.693 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:23:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:41.693 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[6e69cf00-8987-4e61-98d5-3f6ee1280344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.844 2 INFO nova.virt.libvirt.driver [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Deleting instance files /var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f_del#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.845 2 INFO nova.virt.libvirt.driver [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Deletion of /var/lib/nova/instances/284aaef3-b320-49e2-a541-b17eb4eb208f_del complete#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.890 2 INFO nova.compute.manager [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.891 2 DEBUG oslo.service.loopingcall [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.891 2 DEBUG nova.compute.manager [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:23:41 np0005465604 nova_compute[260603]: 2025-10-02 08:23:41.891 2 DEBUG nova.network.neutron [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:23:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct  2 04:23:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct  2 04:23:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct  2 04:23:42 np0005465604 nova_compute[260603]: 2025-10-02 08:23:42.356 2 DEBUG nova.compute.manager [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:42 np0005465604 nova_compute[260603]: 2025-10-02 08:23:42.357 2 DEBUG oslo_concurrency.lockutils [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:42 np0005465604 nova_compute[260603]: 2025-10-02 08:23:42.358 2 DEBUG oslo_concurrency.lockutils [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:42 np0005465604 nova_compute[260603]: 2025-10-02 08:23:42.358 2 DEBUG oslo_concurrency.lockutils [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:42 np0005465604 nova_compute[260603]: 2025-10-02 08:23:42.359 2 DEBUG nova.compute.manager [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] No waiting events found dispatching network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:23:42 np0005465604 nova_compute[260603]: 2025-10-02 08:23:42.359 2 WARNING nova.compute.manager [req-6caa47af-0cd1-4a50-9300-e96ff7fd9404 req-136b6ecb-ac7e-4aea-9aa5-7b162bc908f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received unexpected event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:23:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 108 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.5 MiB/s wr, 279 op/s
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.753 2 DEBUG nova.network.neutron [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.777 2 INFO nova.compute.manager [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Took 2.89 seconds to deallocate network for instance.#033[00m
Oct  2 04:23:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.2 MiB/s wr, 292 op/s
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.840 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.841 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.911 2 DEBUG nova.compute.manager [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received event network-vif-unplugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.912 2 DEBUG oslo_concurrency.lockutils [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.913 2 DEBUG oslo_concurrency.lockutils [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.913 2 DEBUG oslo_concurrency.lockutils [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.913 2 DEBUG nova.compute.manager [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] No waiting events found dispatching network-vif-unplugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.914 2 WARNING nova.compute.manager [req-04a7ff71-8277-46ac-bbc2-69020dc945bc req-fa79189e-8a5a-4421-be8d-41b38f7da9b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received unexpected event network-vif-unplugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.925 2 DEBUG oslo_concurrency.processutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:44 np0005465604 nova_compute[260603]: 2025-10-02 08:23:44.974 2 DEBUG nova.compute.manager [req-ea2e9b60-4c29-42af-a08d-b0b4521220dc req-e457bf60-2de2-46f6-ac05-dada35f291a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received event network-vif-deleted-368af35e-e91a-4677-ac5f-ee6169dbd8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:45 np0005465604 podman[293317]: 2025-10-02 08:23:45.15654253 +0000 UTC m=+0.089486817 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd)
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120033406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.451 2 DEBUG oslo_concurrency.processutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.459 2 DEBUG nova.compute.provider_tree [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.490 2 DEBUG nova.scheduler.client.report [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.519 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.555 2 INFO nova.scheduler.client.report [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Deleted allocations for instance 284aaef3-b320-49e2-a541-b17eb4eb208f#033[00m
Oct  2 04:23:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:45.620 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:23:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:45.621 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.640 2 DEBUG oslo_concurrency.lockutils [None req-3378963a-3e9c-490c-b5f7-be24945c26bf 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:23:45 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 53045044-d76a-4afe-8566-56f0651d222a does not exist
Oct  2 04:23:45 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9a603c03-4c82-409d-a6cb-3b7f9aeab2cb does not exist
Oct  2 04:23:45 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c0e96ea5-d285-4f12-b0df-e274fd9a580e does not exist
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:23:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.745 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.746 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.768 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.858 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.859 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.868 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:23:45 np0005465604 nova_compute[260603]: 2025-10-02 08:23:45.869 2 INFO nova.compute.claims [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.015 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:46 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:23:46 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:23:46 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.302 2 DEBUG nova.compute.manager [None req-add5470f-8a10-444c-8c81-b9770446aeda 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.352 2 INFO nova.compute.manager [None req-add5470f-8a10-444c-8c81-b9770446aeda 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] instance snapshotting#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:23:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3566674992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:23:46 np0005465604 podman[293552]: 2025-10-02 08:23:46.468468259 +0000 UTC m=+0.064186100 container create 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.481 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.488 2 DEBUG nova.compute.provider_tree [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.510 2 DEBUG nova.scheduler.client.report [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:23:46 np0005465604 podman[293552]: 2025-10-02 08:23:46.430978771 +0000 UTC m=+0.026696692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:23:46 np0005465604 systemd[1]: Started libpod-conmon-1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125.scope.
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.555 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.556 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:23:46 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:23:46 np0005465604 podman[293552]: 2025-10-02 08:23:46.591676603 +0000 UTC m=+0.187394514 container init 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.592 2 WARNING nova.compute.manager [None req-add5470f-8a10-444c-8c81-b9770446aeda 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Image not found during snapshot: nova.exception.ImageNotFound: Image ce2d4960-dbf0-47ba-910e-94d22e6c3390 could not be found.#033[00m
Oct  2 04:23:46 np0005465604 podman[293552]: 2025-10-02 08:23:46.603946958 +0000 UTC m=+0.199664799 container start 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 04:23:46 np0005465604 podman[293552]: 2025-10-02 08:23:46.607254084 +0000 UTC m=+0.202971955 container attach 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:23:46 np0005465604 focused_snyder[293570]: 167 167
Oct  2 04:23:46 np0005465604 systemd[1]: libpod-1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125.scope: Deactivated successfully.
Oct  2 04:23:46 np0005465604 podman[293552]: 2025-10-02 08:23:46.609207237 +0000 UTC m=+0.204925098 container died 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.620 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.621 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:23:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-703d15ead9784e0364b9a4b33932111a1a3ba0f21293a80968b7ce03acb96a80-merged.mount: Deactivated successfully.
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.646 2 INFO nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.664 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:23:46 np0005465604 podman[293552]: 2025-10-02 08:23:46.698638011 +0000 UTC m=+0.294355882 container remove 1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_snyder, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 04:23:46 np0005465604 systemd[1]: libpod-conmon-1b7f18f23ba9a77c9d5949c160e27a2b55b48c9f5de864cbf73010d4a5004125.scope: Deactivated successfully.
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.755 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.757 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.757 2 INFO nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Creating image(s)#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.782 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.809 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 194 op/s
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.837 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.850 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:46 np0005465604 podman[293647]: 2025-10-02 08:23:46.889681801 +0000 UTC m=+0.040062052 container create ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.908 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.910 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.910 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.911 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:46 np0005465604 systemd[1]: Started libpod-conmon-ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425.scope.
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.943 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:46 np0005465604 nova_compute[260603]: 2025-10-02 08:23:46.947 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:46 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:23:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:46 np0005465604 podman[293647]: 2025-10-02 08:23:46.87260082 +0000 UTC m=+0.022981101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:23:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct  2 04:23:46 np0005465604 podman[293647]: 2025-10-02 08:23:46.981709998 +0000 UTC m=+0.132090269 container init ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:23:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct  2 04:23:46 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct  2 04:23:46 np0005465604 podman[293647]: 2025-10-02 08:23:46.988605481 +0000 UTC m=+0.138985742 container start ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:23:46 np0005465604 podman[293647]: 2025-10-02 08:23:46.993806979 +0000 UTC m=+0.144187250 container attach ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.189 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.277 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] resizing rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.408 2 DEBUG nova.objects.instance [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'migration_context' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.428 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.428 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Ensure instance console log exists: /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.429 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.429 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.430 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.643 2 DEBUG nova.policy [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6747651cfdcc4f868c43b9d78f5846c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b1e1170f2e4a73aaf396476bc82261', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.941 2 DEBUG nova.compute.manager [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received event network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.943 2 DEBUG oslo_concurrency.lockutils [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.944 2 DEBUG oslo_concurrency.lockutils [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.944 2 DEBUG oslo_concurrency.lockutils [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "284aaef3-b320-49e2-a541-b17eb4eb208f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.944 2 DEBUG nova.compute.manager [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] No waiting events found dispatching network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:23:47 np0005465604 nova_compute[260603]: 2025-10-02 08:23:47.945 2 WARNING nova.compute.manager [req-bd302b0a-83de-4b43-9fd1-26907a0ecaa2 req-cacb2f9b-cfff-4be0-b426-5ec041ffefcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Received unexpected event network-vif-plugged-368af35e-e91a-4677-ac5f-ee6169dbd8f0 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:23:48 np0005465604 trusting_ishizaka[293683]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:23:48 np0005465604 trusting_ishizaka[293683]: --> relative data size: 1.0
Oct  2 04:23:48 np0005465604 trusting_ishizaka[293683]: --> All data devices are unavailable
Oct  2 04:23:48 np0005465604 systemd[1]: libpod-ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425.scope: Deactivated successfully.
Oct  2 04:23:48 np0005465604 systemd[1]: libpod-ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425.scope: Consumed 1.014s CPU time.
Oct  2 04:23:48 np0005465604 podman[293806]: 2025-10-02 08:23:48.120473115 +0000 UTC m=+0.030118963 container died ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 04:23:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bc21d44ccc749c9b881c88e141e3a63bdd3346b0ee5192e91a15ac0a5f5afd73-merged.mount: Deactivated successfully.
Oct  2 04:23:48 np0005465604 podman[293806]: 2025-10-02 08:23:48.189863192 +0000 UTC m=+0.099508950 container remove ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 04:23:48 np0005465604 systemd[1]: libpod-conmon-ea5fec9e3af47cfc6118a54cc375645571dc7ba6ec91743a02bf7f6fc8e10425.scope: Deactivated successfully.
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.478 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.478 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.478 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.478 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.478 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.479 2 INFO nova.compute.manager [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Terminating instance#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.480 2 DEBUG nova.compute.manager [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:23:48 np0005465604 kernel: tap6cf06b33-eb (unregistering): left promiscuous mode
Oct  2 04:23:48 np0005465604 NetworkManager[45129]: <info>  [1759393428.5273] device (tap6cf06b33-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:23:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:48Z|00164|binding|INFO|Releasing lport 6cf06b33-eb42-471e-9c4b-9119daeb78c1 from this chassis (sb_readonly=0)
Oct  2 04:23:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:48Z|00165|binding|INFO|Setting lport 6cf06b33-eb42-471e-9c4b-9119daeb78c1 down in Southbound
Oct  2 04:23:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:48Z|00166|binding|INFO|Removing iface tap6cf06b33-eb ovn-installed in OVS
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.545 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:42:57 10.100.0.5'], port_security=['fa:16:3e:78:42:57 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7c5d0818-0647-43a7-aaa1-b875b8b8424d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4066c489424391bd4a75b195bd5011', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1aba9f6e-efc2-4ae1-83f0-6308a1293c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f5b9ac6-d3ea-442b-a1a3-0f4eb2329dfd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6cf06b33-eb42-471e-9c4b-9119daeb78c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.547 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6cf06b33-eb42-471e-9c4b-9119daeb78c1 in datapath c8aeabca-6b5c-477a-9156-9f9592c20b93 unbound from our chassis#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.550 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c8aeabca-6b5c-477a-9156-9f9592c20b93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.551 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[623dc11d-551e-408b-acb2-f1e235eaf86c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.552 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 namespace which is not needed anymore#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:48 np0005465604 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct  2 04:23:48 np0005465604 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Consumed 9.603s CPU time.
Oct  2 04:23:48 np0005465604 systemd-machined[214636]: Machine qemu-30-instance-0000001a terminated.
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.721 2 INFO nova.virt.libvirt.driver [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Instance destroyed successfully.#033[00m
Oct  2 04:23:48 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [NOTICE]   (293051) : haproxy version is 2.8.14-c23fe91
Oct  2 04:23:48 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [NOTICE]   (293051) : path to executable is /usr/sbin/haproxy
Oct  2 04:23:48 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [WARNING]  (293051) : Exiting Master process...
Oct  2 04:23:48 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [WARNING]  (293051) : Exiting Master process...
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.722 2 DEBUG nova.objects.instance [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'resources' on Instance uuid 7c5d0818-0647-43a7-aaa1-b875b8b8424d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:48 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [ALERT]    (293051) : Current worker (293053) exited with code 143 (Terminated)
Oct  2 04:23:48 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[293047]: [WARNING]  (293051) : All workers exited. Exiting... (0)
Oct  2 04:23:48 np0005465604 systemd[1]: libpod-002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952.scope: Deactivated successfully.
Oct  2 04:23:48 np0005465604 podman[293944]: 2025-10-02 08:23:48.735783614 +0000 UTC m=+0.078415580 container died 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.743 2 DEBUG nova.virt.libvirt.vif [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:23:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-571471940',display_name='tempest-ImagesOneServerNegativeTestJSON-server-571471940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-571471940',id=26,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:23:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-ntyvpu68',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:23:46Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=7c5d0818-0647-43a7-aaa1-b875b8b8424d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.744 2 DEBUG nova.network.os_vif_util [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "address": "fa:16:3e:78:42:57", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cf06b33-eb", "ovs_interfaceid": "6cf06b33-eb42-471e-9c4b-9119daeb78c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.746 2 DEBUG nova.network.os_vif_util [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:42:57,bridge_name='br-int',has_traffic_filtering=True,id=6cf06b33-eb42-471e-9c4b-9119daeb78c1,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cf06b33-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.746 2 DEBUG os_vif [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:42:57,bridge_name='br-int',has_traffic_filtering=True,id=6cf06b33-eb42-471e-9c4b-9119daeb78c1,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cf06b33-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.750 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cf06b33-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.760 2 INFO os_vif [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:42:57,bridge_name='br-int',has_traffic_filtering=True,id=6cf06b33-eb42-471e-9c4b-9119daeb78c1,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cf06b33-eb')#033[00m
Oct  2 04:23:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952-userdata-shm.mount: Deactivated successfully.
Oct  2 04:23:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7fc34d9a9787cb780b9ef5d6332cf0d5803f416d2f51f980b7daa992b30f3c9a-merged.mount: Deactivated successfully.
Oct  2 04:23:48 np0005465604 podman[293944]: 2025-10-02 08:23:48.810340948 +0000 UTC m=+0.152972874 container cleanup 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:23:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 130 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 233 op/s
Oct  2 04:23:48 np0005465604 systemd[1]: libpod-conmon-002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952.scope: Deactivated successfully.
Oct  2 04:23:48 np0005465604 podman[294022]: 2025-10-02 08:23:48.88328615 +0000 UTC m=+0.048889387 container remove 002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.891 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dc126534-85c7-40c7-a027-efb00446c3f2]: (4, ('Thu Oct  2 08:23:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 (002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952)\n002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952\nThu Oct  2 08:23:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 (002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952)\n002237f94fa8fa612a7d89a051096a278e3f5571a4a8013387daebb6c9b67952\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.893 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb33c94-1d19-4bcf-8601-e914e42477da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.894 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8aeabca-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:48 np0005465604 kernel: tapc8aeabca-60: left promiscuous mode
Oct  2 04:23:48 np0005465604 nova_compute[260603]: 2025-10-02 08:23:48.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.914 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b8230b8b-b8ec-424e-b3e3-13147b262820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.936 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[58fcdd5c-1ab9-4c92-b93d-257fbca10b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.938 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb1f110-531e-4422-ad3a-5807bbbc9da0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.956 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4cbb72-de34-4905-860e-546c377ba57b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429726, 'reachable_time': 37791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294059, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:48 np0005465604 systemd[1]: run-netns-ovnmeta\x2dc8aeabca\x2d6b5c\x2d477a\x2d9156\x2d9f9592c20b93.mount: Deactivated successfully.
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.961 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:23:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:48.961 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[48daa443-08e8-40ac-ba6f-e435873f72fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:49 np0005465604 podman[294061]: 2025-10-02 08:23:49.042839724 +0000 UTC m=+0.057986640 container create 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:23:49 np0005465604 systemd[1]: Started libpod-conmon-86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2.scope.
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.105 2 INFO nova.virt.libvirt.driver [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Deleting instance files /var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d_del#033[00m
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.107 2 INFO nova.virt.libvirt.driver [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Deletion of /var/lib/nova/instances/7c5d0818-0647-43a7-aaa1-b875b8b8424d_del complete#033[00m
Oct  2 04:23:49 np0005465604 podman[294061]: 2025-10-02 08:23:49.01603905 +0000 UTC m=+0.031185996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.111 2 DEBUG nova.compute.manager [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-unplugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.112 2 DEBUG oslo_concurrency.lockutils [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.112 2 DEBUG oslo_concurrency.lockutils [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.113 2 DEBUG oslo_concurrency.lockutils [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.113 2 DEBUG nova.compute.manager [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] No waiting events found dispatching network-vif-unplugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.113 2 DEBUG nova.compute.manager [req-fb3d0ca4-8620-4be7-bc07-365e9f011d9b req-6067b608-44c2-4483-a8a0-4cc0d49e59dc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-unplugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:23:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:23:49 np0005465604 podman[294061]: 2025-10-02 08:23:49.134731047 +0000 UTC m=+0.149877953 container init 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 04:23:49 np0005465604 podman[294061]: 2025-10-02 08:23:49.141999082 +0000 UTC m=+0.157145988 container start 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 04:23:49 np0005465604 podman[294061]: 2025-10-02 08:23:49.144615386 +0000 UTC m=+0.159762302 container attach 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:23:49 np0005465604 confident_shamir[294078]: 167 167
Oct  2 04:23:49 np0005465604 systemd[1]: libpod-86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2.scope: Deactivated successfully.
Oct  2 04:23:49 np0005465604 podman[294061]: 2025-10-02 08:23:49.150811755 +0000 UTC m=+0.165958661 container died 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.176 2 INFO nova.compute.manager [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.178 2 DEBUG oslo.service.loopingcall [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:23:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3c59f23936012ba7a6bdedf9448271b0a88465c0085adf40058df1a6a6593e19-merged.mount: Deactivated successfully.
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.182 2 DEBUG nova.compute.manager [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.183 2 DEBUG nova.network.neutron [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:23:49 np0005465604 podman[294061]: 2025-10-02 08:23:49.188885343 +0000 UTC m=+0.204032239 container remove 86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_shamir, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 04:23:49 np0005465604 systemd[1]: libpod-conmon-86aeca248a2192bd0afa07a922adfd617aeddabf131e4b9c2d933bac4bb2bed2.scope: Deactivated successfully.
Oct  2 04:23:49 np0005465604 nova_compute[260603]: 2025-10-02 08:23:49.423 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Successfully created port: b546f674-89d3-44f3-82c4-9426c5910017 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:23:49 np0005465604 podman[294103]: 2025-10-02 08:23:49.443832854 +0000 UTC m=+0.059101087 container create a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 04:23:49 np0005465604 systemd[1]: Started libpod-conmon-a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba.scope.
Oct  2 04:23:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:23:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13000f181c4b8a3a33c5661566e7980634f28bcbe5eb6aa095aaf2763859f898/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13000f181c4b8a3a33c5661566e7980634f28bcbe5eb6aa095aaf2763859f898/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:49 np0005465604 podman[294103]: 2025-10-02 08:23:49.420450329 +0000 UTC m=+0.035718572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:23:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13000f181c4b8a3a33c5661566e7980634f28bcbe5eb6aa095aaf2763859f898/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13000f181c4b8a3a33c5661566e7980634f28bcbe5eb6aa095aaf2763859f898/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:49 np0005465604 podman[294103]: 2025-10-02 08:23:49.529203996 +0000 UTC m=+0.144472299 container init a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 04:23:49 np0005465604 podman[294103]: 2025-10-02 08:23:49.545963336 +0000 UTC m=+0.161231569 container start a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:23:49 np0005465604 podman[294103]: 2025-10-02 08:23:49.549598943 +0000 UTC m=+0.164867206 container attach a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]: {
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:    "0": [
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:        {
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "devices": [
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "/dev/loop3"
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            ],
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_name": "ceph_lv0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_size": "21470642176",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "name": "ceph_lv0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "tags": {
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.cluster_name": "ceph",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.crush_device_class": "",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.encrypted": "0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.osd_id": "0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.type": "block",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.vdo": "0"
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            },
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "type": "block",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "vg_name": "ceph_vg0"
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:        }
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:    ],
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:    "1": [
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:        {
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "devices": [
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "/dev/loop4"
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            ],
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_name": "ceph_lv1",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_size": "21470642176",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "name": "ceph_lv1",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "tags": {
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.cluster_name": "ceph",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.crush_device_class": "",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.encrypted": "0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.osd_id": "1",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.type": "block",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.vdo": "0"
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            },
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "type": "block",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "vg_name": "ceph_vg1"
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:        }
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:    ],
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:    "2": [
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:        {
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "devices": [
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "/dev/loop5"
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            ],
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_name": "ceph_lv2",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_size": "21470642176",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "name": "ceph_lv2",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "tags": {
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.cluster_name": "ceph",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.crush_device_class": "",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.encrypted": "0",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.osd_id": "2",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.type": "block",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:                "ceph.vdo": "0"
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            },
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "type": "block",
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:            "vg_name": "ceph_vg2"
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:        }
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]:    ]
Oct  2 04:23:50 np0005465604 crazy_lovelace[294119]: }
Oct  2 04:23:50 np0005465604 systemd[1]: libpod-a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba.scope: Deactivated successfully.
Oct  2 04:23:50 np0005465604 podman[294103]: 2025-10-02 08:23:50.278582548 +0000 UTC m=+0.893850861 container died a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:23:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-13000f181c4b8a3a33c5661566e7980634f28bcbe5eb6aa095aaf2763859f898-merged.mount: Deactivated successfully.
Oct  2 04:23:50 np0005465604 podman[294103]: 2025-10-02 08:23:50.329961044 +0000 UTC m=+0.945229267 container remove a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 04:23:50 np0005465604 systemd[1]: libpod-conmon-a1e473938a2589f39c5ad52dd148ecbebc3a779fd9f59c753afcd90077026bba.scope: Deactivated successfully.
Oct  2 04:23:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 130 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 211 op/s
Oct  2 04:23:50 np0005465604 nova_compute[260603]: 2025-10-02 08:23:50.858 2 DEBUG nova.network.neutron [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:50 np0005465604 nova_compute[260603]: 2025-10-02 08:23:50.886 2 INFO nova.compute.manager [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Took 1.70 seconds to deallocate network for instance.#033[00m
Oct  2 04:23:50 np0005465604 nova_compute[260603]: 2025-10-02 08:23:50.939 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:50 np0005465604 nova_compute[260603]: 2025-10-02 08:23:50.940 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:50 np0005465604 nova_compute[260603]: 2025-10-02 08:23:50.979 2 DEBUG nova.compute.manager [req-723000c3-9a8f-4678-9af1-c0ad02dfe188 req-a60d4b8a-0495-4ff5-a68b-0489679f0565 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-deleted-6cf06b33-eb42-471e-9c4b-9119daeb78c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.001 2 DEBUG oslo_concurrency.processutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:51 np0005465604 podman[294282]: 2025-10-02 08:23:51.033891711 +0000 UTC m=+0.070377270 container create 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:23:51 np0005465604 systemd[1]: Started libpod-conmon-15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091.scope.
Oct  2 04:23:51 np0005465604 podman[294282]: 2025-10-02 08:23:50.993380725 +0000 UTC m=+0.029866294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:23:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:23:51 np0005465604 podman[294282]: 2025-10-02 08:23:51.121060711 +0000 UTC m=+0.157546280 container init 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:23:51 np0005465604 podman[294282]: 2025-10-02 08:23:51.133286175 +0000 UTC m=+0.169771704 container start 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:23:51 np0005465604 romantic_tesla[294299]: 167 167
Oct  2 04:23:51 np0005465604 systemd[1]: libpod-15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091.scope: Deactivated successfully.
Oct  2 04:23:51 np0005465604 podman[294282]: 2025-10-02 08:23:51.13870385 +0000 UTC m=+0.175189369 container attach 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:23:51 np0005465604 podman[294282]: 2025-10-02 08:23:51.140053753 +0000 UTC m=+0.176539292 container died 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:23:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-53e7f1d8f17e8b844cdaaeb80e789ec06b5f439a8a14ccd17a8b076b356edd9e-merged.mount: Deactivated successfully.
Oct  2 04:23:51 np0005465604 podman[294282]: 2025-10-02 08:23:51.172356715 +0000 UTC m=+0.208842224 container remove 15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tesla, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:23:51 np0005465604 systemd[1]: libpod-conmon-15c55163745115ed872ef530a891505528a85128ecb37d9fa3318ba1f074e091.scope: Deactivated successfully.
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.234 2 DEBUG nova.compute.manager [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.235 2 DEBUG oslo_concurrency.lockutils [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.236 2 DEBUG oslo_concurrency.lockutils [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.236 2 DEBUG oslo_concurrency.lockutils [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.236 2 DEBUG nova.compute.manager [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] No waiting events found dispatching network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.236 2 WARNING nova.compute.manager [req-8bd52909-fdd0-446d-a977-9d5df77a9b73 req-3e8b2900-7ad0-49c7-9529-22636cbbd355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Received unexpected event network-vif-plugged-6cf06b33-eb42-471e-9c4b-9119daeb78c1 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:23:51 np0005465604 podman[294341]: 2025-10-02 08:23:51.318724375 +0000 UTC m=+0.038612916 container create 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 04:23:51 np0005465604 systemd[1]: Started libpod-conmon-412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8.scope.
Oct  2 04:23:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:23:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078c393a78508a8006bd5110a184264c8da45f4206687b09b2f0603e6b3d9922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078c393a78508a8006bd5110a184264c8da45f4206687b09b2f0603e6b3d9922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078c393a78508a8006bd5110a184264c8da45f4206687b09b2f0603e6b3d9922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078c393a78508a8006bd5110a184264c8da45f4206687b09b2f0603e6b3d9922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:51 np0005465604 podman[294341]: 2025-10-02 08:23:51.39550895 +0000 UTC m=+0.115397511 container init 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:23:51 np0005465604 podman[294341]: 2025-10-02 08:23:51.301299593 +0000 UTC m=+0.021188144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:23:51 np0005465604 podman[294341]: 2025-10-02 08:23:51.402691242 +0000 UTC m=+0.122579823 container start 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:23:51 np0005465604 podman[294341]: 2025-10-02 08:23:51.406054 +0000 UTC m=+0.125942561 container attach 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct  2 04:23:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:23:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591484094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.443 2 DEBUG oslo_concurrency.processutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.451 2 DEBUG nova.compute.provider_tree [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.467 2 DEBUG nova.scheduler.client.report [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.486 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.521 2 INFO nova.scheduler.client.report [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Deleted allocations for instance 7c5d0818-0647-43a7-aaa1-b875b8b8424d#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.600 2 DEBUG oslo_concurrency.lockutils [None req-0bc03716-d39d-4191-bb23-bed9b82edd2b 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "7c5d0818-0647-43a7-aaa1-b875b8b8424d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.670 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Successfully updated port: b546f674-89d3-44f3-82c4-9426c5910017 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.691 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.692 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquired lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.692 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:23:51 np0005465604 nova_compute[260603]: 2025-10-02 08:23:51.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:52 np0005465604 nova_compute[260603]: 2025-10-02 08:23:52.205 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:23:52 np0005465604 practical_bouman[294357]: {
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "osd_id": 2,
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "type": "bluestore"
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:    },
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "osd_id": 1,
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "type": "bluestore"
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:    },
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "osd_id": 0,
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:        "type": "bluestore"
Oct  2 04:23:52 np0005465604 practical_bouman[294357]:    }
Oct  2 04:23:52 np0005465604 practical_bouman[294357]: }
Oct  2 04:23:52 np0005465604 systemd[1]: libpod-412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8.scope: Deactivated successfully.
Oct  2 04:23:52 np0005465604 podman[294341]: 2025-10-02 08:23:52.427735942 +0000 UTC m=+1.147624523 container died 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 04:23:52 np0005465604 systemd[1]: libpod-412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8.scope: Consumed 1.027s CPU time.
Oct  2 04:23:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-078c393a78508a8006bd5110a184264c8da45f4206687b09b2f0603e6b3d9922-merged.mount: Deactivated successfully.
Oct  2 04:23:52 np0005465604 podman[294341]: 2025-10-02 08:23:52.489696079 +0000 UTC m=+1.209584620 container remove 412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bouman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:23:52 np0005465604 systemd[1]: libpod-conmon-412eb3d8ba1f4fd4849f8d50efcc2d6c0221ae8a31f23aa351f33aec8aac0cd8.scope: Deactivated successfully.
Oct  2 04:23:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:23:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:23:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:23:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:23:52 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b4cf33c9-d46e-4423-8b5c-2060fa1266fd does not exist
Oct  2 04:23:52 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a01f15da-954d-43d5-9078-587f651368c2 does not exist
Oct  2 04:23:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 109 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 96 op/s
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.261 2 DEBUG nova.network.neutron [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updating instance_info_cache with network_info: [{"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.289 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Releasing lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.290 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance network_info: |[{"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.297 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Start _get_guest_xml network_info=[{"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.305 2 WARNING nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.311 2 DEBUG nova.virt.libvirt.host [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.312 2 DEBUG nova.virt.libvirt.host [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.318 2 DEBUG nova.virt.libvirt.host [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.318 2 DEBUG nova.virt.libvirt.host [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.319 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.320 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.320 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.321 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.321 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.322 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.322 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.323 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.324 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.324 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.325 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.325 2 DEBUG nova.virt.hardware [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.331 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:53 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:23:53 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:23:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329157866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.804 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.837 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:53 np0005465604 nova_compute[260603]: 2025-10-02 08:23:53.842 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.154 2 DEBUG nova.compute.manager [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-changed-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.155 2 DEBUG nova.compute.manager [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Refreshing instance network info cache due to event network-changed-b546f674-89d3-44f3-82c4-9426c5910017. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.156 2 DEBUG oslo_concurrency.lockutils [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.156 2 DEBUG oslo_concurrency.lockutils [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.157 2 DEBUG nova.network.neutron [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Refreshing network info cache for port b546f674-89d3-44f3-82c4-9426c5910017 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:23:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:23:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/212109316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.308 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.311 2 DEBUG nova.virt.libvirt.vif [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-798194672',display_name='tempest-ImagesTestJSON-server-798194672',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-798194672',id=27,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-t2xpcyan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:46Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=95b47d23-06fb-460c-b8d9-7b7213dae4c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.312 2 DEBUG nova.network.os_vif_util [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.313 2 DEBUG nova.network.os_vif_util [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.315 2 DEBUG nova.objects.instance [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.330 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  <uuid>95b47d23-06fb-460c-b8d9-7b7213dae4c7</uuid>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  <name>instance-0000001b</name>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <nova:name>tempest-ImagesTestJSON-server-798194672</nova:name>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:23:53</nova:creationTime>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <nova:user uuid="6747651cfdcc4f868c43b9d78f5846c2">tempest-ImagesTestJSON-1188243509-project-member</nova:user>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <nova:project uuid="56b1e1170f2e4a73aaf396476bc82261">tempest-ImagesTestJSON-1188243509</nova:project>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <nova:port uuid="b546f674-89d3-44f3-82c4-9426c5910017">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <entry name="serial">95b47d23-06fb-460c-b8d9-7b7213dae4c7</entry>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <entry name="uuid">95b47d23-06fb-460c-b8d9-7b7213dae4c7</entry>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:f0:3b:c8"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <target dev="tapb546f674-89"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/console.log" append="off"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:23:54 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:23:54 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:23:54 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:23:54 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.332 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Preparing to wait for external event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.332 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.332 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.333 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.334 2 DEBUG nova.virt.libvirt.vif [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-798194672',display_name='tempest-ImagesTestJSON-server-798194672',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-798194672',id=27,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-t2xpcyan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:46Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=95b47d23-06fb-460c-b8d9-7b7213dae4c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.334 2 DEBUG nova.network.os_vif_util [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.335 2 DEBUG nova.network.os_vif_util [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.336 2 DEBUG os_vif [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.338 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb546f674-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.344 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb546f674-89, col_values=(('external_ids', {'iface-id': 'b546f674-89d3-44f3-82c4-9426c5910017', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:3b:c8', 'vm-uuid': '95b47d23-06fb-460c-b8d9-7b7213dae4c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:54 np0005465604 NetworkManager[45129]: <info>  [1759393434.3878] manager: (tapb546f674-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.397 2 INFO os_vif [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89')#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.461 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.462 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.462 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No VIF found with MAC fa:16:3e:f0:3b:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.462 2 INFO nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Using config drive#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.482 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 88 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.954 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.955 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:54 np0005465604 nova_compute[260603]: 2025-10-02 08:23:54.977 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.007 2 INFO nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Creating config drive at /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.017 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbbq5ihx9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.077 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.078 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.089 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.089 2 INFO nova.compute.claims [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.157 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbbq5ihx9" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.191 2 DEBUG nova.storage.rbd_utils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.195 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.279 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.373 2 DEBUG oslo_concurrency.processutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config 95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.374 2 INFO nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Deleting local config drive /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7/disk.config because it was imported into RBD.#033[00m
Oct  2 04:23:55 np0005465604 kernel: tapb546f674-89: entered promiscuous mode
Oct  2 04:23:55 np0005465604 NetworkManager[45129]: <info>  [1759393435.4606] manager: (tapb546f674-89): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Oct  2 04:23:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:55Z|00167|binding|INFO|Claiming lport b546f674-89d3-44f3-82c4-9426c5910017 for this chassis.
Oct  2 04:23:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:55Z|00168|binding|INFO|b546f674-89d3-44f3-82c4-9426c5910017: Claiming fa:16:3e:f0:3b:c8 10.100.0.10
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:55 np0005465604 systemd-udevd[294607]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.510 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:3b:c8 10.100.0.10'], port_security=['fa:16:3e:f0:3b:c8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '95b47d23-06fb-460c-b8d9-7b7213dae4c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '2', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b546f674-89d3-44f3-82c4-9426c5910017) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.512 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b546f674-89d3-44f3-82c4-9426c5910017 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 bound to our chassis#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.514 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360#033[00m
Oct  2 04:23:55 np0005465604 NetworkManager[45129]: <info>  [1759393435.5328] device (tapb546f674-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:23:55 np0005465604 NetworkManager[45129]: <info>  [1759393435.5342] device (tapb546f674-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.534 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8ace38a1-ef35-44a3-98a5-10cf70fac092]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.536 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap897d7abf-91 in ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.539 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap897d7abf-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.539 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ad0e9628-f184-430f-8869-338d91ac69b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:55Z|00169|binding|INFO|Setting lport b546f674-89d3-44f3-82c4-9426c5910017 ovn-installed in OVS
Oct  2 04:23:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:55Z|00170|binding|INFO|Setting lport b546f674-89d3-44f3-82c4-9426c5910017 up in Southbound
Oct  2 04:23:55 np0005465604 systemd-machined[214636]: New machine qemu-31-instance-0000001b.
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.550 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2e131e4b-8903-4e83-84e6-949971632323]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.568 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ab20c0-ac03-4d4f-bc1b-3061102e1e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.600 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[13c2f8d3-f856-4182-b7fb-48b912f9987a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.623 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.650 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bba22cd6-756d-4062-81ab-1ef19af30ff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 systemd-udevd[294613]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:23:55 np0005465604 NetworkManager[45129]: <info>  [1759393435.6570] manager: (tap897d7abf-90): new Veth device (/org/freedesktop/NetworkManager/Devices/82)
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.656 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10b3bd6a-96da-41d7-83b9-a966edb7be78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.697 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a8fc5a67-8e00-4119-830b-c670438d8a9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.702 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d40174db-5f57-496d-8f98-929749bc0c50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 NetworkManager[45129]: <info>  [1759393435.7273] device (tap897d7abf-90): carrier: link connected
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.737 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a272f655-e4f7-4bc6-8e21-0a8cdf4d5160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.763 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f23bfb26-af0e-4743-aa68-843daf4294a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431432, 'reachable_time': 31805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294643, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.784 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6f2019-0b12-43d5-b2eb-6a29ed8a230a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:18ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431432, 'tstamp': 431432}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294644, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:23:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228177581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.801 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cde60cba-7460-4ee7-9848-f29943aca275]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431432, 'reachable_time': 31805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294645, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.819 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.825 2 DEBUG nova.compute.provider_tree [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.836 2 DEBUG nova.network.neutron [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updated VIF entry in instance network info cache for port b546f674-89d3-44f3-82c4-9426c5910017. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.837 2 DEBUG nova.network.neutron [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updating instance_info_cache with network_info: [{"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.841 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[95c6722a-cbf3-40d4-bc20-6252b1595736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.857 2 DEBUG nova.scheduler.client.report [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.862 2 DEBUG oslo_concurrency.lockutils [req-ccf24f75-a702-462e-8c21-8028ffc62de8 req-929ffde2-ea30-4355-a1f9-8fad64229c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.883 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.884 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.913 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8a221d77-0fa5-44b3-a735-c5e3279a6deb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.915 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.915 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.916 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:55 np0005465604 kernel: tap897d7abf-90: entered promiscuous mode
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:55 np0005465604 NetworkManager[45129]: <info>  [1759393435.9214] manager: (tap897d7abf-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.929 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:23:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:23:55Z|00171|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.935 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.935 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.952 2 INFO nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.952 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.953 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6d1765-fcc6-4643-b216-649ec1eea347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.957 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-897d7abf-9e23-43cd-8f60-7156792a4360
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 897d7abf-9e23-43cd-8f60-7156792a4360
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:23:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:23:55.958 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'env', 'PROCESS_TAG=haproxy-897d7abf-9e23-43cd-8f60-7156792a4360', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/897d7abf-9e23-43cd-8f60-7156792a4360.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:55 np0005465604 nova_compute[260603]: 2025-10-02 08:23:55.972 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.112 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.113 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.114 2 INFO nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Creating image(s)#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.137 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.167 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.185 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.189 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.267 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.268 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.268 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.268 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.286 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.289 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:23:56 np0005465604 podman[294778]: 2025-10-02 08:23:56.327509751 +0000 UTC m=+0.048068810 container create 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:23:56 np0005465604 systemd[1]: Started libpod-conmon-3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49.scope.
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.389 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393421.38837, 284aaef3-b320-49e2-a541-b17eb4eb208f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.390 2 INFO nova.compute.manager [-] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:23:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:23:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ec9518e036099ec816e19e9d3a64de8fb07b1fecfb5f7dfe22b9c3e6c79aa9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:23:56 np0005465604 podman[294778]: 2025-10-02 08:23:56.304392976 +0000 UTC m=+0.024952035 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:23:56 np0005465604 podman[294778]: 2025-10-02 08:23:56.413062109 +0000 UTC m=+0.133621188 container init 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 04:23:56 np0005465604 podman[294778]: 2025-10-02 08:23:56.418260507 +0000 UTC m=+0.138819566 container start 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.419 2 DEBUG nova.compute.manager [None req-8c0e8ce0-5439-45a9-a1c5-7c88dd2f0801 - - - - - -] [instance: 284aaef3-b320-49e2-a541-b17eb4eb208f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:56 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [NOTICE]   (294836) : New worker (294838) forked
Oct  2 04:23:56 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [NOTICE]   (294836) : Loading success.
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.541 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.570 2 DEBUG nova.policy [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a117ecad98d493d8782539545db5ac9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff4066c489424391bd4a75b195bd5011', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.605 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] resizing rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.668 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393436.6520765, 95b47d23-06fb-460c-b8d9-7b7213dae4c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.668 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] VM Started (Lifecycle Event)#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.673 2 DEBUG nova.objects.instance [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'migration_context' on Instance uuid ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.700 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.701 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.701 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Ensure instance console log exists: /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.702 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.702 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.702 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.706 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393436.6522737, 95b47d23-06fb-460c-b8d9-7b7213dae4c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.706 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.731 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.776 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:23:56 np0005465604 nova_compute[260603]: 2025-10-02 08:23:56.796 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:23:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 88 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 04:23:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.220 2 DEBUG nova.compute.manager [req-b0235788-ad38-4571-87b2-8f9351420020 req-2ee46f53-86c1-4e7f-85e6-fdb7c0c98a76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.221 2 DEBUG oslo_concurrency.lockutils [req-b0235788-ad38-4571-87b2-8f9351420020 req-2ee46f53-86c1-4e7f-85e6-fdb7c0c98a76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.221 2 DEBUG oslo_concurrency.lockutils [req-b0235788-ad38-4571-87b2-8f9351420020 req-2ee46f53-86c1-4e7f-85e6-fdb7c0c98a76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.222 2 DEBUG oslo_concurrency.lockutils [req-b0235788-ad38-4571-87b2-8f9351420020 req-2ee46f53-86c1-4e7f-85e6-fdb7c0c98a76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.222 2 DEBUG nova.compute.manager [req-b0235788-ad38-4571-87b2-8f9351420020 req-2ee46f53-86c1-4e7f-85e6-fdb7c0c98a76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Processing event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.223 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.228 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393437.2282119, 95b47d23-06fb-460c-b8d9-7b7213dae4c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.229 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] VM Resumed (Lifecycle Event)
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.231 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.236 2 INFO nova.virt.libvirt.driver [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance spawned successfully.
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.236 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.269 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.279 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.285 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.286 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.287 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.287 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.288 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.288 2 DEBUG nova.virt.libvirt.driver [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.303 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Successfully created port: 17b2c50e-61e3-4e6e-853a-81c49befe22b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.312 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.342 2 INFO nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Took 10.59 seconds to spawn the instance on the hypervisor.
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.343 2 DEBUG nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.403 2 INFO nova.compute.manager [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Took 11.56 seconds to build instance.
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.418 2 DEBUG oslo_concurrency.lockutils [None req-66be12d1-c00b-4e6e-add4-c8053b844a94 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.495 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.496 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.537 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.618 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.618 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.628 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.628 2 INFO nova.compute.claims [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:23:57 np0005465604 nova_compute[260603]: 2025-10-02 08:23:57.774 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:23:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:23:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2256363663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.273 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.282 2 DEBUG nova.compute.provider_tree [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.302 2 DEBUG nova.scheduler.client.report [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.331 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.332 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.379 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.380 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.398 2 INFO nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.413 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.487 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.488 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.489 2 INFO nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Creating image(s)
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.513 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.540 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.565 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.570 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.640 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.641 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.642 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.642 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.663 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.666 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5d50db9c-0731-468a-81da-6762d68cda94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.687 2 DEBUG nova.policy [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0a15988bafc4a03bd5b08291a4cc14c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3665ae0e483545e2aaa658ac8a3949aa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:23:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 644 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.838 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Successfully updated port: 17b2c50e-61e3-4e6e-853a-81c49befe22b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.855 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.856 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquired lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.856 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.876 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5d50db9c-0731-468a-81da-6762d68cda94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:23:58 np0005465604 nova_compute[260603]: 2025-10-02 08:23:58.941 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] resizing rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.054 2 DEBUG nova.objects.instance [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lazy-loading 'migration_context' on Instance uuid 5d50db9c-0731-468a-81da-6762d68cda94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.068 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.069 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Ensure instance console log exists: /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.070 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.070 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.071 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.114 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.782 2 DEBUG oslo_concurrency.lockutils [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.783 2 DEBUG oslo_concurrency.lockutils [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.784 2 DEBUG nova.compute.manager [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.790 2 DEBUG nova.compute.manager [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.791 2 DEBUG nova.objects.instance [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'flavor' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.811 2 DEBUG nova.compute.manager [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-changed-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.812 2 DEBUG nova.compute.manager [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Refreshing instance network info cache due to event network-changed-17b2c50e-61e3-4e6e-853a-81c49befe22b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.813 2 DEBUG oslo_concurrency.lockutils [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.832 2 DEBUG nova.virt.libvirt.driver [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.895 2 DEBUG nova.compute.manager [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.896 2 DEBUG oslo_concurrency.lockutils [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.897 2 DEBUG oslo_concurrency.lockutils [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.897 2 DEBUG oslo_concurrency.lockutils [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.898 2 DEBUG nova.compute.manager [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] No waiting events found dispatching network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:23:59 np0005465604 nova_compute[260603]: 2025-10-02 08:23:59.899 2 WARNING nova.compute.manager [req-bd5fa0c3-509a-4f86-97a0-2baf32210ce5 req-d8851435-c83d-42c2-b5f6-0821f5a1b605 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received unexpected event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 for instance with vm_state active and task_state powering-off.#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.188 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Successfully created port: 4e1ca975-9579-4bcd-8e92-5c169484f7fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.679 2 DEBUG nova.network.neutron [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Updating instance_info_cache with network_info: [{"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.700 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Releasing lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.701 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Instance network_info: |[{"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.702 2 DEBUG oslo_concurrency.lockutils [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.702 2 DEBUG nova.network.neutron [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Refreshing network info cache for port 17b2c50e-61e3-4e6e-853a-81c49befe22b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.708 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Start _get_guest_xml network_info=[{"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.713 2 WARNING nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.720 2 DEBUG nova.virt.libvirt.host [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.721 2 DEBUG nova.virt.libvirt.host [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.730 2 DEBUG nova.virt.libvirt.host [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.731 2 DEBUG nova.virt.libvirt.host [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.732 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.732 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.733 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.734 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.734 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.735 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.736 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.736 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.737 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.737 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.738 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.739 2 DEBUG nova.virt.hardware [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:24:00 np0005465604 nova_compute[260603]: 2025-10-02 08:24:00.743 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct  2 04:24:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:24:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400921510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.270 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.306 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.312 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:24:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/173100061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.789 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.791 2 DEBUG nova.virt.libvirt.vif [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1190225091',id=28,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-i2hcy206',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_use
r_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:56Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.793 2 DEBUG nova.network.os_vif_util [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.794 2 DEBUG nova.network.os_vif_util [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.796 2 DEBUG nova.objects.instance [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'pci_devices' on Instance uuid ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.815 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  <uuid>ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48</uuid>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  <name>instance-0000001c</name>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1190225091</nova:name>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:24:00</nova:creationTime>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <nova:user uuid="3a117ecad98d493d8782539545db5ac9">tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member</nova:user>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <nova:project uuid="ff4066c489424391bd4a75b195bd5011">tempest-ImagesOneServerNegativeTestJSON-1267478522</nova:project>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <nova:port uuid="17b2c50e-61e3-4e6e-853a-81c49befe22b">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <entry name="serial">ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48</entry>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <entry name="uuid">ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48</entry>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:19:4a:61"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <target dev="tap17b2c50e-61"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/console.log" append="off"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:24:01 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:24:01 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:24:01 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:24:01 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.822 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Preparing to wait for external event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.822 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.823 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.823 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.824 2 DEBUG nova.virt.libvirt.vif [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1190225091',id=28,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-i2hcy206',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:56Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.824 2 DEBUG nova.network.os_vif_util [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.825 2 DEBUG nova.network.os_vif_util [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.825 2 DEBUG os_vif [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17b2c50e-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap17b2c50e-61, col_values=(('external_ids', {'iface-id': '17b2c50e-61e3-4e6e-853a-81c49befe22b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:4a:61', 'vm-uuid': 'ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:24:01 np0005465604 NetworkManager[45129]: <info>  [1759393441.8346] manager: (tap17b2c50e-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.848 2 INFO os_vif [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61')
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.938 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.939 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.940 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] No VIF found with MAC fa:16:3e:19:4a:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.941 2 INFO nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Using config drive
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.974 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:24:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:01 np0005465604 nova_compute[260603]: 2025-10-02 08:24:01.996 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Successfully updated port: 4e1ca975-9579-4bcd-8e92-5c169484f7fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.014 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.014 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquired lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.015 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.277 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.548 2 INFO nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Creating config drive at /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.553 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvbkxrso5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.695 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvbkxrso5" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.733 2 DEBUG nova.storage.rbd_utils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] rbd image ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.738 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.768 2 DEBUG nova.network.neutron [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Updated VIF entry in instance network info cache for port 17b2c50e-61e3-4e6e-853a-81c49befe22b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.770 2 DEBUG nova.network.neutron [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Updating instance_info_cache with network_info: [{"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.789 2 DEBUG oslo_concurrency.lockutils [req-86a8cabc-79a8-40b4-820c-b8911d3d34f5 req-d5ec849c-ff4e-4859-a837-b5b2ad7042b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:24:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 157 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 138 op/s
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.901 2 DEBUG oslo_concurrency.processutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.902 2 INFO nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Deleting local config drive /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48/disk.config because it was imported into RBD.
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.924 2 DEBUG nova.compute.manager [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-changed-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.925 2 DEBUG nova.compute.manager [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Refreshing instance network info cache due to event network-changed-4e1ca975-9579-4bcd-8e92-5c169484f7fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:24:02 np0005465604 nova_compute[260603]: 2025-10-02 08:24:02.928 2 DEBUG oslo_concurrency.lockutils [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:24:02 np0005465604 NetworkManager[45129]: <info>  [1759393442.9751] manager: (tap17b2c50e-61): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Oct  2 04:24:02 np0005465604 kernel: tap17b2c50e-61: entered promiscuous mode
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:24:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:03Z|00172|binding|INFO|Claiming lport 17b2c50e-61e3-4e6e-853a-81c49befe22b for this chassis.
Oct  2 04:24:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:03Z|00173|binding|INFO|17b2c50e-61e3-4e6e-853a-81c49befe22b: Claiming fa:16:3e:19:4a:61 10.100.0.11
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.024 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:4a:61 10.100.0.11'], port_security=['fa:16:3e:19:4a:61 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4066c489424391bd4a75b195bd5011', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1aba9f6e-efc2-4ae1-83f0-6308a1293c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f5b9ac6-d3ea-442b-a1a3-0f4eb2329dfd, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=17b2c50e-61e3-4e6e-853a-81c49befe22b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.026 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 17b2c50e-61e3-4e6e-853a-81c49befe22b in datapath c8aeabca-6b5c-477a-9156-9f9592c20b93 bound to our chassis
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.028 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8aeabca-6b5c-477a-9156-9f9592c20b93
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.049 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b059c1-19b2-4239-b202-09638a03a641]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.051 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc8aeabca-61 in ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 04:24:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:03Z|00174|binding|INFO|Setting lport 17b2c50e-61e3-4e6e-853a-81c49befe22b up in Southbound
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.054 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc8aeabca-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.054 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50201702-52a4-4b3c-9a3b-5d03e3f3564f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:03Z|00175|binding|INFO|Setting lport 17b2c50e-61e3-4e6e-853a-81c49befe22b ovn-installed in OVS
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.058 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9529780-844b-4fdb-95f0-b97a9338823a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.074 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[c349d3a7-da43-4f60-9452-c364a8b958de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 systemd-machined[214636]: New machine qemu-32-instance-0000001c.
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.094 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb4fc92-82b9-4757-9e4f-14f53a2fb63b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Oct  2 04:24:03 np0005465604 systemd-udevd[295247]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.129 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ed29c10c-0d31-4056-adb6-9ef0e2755f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 NetworkManager[45129]: <info>  [1759393443.1339] device (tap17b2c50e-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:24:03 np0005465604 NetworkManager[45129]: <info>  [1759393443.1385] device (tap17b2c50e-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:24:03 np0005465604 NetworkManager[45129]: <info>  [1759393443.1484] manager: (tapc8aeabca-60): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.148 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8927ee8d-6b8d-49bb-bb0d-618f20f8bd60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 systemd-udevd[295250]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.192 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ec30e42f-5e4a-4083-842a-9fbdef0a0d6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.196 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4079d55-0a70-4400-9f47-d31d2b862720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 NetworkManager[45129]: <info>  [1759393443.2273] device (tapc8aeabca-60): carrier: link connected
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.233 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f5b34c-eec9-461a-b69c-08485fa3d719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.252 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[93330427-0e46-48a1-9021-b085e39e7686]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8aeabca-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:61:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432182, 'reachable_time': 43742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295276, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.271 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc5ded7-0099-4703-bc23-28061747f0c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:61ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432182, 'tstamp': 432182}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295277, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.288 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd827f8f-8a51-4fc0-9057-861b20298fe6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8aeabca-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:61:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432182, 'reachable_time': 43742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295278, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.321 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bed5eec6-f396-43ae-989c-7bc90bb7d12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.421 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e87a47c5-fa88-4a8d-aff8-0512045ab310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.423 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8aeabca-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.423 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.424 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8aeabca-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:03 np0005465604 NetworkManager[45129]: <info>  [1759393443.4269] manager: (tapc8aeabca-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Oct  2 04:24:03 np0005465604 kernel: tapc8aeabca-60: entered promiscuous mode
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.432 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8aeabca-60, col_values=(('external_ids', {'iface-id': '4db7ab1a-8260-4cf1-9ae1-c00a351cfe04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:03Z|00176|binding|INFO|Releasing lport 4db7ab1a-8260-4cf1-9ae1-c00a351cfe04 from this chassis (sb_readonly=0)
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.457 2 DEBUG nova.compute.manager [req-5b98efa9-8ba9-40d6-80fa-19ab068a6a8e req-ebf4c191-a78a-4fb1-a00f-0eae66f549bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.458 2 DEBUG oslo_concurrency.lockutils [req-5b98efa9-8ba9-40d6-80fa-19ab068a6a8e req-ebf4c191-a78a-4fb1-a00f-0eae66f549bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.459 2 DEBUG oslo_concurrency.lockutils [req-5b98efa9-8ba9-40d6-80fa-19ab068a6a8e req-ebf4c191-a78a-4fb1-a00f-0eae66f549bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.460 2 DEBUG oslo_concurrency.lockutils [req-5b98efa9-8ba9-40d6-80fa-19ab068a6a8e req-ebf4c191-a78a-4fb1-a00f-0eae66f549bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.460 2 DEBUG nova.compute.manager [req-5b98efa9-8ba9-40d6-80fa-19ab068a6a8e req-ebf4c191-a78a-4fb1-a00f-0eae66f549bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Processing event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.461 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.463 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3a2a22-1d71-498c-9a71-adba88f2fb1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.464 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-c8aeabca-6b5c-477a-9156-9f9592c20b93
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/c8aeabca-6b5c-477a-9156-9f9592c20b93.pid.haproxy
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID c8aeabca-6b5c-477a-9156-9f9592c20b93
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:24:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:03.465 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'env', 'PROCESS_TAG=haproxy-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c8aeabca-6b5c-477a-9156-9f9592c20b93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.480 2 DEBUG nova.network.neutron [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Updating instance_info_cache with network_info: [{"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.505 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Releasing lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.506 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Instance network_info: |[{"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.507 2 DEBUG oslo_concurrency.lockutils [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.508 2 DEBUG nova.network.neutron [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Refreshing network info cache for port 4e1ca975-9579-4bcd-8e92-5c169484f7fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.511 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Start _get_guest_xml network_info=[{"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.518 2 WARNING nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.528 2 DEBUG nova.virt.libvirt.host [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.529 2 DEBUG nova.virt.libvirt.host [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.565 2 DEBUG nova.virt.libvirt.host [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.566 2 DEBUG nova.virt.libvirt.host [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.567 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.567 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.567 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.567 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.568 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.568 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.568 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.568 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.568 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.569 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.569 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.569 2 DEBUG nova.virt.hardware [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.572 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.718 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393428.7174823, 7c5d0818-0647-43a7-aaa1-b875b8b8424d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.719 2 INFO nova.compute.manager [-] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.746 2 DEBUG nova.compute.manager [None req-24427bf8-e94b-45ac-b292-4e06c0d7747c - - - - - -] [instance: 7c5d0818-0647-43a7-aaa1-b875b8b8424d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:03 np0005465604 podman[295371]: 2025-10-02 08:24:03.883107853 +0000 UTC m=+0.066040100 container create 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:24:03 np0005465604 systemd[1]: Started libpod-conmon-6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2.scope.
Oct  2 04:24:03 np0005465604 podman[295371]: 2025-10-02 08:24:03.848334152 +0000 UTC m=+0.031266469 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:24:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:24:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb557c89a09e3c98f6728bab7f7254a61a1ae88dabc0772575bffd2a9b2ec5c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:03 np0005465604 podman[295371]: 2025-10-02 08:24:03.971174782 +0000 UTC m=+0.154107009 container init 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:24:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:24:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365269458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:24:03 np0005465604 podman[295371]: 2025-10-02 08:24:03.983219082 +0000 UTC m=+0.166151309 container start 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 04:24:03 np0005465604 nova_compute[260603]: 2025-10-02 08:24:03.998 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:04 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [NOTICE]   (295392) : New worker (295409) forked
Oct  2 04:24:04 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [NOTICE]   (295392) : Loading success.
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.019 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.022 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.057 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393444.0371478, ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.058 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] VM Started (Lifecycle Event)#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.061 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.063 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.066 2 INFO nova.virt.libvirt.driver [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Instance spawned successfully.#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.066 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.084 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.089 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.092 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.092 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.093 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.093 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.094 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.094 2 DEBUG nova.virt.libvirt.driver [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.124 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.124 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393444.0374107, ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.124 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.147 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.150 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393444.0630102, ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.150 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.158 2 INFO nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Took 8.04 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.158 2 DEBUG nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.165 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.167 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.189 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.222 2 INFO nova.compute.manager [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Took 9.17 seconds to build instance.#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.237 2 DEBUG oslo_concurrency.lockutils [None req-9dffb507-10c8-4cb9-ba4b-d52378a37003 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:24:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593587089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.517 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.520 2 DEBUG nova.virt.libvirt.vif [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1904255359',display_name='tempest-ImagesOneServerTestJSON-server-1904255359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1904255359',id=29,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3665ae0e483545e2aaa658ac8a3949aa',ramdisk_id='',reservation_id='r-qijl0plc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-2127991989',owner_user_name='tempest-ImagesOneServerTestJSON-2127991989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:58Z,user_data=None,user_id='e0a15988bafc4a03bd5b08291a4cc14c',uuid=5d50db9c-0731-468a-81da-6762d68cda94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.521 2 DEBUG nova.network.os_vif_util [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converting VIF {"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.522 2 DEBUG nova.network.os_vif_util [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.524 2 DEBUG nova.objects.instance [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d50db9c-0731-468a-81da-6762d68cda94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.543 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  <uuid>5d50db9c-0731-468a-81da-6762d68cda94</uuid>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  <name>instance-0000001d</name>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <nova:name>tempest-ImagesOneServerTestJSON-server-1904255359</nova:name>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:24:03</nova:creationTime>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <nova:user uuid="e0a15988bafc4a03bd5b08291a4cc14c">tempest-ImagesOneServerTestJSON-2127991989-project-member</nova:user>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <nova:project uuid="3665ae0e483545e2aaa658ac8a3949aa">tempest-ImagesOneServerTestJSON-2127991989</nova:project>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <nova:port uuid="4e1ca975-9579-4bcd-8e92-5c169484f7fe">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <entry name="serial">5d50db9c-0731-468a-81da-6762d68cda94</entry>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <entry name="uuid">5d50db9c-0731-468a-81da-6762d68cda94</entry>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/5d50db9c-0731-468a-81da-6762d68cda94_disk">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/5d50db9c-0731-468a-81da-6762d68cda94_disk.config">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:82:e7:88"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <target dev="tap4e1ca975-95"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/console.log" append="off"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:24:04 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:24:04 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:24:04 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:24:04 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.555 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Preparing to wait for external event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.556 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.557 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.557 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.558 2 DEBUG nova.virt.libvirt.vif [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:23:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1904255359',display_name='tempest-ImagesOneServerTestJSON-server-1904255359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1904255359',id=29,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3665ae0e483545e2aaa658ac8a3949aa',ramdisk_id='',reservation_id='r-qijl0plc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-2127991989',owner_user_name='tempest-ImagesOneServerTestJSON-2127991989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:23:58Z,user_data=None,user_id='e0a15988bafc4a03bd5b08291a4cc14c',uuid=5d50db9c-0731-468a-81da-6762d68cda94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.559 2 DEBUG nova.network.os_vif_util [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converting VIF {"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.561 2 DEBUG nova.network.os_vif_util [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.562 2 DEBUG os_vif [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.571 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e1ca975-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.573 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4e1ca975-95, col_values=(('external_ids', {'iface-id': '4e1ca975-9579-4bcd-8e92-5c169484f7fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:e7:88', 'vm-uuid': '5d50db9c-0731-468a-81da-6762d68cda94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:04 np0005465604 NetworkManager[45129]: <info>  [1759393444.5775] manager: (tap4e1ca975-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.591 2 INFO os_vif [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95')#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.656 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.657 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.658 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] No VIF found with MAC fa:16:3e:82:e7:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.659 2 INFO nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Using config drive#033[00m
Oct  2 04:24:04 np0005465604 nova_compute[260603]: 2025-10-02 08:24:04.695 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.160 2 DEBUG nova.network.neutron [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Updated VIF entry in instance network info cache for port 4e1ca975-9579-4bcd-8e92-5c169484f7fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.161 2 DEBUG nova.network.neutron [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Updating instance_info_cache with network_info: [{"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.177 2 DEBUG oslo_concurrency.lockutils [req-9586b840-5275-406b-80b1-f14a07500ad9 req-12349f86-e1bb-4b74-99f0-4fdf4cd21ec3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d50db9c-0731-468a-81da-6762d68cda94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.385 2 INFO nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Creating config drive at /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.394 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9txbqzo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.532 2 DEBUG nova.compute.manager [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.533 2 DEBUG oslo_concurrency.lockutils [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.534 2 DEBUG oslo_concurrency.lockutils [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.535 2 DEBUG oslo_concurrency.lockutils [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.535 2 DEBUG nova.compute.manager [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] No waiting events found dispatching network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.536 2 WARNING nova.compute.manager [req-f90e5847-f023-4110-be80-ae58edc33be2 req-9592c382-8c1e-43dd-b6fd-d26b9687397f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received unexpected event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b for instance with vm_state active and task_state None.#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.546 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9txbqzo" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.588 2 DEBUG nova.storage.rbd_utils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] rbd image 5d50db9c-0731-468a-81da-6762d68cda94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.595 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config 5d50db9c-0731-468a-81da-6762d68cda94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.683 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.684 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.685 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.685 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.686 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.688 2 INFO nova.compute.manager [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Terminating instance#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.690 2 DEBUG nova.compute.manager [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:24:05 np0005465604 kernel: tap17b2c50e-61 (unregistering): left promiscuous mode
Oct  2 04:24:05 np0005465604 NetworkManager[45129]: <info>  [1759393445.7349] device (tap17b2c50e-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:05 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:05Z|00177|binding|INFO|Releasing lport 17b2c50e-61e3-4e6e-853a-81c49befe22b from this chassis (sb_readonly=0)
Oct  2 04:24:05 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:05Z|00178|binding|INFO|Setting lport 17b2c50e-61e3-4e6e-853a-81c49befe22b down in Southbound
Oct  2 04:24:05 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:05Z|00179|binding|INFO|Removing iface tap17b2c50e-61 ovn-installed in OVS
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.759 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:4a:61 10.100.0.11'], port_security=['fa:16:3e:19:4a:61 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4066c489424391bd4a75b195bd5011', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1aba9f6e-efc2-4ae1-83f0-6308a1293c5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f5b9ac6-d3ea-442b-a1a3-0f4eb2329dfd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=17b2c50e-61e3-4e6e-853a-81c49befe22b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:24:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.760 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 17b2c50e-61e3-4e6e-853a-81c49befe22b in datapath c8aeabca-6b5c-477a-9156-9f9592c20b93 unbound from our chassis#033[00m
Oct  2 04:24:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.762 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c8aeabca-6b5c-477a-9156-9f9592c20b93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:24:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.764 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e32af0e9-6bd8-4d82-83f0-dfa358f8400c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.764 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 namespace which is not needed anymore#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:05 np0005465604 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct  2 04:24:05 np0005465604 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 2.572s CPU time.
Oct  2 04:24:05 np0005465604 systemd-machined[214636]: Machine qemu-32-instance-0000001c terminated.
Oct  2 04:24:05 np0005465604 podman[295507]: 2025-10-02 08:24:05.838945884 +0000 UTC m=+0.072119986 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.857 2 DEBUG oslo_concurrency.processutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config 5d50db9c-0731-468a-81da-6762d68cda94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.857 2 INFO nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Deleting local config drive /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94/disk.config because it was imported into RBD.#033[00m
Oct  2 04:24:05 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [NOTICE]   (295392) : haproxy version is 2.8.14-c23fe91
Oct  2 04:24:05 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [NOTICE]   (295392) : path to executable is /usr/sbin/haproxy
Oct  2 04:24:05 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [WARNING]  (295392) : Exiting Master process...
Oct  2 04:24:05 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [ALERT]    (295392) : Current worker (295409) exited with code 143 (Terminated)
Oct  2 04:24:05 np0005465604 neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93[295386]: [WARNING]  (295392) : All workers exited. Exiting... (0)
Oct  2 04:24:05 np0005465604 systemd[1]: libpod-6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2.scope: Deactivated successfully.
Oct  2 04:24:05 np0005465604 podman[295503]: 2025-10-02 08:24:05.896999716 +0000 UTC m=+0.137002188 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:24:05 np0005465604 podman[295564]: 2025-10-02 08:24:05.900123877 +0000 UTC m=+0.040012440 container died 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:24:05 np0005465604 kernel: tap4e1ca975-95: entered promiscuous mode
Oct  2 04:24:05 np0005465604 NetworkManager[45129]: <info>  [1759393445.9129] manager: (tap4e1ca975-95): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Oct  2 04:24:05 np0005465604 systemd-udevd[295270]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:05 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:05Z|00180|binding|INFO|Claiming lport 4e1ca975-9579-4bcd-8e92-5c169484f7fe for this chassis.
Oct  2 04:24:05 np0005465604 systemd-udevd[295266]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:24:05 np0005465604 kernel: tap17b2c50e-61: entered promiscuous mode
Oct  2 04:24:05 np0005465604 kernel: tap17b2c50e-61 (unregistering): left promiscuous mode
Oct  2 04:24:05 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:05Z|00181|binding|INFO|4e1ca975-9579-4bcd-8e92-5c169484f7fe: Claiming fa:16:3e:82:e7:88 10.100.0.7
Oct  2 04:24:05 np0005465604 NetworkManager[45129]: <info>  [1759393445.9295] device (tap4e1ca975-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:24:05 np0005465604 NetworkManager[45129]: <info>  [1759393445.9302] device (tap4e1ca975-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:24:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:05.934 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:e7:88 10.100.0.7'], port_security=['fa:16:3e:82:e7:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5d50db9c-0731-468a-81da-6762d68cda94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3665ae0e483545e2aaa658ac8a3949aa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '927e44a9-fbe2-4210-9cfb-52bbd0657d4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33e27bca-0333-46b3-bd98-5e7fb62b735e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4e1ca975-9579-4bcd-8e92-5c169484f7fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:24:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2-userdata-shm.mount: Deactivated successfully.
Oct  2 04:24:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fb557c89a09e3c98f6728bab7f7254a61a1ae88dabc0772575bffd2a9b2ec5c4-merged.mount: Deactivated successfully.
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.950 2 INFO nova.virt.libvirt.driver [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Instance destroyed successfully.#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.951 2 DEBUG nova.objects.instance [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lazy-loading 'resources' on Instance uuid ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:24:05 np0005465604 podman[295564]: 2025-10-02 08:24:05.959334976 +0000 UTC m=+0.099223529 container cleanup 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.966 2 DEBUG nova.virt.libvirt.vif [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1190225091',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1190225091',id=28,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:24:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff4066c489424391bd4a75b195bd5011',ramdisk_id='',reservation_id='r-i2hcy206',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1267478522',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1267478522-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:24:04Z,user_data=None,user_id='3a117ecad98d493d8782539545db5ac9',uuid=ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.966 2 DEBUG nova.network.os_vif_util [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converting VIF {"id": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "address": "fa:16:3e:19:4a:61", "network": {"id": "c8aeabca-6b5c-477a-9156-9f9592c20b93", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1679259193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4066c489424391bd4a75b195bd5011", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17b2c50e-61", "ovs_interfaceid": "17b2c50e-61e3-4e6e-853a-81c49befe22b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.967 2 DEBUG nova.network.os_vif_util [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.967 2 DEBUG os_vif [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.969 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17b2c50e-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:05 np0005465604 systemd-machined[214636]: New machine qemu-33-instance-0000001d.
Oct  2 04:24:05 np0005465604 systemd[1]: libpod-conmon-6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2.scope: Deactivated successfully.
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:05 np0005465604 nova_compute[260603]: 2025-10-02 08:24:05.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:24:05 np0005465604 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.008 2 INFO os_vif [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:4a:61,bridge_name='br-int',has_traffic_filtering=True,id=17b2c50e-61e3-4e6e-853a-81c49befe22b,network=Network(c8aeabca-6b5c-477a-9156-9f9592c20b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17b2c50e-61')#033[00m
Oct  2 04:24:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:06Z|00182|binding|INFO|Setting lport 4e1ca975-9579-4bcd-8e92-5c169484f7fe ovn-installed in OVS
Oct  2 04:24:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:06Z|00183|binding|INFO|Setting lport 4e1ca975-9579-4bcd-8e92-5c169484f7fe up in Southbound
Oct  2 04:24:06 np0005465604 podman[295611]: 2025-10-02 08:24:06.034064996 +0000 UTC m=+0.049243598 container remove 6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.042 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[36918d96-5d59-4ea0-98e9-07248b6d27b6]: (4, ('Thu Oct  2 08:24:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 (6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2)\n6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2\nThu Oct  2 08:24:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 (6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2)\n6305d726a571c7941f507648b4dc590835490d533e637e91d449779c453b32b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.049 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc8467e-3746-4761-ba54-d3068022938f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.050 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8aeabca-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:06 np0005465604 kernel: tapc8aeabca-60: left promiscuous mode
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.057 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0e674660-6929-4846-bdfa-50279fb0cd87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.089 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c4643909-c18a-431b-b2d2-a9feec89b70b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.090 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbd96c5-ce94-466f-8882-3757f76e0949]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.114 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe7d8b0-562d-4fe8-9be1-9bc224615980]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432172, 'reachable_time': 22828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295644, 'error': None, 'target': 'ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 systemd[1]: run-netns-ovnmeta\x2dc8aeabca\x2d6b5c\x2d477a\x2d9156\x2d9f9592c20b93.mount: Deactivated successfully.
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.119 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c8aeabca-6b5c-477a-9156-9f9592c20b93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.120 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[878136ba-bc45-437a-b657-9867278b9726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.121 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4e1ca975-9579-4bcd-8e92-5c169484f7fe in datapath 281b57e5-e0d2-447a-8a70-0fab75a8117e unbound from our chassis#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.122 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 281b57e5-e0d2-447a-8a70-0fab75a8117e#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.136 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6982660c-9a14-4172-8436-c9ed6b4f4209]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.137 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap281b57e5-e1 in ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.138 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap281b57e5-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.138 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[19639d5a-5138-431b-88dc-2969079f80ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.139 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3abfc36f-2c8b-4ad2-973a-3bc01190dcbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.156 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6c7d6e-5fd3-490e-87b5-dade1a4912df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.187 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a72967da-0959-4f58-95b6-f79a89479063]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.230 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8de8a3c6-36d8-4621-93d3-5e8a355bd04e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.241 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3074dc87-2c60-4d44-bee3-f2d09a295622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 NetworkManager[45129]: <info>  [1759393446.2429] manager: (tap281b57e5-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Oct  2 04:24:06 np0005465604 systemd-udevd[295645]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.287 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b5df1085-fe4e-408f-9c22-951247ba3ef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.289 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab0fcec-66c1-4486-9aa9-08ae030ce084]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 NetworkManager[45129]: <info>  [1759393446.3133] device (tap281b57e5-e0): carrier: link connected
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.318 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[98db480e-0ea0-469c-81de-692fdd662490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.337 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3e10f8ac-0555-4f39-986d-c264f50c7b5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap281b57e5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:b3:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432491, 'reachable_time': 41599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295674, 'error': None, 'target': 'ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.353 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d47ce93-d57f-4322-9d37-4a85d2e96d2a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:b397'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432491, 'tstamp': 432491}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295675, 'error': None, 'target': 'ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.373 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2adb0e88-7155-4e11-9d85-9e17ad10ce38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap281b57e5-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:b3:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432491, 'reachable_time': 41599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295676, 'error': None, 'target': 'ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e3b063-c974-4a50-a269-dc9805b5a561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.469 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[49f29c64-e041-4853-bde9-79674a9edd9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.470 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap281b57e5-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.471 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.472 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap281b57e5-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:06 np0005465604 NetworkManager[45129]: <info>  [1759393446.5172] manager: (tap281b57e5-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Oct  2 04:24:06 np0005465604 kernel: tap281b57e5-e0: entered promiscuous mode
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.520 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap281b57e5-e0, col_values=(('external_ids', {'iface-id': 'a1c00715-c815-46c4-9348-33448964f617'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:06Z|00184|binding|INFO|Releasing lport a1c00715-c815-46c4-9348-33448964f617 from this chassis (sb_readonly=0)
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.533 2 INFO nova.virt.libvirt.driver [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Deleting instance files /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_del#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.533 2 INFO nova.virt.libvirt.driver [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Deletion of /var/lib/nova/instances/ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48_del complete#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.543 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/281b57e5-e0d2-447a-8a70-0fab75a8117e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/281b57e5-e0d2-447a-8a70-0fab75a8117e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.544 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f9ba66-92e5-4e97-958d-29cb3e0f585c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.545 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-281b57e5-e0d2-447a-8a70-0fab75a8117e
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/281b57e5-e0d2-447a-8a70-0fab75a8117e.pid.haproxy
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 281b57e5-e0d2-447a-8a70-0fab75a8117e
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:24:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:06.546 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'env', 'PROCESS_TAG=haproxy-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/281b57e5-e0d2-447a-8a70-0fab75a8117e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.818 2 INFO nova.compute.manager [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.819 2 DEBUG oslo.service.loopingcall [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.819 2 DEBUG nova.compute.manager [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:24:06 np0005465604 nova_compute[260603]: 2025-10-02 08:24:06.819 2 DEBUG nova.network.neutron [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:24:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 181 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 132 op/s
Oct  2 04:24:06 np0005465604 podman[295708]: 2025-10-02 08:24:06.89271171 +0000 UTC m=+0.051879593 container create a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:24:06 np0005465604 systemd[1]: Started libpod-conmon-a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8.scope.
Oct  2 04:24:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:24:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abcc92a4aba871794cf5916850bf03d08984036d0d311577b9f0f2209f683ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:06 np0005465604 podman[295708]: 2025-10-02 08:24:06.86756963 +0000 UTC m=+0.026737533 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:24:06 np0005465604 podman[295708]: 2025-10-02 08:24:06.97208006 +0000 UTC m=+0.131247943 container init a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:24:06 np0005465604 podman[295708]: 2025-10-02 08:24:06.977212156 +0000 UTC m=+0.136380039 container start a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:24:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:07 np0005465604 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [NOTICE]   (295728) : New worker (295730) forked
Oct  2 04:24:07 np0005465604 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [NOTICE]   (295728) : Loading success.
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.631 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393447.6310706, 5d50db9c-0731-468a-81da-6762d68cda94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.631 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] VM Started (Lifecycle Event)#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.671 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.676 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393447.633374, 5d50db9c-0731-468a-81da-6762d68cda94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.676 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.717 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.721 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.780 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.808 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-unplugged-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] No waiting events found dispatching network-vif-unplugged-17b2c50e-61e3-4e6e-853a-81c49befe22b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-unplugged-17b2c50e-61e3-4e6e-853a-81c49befe22b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.809 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] No waiting events found dispatching network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.810 2 WARNING nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received unexpected event network-vif-plugged-17b2c50e-61e3-4e6e-853a-81c49befe22b for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.810 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Processing event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.811 2 DEBUG oslo_concurrency.lockutils [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.812 2 DEBUG nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] No waiting events found dispatching network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.812 2 WARNING nova.compute.manager [req-c23b363b-c249-49c6-bd20-a540629c9aa2 req-1341cad3-ca3a-4e68-968a-0386d376171b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received unexpected event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.812 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.818 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393447.8180413, 5d50db9c-0731-468a-81da-6762d68cda94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.818 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.820 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.824 2 INFO nova.virt.libvirt.driver [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Instance spawned successfully.#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.824 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.884 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.889 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.893 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.894 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.894 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.895 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.896 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.896 2 DEBUG nova.virt.libvirt.driver [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:07 np0005465604 nova_compute[260603]: 2025-10-02 08:24:07.962 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.002 2 INFO nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 9.51 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.002 2 DEBUG nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.160 2 INFO nova.compute.manager [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 10.57 seconds to build instance.#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.189 2 DEBUG oslo_concurrency.lockutils [None req-daab9835-55a4-4dab-bc31-6b11c5d847c0 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.593 2 DEBUG nova.network.neutron [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.617 2 INFO nova.compute.manager [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Took 1.80 seconds to deallocate network for instance.#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.677 2 DEBUG nova.compute.manager [req-f08d490d-6448-4d29-abd0-adab22fc6eec req-4c21c98a-ef7c-46a7-94aa-965fd9334ede 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Received event network-vif-deleted-17b2c50e-61e3-4e6e-853a-81c49befe22b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.680 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.681 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.714 2 DEBUG nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.731 2 DEBUG nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.732 2 DEBUG nova.compute.provider_tree [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.746 2 DEBUG nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.769 2 DEBUG nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 04:24:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 146 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.8 MiB/s wr, 251 op/s
Oct  2 04:24:08 np0005465604 nova_compute[260603]: 2025-10-02 08:24:08.855 2 DEBUG oslo_concurrency.processutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:09Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:3b:c8 10.100.0.10
Oct  2 04:24:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:09Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:3b:c8 10.100.0.10
Oct  2 04:24:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:24:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3668573006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:24:09 np0005465604 nova_compute[260603]: 2025-10-02 08:24:09.263 2 DEBUG oslo_concurrency.processutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:09 np0005465604 nova_compute[260603]: 2025-10-02 08:24:09.273 2 DEBUG nova.compute.provider_tree [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:24:09 np0005465604 nova_compute[260603]: 2025-10-02 08:24:09.297 2 DEBUG nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:24:09 np0005465604 nova_compute[260603]: 2025-10-02 08:24:09.325 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:09 np0005465604 nova_compute[260603]: 2025-10-02 08:24:09.351 2 INFO nova.scheduler.client.report [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Deleted allocations for instance ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48#033[00m
Oct  2 04:24:09 np0005465604 nova_compute[260603]: 2025-10-02 08:24:09.419 2 DEBUG oslo_concurrency.lockutils [None req-4dddf72a-3ee2-42aa-b1b4-42b69313bcf3 3a117ecad98d493d8782539545db5ac9 ff4066c489424391bd4a75b195bd5011 - - default default] Lock "ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:09 np0005465604 nova_compute[260603]: 2025-10-02 08:24:09.532 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:24:09 np0005465604 nova_compute[260603]: 2025-10-02 08:24:09.532 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:24:09 np0005465604 nova_compute[260603]: 2025-10-02 08:24:09.533 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:24:09 np0005465604 nova_compute[260603]: 2025-10-02 08:24:09.533 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:24:09 np0005465604 nova_compute[260603]: 2025-10-02 08:24:09.888 2 DEBUG nova.virt.libvirt.driver [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:24:10 np0005465604 nova_compute[260603]: 2025-10-02 08:24:10.750 2 DEBUG nova.compute.manager [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:10 np0005465604 nova_compute[260603]: 2025-10-02 08:24:10.805 2 INFO nova.compute.manager [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] instance snapshotting#033[00m
Oct  2 04:24:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 146 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 196 op/s
Oct  2 04:24:10 np0005465604 nova_compute[260603]: 2025-10-02 08:24:10.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:11 np0005465604 nova_compute[260603]: 2025-10-02 08:24:11.176 2 INFO nova.virt.libvirt.driver [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Beginning live snapshot process#033[00m
Oct  2 04:24:11 np0005465604 nova_compute[260603]: 2025-10-02 08:24:11.486 2 DEBUG nova.virt.libvirt.imagebackend [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:24:11 np0005465604 nova_compute[260603]: 2025-10-02 08:24:11.672 2 DEBUG nova.storage.rbd_utils [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] creating snapshot(243c57363d5f474cb81b45cfe5573907) on rbd image(5d50db9c-0731-468a-81da-6762d68cda94_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:24:11 np0005465604 nova_compute[260603]: 2025-10-02 08:24:11.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:12 np0005465604 podman[295855]: 2025-10-02 08:24:12.033531455 +0000 UTC m=+0.090078176 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 04:24:12 np0005465604 kernel: tapb546f674-89 (unregistering): left promiscuous mode
Oct  2 04:24:12 np0005465604 NetworkManager[45129]: <info>  [1759393452.2024] device (tapb546f674-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:24:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:12Z|00185|binding|INFO|Releasing lport b546f674-89d3-44f3-82c4-9426c5910017 from this chassis (sb_readonly=0)
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:12Z|00186|binding|INFO|Setting lport b546f674-89d3-44f3-82c4-9426c5910017 down in Southbound
Oct  2 04:24:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:12Z|00187|binding|INFO|Removing iface tapb546f674-89 ovn-installed in OVS
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.231 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:3b:c8 10.100.0.10'], port_security=['fa:16:3e:f0:3b:c8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '95b47d23-06fb-460c-b8d9-7b7213dae4c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b546f674-89d3-44f3-82c4-9426c5910017) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.232 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b546f674-89d3-44f3-82c4-9426c5910017 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.233 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.234 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[14183482-4b14-4cd1-b4e0-9551f6d6ee61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.237 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace which is not needed anymore#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:12 np0005465604 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct  2 04:24:12 np0005465604 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 12.592s CPU time.
Oct  2 04:24:12 np0005465604 systemd-machined[214636]: Machine qemu-31-instance-0000001b terminated.
Oct  2 04:24:12 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [NOTICE]   (294836) : haproxy version is 2.8.14-c23fe91
Oct  2 04:24:12 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [NOTICE]   (294836) : path to executable is /usr/sbin/haproxy
Oct  2 04:24:12 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [WARNING]  (294836) : Exiting Master process...
Oct  2 04:24:12 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [WARNING]  (294836) : Exiting Master process...
Oct  2 04:24:12 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [ALERT]    (294836) : Current worker (294838) exited with code 143 (Terminated)
Oct  2 04:24:12 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[294826]: [WARNING]  (294836) : All workers exited. Exiting... (0)
Oct  2 04:24:12 np0005465604 systemd[1]: libpod-3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49.scope: Deactivated successfully.
Oct  2 04:24:12 np0005465604 podman[295898]: 2025-10-02 08:24:12.419702115 +0000 UTC m=+0.055341605 container died 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:24:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49-userdata-shm.mount: Deactivated successfully.
Oct  2 04:24:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-98ec9518e036099ec816e19e9d3a64de8fb07b1fecfb5f7dfe22b9c3e6c79aa9-merged.mount: Deactivated successfully.
Oct  2 04:24:12 np0005465604 podman[295898]: 2025-10-02 08:24:12.498133985 +0000 UTC m=+0.133773495 container cleanup 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:24:12 np0005465604 systemd[1]: libpod-conmon-3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49.scope: Deactivated successfully.
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.540 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.542 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.543 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:24:12 np0005465604 podman[295935]: 2025-10-02 08:24:12.596349381 +0000 UTC m=+0.058202308 container remove 3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.603 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e632038a-1716-406a-9147-09a83271115f]: (4, ('Thu Oct  2 08:24:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49)\n3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49\nThu Oct  2 08:24:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49)\n3a5a07dc5db8403a6d63679e314bd47a8bea8cd5d5352ffc54deabb1822d3e49\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.605 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[801778b7-db8c-4bf6-9b73-08b4482ffe07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.606 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:12 np0005465604 kernel: tap897d7abf-90: left promiscuous mode
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct  2 04:24:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct  2 04:24:12 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.679 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff50bf7-1c25-4e95-a95f-1fd8477eb1e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.681 2 DEBUG nova.compute.manager [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-vif-unplugged-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.682 2 DEBUG oslo_concurrency.lockutils [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.682 2 DEBUG oslo_concurrency.lockutils [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.682 2 DEBUG oslo_concurrency.lockutils [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.682 2 DEBUG nova.compute.manager [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] No waiting events found dispatching network-vif-unplugged-b546f674-89d3-44f3-82c4-9426c5910017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.683 2 WARNING nova.compute.manager [req-9c4f716c-0d3a-4bec-beb9-3bf841ae9f77 req-32305b1c-3db5-44cf-9b6e-11d035ba1d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received unexpected event network-vif-unplugged-b546f674-89d3-44f3-82c4-9426c5910017 for instance with vm_state active and task_state powering-off.#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.717 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[751bb59b-cb52-4f0f-907c-bba885c22100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.718 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e321462a-5e81-4ded-ad0d-e176856020ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.737 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[56f8f13b-3636-4af2-af91-5b4ccb6c2975]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431424, 'reachable_time': 43153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295953, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:12 np0005465604 systemd[1]: run-netns-ovnmeta\x2d897d7abf\x2d9e23\x2d43cd\x2d8f60\x2d7156792a4360.mount: Deactivated successfully.
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.740 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:24:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:12.740 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b8c0de-ced0-4e19-8ead-798b1706e4f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.753 2 DEBUG nova.storage.rbd_utils [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] cloning vms/5d50db9c-0731-468a-81da-6762d68cda94_disk@243c57363d5f474cb81b45cfe5573907 to images/ed5c9663-363d-4483-aa94-11714a93ce2f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:24:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 165 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.3 MiB/s wr, 272 op/s
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.903 2 INFO nova.virt.libvirt.driver [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.920 2 DEBUG nova.storage.rbd_utils [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] flattening images/ed5c9663-363d-4483-aa94-11714a93ce2f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.979 2 INFO nova.virt.libvirt.driver [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance destroyed successfully.#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.979 2 DEBUG nova.objects.instance [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'numa_topology' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:24:12 np0005465604 nova_compute[260603]: 2025-10-02 08:24:12.997 2 DEBUG nova.compute.manager [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:13 np0005465604 nova_compute[260603]: 2025-10-02 08:24:13.052 2 DEBUG oslo_concurrency.lockutils [None req-4c541bab-936a-4923-8012-f4955905322a 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:13 np0005465604 nova_compute[260603]: 2025-10-02 08:24:13.164 2 DEBUG nova.storage.rbd_utils [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] removing snapshot(243c57363d5f474cb81b45cfe5573907) on rbd image(5d50db9c-0731-468a-81da-6762d68cda94_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:24:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct  2 04:24:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct  2 04:24:13 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct  2 04:24:13 np0005465604 nova_compute[260603]: 2025-10-02 08:24:13.733 2 DEBUG nova.storage.rbd_utils [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] creating snapshot(snap) on rbd image(ed5c9663-363d-4483-aa94-11714a93ce2f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:24:13 np0005465604 nova_compute[260603]: 2025-10-02 08:24:13.870 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updating instance_info_cache with network_info: [{"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:24:13 np0005465604 nova_compute[260603]: 2025-10-02 08:24:13.892 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-95b47d23-06fb-460c-b8d9-7b7213dae4c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:24:13 np0005465604 nova_compute[260603]: 2025-10-02 08:24:13.893 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:24:13 np0005465604 nova_compute[260603]: 2025-10-02 08:24:13.894 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:24:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:14Z|00188|binding|INFO|Releasing lport a1c00715-c815-46c4-9348-33448964f617 from this chassis (sb_readonly=0)
Oct  2 04:24:14 np0005465604 nova_compute[260603]: 2025-10-02 08:24:14.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:24:14 np0005465604 nova_compute[260603]: 2025-10-02 08:24:14.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:14 np0005465604 nova_compute[260603]: 2025-10-02 08:24:14.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:14 np0005465604 nova_compute[260603]: 2025-10-02 08:24:14.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:14 np0005465604 nova_compute[260603]: 2025-10-02 08:24:14.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:14 np0005465604 nova_compute[260603]: 2025-10-02 08:24:14.542 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:24:14 np0005465604 nova_compute[260603]: 2025-10-02 08:24:14.542 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct  2 04:24:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct  2 04:24:14 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct  2 04:24:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 182 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.5 MiB/s wr, 271 op/s
Oct  2 04:24:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:24:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/690967593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:24:14 np0005465604 nova_compute[260603]: 2025-10-02 08:24:14.995 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.015 2 DEBUG nova.compute.manager [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.016 2 DEBUG oslo_concurrency.lockutils [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.016 2 DEBUG oslo_concurrency.lockutils [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.017 2 DEBUG oslo_concurrency.lockutils [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.018 2 DEBUG nova.compute.manager [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] No waiting events found dispatching network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.018 2 WARNING nova.compute.manager [req-7ea363bf-1384-4e66-82ff-b11eff295717 req-73424abb-69c4-49b8-84f9-870ff22af8b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received unexpected event network-vif-plugged-b546f674-89d3-44f3-82c4-9426c5910017 for instance with vm_state stopped and task_state image_snapshot_pending.#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.022 2 DEBUG nova.compute.manager [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.103 2 INFO nova.compute.manager [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] instance snapshotting#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.103 2 WARNING nova.compute.manager [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.123 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.123 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.128 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.129 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.352 2 INFO nova.virt.libvirt.driver [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Beginning cold snapshot process#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.356 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.357 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4361MB free_disk=59.92191696166992GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.357 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.358 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.449 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 95b47d23-06fb-460c-b8d9-7b7213dae4c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5d50db9c-0731-468a-81da-6762d68cda94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.512 2 DEBUG nova.virt.libvirt.imagebackend [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.539 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.750 2 DEBUG nova.storage.rbd_utils [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(8a6ab14365794246a4643e95735b5aa4) on rbd image(95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:24:15 np0005465604 nova_compute[260603]: 2025-10-02 08:24:15.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:24:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267837788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:24:16 np0005465604 podman[296138]: 2025-10-02 08:24:16.017657283 +0000 UTC m=+0.083710211 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 04:24:16 np0005465604 nova_compute[260603]: 2025-10-02 08:24:16.038 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:16 np0005465604 nova_compute[260603]: 2025-10-02 08:24:16.046 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:24:16 np0005465604 nova_compute[260603]: 2025-10-02 08:24:16.072 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:24:16 np0005465604 nova_compute[260603]: 2025-10-02 08:24:16.097 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:24:16 np0005465604 nova_compute[260603]: 2025-10-02 08:24:16.097 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:16 np0005465604 nova_compute[260603]: 2025-10-02 08:24:16.153 2 INFO nova.virt.libvirt.driver [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Snapshot image upload complete#033[00m
Oct  2 04:24:16 np0005465604 nova_compute[260603]: 2025-10-02 08:24:16.153 2 INFO nova.compute.manager [None req-f2b54edc-8d2f-432b-b4db-9b510479cac5 e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 5.35 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 04:24:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct  2 04:24:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct  2 04:24:16 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct  2 04:24:16 np0005465604 nova_compute[260603]: 2025-10-02 08:24:16.783 2 DEBUG nova.storage.rbd_utils [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] cloning vms/95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk@8a6ab14365794246a4643e95735b5aa4 to images/dcbb3886-223f-481d-a1b9-0a1d2d27080f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:24:16 np0005465604 nova_compute[260603]: 2025-10-02 08:24:16.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 182 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 142 op/s
Oct  2 04:24:16 np0005465604 nova_compute[260603]: 2025-10-02 08:24:16.912 2 DEBUG nova.storage.rbd_utils [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] flattening images/dcbb3886-223f-481d-a1b9-0a1d2d27080f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:24:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:17 np0005465604 nova_compute[260603]: 2025-10-02 08:24:17.095 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:24:17 np0005465604 nova_compute[260603]: 2025-10-02 08:24:17.096 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:24:17 np0005465604 nova_compute[260603]: 2025-10-02 08:24:17.121 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:24:17 np0005465604 nova_compute[260603]: 2025-10-02 08:24:17.252 2 DEBUG nova.storage.rbd_utils [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] removing snapshot(8a6ab14365794246a4643e95735b5aa4) on rbd image(95b47d23-06fb-460c-b8d9-7b7213dae4c7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:24:17 np0005465604 nova_compute[260603]: 2025-10-02 08:24:17.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:24:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct  2 04:24:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct  2 04:24:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct  2 04:24:17 np0005465604 nova_compute[260603]: 2025-10-02 08:24:17.760 2 DEBUG nova.storage.rbd_utils [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(snap) on rbd image(dcbb3886-223f-481d-a1b9-0a1d2d27080f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:24:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct  2 04:24:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct  2 04:24:18 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct  2 04:24:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 285 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 15 MiB/s rd, 13 MiB/s wr, 355 op/s
Oct  2 04:24:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:19Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:e7:88 10.100.0.7
Oct  2 04:24:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:19Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:e7:88 10.100.0.7
Oct  2 04:24:20 np0005465604 nova_compute[260603]: 2025-10-02 08:24:20.022 2 DEBUG nova.compute.manager [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:20 np0005465604 nova_compute[260603]: 2025-10-02 08:24:20.073 2 INFO nova.compute.manager [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] instance snapshotting#033[00m
Oct  2 04:24:20 np0005465604 nova_compute[260603]: 2025-10-02 08:24:20.272 2 INFO nova.virt.libvirt.driver [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Beginning live snapshot process#033[00m
Oct  2 04:24:20 np0005465604 nova_compute[260603]: 2025-10-02 08:24:20.388 2 INFO nova.virt.libvirt.driver [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Snapshot image upload complete#033[00m
Oct  2 04:24:20 np0005465604 nova_compute[260603]: 2025-10-02 08:24:20.389 2 INFO nova.compute.manager [None req-13e977ab-01b4-402f-9496-e15644fccbb2 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Took 5.28 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 04:24:20 np0005465604 nova_compute[260603]: 2025-10-02 08:24:20.478 2 DEBUG nova.virt.libvirt.imagebackend [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:24:20 np0005465604 nova_compute[260603]: 2025-10-02 08:24:20.801 2 DEBUG nova.storage.rbd_utils [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] creating snapshot(8bfb15f943e943a49d11c7009e10679a) on rbd image(5d50db9c-0731-468a-81da-6762d68cda94_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:24:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 285 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 9.2 MiB/s wr, 245 op/s
Oct  2 04:24:20 np0005465604 nova_compute[260603]: 2025-10-02 08:24:20.935 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393445.9346998, ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:24:20 np0005465604 nova_compute[260603]: 2025-10-02 08:24:20.936 2 INFO nova.compute.manager [-] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] VM Stopped (Lifecycle Event)
Oct  2 04:24:20 np0005465604 nova_compute[260603]: 2025-10-02 08:24:20.973 2 DEBUG nova.compute.manager [None req-22db1ee5-5dd6-4876-ac94-ef93b6fceeae - - - - - -] [instance: ab6dc8a0-7fcb-402a-8d9e-85ea3c86ce48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:24:20 np0005465604 nova_compute[260603]: 2025-10-02 08:24:20.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:24:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct  2 04:24:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct  2 04:24:21 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct  2 04:24:21 np0005465604 nova_compute[260603]: 2025-10-02 08:24:21.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:24:21 np0005465604 nova_compute[260603]: 2025-10-02 08:24:21.818 2 DEBUG nova.storage.rbd_utils [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] cloning vms/5d50db9c-0731-468a-81da-6762d68cda94_disk@8bfb15f943e943a49d11c7009e10679a to images/775ace25-87e6-4cc4-80cb-b697eaf54c22 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  2 04:24:21 np0005465604 nova_compute[260603]: 2025-10-02 08:24:21.968 2 DEBUG nova.storage.rbd_utils [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] flattening images/775ace25-87e6-4cc4-80cb-b697eaf54c22 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct  2 04:24:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct  2 04:24:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct  2 04:24:22 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct  2 04:24:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:24:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190666196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:24:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:24:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4190666196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.243 2 DEBUG nova.storage.rbd_utils [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] removing snapshot(8bfb15f943e943a49d11c7009e10679a) on rbd image(5d50db9c-0731-468a-81da-6762d68cda94_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.563 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.564 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.564 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.565 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.565 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.567 2 INFO nova.compute.manager [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Terminating instance
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.569 2 DEBUG nova.compute.manager [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.577 2 INFO nova.virt.libvirt.driver [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Instance destroyed successfully.
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.577 2 DEBUG nova.objects.instance [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'resources' on Instance uuid 95b47d23-06fb-460c-b8d9-7b7213dae4c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.594 2 DEBUG nova.virt.libvirt.vif [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:23:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-798194672',display_name='tempest-ImagesTestJSON-server-798194672',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-798194672',id=27,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:23:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-t2xpcyan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:24:20Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=95b47d23-06fb-460c-b8d9-7b7213dae4c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.595 2 DEBUG nova.network.os_vif_util [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "b546f674-89d3-44f3-82c4-9426c5910017", "address": "fa:16:3e:f0:3b:c8", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb546f674-89", "ovs_interfaceid": "b546f674-89d3-44f3-82c4-9426c5910017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.596 2 DEBUG nova.network.os_vif_util [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.597 2 DEBUG os_vif [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.599 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb546f674-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:24:22 np0005465604 nova_compute[260603]: 2025-10-02 08:24:22.650 2 INFO os_vif [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:3b:c8,bridge_name='br-int',has_traffic_filtering=True,id=b546f674-89d3-44f3-82c4-9426c5910017,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb546f674-89')
Oct  2 04:24:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 270 MiB data, 484 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 403 op/s
Oct  2 04:24:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct  2 04:24:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct  2 04:24:23 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct  2 04:24:23 np0005465604 nova_compute[260603]: 2025-10-02 08:24:23.033 2 DEBUG nova.storage.rbd_utils [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] creating snapshot(snap) on rbd image(775ace25-87e6-4cc4-80cb-b697eaf54c22) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  2 04:24:23 np0005465604 nova_compute[260603]: 2025-10-02 08:24:23.093 2 INFO nova.virt.libvirt.driver [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Deleting instance files /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7_del
Oct  2 04:24:23 np0005465604 nova_compute[260603]: 2025-10-02 08:24:23.094 2 INFO nova.virt.libvirt.driver [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Deletion of /var/lib/nova/instances/95b47d23-06fb-460c-b8d9-7b7213dae4c7_del complete
Oct  2 04:24:23 np0005465604 nova_compute[260603]: 2025-10-02 08:24:23.141 2 INFO nova.compute.manager [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Took 0.57 seconds to destroy the instance on the hypervisor.
Oct  2 04:24:23 np0005465604 nova_compute[260603]: 2025-10-02 08:24:23.142 2 DEBUG oslo.service.loopingcall [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:24:23 np0005465604 nova_compute[260603]: 2025-10-02 08:24:23.143 2 DEBUG nova.compute.manager [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:24:23 np0005465604 nova_compute[260603]: 2025-10-02 08:24:23.144 2 DEBUG nova.network.neutron [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:24:23 np0005465604 nova_compute[260603]: 2025-10-02 08:24:23.987 2 DEBUG nova.network.neutron [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:24:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct  2 04:24:24 np0005465604 nova_compute[260603]: 2025-10-02 08:24:24.005 2 INFO nova.compute.manager [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Took 0.86 seconds to deallocate network for instance.
Oct  2 04:24:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct  2 04:24:24 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct  2 04:24:24 np0005465604 nova_compute[260603]: 2025-10-02 08:24:24.050 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:24:24 np0005465604 nova_compute[260603]: 2025-10-02 08:24:24.050 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:24:24 np0005465604 nova_compute[260603]: 2025-10-02 08:24:24.148 2 DEBUG oslo_concurrency.processutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:24:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:24:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2140571578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:24:24 np0005465604 nova_compute[260603]: 2025-10-02 08:24:24.602 2 DEBUG oslo_concurrency.processutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:24:24 np0005465604 nova_compute[260603]: 2025-10-02 08:24:24.609 2 DEBUG nova.compute.provider_tree [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:24:24 np0005465604 nova_compute[260603]: 2025-10-02 08:24:24.630 2 DEBUG nova.scheduler.client.report [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:24:24 np0005465604 nova_compute[260603]: 2025-10-02 08:24:24.660 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:24:24 np0005465604 nova_compute[260603]: 2025-10-02 08:24:24.686 2 INFO nova.scheduler.client.report [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Deleted allocations for instance 95b47d23-06fb-460c-b8d9-7b7213dae4c7
Oct  2 04:24:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 234 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 18 MiB/s wr, 588 op/s
Oct  2 04:24:24 np0005465604 nova_compute[260603]: 2025-10-02 08:24:24.946 2 DEBUG nova.compute.manager [req-86091690-24ba-4a1f-b7bb-1a5d83250e1f req-e1a4b607-bc52-4f37-9603-28aec973756e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Received event network-vif-deleted-b546f674-89d3-44f3-82c4-9426c5910017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:24:24 np0005465604 nova_compute[260603]: 2025-10-02 08:24:24.950 2 DEBUG oslo_concurrency.lockutils [None req-76708d37-c877-4351-b04f-b31b5777583c 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "95b47d23-06fb-460c-b8d9-7b7213dae4c7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:24:25 np0005465604 nova_compute[260603]: 2025-10-02 08:24:25.899 2 INFO nova.virt.libvirt.driver [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Snapshot image upload complete
Oct  2 04:24:25 np0005465604 nova_compute[260603]: 2025-10-02 08:24:25.900 2 INFO nova.compute.manager [None req-2c929a21-a0e8-4660-9c1f-20034e1217cb e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 5.83 seconds to snapshot the instance on the hypervisor.
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.219 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.219 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.242 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.306 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.307 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.316 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.316 2 INFO nova.compute.claims [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.435 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:24:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 234 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 14 MiB/s wr, 463 op/s
Oct  2 04:24:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:24:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264070693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.907 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.915 2 DEBUG nova.compute.provider_tree [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.937 2 DEBUG nova.scheduler.client.report [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.967 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:24:26 np0005465604 nova_compute[260603]: 2025-10-02 08:24:26.968 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:24:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct  2 04:24:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct  2 04:24:27 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.018 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.019 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.037 2 INFO nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.056 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.135 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.136 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.137 2 INFO nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Creating image(s)#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.178 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.206 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.233 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.236 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.330 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.331 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.332 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.332 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.359 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.363 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7e111a93-618c-4a35-a412-f1e8a975a139_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.400 2 DEBUG nova.policy [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6747651cfdcc4f868c43b9d78f5846c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b1e1170f2e4a73aaf396476bc82261', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.467 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393452.466161, 95b47d23-06fb-460c-b8d9-7b7213dae4c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.468 2 INFO nova.compute.manager [-] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.494 2 DEBUG nova.compute.manager [None req-d6d28382-7ae4-4c3d-a746-22b82c46a7bd - - - - - -] [instance: 95b47d23-06fb-460c-b8d9-7b7213dae4c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.630 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7e111a93-618c-4a35-a412-f1e8a975a139_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.746 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] resizing rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.848 2 DEBUG nova.objects.instance [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e111a93-618c-4a35-a412-f1e8a975a139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.864 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.865 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Ensure instance console log exists: /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.865 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.866 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:27 np0005465604 nova_compute[260603]: 2025-10-02 08:24:27.866 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:24:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:24:27
Oct  2 04:24:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:24:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:24:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'vms']
Oct  2 04:24:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:24:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct  2 04:24:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct  2 04:24:28 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct  2 04:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:24:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 222 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.9 MiB/s wr, 265 op/s
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.314 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.315 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.316 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.316 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.317 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.319 2 INFO nova.compute.manager [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Terminating instance#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.321 2 DEBUG nova.compute.manager [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.349 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Successfully created port: cd52f5bd-f296-4867-b590-680a2c8a2870 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:24:29 np0005465604 kernel: tap4e1ca975-95 (unregistering): left promiscuous mode
Oct  2 04:24:29 np0005465604 NetworkManager[45129]: <info>  [1759393469.3796] device (tap4e1ca975-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:24:29 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:29Z|00189|binding|INFO|Releasing lport 4e1ca975-9579-4bcd-8e92-5c169484f7fe from this chassis (sb_readonly=0)
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:29 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:29Z|00190|binding|INFO|Setting lport 4e1ca975-9579-4bcd-8e92-5c169484f7fe down in Southbound
Oct  2 04:24:29 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:29Z|00191|binding|INFO|Removing iface tap4e1ca975-95 ovn-installed in OVS
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.403 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:e7:88 10.100.0.7'], port_security=['fa:16:3e:82:e7:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5d50db9c-0731-468a-81da-6762d68cda94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3665ae0e483545e2aaa658ac8a3949aa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '927e44a9-fbe2-4210-9cfb-52bbd0657d4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33e27bca-0333-46b3-bd98-5e7fb62b735e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4e1ca975-9579-4bcd-8e92-5c169484f7fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.405 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4e1ca975-9579-4bcd-8e92-5c169484f7fe in datapath 281b57e5-e0d2-447a-8a70-0fab75a8117e unbound from our chassis#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.407 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 281b57e5-e0d2-447a-8a70-0fab75a8117e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.408 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f7972399-f042-4025-bff1-7c0adc7da600]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.409 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e namespace which is not needed anymore#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:29 np0005465604 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct  2 04:24:29 np0005465604 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 13.899s CPU time.
Oct  2 04:24:29 np0005465604 systemd-machined[214636]: Machine qemu-33-instance-0000001d terminated.
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.562 2 INFO nova.virt.libvirt.driver [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Instance destroyed successfully.#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.563 2 DEBUG nova.objects.instance [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lazy-loading 'resources' on Instance uuid 5d50db9c-0731-468a-81da-6762d68cda94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.575 2 DEBUG nova.virt.libvirt.vif [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:23:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1904255359',display_name='tempest-ImagesOneServerTestJSON-server-1904255359',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1904255359',id=29,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:24:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3665ae0e483545e2aaa658ac8a3949aa',ramdisk_id='',reservation_id='r-qijl0plc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-2127991989',owner_user_name='tempest-ImagesOneServerTestJSON-2127991989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:24:25Z,user_data=None,user_id='e0a15988bafc4a03bd5b08291a4cc14c',uuid=5d50db9c-0731-468a-81da-6762d68cda94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.576 2 DEBUG nova.network.os_vif_util [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converting VIF {"id": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "address": "fa:16:3e:82:e7:88", "network": {"id": "281b57e5-e0d2-447a-8a70-0fab75a8117e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1356365304-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3665ae0e483545e2aaa658ac8a3949aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e1ca975-95", "ovs_interfaceid": "4e1ca975-9579-4bcd-8e92-5c169484f7fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.576 2 DEBUG nova.network.os_vif_util [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.577 2 DEBUG os_vif [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e1ca975-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:29 np0005465604 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [NOTICE]   (295728) : haproxy version is 2.8.14-c23fe91
Oct  2 04:24:29 np0005465604 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [NOTICE]   (295728) : path to executable is /usr/sbin/haproxy
Oct  2 04:24:29 np0005465604 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [WARNING]  (295728) : Exiting Master process...
Oct  2 04:24:29 np0005465604 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [ALERT]    (295728) : Current worker (295730) exited with code 143 (Terminated)
Oct  2 04:24:29 np0005465604 neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e[295724]: [WARNING]  (295728) : All workers exited. Exiting... (0)
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:29 np0005465604 systemd[1]: libpod-a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8.scope: Deactivated successfully.
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.588 2 INFO os_vif [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:e7:88,bridge_name='br-int',has_traffic_filtering=True,id=4e1ca975-9579-4bcd-8e92-5c169484f7fe,network=Network(281b57e5-e0d2-447a-8a70-0fab75a8117e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e1ca975-95')#033[00m
Oct  2 04:24:29 np0005465604 podman[296650]: 2025-10-02 08:24:29.593431781 +0000 UTC m=+0.056001236 container died a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 04:24:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8-userdata-shm.mount: Deactivated successfully.
Oct  2 04:24:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1abcc92a4aba871794cf5916850bf03d08984036d0d311577b9f0f2209f683ce-merged.mount: Deactivated successfully.
Oct  2 04:24:29 np0005465604 podman[296650]: 2025-10-02 08:24:29.638764402 +0000 UTC m=+0.101333867 container cleanup a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 04:24:29 np0005465604 systemd[1]: libpod-conmon-a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8.scope: Deactivated successfully.
Oct  2 04:24:29 np0005465604 podman[296710]: 2025-10-02 08:24:29.705578087 +0000 UTC m=+0.043936047 container remove a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.711 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a13c69-2093-4d25-a9da-ea770c25b0e3]: (4, ('Thu Oct  2 08:24:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e (a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8)\na9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8\nThu Oct  2 08:24:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e (a9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8)\na9fbe2a5d33077df2e96173241441b9792bd882704cebaeccafdb0681b60cef8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.713 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d42c908a-8aba-42df-9fb6-c8d6bbd2e23e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.714 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap281b57e5-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:29 np0005465604 kernel: tap281b57e5-e0: left promiscuous mode
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.759 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9da63c-58ca-44c3-a563-4656d83fe856]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.791 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6a011701-63eb-4ec8-acb3-f6deb961b937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.792 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0248e198-200a-4acf-942f-c9ab80820b80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.813 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec6becc-3575-48bf-837b-fb8bf7a991c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432482, 'reachable_time': 24965, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296726, 'error': None, 'target': 'ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.815 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-281b57e5-e0d2-447a-8a70-0fab75a8117e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:24:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:29.815 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1f084aec-6d2d-4c26-a465-6575945a65b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:29 np0005465604 systemd[1]: run-netns-ovnmeta\x2d281b57e5\x2de0d2\x2d447a\x2d8a70\x2d0fab75a8117e.mount: Deactivated successfully.
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.926 2 INFO nova.virt.libvirt.driver [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Deleting instance files /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94_del#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.927 2 INFO nova.virt.libvirt.driver [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Deletion of /var/lib/nova/instances/5d50db9c-0731-468a-81da-6762d68cda94_del complete#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.968 2 INFO nova.compute.manager [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.968 2 DEBUG oslo.service.loopingcall [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.969 2 DEBUG nova.compute.manager [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:24:29 np0005465604 nova_compute[260603]: 2025-10-02 08:24:29.969 2 DEBUG nova.network.neutron [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.297 2 DEBUG nova.compute.manager [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-unplugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.298 2 DEBUG oslo_concurrency.lockutils [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.298 2 DEBUG oslo_concurrency.lockutils [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.298 2 DEBUG oslo_concurrency.lockutils [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.298 2 DEBUG nova.compute.manager [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] No waiting events found dispatching network-vif-unplugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.298 2 DEBUG nova.compute.manager [req-91701cf9-f77d-4350-a786-e0233305782e req-02db855f-de40-473d-b964-8acd08088088 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-unplugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.575 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Successfully updated port: cd52f5bd-f296-4867-b590-680a2c8a2870 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.596 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.596 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquired lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.597 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.700 2 DEBUG nova.compute.manager [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-changed-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.700 2 DEBUG nova.compute.manager [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Refreshing instance network info cache due to event network-changed-cd52f5bd-f296-4867-b590-680a2c8a2870. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.701 2 DEBUG oslo_concurrency.lockutils [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.718 2 DEBUG nova.network.neutron [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.741 2 INFO nova.compute.manager [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Took 0.77 seconds to deallocate network for instance.#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.807 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.808 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 222 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct  2 04:24:30 np0005465604 nova_compute[260603]: 2025-10-02 08:24:30.880 2 DEBUG oslo_concurrency.processutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:24:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3322198750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:24:31 np0005465604 nova_compute[260603]: 2025-10-02 08:24:31.336 2 DEBUG oslo_concurrency.processutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:31 np0005465604 nova_compute[260603]: 2025-10-02 08:24:31.344 2 DEBUG nova.compute.provider_tree [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:24:31 np0005465604 nova_compute[260603]: 2025-10-02 08:24:31.368 2 DEBUG nova.scheduler.client.report [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:24:31 np0005465604 nova_compute[260603]: 2025-10-02 08:24:31.399 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:31 np0005465604 nova_compute[260603]: 2025-10-02 08:24:31.420 2 INFO nova.scheduler.client.report [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Deleted allocations for instance 5d50db9c-0731-468a-81da-6762d68cda94#033[00m
Oct  2 04:24:31 np0005465604 nova_compute[260603]: 2025-10-02 08:24:31.549 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:24:31 np0005465604 nova_compute[260603]: 2025-10-02 08:24:31.607 2 DEBUG oslo_concurrency.lockutils [None req-62d3fbb5-74c6-46a3-936f-c3989cad088a e0a15988bafc4a03bd5b08291a4cc14c 3665ae0e483545e2aaa658ac8a3949aa - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:31 np0005465604 nova_compute[260603]: 2025-10-02 08:24:31.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct  2 04:24:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct  2 04:24:32 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct  2 04:24:32 np0005465604 nova_compute[260603]: 2025-10-02 08:24:32.429 2 DEBUG nova.compute.manager [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:32 np0005465604 nova_compute[260603]: 2025-10-02 08:24:32.429 2 DEBUG oslo_concurrency.lockutils [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d50db9c-0731-468a-81da-6762d68cda94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:32 np0005465604 nova_compute[260603]: 2025-10-02 08:24:32.430 2 DEBUG oslo_concurrency.lockutils [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:32 np0005465604 nova_compute[260603]: 2025-10-02 08:24:32.430 2 DEBUG oslo_concurrency.lockutils [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d50db9c-0731-468a-81da-6762d68cda94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:32 np0005465604 nova_compute[260603]: 2025-10-02 08:24:32.430 2 DEBUG nova.compute.manager [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] No waiting events found dispatching network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:24:32 np0005465604 nova_compute[260603]: 2025-10-02 08:24:32.431 2 WARNING nova.compute.manager [req-7266d71d-8938-4706-9a8c-bdc5125ea46f req-6638d1e9-5599-4e46-aaa8-277fe830a722 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received unexpected event network-vif-plugged-4e1ca975-9579-4bcd-8e92-5c169484f7fe for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:24:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 7 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 292 active+clean; 145 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 4.0 MiB/s wr, 237 op/s
Oct  2 04:24:32 np0005465604 nova_compute[260603]: 2025-10-02 08:24:32.920 2 DEBUG nova.compute.manager [req-aeadd090-9f79-4aad-b3d2-c43379652254 req-f7470724-a9d1-4685-9528-2bbe9aa837bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Received event network-vif-deleted-4e1ca975-9579-4bcd-8e92-5c169484f7fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.697 2 DEBUG nova.network.neutron [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Updating instance_info_cache with network_info: [{"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.724 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Releasing lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.724 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Instance network_info: |[{"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.725 2 DEBUG oslo_concurrency.lockutils [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.725 2 DEBUG nova.network.neutron [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Refreshing network info cache for port cd52f5bd-f296-4867-b590-680a2c8a2870 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.730 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Start _get_guest_xml network_info=[{"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.736 2 WARNING nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.742 2 DEBUG nova.virt.libvirt.host [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.743 2 DEBUG nova.virt.libvirt.host [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.752 2 DEBUG nova.virt.libvirt.host [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.753 2 DEBUG nova.virt.libvirt.host [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.753 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.754 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.755 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.755 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.756 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.756 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.757 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.757 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.758 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.758 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.759 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.759 2 DEBUG nova.virt.hardware [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:24:33 np0005465604 nova_compute[260603]: 2025-10-02 08:24:33.763 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:24:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400793247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.205 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.235 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.240 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:24:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817994304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.699 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.702 2 DEBUG nova.virt.libvirt.vif [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:24:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-902237424',display_name='tempest-ImagesTestJSON-server-902237424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-902237424',id=30,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-cf2bt1a3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:24:27Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=7e111a93-618c-4a35-a412-f1e8a975a139,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.703 2 DEBUG nova.network.os_vif_util [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.704 2 DEBUG nova.network.os_vif_util [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.706 2 DEBUG nova.objects.instance [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e111a93-618c-4a35-a412-f1e8a975a139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.722 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  <uuid>7e111a93-618c-4a35-a412-f1e8a975a139</uuid>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  <name>instance-0000001e</name>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <nova:name>tempest-ImagesTestJSON-server-902237424</nova:name>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:24:33</nova:creationTime>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <nova:user uuid="6747651cfdcc4f868c43b9d78f5846c2">tempest-ImagesTestJSON-1188243509-project-member</nova:user>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <nova:project uuid="56b1e1170f2e4a73aaf396476bc82261">tempest-ImagesTestJSON-1188243509</nova:project>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <nova:port uuid="cd52f5bd-f296-4867-b590-680a2c8a2870">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <entry name="serial">7e111a93-618c-4a35-a412-f1e8a975a139</entry>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <entry name="uuid">7e111a93-618c-4a35-a412-f1e8a975a139</entry>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7e111a93-618c-4a35-a412-f1e8a975a139_disk">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7e111a93-618c-4a35-a412-f1e8a975a139_disk.config">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:cb:0f:7a"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <target dev="tapcd52f5bd-f2"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/console.log" append="off"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:24:34 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:24:34 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:24:34 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:24:34 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.725 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Preparing to wait for external event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.726 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.726 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.727 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.728 2 DEBUG nova.virt.libvirt.vif [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:24:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-902237424',display_name='tempest-ImagesTestJSON-server-902237424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-902237424',id=30,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-cf2bt1a3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},ta
gs=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:24:27Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=7e111a93-618c-4a35-a412-f1e8a975a139,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.728 2 DEBUG nova.network.os_vif_util [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.730 2 DEBUG nova.network.os_vif_util [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.730 2 DEBUG os_vif [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.732 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd52f5bd-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd52f5bd-f2, col_values=(('external_ids', {'iface-id': 'cd52f5bd-f296-4867-b590-680a2c8a2870', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:0f:7a', 'vm-uuid': '7e111a93-618c-4a35-a412-f1e8a975a139'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:34 np0005465604 NetworkManager[45129]: <info>  [1759393474.7417] manager: (tapcd52f5bd-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.750 2 INFO os_vif [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2')#033[00m
Oct  2 04:24:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:34.811 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:34.811 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:34.812 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 3.1 MiB/s wr, 221 op/s
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.865 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.866 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.866 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No VIF found with MAC fa:16:3e:cb:0f:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.867 2 INFO nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Using config drive#033[00m
Oct  2 04:24:34 np0005465604 nova_compute[260603]: 2025-10-02 08:24:34.899 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:35 np0005465604 nova_compute[260603]: 2025-10-02 08:24:35.583 2 INFO nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Creating config drive at /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config#033[00m
Oct  2 04:24:35 np0005465604 nova_compute[260603]: 2025-10-02 08:24:35.588 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5hecl41g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:35 np0005465604 nova_compute[260603]: 2025-10-02 08:24:35.735 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5hecl41g" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:35 np0005465604 nova_compute[260603]: 2025-10-02 08:24:35.759 2 DEBUG nova.storage.rbd_utils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 7e111a93-618c-4a35-a412-f1e8a975a139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:35 np0005465604 nova_compute[260603]: 2025-10-02 08:24:35.762 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config 7e111a93-618c-4a35-a412-f1e8a975a139_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:35 np0005465604 nova_compute[260603]: 2025-10-02 08:24:35.923 2 DEBUG oslo_concurrency.processutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config 7e111a93-618c-4a35-a412-f1e8a975a139_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:35 np0005465604 nova_compute[260603]: 2025-10-02 08:24:35.924 2 INFO nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Deleting local config drive /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139/disk.config because it was imported into RBD.#033[00m
Oct  2 04:24:35 np0005465604 kernel: tapcd52f5bd-f2: entered promiscuous mode
Oct  2 04:24:35 np0005465604 NetworkManager[45129]: <info>  [1759393475.9887] manager: (tapcd52f5bd-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Oct  2 04:24:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:35Z|00192|binding|INFO|Claiming lport cd52f5bd-f296-4867-b590-680a2c8a2870 for this chassis.
Oct  2 04:24:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:35Z|00193|binding|INFO|cd52f5bd-f296-4867-b590-680a2c8a2870: Claiming fa:16:3e:cb:0f:7a 10.100.0.8
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:35.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.014 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:0f:7a 10.100.0.8'], port_security=['fa:16:3e:cb:0f:7a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7e111a93-618c-4a35-a412-f1e8a975a139', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '2', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=cd52f5bd-f296-4867-b590-680a2c8a2870) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.015 162357 INFO neutron.agent.ovn.metadata.agent [-] Port cd52f5bd-f296-4867-b590-680a2c8a2870 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 bound to our chassis#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.016 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360#033[00m
Oct  2 04:24:36 np0005465604 systemd-machined[214636]: New machine qemu-34-instance-0000001e.
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.035 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dafa140f-d1a6-41a6-b4c3-594861c3ca01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.036 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap897d7abf-91 in ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.038 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap897d7abf-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.038 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[60061c05-0245-41ad-a693-e22c79168632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.038 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9db66d-ef28-4d0d-b416-2a0b565bec22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 systemd[1]: Started Virtual Machine qemu-34-instance-0000001e.
Oct  2 04:24:36 np0005465604 podman[296873]: 2025-10-02 08:24:36.050764223 +0000 UTC m=+0.107094645 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.050 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[3063cbad-c2b9-419a-820a-31cb49b8c283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 systemd-udevd[296934]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:36 np0005465604 podman[296872]: 2025-10-02 08:24:36.077004309 +0000 UTC m=+0.135692697 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 04:24:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:36Z|00194|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 ovn-installed in OVS
Oct  2 04:24:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:36Z|00195|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 up in Southbound
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:36 np0005465604 NetworkManager[45129]: <info>  [1759393476.0832] device (tapcd52f5bd-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:24:36 np0005465604 NetworkManager[45129]: <info>  [1759393476.0841] device (tapcd52f5bd-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.084 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4070ff39-2dbe-4ef1-b5bb-b8dd8c21a529]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.112 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b4184ae1-16ab-4a31-b2c0-c353c995fe82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 NetworkManager[45129]: <info>  [1759393476.1273] manager: (tap897d7abf-90): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Oct  2 04:24:36 np0005465604 systemd-udevd[296936]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.128 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa0fd45-cf04-49fc-bd40-0a64c83bfdd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.156 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9c693e42-b39a-4dba-b912-56b604d9cdd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.158 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2a3235-143c-452a-85af-925c5dd5b957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 NetworkManager[45129]: <info>  [1759393476.1762] device (tap897d7abf-90): carrier: link connected
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.182 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[12b74236-0067-47cf-90d5-ed047764f732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.197 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[01acb120-7192-46aa-b818-59b6bb776158]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435477, 'reachable_time': 23844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296964, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.211 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30d24624-f264-4b78-be2e-72325124d4e3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:18ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 435477, 'tstamp': 435477}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296965, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.229 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[28c19370-5a80-4717-ab7b-0a149d3ebf86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435477, 'reachable_time': 23844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296966, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.263 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10ccca31-ddcb-4d24-8bcd-70f87ec75b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.326 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb61f75d-c40a-43e5-bd8d-bb9636868431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:36 np0005465604 NetworkManager[45129]: <info>  [1759393476.3750] manager: (tap897d7abf-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Oct  2 04:24:36 np0005465604 kernel: tap897d7abf-90: entered promiscuous mode
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.378 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:36Z|00196|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.381 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.382 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf472b14-0a94-4ab6-a856-fd73b14b3c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.383 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-897d7abf-9e23-43cd-8f60-7156792a4360
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 897d7abf-9e23-43cd-8f60-7156792a4360
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:24:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:36.383 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'env', 'PROCESS_TAG=haproxy-897d7abf-9e23-43cd-8f60-7156792a4360', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/897d7abf-9e23-43cd-8f60-7156792a4360.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:36 np0005465604 podman[297040]: 2025-10-02 08:24:36.780429909 +0000 UTC m=+0.052609807 container create b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.795 2 DEBUG nova.compute.manager [req-2d2c2428-7f72-46cb-a07c-82df702715ea req-82c2e9e6-ee18-4ce3-878d-b51df6afabe1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.795 2 DEBUG oslo_concurrency.lockutils [req-2d2c2428-7f72-46cb-a07c-82df702715ea req-82c2e9e6-ee18-4ce3-878d-b51df6afabe1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.796 2 DEBUG oslo_concurrency.lockutils [req-2d2c2428-7f72-46cb-a07c-82df702715ea req-82c2e9e6-ee18-4ce3-878d-b51df6afabe1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.796 2 DEBUG oslo_concurrency.lockutils [req-2d2c2428-7f72-46cb-a07c-82df702715ea req-82c2e9e6-ee18-4ce3-878d-b51df6afabe1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.797 2 DEBUG nova.compute.manager [req-2d2c2428-7f72-46cb-a07c-82df702715ea req-82c2e9e6-ee18-4ce3-878d-b51df6afabe1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Processing event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:36 np0005465604 systemd[1]: Started libpod-conmon-b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980.scope.
Oct  2 04:24:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 2.4 MiB/s wr, 134 op/s
Oct  2 04:24:36 np0005465604 podman[297040]: 2025-10-02 08:24:36.756075123 +0000 UTC m=+0.028255031 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:24:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:24:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472410631e18c72411482e11524827d3001f58363217869b8650f65927ac6239/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:36 np0005465604 podman[297040]: 2025-10-02 08:24:36.891726537 +0000 UTC m=+0.163906435 container init b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:24:36 np0005465604 podman[297040]: 2025-10-02 08:24:36.90266682 +0000 UTC m=+0.174846718 container start b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:24:36 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [NOTICE]   (297060) : New worker (297062) forked
Oct  2 04:24:36 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [NOTICE]   (297060) : Loading success.
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.977 2 DEBUG nova.network.neutron [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Updated VIF entry in instance network info cache for port cd52f5bd-f296-4867-b590-680a2c8a2870. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:24:36 np0005465604 nova_compute[260603]: 2025-10-02 08:24:36.978 2 DEBUG nova.network.neutron [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Updating instance_info_cache with network_info: [{"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:24:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct  2 04:24:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.004 2 DEBUG oslo_concurrency.lockutils [req-50e2174f-07e0-423e-b05e-48dcbbcc12f7 req-bda79387-cba3-422f-a040-8173d521e061 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7e111a93-618c-4a35-a412-f1e8a975a139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:24:37 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.009 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393477.0084705, 7e111a93-618c-4a35-a412-f1e8a975a139 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.009 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] VM Started (Lifecycle Event)#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.014 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.020 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.024 2 INFO nova.virt.libvirt.driver [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Instance spawned successfully.#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.025 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.034 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.040 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.056 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.057 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.057 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.058 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.059 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.060 2 DEBUG nova.virt.libvirt.driver [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.067 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.067 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393477.0132246, 7e111a93-618c-4a35-a412-f1e8a975a139 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.068 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.105 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.110 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393477.0170038, 7e111a93-618c-4a35-a412-f1e8a975a139 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.110 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.384 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.389 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.400 2 INFO nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Took 10.26 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.400 2 DEBUG nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.413 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.472 2 INFO nova.compute.manager [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Took 11.18 seconds to build instance.#033[00m
Oct  2 04:24:37 np0005465604 nova_compute[260603]: 2025-10-02 08:24:37.494 2 DEBUG oslo_concurrency.lockutils [None req-cb5541af-3ccf-4355-9557-0e64e67ed209 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:24:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 965 KiB/s rd, 1.5 MiB/s wr, 170 op/s
Oct  2 04:24:39 np0005465604 nova_compute[260603]: 2025-10-02 08:24:39.005 2 DEBUG nova.compute.manager [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:39 np0005465604 nova_compute[260603]: 2025-10-02 08:24:39.006 2 DEBUG oslo_concurrency.lockutils [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:39 np0005465604 nova_compute[260603]: 2025-10-02 08:24:39.007 2 DEBUG oslo_concurrency.lockutils [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:39 np0005465604 nova_compute[260603]: 2025-10-02 08:24:39.007 2 DEBUG oslo_concurrency.lockutils [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:39 np0005465604 nova_compute[260603]: 2025-10-02 08:24:39.008 2 DEBUG nova.compute.manager [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] No waiting events found dispatching network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:24:39 np0005465604 nova_compute[260603]: 2025-10-02 08:24:39.008 2 WARNING nova.compute.manager [req-805923f0-7a91-4152-8546-dd0d10371578 req-30d0ae6e-297f-43c6-97d9-bcdb222f8bcf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received unexpected event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:24:39 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:39Z|00197|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct  2 04:24:39 np0005465604 nova_compute[260603]: 2025-10-02 08:24:39.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:39 np0005465604 nova_compute[260603]: 2025-10-02 08:24:39.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 874 KiB/s rd, 1.4 MiB/s wr, 154 op/s
Oct  2 04:24:41 np0005465604 nova_compute[260603]: 2025-10-02 08:24:41.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct  2 04:24:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct  2 04:24:42 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct  2 04:24:42 np0005465604 nova_compute[260603]: 2025-10-02 08:24:42.410 2 DEBUG nova.objects.instance [None req-9194375b-afb7-4af8-9e3c-0a71542e979f 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e111a93-618c-4a35-a412-f1e8a975a139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:24:42 np0005465604 nova_compute[260603]: 2025-10-02 08:24:42.432 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393482.432187, 7e111a93-618c-4a35-a412-f1e8a975a139 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:42 np0005465604 nova_compute[260603]: 2025-10-02 08:24:42.433 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:24:42 np0005465604 nova_compute[260603]: 2025-10-02 08:24:42.453 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:42 np0005465604 nova_compute[260603]: 2025-10-02 08:24:42.459 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:24:42 np0005465604 nova_compute[260603]: 2025-10-02 08:24:42.487 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 04:24:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 93 op/s
Oct  2 04:24:42 np0005465604 kernel: tapcd52f5bd-f2 (unregistering): left promiscuous mode
Oct  2 04:24:42 np0005465604 NetworkManager[45129]: <info>  [1759393482.9097] device (tapcd52f5bd-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:24:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:42Z|00198|binding|INFO|Releasing lport cd52f5bd-f296-4867-b590-680a2c8a2870 from this chassis (sb_readonly=0)
Oct  2 04:24:42 np0005465604 nova_compute[260603]: 2025-10-02 08:24:42.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:42Z|00199|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 down in Southbound
Oct  2 04:24:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:42Z|00200|binding|INFO|Removing iface tapcd52f5bd-f2 ovn-installed in OVS
Oct  2 04:24:42 np0005465604 nova_compute[260603]: 2025-10-02 08:24:42.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:42.934 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:0f:7a 10.100.0.8'], port_security=['fa:16:3e:cb:0f:7a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7e111a93-618c-4a35-a412-f1e8a975a139', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=cd52f5bd-f296-4867-b590-680a2c8a2870) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:24:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:42.937 162357 INFO neutron.agent.ovn.metadata.agent [-] Port cd52f5bd-f296-4867-b590-680a2c8a2870 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis#033[00m
Oct  2 04:24:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:42.939 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:24:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:42.940 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e17496b6-f725-4835-a46f-fde5b9b4d4bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:42.943 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace which is not needed anymore#033[00m
Oct  2 04:24:42 np0005465604 nova_compute[260603]: 2025-10-02 08:24:42.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:42 np0005465604 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct  2 04:24:42 np0005465604 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Consumed 6.556s CPU time.
Oct  2 04:24:42 np0005465604 systemd-machined[214636]: Machine qemu-34-instance-0000001e terminated.
Oct  2 04:24:43 np0005465604 podman[297075]: 2025-10-02 08:24:43.034721212 +0000 UTC m=+0.091504720 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:24:43 np0005465604 kernel: tapcd52f5bd-f2: entered promiscuous mode
Oct  2 04:24:43 np0005465604 systemd-udevd[297085]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:24:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:43Z|00201|binding|INFO|Claiming lport cd52f5bd-f296-4867-b590-680a2c8a2870 for this chassis.
Oct  2 04:24:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:43Z|00202|binding|INFO|cd52f5bd-f296-4867-b590-680a2c8a2870: Claiming fa:16:3e:cb:0f:7a 10.100.0.8
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:43 np0005465604 kernel: tapcd52f5bd-f2 (unregistering): left promiscuous mode
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.119 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:0f:7a 10.100.0.8'], port_security=['fa:16:3e:cb:0f:7a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7e111a93-618c-4a35-a412-f1e8a975a139', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=cd52f5bd-f296-4867-b590-680a2c8a2870) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:24:43 np0005465604 NetworkManager[45129]: <info>  [1759393483.1240] manager: (tapcd52f5bd-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Oct  2 04:24:43 np0005465604 virtnodedevd[260919]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  2 04:24:43 np0005465604 virtnodedevd[260919]: hostname: compute-0
Oct  2 04:24:43 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct  2 04:24:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:43Z|00203|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 ovn-installed in OVS
Oct  2 04:24:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:43Z|00204|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 up in Southbound
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:43 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:43Z|00205|binding|INFO|Releasing lport cd52f5bd-f296-4867-b590-680a2c8a2870 from this chassis (sb_readonly=0)
Oct  2 04:24:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:43Z|00206|binding|INFO|Setting lport cd52f5bd-f296-4867-b590-680a2c8a2870 down in Southbound
Oct  2 04:24:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:24:43Z|00207|binding|INFO|Removing iface tapcd52f5bd-f2 ovn-installed in OVS
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.149 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:0f:7a 10.100.0.8'], port_security=['fa:16:3e:cb:0f:7a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7e111a93-618c-4a35-a412-f1e8a975a139', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=cd52f5bd-f296-4867-b590-680a2c8a2870) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:24:43 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.152 2 DEBUG nova.compute.manager [None req-9194375b-afb7-4af8-9e3c-0a71542e979f 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:43 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:43 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct  2 04:24:43 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct  2 04:24:43 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [NOTICE]   (297060) : haproxy version is 2.8.14-c23fe91
Oct  2 04:24:43 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [NOTICE]   (297060) : path to executable is /usr/sbin/haproxy
Oct  2 04:24:43 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [WARNING]  (297060) : Exiting Master process...
Oct  2 04:24:43 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [WARNING]  (297060) : Exiting Master process...
Oct  2 04:24:43 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [ALERT]    (297060) : Current worker (297062) exited with code 143 (Terminated)
Oct  2 04:24:43 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[297056]: [WARNING]  (297060) : All workers exited. Exiting... (0)
Oct  2 04:24:43 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct  2 04:24:43 np0005465604 systemd[1]: libpod-b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980.scope: Deactivated successfully.
Oct  2 04:24:43 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapcd52f5bd-f2: No such device
Oct  2 04:24:43 np0005465604 podman[297119]: 2025-10-02 08:24:43.190536067 +0000 UTC m=+0.055149590 container died b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:24:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980-userdata-shm.mount: Deactivated successfully.
Oct  2 04:24:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-472410631e18c72411482e11524827d3001f58363217869b8650f65927ac6239-merged.mount: Deactivated successfully.
Oct  2 04:24:43 np0005465604 podman[297119]: 2025-10-02 08:24:43.230353411 +0000 UTC m=+0.094966944 container cleanup b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 04:24:43 np0005465604 systemd[1]: libpod-conmon-b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980.scope: Deactivated successfully.
Oct  2 04:24:43 np0005465604 podman[297170]: 2025-10-02 08:24:43.305438732 +0000 UTC m=+0.045370234 container remove b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.315 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d66c3e-a029-4ca2-989f-a68f84831096]: (4, ('Thu Oct  2 08:24:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980)\nb553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980\nThu Oct  2 08:24:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (b553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980)\nb553b2054fcda16ad0eaab2b0df1458f5391b298001bd94a4b70bfc0cc5ad980\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.317 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df409489-7203-4cce-84b2-1f1198fae82d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.318 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:43 np0005465604 kernel: tap897d7abf-90: left promiscuous mode
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.342 2 DEBUG nova.compute.manager [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-vif-unplugged-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.343 2 DEBUG oslo_concurrency.lockutils [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.343 2 DEBUG oslo_concurrency.lockutils [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.343 2 DEBUG oslo_concurrency.lockutils [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.344 2 DEBUG nova.compute.manager [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] No waiting events found dispatching network-vif-unplugged-cd52f5bd-f296-4867-b590-680a2c8a2870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.344 2 WARNING nova.compute.manager [req-6845f693-f512-4051-8bea-4f4d054809cc req-027e10b1-b085-46e4-a310-280ec8a810df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received unexpected event network-vif-unplugged-cd52f5bd-f296-4867-b590-680a2c8a2870 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 04:24:43 np0005465604 nova_compute[260603]: 2025-10-02 08:24:43.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.346 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0c39d775-b212-410c-88c7-5ca90af17134]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.370 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[716cb9ad-9902-441e-81c8-03d6c527f452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.375 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b055945-986c-46f4-bd1c-718b1892748a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.393 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd13d8ae-4c2a-4267-abf9-02c69cdf254a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435470, 'reachable_time': 38668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297189, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:43 np0005465604 systemd[1]: run-netns-ovnmeta\x2d897d7abf\x2d9e23\x2d43cd\x2d8f60\x2d7156792a4360.mount: Deactivated successfully.
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.396 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.396 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a82f76f1-d7d7-4ab1-b186-c7255e705821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.396 162357 INFO neutron.agent.ovn.metadata.agent [-] Port cd52f5bd-f296-4867-b590-680a2c8a2870 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.397 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.398 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4f34e67e-e6e7-409e-b20b-6386e0837fcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.398 162357 INFO neutron.agent.ovn.metadata.agent [-] Port cd52f5bd-f296-4867-b590-680a2c8a2870 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.399 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:24:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:43.399 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[269dc7c4-5171-4605-9431-71fd4fba93ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:24:44 np0005465604 nova_compute[260603]: 2025-10-02 08:24:44.561 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393469.56035, 5d50db9c-0731-468a-81da-6762d68cda94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:44 np0005465604 nova_compute[260603]: 2025-10-02 08:24:44.562 2 INFO nova.compute.manager [-] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:24:44 np0005465604 nova_compute[260603]: 2025-10-02 08:24:44.587 2 DEBUG nova.compute.manager [None req-90d3cbad-face-4e67-9010-3cf6eea9b622 - - - - - -] [instance: 5d50db9c-0731-468a-81da-6762d68cda94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:44 np0005465604 nova_compute[260603]: 2025-10-02 08:24:44.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 19 KiB/s wr, 110 op/s
Oct  2 04:24:45 np0005465604 nova_compute[260603]: 2025-10-02 08:24:45.583 2 DEBUG nova.compute.manager [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:45 np0005465604 nova_compute[260603]: 2025-10-02 08:24:45.583 2 DEBUG oslo_concurrency.lockutils [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:45 np0005465604 nova_compute[260603]: 2025-10-02 08:24:45.584 2 DEBUG oslo_concurrency.lockutils [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:45 np0005465604 nova_compute[260603]: 2025-10-02 08:24:45.584 2 DEBUG oslo_concurrency.lockutils [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:45 np0005465604 nova_compute[260603]: 2025-10-02 08:24:45.585 2 DEBUG nova.compute.manager [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] No waiting events found dispatching network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:24:45 np0005465604 nova_compute[260603]: 2025-10-02 08:24:45.585 2 WARNING nova.compute.manager [req-6079e1cd-3a65-4429-b5ac-2eff9cb77b2c req-6a40664a-abe2-430a-842d-047a8f909b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received unexpected event network-vif-plugged-cd52f5bd-f296-4867-b590-680a2c8a2870 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 04:24:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:46.197 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:24:46 np0005465604 nova_compute[260603]: 2025-10-02 08:24:46.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:46.198 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:24:46 np0005465604 nova_compute[260603]: 2025-10-02 08:24:46.511 2 DEBUG nova.compute.manager [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:46 np0005465604 nova_compute[260603]: 2025-10-02 08:24:46.576 2 INFO nova.compute.manager [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] instance snapshotting#033[00m
Oct  2 04:24:46 np0005465604 nova_compute[260603]: 2025-10-02 08:24:46.577 2 WARNING nova.compute.manager [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Oct  2 04:24:46 np0005465604 nova_compute[260603]: 2025-10-02 08:24:46.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Oct  2 04:24:46 np0005465604 nova_compute[260603]: 2025-10-02 08:24:46.880 2 INFO nova.virt.libvirt.driver [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Beginning cold snapshot process#033[00m
Oct  2 04:24:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:47 np0005465604 podman[297190]: 2025-10-02 08:24:47.027846312 +0000 UTC m=+0.087547434 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:24:47 np0005465604 nova_compute[260603]: 2025-10-02 08:24:47.252 2 DEBUG nova.virt.libvirt.imagebackend [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:24:47 np0005465604 nova_compute[260603]: 2025-10-02 08:24:47.516 2 DEBUG nova.storage.rbd_utils [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(2d672cc0cec84643a40ae2d55a034eed) on rbd image(7e111a93-618c-4a35-a412-f1e8a975a139_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:24:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct  2 04:24:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct  2 04:24:48 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct  2 04:24:48 np0005465604 nova_compute[260603]: 2025-10-02 08:24:48.125 2 DEBUG nova.storage.rbd_utils [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] cloning vms/7e111a93-618c-4a35-a412-f1e8a975a139_disk@2d672cc0cec84643a40ae2d55a034eed to images/ba24338b-7147-4fdf-8929-8fac7c4161b9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:24:48 np0005465604 nova_compute[260603]: 2025-10-02 08:24:48.281 2 DEBUG nova.storage.rbd_utils [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] flattening images/ba24338b-7147-4fdf-8929-8fac7c4161b9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:24:48 np0005465604 nova_compute[260603]: 2025-10-02 08:24:48.563 2 DEBUG nova.storage.rbd_utils [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] removing snapshot(2d672cc0cec84643a40ae2d55a034eed) on rbd image(7e111a93-618c-4a35-a412-f1e8a975a139_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:24:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1023 B/s wr, 83 op/s
Oct  2 04:24:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct  2 04:24:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct  2 04:24:49 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct  2 04:24:49 np0005465604 nova_compute[260603]: 2025-10-02 08:24:49.123 2 DEBUG nova.storage.rbd_utils [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(snap) on rbd image(ba24338b-7147-4fdf-8929-8fac7c4161b9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:24:49 np0005465604 nova_compute[260603]: 2025-10-02 08:24:49.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct  2 04:24:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct  2 04:24:50 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct  2 04:24:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 88 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 KiB/s wr, 21 op/s
Oct  2 04:24:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct  2 04:24:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct  2 04:24:51 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct  2 04:24:51 np0005465604 nova_compute[260603]: 2025-10-02 08:24:51.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:52 np0005465604 nova_compute[260603]: 2025-10-02 08:24:52.558 2 INFO nova.virt.libvirt.driver [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Snapshot image upload complete#033[00m
Oct  2 04:24:52 np0005465604 nova_compute[260603]: 2025-10-02 08:24:52.558 2 INFO nova.compute.manager [None req-c40f60f9-299b-40f5-8254-3c2bd33534ce 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Took 5.98 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 04:24:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 130 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 218 op/s
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:24:53 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2e028815-07d8-43ca-b6f5-b585ac0bc83b does not exist
Oct  2 04:24:53 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 624315fd-5c1a-429c-a2ab-12df106c20bb does not exist
Oct  2 04:24:53 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f828179d-3ce1-4534-b9ea-4206f90c292b does not exist
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:24:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:24:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:24:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:24:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:24:54 np0005465604 podman[297622]: 2025-10-02 08:24:54.237983635 +0000 UTC m=+0.068594844 container create e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:24:54 np0005465604 systemd[1]: Started libpod-conmon-e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97.scope.
Oct  2 04:24:54 np0005465604 podman[297622]: 2025-10-02 08:24:54.209356111 +0000 UTC m=+0.039967370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:24:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:24:54 np0005465604 podman[297622]: 2025-10-02 08:24:54.345275374 +0000 UTC m=+0.175886593 container init e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:24:54 np0005465604 podman[297622]: 2025-10-02 08:24:54.357107755 +0000 UTC m=+0.187718954 container start e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:24:54 np0005465604 podman[297622]: 2025-10-02 08:24:54.362971505 +0000 UTC m=+0.193582764 container attach e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:24:54 np0005465604 blissful_wu[297638]: 167 167
Oct  2 04:24:54 np0005465604 systemd[1]: libpod-e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97.scope: Deactivated successfully.
Oct  2 04:24:54 np0005465604 podman[297622]: 2025-10-02 08:24:54.366023963 +0000 UTC m=+0.196635162 container died e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:24:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-af5aa4bc84ab77a0abdc3e7283eb938e509edb496e40c01be07672129fc3c3c6-merged.mount: Deactivated successfully.
Oct  2 04:24:54 np0005465604 podman[297622]: 2025-10-02 08:24:54.440027879 +0000 UTC m=+0.270639088 container remove e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  2 04:24:54 np0005465604 systemd[1]: libpod-conmon-e05e6dccf085a1ab6981ca81d527dbbdf8577e83553e5e86e406b421ba279b97.scope: Deactivated successfully.
Oct  2 04:24:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct  2 04:24:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct  2 04:24:54 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct  2 04:24:54 np0005465604 podman[297662]: 2025-10-02 08:24:54.65810402 +0000 UTC m=+0.052268796 container create c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 04:24:54 np0005465604 systemd[1]: Started libpod-conmon-c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2.scope.
Oct  2 04:24:54 np0005465604 podman[297662]: 2025-10-02 08:24:54.631042957 +0000 UTC m=+0.025207783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:24:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:24:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:54 np0005465604 nova_compute[260603]: 2025-10-02 08:24:54.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:54 np0005465604 podman[297662]: 2025-10-02 08:24:54.777170449 +0000 UTC m=+0.171335205 container init c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 04:24:54 np0005465604 podman[297662]: 2025-10-02 08:24:54.799933153 +0000 UTC m=+0.194097919 container start c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:24:54 np0005465604 podman[297662]: 2025-10-02 08:24:54.806905478 +0000 UTC m=+0.201070214 container attach c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 04:24:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 134 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Oct  2 04:24:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:24:55.201 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.508 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.509 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.510 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.510 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.511 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.513 2 INFO nova.compute.manager [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Terminating instance#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.514 2 DEBUG nova.compute.manager [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.524 2 INFO nova.virt.libvirt.driver [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Instance destroyed successfully.#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.524 2 DEBUG nova.objects.instance [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'resources' on Instance uuid 7e111a93-618c-4a35-a412-f1e8a975a139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.541 2 DEBUG nova.virt.libvirt.vif [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:24:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-902237424',display_name='tempest-ImagesTestJSON-server-902237424',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-902237424',id=30,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:24:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-cf2bt1a3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:24:52Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=7e111a93-618c-4a35-a412-f1e8a975a139,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.542 2 DEBUG nova.network.os_vif_util [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "cd52f5bd-f296-4867-b590-680a2c8a2870", "address": "fa:16:3e:cb:0f:7a", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd52f5bd-f2", "ovs_interfaceid": "cd52f5bd-f296-4867-b590-680a2c8a2870", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.543 2 DEBUG nova.network.os_vif_util [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.544 2 DEBUG os_vif [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.547 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd52f5bd-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.555 2 INFO os_vif [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:0f:7a,bridge_name='br-int',has_traffic_filtering=True,id=cd52f5bd-f296-4867-b590-680a2c8a2870,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd52f5bd-f2')#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.938 2 INFO nova.virt.libvirt.driver [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Deleting instance files /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139_del#033[00m
Oct  2 04:24:55 np0005465604 nova_compute[260603]: 2025-10-02 08:24:55.939 2 INFO nova.virt.libvirt.driver [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Deletion of /var/lib/nova/instances/7e111a93-618c-4a35-a412-f1e8a975a139_del complete#033[00m
Oct  2 04:24:55 np0005465604 youthful_almeida[297679]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:24:55 np0005465604 youthful_almeida[297679]: --> relative data size: 1.0
Oct  2 04:24:55 np0005465604 youthful_almeida[297679]: --> All data devices are unavailable
Oct  2 04:24:55 np0005465604 systemd[1]: libpod-c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2.scope: Deactivated successfully.
Oct  2 04:24:55 np0005465604 podman[297662]: 2025-10-02 08:24:55.990671985 +0000 UTC m=+1.384836721 container died c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 04:24:55 np0005465604 systemd[1]: libpod-c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2.scope: Consumed 1.128s CPU time.
Oct  2 04:24:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d70d4a57143800c01d8237f4a155a49a36f438ecbe704a93e6f7bcaa4359bfa5-merged.mount: Deactivated successfully.
Oct  2 04:24:56 np0005465604 nova_compute[260603]: 2025-10-02 08:24:56.027 2 INFO nova.compute.manager [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Took 0.51 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:24:56 np0005465604 nova_compute[260603]: 2025-10-02 08:24:56.029 2 DEBUG oslo.service.loopingcall [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:24:56 np0005465604 nova_compute[260603]: 2025-10-02 08:24:56.029 2 DEBUG nova.compute.manager [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:24:56 np0005465604 nova_compute[260603]: 2025-10-02 08:24:56.029 2 DEBUG nova.network.neutron [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:24:56 np0005465604 podman[297662]: 2025-10-02 08:24:56.040497912 +0000 UTC m=+1.434662648 container remove c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:24:56 np0005465604 systemd[1]: libpod-conmon-c6955488f0d25807922d008190e1beefbf8e5d975ec4e183c598175661f136f2.scope: Deactivated successfully.
Oct  2 04:24:56 np0005465604 nova_compute[260603]: 2025-10-02 08:24:56.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:24:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 134 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 141 op/s
Oct  2 04:24:56 np0005465604 podman[297879]: 2025-10-02 08:24:56.858736244 +0000 UTC m=+0.069175421 container create cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:24:56 np0005465604 systemd[1]: Started libpod-conmon-cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b.scope.
Oct  2 04:24:56 np0005465604 podman[297879]: 2025-10-02 08:24:56.829209772 +0000 UTC m=+0.039649009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:24:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:24:56 np0005465604 podman[297879]: 2025-10-02 08:24:56.96961362 +0000 UTC m=+0.180052857 container init cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 04:24:56 np0005465604 podman[297879]: 2025-10-02 08:24:56.981105789 +0000 UTC m=+0.191544976 container start cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:24:56 np0005465604 podman[297879]: 2025-10-02 08:24:56.985584044 +0000 UTC m=+0.196023271 container attach cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 04:24:56 np0005465604 gallant_feynman[297895]: 167 167
Oct  2 04:24:56 np0005465604 systemd[1]: libpod-cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b.scope: Deactivated successfully.
Oct  2 04:24:56 np0005465604 podman[297879]: 2025-10-02 08:24:56.989343396 +0000 UTC m=+0.199782573 container died cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 04:24:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:24:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct  2 04:24:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct  2 04:24:57 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct  2 04:24:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ce4441261993a3ecbd9d55ad35400e3934e5cfd797815cce65a829e096c1fbe1-merged.mount: Deactivated successfully.
Oct  2 04:24:57 np0005465604 podman[297879]: 2025-10-02 08:24:57.050115574 +0000 UTC m=+0.260554761 container remove cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:24:57 np0005465604 systemd[1]: libpod-conmon-cf21292e61564208c2449d5dc9dd587a1b28855e0ae9d3b5ea7b3af98144f82b.scope: Deactivated successfully.
Oct  2 04:24:57 np0005465604 podman[297920]: 2025-10-02 08:24:57.294698521 +0000 UTC m=+0.068042455 container create 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:24:57 np0005465604 systemd[1]: Started libpod-conmon-033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb.scope.
Oct  2 04:24:57 np0005465604 podman[297920]: 2025-10-02 08:24:57.270523631 +0000 UTC m=+0.043867645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:24:57 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:24:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c04bb5a16434ec290df3213086bdf3604e7011937f94d0c50f9caaaa625fd25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c04bb5a16434ec290df3213086bdf3604e7011937f94d0c50f9caaaa625fd25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c04bb5a16434ec290df3213086bdf3604e7011937f94d0c50f9caaaa625fd25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c04bb5a16434ec290df3213086bdf3604e7011937f94d0c50f9caaaa625fd25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:57 np0005465604 podman[297920]: 2025-10-02 08:24:57.392780433 +0000 UTC m=+0.166124457 container init 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:24:57 np0005465604 podman[297920]: 2025-10-02 08:24:57.410596297 +0000 UTC m=+0.183940221 container start 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:24:57 np0005465604 podman[297920]: 2025-10-02 08:24:57.414429411 +0000 UTC m=+0.187773395 container attach 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.073 2 DEBUG nova.network.neutron [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.103 2 INFO nova.compute.manager [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Took 2.07 seconds to deallocate network for instance.#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.154 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393483.1521792, 7e111a93-618c-4a35-a412-f1e8a975a139 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:24:58 np0005465604 fervent_easley[297936]: {
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.154 2 INFO nova.compute.manager [-] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:    "0": [
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:        {
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "devices": [
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "/dev/loop3"
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            ],
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_name": "ceph_lv0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_size": "21470642176",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "name": "ceph_lv0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "tags": {
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.cluster_name": "ceph",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.crush_device_class": "",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.encrypted": "0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.osd_id": "0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.type": "block",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.vdo": "0"
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            },
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "type": "block",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "vg_name": "ceph_vg0"
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:        }
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:    ],
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:    "1": [
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:        {
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "devices": [
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "/dev/loop4"
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            ],
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_name": "ceph_lv1",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_size": "21470642176",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "name": "ceph_lv1",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "tags": {
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.cluster_name": "ceph",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.crush_device_class": "",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.encrypted": "0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.osd_id": "1",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.type": "block",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.vdo": "0"
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            },
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "type": "block",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "vg_name": "ceph_vg1"
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:        }
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:    ],
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:    "2": [
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:        {
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "devices": [
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "/dev/loop5"
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            ],
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_name": "ceph_lv2",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_size": "21470642176",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "name": "ceph_lv2",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "tags": {
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.cluster_name": "ceph",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.crush_device_class": "",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.encrypted": "0",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.osd_id": "2",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.type": "block",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:                "ceph.vdo": "0"
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            },
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "type": "block",
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:            "vg_name": "ceph_vg2"
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:        }
Oct  2 04:24:58 np0005465604 fervent_easley[297936]:    ]
Oct  2 04:24:58 np0005465604 fervent_easley[297936]: }
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.190 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.191 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.193 2 DEBUG nova.compute.manager [None req-f1e14f0a-5c4d-4084-b935-22d3fcee806d - - - - - -] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:24:58 np0005465604 systemd[1]: libpod-033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb.scope: Deactivated successfully.
Oct  2 04:24:58 np0005465604 podman[297920]: 2025-10-02 08:24:58.202461349 +0000 UTC m=+0.975805313 container died 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:24:58 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2c04bb5a16434ec290df3213086bdf3604e7011937f94d0c50f9caaaa625fd25-merged.mount: Deactivated successfully.
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.264 2 DEBUG oslo_concurrency.processutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:58 np0005465604 podman[297920]: 2025-10-02 08:24:58.292474741 +0000 UTC m=+1.065818705 container remove 033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:24:58 np0005465604 systemd[1]: libpod-conmon-033aa29d935755e49378be49e0abe9e90c0c8fefbabf296cbcac9a6c7e3c32cb.scope: Deactivated successfully.
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.397 2 DEBUG nova.compute.manager [req-291ad518-7ef6-4b7f-9cc1-aba07bd38176 req-645328ca-9272-4f3f-aee3-37574bf365b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7e111a93-618c-4a35-a412-f1e8a975a139] Received event network-vif-deleted-cd52f5bd-f296-4867-b590-680a2c8a2870 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:24:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:24:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195898992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.711 2 DEBUG oslo_concurrency.processutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.723 2 DEBUG nova.compute.provider_tree [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.753 2 DEBUG nova.scheduler.client.report [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.781 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.782 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.784 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.818 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.833 2 INFO nova.scheduler.client.report [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Deleted allocations for instance 7e111a93-618c-4a35-a412-f1e8a975a139#033[00m
Oct  2 04:24:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 41 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.8 MiB/s wr, 217 op/s
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.901 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.902 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.914 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.915 2 INFO nova.compute.claims [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:24:58 np0005465604 nova_compute[260603]: 2025-10-02 08:24:58.939 2 DEBUG oslo_concurrency.lockutils [None req-1e1baf5c-9182-4e2e-b1ca-701a01e3cf9e 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "7e111a93-618c-4a35-a412-f1e8a975a139" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.056 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:24:59 np0005465604 podman[298121]: 2025-10-02 08:24:59.21169749 +0000 UTC m=+0.069306626 container create a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 04:24:59 np0005465604 systemd[1]: Started libpod-conmon-a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a.scope.
Oct  2 04:24:59 np0005465604 podman[298121]: 2025-10-02 08:24:59.194434513 +0000 UTC m=+0.052043639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:24:59 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:24:59 np0005465604 podman[298121]: 2025-10-02 08:24:59.314842065 +0000 UTC m=+0.172451181 container init a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 04:24:59 np0005465604 podman[298121]: 2025-10-02 08:24:59.322300635 +0000 UTC m=+0.179909741 container start a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:24:59 np0005465604 podman[298121]: 2025-10-02 08:24:59.325124177 +0000 UTC m=+0.182733303 container attach a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:24:59 np0005465604 festive_lederberg[298156]: 167 167
Oct  2 04:24:59 np0005465604 systemd[1]: libpod-a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a.scope: Deactivated successfully.
Oct  2 04:24:59 np0005465604 conmon[298156]: conmon a06f38b42262e39d9303 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a.scope/container/memory.events
Oct  2 04:24:59 np0005465604 podman[298121]: 2025-10-02 08:24:59.330512741 +0000 UTC m=+0.188121857 container died a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:24:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay-64ec5e2bfd126fd5675b8ea43a6016f3febe0e5e3e0350909158a6d0db57ca0a-merged.mount: Deactivated successfully.
Oct  2 04:24:59 np0005465604 podman[298121]: 2025-10-02 08:24:59.361108307 +0000 UTC m=+0.218717413 container remove a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 04:24:59 np0005465604 systemd[1]: libpod-conmon-a06f38b42262e39d9303c5f023583d39e90d783bd0fe3922c85d6897c8f4d26a.scope: Deactivated successfully.
Oct  2 04:24:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:24:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885705688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:24:59 np0005465604 podman[298180]: 2025-10-02 08:24:59.523266186 +0000 UTC m=+0.048269878 container create 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.530 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.537 2 DEBUG nova.compute.provider_tree [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.554 2 DEBUG nova.scheduler.client.report [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:24:59 np0005465604 systemd[1]: Started libpod-conmon-4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5.scope.
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.590 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.591 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:24:59 np0005465604 podman[298180]: 2025-10-02 08:24:59.504950785 +0000 UTC m=+0.029954477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:24:59 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:24:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1870b4b395d7e371d3689ee7e61d9edb940e917258e806f463655334f8ab30b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1870b4b395d7e371d3689ee7e61d9edb940e917258e806f463655334f8ab30b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1870b4b395d7e371d3689ee7e61d9edb940e917258e806f463655334f8ab30b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1870b4b395d7e371d3689ee7e61d9edb940e917258e806f463655334f8ab30b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:24:59 np0005465604 podman[298180]: 2025-10-02 08:24:59.640345401 +0000 UTC m=+0.165349133 container init 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 04:24:59 np0005465604 podman[298180]: 2025-10-02 08:24:59.656552423 +0000 UTC m=+0.181556155 container start 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 04:24:59 np0005465604 podman[298180]: 2025-10-02 08:24:59.660470269 +0000 UTC m=+0.185473981 container attach 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.664 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.665 2 DEBUG nova.network.neutron [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.702 2 INFO nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.722 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.856 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.858 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.859 2 INFO nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Creating image(s)#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.889 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.924 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.955 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:24:59 np0005465604 nova_compute[260603]: 2025-10-02 08:24:59.960 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.008 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.009 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.036 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.061 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.062 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.063 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.063 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.089 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.094 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.149 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.150 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.162 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.163 2 INFO nova.compute.claims [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.233 2 DEBUG nova.network.neutron [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.233 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.321 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.344 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.427 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] resizing rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.535 2 DEBUG nova.objects.instance [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'migration_context' on Instance uuid 9ea31984-a45e-4154-9df9-3c4e8ce69309 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.552 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.552 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Ensure instance console log exists: /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.552 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.553 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.553 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.554 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.558 2 WARNING nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.572 2 DEBUG nova.virt.libvirt.host [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.573 2 DEBUG nova.virt.libvirt.host [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.575 2 DEBUG nova.virt.libvirt.host [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.576 2 DEBUG nova.virt.libvirt.host [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.576 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.576 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.577 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.577 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.577 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.577 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.578 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.578 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.578 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.578 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.578 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.579 2 DEBUG nova.virt.hardware [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.581 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.634 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.635 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.650 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]: {
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "osd_id": 2,
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "type": "bluestore"
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:    },
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "osd_id": 1,
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "type": "bluestore"
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:    },
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "osd_id": 0,
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:        "type": "bluestore"
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]:    }
Oct  2 04:25:00 np0005465604 determined_torvalds[298199]: }
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.733 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:00 np0005465604 systemd[1]: libpod-4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5.scope: Deactivated successfully.
Oct  2 04:25:00 np0005465604 systemd[1]: libpod-4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5.scope: Consumed 1.060s CPU time.
Oct  2 04:25:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3408356856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.783 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.791 2 DEBUG nova.compute.provider_tree [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:25:00 np0005465604 podman[298438]: 2025-10-02 08:25:00.793202281 +0000 UTC m=+0.035324950 container died 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.805 2 DEBUG nova.scheduler.client.report [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.824 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b1870b4b395d7e371d3689ee7e61d9edb940e917258e806f463655334f8ab30b-merged.mount: Deactivated successfully.
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.825 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.827 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.835 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.836 2 INFO nova.compute.claims [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:25:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 41 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 70 KiB/s wr, 95 op/s
Oct  2 04:25:00 np0005465604 podman[298438]: 2025-10-02 08:25:00.86140691 +0000 UTC m=+0.103529579 container remove 4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:25:00 np0005465604 systemd[1]: libpod-conmon-4e898dd21a1bab0795b5d3dfd18ce63ff9e4ebcdd0397b87ef7ea7cc07f713a5.scope: Deactivated successfully.
Oct  2 04:25:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:25:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:25:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:25:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:25:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8646470f-064a-4a6e-8c0a-39d9568b6651 does not exist
Oct  2 04:25:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 91b2f16f-819c-477e-be2e-ca3dd1bcf061 does not exist
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.927 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.928 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.953 2 INFO nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:25:00 np0005465604 nova_compute[260603]: 2025-10-02 08:25:00.971 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:25:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:25:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.069 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2658148916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.101 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.103 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.104 2 INFO nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Creating image(s)#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.129 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.155 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.181 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.185 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.218 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.250 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.255 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.292 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.294 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.295 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.296 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.326 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.331 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804454195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.512 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.522 2 DEBUG nova.compute.provider_tree [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.615 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.696 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] resizing rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:25:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225650474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.732 2 DEBUG nova.scheduler.client.report [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.738 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.741 2 DEBUG nova.objects.instance [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ea31984-a45e-4154-9df9-3c4e8ce69309 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.762 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  <uuid>9ea31984-a45e-4154-9df9-3c4e8ce69309</uuid>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  <name>instance-0000001f</name>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1042949854</nova:name>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:25:00</nova:creationTime>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:        <nova:user uuid="cf69d2cc0a684734b9d2d6da2fee6bf7">tempest-ListImageFiltersTestJSON-1675507372-project-member</nova:user>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:        <nova:project uuid="1d3020619cd44cbd964c169ee848da8e">tempest-ListImageFiltersTestJSON-1675507372</nova:project>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <entry name="serial">9ea31984-a45e-4154-9df9-3c4e8ce69309</entry>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <entry name="uuid">9ea31984-a45e-4154-9df9-3c4e8ce69309</entry>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9ea31984-a45e-4154-9df9-3c4e8ce69309_disk">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/console.log" append="off"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:25:01 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:25:01 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:25:01 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:25:01 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.813 2 DEBUG nova.policy [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6747651cfdcc4f868c43b9d78f5846c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b1e1170f2e4a73aaf396476bc82261', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.817 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.818 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.832 2 DEBUG nova.objects.instance [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'migration_context' on Instance uuid c00d9d50-5c81-4dc9-8316-c654d4802b4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.855 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.856 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Ensure instance console log exists: /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.857 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.857 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.858 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.881 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.882 2 DEBUG nova.network.neutron [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.899 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.900 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.901 2 INFO nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Using config drive#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.925 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.933 2 INFO nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:25:01 np0005465604 nova_compute[260603]: 2025-10-02 08:25:01.957 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:25:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:25:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct  2 04:25:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct  2 04:25:02 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.085 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.086 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.087 2 INFO nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Creating image(s)#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.130 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.159 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.188 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.192 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.275 2 INFO nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Creating config drive at /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.281 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7xgd4hf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.307 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.308 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.309 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.309 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.334 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.338 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 aebef537-a40c-45aa-98b5-ebdd7c27028b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.375 2 DEBUG nova.network.neutron [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.376 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.413 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7xgd4hf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.435 2 DEBUG nova.storage.rbd_utils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.440 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.615 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 aebef537-a40c-45aa-98b5-ebdd7c27028b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.651 2 DEBUG oslo_concurrency.processutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config 9ea31984-a45e-4154-9df9-3c4e8ce69309_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.652 2 INFO nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Deleting local config drive /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309/disk.config because it was imported into RBD.#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.690 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] resizing rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:25:02 np0005465604 systemd-machined[214636]: New machine qemu-35-instance-0000001f.
Oct  2 04:25:02 np0005465604 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.812 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Successfully created port: c606d1e1-b27f-498f-989e-2cce97a7589d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.825 2 DEBUG nova.objects.instance [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'migration_context' on Instance uuid aebef537-a40c-45aa-98b5-ebdd7c27028b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 106 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 3.7 MiB/s wr, 134 op/s
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.852 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.853 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Ensure instance console log exists: /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.853 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.854 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.855 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.857 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.863 2 WARNING nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.869 2 DEBUG nova.virt.libvirt.host [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.870 2 DEBUG nova.virt.libvirt.host [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.876 2 DEBUG nova.virt.libvirt.host [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.876 2 DEBUG nova.virt.libvirt.host [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.877 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.878 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.879 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.879 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.880 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.880 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.881 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.881 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.882 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.882 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.883 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.883 2 DEBUG nova.virt.hardware [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:25:02 np0005465604 nova_compute[260603]: 2025-10-02 08:25:02.888 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:25:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2375461221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:03 np0005465604 nova_compute[260603]: 2025-10-02 08:25:03.307 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:25:03 np0005465604 nova_compute[260603]: 2025-10-02 08:25:03.342 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:25:03 np0005465604 nova_compute[260603]: 2025-10-02 08:25:03.347 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:25:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257391125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:03 np0005465604 nova_compute[260603]: 2025-10-02 08:25:03.851 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:25:03 np0005465604 nova_compute[260603]: 2025-10-02 08:25:03.854 2 DEBUG nova.objects.instance [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'pci_devices' on Instance uuid aebef537-a40c-45aa-98b5-ebdd7c27028b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:25:03 np0005465604 nova_compute[260603]: 2025-10-02 08:25:03.873 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  <uuid>aebef537-a40c-45aa-98b5-ebdd7c27028b</uuid>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  <name>instance-00000021</name>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <nova:name>tempest-ListImageFiltersTestJSON-server-868586593</nova:name>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:25:02</nova:creationTime>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:        <nova:user uuid="cf69d2cc0a684734b9d2d6da2fee6bf7">tempest-ListImageFiltersTestJSON-1675507372-project-member</nova:user>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:        <nova:project uuid="1d3020619cd44cbd964c169ee848da8e">tempest-ListImageFiltersTestJSON-1675507372</nova:project>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <entry name="serial">aebef537-a40c-45aa-98b5-ebdd7c27028b</entry>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <entry name="uuid">aebef537-a40c-45aa-98b5-ebdd7c27028b</entry>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/aebef537-a40c-45aa-98b5-ebdd7c27028b_disk">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/console.log" append="off"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:25:03 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:25:03 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:25:03 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:25:03 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 04:25:03 np0005465604 nova_compute[260603]: 2025-10-02 08:25:03.959 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:25:03 np0005465604 nova_compute[260603]: 2025-10-02 08:25:03.960 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:25:03 np0005465604 nova_compute[260603]: 2025-10-02 08:25:03.961 2 INFO nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Using config drive
Oct  2 04:25:03 np0005465604 nova_compute[260603]: 2025-10-02 08:25:03.992 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.161 2 INFO nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Creating config drive at /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.166 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuan1t9d4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.314 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuan1t9d4" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.350 2 DEBUG nova.storage.rbd_utils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] rbd image aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.355 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.428 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393504.4273956, 9ea31984-a45e-4154-9df9-3c4e8ce69309 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.430 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] VM Resumed (Lifecycle Event)
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.436 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.437 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.442 2 INFO nova.virt.libvirt.driver [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance spawned successfully.
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.443 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.459 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.469 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.477 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.478 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.479 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.480 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.481 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.482 2 DEBUG nova.virt.libvirt.driver [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.490 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.491 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393504.4296277, 9ea31984-a45e-4154-9df9-3c4e8ce69309 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.492 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] VM Started (Lifecycle Event)
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.515 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.519 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.536 2 DEBUG oslo_concurrency.processutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config aebef537-a40c-45aa-98b5-ebdd7c27028b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.537 2 INFO nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Deleting local config drive /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b/disk.config because it was imported into RBD.#033[00m
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.542 2 INFO nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 4.69 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.543 2 DEBUG nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.544 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:25:04 np0005465604 systemd-machined[214636]: New machine qemu-36-instance-00000021.
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.625 2 INFO nova.compute.manager [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 5.75 seconds to build instance.#033[00m
Oct  2 04:25:04 np0005465604 systemd[1]: Started Virtual Machine qemu-36-instance-00000021.
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.645 2 DEBUG oslo_concurrency.lockutils [None req-8a2a3163-1eeb-4b89-94e0-7fe3c1c1315a cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.658 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Successfully updated port: c606d1e1-b27f-498f-989e-2cce97a7589d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.674 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.674 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquired lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.675 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:25:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 165 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 7.2 MiB/s wr, 196 op/s
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.864 2 DEBUG nova.compute.manager [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-changed-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.867 2 DEBUG nova.compute.manager [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Refreshing instance network info cache due to event network-changed-c606d1e1-b27f-498f-989e-2cce97a7589d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:25:04 np0005465604 nova_compute[260603]: 2025-10-02 08:25:04.867 2 DEBUG oslo_concurrency.lockutils [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.092 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.539 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393505.5396528, aebef537-a40c-45aa-98b5-ebdd7c27028b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.540 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.542 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.543 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.546 2 INFO nova.virt.libvirt.driver [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance spawned successfully.#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.546 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.585 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.589 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.589 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.590 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.590 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.591 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.591 2 DEBUG nova.virt.libvirt.driver [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.595 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.619 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.620 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393505.5422864, aebef537-a40c-45aa-98b5-ebdd7c27028b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.620 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] VM Started (Lifecycle Event)#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.646 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.650 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.667 2 INFO nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Took 3.58 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.668 2 DEBUG nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.677 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.751 2 INFO nova.compute.manager [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Took 5.06 seconds to build instance.#033[00m
Oct  2 04:25:05 np0005465604 nova_compute[260603]: 2025-10-02 08:25:05.769 2 DEBUG oslo_concurrency.lockutils [None req-43aedd60-f1d1-4b49-b33c-ab3c89633eac cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.681 2 DEBUG nova.network.neutron [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Updating instance_info_cache with network_info: [{"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.707 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Releasing lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.708 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Instance network_info: |[{"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.710 2 DEBUG oslo_concurrency.lockutils [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.710 2 DEBUG nova.network.neutron [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Refreshing network info cache for port c606d1e1-b27f-498f-989e-2cce97a7589d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.715 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Start _get_guest_xml network_info=[{"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.722 2 WARNING nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.736 2 DEBUG nova.virt.libvirt.host [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.738 2 DEBUG nova.virt.libvirt.host [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.745 2 DEBUG nova.virt.libvirt.host [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.746 2 DEBUG nova.virt.libvirt.host [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.748 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.749 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.750 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.751 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.752 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.752 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.753 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.754 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.755 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.756 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.757 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.758 2 DEBUG nova.virt.hardware [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.764 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:06 np0005465604 nova_compute[260603]: 2025-10-02 08:25:06.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 165 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 5.9 MiB/s wr, 159 op/s
Oct  2 04:25:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:25:07 np0005465604 podman[299215]: 2025-10-02 08:25:07.045231203 +0000 UTC m=+0.099309194 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 04:25:07 np0005465604 podman[299214]: 2025-10-02 08:25:07.099507373 +0000 UTC m=+0.153581254 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:25:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220696812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.201 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.219 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.224 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700383951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.634 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.636 2 DEBUG nova.virt.libvirt.vif [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1903679195',display_name='tempest-ImagesTestJSON-server-1903679195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1903679195',id=32,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-lm5g3lue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:01Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=c00d9d50-5c81-4dc9-8316-c654d4802b4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.637 2 DEBUG nova.network.os_vif_util [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.637 2 DEBUG nova.network.os_vif_util [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.638 2 DEBUG nova.objects.instance [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid c00d9d50-5c81-4dc9-8316-c654d4802b4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.653 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  <uuid>c00d9d50-5c81-4dc9-8316-c654d4802b4f</uuid>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  <name>instance-00000020</name>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <nova:name>tempest-ImagesTestJSON-server-1903679195</nova:name>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:25:06</nova:creationTime>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <nova:user uuid="6747651cfdcc4f868c43b9d78f5846c2">tempest-ImagesTestJSON-1188243509-project-member</nova:user>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <nova:project uuid="56b1e1170f2e4a73aaf396476bc82261">tempest-ImagesTestJSON-1188243509</nova:project>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <nova:port uuid="c606d1e1-b27f-498f-989e-2cce97a7589d">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <entry name="serial">c00d9d50-5c81-4dc9-8316-c654d4802b4f</entry>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <entry name="uuid">c00d9d50-5c81-4dc9-8316-c654d4802b4f</entry>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:48:5d:05"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <target dev="tapc606d1e1-b2"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/console.log" append="off"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:25:07 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:25:07 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:25:07 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:25:07 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.658 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Preparing to wait for external event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.658 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.659 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.659 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.659 2 DEBUG nova.virt.libvirt.vif [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1903679195',display_name='tempest-ImagesTestJSON-server-1903679195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1903679195',id=32,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-lm5g3lue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:01Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=c00d9d50-5c81-4dc9-8316-c654d4802b4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.660 2 DEBUG nova.network.os_vif_util [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.660 2 DEBUG nova.network.os_vif_util [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.661 2 DEBUG os_vif [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.662 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc606d1e1-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc606d1e1-b2, col_values=(('external_ids', {'iface-id': 'c606d1e1-b27f-498f-989e-2cce97a7589d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:5d:05', 'vm-uuid': 'c00d9d50-5c81-4dc9-8316-c654d4802b4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:07 np0005465604 NetworkManager[45129]: <info>  [1759393507.6681] manager: (tapc606d1e1-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.675 2 INFO os_vif [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2')#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.735 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.736 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.736 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No VIF found with MAC fa:16:3e:48:5d:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.737 2 INFO nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Using config drive#033[00m
Oct  2 04:25:07 np0005465604 nova_compute[260603]: 2025-10-02 08:25:07.758 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 181 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 274 op/s
Oct  2 04:25:08 np0005465604 nova_compute[260603]: 2025-10-02 08:25:08.920 2 INFO nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Creating config drive at /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config#033[00m
Oct  2 04:25:08 np0005465604 nova_compute[260603]: 2025-10-02 08:25:08.928 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzs1obi9x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.078 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzs1obi9x" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.118 2 DEBUG nova.storage.rbd_utils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.130 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.341 2 DEBUG oslo_concurrency.processutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.342 2 INFO nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Deleting local config drive /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f/disk.config because it was imported into RBD.#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.365 2 DEBUG nova.compute.manager [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.410 2 INFO nova.compute.manager [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] instance snapshotting#033[00m
Oct  2 04:25:09 np0005465604 kernel: tapc606d1e1-b2: entered promiscuous mode
Oct  2 04:25:09 np0005465604 NetworkManager[45129]: <info>  [1759393509.4232] manager: (tapc606d1e1-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:09Z|00208|binding|INFO|Claiming lport c606d1e1-b27f-498f-989e-2cce97a7589d for this chassis.
Oct  2 04:25:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:09Z|00209|binding|INFO|c606d1e1-b27f-498f-989e-2cce97a7589d: Claiming fa:16:3e:48:5d:05 10.100.0.11
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.439 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:5d:05 10.100.0.11'], port_security=['fa:16:3e:48:5d:05 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c00d9d50-5c81-4dc9-8316-c654d4802b4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '2', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c606d1e1-b27f-498f-989e-2cce97a7589d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.442 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c606d1e1-b27f-498f-989e-2cce97a7589d in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 bound to our chassis#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.444 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360#033[00m
Oct  2 04:25:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:09Z|00210|binding|INFO|Setting lport c606d1e1-b27f-498f-989e-2cce97a7589d ovn-installed in OVS
Oct  2 04:25:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:09Z|00211|binding|INFO|Setting lport c606d1e1-b27f-498f-989e-2cce97a7589d up in Southbound
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.468 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[65e27730-58a8-439c-b66a-a833c1ddd2f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.470 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap897d7abf-91 in ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.472 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap897d7abf-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.473 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a35a9e7-dd7b-446f-a702-c0ca762c09f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 systemd-udevd[299372]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:25:09 np0005465604 systemd-machined[214636]: New machine qemu-37-instance-00000020.
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.481 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[00afc06b-92b8-4144-94d3-8cd2ad8dead5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.498 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b617d16f-e97d-42b7-9fee-258eab1e0a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 systemd[1]: Started Virtual Machine qemu-37-instance-00000020.
Oct  2 04:25:09 np0005465604 NetworkManager[45129]: <info>  [1759393509.5064] device (tapc606d1e1-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:25:09 np0005465604 NetworkManager[45129]: <info>  [1759393509.5095] device (tapc606d1e1-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.526 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4e597c4a-9b9c-4058-9d8a-a2eabf7ac24c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.573 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[24c2bd31-5167-4d47-896f-a55fbd3a40cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 systemd-udevd[299375]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:25:09 np0005465604 NetworkManager[45129]: <info>  [1759393509.5837] manager: (tap897d7abf-90): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.579 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91456dd7-c04e-4506-b7d7-1dcfe1ddca93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.637 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0ef112-3567-4f43-89e6-43d208eeef9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.641 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6148a35c-0276-44f4-a822-56f06dbbb900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 NetworkManager[45129]: <info>  [1759393509.6675] device (tap897d7abf-90): carrier: link connected
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.679 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f56a3af2-d80f-47fc-b748-762b1fda6fbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.697 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[311bbdf7-18df-47de-82a7-a9f9992f5ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299404, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.724 2 INFO nova.virt.libvirt.driver [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Beginning live snapshot process#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.728 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d392b163-7551-4c63-8fa5-44bece86a2cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:18ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438826, 'tstamp': 438826}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299405, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.746 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bfab71aa-0269-4cc7-8dc9-826b0db7a5e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299406, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.791 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[066900ae-9a41-4507-8b52-bbb71c8d1bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.853 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0da67ee8-4209-417a-96a2-a83958e74993]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.854 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.854 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.855 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:09 np0005465604 NetworkManager[45129]: <info>  [1759393509.8576] manager: (tap897d7abf-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct  2 04:25:09 np0005465604 kernel: tap897d7abf-90: entered promiscuous mode
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.860 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:09Z|00212|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.879 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.880 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c84f8d2-ffbf-4207-888b-41029d97092b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.881 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-897d7abf-9e23-43cd-8f60-7156792a4360
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/897d7abf-9e23-43cd-8f60-7156792a4360.pid.haproxy
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 897d7abf-9e23-43cd-8f60-7156792a4360
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:25:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:09.881 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'env', 'PROCESS_TAG=haproxy-897d7abf-9e23-43cd-8f60-7156792a4360', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/897d7abf-9e23-43cd-8f60-7156792a4360.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.907 2 DEBUG nova.virt.libvirt.imagebackend [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:25:09 np0005465604 nova_compute[260603]: 2025-10-02 08:25:09.998 2 DEBUG nova.network.neutron [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Updated VIF entry in instance network info cache for port c606d1e1-b27f-498f-989e-2cce97a7589d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.005 2 DEBUG nova.network.neutron [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Updating instance_info_cache with network_info: [{"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.021 2 DEBUG oslo_concurrency.lockutils [req-84ff6938-2ece-4e35-b37f-463c58b34186 req-ecbb36d1-6403-4bcb-92da-88ef7c91cee6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c00d9d50-5c81-4dc9-8316-c654d4802b4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.126 2 DEBUG nova.storage.rbd_utils [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(0474e051fbd9447f81cf1fe9fff50e99) on rbd image(9ea31984-a45e-4154-9df9-3c4e8ce69309_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.169 2 DEBUG nova.compute.manager [req-e3a4e760-fed3-402f-8d89-3839b496a0f9 req-b7a107f7-9604-400a-983e-f4ce7b8accb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.170 2 DEBUG oslo_concurrency.lockutils [req-e3a4e760-fed3-402f-8d89-3839b496a0f9 req-b7a107f7-9604-400a-983e-f4ce7b8accb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.171 2 DEBUG oslo_concurrency.lockutils [req-e3a4e760-fed3-402f-8d89-3839b496a0f9 req-b7a107f7-9604-400a-983e-f4ce7b8accb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.171 2 DEBUG oslo_concurrency.lockutils [req-e3a4e760-fed3-402f-8d89-3839b496a0f9 req-b7a107f7-9604-400a-983e-f4ce7b8accb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.171 2 DEBUG nova.compute.manager [req-e3a4e760-fed3-402f-8d89-3839b496a0f9 req-b7a107f7-9604-400a-983e-f4ce7b8accb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Processing event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:25:10 np0005465604 podman[299531]: 2025-10-02 08:25:10.266860976 +0000 UTC m=+0.050226000 container create c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:25:10 np0005465604 systemd[1]: Started libpod-conmon-c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598.scope.
Oct  2 04:25:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:25:10 np0005465604 podman[299531]: 2025-10-02 08:25:10.244499126 +0000 UTC m=+0.027864170 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:25:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dec502266d477323c4c3cfd6e09d38867b92c6da0907359bbdddd0555e1b6f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:25:10 np0005465604 podman[299531]: 2025-10-02 08:25:10.360364922 +0000 UTC m=+0.143729976 container init c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:25:10 np0005465604 podman[299531]: 2025-10-02 08:25:10.367847413 +0000 UTC m=+0.151212447 container start c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.396 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393510.3952932, c00d9d50-5c81-4dc9-8316-c654d4802b4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.396 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] VM Started (Lifecycle Event)#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.399 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:25:10 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [NOTICE]   (299548) : New worker (299550) forked
Oct  2 04:25:10 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [NOTICE]   (299548) : Loading success.
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.403 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.407 2 INFO nova.virt.libvirt.driver [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Instance spawned successfully.#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.408 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.428 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.432 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.433 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.433 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.433 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.434 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.434 2 DEBUG nova.virt.libvirt.driver [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.438 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.487 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.488 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393510.395485, c00d9d50-5c81-4dc9-8316-c654d4802b4f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.488 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.504 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.506 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393510.401984, c00d9d50-5c81-4dc9-8316-c654d4802b4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.507 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.509 2 INFO nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Took 9.41 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.510 2 DEBUG nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.532 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.538 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.560 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.569 2 INFO nova.compute.manager [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Took 10.45 seconds to build instance.#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.592 2 DEBUG oslo_concurrency.lockutils [None req-98da275f-38ac-436d-a68c-d4047a4365ff 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 181 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 274 op/s
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.929 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.930 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:10 np0005465604 nova_compute[260603]: 2025-10-02 08:25:10.951 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.027 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.028 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.037 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.038 2 INFO nova.compute.claims [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:25:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct  2 04:25:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct  2 04:25:11 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.165 2 DEBUG nova.storage.rbd_utils [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] cloning vms/9ea31984-a45e-4154-9df9-3c4e8ce69309_disk@0474e051fbd9447f81cf1fe9fff50e99 to images/e94191cf-f3bc-48ee-b095-e0640354f9ca clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.242 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.314 2 DEBUG nova.storage.rbd_utils [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] flattening images/e94191cf-f3bc-48ee-b095-e0640354f9ca flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.526 2 DEBUG nova.storage.rbd_utils [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] removing snapshot(0474e051fbd9447f81cf1fe9fff50e99) on rbd image(9ea31984-a45e-4154-9df9-3c4e8ce69309_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:25:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3162021905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.771 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.777 2 DEBUG nova.compute.provider_tree [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.793 2 DEBUG nova.scheduler.client.report [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.812 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.813 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.880 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.881 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.898 2 INFO nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:25:11 np0005465604 nova_compute[260603]: 2025-10-02 08:25:11.918 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:25:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.068 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.073 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.073 2 INFO nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Creating image(s)#033[00m
Oct  2 04:25:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct  2 04:25:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.106 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:12 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.146 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.172 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.176 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.237 2 DEBUG nova.storage.rbd_utils [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(snap) on rbd image(e94191cf-f3bc-48ee-b095-e0640354f9ca) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.273 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.275 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.276 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.276 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.300 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.303 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c28cb03c-6207-4ec5-9156-03252350561c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.334 2 DEBUG nova.policy [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.397 2 DEBUG nova.compute.manager [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.398 2 DEBUG oslo_concurrency.lockutils [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.399 2 DEBUG oslo_concurrency.lockutils [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.399 2 DEBUG oslo_concurrency.lockutils [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.401 2 DEBUG nova.compute.manager [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] No waiting events found dispatching network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.401 2 WARNING nova.compute.manager [req-6c2a80af-e549-427b-94e3-20fdb214bf51 req-e9f0a99c-0a5f-4afd-b2b3-b90f73ae2ef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received unexpected event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d for instance with vm_state active and task_state None.#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.537 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c28cb03c-6207-4ec5-9156-03252350561c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.614 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] resizing rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.710 2 DEBUG nova.objects.instance [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'migration_context' on Instance uuid c28cb03c-6207-4ec5-9156-03252350561c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.733 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.734 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Ensure instance console log exists: /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.735 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.735 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.736 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 219 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 9.8 MiB/s rd, 3.4 MiB/s wr, 364 op/s
Oct  2 04:25:12 np0005465604 nova_compute[260603]: 2025-10-02 08:25:12.978 2 DEBUG nova.compute.manager [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:13 np0005465604 nova_compute[260603]: 2025-10-02 08:25:13.032 2 INFO nova.compute.manager [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] instance snapshotting#033[00m
Oct  2 04:25:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct  2 04:25:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct  2 04:25:13 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct  2 04:25:13 np0005465604 nova_compute[260603]: 2025-10-02 08:25:13.341 2 INFO nova.virt.libvirt.driver [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Beginning live snapshot process#033[00m
Oct  2 04:25:13 np0005465604 nova_compute[260603]: 2025-10-02 08:25:13.493 2 DEBUG nova.virt.libvirt.imagebackend [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:25:13 np0005465604 nova_compute[260603]: 2025-10-02 08:25:13.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:25:13 np0005465604 nova_compute[260603]: 2025-10-02 08:25:13.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:25:13 np0005465604 nova_compute[260603]: 2025-10-02 08:25:13.542 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:25:13 np0005465604 nova_compute[260603]: 2025-10-02 08:25:13.601 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Successfully created port: 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:25:13 np0005465604 nova_compute[260603]: 2025-10-02 08:25:13.775 2 DEBUG nova.storage.rbd_utils [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(a00d013737de492cb26f2e5326f5769e) on rbd image(c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:25:13 np0005465604 podman[299888]: 2025-10-02 08:25:13.9915475 +0000 UTC m=+0.052920498 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:25:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct  2 04:25:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct  2 04:25:14 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.185 2 DEBUG nova.storage.rbd_utils [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] cloning vms/c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk@a00d013737de492cb26f2e5326f5769e to images/82efe28e-fddc-4d67-b244-5e7b89266ac0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.279 2 DEBUG nova.storage.rbd_utils [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] flattening images/82efe28e-fddc-4d67-b244-5e7b89266ac0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.478 2 DEBUG nova.storage.rbd_utils [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] removing snapshot(a00d013737de492cb26f2e5326f5769e) on rbd image(c00d9d50-5c81-4dc9-8316-c654d4802b4f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.517 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.641 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Successfully updated port: 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.678 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.678 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.678 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.752 2 DEBUG nova.compute.manager [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-changed-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.752 2 DEBUG nova.compute.manager [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Refreshing instance network info cache due to event network-changed-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.753 2 DEBUG oslo_concurrency.lockutils [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.763 2 INFO nova.virt.libvirt.driver [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Snapshot image upload complete#033[00m
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.763 2 INFO nova.compute.manager [None req-11e3c90e-2afa-496a-8282-d9998517ea80 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 5.35 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 04:25:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 240 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 6.1 MiB/s wr, 405 op/s
Oct  2 04:25:14 np0005465604 nova_compute[260603]: 2025-10-02 08:25:14.932 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:25:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct  2 04:25:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct  2 04:25:15 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct  2 04:25:15 np0005465604 nova_compute[260603]: 2025-10-02 08:25:15.173 2 DEBUG nova.storage.rbd_utils [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] creating snapshot(snap) on rbd image(82efe28e-fddc-4d67-b244-5e7b89266ac0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:25:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct  2 04:25:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct  2 04:25:16 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.255 2 DEBUG nova.network.neutron [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updating instance_info_cache with network_info: [{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.292 2 DEBUG nova.compute.manager [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.294 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.294 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Instance network_info: |[{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.295 2 DEBUG oslo_concurrency.lockutils [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.295 2 DEBUG nova.network.neutron [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Refreshing network info cache for port 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.301 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Start _get_guest_xml network_info=[{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.310 2 WARNING nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.315 2 DEBUG nova.virt.libvirt.host [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.315 2 DEBUG nova.virt.libvirt.host [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.318 2 DEBUG nova.virt.libvirt.host [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.318 2 DEBUG nova.virt.libvirt.host [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.319 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.319 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.319 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.320 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.320 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.320 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.320 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.321 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.321 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.321 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.321 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.322 2 DEBUG nova.virt.hardware [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.324 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.361 2 INFO nova.compute.manager [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] instance snapshotting#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.542 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.543 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.709 2 INFO nova.virt.libvirt.driver [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Beginning live snapshot process#033[00m
Oct  2 04:25:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940798642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.750 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.769 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.774 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 4 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 293 active+clean; 240 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 922 KiB/s wr, 152 op/s
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.902 2 DEBUG nova.virt.libvirt.imagebackend [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:25:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487517691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:16 np0005465604 nova_compute[260603]: 2025-10-02 08:25:16.991 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.070 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.071 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.074 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.074 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.077 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.077 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.119 2 DEBUG nova.storage.rbd_utils [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(fe0fd5c254114774b112faf3f8e46435) on rbd image(aebef537-a40c-45aa-98b5-ebdd7c27028b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:25:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046858845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct  2 04:25:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct  2 04:25:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.226 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.227 2 DEBUG nova.virt.libvirt.vif [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-458247204',display_name='tempest-AttachInterfacesTestJSON-server-458247204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-458247204',id=34,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLaaEbFdfJl8DPmpnKJlEIvFg5540SqFF+MSTf3Rd/2eelZoXpVzf3sfGdNxC0G9xmrCg9ZN/m3Sts6FpSIoxcwOomYVGKVXIQ6YS2vLlaWSvr3+0XJ9/CqIl8gT/hDBw==',key_name='tempest-keypair-1310606954',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-jh0cjhom',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=c28cb03c-6207-4ec5-9156-03252350561c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.228 2 DEBUG nova.network.os_vif_util [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.229 2 DEBUG nova.network.os_vif_util [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:cf:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b4ce2c0-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.230 2 DEBUG nova.objects.instance [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_devices' on Instance uuid c28cb03c-6207-4ec5-9156-03252350561c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.252 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  <uuid>c28cb03c-6207-4ec5-9156-03252350561c</uuid>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  <name>instance-00000022</name>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <nova:name>tempest-AttachInterfacesTestJSON-server-458247204</nova:name>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:25:16</nova:creationTime>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <nova:port uuid="8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <entry name="serial">c28cb03c-6207-4ec5-9156-03252350561c</entry>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <entry name="uuid">c28cb03c-6207-4ec5-9156-03252350561c</entry>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/c28cb03c-6207-4ec5-9156-03252350561c_disk">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/c28cb03c-6207-4ec5-9156-03252350561c_disk.config">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:02:cf:d8"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <target dev="tap8b4ce2c0-9e"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/console.log" append="off"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:25:17 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:25:17 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:25:17 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:25:17 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.254 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Preparing to wait for external event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.254 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.255 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.255 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.256 2 DEBUG nova.virt.libvirt.vif [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-458247204',display_name='tempest-AttachInterfacesTestJSON-server-458247204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-458247204',id=34,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLaaEbFdfJl8DPmpnKJlEIvFg5540SqFF+MSTf3Rd/2eelZoXpVzf3sfGdNxC0G9xmrCg9ZN/m3Sts6FpSIoxcwOomYVGKVXIQ6YS2vLlaWSvr3+0XJ9/CqIl8gT/hDBw==',key_name='tempest-keypair-1310606954',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-jh0cjhom',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=c28cb03c-6207-4ec5-9156-03252350561c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.256 2 DEBUG nova.network.os_vif_util [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.256 2 DEBUG nova.network.os_vif_util [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:cf:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b4ce2c0-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.257 2 DEBUG os_vif [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:cf:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b4ce2c0-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.258 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.258 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b4ce2c0-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b4ce2c0-9e, col_values=(('external_ids', {'iface-id': '8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:cf:d8', 'vm-uuid': 'c28cb03c-6207-4ec5-9156-03252350561c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:17 np0005465604 NetworkManager[45129]: <info>  [1759393517.2892] manager: (tap8b4ce2c0-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.297 2 INFO os_vif [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:cf:d8,bridge_name='br-int',has_traffic_filtering=True,id=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b4ce2c0-9e')#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.322 2 DEBUG nova.storage.rbd_utils [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] cloning vms/aebef537-a40c-45aa-98b5-ebdd7c27028b_disk@fe0fd5c254114774b112faf3f8e46435 to images/5fbebdfe-1f15-443e-9ff4-2429c6d3b3f3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.376 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.377 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3892MB free_disk=59.92285919189453GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.377 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.377 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.417 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.418 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.418 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:02:cf:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.418 2 INFO nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Using config drive#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.438 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.449 2 DEBUG nova.storage.rbd_utils [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] flattening images/5fbebdfe-1f15-443e-9ff4-2429c6d3b3f3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.546 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 9ea31984-a45e-4154-9df9-3c4e8ce69309 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.547 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance c00d9d50-5c81-4dc9-8316-c654d4802b4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.547 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance aebef537-a40c-45aa-98b5-ebdd7c27028b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.547 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance c28cb03c-6207-4ec5-9156-03252350561c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.654 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.710 2 INFO nova.virt.libvirt.driver [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Snapshot image upload complete#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.711 2 INFO nova.compute.manager [None req-b257ab18-0e36-4879-ac37-270d0bc4a316 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Took 4.68 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.778 2 DEBUG nova.storage.rbd_utils [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] removing snapshot(fe0fd5c254114774b112faf3f8e46435) on rbd image(aebef537-a40c-45aa-98b5-ebdd7c27028b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:25:17 np0005465604 podman[300246]: 2025-10-02 08:25:17.985501796 +0000 UTC m=+0.051016436 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.996 2 DEBUG nova.network.neutron [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updated VIF entry in instance network info cache for port 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:25:17 np0005465604 nova_compute[260603]: 2025-10-02 08:25:17.997 2 DEBUG nova.network.neutron [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updating instance_info_cache with network_info: [{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.015 2 DEBUG oslo_concurrency.lockutils [req-28fb10ee-a84a-46c3-a6cf-8a8231c8ea17 req-b0dbad91-0bed-4850-a722-fdbc01c9950e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:25:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4220638138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.120 2 INFO nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Creating config drive at /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config#033[00m
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.125 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6b_fgwh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.167 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.177 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.195 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:25:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct  2 04:25:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.216 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.217 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:18 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.242 2 DEBUG nova.storage.rbd_utils [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(snap) on rbd image(5fbebdfe-1f15-443e-9ff4-2429c6d3b3f3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.285 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6b_fgwh" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.317 2 DEBUG nova.storage.rbd_utils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image c28cb03c-6207-4ec5-9156-03252350561c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.321 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config c28cb03c-6207-4ec5-9156-03252350561c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.546 2 DEBUG oslo_concurrency.processutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config c28cb03c-6207-4ec5-9156-03252350561c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.548 2 INFO nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Deleting local config drive /var/lib/nova/instances/c28cb03c-6207-4ec5-9156-03252350561c/disk.config because it was imported into RBD.#033[00m
Oct  2 04:25:18 np0005465604 kernel: tap8b4ce2c0-9e: entered promiscuous mode
Oct  2 04:25:18 np0005465604 NetworkManager[45129]: <info>  [1759393518.6331] manager: (tap8b4ce2c0-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Oct  2 04:25:18 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:18Z|00213|binding|INFO|Claiming lport 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 for this chassis.
Oct  2 04:25:18 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:18Z|00214|binding|INFO|8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627: Claiming fa:16:3e:02:cf:d8 10.100.0.3
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.645 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:cf:d8 10.100.0.3'], port_security=['fa:16:3e:02:cf:d8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c28cb03c-6207-4ec5-9156-03252350561c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c4b338b0-15ff-4ccb-801c-e865bb41224d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.646 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.647 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.659 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b5072cc7-c17d-41ee-b36f-b9927fe943d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.660 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfa1bff6d-11 in ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.664 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfa1bff6d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.664 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3d09bfcd-1a2e-4493-b680-094743b5666f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.666 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[11921612-bc6d-4181-954d-ac9027bcf0f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.675 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[df045daa-618d-47ec-9a7c-2b97efde9499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 systemd-udevd[300340]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:25:18 np0005465604 systemd-machined[214636]: New machine qemu-38-instance-00000022.
Oct  2 04:25:18 np0005465604 NetworkManager[45129]: <info>  [1759393518.7001] device (tap8b4ce2c0-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:25:18 np0005465604 NetworkManager[45129]: <info>  [1759393518.7012] device (tap8b4ce2c0-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.700 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ad50cbf1-3ba2-44a9-87b4-aad10e364665]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:18 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:18Z|00215|binding|INFO|Setting lport 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 ovn-installed in OVS
Oct  2 04:25:18 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:18Z|00216|binding|INFO|Setting lport 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 up in Southbound
Oct  2 04:25:18 np0005465604 nova_compute[260603]: 2025-10-02 08:25:18.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.758 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7fee1221-9501-41a5-98e7-180c6a2053a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.775 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c22b1d97-e142-45c9-9a7c-fc4e4b941579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 NetworkManager[45129]: <info>  [1759393518.7759] manager: (tapfa1bff6d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.820 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5101f2c6-10a8-4f97-9262-6180e4561e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.825 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[baca19eb-90e9-42de-921d-9d41d6f10b63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 399 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 13 MiB/s rd, 25 MiB/s wr, 692 op/s
Oct  2 04:25:18 np0005465604 NetworkManager[45129]: <info>  [1759393518.8584] device (tapfa1bff6d-10): carrier: link connected
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.862 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[16290e92-5eb7-4ac5-a7c1-efa40142226b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.888 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df8c5041-5ccc-4472-b268-90eddc46bc7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439746, 'reachable_time': 37526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300371, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.904 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2919a6d5-ca04-4bcb-9133-74f9aaac4053]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:c92f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439746, 'tstamp': 439746}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300372, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.925 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48f9f101-e466-4a2d-bddc-ae7e3cedf6f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439746, 'reachable_time': 37526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300373, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:18.958 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[13afd0c4-71a1-44d3-872a-612998b46b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.024 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7198b983-790f-4f83-a6ad-0b3b9adeb60c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.025 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.025 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:19 np0005465604 kernel: tapfa1bff6d-10: entered promiscuous mode
Oct  2 04:25:19 np0005465604 NetworkManager[45129]: <info>  [1759393519.0292] manager: (tapfa1bff6d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.034 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:19Z|00217|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.038 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.039 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[080c5faa-598d-40d6-a5fe-c8178f999a18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.039 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:25:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:19.041 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'env', 'PROCESS_TAG=haproxy-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fa1bff6d-19fb-4792-a261-4da1165d95a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.213 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.213 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.214 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:25:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct  2 04:25:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct  2 04:25:19 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct  2 04:25:19 np0005465604 podman[300446]: 2025-10-02 08:25:19.557677536 +0000 UTC m=+0.057256647 container create 4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:25:19 np0005465604 systemd[1]: Started libpod-conmon-4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d.scope.
Oct  2 04:25:19 np0005465604 podman[300446]: 2025-10-02 08:25:19.528579648 +0000 UTC m=+0.028158779 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:25:19 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:25:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b0303c5a3bf6706e7d4ed7fdb28ee9eeead9f26ad1ee83702d2c74668a46e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:25:19 np0005465604 podman[300446]: 2025-10-02 08:25:19.661591696 +0000 UTC m=+0.161170827 container init 4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:25:19 np0005465604 podman[300446]: 2025-10-02 08:25:19.670932468 +0000 UTC m=+0.170511589 container start 4fcc8eadfb793a3d99794d7e7d3c691e6e9e561db798fd47e2083feea60c538d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:25:19 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[300461]: [NOTICE]   (300465) : New worker (300467) forked
Oct  2 04:25:19 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[300461]: [NOTICE]   (300465) : Loading success.
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.837 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393519.836332, c28cb03c-6207-4ec5-9156-03252350561c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.837 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] VM Started (Lifecycle Event)#033[00m
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.865 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.869 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393519.8365743, c28cb03c-6207-4ec5-9156-03252350561c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.869 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.889 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.895 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:25:19 np0005465604 nova_compute[260603]: 2025-10-02 08:25:19.916 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:25:20 np0005465604 nova_compute[260603]: 2025-10-02 08:25:20.558 2 INFO nova.virt.libvirt.driver [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Snapshot image upload complete#033[00m
Oct  2 04:25:20 np0005465604 nova_compute[260603]: 2025-10-02 08:25:20.559 2 INFO nova.compute.manager [None req-b0e9cf69-7514-47ca-9953-4854d1f1159d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Took 4.20 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 04:25:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 399 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 21 MiB/s wr, 589 op/s
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.303 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.304 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.334 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.393 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.394 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.401 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.401 2 INFO nova.compute.claims [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.595 2 DEBUG nova.compute.manager [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.652 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.703 2 INFO nova.compute.manager [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] instance snapshotting#033[00m
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:21 np0005465604 nova_compute[260603]: 2025-10-02 08:25:21.960 2 INFO nova.virt.libvirt.driver [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Beginning live snapshot process#033[00m
Oct  2 04:25:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:25:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct  2 04:25:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct  2 04:25:22 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct  2 04:25:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:25:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3792760242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:25:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:25:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3792760242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:25:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/361007538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.166 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.172 2 DEBUG nova.virt.libvirt.imagebackend [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.176 2 DEBUG nova.compute.provider_tree [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.189 2 DEBUG nova.scheduler.client.report [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.208 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.209 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.258 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.259 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.274 2 INFO nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.289 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.371 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.373 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.373 2 INFO nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Creating image(s)#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.396 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.431 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.460 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.472 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "c5fbcb8fd7373957e056e74fa5c5e80912273aa7" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.474 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c5fbcb8fd7373957e056e74fa5c5e80912273aa7" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.483 2 DEBUG nova.storage.rbd_utils [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(add832e29cb746969f4674b2a15264f6) on rbd image(9ea31984-a45e-4154-9df9-3c4e8ce69309_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.544 2 DEBUG nova.compute.manager [req-35176719-70ec-45ba-b95c-84b5d8d13a0e req-fdb06a01-b76d-4ba8-8675-8d657b8786db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.544 2 DEBUG oslo_concurrency.lockutils [req-35176719-70ec-45ba-b95c-84b5d8d13a0e req-fdb06a01-b76d-4ba8-8675-8d657b8786db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.545 2 DEBUG oslo_concurrency.lockutils [req-35176719-70ec-45ba-b95c-84b5d8d13a0e req-fdb06a01-b76d-4ba8-8675-8d657b8786db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.545 2 DEBUG oslo_concurrency.lockutils [req-35176719-70ec-45ba-b95c-84b5d8d13a0e req-fdb06a01-b76d-4ba8-8675-8d657b8786db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.546 2 DEBUG nova.compute.manager [req-35176719-70ec-45ba-b95c-84b5d8d13a0e req-fdb06a01-b76d-4ba8-8675-8d657b8786db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Processing event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.548 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.552 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393522.5522614, c28cb03c-6207-4ec5-9156-03252350561c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.553 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.558 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.564 2 INFO nova.virt.libvirt.driver [-] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Instance spawned successfully.#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.566 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.573 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.579 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.584 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.585 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.587 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.587 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.589 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.589 2 DEBUG nova.virt.libvirt.driver [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.595 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c28cb03c-6207-4ec5-9156-03252350561c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.638 2 INFO nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Took 10.57 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.639 2 DEBUG nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.710 2 INFO nova.compute.manager [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Took 11.71 seconds to build instance.#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.727 2 DEBUG oslo_concurrency.lockutils [None req-dc553695-9284-46f0-a473-51c9946be711 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.758 2 DEBUG nova.policy [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6747651cfdcc4f868c43b9d78f5846c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b1e1170f2e4a73aaf396476bc82261', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.808 2 DEBUG nova.virt.libvirt.imagebackend [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/82efe28e-fddc-4d67-b244-5e7b89266ac0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/82efe28e-fddc-4d67-b244-5e7b89266ac0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 04:25:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:22Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:5d:05 10.100.0.11
Oct  2 04:25:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:22Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:5d:05 10.100.0.11
Oct  2 04:25:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 468 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 22 MiB/s wr, 684 op/s
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.883 2 DEBUG nova.virt.libvirt.imagebackend [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Selected location: {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/82efe28e-fddc-4d67-b244-5e7b89266ac0/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 04:25:22 np0005465604 nova_compute[260603]: 2025-10-02 08:25:22.884 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] cloning images/82efe28e-fddc-4d67-b244-5e7b89266ac0@snap to None/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:25:23 np0005465604 nova_compute[260603]: 2025-10-02 08:25:23.008 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c5fbcb8fd7373957e056e74fa5c5e80912273aa7" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:23 np0005465604 nova_compute[260603]: 2025-10-02 08:25:23.149 2 DEBUG nova.objects.instance [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:23 np0005465604 nova_compute[260603]: 2025-10-02 08:25:23.162 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:25:23 np0005465604 nova_compute[260603]: 2025-10-02 08:25:23.162 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Ensure instance console log exists: /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:25:23 np0005465604 nova_compute[260603]: 2025-10-02 08:25:23.163 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:23 np0005465604 nova_compute[260603]: 2025-10-02 08:25:23.163 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:23 np0005465604 nova_compute[260603]: 2025-10-02 08:25:23.164 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct  2 04:25:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct  2 04:25:23 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct  2 04:25:23 np0005465604 nova_compute[260603]: 2025-10-02 08:25:23.304 2 DEBUG nova.storage.rbd_utils [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] cloning vms/9ea31984-a45e-4154-9df9-3c4e8ce69309_disk@add832e29cb746969f4674b2a15264f6 to images/6250c07d-a0dc-41fe-922a-619617d749c1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:25:23 np0005465604 nova_compute[260603]: 2025-10-02 08:25:23.420 2 DEBUG nova.storage.rbd_utils [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] flattening images/6250c07d-a0dc-41fe-922a-619617d749c1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:25:23 np0005465604 nova_compute[260603]: 2025-10-02 08:25:23.696 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Successfully created port: 74b4490f-0cc9-4b51-b6a0-246fd701e628 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:25:23 np0005465604 nova_compute[260603]: 2025-10-02 08:25:23.782 2 DEBUG nova.storage.rbd_utils [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] removing snapshot(add832e29cb746969f4674b2a15264f6) on rbd image(9ea31984-a45e-4154-9df9-3c4e8ce69309_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:25:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Oct  2 04:25:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Oct  2 04:25:24 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.289 2 DEBUG nova.storage.rbd_utils [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] creating snapshot(snap) on rbd image(6250c07d-a0dc-41fe-922a-619617d749c1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:25:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:24Z|00218|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct  2 04:25:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:24Z|00219|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct  2 04:25:24 np0005465604 NetworkManager[45129]: <info>  [1759393524.4444] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:24 np0005465604 NetworkManager[45129]: <info>  [1759393524.4459] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:24Z|00220|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct  2 04:25:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:24Z|00221|binding|INFO|Releasing lport dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76 from this chassis (sb_readonly=0)
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.625 2 DEBUG nova.compute.manager [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.625 2 DEBUG oslo_concurrency.lockutils [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c28cb03c-6207-4ec5-9156-03252350561c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.625 2 DEBUG oslo_concurrency.lockutils [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.626 2 DEBUG oslo_concurrency.lockutils [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c28cb03c-6207-4ec5-9156-03252350561c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.626 2 DEBUG nova.compute.manager [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] No waiting events found dispatching network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.626 2 WARNING nova.compute.manager [req-9aa19a78-a814-454e-8e76-2f050b32552f req-3cf646db-e92e-4602-a1e7-8509695d853a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received unexpected event network-vif-plugged-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.767 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Successfully updated port: 74b4490f-0cc9-4b51-b6a0-246fd701e628 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.777 2 DEBUG nova.compute.manager [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Received event network-changed-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.777 2 DEBUG nova.compute.manager [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Refreshing instance network info cache due to event network-changed-8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.778 2 DEBUG oslo_concurrency.lockutils [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.778 2 DEBUG oslo_concurrency.lockutils [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.778 2 DEBUG nova.network.neutron [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Refreshing network info cache for port 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.796 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.797 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquired lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.797 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:25:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 487 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 11 MiB/s wr, 351 op/s
Oct  2 04:25:24 np0005465604 nova_compute[260603]: 2025-10-02 08:25:24.987 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:25:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Oct  2 04:25:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Oct  2 04:25:25 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Oct  2 04:25:25 np0005465604 nova_compute[260603]: 2025-10-02 08:25:25.972 2 DEBUG nova.network.neutron [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Updating instance_info_cache with network_info: [{"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.002 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Releasing lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.002 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Instance network_info: |[{"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.007 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Start _get_guest_xml network_info=[{"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:25:12Z,direct_url=<?>,disk_format='raw',id=82efe28e-fddc-4d67-b244-5e7b89266ac0,min_disk=1,min_ram=0,name='tempest-test-snap-481920133',owner='56b1e1170f2e4a73aaf396476bc82261',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:25:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '82efe28e-fddc-4d67-b244-5e7b89266ac0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.012 2 WARNING nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.024 2 DEBUG nova.virt.libvirt.host [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.025 2 DEBUG nova.virt.libvirt.host [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.030 2 DEBUG nova.virt.libvirt.host [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.031 2 DEBUG nova.virt.libvirt.host [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.032 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.032 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:25:12Z,direct_url=<?>,disk_format='raw',id=82efe28e-fddc-4d67-b244-5e7b89266ac0,min_disk=1,min_ram=0,name='tempest-test-snap-481920133',owner='56b1e1170f2e4a73aaf396476bc82261',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:25:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.033 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.034 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.034 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.035 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.035 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.036 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.036 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.037 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.038 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.038 2 DEBUG nova.virt.hardware [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.043 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.176 2 DEBUG nova.network.neutron [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updated VIF entry in instance network info cache for port 8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.178 2 DEBUG nova.network.neutron [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] Updating instance_info_cache with network_info: [{"id": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "address": "fa:16:3e:02:cf:d8", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b4ce2c0-9e", "ovs_interfaceid": "8b4ce2c0-9e63-42f7-8f6d-d7c1e84a2627", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.206 2 DEBUG oslo_concurrency.lockutils [req-203fbe8b-b6cc-423a-8264-2facba3b8291 req-f28ab4e4-f987-487c-a826-1c12fda78c38 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c28cb03c-6207-4ec5-9156-03252350561c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:25:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351288113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.543 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.573 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.579 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 487 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 13 MiB/s wr, 408 op/s
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.886 2 DEBUG nova.compute.manager [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-changed-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.887 2 DEBUG nova.compute.manager [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Refreshing instance network info cache due to event network-changed-74b4490f-0cc9-4b51-b6a0-246fd701e628. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.887 2 DEBUG oslo_concurrency.lockutils [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.888 2 DEBUG oslo_concurrency.lockutils [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.891 2 DEBUG nova.network.neutron [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Refreshing network info cache for port 74b4490f-0cc9-4b51-b6a0-246fd701e628 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.944 2 INFO nova.virt.libvirt.driver [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Snapshot image upload complete#033[00m
Oct  2 04:25:26 np0005465604 nova_compute[260603]: 2025-10-02 08:25:26.945 2 INFO nova.compute.manager [None req-fa5c5f79-64de-4c79-8af7-20e4994d36e9 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 5.24 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 04:25:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:25:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Oct  2 04:25:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Oct  2 04:25:27 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Oct  2 04:25:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494463941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.068 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.069 2 DEBUG nova.virt.libvirt.vif [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-889597672',display_name='tempest-ImagesTestJSON-server-889597672',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-889597672',id=35,image_ref='82efe28e-fddc-4d67-b244-5e7b89266ac0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-02761tfj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c00d9d50-5c81-4dc9-8316-c654d4802b4f',image_min_disk='1',image_min_ram='0',image_owner_id='56b1e1170f2e4a73aaf396476bc82261',image_owner_project_name='tempest-ImagesTestJSON-1188243509',image_owner_user_name='tempest-ImagesTestJSON-1188243509-project-member',image_user_id='6747651cfdcc4f868c43b9d78f5846c2',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:22Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=1c57c283-5fac-4b60-b9dc-3dbf3bfd5828,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.069 2 DEBUG nova.network.os_vif_util [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.071 2 DEBUG nova.network.os_vif_util [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.072 2 DEBUG nova.objects.instance [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.090 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  <uuid>1c57c283-5fac-4b60-b9dc-3dbf3bfd5828</uuid>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  <name>instance-00000023</name>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <nova:name>tempest-ImagesTestJSON-server-889597672</nova:name>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:25:26</nova:creationTime>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <nova:user uuid="6747651cfdcc4f868c43b9d78f5846c2">tempest-ImagesTestJSON-1188243509-project-member</nova:user>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <nova:project uuid="56b1e1170f2e4a73aaf396476bc82261">tempest-ImagesTestJSON-1188243509</nova:project>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="82efe28e-fddc-4d67-b244-5e7b89266ac0"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <nova:port uuid="74b4490f-0cc9-4b51-b6a0-246fd701e628">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <entry name="serial">1c57c283-5fac-4b60-b9dc-3dbf3bfd5828</entry>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <entry name="uuid">1c57c283-5fac-4b60-b9dc-3dbf3bfd5828</entry>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:a0:18:6b"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <target dev="tap74b4490f-0c"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/console.log" append="off"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <input type="keyboard" bus="usb"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:25:27 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:25:27 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:25:27 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:25:27 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.091 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Preparing to wait for external event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.091 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.091 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.091 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.092 2 DEBUG nova.virt.libvirt.vif [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-889597672',display_name='tempest-ImagesTestJSON-server-889597672',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-889597672',id=35,image_ref='82efe28e-fddc-4d67-b244-5e7b89266ac0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-02761tfj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c00d9d50-5c81-4dc9-8316-c654d4802b4f',image_min_disk='1',image_min_ram='0',image_owner_id='56b1e1170f2e4a73aaf396476bc82261',image_owner_project_name='tempest-ImagesTestJSON-1188243509',image_owner_user_name='tempest-ImagesTestJSON-1188243509-project-member',image_user_id='6747651cfdcc4f868c43b9d78f5846c2',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:22Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=1c57c283-5fac-4b60-b9dc-3dbf3bfd5828,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.092 2 DEBUG nova.network.os_vif_util [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.093 2 DEBUG nova.network.os_vif_util [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.093 2 DEBUG os_vif [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74b4490f-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.098 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap74b4490f-0c, col_values=(('external_ids', {'iface-id': '74b4490f-0cc9-4b51-b6a0-246fd701e628', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:18:6b', 'vm-uuid': '1c57c283-5fac-4b60-b9dc-3dbf3bfd5828'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:27 np0005465604 NetworkManager[45129]: <info>  [1759393527.1002] manager: (tap74b4490f-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.105 2 INFO os_vif [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c')#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.153 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.154 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.154 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No VIF found with MAC fa:16:3e:a0:18:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.155 2 INFO nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Using config drive#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.177 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.837 2 INFO nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Creating config drive at /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config#033[00m
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.849 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp363gwooq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:25:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:25:27
Oct  2 04:25:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:25:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:25:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'vms']
Oct  2 04:25:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:25:27 np0005465604 nova_compute[260603]: 2025-10-02 08:25:27.998 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp363gwooq" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.039 2 DEBUG nova.storage.rbd_utils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.044 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.236 2 DEBUG oslo_concurrency.processutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.238 2 INFO nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Deleting local config drive /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828/disk.config because it was imported into RBD.#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.273 2 DEBUG nova.network.neutron [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Updated VIF entry in instance network info cache for port 74b4490f-0cc9-4b51-b6a0-246fd701e628. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.274 2 DEBUG nova.network.neutron [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Updating instance_info_cache with network_info: [{"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.299 2 DEBUG oslo_concurrency.lockutils [req-0cbd9fb6-105f-48b1-82fe-be12b91a7f37 req-a61bd604-6fbe-4f75-828a-04eb3052affd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:25:28 np0005465604 kernel: tap74b4490f-0c: entered promiscuous mode
Oct  2 04:25:28 np0005465604 NetworkManager[45129]: <info>  [1759393528.3083] manager: (tap74b4490f-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Oct  2 04:25:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:28Z|00222|binding|INFO|Claiming lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 for this chassis.
Oct  2 04:25:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:28Z|00223|binding|INFO|74b4490f-0cc9-4b51-b6a0-246fd701e628: Claiming fa:16:3e:a0:18:6b 10.100.0.4
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.361 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:18:6b 10.100.0.4'], port_security=['fa:16:3e:a0:18:6b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c57c283-5fac-4b60-b9dc-3dbf3bfd5828', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '2', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=74b4490f-0cc9-4b51-b6a0-246fd701e628) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.363 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 74b4490f-0cc9-4b51-b6a0-246fd701e628 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 bound to our chassis#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.364 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360#033[00m
Oct  2 04:25:28 np0005465604 systemd-udevd[300952]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:25:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:28Z|00224|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 ovn-installed in OVS
Oct  2 04:25:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:28Z|00225|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 up in Southbound
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5771d2c7-7c22-4565-afc2-3b672054ecc5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:28 np0005465604 systemd-machined[214636]: New machine qemu-39-instance-00000023.
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:28 np0005465604 NetworkManager[45129]: <info>  [1759393528.3984] device (tap74b4490f-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:25:28 np0005465604 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Oct  2 04:25:28 np0005465604 NetworkManager[45129]: <info>  [1759393528.4000] device (tap74b4490f-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.426 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b319de57-0758-4f30-a2b6-fc2f8b619db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.430 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[08372561-8b78-405f-b874-4867f4bdbdda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.464 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[96562fd5-d38e-4a1e-bd41-288558693a4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.486 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f00a52-76fc-4882-b677-073d5fa9c492]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300966, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.512 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d29c57b5-7f22-41ac-a154-d4a1a2bac6eb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438841, 'tstamp': 438841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300968, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438844, 'tstamp': 438844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300968, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.514 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.518 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.518 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.519 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:28.519 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 569 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 8.5 MiB/s wr, 374 op/s
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.966 2 DEBUG nova.compute.manager [req-c30d633b-1cf8-44e7-9de4-3a8d6e474e8d req-90020330-324f-45b4-8b94-8882a2296748 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.966 2 DEBUG oslo_concurrency.lockutils [req-c30d633b-1cf8-44e7-9de4-3a8d6e474e8d req-90020330-324f-45b4-8b94-8882a2296748 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.966 2 DEBUG oslo_concurrency.lockutils [req-c30d633b-1cf8-44e7-9de4-3a8d6e474e8d req-90020330-324f-45b4-8b94-8882a2296748 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.967 2 DEBUG oslo_concurrency.lockutils [req-c30d633b-1cf8-44e7-9de4-3a8d6e474e8d req-90020330-324f-45b4-8b94-8882a2296748 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:28 np0005465604 nova_compute[260603]: 2025-10-02 08:25:28.967 2 DEBUG nova.compute.manager [req-c30d633b-1cf8-44e7-9de4-3a8d6e474e8d req-90020330-324f-45b4-8b94-8882a2296748 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Processing event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.314 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393529.3143692, 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.315 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] VM Started (Lifecycle Event)#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.317 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.321 2 DEBUG nova.virt.libvirt.driver [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.324 2 INFO nova.virt.libvirt.driver [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Instance spawned successfully.#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.325 2 INFO nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Took 6.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.325 2 DEBUG nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.376 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.379 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.409 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.410 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393529.3155656, 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.410 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.412 2 INFO nova.compute.manager [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Took 8.04 seconds to build instance.#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.486 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.489 2 DEBUG oslo_concurrency.lockutils [None req-e0dfdbb0-fb18-47fb-9d07-8908683fa3bc 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.492 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393529.3195827, 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.493 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.518 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:25:29 np0005465604 nova_compute[260603]: 2025-10-02 08:25:29.522 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:25:30 np0005465604 nova_compute[260603]: 2025-10-02 08:25:30.741 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:30 np0005465604 nova_compute[260603]: 2025-10-02 08:25:30.742 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:30 np0005465604 nova_compute[260603]: 2025-10-02 08:25:30.742 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:30 np0005465604 nova_compute[260603]: 2025-10-02 08:25:30.743 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:30 np0005465604 nova_compute[260603]: 2025-10-02 08:25:30.743 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:30 np0005465604 nova_compute[260603]: 2025-10-02 08:25:30.744 2 INFO nova.compute.manager [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Terminating instance#033[00m
Oct  2 04:25:30 np0005465604 nova_compute[260603]: 2025-10-02 08:25:30.745 2 DEBUG nova.compute.manager [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:25:30 np0005465604 kernel: tap74b4490f-0c (unregistering): left promiscuous mode
Oct  2 04:25:30 np0005465604 NetworkManager[45129]: <info>  [1759393530.7816] device (tap74b4490f-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:25:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:30Z|00226|binding|INFO|Releasing lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 from this chassis (sb_readonly=0)
Oct  2 04:25:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:30Z|00227|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 down in Southbound
Oct  2 04:25:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:30Z|00228|binding|INFO|Removing iface tap74b4490f-0c ovn-installed in OVS
Oct  2 04:25:30 np0005465604 nova_compute[260603]: 2025-10-02 08:25:30.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:30 np0005465604 nova_compute[260603]: 2025-10-02 08:25:30.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.797 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:18:6b 10.100.0.4'], port_security=['fa:16:3e:a0:18:6b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c57c283-5fac-4b60-b9dc-3dbf3bfd5828', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=74b4490f-0cc9-4b51-b6a0-246fd701e628) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:25:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.798 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 74b4490f-0cc9-4b51-b6a0-246fd701e628 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis#033[00m
Oct  2 04:25:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.799 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360#033[00m
Oct  2 04:25:30 np0005465604 nova_compute[260603]: 2025-10-02 08:25:30.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.820 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[877ec460-5910-4957-ab76-c9663319fb95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:30 np0005465604 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Deactivated successfully.
Oct  2 04:25:30 np0005465604 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Consumed 2.410s CPU time.
Oct  2 04:25:30 np0005465604 systemd-machined[214636]: Machine qemu-39-instance-00000023 terminated.
Oct  2 04:25:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 569 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 7.2 MiB/s wr, 317 op/s
Oct  2 04:25:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.859 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6dfad62d-d95f-4d8f-bdc2-e38dcc180414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.862 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[91e64708-c0cf-44cf-94c7-d2a91501b9c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.889 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[378257ac-0514-4df6-9d3a-7b2abd898eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.909 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b9673b83-5cba-4cd7-82d3-6ae30c006448]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301020, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.927 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[51f3a762-2717-4391-858a-8b6566ccc93b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438841, 'tstamp': 438841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301021, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438844, 'tstamp': 438844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301021, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.931 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:30 np0005465604 kernel: tap74b4490f-0c: entered promiscuous mode
Oct  2 04:25:30 np0005465604 NetworkManager[45129]: <info>  [1759393530.9623] manager: (tap74b4490f-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Oct  2 04:25:30 np0005465604 nova_compute[260603]: 2025-10-02 08:25:30.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:30Z|00229|binding|INFO|Claiming lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 for this chassis.
Oct  2 04:25:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:30Z|00230|binding|INFO|74b4490f-0cc9-4b51-b6a0-246fd701e628: Claiming fa:16:3e:a0:18:6b 10.100.0.4
Oct  2 04:25:30 np0005465604 kernel: tap74b4490f-0c (unregistering): left promiscuous mode
Oct  2 04:25:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:30.993 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:18:6b 10.100.0.4'], port_security=['fa:16:3e:a0:18:6b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c57c283-5fac-4b60-b9dc-3dbf3bfd5828', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=74b4490f-0cc9-4b51-b6a0-246fd701e628) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.015 2 INFO nova.virt.libvirt.driver [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Instance destroyed successfully.#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.016 2 DEBUG nova.objects.instance [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'resources' on Instance uuid 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:31 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:31Z|00231|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 ovn-installed in OVS
Oct  2 04:25:31 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:31Z|00232|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 up in Southbound
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.044 2 DEBUG nova.virt.libvirt.vif [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:25:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-889597672',display_name='tempest-ImagesTestJSON-server-889597672',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-889597672',id=35,image_ref='82efe28e-fddc-4d67-b244-5e7b89266ac0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-02761tfj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',imag
e_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='c00d9d50-5c81-4dc9-8316-c654d4802b4f',image_min_disk='1',image_min_ram='0',image_owner_id='56b1e1170f2e4a73aaf396476bc82261',image_owner_project_name='tempest-ImagesTestJSON-1188243509',image_owner_user_name='tempest-ImagesTestJSON-1188243509-project-member',image_user_id='6747651cfdcc4f868c43b9d78f5846c2',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:25:29Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=1c57c283-5fac-4b60-b9dc-3dbf3bfd5828,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.045 2 DEBUG nova.network.os_vif_util [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "address": "fa:16:3e:a0:18:6b", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74b4490f-0c", "ovs_interfaceid": "74b4490f-0cc9-4b51-b6a0-246fd701e628", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.046 2 DEBUG nova.network.os_vif_util [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.048 2 DEBUG os_vif [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.051 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.052 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:31 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:31Z|00233|binding|INFO|Releasing lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 from this chassis (sb_readonly=0)
Oct  2 04:25:31 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:31Z|00234|binding|INFO|Setting lport 74b4490f-0cc9-4b51-b6a0-246fd701e628 down in Southbound
Oct  2 04:25:31 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:31Z|00235|binding|INFO|Removing iface tap74b4490f-0c ovn-installed in OVS
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.052 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.052 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.054 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 74b4490f-0cc9-4b51-b6a0-246fd701e628 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.055 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.054 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74b4490f-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.062 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:18:6b 10.100.0.4'], port_security=['fa:16:3e:a0:18:6b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c57c283-5fac-4b60-b9dc-3dbf3bfd5828', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=74b4490f-0cc9-4b51-b6a0-246fd701e628) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.077 2 INFO os_vif [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:18:6b,bridge_name='br-int',has_traffic_filtering=True,id=74b4490f-0cc9-4b51-b6a0-246fd701e628,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74b4490f-0c')#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.079 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a9efe2-a815-4911-971b-3f9b8c03f331]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.109 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6d3721-e725-4e43-8e71-671395a8f90b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.116 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b3ca5f-b864-4428-bc93-466b853713eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.147 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[82b9d22a-6496-46f5-bfd5-1502d72db887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8242dc0-52eb-4bad-ad8a-e1d6ea97edee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301056, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.182 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[23234c07-0cc7-4ec9-bfdf-1ac20095a247]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438841, 'tstamp': 438841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301057, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438844, 'tstamp': 438844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301057, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.184 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.188 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 74b4490f-0cc9-4b51-b6a0-246fd701e628 in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.191 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 897d7abf-9e23-43cd-8f60-7156792a4360#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.206 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[17226e45-b9a6-41d8-a9d1-1ddb61bcd6ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.225 2 DEBUG nova.compute.manager [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.226 2 DEBUG oslo_concurrency.lockutils [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.226 2 DEBUG oslo_concurrency.lockutils [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.227 2 DEBUG oslo_concurrency.lockutils [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.227 2 DEBUG nova.compute.manager [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.227 2 WARNING nova.compute.manager [req-55cfac81-b835-4fb0-ba08-b263b368ff73 req-2aeecc06-0290-4f0a-8074-ce77d1437746 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.245 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[98940eb9-d18e-46c9-9dff-d8be204195b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.247 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[97c99d25-d5e0-4a9c-903b-9c868fe250d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.286 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[92a36ac2-da30-4d50-87cf-add1aad34acb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.304 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83368c95-ba16-4b5e-a87c-8bc4e55f6c98]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap897d7abf-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:18:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438826, 'reachable_time': 28172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301064, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.323 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[384d1e30-886d-4b96-8998-b33a4e7ebfaa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438841, 'tstamp': 438841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301065, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap897d7abf-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438844, 'tstamp': 438844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301065, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.325 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap897d7abf-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.328 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.329 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap897d7abf-90, col_values=(('external_ids', {'iface-id': 'dfb6b0ba-8442-43e4-bc2c-1c6bbd12cd76'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:31.329 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.402 2 INFO nova.virt.libvirt.driver [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Deleting instance files /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_del#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.403 2 INFO nova.virt.libvirt.driver [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Deletion of /var/lib/nova/instances/1c57c283-5fac-4b60-b9dc-3dbf3bfd5828_del complete#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.450 2 INFO nova.compute.manager [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.451 2 DEBUG oslo.service.loopingcall [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.451 2 DEBUG nova.compute.manager [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.451 2 DEBUG nova.network.neutron [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:25:31 np0005465604 nova_compute[260603]: 2025-10-02 08:25:31.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:25:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Oct  2 04:25:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Oct  2 04:25:32 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Oct  2 04:25:32 np0005465604 nova_compute[260603]: 2025-10-02 08:25:32.817 2 DEBUG nova.network.neutron [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:32 np0005465604 nova_compute[260603]: 2025-10-02 08:25:32.840 2 INFO nova.compute.manager [-] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Took 1.39 seconds to deallocate network for instance.#033[00m
Oct  2 04:25:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 569 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 6.3 MiB/s wr, 365 op/s
Oct  2 04:25:32 np0005465604 nova_compute[260603]: 2025-10-02 08:25:32.887 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:32 np0005465604 nova_compute[260603]: 2025-10-02 08:25:32.887 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.011 2 DEBUG oslo_concurrency.processutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.307 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.308 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.309 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.310 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.311 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.311 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.312 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.313 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.313 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.314 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.315 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.315 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.316 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.316 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.317 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.318 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.318 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.319 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.319 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.320 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.321 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.321 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.322 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.322 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.323 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.324 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.324 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.325 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.326 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.327 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-unplugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.328 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.329 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.329 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.330 2 DEBUG oslo_concurrency.lockutils [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.331 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] No waiting events found dispatching network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.331 2 WARNING nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received unexpected event network-vif-plugged-74b4490f-0cc9-4b51-b6a0-246fd701e628 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.332 2 DEBUG nova.compute.manager [req-479a6036-19b2-4494-8ec3-55b06ec55a8c req-4fabf517-1f8b-4d82-a554-bf10192c81b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828] Received event network-vif-deleted-74b4490f-0cc9-4b51-b6a0-246fd701e628 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1235987546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.542 2 DEBUG oslo_concurrency.processutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.552 2 DEBUG nova.compute.provider_tree [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.573 2 DEBUG nova.scheduler.client.report [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.602 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.635 2 INFO nova.scheduler.client.report [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Deleted allocations for instance 1c57c283-5fac-4b60-b9dc-3dbf3bfd5828#033[00m
Oct  2 04:25:33 np0005465604 nova_compute[260603]: 2025-10-02 08:25:33.696 2 DEBUG oslo_concurrency.lockutils [None req-2a1396d8-eedd-4e2c-969d-540f766e9849 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "1c57c283-5fac-4b60-b9dc-3dbf3bfd5828" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Oct  2 04:25:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Oct  2 04:25:34 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Oct  2 04:25:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:34.812 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:34.812 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:34.813 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 574 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.6 MiB/s wr, 422 op/s
Oct  2 04:25:34 np0005465604 nova_compute[260603]: 2025-10-02 08:25:34.913 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:34 np0005465604 nova_compute[260603]: 2025-10-02 08:25:34.914 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:34 np0005465604 nova_compute[260603]: 2025-10-02 08:25:34.914 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:34 np0005465604 nova_compute[260603]: 2025-10-02 08:25:34.914 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:34 np0005465604 nova_compute[260603]: 2025-10-02 08:25:34.915 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:34 np0005465604 nova_compute[260603]: 2025-10-02 08:25:34.917 2 INFO nova.compute.manager [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Terminating instance#033[00m
Oct  2 04:25:34 np0005465604 nova_compute[260603]: 2025-10-02 08:25:34.920 2 DEBUG nova.compute.manager [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:25:34 np0005465604 kernel: tapc606d1e1-b2 (unregistering): left promiscuous mode
Oct  2 04:25:34 np0005465604 NetworkManager[45129]: <info>  [1759393534.9787] device (tapc606d1e1-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:25:34 np0005465604 nova_compute[260603]: 2025-10-02 08:25:34.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:34Z|00236|binding|INFO|Releasing lport c606d1e1-b27f-498f-989e-2cce97a7589d from this chassis (sb_readonly=0)
Oct  2 04:25:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:34Z|00237|binding|INFO|Setting lport c606d1e1-b27f-498f-989e-2cce97a7589d down in Southbound
Oct  2 04:25:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:34Z|00238|binding|INFO|Removing iface tapc606d1e1-b2 ovn-installed in OVS
Oct  2 04:25:34 np0005465604 nova_compute[260603]: 2025-10-02 08:25:34.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.002 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:5d:05 10.100.0.11'], port_security=['fa:16:3e:48:5d:05 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c00d9d50-5c81-4dc9-8316-c654d4802b4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-897d7abf-9e23-43cd-8f60-7156792a4360', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b1e1170f2e4a73aaf396476bc82261', 'neutron:revision_number': '4', 'neutron:security_group_ids': '499feab1-b366-4801-b2b7-dd6955a83cbf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd1249bc-5cfa-45c9-9c58-05221f4de160, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c606d1e1-b27f-498f-989e-2cce97a7589d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.004 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c606d1e1-b27f-498f-989e-2cce97a7589d in datapath 897d7abf-9e23-43cd-8f60-7156792a4360 unbound from our chassis#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.007 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 897d7abf-9e23-43cd-8f60-7156792a4360, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.008 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad4f4ce-5f6d-48bd-9bc0-983a0494ffd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.009 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 namespace which is not needed anymore#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:35 np0005465604 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct  2 04:25:35 np0005465604 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Consumed 12.987s CPU time.
Oct  2 04:25:35 np0005465604 systemd-machined[214636]: Machine qemu-37-instance-00000020 terminated.
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.168 2 INFO nova.virt.libvirt.driver [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Instance destroyed successfully.#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.169 2 DEBUG nova.objects.instance [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'resources' on Instance uuid c00d9d50-5c81-4dc9-8316-c654d4802b4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.181 2 DEBUG nova.virt.libvirt.vif [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1903679195',display_name='tempest-ImagesTestJSON-server-1903679195',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1903679195',id=32,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:25:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-lm5g3lue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:25:17Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=c00d9d50-5c81-4dc9-8316-c654d4802b4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.181 2 DEBUG nova.network.os_vif_util [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "c606d1e1-b27f-498f-989e-2cce97a7589d", "address": "fa:16:3e:48:5d:05", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc606d1e1-b2", "ovs_interfaceid": "c606d1e1-b27f-498f-989e-2cce97a7589d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.182 2 DEBUG nova.network.os_vif_util [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.182 2 DEBUG os_vif [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc606d1e1-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.189 2 INFO os_vif [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:5d:05,bridge_name='br-int',has_traffic_filtering=True,id=c606d1e1-b27f-498f-989e-2cce97a7589d,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc606d1e1-b2')#033[00m
Oct  2 04:25:35 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [NOTICE]   (299548) : haproxy version is 2.8.14-c23fe91
Oct  2 04:25:35 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [NOTICE]   (299548) : path to executable is /usr/sbin/haproxy
Oct  2 04:25:35 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [WARNING]  (299548) : Exiting Master process...
Oct  2 04:25:35 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [WARNING]  (299548) : Exiting Master process...
Oct  2 04:25:35 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [ALERT]    (299548) : Current worker (299550) exited with code 143 (Terminated)
Oct  2 04:25:35 np0005465604 neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360[299543]: [WARNING]  (299548) : All workers exited. Exiting... (0)
Oct  2 04:25:35 np0005465604 systemd[1]: libpod-c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598.scope: Deactivated successfully.
Oct  2 04:25:35 np0005465604 podman[301111]: 2025-10-02 08:25:35.220895917 +0000 UTC m=+0.072291573 container died c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:25:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598-userdata-shm.mount: Deactivated successfully.
Oct  2 04:25:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1dec502266d477323c4c3cfd6e09d38867b92c6da0907359bbdddd0555e1b6f0-merged.mount: Deactivated successfully.
Oct  2 04:25:35 np0005465604 podman[301111]: 2025-10-02 08:25:35.267255721 +0000 UTC m=+0.118651367 container cleanup c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 04:25:35 np0005465604 systemd[1]: libpod-conmon-c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598.scope: Deactivated successfully.
Oct  2 04:25:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Oct  2 04:25:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Oct  2 04:25:35 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Oct  2 04:25:35 np0005465604 podman[301166]: 2025-10-02 08:25:35.361982596 +0000 UTC m=+0.059330765 container remove c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.368 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4b3b13-9e04-4728-83b6-ebf0a5a9be78]: (4, ('Thu Oct  2 08:25:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598)\nc4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598\nThu Oct  2 08:25:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 (c4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598)\nc4c38658fd870036af549b2b8527f6c57a22f986c73eb3028f69d41abccc6598\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.370 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[59d5b35f-3180-41fd-b711-65c9aa96be92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.372 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap897d7abf-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:35 np0005465604 kernel: tap897d7abf-90: left promiscuous mode
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.400 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[af4f7633-985d-4fc5-8965-bf84c04233a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.420 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91dc25aa-b804-48e0-ae4c-13af1772b29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.424 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[022e6a94-2784-4868-bdf7-ae587e9913f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.448 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50b2d982-c297-4228-80da-d642eec06b2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438816, 'reachable_time': 41488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301182, 'error': None, 'target': 'ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.453 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-897d7abf-9e23-43cd-8f60-7156792a4360 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:25:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:25:35.453 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[94ad7053-bb14-4cd5-94da-81693d529386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:25:35 np0005465604 systemd[1]: run-netns-ovnmeta\x2d897d7abf\x2d9e23\x2d43cd\x2d8f60\x2d7156792a4360.mount: Deactivated successfully.
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.660 2 INFO nova.virt.libvirt.driver [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Deleting instance files /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f_del#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.662 2 INFO nova.virt.libvirt.driver [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Deletion of /var/lib/nova/instances/c00d9d50-5c81-4dc9-8316-c654d4802b4f_del complete#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.717 2 INFO nova.compute.manager [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.718 2 DEBUG oslo.service.loopingcall [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.719 2 DEBUG nova.compute.manager [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.719 2 DEBUG nova.network.neutron [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:25:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:35Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:cf:d8 10.100.0.3
Oct  2 04:25:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:25:35Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:cf:d8 10.100.0.3
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.882 2 DEBUG nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-unplugged-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.883 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.883 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.883 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.883 2 DEBUG nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] No waiting events found dispatching network-vif-unplugged-c606d1e1-b27f-498f-989e-2cce97a7589d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.884 2 DEBUG nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-unplugged-c606d1e1-b27f-498f-989e-2cce97a7589d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.884 2 DEBUG nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.884 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.884 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.884 2 DEBUG oslo_concurrency.lockutils [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.885 2 DEBUG nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] No waiting events found dispatching network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:25:35 np0005465604 nova_compute[260603]: 2025-10-02 08:25:35.885 2 WARNING nova.compute.manager [req-19bc545f-1600-46a6-b49a-038129d8712b req-443bf0d8-0112-459e-88ce-238ed2f74521 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received unexpected event network-vif-plugged-c606d1e1-b27f-498f-989e-2cce97a7589d for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:25:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Oct  2 04:25:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Oct  2 04:25:36 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Oct  2 04:25:36 np0005465604 nova_compute[260603]: 2025-10-02 08:25:36.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 574 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 972 KiB/s wr, 250 op/s
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.004 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.005 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.005 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "aebef537-a40c-45aa-98b5-ebdd7c27028b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.006 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.006 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.008 2 INFO nova.compute.manager [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Terminating instance
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.011 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "refresh_cache-aebef537-a40c-45aa-98b5-ebdd7c27028b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.012 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquired lock "refresh_cache-aebef537-a40c-45aa-98b5-ebdd7c27028b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.012 2 DEBUG nova.network.neutron [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:25:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.043 2 DEBUG nova.network.neutron [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.070 2 INFO nova.compute.manager [-] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Took 1.35 seconds to deallocate network for instance.
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.128 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.129 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.175 2 DEBUG nova.network.neutron [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.252 2 DEBUG oslo_concurrency.processutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.437 2 DEBUG nova.network.neutron [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.460 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.460 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.461 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Releasing lock "refresh_cache-aebef537-a40c-45aa-98b5-ebdd7c27028b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.462 2 DEBUG nova.compute.manager [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.478 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.517 2 DEBUG nova.compute.manager [req-1ea3a9c8-fa01-4058-a0dc-c6414b88c80f req-83f169a8-14a4-45f2-ad1f-3d6a5010b080 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c00d9d50-5c81-4dc9-8316-c654d4802b4f] Received event network-vif-deleted-c606d1e1-b27f-498f-989e-2cce97a7589d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.535 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:37 np0005465604 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct  2 04:25:37 np0005465604 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000021.scope: Consumed 12.987s CPU time.
Oct  2 04:25:37 np0005465604 systemd-machined[214636]: Machine qemu-36-instance-00000021 terminated.
Oct  2 04:25:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2419347802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.711 2 INFO nova.virt.libvirt.driver [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance destroyed successfully.
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.712 2 DEBUG nova.objects.instance [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'resources' on Instance uuid aebef537-a40c-45aa-98b5-ebdd7c27028b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:25:37 np0005465604 podman[301205]: 2025-10-02 08:25:37.717774532 +0000 UTC m=+0.114037837 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.721 2 DEBUG oslo_concurrency.processutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:25:37 np0005465604 podman[301204]: 2025-10-02 08:25:37.727974941 +0000 UTC m=+0.127245564 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.751 2 DEBUG nova.compute.provider_tree [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.769 2 DEBUG nova.scheduler.client.report [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.791 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.794 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.803 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.803 2 INFO nova.compute.claims [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.825 2 INFO nova.scheduler.client.report [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Deleted allocations for instance c00d9d50-5c81-4dc9-8316-c654d4802b4f
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.911 2 DEBUG oslo_concurrency.lockutils [None req-b28dc184-4593-4823-855c-23d51dea5d93 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "c00d9d50-5c81-4dc9-8316-c654d4802b4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:37 np0005465604 nova_compute[260603]: 2025-10-02 08:25:37.980 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.188 2 INFO nova.virt.libvirt.driver [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Deleting instance files /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b_del
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.189 2 INFO nova.virt.libvirt.driver [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Deletion of /var/lib/nova/instances/aebef537-a40c-45aa-98b5-ebdd7c27028b_del complete
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.236 2 INFO nova.compute.manager [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.236 2 DEBUG oslo.service.loopingcall [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.237 2 DEBUG nova.compute.manager [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.237 2 DEBUG nova.network.neutron [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026947198346031127 of space, bias 1.0, pg target 0.8084159503809338 quantized to 32 (current 32)
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002856621776714202 of space, bias 1.0, pg target 0.8569865330142605 quantized to 32 (current 32)
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:25:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687789325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.484 2 DEBUG nova.network.neutron [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.494 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.498 2 DEBUG nova.network.neutron [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.504 2 DEBUG nova.compute.provider_tree [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.517 2 INFO nova.compute.manager [-] [instance: aebef537-a40c-45aa-98b5-ebdd7c27028b] Took 0.28 seconds to deallocate network for instance.
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.526 2 DEBUG nova.scheduler.client.report [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.562 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.563 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.573 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.573 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.612 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.612 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.626 2 INFO nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.641 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.659 2 DEBUG oslo_concurrency.processutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.734 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.735 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.736 2 INFO nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Creating image(s)
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.760 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.787 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.822 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.827 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.862 2 DEBUG nova.policy [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6747651cfdcc4f868c43b9d78f5846c2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b1e1170f2e4a73aaf396476bc82261', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:25:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 241 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 464 op/s
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.870 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.870 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "1bd45455-6745-4310-a5a6-f86dd4dcb4ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.887 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.913 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.913 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.914 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.914 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.932 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.935 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:25:38 np0005465604 nova_compute[260603]: 2025-10-02 08:25:38.989 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071681277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.140 2 DEBUG oslo_concurrency.processutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.146 2 DEBUG nova.compute.provider_tree [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.161 2 DEBUG nova.scheduler.client.report [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.181 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.183 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.189 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.190 2 INFO nova.compute.claims [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.224 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.248 2 INFO nova.scheduler.client.report [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Deleted allocations for instance aebef537-a40c-45aa-98b5-ebdd7c27028b
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.300 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] resizing rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.338 2 DEBUG oslo_concurrency.lockutils [None req-7eb05cbb-9755-447c-ac0c-67b7973da162 cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "aebef537-a40c-45aa-98b5-ebdd7c27028b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.351 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Successfully created port: 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.415 2 DEBUG nova.objects.instance [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.426 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.427 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Ensure instance console log exists: /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.427 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.427 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.427 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.446 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:25:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2058094305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.867 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.872 2 DEBUG nova.compute.provider_tree [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.897 2 DEBUG nova.scheduler.client.report [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.922 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.923 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.959 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Successfully updated port: 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.968 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.969 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.974 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.974 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquired lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.975 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:25:39 np0005465604 nova_compute[260603]: 2025-10-02 08:25:39.997 2 INFO nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.028 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.033 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.033 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.034 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "9ea31984-a45e-4154-9df9-3c4e8ce69309-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.034 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.035 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.037 2 INFO nova.compute.manager [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Terminating instance
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.039 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "refresh_cache-9ea31984-a45e-4154-9df9-3c4e8ce69309" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.040 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquired lock "refresh_cache-9ea31984-a45e-4154-9df9-3c4e8ce69309" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.040 2 DEBUG nova.network.neutron [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.048 2 DEBUG nova.compute.manager [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Received event network-changed-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.048 2 DEBUG nova.compute.manager [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Refreshing instance network info cache due to event network-changed-9bdc55f5-2bf2-467e-9e1e-33215451c0c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.049 2 DEBUG oslo_concurrency.lockutils [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.143 2 DEBUG nova.policy [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f5cf08c876094c4d847b57fd4506bbff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '93b2c1583a83423288661225e3f86391', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.150 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.152 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.153 2 INFO nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Creating image(s)
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.186 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.224 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.259 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.263 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.303 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.358 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.359 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.360 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.360 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.384 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.388 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.494 2 DEBUG nova.network.neutron [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.630 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.683 2 DEBUG nova.storage.rbd_utils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] resizing rbd image 1bd45455-6745-4310-a5a6-f86dd4dcb4ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.765 2 DEBUG nova.network.neutron [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.771 2 DEBUG nova.objects.instance [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lazy-loading 'migration_context' on Instance uuid 1bd45455-6745-4310-a5a6-f86dd4dcb4ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.785 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.786 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Ensure instance console log exists: /var/lib/nova/instances/1bd45455-6745-4310-a5a6-f86dd4dcb4ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.786 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.787 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.787 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.792 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Releasing lock "refresh_cache-9ea31984-a45e-4154-9df9-3c4e8ce69309" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:25:40 np0005465604 nova_compute[260603]: 2025-10-02 08:25:40.792 2 DEBUG nova.compute.manager [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:25:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 241 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 736 KiB/s rd, 3.3 MiB/s wr, 345 op/s
Oct  2 04:25:40 np0005465604 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct  2 04:25:40 np0005465604 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 13.874s CPU time.
Oct  2 04:25:40 np0005465604 systemd-machined[214636]: Machine qemu-35-instance-0000001f terminated.
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.019 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Successfully created port: a75b1cfa-e509-4676-bd92-b36be82f1e83 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.027 2 INFO nova.virt.libvirt.driver [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance destroyed successfully.#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.028 2 DEBUG nova.objects.instance [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lazy-loading 'resources' on Instance uuid 9ea31984-a45e-4154-9df9-3c4e8ce69309 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.140 2 DEBUG nova.network.neutron [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Updating instance_info_cache with network_info: [{"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.168 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Releasing lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.169 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Instance network_info: |[{"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.170 2 DEBUG oslo_concurrency.lockutils [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.171 2 DEBUG nova.network.neutron [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Refreshing network info cache for port 9bdc55f5-2bf2-467e-9e1e-33215451c0c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.177 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Start _get_guest_xml network_info=[{"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.183 2 WARNING nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.189 2 DEBUG nova.virt.libvirt.host [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.190 2 DEBUG nova.virt.libvirt.host [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.206 2 DEBUG nova.virt.libvirt.host [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.207 2 DEBUG nova.virt.libvirt.host [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.208 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.208 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.209 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.210 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.210 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.210 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.211 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.211 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.212 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.212 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.213 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.214 2 DEBUG nova.virt.hardware [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.221 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.486 2 INFO nova.virt.libvirt.driver [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Deleting instance files /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309_del#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.488 2 INFO nova.virt.libvirt.driver [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Deletion of /var/lib/nova/instances/9ea31984-a45e-4154-9df9-3c4e8ce69309_del complete#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.578 2 INFO nova.compute.manager [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.578 2 DEBUG oslo.service.loopingcall [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.578 2 DEBUG nova.compute.manager [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.579 2 DEBUG nova.network.neutron [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.664 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Successfully updated port: a75b1cfa-e509-4676-bd92-b36be82f1e83 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.679 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquiring lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.679 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Acquired lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.679 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:25:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1118430878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.713 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.747 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.754 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.798 2 DEBUG nova.network.neutron [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.813 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.819 2 DEBUG nova.network.neutron [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.835 2 INFO nova.compute.manager [-] [instance: 9ea31984-a45e-4154-9df9-3c4e8ce69309] Took 0.26 seconds to deallocate network for instance.#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.891 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.892 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.952 2 DEBUG oslo_concurrency.lockutils [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-c28cb03c-6207-4ec5-9156-03252350561c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.952 2 DEBUG oslo_concurrency.lockutils [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-c28cb03c-6207-4ec5-9156-03252350561c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:41 np0005465604 nova_compute[260603]: 2025-10-02 08:25:41.952 2 DEBUG nova.objects.instance [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid c28cb03c-6207-4ec5-9156-03252350561c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.020 2 DEBUG oslo_concurrency.processutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:25:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Oct  2 04:25:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Oct  2 04:25:42 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.122 2 DEBUG nova.compute.manager [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Received event network-changed-a75b1cfa-e509-4676-bd92-b36be82f1e83 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.122 2 DEBUG nova.compute.manager [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Refreshing instance network info cache due to event network-changed-a75b1cfa-e509-4676-bd92-b36be82f1e83. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.123 2 DEBUG oslo_concurrency.lockutils [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:25:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133314659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.263 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.265 2 DEBUG nova.virt.libvirt.vif [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1285601584',display_name='tempest-ImagesTestJSON-server-1285601584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1285601584',id=36,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-ac1xe9vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:38Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.265 2 DEBUG nova.network.os_vif_util [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.267 2 DEBUG nova.network.os_vif_util [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:3f:d3,bridge_name='br-int',has_traffic_filtering=True,id=9bdc55f5-2bf2-467e-9e1e-33215451c0c4,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bdc55f5-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.269 2 DEBUG nova.objects.instance [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.285 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  <uuid>9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a</uuid>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  <name>instance-00000024</name>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <nova:name>tempest-ImagesTestJSON-server-1285601584</nova:name>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:25:41</nova:creationTime>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <nova:user uuid="6747651cfdcc4f868c43b9d78f5846c2">tempest-ImagesTestJSON-1188243509-project-member</nova:user>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <nova:project uuid="56b1e1170f2e4a73aaf396476bc82261">tempest-ImagesTestJSON-1188243509</nova:project>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <nova:port uuid="9bdc55f5-2bf2-467e-9e1e-33215451c0c4">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <entry name="serial">9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a</entry>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <entry name="uuid">9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a</entry>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk.config">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:88:3f:d3"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <target dev="tap9bdc55f5-2b"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/console.log" append="off"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:25:42 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:25:42 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:25:42 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:25:42 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.286 2 DEBUG nova.compute.manager [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Preparing to wait for external event network-vif-plugged-9bdc55f5-2bf2-467e-9e1e-33215451c0c4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.286 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Acquiring lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.286 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.287 2 DEBUG oslo_concurrency.lockutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Lock "9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.288 2 DEBUG nova.virt.libvirt.vif [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1285601584',display_name='tempest-ImagesTestJSON-server-1285601584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1285601584',id=36,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b1e1170f2e4a73aaf396476bc82261',ramdisk_id='',reservation_id='r-ac1xe9vb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1188243509',owner_user_name='tempest-ImagesTestJSON-1188243509-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:25:38Z,user_data=None,user_id='6747651cfdcc4f868c43b9d78f5846c2',uuid=9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.289 2 DEBUG nova.network.os_vif_util [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converting VIF {"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.290 2 DEBUG nova.network.os_vif_util [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:88:3f:d3,bridge_name='br-int',has_traffic_filtering=True,id=9bdc55f5-2bf2-467e-9e1e-33215451c0c4,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bdc55f5-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.291 2 DEBUG os_vif [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:3f:d3,bridge_name='br-int',has_traffic_filtering=True,id=9bdc55f5-2bf2-467e-9e1e-33215451c0c4,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bdc55f5-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.292 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bdc55f5-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9bdc55f5-2b, col_values=(('external_ids', {'iface-id': '9bdc55f5-2bf2-467e-9e1e-33215451c0c4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:88:3f:d3', 'vm-uuid': '9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:42 np0005465604 NetworkManager[45129]: <info>  [1759393542.3046] manager: (tap9bdc55f5-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.310 2 INFO os_vif [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:88:3f:d3,bridge_name='br-int',has_traffic_filtering=True,id=9bdc55f5-2bf2-467e-9e1e-33215451c0c4,network=Network(897d7abf-9e23-43cd-8f60-7156792a4360),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9bdc55f5-2b')#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.390 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.391 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.391 2 DEBUG nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] No VIF found with MAC fa:16:3e:88:3f:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.392 2 INFO nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Using config drive#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.424 2 DEBUG nova.storage.rbd_utils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] rbd image 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:25:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:25:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3994778032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.457 2 DEBUG oslo_concurrency.processutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.466 2 DEBUG nova.compute.provider_tree [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.486 2 DEBUG nova.scheduler.client.report [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.513 2 DEBUG nova.network.neutron [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Updating instance_info_cache with network_info: [{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.518 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.528 2 DEBUG oslo_concurrency.lockutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Releasing lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.528 2 DEBUG nova.compute.manager [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Instance network_info: |[{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.529 2 DEBUG oslo_concurrency.lockutils [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1bd45455-6745-4310-a5a6-f86dd4dcb4ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.529 2 DEBUG nova.network.neutron [req-dfda69f3-d6ef-48c7-8abc-e5bbd8b88750 req-cd903459-2fb0-4545-b111-1ecb0b8ab98a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Refreshing network info cache for port a75b1cfa-e509-4676-bd92-b36be82f1e83 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.532 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Start _get_guest_xml network_info=[{"id": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "address": "fa:16:3e:55:9e:33", "network": {"id": "e086d5b9-24e9-42fc-adba-1a3993e8f3d1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1978168585-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "93b2c1583a83423288661225e3f86391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa75b1cfa-e5", "ovs_interfaceid": "a75b1cfa-e509-4676-bd92-b36be82f1e83", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.537 2 WARNING nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.541 2 DEBUG nova.virt.libvirt.host [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.542 2 DEBUG nova.virt.libvirt.host [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.551 2 DEBUG nova.virt.libvirt.host [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.551 2 DEBUG nova.virt.libvirt.host [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.552 2 DEBUG nova.virt.libvirt.driver [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.552 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.552 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.552 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.553 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.553 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.553 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.553 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.554 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.554 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.554 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.554 2 DEBUG nova.virt.hardware [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.557 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.597 2 INFO nova.scheduler.client.report [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Deleted allocations for instance 9ea31984-a45e-4154-9df9-3c4e8ce69309#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.679 2 DEBUG oslo_concurrency.lockutils [None req-0b7b3bab-c103-47fe-8c2c-b2a06dacfb6d cf69d2cc0a684734b9d2d6da2fee6bf7 1d3020619cd44cbd964c169ee848da8e - - default default] Lock "9ea31984-a45e-4154-9df9-3c4e8ce69309" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.807 2 DEBUG nova.objects.instance [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid c28cb03c-6207-4ec5-9156-03252350561c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.824 2 DEBUG nova.network.neutron [None req-5b9c58e1-33a4-4b66-ab36-d347a7ee5326 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: c28cb03c-6207-4ec5-9156-03252350561c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:25:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 199 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 696 KiB/s rd, 6.1 MiB/s wr, 382 op/s
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.896 2 INFO nova.virt.libvirt.driver [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Creating config drive at /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/disk.config#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.905 2 DEBUG oslo_concurrency.processutils [None req-66f5f59f-e0e1-4d88-a5aa-8e5543e69563 6747651cfdcc4f868c43b9d78f5846c2 56b1e1170f2e4a73aaf396476bc82261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpns3ssrxa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.954 2 DEBUG nova.network.neutron [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Updated VIF entry in instance network info cache for port 9bdc55f5-2bf2-467e-9e1e-33215451c0c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.956 2 DEBUG nova.network.neutron [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a] Updating instance_info_cache with network_info: [{"id": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "address": "fa:16:3e:88:3f:d3", "network": {"id": "897d7abf-9e23-43cd-8f60-7156792a4360", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1963841282-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b1e1170f2e4a73aaf396476bc82261", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9bdc55f5-2b", "ovs_interfaceid": "9bdc55f5-2bf2-467e-9e1e-33215451c0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:25:42 np0005465604 nova_compute[260603]: 2025-10-02 08:25:42.978 2 DEBUG oslo_concurrency.lockutils [req-e5740b01-a756-4362-a02d-686f3ac5af9d req-c1a254a5-ce2f-49b4-8926-3a1e4a8df152 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-9b96eb1e-bf4c-496b-a7a2-f73b6efdd45a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:25:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:25:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/588820475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:25:43 np0005465604 nova_compute[260603]: 2025-10-02 08:25:43.007 2 DEBUG oslo_concurrency.processutils [None req-66325f46-248a-4611-916e-e1cf43c7240d f5cf08c876094c4d847b57fd4506bbff 93b2c1583a83423288661225e3f86391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:26:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:26:27
Oct  2 04:26:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:26:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:26:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'images', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.log']
Oct  2 04:26:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:26:28 np0005465604 rsyslogd[1004]: imjournal: 3093 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  2 04:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:26:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:28Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:e3:68 10.100.0.10
Oct  2 04:26:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:28Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:e3:68 10.100.0.10
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.380 2 DEBUG nova.network.neutron [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updating instance_info_cache with network_info: [{"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.412 2 DEBUG oslo_concurrency.lockutils [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Releasing lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.414 2 DEBUG oslo_concurrency.lockutils [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.414 2 DEBUG nova.network.neutron [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Refreshing network info cache for port 92e4e44e-f231-45c1-8284-1059a5c03a05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.419 2 DEBUG nova.virt.libvirt.vif [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-489114303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-489114303',id=39,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e6fa690d01af4f20ba341e59a2be26bb',ramdisk_id='',reservation_id='r-2okxr2ge',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',o
wner_project_name='tempest-AttachInterfacesV270Test-655113368',owner_user_name='tempest-AttachInterfacesV270Test-655113368-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:20Z,user_data=None,user_id='758b34678d69489d8841d33743bd238a',uuid=6de0ab38-2086-43ab-a32f-827aebf2432d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.424 2 DEBUG nova.network.os_vif_util [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converting VIF {"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.427 2 DEBUG nova.network.os_vif_util [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.428 2 DEBUG os_vif [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92e4e44e-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.434 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92e4e44e-f2, col_values=(('external_ids', {'iface-id': '92e4e44e-f231-45c1-8284-1059a5c03a05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:1b:56', 'vm-uuid': '6de0ab38-2086-43ab-a32f-827aebf2432d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:26:28 np0005465604 NetworkManager[45129]: <info>  [1759393588.4823] manager: (tap92e4e44e-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.492 2 INFO os_vif [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2')#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.492 2 DEBUG nova.virt.libvirt.vif [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-489114303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-489114303',id=39,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e6fa690d01af4f20ba341e59a2be26bb',ramdisk_id='',reservation_id='r-2okxr2ge',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',o
wner_project_name='tempest-AttachInterfacesV270Test-655113368',owner_user_name='tempest-AttachInterfacesV270Test-655113368-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:20Z,user_data=None,user_id='758b34678d69489d8841d33743bd238a',uuid=6de0ab38-2086-43ab-a32f-827aebf2432d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.493 2 DEBUG nova.network.os_vif_util [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converting VIF {"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.493 2 DEBUG nova.network.os_vif_util [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.495 2 DEBUG nova.virt.libvirt.guest [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] attach device xml: <interface type="ethernet">
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:fd:1b:56"/>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <target dev="tap92e4e44e-f2"/>
Oct  2 04:26:28 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:26:28 np0005465604 nova_compute[260603]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 04:26:28 np0005465604 kernel: tap92e4e44e-f2: entered promiscuous mode
Oct  2 04:26:28 np0005465604 NetworkManager[45129]: <info>  [1759393588.5078] manager: (tap92e4e44e-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Oct  2 04:26:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:28Z|00278|binding|INFO|Claiming lport 92e4e44e-f231-45c1-8284-1059a5c03a05 for this chassis.
Oct  2 04:26:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:28Z|00279|binding|INFO|92e4e44e-f231-45c1-8284-1059a5c03a05: Claiming fa:16:3e:fd:1b:56 10.100.0.3
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.526 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:1b:56 10.100.0.3'], port_security=['fa:16:3e:fd:1b:56 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6de0ab38-2086-43ab-a32f-827aebf2432d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2652a07f-2d55-4460-ab66-db7b9bf18992', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6fa690d01af4f20ba341e59a2be26bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5c3e938-ad8f-46df-8997-cca3dab53ce8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608c728a-2464-4481-93df-0a324345398f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=92e4e44e-f231-45c1-8284-1059a5c03a05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.529 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 92e4e44e-f231-45c1-8284-1059a5c03a05 in datapath 2652a07f-2d55-4460-ab66-db7b9bf18992 bound to our chassis#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.531 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2652a07f-2d55-4460-ab66-db7b9bf18992#033[00m
Oct  2 04:26:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:28Z|00280|binding|INFO|Setting lport 92e4e44e-f231-45c1-8284-1059a5c03a05 ovn-installed in OVS
Oct  2 04:26:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:28Z|00281|binding|INFO|Setting lport 92e4e44e-f231-45c1-8284-1059a5c03a05 up in Southbound
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.555 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[739a2a78-ac65-4c24-a6cb-ab04fdfa163d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:28 np0005465604 systemd-udevd[304813]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:26:28 np0005465604 NetworkManager[45129]: <info>  [1759393588.5918] device (tap92e4e44e-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:26:28 np0005465604 NetworkManager[45129]: <info>  [1759393588.5986] device (tap92e4e44e-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.605 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2306f12b-be3c-4ca9-8c3d-c6150fa0b3e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.612 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0db832-e15a-4a42-baa4-2849b8472217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.619 2 DEBUG nova.virt.libvirt.driver [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.620 2 DEBUG nova.virt.libvirt.driver [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.621 2 DEBUG nova.virt.libvirt.driver [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] No VIF found with MAC fa:16:3e:0f:b1:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.621 2 DEBUG nova.virt.libvirt.driver [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] No VIF found with MAC fa:16:3e:fd:1b:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.652 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbfe5e6-291b-43ac-a753-ae8719ac5c30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.655 2 DEBUG nova.virt.libvirt.guest [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <nova:name>tempest-AttachInterfacesV270Test-server-489114303</nova:name>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:26:28</nova:creationTime>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:26:28 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:    <nova:user uuid="758b34678d69489d8841d33743bd238a">tempest-AttachInterfacesV270Test-655113368-project-member</nova:user>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:    <nova:project uuid="e6fa690d01af4f20ba341e59a2be26bb">tempest-AttachInterfacesV270Test-655113368</nova:project>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:    <nova:port uuid="47449477-dd00-4329-9053-67f80b8caafb">
Oct  2 04:26:28 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:    <nova:port uuid="92e4e44e-f231-45c1-8284-1059a5c03a05">
Oct  2 04:26:28 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:26:28 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:26:28 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:26:28 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.676 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[123a317f-cbe3-4b76-8e66-89bb0296f0e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2652a07f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:48:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445718, 'reachable_time': 31419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304820, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.685 2 DEBUG oslo_concurrency.lockutils [None req-80d0a8f4-cfc4-4b98-b18d-0bb7c8cc315d 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "interface-6de0ab38-2086-43ab-a32f-827aebf2432d-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.702 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[034e6017-7996-4c23-8bd1-d8d74de8792d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2652a07f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445732, 'tstamp': 445732}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304821, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2652a07f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445736, 'tstamp': 445736}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304821, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.704 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2652a07f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.708 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2652a07f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.708 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.709 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2652a07f-20, col_values=(('external_ids', {'iface-id': 'c6bfc1fe-b805-4828-bbbe-2ced2b46c90a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:28.710 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.796 2 DEBUG nova.compute.manager [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.797 2 DEBUG oslo_concurrency.lockutils [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.797 2 DEBUG oslo_concurrency.lockutils [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.797 2 DEBUG oslo_concurrency.lockutils [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.797 2 DEBUG nova.compute.manager [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:26:28 np0005465604 nova_compute[260603]: 2025-10-02 08:26:28.798 2 WARNING nova.compute.manager [req-b8be1e34-db3a-419f-984e-2828b058b4f2 req-853b9c82-93ad-433e-9dfd-b005bf75dfaf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received unexpected event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:26:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 158 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.7 MiB/s wr, 231 op/s
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.071 2 DEBUG nova.network.neutron [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updated VIF entry in instance network info cache for port 92e4e44e-f231-45c1-8284-1059a5c03a05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.073 2 DEBUG nova.network.neutron [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updating instance_info_cache with network_info: [{"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.111 2 DEBUG oslo_concurrency.lockutils [req-38efd83b-5a57-4fcc-be2c-05b945d288d7 req-6b31901e-c8b3-4edf-997d-80918d68bbd1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6de0ab38-2086-43ab-a32f-827aebf2432d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.119 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.119 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.120 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.121 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.121 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.123 2 INFO nova.compute.manager [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Terminating instance#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.125 2 DEBUG nova.compute.manager [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:26:30 np0005465604 kernel: tap47449477-dd (unregistering): left promiscuous mode
Oct  2 04:26:30 np0005465604 NetworkManager[45129]: <info>  [1759393590.1707] device (tap47449477-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:30Z|00282|binding|INFO|Releasing lport 47449477-dd00-4329-9053-67f80b8caafb from this chassis (sb_readonly=0)
Oct  2 04:26:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:30Z|00283|binding|INFO|Setting lport 47449477-dd00-4329-9053-67f80b8caafb down in Southbound
Oct  2 04:26:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:30Z|00284|binding|INFO|Removing iface tap47449477-dd ovn-installed in OVS
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.193 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:b1:c0 10.100.0.5'], port_security=['fa:16:3e:0f:b1:c0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6de0ab38-2086-43ab-a32f-827aebf2432d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2652a07f-2d55-4460-ab66-db7b9bf18992', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6fa690d01af4f20ba341e59a2be26bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5c3e938-ad8f-46df-8997-cca3dab53ce8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608c728a-2464-4481-93df-0a324345398f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=47449477-dd00-4329-9053-67f80b8caafb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.194 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 47449477-dd00-4329-9053-67f80b8caafb in datapath 2652a07f-2d55-4460-ab66-db7b9bf18992 unbound from our chassis#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.195 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2652a07f-2d55-4460-ab66-db7b9bf18992#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 kernel: tap92e4e44e-f2 (unregistering): left promiscuous mode
Oct  2 04:26:30 np0005465604 NetworkManager[45129]: <info>  [1759393590.2219] device (tap92e4e44e-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.225 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fdabcb-14f9-44ae-b0bd-ce411e1cc413]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:30Z|00285|binding|INFO|Releasing lport 92e4e44e-f231-45c1-8284-1059a5c03a05 from this chassis (sb_readonly=0)
Oct  2 04:26:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:30Z|00286|binding|INFO|Setting lport 92e4e44e-f231-45c1-8284-1059a5c03a05 down in Southbound
Oct  2 04:26:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:30Z|00287|binding|INFO|Removing iface tap92e4e44e-f2 ovn-installed in OVS
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.245 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:1b:56 10.100.0.3'], port_security=['fa:16:3e:fd:1b:56 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6de0ab38-2086-43ab-a32f-827aebf2432d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2652a07f-2d55-4460-ab66-db7b9bf18992', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6fa690d01af4f20ba341e59a2be26bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5c3e938-ad8f-46df-8997-cca3dab53ce8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=608c728a-2464-4481-93df-0a324345398f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=92e4e44e-f231-45c1-8284-1059a5c03a05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.266 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[746c6932-0d29-47d7-91f4-c3e8cd97a420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.272 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[97c0a760-aa3e-4544-829b-4a440ee7394d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct  2 04:26:30 np0005465604 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000027.scope: Consumed 10.333s CPU time.
Oct  2 04:26:30 np0005465604 systemd-machined[214636]: Machine qemu-43-instance-00000027 terminated.
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.316 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb104e8f-c84f-4a34-8189-6eabbfc9778b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 NetworkManager[45129]: <info>  [1759393590.3523] manager: (tap47449477-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.351 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7b9280b-b1c4-49be-9bd1-e247d95b1375]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2652a07f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:48:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445718, 'reachable_time': 31419, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304836, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 NetworkManager[45129]: <info>  [1759393590.3615] manager: (tap92e4e44e-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.384 2 INFO nova.virt.libvirt.driver [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Instance destroyed successfully.#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.385 2 DEBUG nova.objects.instance [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lazy-loading 'resources' on Instance uuid 6de0ab38-2086-43ab-a32f-827aebf2432d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79d5e930-f3a9-49d5-b6b7-32bc51698a9b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2652a07f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445732, 'tstamp': 445732}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304843, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2652a07f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445736, 'tstamp': 445736}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304843, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.387 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2652a07f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.399 2 DEBUG nova.virt.libvirt.vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-489114303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-489114303',id=39,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6fa690d01af4f20ba341e59a2be26bb',ramdisk_id='',reservation_id='r-2okxr2ge',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_n
ame='tempest-AttachInterfacesV270Test-655113368',owner_user_name='tempest-AttachInterfacesV270Test-655113368-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:20Z,user_data=None,user_id='758b34678d69489d8841d33743bd238a',uuid=6de0ab38-2086-43ab-a32f-827aebf2432d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.399 2 DEBUG nova.network.os_vif_util [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converting VIF {"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.400 2 DEBUG nova.network.os_vif_util [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:b1:c0,bridge_name='br-int',has_traffic_filtering=True,id=47449477-dd00-4329-9053-67f80b8caafb,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47449477-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.401 2 DEBUG os_vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:b1:c0,bridge_name='br-int',has_traffic_filtering=True,id=47449477-dd00-4329-9053-67f80b8caafb,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47449477-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.402 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47449477-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.403 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2652a07f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.404 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.404 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2652a07f-20, col_values=(('external_ids', {'iface-id': 'c6bfc1fe-b805-4828-bbbe-2ced2b46c90a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.405 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.407 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 92e4e44e-f231-45c1-8284-1059a5c03a05 in datapath 2652a07f-2d55-4460-ab66-db7b9bf18992 unbound from our chassis#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.410 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2652a07f-2d55-4460-ab66-db7b9bf18992, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.411 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[33ccd363-43bb-4922-8e24-45973f194ccb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.413 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992 namespace which is not needed anymore#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.419 2 INFO os_vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:b1:c0,bridge_name='br-int',has_traffic_filtering=True,id=47449477-dd00-4329-9053-67f80b8caafb,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47449477-dd')#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.420 2 DEBUG nova.virt.libvirt.vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-489114303',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-489114303',id=39,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6fa690d01af4f20ba341e59a2be26bb',ramdisk_id='',reservation_id='r-2okxr2ge',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_n
ame='tempest-AttachInterfacesV270Test-655113368',owner_user_name='tempest-AttachInterfacesV270Test-655113368-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:20Z,user_data=None,user_id='758b34678d69489d8841d33743bd238a',uuid=6de0ab38-2086-43ab-a32f-827aebf2432d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.420 2 DEBUG nova.network.os_vif_util [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converting VIF {"id": "92e4e44e-f231-45c1-8284-1059a5c03a05", "address": "fa:16:3e:fd:1b:56", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92e4e44e-f2", "ovs_interfaceid": "92e4e44e-f231-45c1-8284-1059a5c03a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.421 2 DEBUG nova.network.os_vif_util [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.421 2 DEBUG os_vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92e4e44e-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.429 2 INFO os_vif [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1b:56,bridge_name='br-int',has_traffic_filtering=True,id=92e4e44e-f231-45c1-8284-1059a5c03a05,network=Network(2652a07f-2d55-4460-ab66-db7b9bf18992),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92e4e44e-f2')#033[00m
Oct  2 04:26:30 np0005465604 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [NOTICE]   (304749) : haproxy version is 2.8.14-c23fe91
Oct  2 04:26:30 np0005465604 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [NOTICE]   (304749) : path to executable is /usr/sbin/haproxy
Oct  2 04:26:30 np0005465604 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [WARNING]  (304749) : Exiting Master process...
Oct  2 04:26:30 np0005465604 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [WARNING]  (304749) : Exiting Master process...
Oct  2 04:26:30 np0005465604 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [ALERT]    (304749) : Current worker (304751) exited with code 143 (Terminated)
Oct  2 04:26:30 np0005465604 neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992[304726]: [WARNING]  (304749) : All workers exited. Exiting... (0)
Oct  2 04:26:30 np0005465604 systemd[1]: libpod-9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c.scope: Deactivated successfully.
Oct  2 04:26:30 np0005465604 conmon[304726]: conmon 9ccb146bf210acff7f98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c.scope/container/memory.events
Oct  2 04:26:30 np0005465604 podman[304892]: 2025-10-02 08:26:30.605141096 +0000 UTC m=+0.055414497 container died 9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:26:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c-userdata-shm.mount: Deactivated successfully.
Oct  2 04:26:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-75440ca422d118eb1b1497a7f95015ff02c6e17a2b7309fec1ecfd9cabce63eb-merged.mount: Deactivated successfully.
Oct  2 04:26:30 np0005465604 podman[304892]: 2025-10-02 08:26:30.659881591 +0000 UTC m=+0.110154972 container cleanup 9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:26:30 np0005465604 systemd[1]: libpod-conmon-9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c.scope: Deactivated successfully.
Oct  2 04:26:30 np0005465604 podman[304924]: 2025-10-02 08:26:30.740587424 +0000 UTC m=+0.050444237 container remove 9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.748 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[99b4e7b2-e775-4bf8-9f1a-69d1335bcb75]: (4, ('Thu Oct  2 08:26:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992 (9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c)\n9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c\nThu Oct  2 08:26:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992 (9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c)\n9ccb146bf210acff7f98f132b793c203e0b01054ec99615d19bb7de23550cc8c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.751 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29bb3cca-d03a-4dd7-8ffe-988257876c86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.753 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2652a07f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 kernel: tap2652a07f-20: left promiscuous mode
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.763 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[855d13be-0863-4db7-844a-d2c13d5cabdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.793 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[37620cb5-e1f3-42f1-ae17-bef39df8c997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.794 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d979548e-4bd9-4c0d-84f3-8d4abc46b517]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.810 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eb865244-4c7d-4920-877b-3bf98142d747]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445709, 'reachable_time': 26346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304939, 'error': None, 'target': 'ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 systemd[1]: run-netns-ovnmeta\x2d2652a07f\x2d2d55\x2d4460\x2dab66\x2ddb7b9bf18992.mount: Deactivated successfully.
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.816 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2652a07f-2d55-4460-ab66-db7b9bf18992 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:26:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:30.816 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f52df912-e0fe-4d7d-bb07-87edccb842ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.826 2 INFO nova.virt.libvirt.driver [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Deleting instance files /var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d_del#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.827 2 INFO nova.virt.libvirt.driver [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Deletion of /var/lib/nova/instances/6de0ab38-2086-43ab-a32f-827aebf2432d_del complete#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.871 2 INFO nova.compute.manager [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.872 2 DEBUG oslo.service.loopingcall [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.872 2 DEBUG nova.compute.manager [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.872 2 DEBUG nova.network.neutron [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:26:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 158 MiB data, 449 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.899 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.899 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.899 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.899 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.899 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.900 2 WARNING nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received unexpected event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.900 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-unplugged-47449477-dd00-4329-9053-67f80b8caafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.900 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.900 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.900 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.901 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-unplugged-47449477-dd00-4329-9053-67f80b8caafb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.901 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-unplugged-47449477-dd00-4329-9053-67f80b8caafb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.901 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.901 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.901 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.902 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.902 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.902 2 WARNING nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received unexpected event network-vif-plugged-47449477-dd00-4329-9053-67f80b8caafb for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.902 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-unplugged-92e4e44e-f231-45c1-8284-1059a5c03a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.902 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.903 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.903 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.903 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-unplugged-92e4e44e-f231-45c1-8284-1059a5c03a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.903 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-unplugged-92e4e44e-f231-45c1-8284-1059a5c03a05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.903 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.904 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.904 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.904 2 DEBUG oslo_concurrency.lockutils [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.904 2 DEBUG nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] No waiting events found dispatching network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:26:30 np0005465604 nova_compute[260603]: 2025-10-02 08:26:30.904 2 WARNING nova.compute.manager [req-919aab5b-4997-4673-a783-63209ff9962f req-83b941f1-ff6c-4a43-90df-4b38c2ec6b67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received unexpected event network-vif-plugged-92e4e44e-f231-45c1-8284-1059a5c03a05 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:26:31 np0005465604 nova_compute[260603]: 2025-10-02 08:26:31.023 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393576.0226574, 1bd45455-6745-4310-a5a6-f86dd4dcb4ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:26:31 np0005465604 nova_compute[260603]: 2025-10-02 08:26:31.024 2 INFO nova.compute.manager [-] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:26:31 np0005465604 nova_compute[260603]: 2025-10-02 08:26:31.060 2 DEBUG nova.compute.manager [None req-f6c07aca-55d4-4174-addc-14c344234497 - - - - - -] [instance: 1bd45455-6745-4310-a5a6-f86dd4dcb4ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:26:31 np0005465604 nova_compute[260603]: 2025-10-02 08:26:31.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:26:32 np0005465604 nova_compute[260603]: 2025-10-02 08:26:32.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:32 np0005465604 nova_compute[260603]: 2025-10-02 08:26:32.275 2 DEBUG nova.compute.manager [req-26ec0f05-f652-439b-a921-9e219d1a7081 req-f7d31239-ccfd-48a5-97bb-4f8fc4f45c05 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-deleted-92e4e44e-f231-45c1-8284-1059a5c03a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:32 np0005465604 nova_compute[260603]: 2025-10-02 08:26:32.275 2 INFO nova.compute.manager [req-26ec0f05-f652-439b-a921-9e219d1a7081 req-f7d31239-ccfd-48a5-97bb-4f8fc4f45c05 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Neutron deleted interface 92e4e44e-f231-45c1-8284-1059a5c03a05; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 04:26:32 np0005465604 nova_compute[260603]: 2025-10-02 08:26:32.276 2 DEBUG nova.network.neutron [req-26ec0f05-f652-439b-a921-9e219d1a7081 req-f7d31239-ccfd-48a5-97bb-4f8fc4f45c05 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updating instance_info_cache with network_info: [{"id": "47449477-dd00-4329-9053-67f80b8caafb", "address": "fa:16:3e:0f:b1:c0", "network": {"id": "2652a07f-2d55-4460-ab66-db7b9bf18992", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-186762379-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6fa690d01af4f20ba341e59a2be26bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47449477-dd", "ovs_interfaceid": "47449477-dd00-4329-9053-67f80b8caafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:26:32 np0005465604 nova_compute[260603]: 2025-10-02 08:26:32.300 2 DEBUG nova.compute.manager [req-26ec0f05-f652-439b-a921-9e219d1a7081 req-f7d31239-ccfd-48a5-97bb-4f8fc4f45c05 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Detach interface failed, port_id=92e4e44e-f231-45c1-8284-1059a5c03a05, reason: Instance 6de0ab38-2086-43ab-a32f-827aebf2432d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 04:26:32 np0005465604 nova_compute[260603]: 2025-10-02 08:26:32.755 2 DEBUG nova.network.neutron [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:26:32 np0005465604 nova_compute[260603]: 2025-10-02 08:26:32.771 2 INFO nova.compute.manager [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Took 1.90 seconds to deallocate network for instance.#033[00m
Oct  2 04:26:32 np0005465604 nova_compute[260603]: 2025-10-02 08:26:32.808 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:32 np0005465604 nova_compute[260603]: 2025-10-02 08:26:32.809 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 151 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct  2 04:26:32 np0005465604 nova_compute[260603]: 2025-10-02 08:26:32.915 2 DEBUG oslo_concurrency.processutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:26:32 np0005465604 nova_compute[260603]: 2025-10-02 08:26:32.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:26:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3151088308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:26:33 np0005465604 nova_compute[260603]: 2025-10-02 08:26:33.354 2 DEBUG oslo_concurrency.processutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:26:33 np0005465604 nova_compute[260603]: 2025-10-02 08:26:33.362 2 DEBUG nova.compute.provider_tree [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:26:33 np0005465604 nova_compute[260603]: 2025-10-02 08:26:33.388 2 DEBUG nova.scheduler.client.report [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:26:33 np0005465604 nova_compute[260603]: 2025-10-02 08:26:33.424 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:33 np0005465604 nova_compute[260603]: 2025-10-02 08:26:33.459 2 INFO nova.scheduler.client.report [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Deleted allocations for instance 6de0ab38-2086-43ab-a32f-827aebf2432d#033[00m
Oct  2 04:26:33 np0005465604 nova_compute[260603]: 2025-10-02 08:26:33.539 2 DEBUG oslo_concurrency.lockutils [None req-1dd39ddc-39dd-4a8c-a6c9-89d42316811e 758b34678d69489d8841d33743bd238a e6fa690d01af4f20ba341e59a2be26bb - - default default] Lock "6de0ab38-2086-43ab-a32f-827aebf2432d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:34 np0005465604 nova_compute[260603]: 2025-10-02 08:26:34.369 2 DEBUG nova.compute.manager [req-21912fc6-3441-464e-9a74-6c10d308d94d req-9503ea3f-ea6f-4905-8f3c-c638a2b9385c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Received event network-vif-deleted-47449477-dd00-4329-9053-67f80b8caafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:34.813 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:34.813 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:34.814 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct  2 04:26:35 np0005465604 nova_compute[260603]: 2025-10-02 08:26:35.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:36 np0005465604 nova_compute[260603]: 2025-10-02 08:26:36.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct  2 04:26:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:26:37 np0005465604 nova_compute[260603]: 2025-10-02 08:26:37.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:37 np0005465604 nova_compute[260603]: 2025-10-02 08:26:37.326 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:37 np0005465604 nova_compute[260603]: 2025-10-02 08:26:37.328 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:37 np0005465604 nova_compute[260603]: 2025-10-02 08:26:37.329 2 DEBUG nova.objects.instance [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:26:37 np0005465604 nova_compute[260603]: 2025-10-02 08:26:37.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:37 np0005465604 nova_compute[260603]: 2025-10-02 08:26:37.354 2 DEBUG nova.objects.instance [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:26:37 np0005465604 nova_compute[260603]: 2025-10-02 08:26:37.377 2 DEBUG nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:26:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:37Z|00288|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct  2 04:26:37 np0005465604 nova_compute[260603]: 2025-10-02 08:26:37.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:37 np0005465604 nova_compute[260603]: 2025-10-02 08:26:37.821 2 DEBUG nova.policy [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007571745580191443 of space, bias 1.0, pg target 0.2271523674057433 quantized to 32 (current 32)
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:26:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct  2 04:26:38 np0005465604 nova_compute[260603]: 2025-10-02 08:26:38.912 2 DEBUG nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully created port: 8e47820a-f777-4d29-8bce-45c6eb3b7b5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:26:38 np0005465604 podman[304964]: 2025-10-02 08:26:38.992699863 +0000 UTC m=+0.059262522 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct  2 04:26:39 np0005465604 podman[304963]: 2025-10-02 08:26:39.025568302 +0000 UTC m=+0.091014305 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Oct  2 04:26:40 np0005465604 nova_compute[260603]: 2025-10-02 08:26:40.239 2 DEBUG nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully updated port: 8e47820a-f777-4d29-8bce-45c6eb3b7b5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:26:40 np0005465604 nova_compute[260603]: 2025-10-02 08:26:40.287 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:26:40 np0005465604 nova_compute[260603]: 2025-10-02 08:26:40.288 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:26:40 np0005465604 nova_compute[260603]: 2025-10-02 08:26:40.289 2 DEBUG nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:26:40 np0005465604 nova_compute[260603]: 2025-10-02 08:26:40.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:40 np0005465604 nova_compute[260603]: 2025-10-02 08:26:40.530 2 WARNING nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it#033[00m
Oct  2 04:26:40 np0005465604 nova_compute[260603]: 2025-10-02 08:26:40.804 2 DEBUG nova.compute.manager [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-changed-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:40 np0005465604 nova_compute[260603]: 2025-10-02 08:26:40.805 2 DEBUG nova.compute.manager [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing instance network info cache due to event network-changed-8e47820a-f777-4d29-8bce-45c6eb3b7b5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:26:40 np0005465604 nova_compute[260603]: 2025-10-02 08:26:40.806 2 DEBUG oslo_concurrency.lockutils [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:26:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 105 KiB/s wr, 44 op/s
Oct  2 04:26:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:26:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6322 writes, 28K keys, 6322 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6322 writes, 6322 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1677 writes, 7489 keys, 1677 commit groups, 1.0 writes per commit group, ingest: 9.95 MB, 0.02 MB/s#012Interval WAL: 1677 writes, 1677 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    146.3      0.23              0.13        16    0.014       0      0       0.0       0.0#012  L6      1/0    8.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    194.3    158.1      0.70              0.39        15    0.047     70K   8382       0.0       0.0#012 Sum      1/0    8.25 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    146.3    155.2      0.93              0.51        31    0.030     70K   8382       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    206.1    210.7      0.20              0.11         8    0.025     22K   2604       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    194.3    158.1      0.70              0.39        15    0.047     70K   8382       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    149.6      0.23              0.13        15    0.015       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.033, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 0.9 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 15.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000171 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(995,15.02 MB,4.93995%) FilterBlock(32,201.36 KB,0.0646842%) IndexBlock(32,378.80 KB,0.121684%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 04:26:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.595 2 DEBUG nova.network.neutron [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.629 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.631 2 DEBUG oslo_concurrency.lockutils [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.632 2 DEBUG nova.network.neutron [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing network info cache for port 8e47820a-f777-4d29-8bce-45c6eb3b7b5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.635 2 DEBUG nova.virt.libvirt.vif [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.636 2 DEBUG nova.network.os_vif_util [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.636 2 DEBUG nova.network.os_vif_util [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.638 2 DEBUG os_vif [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e47820a-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e47820a-f7, col_values=(('external_ids', {'iface-id': '8e47820a-f777-4d29-8bce-45c6eb3b7b5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:86:1a', 'vm-uuid': '5d595e00-2287-4a6f-b347-bc277006a626'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:42 np0005465604 NetworkManager[45129]: <info>  [1759393602.6461] manager: (tap8e47820a-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.656 2 INFO os_vif [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7')#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.657 2 DEBUG nova.virt.libvirt.vif [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.657 2 DEBUG nova.network.os_vif_util [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.658 2 DEBUG nova.network.os_vif_util [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.661 2 DEBUG nova.virt.libvirt.guest [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] attach device xml: <interface type="ethernet">
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:65:86:1a"/>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <target dev="tap8e47820a-f7"/>
Oct  2 04:26:42 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:26:42 np0005465604 nova_compute[260603]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 04:26:42 np0005465604 kernel: tap8e47820a-f7: entered promiscuous mode
Oct  2 04:26:42 np0005465604 NetworkManager[45129]: <info>  [1759393602.6783] manager: (tap8e47820a-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:42Z|00289|binding|INFO|Claiming lport 8e47820a-f777-4d29-8bce-45c6eb3b7b5c for this chassis.
Oct  2 04:26:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:42Z|00290|binding|INFO|8e47820a-f777-4d29-8bce-45c6eb3b7b5c: Claiming fa:16:3e:65:86:1a 10.100.0.12
Oct  2 04:26:42 np0005465604 systemd-udevd[305013]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.723 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:86:1a 10.100.0.12'], port_security=['fa:16:3e:65:86:1a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8e47820a-f777-4d29-8bce-45c6eb3b7b5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.724 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8e47820a-f777-4d29-8bce-45c6eb3b7b5c in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.726 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct  2 04:26:42 np0005465604 NetworkManager[45129]: <info>  [1759393602.7340] device (tap8e47820a-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:26:42 np0005465604 NetworkManager[45129]: <info>  [1759393602.7353] device (tap8e47820a-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.741 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79f941bc-731d-4f5b-a566-e01649724424]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:26:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:42Z|00291|binding|INFO|Setting lport 8e47820a-f777-4d29-8bce-45c6eb3b7b5c ovn-installed in OVS
Oct  2 04:26:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:42Z|00292|binding|INFO|Setting lport 8e47820a-f777-4d29-8bce-45c6eb3b7b5c up in Southbound
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.776 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b4aefece-4846-4873-b6ac-c410c05a676a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.780 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5004a8-0ead-414e-9e57-4724c2db15d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.800 2 DEBUG nova.virt.libvirt.driver [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.801 2 DEBUG nova.virt.libvirt.driver [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.801 2 DEBUG nova.virt.libvirt.driver [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:cf:e3:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.802 2 DEBUG nova.virt.libvirt.driver [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:65:86:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.821 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2deae077-1992-487b-9397-7367b49f46fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.830 2 DEBUG nova.virt.libvirt.guest [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:26:42</nova:creationTime>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:26:42 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:    <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct  2 04:26:42 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:    <nova:port uuid="8e47820a-f777-4d29-8bce-45c6eb3b7b5c">
Oct  2 04:26:42 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:26:42 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:26:42 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:26:42 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.846 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3fcc46-fbac-467d-a7c4-bccb74a4308f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305022, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.862 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b48f9a-5fc5-4a84-8469-eb951f045853]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305023, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305023, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.863 2 DEBUG oslo_concurrency.lockutils [None req-45b1ba73-98c0-4849-954b-49a457a3664c 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.865 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:26:42 np0005465604 nova_compute[260603]: 2025-10-02 08:26:42.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.869 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.870 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.871 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:26:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:42.871 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:26:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 110 KiB/s wr, 44 op/s
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.187 2 DEBUG nova.compute.manager [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.187 2 DEBUG oslo_concurrency.lockutils [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.188 2 DEBUG oslo_concurrency.lockutils [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.188 2 DEBUG oslo_concurrency.lockutils [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.188 2 DEBUG nova.compute.manager [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.188 2 WARNING nova.compute.manager [req-3f77ce62-f6f1-4534-bef6-957111ced482 req-93729338-ae6e-4439-a78c-07785a38201f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c for instance with vm_state active and task_state None.
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.506 2 DEBUG nova.network.neutron [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updated VIF entry in instance network info cache for port 8e47820a-f777-4d29-8bce-45c6eb3b7b5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.506 2 DEBUG nova.network.neutron [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.547 2 DEBUG oslo_concurrency.lockutils [req-6304c3aa-525f-4bf1-9b8c-9735a470429e req-5279f92c-59d5-4f32-a363-1e7415e4a59a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.794 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.794 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:26:44 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:44Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:86:1a 10.100.0.12
Oct  2 04:26:44 np0005465604 nova_compute[260603]: 2025-10-02 08:26:44.795 2 DEBUG nova.objects.instance [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:26:44 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:44Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:86:1a 10.100.0.12
Oct  2 04:26:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 17 KiB/s wr, 8 op/s
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.076 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.077 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.099 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.187 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.187 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.203 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.203 2 INFO nova.compute.claims [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.334 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.381 2 DEBUG nova.objects.instance [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.384 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393590.382308, 6de0ab38-2086-43ab-a32f-827aebf2432d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.384 2 INFO nova.compute.manager [-] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] VM Stopped (Lifecycle Event)
Oct  2 04:26:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:26:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2871463019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.816 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.823 2 DEBUG nova.compute.provider_tree [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.848 2 DEBUG nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.856 2 DEBUG nova.compute.manager [None req-762a05dc-ef03-451b-86f0-e163b484d2ea - - - - - -] [instance: 6de0ab38-2086-43ab-a32f-827aebf2432d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.860 2 DEBUG nova.scheduler.client.report [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.888 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.889 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.964 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:26:45 np0005465604 nova_compute[260603]: 2025-10-02 08:26:45.965 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:26:46 np0005465604 podman[305046]: 2025-10-02 08:26:46.004549482 +0000 UTC m=+0.068639024 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.014 2 INFO nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.060 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.194 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.195 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.196 2 INFO nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Creating image(s)#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.215 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.236 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.254 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.257 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.291 2 DEBUG nova.policy [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.294 2 DEBUG nova.policy [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e785f94f63c44d0f842750666ed49360', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de07bf9b5bef4254bdcb4d7b856304f3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.349 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.350 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.351 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.351 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.374 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.378 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f0bfef78-36cf-4c57-9205-ad81a216a221_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.629 2 DEBUG nova.compute.manager [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.631 2 DEBUG oslo_concurrency.lockutils [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.631 2 DEBUG oslo_concurrency.lockutils [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.632 2 DEBUG oslo_concurrency.lockutils [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.633 2 DEBUG nova.compute.manager [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.633 2 WARNING nova.compute.manager [req-3faf20a9-7dd8-4277-b663-acbbc1355ec4 req-86cb6346-8567-41b1-b114-e956713583fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c for instance with vm_state active and task_state None.#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.656 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f0bfef78-36cf-4c57-9205-ad81a216a221_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.752 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] resizing rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.884 2 DEBUG nova.objects.instance [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lazy-loading 'migration_context' on Instance uuid f0bfef78-36cf-4c57-9205-ad81a216a221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:26:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 121 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.905 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.906 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Ensure instance console log exists: /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.907 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.907 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:46 np0005465604 nova_compute[260603]: 2025-10-02 08:26:46.908 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:26:47 np0005465604 nova_compute[260603]: 2025-10-02 08:26:47.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:47 np0005465604 nova_compute[260603]: 2025-10-02 08:26:47.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:47 np0005465604 nova_compute[260603]: 2025-10-02 08:26:47.212 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Successfully created port: 686b6e3b-80e4-43c6-a917-3751dddecd76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:26:47 np0005465604 nova_compute[260603]: 2025-10-02 08:26:47.578 2 DEBUG nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully created port: 4f39890f-a968-41d4-9cae-0b6948551923 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:26:47 np0005465604 nova_compute[260603]: 2025-10-02 08:26:47.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.117 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Successfully updated port: 686b6e3b-80e4-43c6-a917-3751dddecd76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.135 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.136 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquired lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.136 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.364 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.769 2 DEBUG nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully updated port: 4f39890f-a968-41d4-9cae-0b6948551923 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.794 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.794 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.794 2 DEBUG nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.851 2 DEBUG nova.compute.manager [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-changed-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.852 2 DEBUG nova.compute.manager [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Refreshing instance network info cache due to event network-changed-686b6e3b-80e4-43c6-a917-3751dddecd76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:26:48 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.852 2 DEBUG oslo_concurrency.lockutils [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:26:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:48.999 2 WARNING nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.000 2 WARNING nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it#033[00m
Oct  2 04:26:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:49.097 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:49.100 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.176 2 DEBUG nova.network.neutron [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updating instance_info_cache with network_info: [{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.198 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Releasing lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.199 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Instance network_info: |[{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.200 2 DEBUG oslo_concurrency.lockutils [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.201 2 DEBUG nova.network.neutron [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Refreshing network info cache for port 686b6e3b-80e4-43c6-a917-3751dddecd76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.207 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Start _get_guest_xml network_info=[{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.213 2 WARNING nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.220 2 DEBUG nova.virt.libvirt.host [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.221 2 DEBUG nova.virt.libvirt.host [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.236 2 DEBUG nova.virt.libvirt.host [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.237 2 DEBUG nova.virt.libvirt.host [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.238 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.239 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.240 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.240 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.241 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.241 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.242 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.242 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.243 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.243 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.244 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.244 2 DEBUG nova.virt.hardware [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.249 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.562 2 DEBUG nova.compute.manager [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-changed-4f39890f-a968-41d4-9cae-0b6948551923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.563 2 DEBUG nova.compute.manager [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing instance network info cache due to event network-changed-4f39890f-a968-41d4-9cae-0b6948551923. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.564 2 DEBUG oslo_concurrency.lockutils [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:26:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:26:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251950945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.740 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.765 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:26:49 np0005465604 nova_compute[260603]: 2025-10-02 08:26:49.770 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:26:50 np0005465604 podman[305292]: 2025-10-02 08:26:50.032439031 +0000 UTC m=+0.093253857 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:26:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:26:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/484372325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.202 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.206 2 DEBUG nova.virt.libvirt.vif [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1088051137',display_name='tempest-ServersTestJSON-server-1088051137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1088051137',id=40,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL26GjJ8yDVULcjBGSyQgq1XOXe3L4joW7EO0dcbypSf2PTplyBxh+0WCuN1+fy7bCfJLP+B7xaBUKjJV0y0oM9upBKq48cBxt+Uq1aMn9LSPatVD3E+4qRWUEz85mTqNQ==',key_name='tempest-keypair-680018376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de07bf9b5bef4254bdcb4d7b856304f3',ramdisk_id='',reservation_id='r-bs2htycx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-279922609',owner_user_name='tempest-ServersTestJSON-279922609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:26:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e785f94f63c44d0f842750666ed49360',uuid=f0bfef78-36cf-4c57-9205-ad81a216a221,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.207 2 DEBUG nova.network.os_vif_util [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converting VIF {"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.208 2 DEBUG nova.network.os_vif_util [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.210 2 DEBUG nova.objects.instance [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0bfef78-36cf-4c57-9205-ad81a216a221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.231 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  <uuid>f0bfef78-36cf-4c57-9205-ad81a216a221</uuid>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  <name>instance-00000028</name>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersTestJSON-server-1088051137</nova:name>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:26:49</nova:creationTime>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <nova:user uuid="e785f94f63c44d0f842750666ed49360">tempest-ServersTestJSON-279922609-project-member</nova:user>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <nova:project uuid="de07bf9b5bef4254bdcb4d7b856304f3">tempest-ServersTestJSON-279922609</nova:project>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <nova:port uuid="686b6e3b-80e4-43c6-a917-3751dddecd76">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <entry name="serial">f0bfef78-36cf-4c57-9205-ad81a216a221</entry>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <entry name="uuid">f0bfef78-36cf-4c57-9205-ad81a216a221</entry>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f0bfef78-36cf-4c57-9205-ad81a216a221_disk">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:0b:4a:c6"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <target dev="tap686b6e3b-80"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/console.log" append="off"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:26:50 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:26:50 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:26:50 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:26:50 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.232 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Preparing to wait for external event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.232 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.233 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.233 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.234 2 DEBUG nova.virt.libvirt.vif [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1088051137',display_name='tempest-ServersTestJSON-server-1088051137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1088051137',id=40,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL26GjJ8yDVULcjBGSyQgq1XOXe3L4joW7EO0dcbypSf2PTplyBxh+0WCuN1+fy7bCfJLP+B7xaBUKjJV0y0oM9upBKq48cBxt+Uq1aMn9LSPatVD3E+4qRWUEz85mTqNQ==',key_name='tempest-keypair-680018376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de07bf9b5bef4254bdcb4d7b856304f3',ramdisk_id='',reservation_id='r-bs2htycx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-279922609',owner_user_name='tempest-ServersTestJSON-279922609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:26:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e785f94f63c44d0f842750666ed49360',uuid=f0bfef78-36cf-4c57-9205-ad81a216a221,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.235 2 DEBUG nova.network.os_vif_util [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converting VIF {"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.236 2 DEBUG nova.network.os_vif_util [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.236 2 DEBUG os_vif [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.238 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.244 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap686b6e3b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap686b6e3b-80, col_values=(('external_ids', {'iface-id': '686b6e3b-80e4-43c6-a917-3751dddecd76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:4a:c6', 'vm-uuid': 'f0bfef78-36cf-4c57-9205-ad81a216a221'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:50 np0005465604 NetworkManager[45129]: <info>  [1759393610.2486] manager: (tap686b6e3b-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.257 2 INFO os_vif [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80')#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.322 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.322 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.322 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] No VIF found with MAC fa:16:3e:0b:4a:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.323 2 INFO nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Using config drive#033[00m
Oct  2 04:26:50 np0005465604 nova_compute[260603]: 2025-10-02 08:26:50.355 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:26:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:26:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:26:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.102 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:52 np0005465604 nova_compute[260603]: 2025-10-02 08:26:52.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:52 np0005465604 nova_compute[260603]: 2025-10-02 08:26:52.400 2 INFO nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Creating config drive at /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config#033[00m
Oct  2 04:26:52 np0005465604 nova_compute[260603]: 2025-10-02 08:26:52.408 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2m4op_1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:26:52 np0005465604 nova_compute[260603]: 2025-10-02 08:26:52.565 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2m4op_1" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:26:52 np0005465604 nova_compute[260603]: 2025-10-02 08:26:52.601 2 DEBUG nova.storage.rbd_utils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] rbd image f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:26:52 np0005465604 nova_compute[260603]: 2025-10-02 08:26:52.605 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:26:52 np0005465604 nova_compute[260603]: 2025-10-02 08:26:52.795 2 DEBUG oslo_concurrency.processutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config f0bfef78-36cf-4c57-9205-ad81a216a221_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:26:52 np0005465604 nova_compute[260603]: 2025-10-02 08:26:52.796 2 INFO nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Deleting local config drive /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221/disk.config because it was imported into RBD.#033[00m
Oct  2 04:26:52 np0005465604 kernel: tap686b6e3b-80: entered promiscuous mode
Oct  2 04:26:52 np0005465604 NetworkManager[45129]: <info>  [1759393612.8722] manager: (tap686b6e3b-80): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Oct  2 04:26:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:26:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:52Z|00293|binding|INFO|Claiming lport 686b6e3b-80e4-43c6-a917-3751dddecd76 for this chassis.
Oct  2 04:26:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:52Z|00294|binding|INFO|686b6e3b-80e4-43c6-a917-3751dddecd76: Claiming fa:16:3e:0b:4a:c6 10.100.0.13
Oct  2 04:26:52 np0005465604 nova_compute[260603]: 2025-10-02 08:26:52.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.916 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:4a:c6 10.100.0.13'], port_security=['fa:16:3e:0b:4a:c6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f0bfef78-36cf-4c57-9205-ad81a216a221', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7afa539b-e5b5-443e-ad15-53058c3f7566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de07bf9b5bef4254bdcb4d7b856304f3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1fbf4c4c-2cd7-49c6-ae31-7e9f79f25939', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b61ad9cd-415d-4ce0-9d5a-fc0dd5afdd65, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=686b6e3b-80e4-43c6-a917-3751dddecd76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:26:52 np0005465604 systemd-udevd[305387]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:26:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.919 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 686b6e3b-80e4-43c6-a917-3751dddecd76 in datapath 7afa539b-e5b5-443e-ad15-53058c3f7566 bound to our chassis#033[00m
Oct  2 04:26:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.922 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7afa539b-e5b5-443e-ad15-53058c3f7566#033[00m
Oct  2 04:26:52 np0005465604 NetworkManager[45129]: <info>  [1759393612.9374] device (tap686b6e3b-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:26:52 np0005465604 NetworkManager[45129]: <info>  [1759393612.9397] device (tap686b6e3b-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:26:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.941 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d774cf44-12e9-4389-aaf0-8c36f5312026]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.942 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7afa539b-e1 in ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:26:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.945 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7afa539b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:26:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.945 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f61705d5-d687-4fec-8cab-cf92de237f3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.948 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a2d3d4-e13b-48de-b0a5-120b69538a28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:52 np0005465604 systemd-machined[214636]: New machine qemu-44-instance-00000028.
Oct  2 04:26:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:52Z|00295|binding|INFO|Setting lport 686b6e3b-80e4-43c6-a917-3751dddecd76 ovn-installed in OVS
Oct  2 04:26:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:52Z|00296|binding|INFO|Setting lport 686b6e3b-80e4-43c6-a917-3751dddecd76 up in Southbound
Oct  2 04:26:52 np0005465604 nova_compute[260603]: 2025-10-02 08:26:52.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:52 np0005465604 nova_compute[260603]: 2025-10-02 08:26:52.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:52 np0005465604 systemd[1]: Started Virtual Machine qemu-44-instance-00000028.
Oct  2 04:26:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:52.969 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8d144427-e8d3-4213-b7c6-cc2060931b13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.003 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa337b9-6755-4ad5-9817-d9f15a9efc11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.050 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[82e2ae3c-8fa5-4a66-89f9-f6b11893ebd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.056 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[329fdfce-7168-4044-95fe-6256d7a60ec0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 NetworkManager[45129]: <info>  [1759393613.0594] manager: (tap7afa539b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.104 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f6992821-705b-4aa3-8cff-95b18baaee96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.108 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3e36bfd0-91d5-41f2-8c83-e1c1fa23e1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 NetworkManager[45129]: <info>  [1759393613.1405] device (tap7afa539b-e0): carrier: link connected
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.154 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c3958cc5-1f07-4821-9204-7a4e42922df7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.174 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[54c32e82-582a-4121-9c9d-000c0d438140]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7afa539b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:30:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449174, 'reachable_time': 42494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305422, 'error': None, 'target': 'ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.195 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[65c43c65-ec21-4af0-8c05-9c01f4d9a1c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefb:3090'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449174, 'tstamp': 449174}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305423, 'error': None, 'target': 'ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.214 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ed49ff86-213d-4f84-a271-4dfba7ee87e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7afa539b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:30:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449174, 'reachable_time': 42494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305424, 'error': None, 'target': 'ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.249 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0dbc8745-640f-4174-8a15-1ac74fd59cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.311 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3e2ad9-565a-4500-a7d2-cfe82b1d540c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.313 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7afa539b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.313 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.314 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7afa539b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:53 np0005465604 NetworkManager[45129]: <info>  [1759393613.3166] manager: (tap7afa539b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct  2 04:26:53 np0005465604 nova_compute[260603]: 2025-10-02 08:26:53.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:53 np0005465604 kernel: tap7afa539b-e0: entered promiscuous mode
Oct  2 04:26:53 np0005465604 nova_compute[260603]: 2025-10-02 08:26:53.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.320 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7afa539b-e0, col_values=(('external_ids', {'iface-id': '2fa8f568-a3b6-4390-a456-29e1aa7b3761'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:53 np0005465604 nova_compute[260603]: 2025-10-02 08:26:53.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:53Z|00297|binding|INFO|Releasing lport 2fa8f568-a3b6-4390-a456-29e1aa7b3761 from this chassis (sb_readonly=0)
Oct  2 04:26:53 np0005465604 nova_compute[260603]: 2025-10-02 08:26:53.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:53 np0005465604 nova_compute[260603]: 2025-10-02 08:26:53.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.357 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7afa539b-e5b5-443e-ad15-53058c3f7566.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7afa539b-e5b5-443e-ad15-53058c3f7566.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.358 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c53341d8-8f96-4984-960d-f44c5b60c34c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.359 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-7afa539b-e5b5-443e-ad15-53058c3f7566
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/7afa539b-e5b5-443e-ad15-53058c3f7566.pid.haproxy
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 7afa539b-e5b5-443e-ad15-53058c3f7566
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:26:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:53.360 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566', 'env', 'PROCESS_TAG=haproxy-7afa539b-e5b5-443e-ad15-53058c3f7566', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7afa539b-e5b5-443e-ad15-53058c3f7566.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:26:53 np0005465604 podman[305499]: 2025-10-02 08:26:53.82551856 +0000 UTC m=+0.072923822 container create 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:26:53 np0005465604 systemd[1]: Started libpod-conmon-06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733.scope.
Oct  2 04:26:53 np0005465604 podman[305499]: 2025-10-02 08:26:53.78920452 +0000 UTC m=+0.036609872 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:26:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:26:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd2358f3274ee85f1cdb0a8441439d8ff6fe7b64e90052b15f897c73d3c0f4d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:26:53 np0005465604 podman[305499]: 2025-10-02 08:26:53.929247785 +0000 UTC m=+0.176653057 container init 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:26:53 np0005465604 podman[305499]: 2025-10-02 08:26:53.939208767 +0000 UTC m=+0.186614029 container start 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:26:53 np0005465604 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [NOTICE]   (305518) : New worker (305520) forked
Oct  2 04:26:53 np0005465604 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [NOTICE]   (305518) : Loading success.
Oct  2 04:26:54 np0005465604 nova_compute[260603]: 2025-10-02 08:26:54.009 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393614.0086, f0bfef78-36cf-4c57-9205-ad81a216a221 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:26:54 np0005465604 nova_compute[260603]: 2025-10-02 08:26:54.010 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] VM Started (Lifecycle Event)#033[00m
Oct  2 04:26:54 np0005465604 nova_compute[260603]: 2025-10-02 08:26:54.035 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:26:54 np0005465604 nova_compute[260603]: 2025-10-02 08:26:54.040 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393614.009917, f0bfef78-36cf-4c57-9205-ad81a216a221 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:26:54 np0005465604 nova_compute[260603]: 2025-10-02 08:26:54.040 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:26:54 np0005465604 nova_compute[260603]: 2025-10-02 08:26:54.060 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:26:54 np0005465604 nova_compute[260603]: 2025-10-02 08:26:54.063 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:26:54 np0005465604 nova_compute[260603]: 2025-10-02 08:26:54.091 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:26:54 np0005465604 nova_compute[260603]: 2025-10-02 08:26:54.179 2 DEBUG nova.network.neutron [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updated VIF entry in instance network info cache for port 686b6e3b-80e4-43c6-a917-3751dddecd76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:26:54 np0005465604 nova_compute[260603]: 2025-10-02 08:26:54.180 2 DEBUG nova.network.neutron [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updating instance_info_cache with network_info: [{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:26:54 np0005465604 nova_compute[260603]: 2025-10-02 08:26:54.197 2 DEBUG oslo_concurrency.lockutils [req-64b5d454-fb38-4b79-8ab4-cdc46d59c8ff req-4e024da7-1533-4546-910e-451cb1365373 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:26:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  2 04:26:55 np0005465604 nova_compute[260603]: 2025-10-02 08:26:55.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:55 np0005465604 nova_compute[260603]: 2025-10-02 08:26:55.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  2 04:26:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:26:57 np0005465604 nova_compute[260603]: 2025-10-02 08:26:57.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.696 2 DEBUG nova.network.neutron [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.716 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.718 2 DEBUG oslo_concurrency.lockutils [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.719 2 DEBUG nova.network.neutron [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing network info cache for port 4f39890f-a968-41d4-9cae-0b6948551923 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.724 2 DEBUG nova.virt.libvirt.vif [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.724 2 DEBUG nova.network.os_vif_util [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.726 2 DEBUG nova.network.os_vif_util [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.727 2 DEBUG os_vif [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.728 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.729 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f39890f-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f39890f-a9, col_values=(('external_ids', {'iface-id': '4f39890f-a968-41d4-9cae-0b6948551923', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:40:80', 'vm-uuid': '5d595e00-2287-4a6f-b347-bc277006a626'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:58 np0005465604 NetworkManager[45129]: <info>  [1759393618.7389] manager: (tap4f39890f-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.752 2 INFO os_vif [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9')#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.754 2 DEBUG nova.virt.libvirt.vif [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.755 2 DEBUG nova.network.os_vif_util [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.756 2 DEBUG nova.network.os_vif_util [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.761 2 DEBUG nova.virt.libvirt.guest [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] attach device xml: <interface type="ethernet">
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:01:40:80"/>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <target dev="tap4f39890f-a9"/>
Oct  2 04:26:58 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:26:58 np0005465604 nova_compute[260603]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 04:26:58 np0005465604 kernel: tap4f39890f-a9: entered promiscuous mode
Oct  2 04:26:58 np0005465604 NetworkManager[45129]: <info>  [1759393618.7798] manager: (tap4f39890f-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Oct  2 04:26:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:58Z|00298|binding|INFO|Claiming lport 4f39890f-a968-41d4-9cae-0b6948551923 for this chassis.
Oct  2 04:26:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:58Z|00299|binding|INFO|4f39890f-a968-41d4-9cae-0b6948551923: Claiming fa:16:3e:01:40:80 10.100.0.4
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.809 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:40:80 10.100.0.4'], port_security=['fa:16:3e:01:40:80 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4f39890f-a968-41d4-9cae-0b6948551923) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:26:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.811 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4f39890f-a968-41d4-9cae-0b6948551923 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis#033[00m
Oct  2 04:26:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.814 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:26:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:58Z|00300|binding|INFO|Setting lport 4f39890f-a968-41d4-9cae-0b6948551923 ovn-installed in OVS
Oct  2 04:26:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:26:58Z|00301|binding|INFO|Setting lport 4f39890f-a968-41d4-9cae-0b6948551923 up in Southbound
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:58 np0005465604 systemd-udevd[305536]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:26:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.831 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f741c83b-6ead-40bf-8496-4c4b93f4e727]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:58 np0005465604 NetworkManager[45129]: <info>  [1759393618.8485] device (tap4f39890f-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:26:58 np0005465604 NetworkManager[45129]: <info>  [1759393618.8498] device (tap4f39890f-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:26:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.881 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d980aeba-f870-4547-bec4-95dfad6382d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.884 2 DEBUG nova.virt.libvirt.driver [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.884 2 DEBUG nova.virt.libvirt.driver [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.884 2 DEBUG nova.virt.libvirt.driver [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:cf:e3:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.885 2 DEBUG nova.virt.libvirt.driver [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:65:86:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.885 2 DEBUG nova.virt.libvirt.driver [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:01:40:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:26:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.885 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d81bb1a-8b69-4760-bd03-6dd91f2585a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.917 2 DEBUG nova.virt.libvirt.guest [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:26:58</nova:creationTime>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct  2 04:26:58 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    <nova:port uuid="8e47820a-f777-4d29-8bce-45c6eb3b7b5c">
Oct  2 04:26:58 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct  2 04:26:58 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:26:58 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:26:58 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:26:58 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:26:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.932 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c395b64e-b9b9-413f-a044-1c2690be40ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:58 np0005465604 nova_compute[260603]: 2025-10-02 08:26:58.941 2 DEBUG oslo_concurrency.lockutils [None req-d2a1597d-2981-4027-afb3-06329a5dd5f1 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 14.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.958 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9833755-4146-4ff4-a35a-8e28bfbe1001]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305543, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.984 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b64138fc-3c40-41f6-b73f-9848c64dac2b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305544, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305544, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:26:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:58.986 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:26:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:59.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:59.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:26:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:59.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:26:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:26:59.027 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.471 2 DEBUG nova.compute.manager [req-c3af9a15-3451-471f-9462-02a1932117c4 req-f0f23488-9e1d-4d00-af3e-cfbe9aef81d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.472 2 DEBUG oslo_concurrency.lockutils [req-c3af9a15-3451-471f-9462-02a1932117c4 req-f0f23488-9e1d-4d00-af3e-cfbe9aef81d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.472 2 DEBUG oslo_concurrency.lockutils [req-c3af9a15-3451-471f-9462-02a1932117c4 req-f0f23488-9e1d-4d00-af3e-cfbe9aef81d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.472 2 DEBUG oslo_concurrency.lockutils [req-c3af9a15-3451-471f-9462-02a1932117c4 req-f0f23488-9e1d-4d00-af3e-cfbe9aef81d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.473 2 DEBUG nova.compute.manager [req-c3af9a15-3451-471f-9462-02a1932117c4 req-f0f23488-9e1d-4d00-af3e-cfbe9aef81d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Processing event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.474 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.479 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393619.478812, f0bfef78-36cf-4c57-9205-ad81a216a221 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.479 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.484 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.490 2 INFO nova.virt.libvirt.driver [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Instance spawned successfully.#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.490 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.521 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.532 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.541 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.541 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.543 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.543 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.544 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.545 2 DEBUG nova.virt.libvirt.driver [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.599 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.649 2 INFO nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Took 13.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.650 2 DEBUG nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.743 2 INFO nova.compute.manager [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Took 14.58 seconds to build instance.#033[00m
Oct  2 04:26:59 np0005465604 nova_compute[260603]: 2025-10-02 08:26:59.766 2 DEBUG oslo_concurrency.lockutils [None req-b642c696-68d1-4132-bb2e-7417d2537acd e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:00Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:40:80 10.100.0.4
Oct  2 04:27:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:00Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:40:80 10.100.0.4
Oct  2 04:27:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 18 KiB/s wr, 10 op/s
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.781 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.781 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.782 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.782 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.783 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] No waiting events found dispatching network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.783 2 WARNING nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received unexpected event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.784 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.784 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.785 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.785 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.785 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.786 2 WARNING nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.786 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.787 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.787 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.788 2 DEBUG oslo_concurrency.lockutils [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.788 2 DEBUG nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.788 2 WARNING nova.compute.manager [req-02575b35-9acb-4051-ae1b-bccf574150c9 req-06c11531-cf23-4711-952f-caaa07075118 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-4f39890f-a968-41d4-9cae-0b6948551923 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.980 2 DEBUG nova.network.neutron [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updated VIF entry in instance network info cache for port 4f39890f-a968-41d4-9cae-0b6948551923. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:27:01 np0005465604 nova_compute[260603]: 2025-10-02 08:27:01.981 2 DEBUG nova.network.neutron [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:02 np0005465604 nova_compute[260603]: 2025-10-02 08:27:02.011 2 DEBUG oslo_concurrency.lockutils [req-8a3e6f64-81cf-4686-a0e7-ec67f4895148 req-7f41e77a-231a-4c5f-8019-56107510f6af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:02 np0005465604 nova_compute[260603]: 2025-10-02 08:27:02.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 21 KiB/s wr, 63 op/s
Oct  2 04:27:02 np0005465604 nova_compute[260603]: 2025-10-02 08:27:02.986 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:02 np0005465604 nova_compute[260603]: 2025-10-02 08:27:02.987 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:02 np0005465604 nova_compute[260603]: 2025-10-02 08:27:02.988 2 DEBUG nova.objects.instance [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:03 np0005465604 nova_compute[260603]: 2025-10-02 08:27:03.103 2 DEBUG nova.compute.manager [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-changed-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:03 np0005465604 nova_compute[260603]: 2025-10-02 08:27:03.104 2 DEBUG nova.compute.manager [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Refreshing instance network info cache due to event network-changed-686b6e3b-80e4-43c6-a917-3751dddecd76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:27:03 np0005465604 nova_compute[260603]: 2025-10-02 08:27:03.105 2 DEBUG oslo_concurrency.lockutils [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:03 np0005465604 nova_compute[260603]: 2025-10-02 08:27:03.105 2 DEBUG oslo_concurrency.lockutils [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:03 np0005465604 nova_compute[260603]: 2025-10-02 08:27:03.106 2 DEBUG nova.network.neutron [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Refreshing network info cache for port 686b6e3b-80e4-43c6-a917-3751dddecd76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:27:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:03Z|00302|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct  2 04:27:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:03Z|00303|binding|INFO|Releasing lport 2fa8f568-a3b6-4390-a456-29e1aa7b3761 from this chassis (sb_readonly=0)
Oct  2 04:27:03 np0005465604 nova_compute[260603]: 2025-10-02 08:27:03.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:03 np0005465604 nova_compute[260603]: 2025-10-02 08:27:03.561 2 DEBUG nova.objects.instance [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:03 np0005465604 nova_compute[260603]: 2025-10-02 08:27:03.577 2 DEBUG nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:27:03 np0005465604 nova_compute[260603]: 2025-10-02 08:27:03.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:03 np0005465604 nova_compute[260603]: 2025-10-02 08:27:03.987 2 DEBUG nova.policy [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:27:04 np0005465604 nova_compute[260603]: 2025-10-02 08:27:04.455 2 DEBUG nova.network.neutron [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updated VIF entry in instance network info cache for port 686b6e3b-80e4-43c6-a917-3751dddecd76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:27:04 np0005465604 nova_compute[260603]: 2025-10-02 08:27:04.456 2 DEBUG nova.network.neutron [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updating instance_info_cache with network_info: [{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:04 np0005465604 nova_compute[260603]: 2025-10-02 08:27:04.474 2 DEBUG oslo_concurrency.lockutils [req-5a8c1292-5686-4874-8deb-8e8d1927d2e8 req-5bcc12da-0f9a-42fe-ba3d-2f98b9b29553 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:04 np0005465604 nova_compute[260603]: 2025-10-02 08:27:04.769 2 DEBUG nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Successfully updated port: 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:27:04 np0005465604 nova_compute[260603]: 2025-10-02 08:27:04.788 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:04 np0005465604 nova_compute[260603]: 2025-10-02 08:27:04.789 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:04 np0005465604 nova_compute[260603]: 2025-10-02 08:27:04.789 2 DEBUG nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:27:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct  2 04:27:05 np0005465604 nova_compute[260603]: 2025-10-02 08:27:05.376 2 WARNING nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it#033[00m
Oct  2 04:27:05 np0005465604 nova_compute[260603]: 2025-10-02 08:27:05.377 2 WARNING nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it#033[00m
Oct  2 04:27:05 np0005465604 nova_compute[260603]: 2025-10-02 08:27:05.377 2 WARNING nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it#033[00m
Oct  2 04:27:05 np0005465604 nova_compute[260603]: 2025-10-02 08:27:05.774 2 DEBUG nova.compute.manager [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-changed-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:05 np0005465604 nova_compute[260603]: 2025-10-02 08:27:05.775 2 DEBUG nova.compute.manager [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing instance network info cache due to event network-changed-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:27:05 np0005465604 nova_compute[260603]: 2025-10-02 08:27:05.776 2 DEBUG oslo_concurrency.lockutils [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.7 KiB/s wr, 73 op/s
Oct  2 04:27:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:07 np0005465604 nova_compute[260603]: 2025-10-02 08:27:07.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:08 np0005465604 nova_compute[260603]: 2025-10-02 08:27:08.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.3 KiB/s wr, 73 op/s
Oct  2 04:27:10 np0005465604 podman[305546]: 2025-10-02 08:27:10.054325113 +0000 UTC m=+0.107610195 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Oct  2 04:27:10 np0005465604 podman[305545]: 2025-10-02 08:27:10.090716073 +0000 UTC m=+0.141747185 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.293 2 DEBUG nova.network.neutron [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.319 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.320 2 DEBUG oslo_concurrency.lockutils [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.320 2 DEBUG nova.network.neutron [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Refreshing network info cache for port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.324 2 DEBUG nova.virt.libvirt.vif [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.324 2 DEBUG nova.network.os_vif_util [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.325 2 DEBUG nova.network.os_vif_util [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.326 2 DEBUG os_vif [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19a09bbc-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap19a09bbc-9b, col_values=(('external_ids', {'iface-id': '19a09bbc-9b50-4f99-8dd4-0f7f9ab15851', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:04:1b', 'vm-uuid': '5d595e00-2287-4a6f-b347-bc277006a626'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:10 np0005465604 NetworkManager[45129]: <info>  [1759393630.3348] manager: (tap19a09bbc-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.342 2 INFO os_vif [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b')#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.343 2 DEBUG nova.virt.libvirt.vif [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.344 2 DEBUG nova.network.os_vif_util [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.345 2 DEBUG nova.network.os_vif_util [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.348 2 DEBUG nova.virt.libvirt.guest [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] attach device xml: <interface type="ethernet">
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:17:04:1b"/>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <target dev="tap19a09bbc-9b"/>
Oct  2 04:27:10 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:27:10 np0005465604 nova_compute[260603]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 04:27:10 np0005465604 NetworkManager[45129]: <info>  [1759393630.3711] manager: (tap19a09bbc-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Oct  2 04:27:10 np0005465604 kernel: tap19a09bbc-9b: entered promiscuous mode
Oct  2 04:27:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:10Z|00304|binding|INFO|Claiming lport 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 for this chassis.
Oct  2 04:27:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:10Z|00305|binding|INFO|19a09bbc-9b50-4f99-8dd4-0f7f9ab15851: Claiming fa:16:3e:17:04:1b 10.100.0.7
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.382 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:04:1b 10.100.0.7'], port_security=['fa:16:3e:17:04:1b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1146870412', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1146870412', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.383 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.384 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee8d0b9-1ac1-49fb-ad68-cf9bd939471c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:10Z|00306|binding|INFO|Setting lport 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 ovn-installed in OVS
Oct  2 04:27:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:10Z|00307|binding|INFO|Setting lport 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 up in Southbound
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:10 np0005465604 systemd-udevd[305701]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:cf:e3:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.453 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:65:86:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.453 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:01:40:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.453 2 DEBUG nova.virt.libvirt.driver [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:17:04:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.453 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b917c888-b1dc-4080-97e8-6178d9bf7516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:10 np0005465604 NetworkManager[45129]: <info>  [1759393630.4574] device (tap19a09bbc-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:27:10 np0005465604 NetworkManager[45129]: <info>  [1759393630.4593] device (tap19a09bbc-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.459 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa2f1b5-a76d-4f2f-9bc8-d44043012025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.474 2 DEBUG nova.virt.libvirt.guest [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:27:10</nova:creationTime>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct  2 04:27:10 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    <nova:port uuid="8e47820a-f777-4d29-8bce-45c6eb3b7b5c">
Oct  2 04:27:10 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct  2 04:27:10 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct  2 04:27:10 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:10 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:27:10 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:27:10 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.485 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0646fd50-bed9-433f-89f6-5b6f3298b54e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.501 2 DEBUG oslo_concurrency.lockutils [None req-e9621721-2f0b-4733-9c02-58ea28c9675e 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.504 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1770a3-7f81-4545-a8f6-7b1636d0161d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 10, 'rx_bytes': 1084, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 10, 'rx_bytes': 1084, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305706, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd5abb0-636a-482d-a79a-1e7d27341879]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305707, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305707, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.525 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.532 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.533 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.533 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:10.534 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:10Z|00308|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct  2 04:27:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:10Z|00309|binding|INFO|Releasing lport 2fa8f568-a3b6-4390-a456-29e1aa7b3761 from this chassis (sb_readonly=0)
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.674 2 DEBUG nova.compute.manager [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.675 2 DEBUG oslo_concurrency.lockutils [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.676 2 DEBUG oslo_concurrency.lockutils [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.676 2 DEBUG oslo_concurrency.lockutils [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.676 2 DEBUG nova.compute.manager [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:10 np0005465604 nova_compute[260603]: 2025-10-02 08:27:10.676 2 WARNING nova.compute.manager [req-94da06b0-9c3d-4b0a-bad0-e51d71e5aa6f req-8e60bb47-6d40-4a70-94d8-0d8b4a738e66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:27:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 167 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 64 op/s
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:27:11 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 68f127d3-9607-46b2-aca0-ee345f8979b4 does not exist
Oct  2 04:27:11 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 1d5de993-b81e-4ae6-bfb1-533e0e0c3448 does not exist
Oct  2 04:27:11 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5c2b2ef3-2119-48a9-85a6-42d7d785ec57 does not exist
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:27:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:11Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:4a:c6 10.100.0.13
Oct  2 04:27:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:11Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:4a:c6 10.100.0.13
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:27:11 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:27:11 np0005465604 podman[305880]: 2025-10-02 08:27:11.750499504 +0000 UTC m=+0.040083507 container create 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 04:27:11 np0005465604 systemd[1]: Started libpod-conmon-2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730.scope.
Oct  2 04:27:11 np0005465604 podman[305880]: 2025-10-02 08:27:11.731259016 +0000 UTC m=+0.020842999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:27:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:27:11 np0005465604 podman[305880]: 2025-10-02 08:27:11.84560919 +0000 UTC m=+0.135193163 container init 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:27:11 np0005465604 podman[305880]: 2025-10-02 08:27:11.854430604 +0000 UTC m=+0.144014607 container start 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 04:27:11 np0005465604 podman[305880]: 2025-10-02 08:27:11.857785478 +0000 UTC m=+0.147369481 container attach 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:27:11 np0005465604 laughing_feynman[305898]: 167 167
Oct  2 04:27:11 np0005465604 systemd[1]: libpod-2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730.scope: Deactivated successfully.
Oct  2 04:27:11 np0005465604 podman[305880]: 2025-10-02 08:27:11.860527303 +0000 UTC m=+0.150111336 container died 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 04:27:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bb2bf3a266e98d39668a965787ea65b0e99dea4053e9641ee0677f4ecc0a21f0-merged.mount: Deactivated successfully.
Oct  2 04:27:11 np0005465604 podman[305880]: 2025-10-02 08:27:11.901726614 +0000 UTC m=+0.191310577 container remove 2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:27:11 np0005465604 systemd[1]: libpod-conmon-2cf8fe6d6f668e50c87c79716134f671052c0e3324165e69ceb7309901b5f730.scope: Deactivated successfully.
Oct  2 04:27:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:12Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:04:1b 10.100.0.7
Oct  2 04:27:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:12Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:04:1b 10.100.0.7
Oct  2 04:27:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:12 np0005465604 podman[305920]: 2025-10-02 08:27:12.162229999 +0000 UTC m=+0.057098235 container create 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 04:27:12 np0005465604 systemd[1]: Started libpod-conmon-4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab.scope.
Oct  2 04:27:12 np0005465604 podman[305920]: 2025-10-02 08:27:12.143382873 +0000 UTC m=+0.038251099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:27:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:27:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:12 np0005465604 podman[305920]: 2025-10-02 08:27:12.268233413 +0000 UTC m=+0.163101659 container init 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:12 np0005465604 podman[305920]: 2025-10-02 08:27:12.282637051 +0000 UTC m=+0.177505257 container start 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:27:12 np0005465604 podman[305920]: 2025-10-02 08:27:12.286824161 +0000 UTC m=+0.181692407 container attach 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.383 2 DEBUG nova.network.neutron [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updated VIF entry in instance network info cache for port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.384 2 DEBUG nova.network.neutron [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.403 2 DEBUG oslo_concurrency.lockutils [req-7af216d3-8122-49c6-b22e-fe9778cc434e req-0ccd386c-d425-4de7-8ab3-4c2c07eb0a95 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.507 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-8e47820a-f777-4d29-8bce-45c6eb3b7b5c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.508 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-8e47820a-f777-4d29-8bce-45c6eb3b7b5c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.527 2 DEBUG nova.objects.instance [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.548 2 DEBUG nova.virt.libvirt.vif [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.548 2 DEBUG nova.network.os_vif_util [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.549 2 DEBUG nova.network.os_vif_util [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.554 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.557 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.560 2 DEBUG nova.virt.libvirt.driver [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Attempting to detach device tap8e47820a-f7 from instance 5d595e00-2287-4a6f-b347-bc277006a626 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.561 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:65:86:1a"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <target dev="tap8e47820a-f7"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.568 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.572 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface>not found in domain: <domain type='kvm' id='42'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <name>instance-00000026</name>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <uuid>5d595e00-2287-4a6f-b347-bc277006a626</uuid>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:27:10</nova:creationTime>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:port uuid="8e47820a-f777-4d29-8bce-45c6eb3b7b5c">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='serial'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='uuid'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk' index='2'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk.config' index='1'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:cf:e3:68'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target dev='tap0d888b1c-d2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:65:86:1a'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target dev='tap8e47820a-f7'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='net1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:01:40:80'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target dev='tap4f39890f-a9'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='net2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:17:04:1b'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target dev='tap19a09bbc-9b'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='net3'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c497,c676</label>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c497,c676</imagelabel>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.575 2 INFO nova.virt.libvirt.driver [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap8e47820a-f7 from instance 5d595e00-2287-4a6f-b347-bc277006a626 from the persistent domain config.#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.576 2 DEBUG nova.virt.libvirt.driver [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] (1/8): Attempting to detach device tap8e47820a-f7 with device alias net1 from instance 5d595e00-2287-4a6f-b347-bc277006a626 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.576 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:65:86:1a"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <target dev="tap8e47820a-f7"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 04:27:12 np0005465604 kernel: tap8e47820a-f7 (unregistering): left promiscuous mode
Oct  2 04:27:12 np0005465604 NetworkManager[45129]: <info>  [1759393632.6878] device (tap8e47820a-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:12Z|00310|binding|INFO|Releasing lport 8e47820a-f777-4d29-8bce-45c6eb3b7b5c from this chassis (sb_readonly=0)
Oct  2 04:27:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:12Z|00311|binding|INFO|Setting lport 8e47820a-f777-4d29-8bce-45c6eb3b7b5c down in Southbound
Oct  2 04:27:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:12Z|00312|binding|INFO|Removing iface tap8e47820a-f7 ovn-installed in OVS
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.703 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:86:1a 10.100.0.12'], port_security=['fa:16:3e:65:86:1a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8e47820a-f777-4d29-8bce-45c6eb3b7b5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.705 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8e47820a-f777-4d29-8bce-45c6eb3b7b5c in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.707 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.719 2 DEBUG nova.virt.libvirt.driver [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Start waiting for the detach event from libvirt for device tap8e47820a-f7 with device alias net1 for instance 5d595e00-2287-4a6f-b347-bc277006a626 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.720 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Received event <DeviceRemovedEvent: 1759393632.7178273, 5d595e00-2287-4a6f-b347-bc277006a626 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.721 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.723 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9337eb93-8cd4-461b-9c93-04a33242d87b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.726 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface>not found in domain: <domain type='kvm' id='42'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <name>instance-00000026</name>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <uuid>5d595e00-2287-4a6f-b347-bc277006a626</uuid>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:27:10</nova:creationTime>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:port uuid="8e47820a-f777-4d29-8bce-45c6eb3b7b5c">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='serial'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='uuid'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk' index='2'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk.config' index='1'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:cf:e3:68'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target dev='tap0d888b1c-d2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:01:40:80'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target dev='tap4f39890f-a9'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='net2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:17:04:1b'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target dev='tap19a09bbc-9b'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='net3'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c497,c676</label>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c497,c676</imagelabel>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.727 2 INFO nova.virt.libvirt.driver [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap8e47820a-f7 from instance 5d595e00-2287-4a6f-b347-bc277006a626 from the live domain config.#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.728 2 DEBUG nova.virt.libvirt.vif [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.728 2 DEBUG nova.network.os_vif_util [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.729 2 DEBUG nova.network.os_vif_util [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.730 2 DEBUG os_vif [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.732 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e47820a-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.745 2 INFO os_vif [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7')#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.746 2 DEBUG nova.virt.libvirt.guest [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:27:12</nova:creationTime>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct  2 04:27:12 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:12 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:27:12 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.753 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e13c0503-c026-44b2-ab2c-07909b64ad27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.764 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[38f7472f-46d3-498d-9e18-4148bf2656dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.787 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[35e3b900-d3ff-4029-a17f-4f3b2ab74f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.808 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0a692b3f-e5f2-4070-94b4-81c03ff624e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305950, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.825 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cbd76e-11e3-4981-a48b-c5ac4f0f63a0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305951, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305951, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.833 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.834 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.835 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:12.835 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.862 2 DEBUG nova.compute.manager [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.862 2 DEBUG oslo_concurrency.lockutils [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.863 2 DEBUG oslo_concurrency.lockutils [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.863 2 DEBUG oslo_concurrency.lockutils [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.863 2 DEBUG nova.compute.manager [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:12 np0005465604 nova_compute[260603]: 2025-10-02 08:27:12.863 2 WARNING nova.compute.manager [req-d996b36c-bc58-4810-a69a-16f229f197a0 req-80012b89-cbc6-49de-8af1-4c0d1e67d6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:27:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 189 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 115 op/s
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.072 2 DEBUG nova.compute.manager [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-unplugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.073 2 DEBUG oslo_concurrency.lockutils [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.074 2 DEBUG oslo_concurrency.lockutils [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.074 2 DEBUG oslo_concurrency.lockutils [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.074 2 DEBUG nova.compute.manager [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-unplugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.075 2 WARNING nova.compute.manager [req-360d9e02-ea17-48cd-b7a9-bdf4718c615f req-261bbace-efd9-4bfe-b668-007cd7f63a82 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-unplugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c for instance with vm_state active and task_state None.#033[00m
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.398 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.400 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.401 2 DEBUG nova.network.neutron [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:27:13 np0005465604 wonderful_sutherland[305936]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:27:13 np0005465604 wonderful_sutherland[305936]: --> relative data size: 1.0
Oct  2 04:27:13 np0005465604 wonderful_sutherland[305936]: --> All data devices are unavailable
Oct  2 04:27:13 np0005465604 systemd[1]: libpod-4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab.scope: Deactivated successfully.
Oct  2 04:27:13 np0005465604 systemd[1]: libpod-4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab.scope: Consumed 1.126s CPU time.
Oct  2 04:27:13 np0005465604 podman[305920]: 2025-10-02 08:27:13.499803027 +0000 UTC m=+1.394671243 container died 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:27:13 np0005465604 nova_compute[260603]: 2025-10-02 08:27:13.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:27:13 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5e3e358395a7b3324d31ac02211e0f268b47200fc4e5d9abd253fc49c792a640-merged.mount: Deactivated successfully.
Oct  2 04:27:13 np0005465604 podman[305920]: 2025-10-02 08:27:13.564871469 +0000 UTC m=+1.459739675 container remove 4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sutherland, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:27:13 np0005465604 systemd[1]: libpod-conmon-4901be1e5709791a1d379390f7151d080f7c14a4c883e7101134d972c3cbe1ab.scope: Deactivated successfully.
Oct  2 04:27:14 np0005465604 podman[306132]: 2025-10-02 08:27:14.281182509 +0000 UTC m=+0.055748704 container create 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:27:14 np0005465604 systemd[1]: Started libpod-conmon-6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3.scope.
Oct  2 04:27:14 np0005465604 podman[306132]: 2025-10-02 08:27:14.254779808 +0000 UTC m=+0.029346073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:27:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:27:14 np0005465604 podman[306132]: 2025-10-02 08:27:14.371213936 +0000 UTC m=+0.145780181 container init 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:27:14 np0005465604 podman[306132]: 2025-10-02 08:27:14.383623482 +0000 UTC m=+0.158189657 container start 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:27:14 np0005465604 podman[306132]: 2025-10-02 08:27:14.387256075 +0000 UTC m=+0.161822300 container attach 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:27:14 np0005465604 stupefied_tu[306148]: 167 167
Oct  2 04:27:14 np0005465604 systemd[1]: libpod-6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3.scope: Deactivated successfully.
Oct  2 04:27:14 np0005465604 podman[306132]: 2025-10-02 08:27:14.390557818 +0000 UTC m=+0.165124023 container died 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 04:27:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2f2f550e90c7659208f2d276d5dba70a20a150386bdfb131b24a227fda3fffde-merged.mount: Deactivated successfully.
Oct  2 04:27:14 np0005465604 podman[306132]: 2025-10-02 08:27:14.441283854 +0000 UTC m=+0.215850059 container remove 6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:27:14 np0005465604 systemd[1]: libpod-conmon-6148599b6c301dbc8ae78a43ed0aa7c8f7ad2da7dc9c314b380b0340b1dadda3.scope: Deactivated successfully.
Oct  2 04:27:14 np0005465604 podman[306171]: 2025-10-02 08:27:14.679864109 +0000 UTC m=+0.052369549 container create 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:27:14 np0005465604 nova_compute[260603]: 2025-10-02 08:27:14.701 2 INFO nova.network.neutron [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Port 8e47820a-f777-4d29-8bce-45c6eb3b7b5c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 04:27:14 np0005465604 systemd[1]: Started libpod-conmon-5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05.scope.
Oct  2 04:27:14 np0005465604 podman[306171]: 2025-10-02 08:27:14.653480739 +0000 UTC m=+0.025986219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:27:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:27:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0586f30b78ea50eb63eb30237511575ea707d4eb9f7f25b41538732fc5df0e25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0586f30b78ea50eb63eb30237511575ea707d4eb9f7f25b41538732fc5df0e25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0586f30b78ea50eb63eb30237511575ea707d4eb9f7f25b41538732fc5df0e25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0586f30b78ea50eb63eb30237511575ea707d4eb9f7f25b41538732fc5df0e25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:14 np0005465604 podman[306171]: 2025-10-02 08:27:14.789482094 +0000 UTC m=+0.161987574 container init 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct  2 04:27:14 np0005465604 podman[306171]: 2025-10-02 08:27:14.800995043 +0000 UTC m=+0.173500483 container start 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:27:14 np0005465604 podman[306171]: 2025-10-02 08:27:14.804356737 +0000 UTC m=+0.176862207 container attach 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:27:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 200 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 684 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.007 2 DEBUG nova.compute.manager [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-deleted-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.007 2 INFO nova.compute.manager [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Neutron deleted interface 8e47820a-f777-4d29-8bce-45c6eb3b7b5c; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.008 2 DEBUG nova.network.neutron [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.046 2 DEBUG nova.objects.instance [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'system_metadata' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.094 2 DEBUG nova.objects.instance [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'flavor' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.150 2 DEBUG nova.virt.libvirt.vif [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.151 2 DEBUG nova.network.os_vif_util [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.152 2 DEBUG nova.network.os_vif_util [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.158 2 DEBUG nova.virt.libvirt.guest [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.314 162357 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port f2354639-f248-4aa2-abea-ec2e3637feb1 with type ""#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.315 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:04:1b 10.100.0.7'], port_security=['fa:16:3e:17:04:1b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1146870412', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1146870412', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.316 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.318 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:27:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:15Z|00313|binding|INFO|Removing iface tap19a09bbc-9b ovn-installed in OVS
Oct  2 04:27:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:15Z|00314|binding|INFO|Removing lport 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 ovn-installed in OVS
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.332 2 DEBUG nova.compute.manager [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.333 2 DEBUG oslo_concurrency.lockutils [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.333 2 DEBUG oslo_concurrency.lockutils [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.334 2 DEBUG oslo_concurrency.lockutils [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.334 2 DEBUG nova.compute.manager [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.335 2 WARNING nova.compute.manager [req-70f57358-0b15-4593-931b-f2a6d87a1a1c req-3d2a493f-21e8-4e3f-93f3-c7e1d57f5e34 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-8e47820a-f777-4d29-8bce-45c6eb3b7b5c for instance with vm_state active and task_state None.#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.335 2 DEBUG nova.virt.libvirt.guest [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface>not found in domain: <domain type='kvm' id='42'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <name>instance-00000026</name>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <uuid>5d595e00-2287-4a6f-b347-bc277006a626</uuid>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:27:12</nova:creationTime>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:27:15 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='serial'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='uuid'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk' index='2'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk.config' index='1'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:cf:e3:68'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target dev='tap0d888b1c-d2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:01:40:80'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target dev='tap4f39890f-a9'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='net2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:17:04:1b'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target dev='tap19a09bbc-9b'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='net3'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c497,c676</label>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c497,c676</imagelabel>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:27:15 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:27:15 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.336 2 DEBUG nova.virt.libvirt.guest [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.335 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94dd0103-ba4b-4856-9384-e457be19715f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.351 2 DEBUG nova.virt.libvirt.guest [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:86:1a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8e47820a-f7"/></interface>not found in domain: <domain type='kvm' id='42'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <name>instance-00000026</name>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <uuid>5d595e00-2287-4a6f-b347-bc277006a626</uuid>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:27:12</nova:creationTime>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:27:15 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='serial'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='uuid'>5d595e00-2287-4a6f-b347-bc277006a626</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk' index='2'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/5d595e00-2287-4a6f-b347-bc277006a626_disk.config' index='1'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:cf:e3:68'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target dev='tap0d888b1c-d2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:01:40:80'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target dev='tap4f39890f-a9'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='net2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:17:04:1b'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target dev='tap19a09bbc-9b'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='net3'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626/console.log' append='off'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c497,c676</label>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c497,c676</imagelabel>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:27:15 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:27:15 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.351 2 WARNING nova.virt.libvirt.driver [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Detaching interface fa:16:3e:65:86:1a failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap8e47820a-f7' not found.#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.352 2 DEBUG nova.virt.libvirt.vif [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.353 2 DEBUG nova.network.os_vif_util [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "address": "fa:16:3e:65:86:1a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e47820a-f7", "ovs_interfaceid": "8e47820a-f777-4d29-8bce-45c6eb3b7b5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.354 2 DEBUG nova.network.os_vif_util [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.354 2 DEBUG os_vif [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e47820a-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.360 2 INFO os_vif [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:86:1a,bridge_name='br-int',has_traffic_filtering=True,id=8e47820a-f777-4d29-8bce-45c6eb3b7b5c,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e47820a-f7')#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.361 2 DEBUG nova.virt.libvirt.guest [req-67e23396-3ce9-46aa-8f81-5a137146bfd3 req-71335b54-5780-490a-aff2-7bf6f734ac1b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:name>tempest-AttachInterfacesTestJSON-server-60577478</nova:name>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:27:15</nova:creationTime>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:port uuid="0d888b1c-d237-4db9-9ca5-4796f8c1349d">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:port uuid="4f39890f-a968-41d4-9cae-0b6948551923">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    <nova:port uuid="19a09bbc-9b50-4f99-8dd4-0f7f9ab15851">
Oct  2 04:27:15 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:27:15 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:27:15 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:27:15 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.388 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e1931045-b415-4d93-86f5-33b58046e9fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.391 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[559fb3c4-ad8f-4ee4-9b8c-c213369c07fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.429 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4e10f005-df76-4b35-b3d0-bd0f5a2ebef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.450 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0728ef6b-39e8-42c7-b6c5-99d8bc45bb58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 1084, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306197, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.473 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[906043d6-beba-4f18-96c3-c1f536dde78e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306199, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306199, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.475 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.480 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.480 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.481 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.482 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.543 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.544 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.544 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.545 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.545 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.547 2 INFO nova.compute.manager [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Terminating instance#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.549 2 DEBUG nova.compute.manager [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]: {
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:    "0": [
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:        {
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "devices": [
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "/dev/loop3"
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            ],
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_name": "ceph_lv0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_size": "21470642176",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "name": "ceph_lv0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "tags": {
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.cluster_name": "ceph",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.crush_device_class": "",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.encrypted": "0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.osd_id": "0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.type": "block",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.vdo": "0"
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            },
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "type": "block",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "vg_name": "ceph_vg0"
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:        }
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:    ],
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:    "1": [
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:        {
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "devices": [
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "/dev/loop4"
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            ],
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_name": "ceph_lv1",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_size": "21470642176",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "name": "ceph_lv1",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "tags": {
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.cluster_name": "ceph",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.crush_device_class": "",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.encrypted": "0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.osd_id": "1",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.type": "block",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.vdo": "0"
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            },
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "type": "block",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "vg_name": "ceph_vg1"
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:        }
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:    ],
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:    "2": [
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:        {
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "devices": [
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "/dev/loop5"
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            ],
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_name": "ceph_lv2",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_size": "21470642176",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "name": "ceph_lv2",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "tags": {
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.cluster_name": "ceph",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.crush_device_class": "",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.encrypted": "0",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.osd_id": "2",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.type": "block",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:                "ceph.vdo": "0"
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            },
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "type": "block",
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:            "vg_name": "ceph_vg2"
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:        }
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]:    ]
Oct  2 04:27:15 np0005465604 awesome_goldberg[306187]: }
Oct  2 04:27:15 np0005465604 kernel: tap0d888b1c-d2 (unregistering): left promiscuous mode
Oct  2 04:27:15 np0005465604 NetworkManager[45129]: <info>  [1759393635.6178] device (tap0d888b1c-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:15Z|00315|binding|INFO|Releasing lport 0d888b1c-d237-4db9-9ca5-4796f8c1349d from this chassis (sb_readonly=0)
Oct  2 04:27:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:15Z|00316|binding|INFO|Setting lport 0d888b1c-d237-4db9-9ca5-4796f8c1349d down in Southbound
Oct  2 04:27:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:15Z|00317|binding|INFO|Removing iface tap0d888b1c-d2 ovn-installed in OVS
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.641 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:e3:68 10.100.0.10'], port_security=['fa:16:3e:cf:e3:68 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5197f4ad-c335-4607-928c-2b7946565ac7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0d888b1c-d237-4db9-9ca5-4796f8c1349d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.642 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0d888b1c-d237-4db9-9ca5-4796f8c1349d in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.644 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:27:15 np0005465604 systemd[1]: libpod-5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05.scope: Deactivated successfully.
Oct  2 04:27:15 np0005465604 podman[306171]: 2025-10-02 08:27:15.651512304 +0000 UTC m=+1.024017744 container died 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 04:27:15 np0005465604 kernel: tap4f39890f-a9 (unregistering): left promiscuous mode
Oct  2 04:27:15 np0005465604 NetworkManager[45129]: <info>  [1759393635.6732] device (tap4f39890f-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.689 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7590862-c59a-4bf8-b4e7-3eb54e73fd3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 kernel: tap19a09bbc-9b (unregistering): left promiscuous mode
Oct  2 04:27:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0586f30b78ea50eb63eb30237511575ea707d4eb9f7f25b41538732fc5df0e25-merged.mount: Deactivated successfully.
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:15Z|00318|binding|INFO|Releasing lport 4f39890f-a968-41d4-9cae-0b6948551923 from this chassis (sb_readonly=0)
Oct  2 04:27:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:15Z|00319|binding|INFO|Setting lport 4f39890f-a968-41d4-9cae-0b6948551923 down in Southbound
Oct  2 04:27:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:15Z|00320|binding|INFO|Removing iface tap4f39890f-a9 ovn-installed in OVS
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 NetworkManager[45129]: <info>  [1759393635.7066] device (tap19a09bbc-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.711 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:40:80 10.100.0.4'], port_security=['fa:16:3e:01:40:80 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '5d595e00-2287-4a6f-b347-bc277006a626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4f39890f-a968-41d4-9cae-0b6948551923) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 podman[306171]: 2025-10-02 08:27:15.737736203 +0000 UTC m=+1.110241643 container remove 5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_goldberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.741 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8fd6c0-e227-4fe6-9541-8c3dd718045c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.744 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[25c969b4-40e9-41d8-978a-f99fee4a63c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 systemd[1]: libpod-conmon-5f78774757a53e644f6fe9edc117dd18e13af1f448012bd0767168b503410e05.scope: Deactivated successfully.
Oct  2 04:27:15 np0005465604 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Deactivated successfully.
Oct  2 04:27:15 np0005465604 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Consumed 14.789s CPU time.
Oct  2 04:27:15 np0005465604 systemd-machined[214636]: Machine qemu-42-instance-00000026 terminated.
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.779 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ef81f1-7d54-4f13-8c95-f491f81f14fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.801 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10024f05-89ef-4e77-b5d1-504d5421346f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 16, 'rx_bytes': 1084, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 16, 'rx_bytes': 1084, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445002, 'reachable_time': 24920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306235, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.820 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[102b78a2-b7c4-4919-b63e-4e0b86853fbe]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445016, 'tstamp': 445016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306241, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445020, 'tstamp': 445020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306241, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.821 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.838 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.838 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.839 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.840 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.842 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4f39890f-a968-41d4-9cae-0b6948551923 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.844 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa1bff6d-19fb-4792-a261-4da1165d95a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.845 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a97a4d-1f63-4ec7-b00a-4cf39a881064]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:15.845 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 namespace which is not needed anymore#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:15 np0005465604 nova_compute[260603]: 2025-10-02 08:27:15.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.041 2 INFO nova.virt.libvirt.driver [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Instance destroyed successfully.#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.042 2 DEBUG nova.objects.instance [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'resources' on Instance uuid 5d595e00-2287-4a6f-b347-bc277006a626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:16 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [NOTICE]   (304129) : haproxy version is 2.8.14-c23fe91
Oct  2 04:27:16 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [NOTICE]   (304129) : path to executable is /usr/sbin/haproxy
Oct  2 04:27:16 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [WARNING]  (304129) : Exiting Master process...
Oct  2 04:27:16 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [WARNING]  (304129) : Exiting Master process...
Oct  2 04:27:16 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [ALERT]    (304129) : Current worker (304131) exited with code 143 (Terminated)
Oct  2 04:27:16 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[304125]: [WARNING]  (304129) : All workers exited. Exiting... (0)
Oct  2 04:27:16 np0005465604 systemd[1]: libpod-1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971.scope: Deactivated successfully.
Oct  2 04:27:16 np0005465604 podman[306313]: 2025-10-02 08:27:16.058215544 +0000 UTC m=+0.098406920 container died 1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.065 2 DEBUG nova.virt.libvirt.vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.066 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.067 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:e3:68,bridge_name='br-int',has_traffic_filtering=True,id=0d888b1c-d237-4db9-9ca5-4796f8c1349d,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d888b1c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.067 2 DEBUG os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:e3:68,bridge_name='br-int',has_traffic_filtering=True,id=0d888b1c-d237-4db9-9ca5-4796f8c1349d,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d888b1c-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.069 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d888b1c-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.081 2 INFO os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:e3:68,bridge_name='br-int',has_traffic_filtering=True,id=0d888b1c-d237-4db9-9ca5-4796f8c1349d,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d888b1c-d2')#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.081 2 DEBUG nova.virt.libvirt.vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.082 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.082 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.083 2 DEBUG os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:27:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971-userdata-shm.mount: Deactivated successfully.
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.088 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f39890f-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e9055c4e28162e79657260aa471df2a75d9f8b592688bd242ac2c8aac34526e3-merged.mount: Deactivated successfully.
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.098 2 INFO os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:40:80,bridge_name='br-int',has_traffic_filtering=True,id=4f39890f-a968-41d4-9cae-0b6948551923,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f39890f-a9')#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.099 2 DEBUG nova.virt.libvirt.vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-60577478',display_name='tempest-AttachInterfacesTestJSON-server-60577478',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-60577478',id=38,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK3n8ZQflxah9PGvMURniYJY27RMSYAh7IToIiTuXNL4FdRzkG8Fjamu4JSv+yQagK4ReOs06QM35NVSK2qg0crFnOgp17KmYOR4Qg186y3N3gzuObh7hNH8eUz0wrA1w==',key_name='tempest-keypair-1658348775',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-r8jyxbh6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=5d595e00-2287-4a6f-b347-bc277006a626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.099 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.100 2 DEBUG nova.network.os_vif_util [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.100 2 DEBUG os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19a09bbc-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:27:16 np0005465604 podman[306313]: 2025-10-02 08:27:16.105225764 +0000 UTC m=+0.145417140 container cleanup 1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.111 2 INFO os_vif [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:04:1b,bridge_name='br-int',has_traffic_filtering=True,id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap19a09bbc-9b')#033[00m
Oct  2 04:27:16 np0005465604 systemd[1]: libpod-conmon-1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971.scope: Deactivated successfully.
Oct  2 04:27:16 np0005465604 podman[306371]: 2025-10-02 08:27:16.121731776 +0000 UTC m=+0.067113645 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:27:16 np0005465604 podman[306457]: 2025-10-02 08:27:16.203987423 +0000 UTC m=+0.047727784 container remove 1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:27:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.212 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ac23990d-637e-432e-a911-f78d11e4d572]: (4, ('Thu Oct  2 08:27:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 (1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971)\n1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971\nThu Oct  2 08:27:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 (1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971)\n1c2e7e352356ea65cbf8cd84c48238ac58b65609ea2f1752507d8f02ae960971\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.213 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[43986a93-6590-497a-a26a-12f2c288e8fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.214 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:16 np0005465604 kernel: tapfa1bff6d-10: left promiscuous mode
Oct  2 04:27:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.223 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fbad1866-8956-4a82-802c-6e24a58bdc53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.250 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c611fa85-0d22-4d04-926a-8ef700a4f855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.251 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc2f16c-fca8-475d-b80a-ca775c3842bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.266 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a30f2bb5-f23d-41bc-b5f8-3fb38c314b08]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444990, 'reachable_time': 25341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306481, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:16 np0005465604 systemd[1]: run-netns-ovnmeta\x2dfa1bff6d\x2d19fb\x2d4792\x2da261\x2d4da1165d95a1.mount: Deactivated successfully.
Oct  2 04:27:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.272 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:27:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:16.272 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[511a6b4e-baf7-4f95-ac2f-094d971da4e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.470 2 INFO nova.virt.libvirt.driver [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Deleting instance files /var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626_del#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.471 2 INFO nova.virt.libvirt.driver [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Deletion of /var/lib/nova/instances/5d595e00-2287-4a6f-b347-bc277006a626_del complete#033[00m
Oct  2 04:27:16 np0005465604 podman[306515]: 2025-10-02 08:27:16.507827765 +0000 UTC m=+0.051848952 container create 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.525 2 INFO nova.compute.manager [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.526 2 DEBUG oslo.service.loopingcall [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.526 2 DEBUG nova.compute.manager [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.526 2 DEBUG nova.network.neutron [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.542 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 04:27:16 np0005465604 systemd[1]: Started libpod-conmon-52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f.scope.
Oct  2 04:27:16 np0005465604 podman[306515]: 2025-10-02 08:27:16.482045804 +0000 UTC m=+0.026067041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:27:16 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:27:16 np0005465604 podman[306515]: 2025-10-02 08:27:16.596733078 +0000 UTC m=+0.140754285 container init 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 04:27:16 np0005465604 podman[306515]: 2025-10-02 08:27:16.602690053 +0000 UTC m=+0.146711240 container start 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:27:16 np0005465604 podman[306515]: 2025-10-02 08:27:16.605954634 +0000 UTC m=+0.149975881 container attach 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:27:16 np0005465604 busy_vaughan[306531]: 167 167
Oct  2 04:27:16 np0005465604 systemd[1]: libpod-52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f.scope: Deactivated successfully.
Oct  2 04:27:16 np0005465604 conmon[306531]: conmon 52627bd1a88639912cf7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f.scope/container/memory.events
Oct  2 04:27:16 np0005465604 podman[306515]: 2025-10-02 08:27:16.611293411 +0000 UTC m=+0.155314608 container died 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 04:27:16 np0005465604 podman[306515]: 2025-10-02 08:27:16.661016595 +0000 UTC m=+0.205037822 container remove 52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:27:16 np0005465604 systemd[1]: libpod-conmon-52627bd1a88639912cf7e2e686547f175c5baaa9d71a1212f119d41e6ce20e5f.scope: Deactivated successfully.
Oct  2 04:27:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f28f875501abe61ab9763870febbf011fe23c69cf08a422245d15c0a7efb7bde-merged.mount: Deactivated successfully.
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.865 2 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.867 2 DEBUG nova.network.neutron [-] Unable to show port 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666#033[00m
Oct  2 04:27:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 200 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 04:27:16 np0005465604 podman[306556]: 2025-10-02 08:27:16.913590005 +0000 UTC m=+0.051789600 container create d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.919 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.920 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.920 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:27:16 np0005465604 nova_compute[260603]: 2025-10-02 08:27:16.921 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f0bfef78-36cf-4c57-9205-ad81a216a221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:16 np0005465604 systemd[1]: Started libpod-conmon-d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5.scope.
Oct  2 04:27:16 np0005465604 podman[306556]: 2025-10-02 08:27:16.892155799 +0000 UTC m=+0.030355464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:27:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:27:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7371a09a479f14adbc17ff523ba05cf38f97050b7f06d0b9fa8510e83569138/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7371a09a479f14adbc17ff523ba05cf38f97050b7f06d0b9fa8510e83569138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7371a09a479f14adbc17ff523ba05cf38f97050b7f06d0b9fa8510e83569138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7371a09a479f14adbc17ff523ba05cf38f97050b7f06d0b9fa8510e83569138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:17 np0005465604 podman[306556]: 2025-10-02 08:27:17.028040762 +0000 UTC m=+0.166240347 container init d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:27:17 np0005465604 podman[306556]: 2025-10-02 08:27:17.037729683 +0000 UTC m=+0.175929288 container start d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 04:27:17 np0005465604 podman[306556]: 2025-10-02 08:27:17.04118743 +0000 UTC m=+0.179387035 container attach d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 04:27:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.123 2 DEBUG nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-unplugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.123 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.123 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.124 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.124 2 DEBUG nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-unplugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.125 2 DEBUG nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-unplugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.125 2 DEBUG nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.125 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d595e00-2287-4a6f-b347-bc277006a626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.126 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.126 2 DEBUG oslo_concurrency.lockutils [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.126 2 DEBUG nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] No waiting events found dispatching network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.126 2 WARNING nova.compute.manager [req-d686bc47-4c69-4507-b7a4-365d45f3e343 req-859b8a44-bacf-4327-ae6e-9d32a2289944 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received unexpected event network-vif-plugged-0d888b1c-d237-4db9-9ca5-4796f8c1349d for instance with vm_state active and task_state deleting.
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.255 2 DEBUG nova.compute.manager [req-74860bba-8480-45ce-bf0e-eb95e5c747ae req-6bbde909-fc16-4251-83ba-04c55a6c9c3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-deleted-19a09bbc-9b50-4f99-8dd4-0f7f9ab15851 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.255 2 INFO nova.compute.manager [req-74860bba-8480-45ce-bf0e-eb95e5c747ae req-6bbde909-fc16-4251-83ba-04c55a6c9c3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Neutron deleted interface 19a09bbc-9b50-4f99-8dd4-0f7f9ab15851; detaching it from the instance and deleting it from the info cache
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.255 2 DEBUG nova.network.neutron [req-74860bba-8480-45ce-bf0e-eb95e5c747ae req-6bbde909-fc16-4251-83ba-04c55a6c9c3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.257 2 DEBUG nova.network.neutron [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [{"id": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "address": "fa:16:3e:cf:e3:68", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d888b1c-d2", "ovs_interfaceid": "0d888b1c-d237-4db9-9ca5-4796f8c1349d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f39890f-a968-41d4-9cae-0b6948551923", "address": "fa:16:3e:01:40:80", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f39890f-a9", "ovs_interfaceid": "4f39890f-a968-41d4-9cae-0b6948551923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "address": "fa:16:3e:17:04:1b", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19a09bbc-9b", "ovs_interfaceid": "19a09bbc-9b50-4f99-8dd4-0f7f9ab15851", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.293 2 DEBUG nova.compute.manager [req-74860bba-8480-45ce-bf0e-eb95e5c747ae req-6bbde909-fc16-4251-83ba-04c55a6c9c3c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Detach interface failed, port_id=19a09bbc-9b50-4f99-8dd4-0f7f9ab15851, reason: Instance 5d595e00-2287-4a6f-b347-bc277006a626 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.299 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-5d595e00-2287-4a6f-b347-bc277006a626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.460 2 DEBUG oslo_concurrency.lockutils [None req-792b7e95-5033-4b7a-86d2-3cecc88a5717 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-5d595e00-2287-4a6f-b347-bc277006a626-8e47820a-f777-4d29-8bce-45c6eb3b7b5c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.961 2 DEBUG nova.network.neutron [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:27:17 np0005465604 nova_compute[260603]: 2025-10-02 08:27:17.981 2 INFO nova.compute.manager [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Took 1.45 seconds to deallocate network for instance.
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]: {
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "osd_id": 2,
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "type": "bluestore"
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:    },
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "osd_id": 1,
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "type": "bluestore"
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:    },
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "osd_id": 0,
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:        "type": "bluestore"
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]:    }
Oct  2 04:27:17 np0005465604 peaceful_montalcini[306573]: }
Oct  2 04:27:18 np0005465604 systemd[1]: libpod-d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5.scope: Deactivated successfully.
Oct  2 04:27:18 np0005465604 podman[306556]: 2025-10-02 08:27:18.020740581 +0000 UTC m=+1.158940166 container died d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.024 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.025 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:27:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e7371a09a479f14adbc17ff523ba05cf38f97050b7f06d0b9fa8510e83569138-merged.mount: Deactivated successfully.
Oct  2 04:27:18 np0005465604 podman[306556]: 2025-10-02 08:27:18.071647803 +0000 UTC m=+1.209847388 container remove d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_montalcini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:27:18 np0005465604 systemd[1]: libpod-conmon-d0dc7fdfee5103f5631b9316e6426ba501ba574bfd2deaca1cbdb0f19e7b6de5.scope: Deactivated successfully.
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.103 2 DEBUG oslo_concurrency.processutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:27:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:27:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:27:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:27:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:27:18 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8ac6d70c-6dd6-4945-8768-98e50c2fd03f does not exist
Oct  2 04:27:18 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 72c7dabd-c0a9-4d23-b78e-4460a51bbe99 does not exist
Oct  2 04:27:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:27:18 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:27:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:27:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736913234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.514 2 DEBUG oslo_concurrency.processutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.523 2 DEBUG nova.compute.provider_tree [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.537 2 DEBUG nova.scheduler.client.report [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.555 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.599 2 INFO nova.scheduler.client.report [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Deleted allocations for instance 5d595e00-2287-4a6f-b347-bc277006a626
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.658 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updating instance_info_cache with network_info: [{"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.702 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-f0bfef78-36cf-4c57-9205-ad81a216a221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.702 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.703 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.728 2 DEBUG oslo_concurrency.lockutils [None req-a439cdf9-e604-4900-a60b-1473991eceb5 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "5d595e00-2287-4a6f-b347-bc277006a626" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:27:18 np0005465604 nova_compute[260603]: 2025-10-02 08:27:18.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:27:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 121 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.365 2 DEBUG nova.compute.manager [req-d3075fd0-ba89-4bc8-9ceb-cf7fd875f9e1 req-23fee858-c0ca-4f1c-96ed-75604435ffbb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-deleted-4f39890f-a968-41d4-9cae-0b6948551923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.366 2 DEBUG nova.compute.manager [req-d3075fd0-ba89-4bc8-9ceb-cf7fd875f9e1 req-23fee858-c0ca-4f1c-96ed-75604435ffbb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Received event network-vif-deleted-0d888b1c-d237-4db9-9ca5-4796f8c1349d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.550 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.551 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.552 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.600 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.601 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.624 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.689 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.689 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.700 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.700 2 INFO nova.compute.claims [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:27:19 np0005465604 nova_compute[260603]: 2025-10-02 08:27:19.838 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:27:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:27:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499404778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.038 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.134 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.135 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 04:27:20 np0005465604 podman[306732]: 2025-10-02 08:27:20.140822305 +0000 UTC m=+0.061417000 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 04:27:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:27:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/265351867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.269 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.275 2 DEBUG nova.compute.provider_tree [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.291 2 DEBUG nova.scheduler.client.report [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.325 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.326 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.341 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.342 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4094MB free_disk=59.94271469116211GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.343 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.343 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.422 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.423 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.443 2 INFO nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.449 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance f0bfef78-36cf-4c57-9205-ad81a216a221 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.450 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.462 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.512 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.552 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.555 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.555 2 INFO nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Creating image(s)#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.581 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.606 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.631 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.635 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.673 2 DEBUG nova.policy [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ac6f72f7366459a86c086737b89ea69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f269abbe5769427dbf44c430d7529c04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.741 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.742 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.743 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.744 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.777 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:20 np0005465604 nova_compute[260603]: 2025-10-02 08:27:20.782 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 121 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct  2 04:27:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:27:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219052854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.022 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.031 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.052 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.088 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.179 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.180 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.184 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] resizing rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.287 2 DEBUG nova.objects.instance [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'migration_context' on Instance uuid 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.301 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.302 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Ensure instance console log exists: /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.302 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.303 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.303 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.304 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.305 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.305 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.306 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.307 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.308 2 INFO nova.compute.manager [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Terminating instance#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.309 2 DEBUG nova.compute.manager [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:27:21 np0005465604 kernel: tap686b6e3b-80 (unregistering): left promiscuous mode
Oct  2 04:27:21 np0005465604 NetworkManager[45129]: <info>  [1759393641.3700] device (tap686b6e3b-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:27:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:21Z|00321|binding|INFO|Releasing lport 686b6e3b-80e4-43c6-a917-3751dddecd76 from this chassis (sb_readonly=0)
Oct  2 04:27:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:21Z|00322|binding|INFO|Setting lport 686b6e3b-80e4-43c6-a917-3751dddecd76 down in Southbound
Oct  2 04:27:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:21Z|00323|binding|INFO|Removing iface tap686b6e3b-80 ovn-installed in OVS
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.388 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:4a:c6 10.100.0.13'], port_security=['fa:16:3e:0b:4a:c6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f0bfef78-36cf-4c57-9205-ad81a216a221', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7afa539b-e5b5-443e-ad15-53058c3f7566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de07bf9b5bef4254bdcb4d7b856304f3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1fbf4c4c-2cd7-49c6-ae31-7e9f79f25939', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b61ad9cd-415d-4ce0-9d5a-fc0dd5afdd65, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=686b6e3b-80e4-43c6-a917-3751dddecd76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.390 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 686b6e3b-80e4-43c6-a917-3751dddecd76 in datapath 7afa539b-e5b5-443e-ad15-53058c3f7566 unbound from our chassis#033[00m
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.392 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7afa539b-e5b5-443e-ad15-53058c3f7566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.394 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e7732e-ad85-4fb7-86c1-565938d56f29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.395 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566 namespace which is not needed anymore#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.412 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Successfully created port: dcccfe82-bc7c-4036-bbb1-5a2f90418794 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:27:21 np0005465604 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct  2 04:27:21 np0005465604 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Consumed 12.945s CPU time.
Oct  2 04:27:21 np0005465604 systemd-machined[214636]: Machine qemu-44-instance-00000028 terminated.
Oct  2 04:27:21 np0005465604 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [NOTICE]   (305518) : haproxy version is 2.8.14-c23fe91
Oct  2 04:27:21 np0005465604 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [NOTICE]   (305518) : path to executable is /usr/sbin/haproxy
Oct  2 04:27:21 np0005465604 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [WARNING]  (305518) : Exiting Master process...
Oct  2 04:27:21 np0005465604 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [ALERT]    (305518) : Current worker (305520) exited with code 143 (Terminated)
Oct  2 04:27:21 np0005465604 neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566[305514]: [WARNING]  (305518) : All workers exited. Exiting... (0)
Oct  2 04:27:21 np0005465604 systemd[1]: libpod-06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733.scope: Deactivated successfully.
Oct  2 04:27:21 np0005465604 podman[306966]: 2025-10-02 08:27:21.535211377 +0000 UTC m=+0.044326957 container died 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.550 2 INFO nova.virt.libvirt.driver [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Instance destroyed successfully.
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.551 2 DEBUG nova.objects.instance [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lazy-loading 'resources' on Instance uuid f0bfef78-36cf-4c57-9205-ad81a216a221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.564 2 DEBUG nova.compute.manager [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-unplugged-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.564 2 DEBUG oslo_concurrency.lockutils [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.568 2 DEBUG oslo_concurrency.lockutils [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.568 2 DEBUG oslo_concurrency.lockutils [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.568 2 DEBUG nova.compute.manager [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] No waiting events found dispatching network-vif-unplugged-686b6e3b-80e4-43c6-a917-3751dddecd76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.569 2 DEBUG nova.compute.manager [req-c7b1fbd7-06c2-465a-86e0-24499fb88dce req-98c64db5-29c1-40cd-969f-761df2247874 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-unplugged-686b6e3b-80e4-43c6-a917-3751dddecd76 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:27:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733-userdata-shm.mount: Deactivated successfully.
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.571 2 DEBUG nova.virt.libvirt.vif [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1088051137',display_name='tempest-ServersTestJSON-server-1088051137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1088051137',id=40,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL26GjJ8yDVULcjBGSyQgq1XOXe3L4joW7EO0dcbypSf2PTplyBxh+0WCuN1+fy7bCfJLP+B7xaBUKjJV0y0oM9upBKq48cBxt+Uq1aMn9LSPatVD3E+4qRWUEz85mTqNQ==',key_name='tempest-keypair-680018376',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:26:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de07bf9b5bef4254bdcb4d7b856304f3',ramdisk_id='',reservation_id='r-bs2htycx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-279922609',owner_user_name='tempest-ServersTestJSON-279922609-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:26:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e785f94f63c44d0f842750666ed49360',uuid=f0bfef78-36cf-4c57-9205-ad81a216a221,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.572 2 DEBUG nova.network.os_vif_util [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converting VIF {"id": "686b6e3b-80e4-43c6-a917-3751dddecd76", "address": "fa:16:3e:0b:4a:c6", "network": {"id": "7afa539b-e5b5-443e-ad15-53058c3f7566", "bridge": "br-int", "label": "tempest-ServersTestJSON-1620170624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de07bf9b5bef4254bdcb4d7b856304f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap686b6e3b-80", "ovs_interfaceid": "686b6e3b-80e4-43c6-a917-3751dddecd76", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.573 2 DEBUG nova.network.os_vif_util [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.573 2 DEBUG os_vif [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap686b6e3b-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8dd2358f3274ee85f1cdb0a8441439d8ff6fe7b64e90052b15f897c73d3c0f4d-merged.mount: Deactivated successfully.
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:27:21 np0005465604 podman[306966]: 2025-10-02 08:27:21.581496556 +0000 UTC m=+0.090612136 container cleanup 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.583 2 INFO os_vif [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:4a:c6,bridge_name='br-int',has_traffic_filtering=True,id=686b6e3b-80e4-43c6-a917-3751dddecd76,network=Network(7afa539b-e5b5-443e-ad15-53058c3f7566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap686b6e3b-80')#033[00m
Oct  2 04:27:21 np0005465604 systemd[1]: libpod-conmon-06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733.scope: Deactivated successfully.
Oct  2 04:27:21 np0005465604 podman[307012]: 2025-10-02 08:27:21.658582261 +0000 UTC m=+0.052508632 container remove 06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.669 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f99591c1-f8fb-415c-9352-c87a5a512c61]: (4, ('Thu Oct  2 08:27:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566 (06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733)\n06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733\nThu Oct  2 08:27:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566 (06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733)\n06121e1641891f588ff68be6eb8a1d6c8917609cc64a8cfffa397fa1797e8733\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.671 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[daec8eac-e1aa-4133-aa6b-9fe45d8a6a44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.672 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7afa539b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:21 np0005465604 kernel: tap7afa539b-e0: left promiscuous mode
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.678 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[be3eb1b7-365b-4aa3-afd1-ef461308b87f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.707 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c28ef03-581b-46a8-bfd3-9e453ed418cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.708 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9de1529c-e06f-40b0-9b53-264413d32885]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.722 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[050c41a9-ff36-4586-90e5-c7a8dfe21cee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449164, 'reachable_time': 33543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307043, 'error': None, 'target': 'ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:21 np0005465604 systemd[1]: run-netns-ovnmeta\x2d7afa539b\x2de5b5\x2d443e\x2dad15\x2d53058c3f7566.mount: Deactivated successfully.
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.726 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7afa539b-e5b5-443e-ad15-53058c3f7566 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:27:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:21.727 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1053a985-09ed-44aa-bb83-52590429e448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.877 2 INFO nova.virt.libvirt.driver [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Deleting instance files /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221_del
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.878 2 INFO nova.virt.libvirt.driver [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Deletion of /var/lib/nova/instances/f0bfef78-36cf-4c57-9205-ad81a216a221_del complete
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.935 2 INFO nova.compute.manager [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Took 0.63 seconds to destroy the instance on the hypervisor.
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.936 2 DEBUG oslo.service.loopingcall [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.937 2 DEBUG nova.compute.manager [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:27:21 np0005465604 nova_compute[260603]: 2025-10-02 08:27:21.937 2 DEBUG nova.network.neutron [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:27:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:27:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1907413456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:27:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:27:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1907413456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:27:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:22 np0005465604 nova_compute[260603]: 2025-10-02 08:27:22.181 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:27:22 np0005465604 nova_compute[260603]: 2025-10-02 08:27:22.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:22 np0005465604 nova_compute[260603]: 2025-10-02 08:27:22.804 2 DEBUG nova.network.neutron [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:22 np0005465604 nova_compute[260603]: 2025-10-02 08:27:22.830 2 INFO nova.compute.manager [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Took 0.89 seconds to deallocate network for instance.
Oct  2 04:27:22 np0005465604 nova_compute[260603]: 2025-10-02 08:27:22.883 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:22 np0005465604 nova_compute[260603]: 2025-10-02 08:27:22.883 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 101 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 381 KiB/s rd, 3.5 MiB/s wr, 146 op/s
Oct  2 04:27:22 np0005465604 nova_compute[260603]: 2025-10-02 08:27:22.973 2 DEBUG oslo_concurrency.processutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.364 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Successfully updated port: dcccfe82-bc7c-4036-bbb1-5a2f90418794 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.389 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.390 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.390 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:27:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:27:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1867720042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.411 2 DEBUG oslo_concurrency.processutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.421 2 DEBUG nova.compute.provider_tree [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.437 2 DEBUG nova.scheduler.client.report [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.459 2 DEBUG nova.compute.manager [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-changed-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.460 2 DEBUG nova.compute.manager [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Refreshing instance network info cache due to event network-changed-dcccfe82-bc7c-4036-bbb1-5a2f90418794. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.461 2 DEBUG oslo_concurrency.lockutils [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.463 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.487 2 INFO nova.scheduler.client.report [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Deleted allocations for instance f0bfef78-36cf-4c57-9205-ad81a216a221#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.547 2 DEBUG oslo_concurrency.lockutils [None req-e1a20d30-c422-48f3-b0cf-eefef7bc363d e785f94f63c44d0f842750666ed49360 de07bf9b5bef4254bdcb4d7b856304f3 - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.622 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.652 2 DEBUG nova.compute.manager [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.652 2 DEBUG oslo_concurrency.lockutils [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.652 2 DEBUG oslo_concurrency.lockutils [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.653 2 DEBUG oslo_concurrency.lockutils [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f0bfef78-36cf-4c57-9205-ad81a216a221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.653 2 DEBUG nova.compute.manager [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] No waiting events found dispatching network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.653 2 WARNING nova.compute.manager [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received unexpected event network-vif-plugged-686b6e3b-80e4-43c6-a917-3751dddecd76 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:27:23 np0005465604 nova_compute[260603]: 2025-10-02 08:27:23.653 2 DEBUG nova.compute.manager [req-e4c9729e-8ec9-48fe-a64a-f774e9da3988 req-c678e096-d7a7-4625-ac66-6f58cf4d33b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Received event network-vif-deleted-686b6e3b-80e4-43c6-a917-3751dddecd76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.526 2 DEBUG nova.network.neutron [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Updating instance_info_cache with network_info: [{"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.555 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.556 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Instance network_info: |[{"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.556 2 DEBUG oslo_concurrency.lockutils [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.557 2 DEBUG nova.network.neutron [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Refreshing network info cache for port dcccfe82-bc7c-4036-bbb1-5a2f90418794 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.562 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Start _get_guest_xml network_info=[{"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.567 2 WARNING nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.573 2 DEBUG nova.virt.libvirt.host [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.574 2 DEBUG nova.virt.libvirt.host [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.582 2 DEBUG nova.virt.libvirt.host [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.583 2 DEBUG nova.virt.libvirt.host [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.583 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.584 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.584 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.585 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.585 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.586 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.586 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.587 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.587 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.587 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.588 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.588 2 DEBUG nova.virt.hardware [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:27:24 np0005465604 nova_compute[260603]: 2025-10-02 08:27:24.593 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 2.4 MiB/s wr, 98 op/s
Oct  2 04:27:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:27:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3876417165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.065 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.087 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.092 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:27:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2495510105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.507 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.510 2 DEBUG nova.virt.libvirt.vif [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-277551696',display_name='tempest-DeleteServersTestJSON-server-277551696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-277551696',id=41,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-tia1r4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177
785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:20Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=3626202d-fd1e-497c-bbfd-ea0e7a7321c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.511 2 DEBUG nova.network.os_vif_util [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.513 2 DEBUG nova.network.os_vif_util [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.515 2 DEBUG nova.objects.instance [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.538 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  <uuid>3626202d-fd1e-497c-bbfd-ea0e7a7321c8</uuid>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  <name>instance-00000029</name>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <nova:name>tempest-DeleteServersTestJSON-server-277551696</nova:name>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:27:24</nova:creationTime>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <nova:user uuid="1ac6f72f7366459a86c086737b89ea69">tempest-DeleteServersTestJSON-812177785-project-member</nova:user>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <nova:project uuid="f269abbe5769427dbf44c430d7529c04">tempest-DeleteServersTestJSON-812177785</nova:project>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <nova:port uuid="dcccfe82-bc7c-4036-bbb1-5a2f90418794">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <entry name="serial">3626202d-fd1e-497c-bbfd-ea0e7a7321c8</entry>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <entry name="uuid">3626202d-fd1e-497c-bbfd-ea0e7a7321c8</entry>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:58:03:0e"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <target dev="tapdcccfe82-bc"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/console.log" append="off"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:27:25 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:27:25 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:27:25 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:27:25 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.540 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Preparing to wait for external event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.540 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.541 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.541 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.542 2 DEBUG nova.virt.libvirt.vif [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-277551696',display_name='tempest-DeleteServersTestJSON-server-277551696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-277551696',id=41,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-tia1r4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:20Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=3626202d-fd1e-497c-bbfd-ea0e7a7321c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.542 2 DEBUG nova.network.os_vif_util [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.543 2 DEBUG nova.network.os_vif_util [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.543 2 DEBUG os_vif [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdcccfe82-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdcccfe82-bc, col_values=(('external_ids', {'iface-id': 'dcccfe82-bc7c-4036-bbb1-5a2f90418794', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:03:0e', 'vm-uuid': '3626202d-fd1e-497c-bbfd-ea0e7a7321c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:25 np0005465604 NetworkManager[45129]: <info>  [1759393645.5505] manager: (tapdcccfe82-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.558 2 INFO os_vif [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc')#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.625 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.626 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.627 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No VIF found with MAC fa:16:3e:58:03:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.628 2 INFO nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Using config drive#033[00m
Oct  2 04:27:25 np0005465604 nova_compute[260603]: 2025-10-02 08:27:25.660 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct  2 04:27:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:27 np0005465604 nova_compute[260603]: 2025-10-02 08:27:27.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:27 np0005465604 nova_compute[260603]: 2025-10-02 08:27:27.644 2 INFO nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Creating config drive at /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config#033[00m
Oct  2 04:27:27 np0005465604 nova_compute[260603]: 2025-10-02 08:27:27.654 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyuuatoqd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:27 np0005465604 nova_compute[260603]: 2025-10-02 08:27:27.809 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyuuatoqd" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:27 np0005465604 nova_compute[260603]: 2025-10-02 08:27:27.847 2 DEBUG nova.storage.rbd_utils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:27 np0005465604 nova_compute[260603]: 2025-10-02 08:27:27.852 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:27:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:27:27
Oct  2 04:27:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:27:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:27:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'vms']
Oct  2 04:27:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:27:28 np0005465604 nova_compute[260603]: 2025-10-02 08:27:28.074 2 DEBUG oslo_concurrency.processutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config 3626202d-fd1e-497c-bbfd-ea0e7a7321c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:28 np0005465604 nova_compute[260603]: 2025-10-02 08:27:28.076 2 INFO nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Deleting local config drive /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8/disk.config because it was imported into RBD.#033[00m
Oct  2 04:27:28 np0005465604 kernel: tapdcccfe82-bc: entered promiscuous mode
Oct  2 04:27:28 np0005465604 NetworkManager[45129]: <info>  [1759393648.1583] manager: (tapdcccfe82-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Oct  2 04:27:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:28Z|00324|binding|INFO|Claiming lport dcccfe82-bc7c-4036-bbb1-5a2f90418794 for this chassis.
Oct  2 04:27:28 np0005465604 nova_compute[260603]: 2025-10-02 08:27:28.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:28Z|00325|binding|INFO|dcccfe82-bc7c-4036-bbb1-5a2f90418794: Claiming fa:16:3e:58:03:0e 10.100.0.5
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.168 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:03:0e 10.100.0.5'], port_security=['fa:16:3e:58:03:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3626202d-fd1e-497c-bbfd-ea0e7a7321c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dcccfe82-bc7c-4036-bbb1-5a2f90418794) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.170 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dcccfe82-bc7c-4036-bbb1-5a2f90418794 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca bound to our chassis#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.172 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca#033[00m
Oct  2 04:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.191 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6b529d-e3dd-48de-859f-ce7e79380c58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.193 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa72ac8c9-11 in ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:27:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:28Z|00326|binding|INFO|Setting lport dcccfe82-bc7c-4036-bbb1-5a2f90418794 ovn-installed in OVS
Oct  2 04:27:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:28Z|00327|binding|INFO|Setting lport dcccfe82-bc7c-4036-bbb1-5a2f90418794 up in Southbound
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.195 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa72ac8c9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.196 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[46160dce-14c9-491a-9c3d-3340e41091b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 nova_compute[260603]: 2025-10-02 08:27:28.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.199 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e9b5fe-0165-4218-949e-12f681658106]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 systemd-udevd[307205]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.218 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[418a9d2b-aff2-4f8b-8b35-5ca8039bbd10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 systemd-machined[214636]: New machine qemu-45-instance-00000029.
Oct  2 04:27:28 np0005465604 NetworkManager[45129]: <info>  [1759393648.2264] device (tapdcccfe82-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:27:28 np0005465604 NetworkManager[45129]: <info>  [1759393648.2274] device (tapdcccfe82-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.236 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd5340d6-f57d-4ec4-8002-dfed6dbd0e4d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 systemd[1]: Started Virtual Machine qemu-45-instance-00000029.
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.279 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[17ed816c-4600-40a5-b200-abd795e1c575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 NetworkManager[45129]: <info>  [1759393648.2886] manager: (tapa72ac8c9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.287 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce08969-638c-47d6-ad54-9219c33fabdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.336 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[71bbfc4d-2fef-47e9-a6ff-64ce10bf73a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.341 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ccead4d3-783d-4b8f-8fd8-b4d56a39ae3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 NetworkManager[45129]: <info>  [1759393648.3740] device (tapa72ac8c9-10): carrier: link connected
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.382 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8d61728e-8974-40ab-bd52-52cb50a6707c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.410 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86e70d12-0257-4473-bf0b-76f5d7d3c9c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452697, 'reachable_time': 17860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307236, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6b9005-14b9-4b19-8f21-d45d087a6ac8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:61d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452697, 'tstamp': 452697}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307237, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.463 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dded00d7-82ea-431f-b401-878cbbced5ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452697, 'reachable_time': 17860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307238, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.506 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e54eb24c-2a50-428a-838d-c09cc7d88896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.600 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7429dcd5-5af9-419b-a4a0-41b6c1d4a7eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.604 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa72ac8c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:28 np0005465604 nova_compute[260603]: 2025-10-02 08:27:28.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:28 np0005465604 NetworkManager[45129]: <info>  [1759393648.6421] manager: (tapa72ac8c9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Oct  2 04:27:28 np0005465604 kernel: tapa72ac8c9-10: entered promiscuous mode
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.646 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa72ac8c9-10, col_values=(('external_ids', {'iface-id': 'f9acec59-0200-4a1d-84e4-06e67c730498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:28 np0005465604 nova_compute[260603]: 2025-10-02 08:27:28.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:28Z|00328|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.682 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.687 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5970226e-85e7-4c10-9ea3-13c60e5c5a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:28 np0005465604 nova_compute[260603]: 2025-10-02 08:27:28.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.688 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:27:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:28.688 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'env', 'PROCESS_TAG=haproxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:27:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Oct  2 04:27:29 np0005465604 podman[307310]: 2025-10-02 08:27:29.100862551 +0000 UTC m=+0.065498106 container create e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:27:29 np0005465604 systemd[1]: Started libpod-conmon-e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089.scope.
Oct  2 04:27:29 np0005465604 podman[307310]: 2025-10-02 08:27:29.06123653 +0000 UTC m=+0.025872145 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:27:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:27:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da0ecc13612b3c76bc010b82dce48ca9d09de89c8effd567f89e84bab2ebf6f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:29 np0005465604 podman[307310]: 2025-10-02 08:27:29.202528801 +0000 UTC m=+0.167164346 container init e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.207 2 DEBUG nova.network.neutron [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Updated VIF entry in instance network info cache for port dcccfe82-bc7c-4036-bbb1-5a2f90418794. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:27:29 np0005465604 podman[307310]: 2025-10-02 08:27:29.207689882 +0000 UTC m=+0.172325427 container start e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.207 2 DEBUG nova.network.neutron [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Updating instance_info_cache with network_info: [{"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.222 2 DEBUG oslo_concurrency.lockutils [req-bc28547a-62da-47bf-8773-f5ff0e73ffb9 req-df06e901-bcd6-4b0d-8744-0d6da7baeb8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3626202d-fd1e-497c-bbfd-ea0e7a7321c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:29 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [NOTICE]   (307331) : New worker (307333) forked
Oct  2 04:27:29 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [NOTICE]   (307331) : Loading success.
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.553 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393649.5523596, 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.554 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] VM Started (Lifecycle Event)#033[00m
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.579 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.584 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393649.552828, 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.584 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.602 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.605 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.624 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.913 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.915 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:29 np0005465604 nova_compute[260603]: 2025-10-02 08:27:29.936 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.049 2 DEBUG nova.compute.manager [req-04cc5d92-6eb8-410f-85d9-d922f6b3060c req-b62a8f58-b7c0-4beb-b7d5-f4cbe0380660 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.050 2 DEBUG oslo_concurrency.lockutils [req-04cc5d92-6eb8-410f-85d9-d922f6b3060c req-b62a8f58-b7c0-4beb-b7d5-f4cbe0380660 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.051 2 DEBUG oslo_concurrency.lockutils [req-04cc5d92-6eb8-410f-85d9-d922f6b3060c req-b62a8f58-b7c0-4beb-b7d5-f4cbe0380660 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.051 2 DEBUG oslo_concurrency.lockutils [req-04cc5d92-6eb8-410f-85d9-d922f6b3060c req-b62a8f58-b7c0-4beb-b7d5-f4cbe0380660 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.052 2 DEBUG nova.compute.manager [req-04cc5d92-6eb8-410f-85d9-d922f6b3060c req-b62a8f58-b7c0-4beb-b7d5-f4cbe0380660 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Processing event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.055 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.061 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393650.0600991, 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.062 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.065 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.070 2 INFO nova.virt.libvirt.driver [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Instance spawned successfully.#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.070 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.085 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.086 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.089 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.097 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.099 2 INFO nova.compute.claims [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.108 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.113 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.113 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.114 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.115 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.116 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.117 2 DEBUG nova.virt.libvirt.driver [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.150 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.194 2 INFO nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Took 9.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.195 2 DEBUG nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.273 2 INFO nova.compute.manager [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Took 10.61 seconds to build instance.#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.287 2 DEBUG oslo_concurrency.lockutils [None req-f1772376-51db-4be5-8780-056fa6fefa0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.302 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.395 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.397 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.418 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.491 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:27:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2617038930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.803 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.810 2 DEBUG nova.compute.provider_tree [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.829 2 DEBUG nova.scheduler.client.report [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.861 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.862 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.865 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.874 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.874 2 INFO nova.compute.claims [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:27:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 88 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.997 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:27:30 np0005465604 nova_compute[260603]: 2025-10-02 08:27:30.997 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.017 2 INFO nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.033 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.039 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393636.0377965, 5d595e00-2287-4a6f-b347-bc277006a626 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.039 2 INFO nova.compute.manager [-] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.068 2 DEBUG nova.compute.manager [None req-0da4dc6c-88d5-4408-8a3c-0a8a4a450b8a - - - - - -] [instance: 5d595e00-2287-4a6f-b347-bc277006a626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.149 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.178 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.181 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.182 2 INFO nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Creating image(s)#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.210 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.238 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.261 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.265 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.330 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.331 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.331 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.332 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.350 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.353 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:27:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1348302952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.621 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.661 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.711 2 DEBUG nova.policy [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.721 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] resizing rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.764 2 DEBUG nova.compute.provider_tree [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.786 2 DEBUG nova.scheduler.client.report [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.825 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.826 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.833 2 DEBUG nova.objects.instance [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'migration_context' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.849 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.849 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Ensure instance console log exists: /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.850 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.850 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.850 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.868 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.869 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.888 2 INFO nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:27:31 np0005465604 nova_compute[260603]: 2025-10-02 08:27:31.904 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.005 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.007 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.008 2 INFO nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Creating image(s)#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.039 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.078 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.112 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.116 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.192 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.194 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.194 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.195 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.215 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.219 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 05cc7244-c419-4c24-b995-95ca760837a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.346 2 DEBUG nova.policy [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '019cd25dce6249ce9c2cf326ec62df28', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.359 2 DEBUG nova.compute.manager [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.360 2 DEBUG oslo_concurrency.lockutils [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.360 2 DEBUG oslo_concurrency.lockutils [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.360 2 DEBUG oslo_concurrency.lockutils [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.361 2 DEBUG nova.compute.manager [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] No waiting events found dispatching network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.361 2 WARNING nova.compute.manager [req-bd323876-c258-4589-b8c2-c9fc7d3c3917 req-6dbc804e-8f8f-4610-b86e-786858da4492 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received unexpected event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.452 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.452 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.453 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.453 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.453 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.454 2 INFO nova.compute.manager [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Terminating instance#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.455 2 DEBUG nova.compute.manager [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.501 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 05cc7244-c419-4c24-b995-95ca760837a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:32 np0005465604 kernel: tapdcccfe82-bc (unregistering): left promiscuous mode
Oct  2 04:27:32 np0005465604 NetworkManager[45129]: <info>  [1759393652.5217] device (tapdcccfe82-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:27:32 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:32Z|00329|binding|INFO|Releasing lport dcccfe82-bc7c-4036-bbb1-5a2f90418794 from this chassis (sb_readonly=0)
Oct  2 04:27:32 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:32Z|00330|binding|INFO|Setting lport dcccfe82-bc7c-4036-bbb1-5a2f90418794 down in Southbound
Oct  2 04:27:32 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:32Z|00331|binding|INFO|Removing iface tapdcccfe82-bc ovn-installed in OVS
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.542 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:03:0e 10.100.0.5'], port_security=['fa:16:3e:58:03:0e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3626202d-fd1e-497c-bbfd-ea0e7a7321c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dcccfe82-bc7c-4036-bbb1-5a2f90418794) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.543 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dcccfe82-bc7c-4036-bbb1-5a2f90418794 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca unbound from our chassis#033[00m
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.544 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.545 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b4264815-afb6-4c33-ae83-f3e94f6b63b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.545 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace which is not needed anymore#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:32 np0005465604 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Deactivated successfully.
Oct  2 04:27:32 np0005465604 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Consumed 3.655s CPU time.
Oct  2 04:27:32 np0005465604 systemd-machined[214636]: Machine qemu-45-instance-00000029 terminated.
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.616 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] resizing rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:27:32 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [NOTICE]   (307331) : haproxy version is 2.8.14-c23fe91
Oct  2 04:27:32 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [NOTICE]   (307331) : path to executable is /usr/sbin/haproxy
Oct  2 04:27:32 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [ALERT]    (307331) : Current worker (307333) exited with code 143 (Terminated)
Oct  2 04:27:32 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[307327]: [WARNING]  (307331) : All workers exited. Exiting... (0)
Oct  2 04:27:32 np0005465604 systemd[1]: libpod-e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089.scope: Deactivated successfully.
Oct  2 04:27:32 np0005465604 podman[307724]: 2025-10-02 08:27:32.73021014 +0000 UTC m=+0.063086082 container died e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.744 2 DEBUG nova.objects.instance [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'migration_context' on Instance uuid 05cc7244-c419-4c24-b995-95ca760837a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.748 2 INFO nova.virt.libvirt.driver [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Instance destroyed successfully.#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.749 2 DEBUG nova.objects.instance [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'resources' on Instance uuid 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089-userdata-shm.mount: Deactivated successfully.
Oct  2 04:27:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1da0ecc13612b3c76bc010b82dce48ca9d09de89c8effd567f89e84bab2ebf6f-merged.mount: Deactivated successfully.
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.777 2 DEBUG nova.virt.libvirt.vif [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-277551696',display_name='tempest-DeleteServersTestJSON-server-277551696',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-277551696',id=41,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-tia1r4kw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:30Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=3626202d-fd1e-497c-bbfd-ea0e7a7321c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:27:32 np0005465604 podman[307724]: 2025-10-02 08:27:32.777704665 +0000 UTC m=+0.110580617 container cleanup e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.777 2 DEBUG nova.network.os_vif_util [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "address": "fa:16:3e:58:03:0e", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcccfe82-bc", "ovs_interfaceid": "dcccfe82-bc7c-4036-bbb1-5a2f90418794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.778 2 DEBUG nova.network.os_vif_util [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.778 2 DEBUG os_vif [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.780 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.780 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Ensure instance console log exists: /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.781 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.781 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.781 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.782 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdcccfe82-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.789 2 INFO os_vif [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:03:0e,bridge_name='br-int',has_traffic_filtering=True,id=dcccfe82-bc7c-4036-bbb1-5a2f90418794,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcccfe82-bc')#033[00m
Oct  2 04:27:32 np0005465604 systemd[1]: libpod-conmon-e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089.scope: Deactivated successfully.
Oct  2 04:27:32 np0005465604 podman[307778]: 2025-10-02 08:27:32.880995295 +0000 UTC m=+0.066669813 container remove e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.893 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[06c7e754-2def-4c12-9e2f-714308cb987a]: (4, ('Thu Oct  2 08:27:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089)\ne44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089\nThu Oct  2 08:27:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (e44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089)\ne44dabb57a119e63f2c8b1c32fd0f1efd328f8fd2ad4b03f234b83d927104089\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.895 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bd9b4e-c952-47c5-9641-3894f4d46a52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.896 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:32 np0005465604 kernel: tapa72ac8c9-10: left promiscuous mode
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 113 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 126 op/s
Oct  2 04:27:32 np0005465604 nova_compute[260603]: 2025-10-02 08:27:32.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.925 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b7ebf4-7126-47fe-8011-777e8155a79e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.957 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[04ea676a-b436-4ff7-92ea-eb43ce7c63cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.958 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[63bb9a1d-3d51-4cb3-8741-46084411dbc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.975 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1aadc4-5bf5-4d0d-be8a-52551c4bd8b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452687, 'reachable_time': 17689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307811, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:32 np0005465604 systemd[1]: run-netns-ovnmeta\x2da72ac8c9\x2d16ee\x2d4ec0\x2db23d\x2d2741fda000ca.mount: Deactivated successfully.
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.980 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:27:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:32.980 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e3338d-eb30-4545-a02f-c680ebd5d3e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:33 np0005465604 nova_compute[260603]: 2025-10-02 08:27:33.163 2 INFO nova.virt.libvirt.driver [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Deleting instance files /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8_del#033[00m
Oct  2 04:27:33 np0005465604 nova_compute[260603]: 2025-10-02 08:27:33.164 2 INFO nova.virt.libvirt.driver [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Deletion of /var/lib/nova/instances/3626202d-fd1e-497c-bbfd-ea0e7a7321c8_del complete#033[00m
Oct  2 04:27:33 np0005465604 nova_compute[260603]: 2025-10-02 08:27:33.236 2 INFO nova.compute.manager [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:27:33 np0005465604 nova_compute[260603]: 2025-10-02 08:27:33.237 2 DEBUG oslo.service.loopingcall [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:27:33 np0005465604 nova_compute[260603]: 2025-10-02 08:27:33.237 2 DEBUG nova.compute.manager [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:27:33 np0005465604 nova_compute[260603]: 2025-10-02 08:27:33.237 2 DEBUG nova.network.neutron [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:27:33 np0005465604 nova_compute[260603]: 2025-10-02 08:27:33.742 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Successfully created port: e978c120-9b3a-4a48-b553-c38b05073ad9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:27:33 np0005465604 nova_compute[260603]: 2025-10-02 08:27:33.823 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Successfully created port: 136aeb8e-dedd-4cd8-a72d-1c4309716daf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.537 2 DEBUG nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-unplugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.538 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.538 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.539 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.539 2 DEBUG nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] No waiting events found dispatching network-vif-unplugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.540 2 DEBUG nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-unplugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.540 2 DEBUG nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.541 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.541 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.541 2 DEBUG oslo_concurrency.lockutils [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.542 2 DEBUG nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] No waiting events found dispatching network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.542 2 WARNING nova.compute.manager [req-b8a669f5-eca9-4658-b516-b240861f625e req-4df72226-e82f-4656-b201-dfac544c7fed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received unexpected event network-vif-plugged-dcccfe82-bc7c-4036-bbb1-5a2f90418794 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.554 2 DEBUG nova.network.neutron [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.588 2 INFO nova.compute.manager [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Took 1.35 seconds to deallocate network for instance.#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.670 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.671 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:34 np0005465604 nova_compute[260603]: 2025-10-02 08:27:34.756 2 DEBUG oslo_concurrency.processutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:34.814 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:34.816 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:34.816 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 144 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 107 op/s
Oct  2 04:27:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:27:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317047195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.271 2 DEBUG oslo_concurrency.processutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.277 2 DEBUG nova.compute.provider_tree [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.328 2 DEBUG nova.scheduler.client.report [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.354 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.386 2 INFO nova.scheduler.client.report [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Deleted allocations for instance 3626202d-fd1e-497c-bbfd-ea0e7a7321c8#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.457 2 DEBUG oslo_concurrency.lockutils [None req-5784a691-d0ec-4b70-8b13-0a1982f18b40 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "3626202d-fd1e-497c-bbfd-ea0e7a7321c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.546 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Successfully updated port: e978c120-9b3a-4a48-b553-c38b05073ad9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.554 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Successfully updated port: 136aeb8e-dedd-4cd8-a72d-1c4309716daf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.568 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.569 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquired lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.569 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.572 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.572 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.573 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.777 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:27:35 np0005465604 nova_compute[260603]: 2025-10-02 08:27:35.784 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:27:36 np0005465604 nova_compute[260603]: 2025-10-02 08:27:36.548 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393641.5465412, f0bfef78-36cf-4c57-9205-ad81a216a221 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:36 np0005465604 nova_compute[260603]: 2025-10-02 08:27:36.549 2 INFO nova.compute.manager [-] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:27:36 np0005465604 nova_compute[260603]: 2025-10-02 08:27:36.580 2 DEBUG nova.compute.manager [None req-a1ceab11-5d63-4bd6-bf7a-e37e7ced89c7 - - - - - -] [instance: f0bfef78-36cf-4c57-9205-ad81a216a221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 144 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 104 op/s
Oct  2 04:27:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.095 2 DEBUG nova.compute.manager [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Received event network-vif-deleted-dcccfe82-bc7c-4036-bbb1-5a2f90418794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.096 2 DEBUG nova.compute.manager [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.096 2 DEBUG nova.compute.manager [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing instance network info cache due to event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.097 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.853 2 DEBUG nova.network.neutron [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updating instance_info_cache with network_info: [{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.872 2 DEBUG nova.network.neutron [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.887 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Releasing lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.887 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Instance network_info: |[{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.888 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.888 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.894 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Start _get_guest_xml network_info=[{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.898 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.899 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Instance network_info: |[{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.903 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Start _get_guest_xml network_info=[{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.912 2 WARNING nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.921 2 WARNING nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.927 2 DEBUG nova.virt.libvirt.host [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.928 2 DEBUG nova.virt.libvirt.host [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.929 2 DEBUG nova.virt.libvirt.host [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.930 2 DEBUG nova.virt.libvirt.host [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.934 2 DEBUG nova.virt.libvirt.host [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.935 2 DEBUG nova.virt.libvirt.host [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.936 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.937 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.938 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.938 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.939 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.939 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.940 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.940 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.941 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.941 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.942 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.942 2 DEBUG nova.virt.hardware [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.947 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.981 2 DEBUG nova.virt.libvirt.host [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.983 2 DEBUG nova.virt.libvirt.host [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.983 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.984 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.984 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.985 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.985 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.985 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.985 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.986 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.986 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.986 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.987 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.987 2 DEBUG nova.virt.hardware [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:27:37 np0005465604 nova_compute[260603]: 2025-10-02 08:27:37.991 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:27:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1932988185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.396 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.430 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.436 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:27:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001243750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.478 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008102727990492934 of space, bias 1.0, pg target 0.24308183971478803 quantized to 32 (current 32)
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.515 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.520 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:27:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2585865126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:27:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.918 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.921 2 DEBUG nova.virt.libvirt.vif [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1736454109',display_name='tempest-SecurityGroupsTestJSON-server-1736454109',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1736454109',id=43,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-hb3fhgan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJS
ON-2081142325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:31Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=05cc7244-c419-4c24-b995-95ca760837a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.922 2 DEBUG nova.network.os_vif_util [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.923 2 DEBUG nova.network.os_vif_util [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.925 2 DEBUG nova.objects.instance [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'pci_devices' on Instance uuid 05cc7244-c419-4c24-b995-95ca760837a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.953 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  <uuid>05cc7244-c419-4c24-b995-95ca760837a4</uuid>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  <name>instance-0000002b</name>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1736454109</nova:name>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:27:37</nova:creationTime>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <nova:user uuid="019cd25dce6249ce9c2cf326ec62df28">tempest-SecurityGroupsTestJSON-2081142325-project-member</nova:user>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <nova:project uuid="35a4ab7cf79e41f68a1ea888c2a3592e">tempest-SecurityGroupsTestJSON-2081142325</nova:project>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <nova:port uuid="e978c120-9b3a-4a48-b553-c38b05073ad9">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <entry name="serial">05cc7244-c419-4c24-b995-95ca760837a4</entry>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <entry name="uuid">05cc7244-c419-4c24-b995-95ca760837a4</entry>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/05cc7244-c419-4c24-b995-95ca760837a4_disk">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/05cc7244-c419-4c24-b995-95ca760837a4_disk.config">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:86:a6:e3"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <target dev="tape978c120-9b"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/console.log" append="off"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:27:38 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:27:38 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:27:38 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:27:38 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.956 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Preparing to wait for external event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.956 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.956 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.957 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.958 2 DEBUG nova.virt.libvirt.vif [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1736454109',display_name='tempest-SecurityGroupsTestJSON-server-1736454109',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1736454109',id=43,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-hb3fhgan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGr
oupsTestJSON-2081142325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:31Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=05cc7244-c419-4c24-b995-95ca760837a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.958 2 DEBUG nova.network.os_vif_util [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.959 2 DEBUG nova.network.os_vif_util [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.960 2 DEBUG os_vif [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.962 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.963 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.966 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape978c120-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.967 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape978c120-9b, col_values=(('external_ids', {'iface-id': 'e978c120-9b3a-4a48-b553-c38b05073ad9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:a6:e3', 'vm-uuid': '05cc7244-c419-4c24-b995-95ca760837a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:38 np0005465604 NetworkManager[45129]: <info>  [1759393658.9700] manager: (tape978c120-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:38 np0005465604 nova_compute[260603]: 2025-10-02 08:27:38.976 2 INFO os_vif [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b')#033[00m
Oct  2 04:27:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:27:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/383078312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.006 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.009 2 DEBUG nova.virt.libvirt.vif [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.010 2 DEBUG nova.network.os_vif_util [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.011 2 DEBUG nova.network.os_vif_util [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.012 2 DEBUG nova.objects.instance [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_devices' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.041 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  <uuid>f13ff7c1-d7d3-443e-9f06-69f8c466af30</uuid>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  <name>instance-0000002a</name>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <nova:name>tempest-tempest.common.compute-instance-53387023</nova:name>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:27:37</nova:creationTime>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <nova:port uuid="136aeb8e-dedd-4cd8-a72d-1c4309716daf">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <entry name="serial">f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <entry name="uuid">f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:0b:59:2a"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <target dev="tap136aeb8e-de"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log" append="off"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:27:39 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:27:39 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:27:39 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:27:39 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.043 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Preparing to wait for external event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.043 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.044 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.044 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.046 2 DEBUG nova.virt.libvirt.vif [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.046 2 DEBUG nova.network.os_vif_util [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.048 2 DEBUG nova.network.os_vif_util [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.049 2 DEBUG os_vif [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.051 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap136aeb8e-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap136aeb8e-de, col_values=(('external_ids', {'iface-id': '136aeb8e-dedd-4cd8-a72d-1c4309716daf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:59:2a', 'vm-uuid': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:39 np0005465604 NetworkManager[45129]: <info>  [1759393659.0622] manager: (tap136aeb8e-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.071 2 INFO os_vif [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de')#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.074 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.075 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.075 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No VIF found with MAC fa:16:3e:86:a6:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.076 2 INFO nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Using config drive#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.106 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.192 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.193 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.194 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:0b:59:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.195 2 INFO nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Using config drive#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.226 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.641 2 INFO nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Creating config drive at /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.647 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxgwritu0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.746 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updated VIF entry in instance network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.748 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updating instance_info_cache with network_info: [{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.770 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.771 2 DEBUG nova.compute.manager [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.771 2 DEBUG nova.compute.manager [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.771 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.772 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.772 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.798 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxgwritu0" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.829 2 DEBUG nova.storage.rbd_utils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 05cc7244-c419-4c24-b995-95ca760837a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.833 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config 05cc7244-c419-4c24-b995-95ca760837a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.877 2 INFO nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Creating config drive at /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config#033[00m
Oct  2 04:27:39 np0005465604 nova_compute[260603]: 2025-10-02 08:27:39.888 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpldivgrxx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.031 2 DEBUG oslo_concurrency.processutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config 05cc7244-c419-4c24-b995-95ca760837a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.032 2 INFO nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Deleting local config drive /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4/disk.config because it was imported into RBD.#033[00m
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.051 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpldivgrxx" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.098 2 DEBUG nova.storage.rbd_utils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.105 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:40 np0005465604 NetworkManager[45129]: <info>  [1759393660.1503] manager: (tape978c120-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Oct  2 04:27:40 np0005465604 kernel: tape978c120-9b: entered promiscuous mode
Oct  2 04:27:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:40Z|00332|binding|INFO|Claiming lport e978c120-9b3a-4a48-b553-c38b05073ad9 for this chassis.
Oct  2 04:27:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:40Z|00333|binding|INFO|e978c120-9b3a-4a48-b553-c38b05073ad9: Claiming fa:16:3e:86:a6:e3 10.100.0.10
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.169 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:a6:e3 10.100.0.10'], port_security=['fa:16:3e:86:a6:e3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '05cc7244-c419-4c24-b995-95ca760837a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e978c120-9b3a-4a48-b553-c38b05073ad9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.170 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e978c120-9b3a-4a48-b553-c38b05073ad9 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 bound to our chassis#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.172 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.186 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fb1063-69a6-4581-8981-4803e75d2afc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.187 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcce7e8b6-91 in ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.189 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcce7e8b6-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.190 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d8da88e2-ec19-4342-9c28-e28f4b36f5fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.194 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b142023e-a269-484c-981a-42c78bc79b20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.204 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[47d20c7b-1f71-485c-a789-3f7e924cbfdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 systemd-machined[214636]: New machine qemu-46-instance-0000002b.
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.221 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e69495-c759-4220-854c-1480bdc12ad0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 systemd[1]: Started Virtual Machine qemu-46-instance-0000002b.
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.256 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f205a6d3-769a-4d62-9701-77ac05e682cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 systemd-udevd[308129]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.261 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[722205e6-92f9-43b2-94f8-ca31a4849e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 systemd-udevd[308132]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:27:40 np0005465604 NetworkManager[45129]: <info>  [1759393660.2640] manager: (tapcce7e8b6-90): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Oct  2 04:27:40 np0005465604 NetworkManager[45129]: <info>  [1759393660.2802] device (tape978c120-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:27:40 np0005465604 NetworkManager[45129]: <info>  [1759393660.2808] device (tape978c120-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:40Z|00334|binding|INFO|Setting lport e978c120-9b3a-4a48-b553-c38b05073ad9 ovn-installed in OVS
Oct  2 04:27:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:40Z|00335|binding|INFO|Setting lport e978c120-9b3a-4a48-b553-c38b05073ad9 up in Southbound
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.298 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[32967c7e-7325-45f7-8b20-fd5738a8a615]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 podman[308081]: 2025-10-02 08:27:40.303280194 +0000 UTC m=+0.107761070 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.301 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2e455377-9981-42bc-8da7-df20a6627120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 NetworkManager[45129]: <info>  [1759393660.3306] device (tapcce7e8b6-90): carrier: link connected
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.336 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[83993b30-2458-47c7-a653-7e3cf9990059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 podman[308075]: 2025-10-02 08:27:40.348081746 +0000 UTC m=+0.157545067 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.358 2 DEBUG oslo_concurrency.processutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.359 2 INFO nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Deleting local config drive /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/disk.config because it was imported into RBD.#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.363 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a45faa67-4f1f-462c-b79a-28baaf327f00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308174, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.377 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f46b66c7-7a28-4d26-86d3-cca5c183388f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:5a77'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453893, 'tstamp': 453893}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308175, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.393 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e8fb11-d5f8-4fb1-9f13-32f411bc11d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308179, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 kernel: tap136aeb8e-de: entered promiscuous mode
Oct  2 04:27:40 np0005465604 NetworkManager[45129]: <info>  [1759393660.4124] manager: (tap136aeb8e-de): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Oct  2 04:27:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:40Z|00336|binding|INFO|Claiming lport 136aeb8e-dedd-4cd8-a72d-1c4309716daf for this chassis.
Oct  2 04:27:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:40Z|00337|binding|INFO|136aeb8e-dedd-4cd8-a72d-1c4309716daf: Claiming fa:16:3e:0b:59:2a 10.100.0.3
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:40 np0005465604 systemd-udevd[308154]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.424 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:59:2a 10.100.0.3'], port_security=['fa:16:3e:0b:59:2a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1616ad3a-ff5f-4423-8dd9-f2ff5717f8c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=136aeb8e-dedd-4cd8-a72d-1c4309716daf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:40 np0005465604 NetworkManager[45129]: <info>  [1759393660.4269] device (tap136aeb8e-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:27:40 np0005465604 NetworkManager[45129]: <info>  [1759393660.4282] device (tap136aeb8e-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:27:40 np0005465604 systemd-machined[214636]: New machine qemu-47-instance-0000002a.
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.447 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[56ff942e-0c71-4cd6-9dc5-af0fb16cf69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 systemd[1]: Started Virtual Machine qemu-47-instance-0000002a.
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:40Z|00338|binding|INFO|Setting lport 136aeb8e-dedd-4cd8-a72d-1c4309716daf ovn-installed in OVS
Oct  2 04:27:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:40Z|00339|binding|INFO|Setting lport 136aeb8e-dedd-4cd8-a72d-1c4309716daf up in Southbound
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.534 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cf788e4c-9b34-4421-8dd6-e8caee0bdc89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.535 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.536 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.536 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce7e8b6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:40 np0005465604 NetworkManager[45129]: <info>  [1759393660.5384] manager: (tapcce7e8b6-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Oct  2 04:27:40 np0005465604 kernel: tapcce7e8b6-90: entered promiscuous mode
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.542 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcce7e8b6-90, col_values=(('external_ids', {'iface-id': '2a218cce-83be-4768-9f4e-7d61802765d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:40Z|00340|binding|INFO|Releasing lport 2a218cce-83be-4768-9f4e-7d61802765d4 from this chassis (sb_readonly=0)
Oct  2 04:27:40 np0005465604 nova_compute[260603]: 2025-10-02 08:27:40.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.566 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.567 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25f901c9-100b-416a-a725-494ed883b616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.569 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9.pid.haproxy
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:27:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:40.571 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'env', 'PROCESS_TAG=haproxy-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:27:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 153 op/s
Oct  2 04:27:41 np0005465604 podman[308296]: 2025-10-02 08:27:41.008336555 +0000 UTC m=+0.078090999 container create 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:27:41 np0005465604 podman[308296]: 2025-10-02 08:27:40.959730574 +0000 UTC m=+0.029485028 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.057 2 DEBUG nova.compute.manager [req-449c98a0-d05f-4b0e-83ca-d939840adca6 req-acce05f6-0e27-4dfc-8413-299113b87656 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.058 2 DEBUG oslo_concurrency.lockutils [req-449c98a0-d05f-4b0e-83ca-d939840adca6 req-acce05f6-0e27-4dfc-8413-299113b87656 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.059 2 DEBUG oslo_concurrency.lockutils [req-449c98a0-d05f-4b0e-83ca-d939840adca6 req-acce05f6-0e27-4dfc-8413-299113b87656 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.059 2 DEBUG oslo_concurrency.lockutils [req-449c98a0-d05f-4b0e-83ca-d939840adca6 req-acce05f6-0e27-4dfc-8413-299113b87656 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.059 2 DEBUG nova.compute.manager [req-449c98a0-d05f-4b0e-83ca-d939840adca6 req-acce05f6-0e27-4dfc-8413-299113b87656 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Processing event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:27:41 np0005465604 systemd[1]: Started libpod-conmon-685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96.scope.
Oct  2 04:27:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:27:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4204cebb47775c2a9fbcd07d14a4cbaafc0d93423478e2e45ec500726da6ccd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:41 np0005465604 podman[308296]: 2025-10-02 08:27:41.124078381 +0000 UTC m=+0.193832855 container init 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:27:41 np0005465604 podman[308296]: 2025-10-02 08:27:41.136022013 +0000 UTC m=+0.205776447 container start 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 04:27:41 np0005465604 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [NOTICE]   (308332) : New worker (308334) forked
Oct  2 04:27:41 np0005465604 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [NOTICE]   (308332) : Loading success.
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.204 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 136aeb8e-dedd-4cd8-a72d-1c4309716daf in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.208 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.227 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[70e88fa9-1b5a-4823-b77b-f5126edccd42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.228 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfa1bff6d-11 in ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.231 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfa1bff6d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.231 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[141b3c48-bb28-4081-9827-6d33d5f0aa82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.233 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa91d2fd-f9ab-44c0-b564-4dabad80eb2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.251 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[80bac23d-e577-4bcd-84f0-3f9428b98b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.271 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8723cdf2-9947-4c21-b6c6-bb3dc61760a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.289 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393661.2868018, 05cc7244-c419-4c24-b995-95ca760837a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.290 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] VM Started (Lifecycle Event)#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.301 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.311 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[627a2c64-f8a7-4714-ba15-735765f74408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.312 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.315 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.320 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7275ec25-69a3-4753-91ec-8dd0e2e8ec58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 NetworkManager[45129]: <info>  [1759393661.3212] manager: (tapfa1bff6d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/152)
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.325 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.328 2 INFO nova.virt.libvirt.driver [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Instance spawned successfully.#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.328 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.348 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.348 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393661.2870655, 05cc7244-c419-4c24-b995-95ca760837a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.348 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.352 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.352 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.353 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.353 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.353 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.354 2 DEBUG nova.virt.libvirt.driver [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.362 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec72e85-0a4f-47a6-9694-9bbe71213f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.367 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[24afd838-8a25-45f8-bbb6-684dea71670b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.382 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.385 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393661.3144326, 05cc7244-c419-4c24-b995-95ca760837a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.385 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.405 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:41 np0005465604 NetworkManager[45129]: <info>  [1759393661.4083] device (tapfa1bff6d-10): carrier: link connected
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.410 2 INFO nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Took 9.40 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.410 2 DEBUG nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.411 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.416 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2a23b1-cbc5-4088-86e9-412bcf21bb0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.435 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.441 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b456d8ec-a5e8-4d7d-bc7c-d9ff603e447c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308353, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.463 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ce4f96-f439-4755-ba7d-f579b2ff33bb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:c92f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454001, 'tstamp': 454001}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308354, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.468 2 INFO nova.compute.manager [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Took 11.00 seconds to build instance.#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.481 2 DEBUG oslo_concurrency.lockutils [None req-9379046b-2530-48b1-b177-c6436e8e817c 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.484 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[554a5bf7-5aa6-4d61-a3ec-4aaac45df4be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308355, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.525 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc8e9b6-8ffe-457c-a9d2-d41a61b44305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.570 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393661.5695255, f13ff7c1-d7d3-443e-9f06-69f8c466af30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.570 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] VM Started (Lifecycle Event)#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.590 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.595 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393661.569632, f13ff7c1-d7d3-443e-9f06-69f8c466af30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.596 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.609 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b15c4f23-c68c-4d34-8c67-6f71d1c2b812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.611 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.612 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.613 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:41 np0005465604 kernel: tapfa1bff6d-10: entered promiscuous mode
Oct  2 04:27:41 np0005465604 NetworkManager[45129]: <info>  [1759393661.6166] manager: (tapfa1bff6d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.619 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.624 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:41Z|00341|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.626 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.633 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.634 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[646c4ce4-5ac7-4086-bc36-19cd7e18905c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.635 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/fa1bff6d-19fb-4792-a261-4da1165d95a1.pid.haproxy
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID fa1bff6d-19fb-4792-a261-4da1165d95a1
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:27:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:41.637 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'env', 'PROCESS_TAG=haproxy-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fa1bff6d-19fb-4792-a261-4da1165d95a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.650 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.723 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated VIF entry in instance network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.723 2 DEBUG nova.network.neutron [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:41 np0005465604 nova_compute[260603]: 2025-10-02 08:27:41.740 2 DEBUG oslo_concurrency.lockutils [req-6c0579c4-ebff-4cf1-a796-636659a6a635 req-a019a87a-2c1f-4fb7-9f18-085cebc98111 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:42 np0005465604 podman[308387]: 2025-10-02 08:27:42.030676796 +0000 UTC m=+0.073136805 container create 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:27:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:42 np0005465604 podman[308387]: 2025-10-02 08:27:41.989399753 +0000 UTC m=+0.031859712 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:27:42 np0005465604 systemd[1]: Started libpod-conmon-2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57.scope.
Oct  2 04:27:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:27:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d474468b501cd6334914f2b4a9f295c2f9e45b4d98ca3d1d88fd0ebc5aaafcb2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:42 np0005465604 podman[308387]: 2025-10-02 08:27:42.134607395 +0000 UTC m=+0.177067294 container init 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 04:27:42 np0005465604 podman[308387]: 2025-10-02 08:27:42.142005115 +0000 UTC m=+0.184464994 container start 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:27:42 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [NOTICE]   (308407) : New worker (308409) forked
Oct  2 04:27:42 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [NOTICE]   (308407) : Loading success.
Oct  2 04:27:42 np0005465604 nova_compute[260603]: 2025-10-02 08:27:42.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:42 np0005465604 nova_compute[260603]: 2025-10-02 08:27:42.591 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:42 np0005465604 nova_compute[260603]: 2025-10-02 08:27:42.592 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:42 np0005465604 nova_compute[260603]: 2025-10-02 08:27:42.614 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:27:42 np0005465604 nova_compute[260603]: 2025-10-02 08:27:42.710 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:42 np0005465604 nova_compute[260603]: 2025-10-02 08:27:42.711 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:42 np0005465604 nova_compute[260603]: 2025-10-02 08:27:42.717 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:27:42 np0005465604 nova_compute[260603]: 2025-10-02 08:27:42.717 2 INFO nova.compute.claims [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:27:42 np0005465604 nova_compute[260603]: 2025-10-02 08:27:42.864 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 168 op/s
Oct  2 04:27:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:27:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1189683259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.303 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.312 2 DEBUG nova.compute.provider_tree [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.338 2 DEBUG nova.scheduler.client.report [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.372 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.373 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.420 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.420 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.440 2 INFO nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.461 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.547 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.550 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.551 2 INFO nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Creating image(s)#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.600 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.644 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.688 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.696 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.743 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.744 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.744 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.745 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.745 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] No waiting events found dispatching network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.745 2 WARNING nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received unexpected event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.745 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.746 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.746 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.746 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.746 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Processing event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.747 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.747 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.747 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.748 2 DEBUG oslo_concurrency.lockutils [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.748 2 DEBUG nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.748 2 WARNING nova.compute.manager [req-55359829-559e-43ea-8506-12597c79b6e2 req-f6ea2aa4-e622-41d1-8b49-f6c0eb878875 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.749 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.763 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.764 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393663.7626524, f13ff7c1-d7d3-443e-9f06-69f8c466af30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.764 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.782 2 INFO nova.virt.libvirt.driver [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Instance spawned successfully.#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.783 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.788 2 DEBUG nova.policy [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ac6f72f7366459a86c086737b89ea69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f269abbe5769427dbf44c430d7529c04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.798 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.799 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.800 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.800 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.828 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.833 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.923 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.932 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.933 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.933 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.934 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.935 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.935 2 DEBUG nova.virt.libvirt.driver [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.947 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.968 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.993 2 INFO nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Took 12.81 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:27:43 np0005465604 nova_compute[260603]: 2025-10-02 08:27:43.995 2 DEBUG nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.060 2 INFO nova.compute.manager [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Took 14.02 seconds to build instance.#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.074 2 DEBUG oslo_concurrency.lockutils [None req-e9d0e2cc-6eca-4c92-a953-953fe099661a 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.086 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.121 2 DEBUG nova.compute.manager [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.121 2 DEBUG nova.compute.manager [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing instance network info cache due to event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.122 2 DEBUG oslo_concurrency.lockutils [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.122 2 DEBUG oslo_concurrency.lockutils [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.122 2 DEBUG nova.network.neutron [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.179 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] resizing rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.319 2 DEBUG nova.objects.instance [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'migration_context' on Instance uuid 84f8672c-7a2a-4307-a5c4-7e2968d84225 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.343 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.344 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Ensure instance console log exists: /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.344 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.345 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:44 np0005465604 nova_compute[260603]: 2025-10-02 08:27:44.345 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 144 op/s
Oct  2 04:27:45 np0005465604 nova_compute[260603]: 2025-10-02 08:27:45.292 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Successfully created port: d73ba644-fe1d-4d32-9e65-532dd96466b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.183 2 DEBUG nova.compute.manager [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.184 2 DEBUG nova.compute.manager [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing instance network info cache due to event network-changed-e978c120-9b3a-4a48-b553-c38b05073ad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.184 2 DEBUG oslo_concurrency.lockutils [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:46 np0005465604 NetworkManager[45129]: <info>  [1759393666.3290] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Oct  2 04:27:46 np0005465604 NetworkManager[45129]: <info>  [1759393666.3301] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.359 2 DEBUG nova.network.neutron [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updated VIF entry in instance network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.360 2 DEBUG nova.network.neutron [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updating instance_info_cache with network_info: [{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.382 2 DEBUG oslo_concurrency.lockutils [req-cfc0f5e7-ed2e-4594-ae32-525d3377096c req-59a1c2ae-3df9-4acf-bd0f-af8e1bc53e07 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.383 2 DEBUG oslo_concurrency.lockutils [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.383 2 DEBUG nova.network.neutron [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Refreshing network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:46Z|00342|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct  2 04:27:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:46Z|00343|binding|INFO|Releasing lport 2a218cce-83be-4768-9f4e-7d61802765d4 from this chassis (sb_readonly=0)
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.719 2 DEBUG nova.compute.manager [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.719 2 DEBUG nova.compute.manager [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.720 2 DEBUG oslo_concurrency.lockutils [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.720 2 DEBUG oslo_concurrency.lockutils [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:46 np0005465604 nova_compute[260603]: 2025-10-02 08:27:46.720 2 DEBUG nova.network.neutron [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:27:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 134 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 824 KiB/s wr, 112 op/s
Oct  2 04:27:47 np0005465604 podman[308607]: 2025-10-02 08:27:47.022155213 +0000 UTC m=+0.085889050 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:27:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:47 np0005465604 nova_compute[260603]: 2025-10-02 08:27:47.440 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Successfully updated port: d73ba644-fe1d-4d32-9e65-532dd96466b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:27:47 np0005465604 nova_compute[260603]: 2025-10-02 08:27:47.461 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:47 np0005465604 nova_compute[260603]: 2025-10-02 08:27:47.462 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:47 np0005465604 nova_compute[260603]: 2025-10-02 08:27:47.462 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:27:47 np0005465604 nova_compute[260603]: 2025-10-02 08:27:47.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:47 np0005465604 nova_compute[260603]: 2025-10-02 08:27:47.736 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393652.7103205, 3626202d-fd1e-497c-bbfd-ea0e7a7321c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:47 np0005465604 nova_compute[260603]: 2025-10-02 08:27:47.736 2 INFO nova.compute.manager [-] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:27:47 np0005465604 nova_compute[260603]: 2025-10-02 08:27:47.758 2 DEBUG nova.compute.manager [None req-448606df-39d8-49c9-ba8b-6b1bf32cfcde - - - - - -] [instance: 3626202d-fd1e-497c-bbfd-ea0e7a7321c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:48 np0005465604 nova_compute[260603]: 2025-10-02 08:27:48.648 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:27:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 181 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 224 op/s
Oct  2 04:27:49 np0005465604 nova_compute[260603]: 2025-10-02 08:27:49.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:50 np0005465604 nova_compute[260603]: 2025-10-02 08:27:50.840 2 DEBUG nova.compute.manager [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-changed-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:50 np0005465604 nova_compute[260603]: 2025-10-02 08:27:50.841 2 DEBUG nova.compute.manager [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Refreshing instance network info cache due to event network-changed-d73ba644-fe1d-4d32-9e65-532dd96466b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:27:50 np0005465604 nova_compute[260603]: 2025-10-02 08:27:50.841 2 DEBUG oslo_concurrency.lockutils [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 181 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct  2 04:27:51 np0005465604 podman[308629]: 2025-10-02 08:27:51.030610692 +0000 UTC m=+0.089717619 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 04:27:51 np0005465604 nova_compute[260603]: 2025-10-02 08:27:51.780 2 DEBUG nova.network.neutron [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updated VIF entry in instance network info cache for port e978c120-9b3a-4a48-b553-c38b05073ad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:27:51 np0005465604 nova_compute[260603]: 2025-10-02 08:27:51.781 2 DEBUG nova.network.neutron [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updating instance_info_cache with network_info: [{"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:51 np0005465604 nova_compute[260603]: 2025-10-02 08:27:51.814 2 DEBUG oslo_concurrency.lockutils [req-f1c82a9f-1948-4597-b0a2-37687f094692 req-f96d6176-52b2-4bd2-8db3-b0042b21165b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-05cc7244-c419-4c24-b995-95ca760837a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:52 np0005465604 nova_compute[260603]: 2025-10-02 08:27:52.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:52 np0005465604 nova_compute[260603]: 2025-10-02 08:27:52.862 2 DEBUG nova.network.neutron [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated VIF entry in instance network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:27:52 np0005465604 nova_compute[260603]: 2025-10-02 08:27:52.863 2 DEBUG nova.network.neutron [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:52 np0005465604 nova_compute[260603]: 2025-10-02 08:27:52.885 2 DEBUG oslo_concurrency.lockutils [req-e7318288-8629-4b19-ab08-36f8f633ca73 req-422c361d-0e2e-4396-a77b-4e8049993cc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 198 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 203 op/s
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.230 2 DEBUG nova.network.neutron [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Updating instance_info_cache with network_info: [{"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.252 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.253 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Instance network_info: |[{"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.253 2 DEBUG oslo_concurrency.lockutils [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.253 2 DEBUG nova.network.neutron [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Refreshing network info cache for port d73ba644-fe1d-4d32-9e65-532dd96466b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.257 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Start _get_guest_xml network_info=[{"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.262 2 WARNING nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.269 2 DEBUG nova.virt.libvirt.host [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.269 2 DEBUG nova.virt.libvirt.host [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.275 2 DEBUG nova.virt.libvirt.host [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.276 2 DEBUG nova.virt.libvirt.host [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.276 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.277 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.277 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.278 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.278 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.278 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.279 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.279 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.279 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.279 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.280 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.280 2 DEBUG nova.virt.hardware [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.283 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:53.579 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:53.587 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:27:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:27:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/150886568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.700 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.731 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.738 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.947 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.948 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:53 np0005465604 nova_compute[260603]: 2025-10-02 08:27:53.966 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.046 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.046 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.054 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.054 2 INFO nova.compute.claims [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:54Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:86:a6:e3 10.100.0.10
Oct  2 04:27:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:54Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:86:a6:e3 10.100.0.10
Oct  2 04:27:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:27:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1461974147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.157 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.160 2 DEBUG nova.virt.libvirt.vif [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1351338014',display_name='tempest-DeleteServersTestJSON-server-1351338014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1351338014',id=44,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-dl5bh0q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812
177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:43Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=84f8672c-7a2a-4307-a5c4-7e2968d84225,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.160 2 DEBUG nova.network.os_vif_util [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.162 2 DEBUG nova.network.os_vif_util [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.164 2 DEBUG nova.objects.instance [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84f8672c-7a2a-4307-a5c4-7e2968d84225 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.185 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  <uuid>84f8672c-7a2a-4307-a5c4-7e2968d84225</uuid>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  <name>instance-0000002c</name>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <nova:name>tempest-DeleteServersTestJSON-server-1351338014</nova:name>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:27:53</nova:creationTime>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <nova:user uuid="1ac6f72f7366459a86c086737b89ea69">tempest-DeleteServersTestJSON-812177785-project-member</nova:user>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <nova:project uuid="f269abbe5769427dbf44c430d7529c04">tempest-DeleteServersTestJSON-812177785</nova:project>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <nova:port uuid="d73ba644-fe1d-4d32-9e65-532dd96466b4">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <entry name="serial">84f8672c-7a2a-4307-a5c4-7e2968d84225</entry>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <entry name="uuid">84f8672c-7a2a-4307-a5c4-7e2968d84225</entry>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/84f8672c-7a2a-4307-a5c4-7e2968d84225_disk">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:e9:aa:14"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <target dev="tapd73ba644-fe"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/console.log" append="off"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:27:54 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:27:54 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:27:54 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:27:54 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
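The domain XML logged above defines an RBD-backed root disk whose `<source>` carries the pool/image name and the Ceph monitor endpoint. A minimal offline parsing sketch (the `DISK_XML` snippet below is a trimmed copy of the `<disk>` element from the log, and `rbd_source` is a hypothetical helper, not a Nova or libvirt API):

```python
import xml.etree.ElementTree as ET

# Trimmed reproduction of the <disk> element from the domain XML above
# (values copied from the log; this parses the snippet offline rather
# than calling into libvirt).
DISK_XML = """
<disk type="network" device="disk">
  <driver type="raw" cache="none"/>
  <source protocol="rbd" name="vms/84f8672c-7a2a-4307-a5c4-7e2968d84225_disk">
    <host name="192.168.122.100" port="6789"/>
  </source>
  <target dev="vda" bus="virtio"/>
</disk>
"""

def rbd_source(disk_xml: str) -> dict:
    """Extract pool, image, monitor endpoint and guest target from an rbd <disk>."""
    disk = ET.fromstring(disk_xml)
    source = disk.find("source")
    host = source.find("host")
    pool, image = source.get("name").split("/", 1)
    return {
        "pool": pool,
        "image": image,
        "mon": f'{host.get("name")}:{host.get("port")}',
        "target": disk.find("target").get("dev"),
    }

print(rbd_source(DISK_XML))
```

The same approach applies to the cdrom `<disk>` that carries the `_disk.config` config-drive image.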
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.185 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Preparing to wait for external event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.185 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.186 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.186 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.187 2 DEBUG nova.virt.libvirt.vif [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1351338014',display_name='tempest-DeleteServersTestJSON-server-1351338014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1351338014',id=44,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-dl5bh0q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTe
stJSON-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:43Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=84f8672c-7a2a-4307-a5c4-7e2968d84225,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.187 2 DEBUG nova.network.os_vif_util [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.187 2 DEBUG nova.network.os_vif_util [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.188 2 DEBUG os_vif [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.189 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.189 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd73ba644-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd73ba644-fe, col_values=(('external_ids', {'iface-id': 'd73ba644-fe1d-4d32-9e65-532dd96466b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:aa:14', 'vm-uuid': '84f8672c-7a2a-4307-a5c4-7e2968d84225'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:54 np0005465604 NetworkManager[45129]: <info>  [1759393674.1994] manager: (tapd73ba644-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.206 2 INFO os_vif [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe')#033[00m
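The `DbSetCommand` in the plug sequence above writes `external_ids` onto the OVS `Interface` record for `tapd73ba644-fe`; ovn-controller matches `iface-id` against the Neutron port UUID to bind the port. A small sketch of how that mapping is derived from the VIF shown in the log (the helper name is hypothetical; os-vif's real OVS plugin performs this through ovsdbapp transactions):

```python
# Offline sketch of the external_ids mapping that the DbSetCommand above
# sets on the OVS Interface record. Values are taken from the log lines;
# ovs_external_ids is an illustrative helper, not an os-vif API.
def ovs_external_ids(vif_id: str, mac: str, instance_uuid: str) -> dict:
    return {
        "iface-id": vif_id,        # Neutron port UUID; OVN binds on this
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": instance_uuid,
    }

ids = ovs_external_ids(
    "d73ba644-fe1d-4d32-9e65-532dd96466b4",
    "fa:16:3e:e9:aa:14",
    "84f8672c-7a2a-4307-a5c4-7e2968d84225",
)
print(ids)
```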
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.228 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:54 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.299 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.300 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.300 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No VIF found with MAC fa:16:3e:e9:aa:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.301 2 INFO nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Using config drive#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.325 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:27:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3905763339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.674 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.678 2 DEBUG nova.compute.provider_tree [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.701 2 DEBUG nova.scheduler.client.report [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
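Placement derives schedulable capacity from the inventory data reported above as `(total - reserved) * allocation_ratio` per resource class. A quick sketch with the values from the log line (for example, 8 VCPUs at a 4.0 ratio yield 32 schedulable VCPUs):

```python
# Capacity formula Placement applies to each resource class in the
# inventory logged above: (total - reserved) * allocation_ratio.
def capacity(total: int, reserved: int, ratio: float) -> float:
    return (total - reserved) * ratio

inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    print(rc, capacity(inv["total"], inv["reserved"], inv["allocation_ratio"]))
```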
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.717 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.718 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.734 2 INFO nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Creating config drive at /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.738 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8mpgimwr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:54Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:59:2a 10.100.0.3
Oct  2 04:27:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:54Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:59:2a 10.100.0.3
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.777 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.777 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.801 2 INFO nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.818 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.850 2 DEBUG nova.network.neutron [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Updated VIF entry in instance network info cache for port d73ba644-fe1d-4d32-9e65-532dd96466b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.850 2 DEBUG nova.network.neutron [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Updating instance_info_cache with network_info: [{"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.868 2 DEBUG oslo_concurrency.lockutils [req-af5af16d-ee85-44dc-848e-ca13680cd0c1 req-2f719f39-088f-4f05-85a9-2cd1094ac9d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-84f8672c-7a2a-4307-a5c4-7e2968d84225" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.886 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8mpgimwr" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
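The mkisofs command above packs the staged metadata directory into a Joliet/Rock Ridge ISO labeled `config-2`, the volume label cloud-init probes for when mounting the config drive. A sketch of assembling that argv list (paths and publisher string copied from the log; `mkisofs_cmd` is an illustrative helper that mirrors, rather than calls, Nova's config-drive code):

```python
# Assemble the mkisofs argv shown in the log. This only builds the
# command list; it does not execute mkisofs.
def mkisofs_cmd(output: str, tmpdir: str, publisher: str) -> list:
    return [
        "/usr/bin/mkisofs",
        "-o", output,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,
        "-quiet", "-J", "-r",
        "-V", "config-2",   # volume label cloud-init looks for
        tmpdir,
    ]

cmd = mkisofs_cmd(
    "/var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config",
    "/tmp/tmp8mpgimwr",
    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
)
print(" ".join(cmd))
```

On an RBD-backed node the resulting local ISO is then pushed into Ceph (the `rbd import --pool vms ...` call a few lines below) and the local copy deleted.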
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.905 2 DEBUG nova.storage.rbd_utils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.910 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 206 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 202 op/s
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.936 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.937 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.938 2 INFO nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Creating image(s)#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.958 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:54 np0005465604 nova_compute[260603]: 2025-10-02 08:27:54.979 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.010 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.018 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.055 2 DEBUG oslo_concurrency.processutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config 84f8672c-7a2a-4307-a5c4-7e2968d84225_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.056 2 INFO nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Deleting local config drive /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225/disk.config because it was imported into RBD.#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.093 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.094 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.094 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.095 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:55 np0005465604 kernel: tapd73ba644-fe: entered promiscuous mode
Oct  2 04:27:55 np0005465604 NetworkManager[45129]: <info>  [1759393675.1589] manager: (tapd73ba644-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/157)
Oct  2 04:27:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:55Z|00344|binding|INFO|Claiming lport d73ba644-fe1d-4d32-9e65-532dd96466b4 for this chassis.
Oct  2 04:27:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:55Z|00345|binding|INFO|d73ba644-fe1d-4d32-9e65-532dd96466b4: Claiming fa:16:3e:e9:aa:14 10.100.0.12
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.170 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:aa:14 10.100.0.12'], port_security=['fa:16:3e:e9:aa:14 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '84f8672c-7a2a-4307-a5c4-7e2968d84225', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d73ba644-fe1d-4d32-9e65-532dd96466b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.171 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d73ba644-fe1d-4d32-9e65-532dd96466b4 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca bound to our chassis#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.173 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca#033[00m
Oct  2 04:27:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:55Z|00346|binding|INFO|Setting lport d73ba644-fe1d-4d32-9e65-532dd96466b4 ovn-installed in OVS
Oct  2 04:27:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:55Z|00347|binding|INFO|Setting lport d73ba644-fe1d-4d32-9e65-532dd96466b4 up in Southbound
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.183 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.195 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab2d2a1-6d41-4760-914f-3197c9ff0d95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.196 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa72ac8c9-11 in ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.199 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa72ac8c9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.199 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dabfaaa3-b265-4491-9a99-99f98f464c7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.200 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0fbd48b5-362f-45ec-8de2-a40595949b6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.204 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 247e32e5-5f07-4db4-9e6f-dcfade745228_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:55 np0005465604 systemd-udevd[308881]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:27:55 np0005465604 systemd-machined[214636]: New machine qemu-48-instance-0000002c.
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.219 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e2f1f5-e537-4231-bcc1-59188bc05552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 NetworkManager[45129]: <info>  [1759393675.2247] device (tapd73ba644-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:27:55 np0005465604 NetworkManager[45129]: <info>  [1759393675.2254] device (tapd73ba644-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:27:55 np0005465604 systemd[1]: Started Virtual Machine qemu-48-instance-0000002c.
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.240 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e284654a-7500-47b5-8f20-d375af739629]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.257 2 DEBUG nova.policy [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.274 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1303eb55-d7ce-4e64-8aa6-a662cba192e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.282 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8f3c18-ec4a-4d32-b03e-76a9e2e6fb5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 NetworkManager[45129]: <info>  [1759393675.2836] manager: (tapa72ac8c9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/158)
Oct  2 04:27:55 np0005465604 systemd-udevd[308886]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.335 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6573aabd-f1e0-432a-bb4a-5cec7d2673cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.338 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5f12a2-f638-4a4c-be54-57d51a6925ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 NetworkManager[45129]: <info>  [1759393675.3700] device (tapa72ac8c9-10): carrier: link connected
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.378 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec85b9e-7d8c-4d2c-8c69-6e5215344458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0c7d63-dbb8-4631-806a-218bf95550ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455397, 'reachable_time': 23563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308933, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d142e9-d927-4269-bba6-91c94db687ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:61d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455397, 'tstamp': 455397}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308934, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.473 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[629304f2-62ae-40db-a84a-69347302feb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455397, 'reachable_time': 23563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308935, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.518 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3778c708-d845-4156-a3e3-060a9720e5e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.532 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 247e32e5-5f07-4db4-9e6f-dcfade745228_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.581 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9715f7c4-8374-4c00-8ac3-15058d7c7e10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.583 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.584 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.585 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa72ac8c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:55 np0005465604 NetworkManager[45129]: <info>  [1759393675.5878] manager: (tapa72ac8c9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Oct  2 04:27:55 np0005465604 kernel: tapa72ac8c9-10: entered promiscuous mode
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.593 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa72ac8c9-10, col_values=(('external_ids', {'iface-id': 'f9acec59-0200-4a1d-84e4-06e67c730498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:27:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:27:55Z|00348|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.598 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.600 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a26d31-b4cd-4e73-a703-c1022eb26922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.601 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:27:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:27:55.603 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'env', 'PROCESS_TAG=haproxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.639 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] resizing rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.734 2 DEBUG nova.objects.instance [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'migration_context' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.750 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.751 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Ensure instance console log exists: /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.752 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.752 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:55 np0005465604 nova_compute[260603]: 2025-10-02 08:27:55.752 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:56 np0005465604 podman[309081]: 2025-10-02 08:27:56.005504084 +0000 UTC m=+0.047243148 container create a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:27:56 np0005465604 systemd[1]: Started libpod-conmon-a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf.scope.
Oct  2 04:27:56 np0005465604 podman[309081]: 2025-10-02 08:27:55.981254071 +0000 UTC m=+0.022993155 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:27:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:27:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59eb34d5cb9d5ee03edb075070edb4d31fba25c937c3d835829c09d4788425a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:27:56 np0005465604 podman[309081]: 2025-10-02 08:27:56.104606274 +0000 UTC m=+0.146345358 container init a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:27:56 np0005465604 podman[309081]: 2025-10-02 08:27:56.109790625 +0000 UTC m=+0.151529689 container start a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:27:56 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [NOTICE]   (309100) : New worker (309102) forked
Oct  2 04:27:56 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [NOTICE]   (309100) : Loading success.
Oct  2 04:27:56 np0005465604 nova_compute[260603]: 2025-10-02 08:27:56.240 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Successfully created port: 262f44f5-df56-4176-96b2-4819d8b7e258 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:27:56 np0005465604 nova_compute[260603]: 2025-10-02 08:27:56.273 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393676.273379, 84f8672c-7a2a-4307-a5c4-7e2968d84225 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:56 np0005465604 nova_compute[260603]: 2025-10-02 08:27:56.274 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] VM Started (Lifecycle Event)#033[00m
Oct  2 04:27:56 np0005465604 nova_compute[260603]: 2025-10-02 08:27:56.290 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:56 np0005465604 nova_compute[260603]: 2025-10-02 08:27:56.294 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393676.2759104, 84f8672c-7a2a-4307-a5c4-7e2968d84225 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:56 np0005465604 nova_compute[260603]: 2025-10-02 08:27:56.294 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:27:56 np0005465604 nova_compute[260603]: 2025-10-02 08:27:56.312 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:56 np0005465604 nova_compute[260603]: 2025-10-02 08:27:56.315 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:27:56 np0005465604 nova_compute[260603]: 2025-10-02 08:27:56.331 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:27:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 206 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 156 op/s
Oct  2 04:27:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:27:57 np0005465604 nova_compute[260603]: 2025-10-02 08:27:57.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.025 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.025 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.045 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.133 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.133 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.140 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.141 2 INFO nova.compute.claims [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.205 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Successfully updated port: 262f44f5-df56-4176-96b2-4819d8b7e258 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.228 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.228 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.229 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.281 2 DEBUG nova.compute.manager [req-1f96fa62-3720-4ee3-9341-160117d83831 req-343f1b45-faa1-48ef-aaf5-f39732c81622 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.281 2 DEBUG oslo_concurrency.lockutils [req-1f96fa62-3720-4ee3-9341-160117d83831 req-343f1b45-faa1-48ef-aaf5-f39732c81622 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.281 2 DEBUG oslo_concurrency.lockutils [req-1f96fa62-3720-4ee3-9341-160117d83831 req-343f1b45-faa1-48ef-aaf5-f39732c81622 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.282 2 DEBUG oslo_concurrency.lockutils [req-1f96fa62-3720-4ee3-9341-160117d83831 req-343f1b45-faa1-48ef-aaf5-f39732c81622 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.282 2 DEBUG nova.compute.manager [req-1f96fa62-3720-4ee3-9341-160117d83831 req-343f1b45-faa1-48ef-aaf5-f39732c81622 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Processing event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.282 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.286 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393678.2858777, 84f8672c-7a2a-4307-a5c4-7e2968d84225 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.286 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.287 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.293 2 INFO nova.virt.libvirt.driver [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Instance spawned successfully.#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.293 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.312 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.318 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.323 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.323 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.324 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.324 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.324 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.325 2 DEBUG nova.virt.libvirt.driver [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.356 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.363 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.402 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.409 2 INFO nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Took 14.86 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.409 2 DEBUG nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.469 2 INFO nova.compute.manager [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Took 15.79 seconds to build instance.#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.485 2 DEBUG oslo_concurrency.lockutils [None req-b76118bd-66de-43c8-90a9-40343ed229c6 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:27:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357591501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.833 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.840 2 DEBUG nova.compute.provider_tree [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.855 2 DEBUG nova.scheduler.client.report [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.889 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.890 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:27:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 293 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 7.8 MiB/s wr, 277 op/s
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.948 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.949 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.974 2 INFO nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:27:58 np0005465604 nova_compute[260603]: 2025-10-02 08:27:58.998 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.090 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.091 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.092 2 INFO nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Creating image(s)#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.123 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.153 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.182 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.186 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.256 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.257 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.258 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.259 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.281 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.284 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 197184a1-4270-40c9-87b5-6eca7e832812_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.507 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 197184a1-4270-40c9-87b5-6eca7e832812_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.564 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] resizing rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.647 2 DEBUG nova.policy [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3e67f389787b453faa1dfdb728caea35', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd531839bafe441a391fb9161a54c74ee', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.654 2 DEBUG nova.objects.instance [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lazy-loading 'migration_context' on Instance uuid 197184a1-4270-40c9-87b5-6eca7e832812 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.675 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.675 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Ensure instance console log exists: /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.676 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.676 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:27:59 np0005465604 nova_compute[260603]: 2025-10-02 08:27:59.677 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.652 2 DEBUG nova.network.neutron [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.682 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.682 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Instance network_info: |[{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.684 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Start _get_guest_xml network_info=[{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.692 2 WARNING nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.697 2 DEBUG nova.virt.libvirt.host [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.698 2 DEBUG nova.virt.libvirt.host [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.702 2 DEBUG nova.virt.libvirt.host [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.702 2 DEBUG nova.virt.libvirt.host [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.702 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.703 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.703 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.703 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.703 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.704 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.704 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.704 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.704 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.705 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.705 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.705 2 DEBUG nova.virt.hardware [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.707 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.815 2 INFO nova.compute.manager [None req-02f57a2d-dbba-47af-aad6-d7d227e5059a 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Pausing#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.818 2 DEBUG nova.objects.instance [None req-02f57a2d-dbba-47af-aad6-d7d227e5059a 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'flavor' on Instance uuid 84f8672c-7a2a-4307-a5c4-7e2968d84225 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.854 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393680.853726, 84f8672c-7a2a-4307-a5c4-7e2968d84225 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.855 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.860 2 DEBUG nova.compute.manager [None req-02f57a2d-dbba-47af-aad6-d7d227e5059a 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.866 2 DEBUG nova.compute.manager [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.866 2 DEBUG nova.compute.manager [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing instance network info cache due to event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.868 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.868 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.869 2 DEBUG nova.network.neutron [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.883 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.894 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 293 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.0 MiB/s wr, 165 op/s
Oct  2 04:28:00 np0005465604 nova_compute[260603]: 2025-10-02 08:28:00.923 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct  2 04:28:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471208804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.172 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.209 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.215 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:01.591 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2035058678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.648 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.650 2 DEBUG nova.virt.libvirt.vif [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.650 2 DEBUG nova.network.os_vif_util [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.651 2 DEBUG nova.network.os_vif_util [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.653 2 DEBUG nova.objects.instance [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.684 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  <uuid>247e32e5-5f07-4db4-9e6f-dcfade745228</uuid>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  <name>instance-0000002d</name>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <nova:name>tempest-tempest.common.compute-instance-1204621376</nova:name>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:28:00</nova:creationTime>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <nova:port uuid="262f44f5-df56-4176-96b2-4819d8b7e258">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <entry name="serial">247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <entry name="uuid">247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:df:f8:c0"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <target dev="tap262f44f5-df"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log" append="off"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:28:01 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:28:01 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:28:01 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:28:01 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.685 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Preparing to wait for external event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.686 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.686 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.687 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.688 2 DEBUG nova.virt.libvirt.vif [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.688 2 DEBUG nova.network.os_vif_util [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.689 2 DEBUG nova.network.os_vif_util [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.690 2 DEBUG os_vif [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.693 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap262f44f5-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.700 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap262f44f5-df, col_values=(('external_ids', {'iface-id': '262f44f5-df56-4176-96b2-4819d8b7e258', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:f8:c0', 'vm-uuid': '247e32e5-5f07-4db4-9e6f-dcfade745228'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:01 np0005465604 NetworkManager[45129]: <info>  [1759393681.7043] manager: (tap262f44f5-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.711 2 INFO os_vif [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df')#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.776 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.777 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.778 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:df:f8:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.778 2 INFO nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Using config drive#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.808 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:01 np0005465604 nova_compute[260603]: 2025-10-02 08:28:01.953 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Successfully created port: b70e7dbb-3605-4af5-a977-d1c35f1ec20a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:28:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:02 np0005465604 nova_compute[260603]: 2025-10-02 08:28:02.609 2 INFO nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Creating config drive at /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config#033[00m
Oct  2 04:28:02 np0005465604 nova_compute[260603]: 2025-10-02 08:28:02.621 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvlg2_ot execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:02 np0005465604 nova_compute[260603]: 2025-10-02 08:28:02.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:02 np0005465604 nova_compute[260603]: 2025-10-02 08:28:02.780 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphvlg2_ot" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:02 np0005465604 nova_compute[260603]: 2025-10-02 08:28:02.824 2 DEBUG nova.storage.rbd_utils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] rbd image 247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:02 np0005465604 nova_compute[260603]: 2025-10-02 08:28:02.829 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config 247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 319 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.4 MiB/s wr, 235 op/s
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.027 2 DEBUG oslo_concurrency.processutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config 247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.029 2 INFO nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Deleting local config drive /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/disk.config because it was imported into RBD.#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.037 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.038 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.039 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.039 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.042 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.046 2 INFO nova.compute.manager [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Terminating instance#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.048 2 DEBUG nova.compute.manager [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:28:03 np0005465604 kernel: tapd73ba644-fe (unregistering): left promiscuous mode
Oct  2 04:28:03 np0005465604 NetworkManager[45129]: <info>  [1759393683.1096] device (tapd73ba644-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:28:03 np0005465604 kernel: tap262f44f5-df: entered promiscuous mode
Oct  2 04:28:03 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:28:03 np0005465604 NetworkManager[45129]: <info>  [1759393683.1146] manager: (tap262f44f5-df): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:03Z|00349|binding|INFO|Releasing lport d73ba644-fe1d-4d32-9e65-532dd96466b4 from this chassis (sb_readonly=0)
Oct  2 04:28:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:03Z|00350|binding|INFO|Setting lport d73ba644-fe1d-4d32-9e65-532dd96466b4 down in Southbound
Oct  2 04:28:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:03Z|00351|binding|INFO|Claiming lport 262f44f5-df56-4176-96b2-4819d8b7e258 for this chassis.
Oct  2 04:28:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:03Z|00352|binding|INFO|262f44f5-df56-4176-96b2-4819d8b7e258: Claiming fa:16:3e:df:f8:c0 10.100.0.14
Oct  2 04:28:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:03Z|00353|binding|INFO|Removing iface tapd73ba644-fe ovn-installed in OVS
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.130 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:f8:c0 10.100.0.14'], port_security=['fa:16:3e:df:f8:c0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '247e32e5-5f07-4db4-9e6f-dcfade745228', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1616ad3a-ff5f-4423-8dd9-f2ff5717f8c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=262f44f5-df56-4176-96b2-4819d8b7e258) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.141 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:aa:14 10.100.0.12'], port_security=['fa:16:3e:e9:aa:14 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '84f8672c-7a2a-4307-a5c4-7e2968d84225', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d73ba644-fe1d-4d32-9e65-532dd96466b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.144 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 262f44f5-df56-4176-96b2-4819d8b7e258 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.148 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:28:03 np0005465604 systemd-machined[214636]: New machine qemu-49-instance-0000002d.
Oct  2 04:28:03 np0005465604 systemd[1]: Started Virtual Machine qemu-49-instance-0000002d.
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.174 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cb178a-a719-4b83-93c3-c07965777ae7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Oct  2 04:28:03 np0005465604 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002c.scope: Consumed 3.531s CPU time.
Oct  2 04:28:03 np0005465604 systemd-machined[214636]: Machine qemu-48-instance-0000002c terminated.
Oct  2 04:28:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:03Z|00354|binding|INFO|Setting lport 262f44f5-df56-4176-96b2-4819d8b7e258 ovn-installed in OVS
Oct  2 04:28:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:03Z|00355|binding|INFO|Setting lport 262f44f5-df56-4176-96b2-4819d8b7e258 up in Southbound
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:03 np0005465604 systemd-udevd[309445]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.223 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb344d82-ffe7-487b-8924-16873ccdfa26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.228 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1e16960e-85e7-42a9-9e36-c781fc470eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 NetworkManager[45129]: <info>  [1759393683.2368] device (tap262f44f5-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:28:03 np0005465604 NetworkManager[45129]: <info>  [1759393683.2378] device (tap262f44f5-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.267 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ac98ac-9007-415b-bcdd-42ef05fa17a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.303 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e857eaa9-21f2-4ccb-b711-df125f6bf665]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309457, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.315 2 INFO nova.virt.libvirt.driver [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Instance destroyed successfully.#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.316 2 DEBUG nova.objects.instance [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'resources' on Instance uuid 84f8672c-7a2a-4307-a5c4-7e2968d84225 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.338 2 DEBUG nova.virt.libvirt.vif [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1351338014',display_name='tempest-DeleteServersTestJSON-server-1351338014',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1351338014',id=44,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-dl5bh0q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:00Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=84f8672c-7a2a-4307-a5c4-7e2968d84225,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.338 2 DEBUG nova.network.os_vif_util [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "address": "fa:16:3e:e9:aa:14", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd73ba644-fe", "ovs_interfaceid": "d73ba644-fe1d-4d32-9e65-532dd96466b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.339 2 DEBUG nova.network.os_vif_util [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.339 2 DEBUG os_vif [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd73ba644-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.339 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2d4e4f-fbff-4b03-8486-cac276e60671]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309467, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309467, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.347 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.348 2 INFO os_vif [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:aa:14,bridge_name='br-int',has_traffic_filtering=True,id=d73ba644-fe1d-4d32-9e65-532dd96466b4,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd73ba644-fe')#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.357 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.357 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.358 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.358 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.359 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d73ba644-fe1d-4d32-9e65-532dd96466b4 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca unbound from our chassis#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.361 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.362 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2f5401-8810-4513-ba65-6bb814b62295]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.362 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace which is not needed anymore#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:03 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [NOTICE]   (309100) : haproxy version is 2.8.14-c23fe91
Oct  2 04:28:03 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [NOTICE]   (309100) : path to executable is /usr/sbin/haproxy
Oct  2 04:28:03 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [WARNING]  (309100) : Exiting Master process...
Oct  2 04:28:03 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [ALERT]    (309100) : Current worker (309102) exited with code 143 (Terminated)
Oct  2 04:28:03 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[309096]: [WARNING]  (309100) : All workers exited. Exiting... (0)
Oct  2 04:28:03 np0005465604 systemd[1]: libpod-a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf.scope: Deactivated successfully.
Oct  2 04:28:03 np0005465604 podman[309523]: 2025-10-02 08:28:03.559258399 +0000 UTC m=+0.072900126 container died a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:28:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf-userdata-shm.mount: Deactivated successfully.
Oct  2 04:28:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-59eb34d5cb9d5ee03edb075070edb4d31fba25c937c3d835829c09d4788425a8-merged.mount: Deactivated successfully.
Oct  2 04:28:03 np0005465604 podman[309523]: 2025-10-02 08:28:03.605037511 +0000 UTC m=+0.118679228 container cleanup a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 04:28:03 np0005465604 systemd[1]: libpod-conmon-a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf.scope: Deactivated successfully.
Oct  2 04:28:03 np0005465604 podman[309580]: 2025-10-02 08:28:03.697489955 +0000 UTC m=+0.060525192 container remove a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.697 2 DEBUG nova.network.neutron [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updated VIF entry in instance network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.699 2 DEBUG nova.network.neutron [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.707 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42cbfb74-6577-4c4d-bc4f-8d2c93b98d89]: (4, ('Thu Oct  2 08:28:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf)\na6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf\nThu Oct  2 08:28:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (a6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf)\na6bf4540bf07174eb42980d0d03f43f19ed4c4b446a3ad147d81f18521d9c5bf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.709 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7e7c20-45c2-4320-9ccb-fa9053e71da6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.710 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.726 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.726 2 DEBUG nova.compute.manager [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.727 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.727 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.727 2 DEBUG oslo_concurrency.lockutils [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.727 2 DEBUG nova.compute.manager [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] No waiting events found dispatching network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.728 2 WARNING nova.compute.manager [req-2f3a5d34-2119-4ee8-99f4-53113a1ccf91 req-b5e0deae-5a72-4d30-89e6-db41a2504675 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received unexpected event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 for instance with vm_state active and task_state pausing.#033[00m
Oct  2 04:28:03 np0005465604 kernel: tapa72ac8c9-10: left promiscuous mode
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.761 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8dfc0f-929c-4704-83e3-12f18acfe0f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.773 2 INFO nova.virt.libvirt.driver [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Deleting instance files /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225_del#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.774 2 INFO nova.virt.libvirt.driver [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Deletion of /var/lib/nova/instances/84f8672c-7a2a-4307-a5c4-7e2968d84225_del complete#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.780 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83f4182a-b3e5-48ae-aefa-bc8a84ca6f83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.782 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3314fd-bb56-40a4-8f8d-63d88c638ee0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.807 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b8c77e-cc0f-42cd-84dc-b890346bf8f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455387, 'reachable_time': 33571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309594, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 systemd[1]: run-netns-ovnmeta\x2da72ac8c9\x2d16ee\x2d4ec0\x2db23d\x2d2741fda000ca.mount: Deactivated successfully.
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.812 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:28:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:03.812 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f52c60ab-4367-4e28-87a3-9f0cde7ca63c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.844 2 INFO nova.compute.manager [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.845 2 DEBUG oslo.service.loopingcall [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.845 2 DEBUG nova.compute.manager [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:28:03 np0005465604 nova_compute[260603]: 2025-10-02 08:28:03.845 2 DEBUG nova.network.neutron [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.017 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393684.017244, 247e32e5-5f07-4db4-9e6f-dcfade745228 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.018 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] VM Started (Lifecycle Event)#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.047 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.052 2 DEBUG nova.compute.manager [req-f2011652-ab4e-4ecf-ab73-4b822a4f5bf8 req-e38e4c38-05d3-450f-bc06-cb533338c26d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.052 2 DEBUG oslo_concurrency.lockutils [req-f2011652-ab4e-4ecf-ab73-4b822a4f5bf8 req-e38e4c38-05d3-450f-bc06-cb533338c26d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.052 2 DEBUG oslo_concurrency.lockutils [req-f2011652-ab4e-4ecf-ab73-4b822a4f5bf8 req-e38e4c38-05d3-450f-bc06-cb533338c26d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.053 2 DEBUG oslo_concurrency.lockutils [req-f2011652-ab4e-4ecf-ab73-4b822a4f5bf8 req-e38e4c38-05d3-450f-bc06-cb533338c26d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.053 2 DEBUG nova.compute.manager [req-f2011652-ab4e-4ecf-ab73-4b822a4f5bf8 req-e38e4c38-05d3-450f-bc06-cb533338c26d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Processing event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.054 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.055 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393684.0173428, 247e32e5-5f07-4db4-9e6f-dcfade745228 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.056 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.059 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.062 2 INFO nova.virt.libvirt.driver [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Instance spawned successfully.#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.062 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.081 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.091 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393684.0592353, 247e32e5-5f07-4db4-9e6f-dcfade745228 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.092 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.095 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.095 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.096 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.096 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.097 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.097 2 DEBUG nova.virt.libvirt.driver [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.127 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.130 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.160 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.173 2 INFO nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Took 9.24 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.174 2 DEBUG nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.257 2 INFO nova.compute.manager [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Took 10.24 seconds to build instance.#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.275 2 DEBUG oslo_concurrency.lockutils [None req-bbd4be78-87c1-4646-8c92-18dd39843f73 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.414 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Successfully updated port: b70e7dbb-3605-4af5-a977-d1c35f1ec20a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.428 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.429 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquired lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.429 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.649 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.812 2 DEBUG nova.compute.manager [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-unplugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.813 2 DEBUG oslo_concurrency.lockutils [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.814 2 DEBUG oslo_concurrency.lockutils [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.814 2 DEBUG oslo_concurrency.lockutils [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.814 2 DEBUG nova.compute.manager [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] No waiting events found dispatching network-vif-unplugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.815 2 DEBUG nova.compute.manager [req-0e93545d-b6f5-4db0-a7bd-fc8be2a0127b req-5fbcaf4c-8698-4378-85a9-f9d4c4792e5d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-unplugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.838 2 DEBUG nova.network.neutron [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.855 2 INFO nova.compute.manager [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Took 1.01 seconds to deallocate network for instance.#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.894 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:04 np0005465604 nova_compute[260603]: 2025-10-02 08:28:04.895 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 339 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.3 MiB/s wr, 228 op/s
Oct  2 04:28:05 np0005465604 nova_compute[260603]: 2025-10-02 08:28:05.037 2 DEBUG oslo_concurrency.processutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121913684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:05 np0005465604 nova_compute[260603]: 2025-10-02 08:28:05.558 2 DEBUG oslo_concurrency.processutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:05 np0005465604 nova_compute[260603]: 2025-10-02 08:28:05.563 2 DEBUG nova.compute.provider_tree [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:05 np0005465604 nova_compute[260603]: 2025-10-02 08:28:05.587 2 DEBUG nova.scheduler.client.report [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:05 np0005465604 nova_compute[260603]: 2025-10-02 08:28:05.643 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:05 np0005465604 nova_compute[260603]: 2025-10-02 08:28:05.686 2 INFO nova.scheduler.client.report [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Deleted allocations for instance 84f8672c-7a2a-4307-a5c4-7e2968d84225#033[00m
Oct  2 04:28:05 np0005465604 nova_compute[260603]: 2025-10-02 08:28:05.786 2 DEBUG oslo_concurrency.lockutils [None req-a6b84aa8-8c74-4269-b1d2-726dd79f3c94 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.365 2 DEBUG nova.network.neutron [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Updating instance_info_cache with network_info: [{"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.438 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Releasing lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.439 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Instance network_info: |[{"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.444 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Start _get_guest_xml network_info=[{"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.452 2 WARNING nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.458 2 DEBUG nova.virt.libvirt.host [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.459 2 DEBUG nova.virt.libvirt.host [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.463 2 DEBUG nova.virt.libvirt.host [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.464 2 DEBUG nova.virt.libvirt.host [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.465 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.466 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.467 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.467 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.468 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.469 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.469 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.470 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.471 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.471 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.472 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.472 2 DEBUG nova.virt.hardware [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.479 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.833 2 DEBUG nova.compute.manager [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.834 2 DEBUG oslo_concurrency.lockutils [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.835 2 DEBUG oslo_concurrency.lockutils [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.835 2 DEBUG oslo_concurrency.lockutils [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.836 2 DEBUG nova.compute.manager [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.836 2 WARNING nova.compute.manager [req-f7fde9a9-c332-44a4-987c-e566e2487965 req-c00a6e39-7291-48ab-8858-31ed4098877a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:28:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 339 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 214 op/s
Oct  2 04:28:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2301358748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.966 2 DEBUG nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-changed-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.967 2 DEBUG nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Refreshing instance network info cache due to event network-changed-b70e7dbb-3605-4af5-a977-d1c35f1ec20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.968 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.968 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.969 2 DEBUG nova.network.neutron [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Refreshing network info cache for port b70e7dbb-3605-4af5-a977-d1c35f1ec20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:06 np0005465604 nova_compute[260603]: 2025-10-02 08:28:06.983 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.018 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.024 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035342952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.499 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.502 2 DEBUG nova.virt.libvirt.vif [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1439041942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1439041942',id=46,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d531839bafe441a391fb9161a54c74ee',ramdisk_id='',reservation_id='r-3hltheaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-588685837',owner_user_name='tempest-InstanceActionsV221TestJSON-588685837-project-member
'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:59Z,user_data=None,user_id='3e67f389787b453faa1dfdb728caea35',uuid=197184a1-4270-40c9-87b5-6eca7e832812,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.503 2 DEBUG nova.network.os_vif_util [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converting VIF {"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.504 2 DEBUG nova.network.os_vif_util [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.506 2 DEBUG nova.objects.instance [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lazy-loading 'pci_devices' on Instance uuid 197184a1-4270-40c9-87b5-6eca7e832812 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.527 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  <uuid>197184a1-4270-40c9-87b5-6eca7e832812</uuid>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  <name>instance-0000002e</name>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <nova:name>tempest-InstanceActionsV221TestJSON-server-1439041942</nova:name>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:28:06</nova:creationTime>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <nova:user uuid="3e67f389787b453faa1dfdb728caea35">tempest-InstanceActionsV221TestJSON-588685837-project-member</nova:user>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <nova:project uuid="d531839bafe441a391fb9161a54c74ee">tempest-InstanceActionsV221TestJSON-588685837</nova:project>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <nova:port uuid="b70e7dbb-3605-4af5-a977-d1c35f1ec20a">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <entry name="serial">197184a1-4270-40c9-87b5-6eca7e832812</entry>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <entry name="uuid">197184a1-4270-40c9-87b5-6eca7e832812</entry>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/197184a1-4270-40c9-87b5-6eca7e832812_disk">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/197184a1-4270-40c9-87b5-6eca7e832812_disk.config">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:e7:85:5d"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <target dev="tapb70e7dbb-36"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/console.log" append="off"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:28:07 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:28:07 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:28:07 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:28:07 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.529 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Preparing to wait for external event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.530 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.530 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.530 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.532 2 DEBUG nova.virt.libvirt.vif [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1439041942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1439041942',id=46,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d531839bafe441a391fb9161a54c74ee',ramdisk_id='',reservation_id='r-3hltheaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-588685837',owner_user_name='tempest-InstanceActionsV221TestJSON-588685837-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:27:59Z,user_data=None,user_id='3e67f389787b453faa1dfdb728caea35',uuid=197184a1-4270-40c9-87b5-6eca7e832812,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.532 2 DEBUG nova.network.os_vif_util [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converting VIF {"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.533 2 DEBUG nova.network.os_vif_util [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.534 2 DEBUG os_vif [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.536 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb70e7dbb-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb70e7dbb-36, col_values=(('external_ids', {'iface-id': 'b70e7dbb-3605-4af5-a977-d1c35f1ec20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:85:5d', 'vm-uuid': '197184a1-4270-40c9-87b5-6eca7e832812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:07 np0005465604 NetworkManager[45129]: <info>  [1759393687.5451] manager: (tapb70e7dbb-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.554 2 INFO os_vif [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36')#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.633 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.634 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.634 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] No VIF found with MAC fa:16:3e:e7:85:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.635 2 INFO nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Using config drive#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.670 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:07 np0005465604 nova_compute[260603]: 2025-10-02 08:28:07.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.018 2 INFO nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Creating config drive at /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.027 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcx8zifqk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.178 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcx8zifqk" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.216 2 DEBUG nova.storage.rbd_utils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] rbd image 197184a1-4270-40c9-87b5-6eca7e832812_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.221 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config 197184a1-4270-40c9-87b5-6eca7e832812_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.393 2 DEBUG oslo_concurrency.processutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config 197184a1-4270-40c9-87b5-6eca7e832812_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.394 2 INFO nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Deleting local config drive /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812/disk.config because it was imported into RBD.#033[00m
Oct  2 04:28:08 np0005465604 kernel: tapb70e7dbb-36: entered promiscuous mode
Oct  2 04:28:08 np0005465604 NetworkManager[45129]: <info>  [1759393688.4484] manager: (tapb70e7dbb-36): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Oct  2 04:28:08 np0005465604 systemd-udevd[309749]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:28:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:08Z|00356|binding|INFO|Claiming lport b70e7dbb-3605-4af5-a977-d1c35f1ec20a for this chassis.
Oct  2 04:28:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:08Z|00357|binding|INFO|b70e7dbb-3605-4af5-a977-d1c35f1ec20a: Claiming fa:16:3e:e7:85:5d 10.100.0.11
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.488 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:85:5d 10.100.0.11'], port_security=['fa:16:3e:e7:85:5d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '197184a1-4270-40c9-87b5-6eca7e832812', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd531839bafe441a391fb9161a54c74ee', 'neutron:revision_number': '2', 'neutron:security_group_ids': '968634c4-ac96-4f74-8c28-677b698bf17c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec8729b6-f94a-4108-b4ba-253312192658, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b70e7dbb-3605-4af5-a977-d1c35f1ec20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.490 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b70e7dbb-3605-4af5-a977-d1c35f1ec20a in datapath a14a44c4-2ea4-49fe-ba19-5ba96209c2bc bound to our chassis#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.491 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a14a44c4-2ea4-49fe-ba19-5ba96209c2bc#033[00m
Oct  2 04:28:08 np0005465604 NetworkManager[45129]: <info>  [1759393688.4970] device (tapb70e7dbb-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:28:08 np0005465604 NetworkManager[45129]: <info>  [1759393688.5022] device (tapb70e7dbb-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:28:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:08Z|00358|binding|INFO|Setting lport b70e7dbb-3605-4af5-a977-d1c35f1ec20a ovn-installed in OVS
Oct  2 04:28:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:08Z|00359|binding|INFO|Setting lport b70e7dbb-3605-4af5-a977-d1c35f1ec20a up in Southbound
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e29b3b08-527c-4e8f-980c-5aabbdd2c79c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.509 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa14a44c4-21 in ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.512 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa14a44c4-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.512 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac7fa7a-424e-406d-85fe-4f7a9e0e6f58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.513 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e7007a0d-a443-40ab-9c19-7f6ebd16f966]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.524 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8578aa7b-9e82-4bb8-a399-9e6ca3cbe579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 systemd-machined[214636]: New machine qemu-50-instance-0000002e.
Oct  2 04:28:08 np0005465604 systemd[1]: Started Virtual Machine qemu-50-instance-0000002e.
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.547 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f1e6e1-432e-4033-b7a0-dd7798985f89]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.579 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d69bce7b-c3d3-41db-99e1-96351e1e8bfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 systemd-udevd[309753]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:28:08 np0005465604 NetworkManager[45129]: <info>  [1759393688.5892] manager: (tapa14a44c4-20): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.590 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[58308777-ee2b-40a5-bb88-a5d5a755d7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.633 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc79194-5335-4b3b-b4b0-c04bb06fc476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.635 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d57f1baa-08f0-4b33-8eee-0446acbe7b0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.667 2 DEBUG nova.network.neutron [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Updated VIF entry in instance network info cache for port b70e7dbb-3605-4af5-a977-d1c35f1ec20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.667 2 DEBUG nova.network.neutron [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Updating instance_info_cache with network_info: [{"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:08 np0005465604 NetworkManager[45129]: <info>  [1759393688.6710] device (tapa14a44c4-20): carrier: link connected
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.677 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[86c871f6-67ed-4f3d-b36c-4a8d8b528a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.687 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-197184a1-4270-40c9-87b5-6eca7e832812" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.687 2 DEBUG nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.688 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.688 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.688 2 DEBUG oslo_concurrency.lockutils [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "84f8672c-7a2a-4307-a5c4-7e2968d84225-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.688 2 DEBUG nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] No waiting events found dispatching network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.689 2 WARNING nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received unexpected event network-vif-plugged-d73ba644-fe1d-4d32-9e65-532dd96466b4 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.689 2 DEBUG nova.compute.manager [req-957bd03b-7dd8-4ee6-95a8-beea7c0e1d5d req-e888f9c0-4385-4b5d-ad3d-e38e80b5f3da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Received event network-vif-deleted-d73ba644-fe1d-4d32-9e65-532dd96466b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.696 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ef545259-257b-48fb-be4b-0824bb76a1cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa14a44c4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:64:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456727, 'reachable_time': 36232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309785, 'error': None, 'target': 'ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.714 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eb73ff6d-08de-4383-997d-e24e94cafe08]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:64de'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456727, 'tstamp': 456727}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309786, 'error': None, 'target': 'ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.732 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c949a12a-9e34-4afe-82b6-9c1d8ea5e66a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa14a44c4-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:64:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456727, 'reachable_time': 36232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309787, 'error': None, 'target': 'ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.767 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[90175b05-2e76-4c14-af48-93e9b0e5980c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.865 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[771b5cdb-8ecb-4edd-a6e3-80a11ee144d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.866 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa14a44c4-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.867 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.867 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa14a44c4-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:08 np0005465604 kernel: tapa14a44c4-20: entered promiscuous mode
Oct  2 04:28:08 np0005465604 NetworkManager[45129]: <info>  [1759393688.8710] manager: (tapa14a44c4-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.874 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa14a44c4-20, col_values=(('external_ids', {'iface-id': '96e9ca07-5376-4a96-befe-3386cfd0b28d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:08Z|00360|binding|INFO|Releasing lport 96e9ca07-5376-4a96-befe-3386cfd0b28d from this chassis (sb_readonly=0)
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:08 np0005465604 nova_compute[260603]: 2025-10-02 08:28:08.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.901 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a14a44c4-2ea4-49fe-ba19-5ba96209c2bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a14a44c4-2ea4-49fe-ba19-5ba96209c2bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.902 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5631e7fe-fab0-431f-bbd6-6bcc0ab945e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.903 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/a14a44c4-2ea4-49fe-ba19-5ba96209c2bc.pid.haproxy
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID a14a44c4-2ea4-49fe-ba19-5ba96209c2bc
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:28:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:08.904 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'env', 'PROCESS_TAG=haproxy-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a14a44c4-2ea4-49fe-ba19-5ba96209c2bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:28:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 293 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.8 MiB/s wr, 313 op/s
Oct  2 04:28:09 np0005465604 podman[309861]: 2025-10-02 08:28:09.339139598 +0000 UTC m=+0.059504280 container create 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 04:28:09 np0005465604 systemd[1]: Started libpod-conmon-20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b.scope.
Oct  2 04:28:09 np0005465604 podman[309861]: 2025-10-02 08:28:09.302788178 +0000 UTC m=+0.023152900 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:28:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:28:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a67a4952718d6abeada72cfb2cec3851c08f6d1026af2e4c7575087548f9cfd9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:09 np0005465604 podman[309861]: 2025-10-02 08:28:09.424017295 +0000 UTC m=+0.144382007 container init 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 04:28:09 np0005465604 podman[309861]: 2025-10-02 08:28:09.429608359 +0000 UTC m=+0.149973041 container start 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 04:28:09 np0005465604 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [NOTICE]   (309881) : New worker (309883) forked
Oct  2 04:28:09 np0005465604 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [NOTICE]   (309881) : Loading success.
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.472 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.473 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.473 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.474 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.474 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.607 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393689.606536, 197184a1-4270-40c9-87b5-6eca7e832812 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.608 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] VM Started (Lifecycle Event)#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.630 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.635 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393689.6068947, 197184a1-4270-40c9-87b5-6eca7e832812 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.636 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.656 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.661 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.685 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.697 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.698 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.714 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.846 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.847 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.856 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:28:09 np0005465604 nova_compute[260603]: 2025-10-02 08:28:09.856 2 INFO nova.compute.claims [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.069 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017470258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.592 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.602 2 DEBUG nova.compute.provider_tree [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.624 2 DEBUG nova.scheduler.client.report [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.650 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.652 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.707 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.708 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.731 2 INFO nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.754 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.853 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.855 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.856 2 INFO nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Creating image(s)#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.888 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.912 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 293 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.980 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:10 np0005465604 nova_compute[260603]: 2025-10-02 08:28:10.987 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.028 2 DEBUG nova.policy [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ac6f72f7366459a86c086737b89ea69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f269abbe5769427dbf44c430d7529c04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:28:11 np0005465604 podman[309951]: 2025-10-02 08:28:11.030456918 +0000 UTC m=+0.088307425 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 04:28:11 np0005465604 podman[309948]: 2025-10-02 08:28:11.07430624 +0000 UTC m=+0.136491322 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.096 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.098 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.099 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.099 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.132 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.137 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e3ae3c82-7eb4-4727-a846-92afca9a8330_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.296 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated VIF entry in instance network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.297 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.318 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.318 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.319 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing instance network info cache due to event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.319 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.319 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.319 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.423 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e3ae3c82-7eb4-4727-a846-92afca9a8330_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.495 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] resizing rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.554 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.554 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.555 2 DEBUG nova.objects.instance [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.560 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.561 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.561 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.562 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.562 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Processing event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.562 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.562 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing instance network info cache due to event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.562 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.564 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.569 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393691.5686553, 197184a1-4270-40c9-87b5-6eca7e832812 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.569 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.633 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.638 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.652 2 DEBUG nova.objects.instance [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'migration_context' on Instance uuid e3ae3c82-7eb4-4727-a846-92afca9a8330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.655 2 INFO nova.virt.libvirt.driver [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Instance spawned successfully.#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.655 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.664 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.672 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.672 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.674 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.675 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Ensure instance console log exists: /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.675 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.675 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.676 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.685 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.685 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.686 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.686 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.687 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.687 2 DEBUG nova.virt.libvirt.driver [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.692 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.714 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.776 2 INFO nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Took 12.69 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.776 2 DEBUG nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.794 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.794 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.804 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.804 2 INFO nova.compute.claims [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.854 2 INFO nova.compute.manager [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Took 13.74 seconds to build instance.#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.885 2 DEBUG oslo_concurrency.lockutils [None req-8c0fa9e7-007c-4109-b723-8031c7d01d12 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:11 np0005465604 nova_compute[260603]: 2025-10-02 08:28:11.907 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Successfully created port: 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.012 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.441 2 DEBUG nova.objects.instance [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.458 2 DEBUG nova.network.neutron [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:28:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964096948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.520 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.529 2 DEBUG nova.compute.provider_tree [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.543 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.552 2 DEBUG nova.scheduler.client.report [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.573 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.574 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.648 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.649 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.669 2 INFO nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.685 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.771 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.772 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.773 2 INFO nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Creating image(s)#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.792 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.814 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.843 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.846 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 318 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.7 MiB/s wr, 220 op/s
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.943 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.945 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.945 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.946 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.973 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:12 np0005465604 nova_compute[260603]: 2025-10-02 08:28:12.978 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.034 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updated VIF entry in instance network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.035 2 DEBUG nova.network.neutron [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.045 2 DEBUG nova.policy [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '019cd25dce6249ce9c2cf326ec62df28', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.050 2 DEBUG nova.policy [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.062 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.062 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.063 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.063 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.064 2 DEBUG oslo_concurrency.lockutils [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.064 2 DEBUG nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] No waiting events found dispatching network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.064 2 WARNING nova.compute.manager [req-960fcb21-4003-4697-b01c-3c5b109f1a92 req-c89f2ff7-3aaa-4200-8b89-b840bcf0b9f2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received unexpected event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.066 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.067 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.240 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.310 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] resizing rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.421 2 DEBUG nova.objects.instance [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'migration_context' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.436 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.436 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Ensure instance console log exists: /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.437 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.437 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.437 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.544 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.544 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.584 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Successfully updated port: 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.597 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.597 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.597 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.662 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.663 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.663 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.663 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.664 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.667 2 INFO nova.compute.manager [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Terminating instance#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.670 2 DEBUG nova.compute.manager [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.702 2 DEBUG nova.compute.manager [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-changed-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.702 2 DEBUG nova.compute.manager [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Refreshing instance network info cache due to event network-changed-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.703 2 DEBUG oslo_concurrency.lockutils [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:13 np0005465604 kernel: tapb70e7dbb-36 (unregistering): left promiscuous mode
Oct  2 04:28:13 np0005465604 NetworkManager[45129]: <info>  [1759393693.7130] device (tapb70e7dbb-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:28:13 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:13Z|00361|binding|INFO|Releasing lport b70e7dbb-3605-4af5-a977-d1c35f1ec20a from this chassis (sb_readonly=0)
Oct  2 04:28:13 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:13Z|00362|binding|INFO|Setting lport b70e7dbb-3605-4af5-a977-d1c35f1ec20a down in Southbound
Oct  2 04:28:13 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:13Z|00363|binding|INFO|Removing iface tapb70e7dbb-36 ovn-installed in OVS
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:13.746 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:85:5d 10.100.0.11'], port_security=['fa:16:3e:e7:85:5d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '197184a1-4270-40c9-87b5-6eca7e832812', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd531839bafe441a391fb9161a54c74ee', 'neutron:revision_number': '4', 'neutron:security_group_ids': '968634c4-ac96-4f74-8c28-677b698bf17c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec8729b6-f94a-4108-b4ba-253312192658, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b70e7dbb-3605-4af5-a977-d1c35f1ec20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:13.750 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b70e7dbb-3605-4af5-a977-d1c35f1ec20a in datapath a14a44c4-2ea4-49fe-ba19-5ba96209c2bc unbound from our chassis#033[00m
Oct  2 04:28:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:13.752 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:28:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:13.754 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1d20ea89-0b7e-4245-876c-638f492f27f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:13.754 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc namespace which is not needed anymore#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:13 np0005465604 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Oct  2 04:28:13 np0005465604 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002e.scope: Consumed 3.045s CPU time.
Oct  2 04:28:13 np0005465604 systemd-machined[214636]: Machine qemu-50-instance-0000002e terminated.
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.905 2 INFO nova.virt.libvirt.driver [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Instance destroyed successfully.#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.905 2 DEBUG nova.objects.instance [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lazy-loading 'resources' on Instance uuid 197184a1-4270-40c9-87b5-6eca7e832812 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:13 np0005465604 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [NOTICE]   (309881) : haproxy version is 2.8.14-c23fe91
Oct  2 04:28:13 np0005465604 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [NOTICE]   (309881) : path to executable is /usr/sbin/haproxy
Oct  2 04:28:13 np0005465604 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [WARNING]  (309881) : Exiting Master process...
Oct  2 04:28:13 np0005465604 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [ALERT]    (309881) : Current worker (309883) exited with code 143 (Terminated)
Oct  2 04:28:13 np0005465604 neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc[309876]: [WARNING]  (309881) : All workers exited. Exiting... (0)
Oct  2 04:28:13 np0005465604 systemd[1]: libpod-20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b.scope: Deactivated successfully.
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.939 2 DEBUG nova.virt.libvirt.vif [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1439041942',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1439041942',id=46,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d531839bafe441a391fb9161a54c74ee',ramdisk_id='',reservation_id='r-3hltheaa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_p
roject_name='tempest-InstanceActionsV221TestJSON-588685837',owner_user_name='tempest-InstanceActionsV221TestJSON-588685837-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:11Z,user_data=None,user_id='3e67f389787b453faa1dfdb728caea35',uuid=197184a1-4270-40c9-87b5-6eca7e832812,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.939 2 DEBUG nova.network.os_vif_util [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converting VIF {"id": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "address": "fa:16:3e:e7:85:5d", "network": {"id": "a14a44c4-2ea4-49fe-ba19-5ba96209c2bc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-578492366-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d531839bafe441a391fb9161a54c74ee", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb70e7dbb-36", "ovs_interfaceid": "b70e7dbb-3605-4af5-a977-d1c35f1ec20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.940 2 DEBUG nova.network.os_vif_util [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.940 2 DEBUG os_vif [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.942 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb70e7dbb-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:13 np0005465604 podman[310337]: 2025-10-02 08:28:13.94331668 +0000 UTC m=+0.065264310 container died 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.947 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:28:13 np0005465604 nova_compute[260603]: 2025-10-02 08:28:13.951 2 INFO os_vif [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:85:5d,bridge_name='br-int',has_traffic_filtering=True,id=b70e7dbb-3605-4af5-a977-d1c35f1ec20a,network=Network(a14a44c4-2ea4-49fe-ba19-5ba96209c2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb70e7dbb-36')#033[00m
Oct  2 04:28:13 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b-userdata-shm.mount: Deactivated successfully.
Oct  2 04:28:13 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a67a4952718d6abeada72cfb2cec3851c08f6d1026af2e4c7575087548f9cfd9-merged.mount: Deactivated successfully.
Oct  2 04:28:13 np0005465604 podman[310337]: 2025-10-02 08:28:13.988417881 +0000 UTC m=+0.110365521 container cleanup 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:28:14 np0005465604 systemd[1]: libpod-conmon-20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b.scope: Deactivated successfully.
Oct  2 04:28:14 np0005465604 podman[310396]: 2025-10-02 08:28:14.074692482 +0000 UTC m=+0.057074015 container remove 20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:28:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.084 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1dff7418-4736-4521-96ee-e35572374769]: (4, ('Thu Oct  2 08:28:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc (20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b)\n20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b\nThu Oct  2 08:28:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc (20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b)\n20d4f9fe932fd9ed89d7d49d0d1187f05ef70710530a5b7dc4bc502beefcd31b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.087 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[01836acb-d675-49b8-86a5-5c15fac7245f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.088 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa14a44c4-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:14 np0005465604 kernel: tapa14a44c4-20: left promiscuous mode
Oct  2 04:28:14 np0005465604 nova_compute[260603]: 2025-10-02 08:28:14.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.101 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9512bd9-f41a-461c-b2a3-f1dfae33a556]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:14 np0005465604 nova_compute[260603]: 2025-10-02 08:28:14.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.122 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[615195e9-0cf5-4720-a697-2e6008ad488c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.123 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0be0eb-3fa4-4c38-b7c1-98eaa0f34637]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.148 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0967b51a-e72d-4818-bbbf-978770fb6449]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456717, 'reachable_time': 22331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310411, 'error': None, 'target': 'ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:14 np0005465604 systemd[1]: run-netns-ovnmeta\x2da14a44c4\x2d2ea4\x2d49fe\x2dba19\x2d5ba96209c2bc.mount: Deactivated successfully.
Oct  2 04:28:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.151 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a14a44c4-2ea4-49fe-ba19-5ba96209c2bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:28:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:14.151 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f8cba5-575e-4833-adab-39e2f27e12cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:14 np0005465604 nova_compute[260603]: 2025-10-02 08:28:14.315 2 INFO nova.virt.libvirt.driver [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Deleting instance files /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812_del#033[00m
Oct  2 04:28:14 np0005465604 nova_compute[260603]: 2025-10-02 08:28:14.316 2 INFO nova.virt.libvirt.driver [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Deletion of /var/lib/nova/instances/197184a1-4270-40c9-87b5-6eca7e832812_del complete#033[00m
Oct  2 04:28:14 np0005465604 nova_compute[260603]: 2025-10-02 08:28:14.399 2 INFO nova.compute.manager [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:28:14 np0005465604 nova_compute[260603]: 2025-10-02 08:28:14.399 2 DEBUG oslo.service.loopingcall [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:28:14 np0005465604 nova_compute[260603]: 2025-10-02 08:28:14.400 2 DEBUG nova.compute.manager [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:28:14 np0005465604 nova_compute[260603]: 2025-10-02 08:28:14.400 2 DEBUG nova.network.neutron [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:28:14 np0005465604 nova_compute[260603]: 2025-10-02 08:28:14.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:14 np0005465604 nova_compute[260603]: 2025-10-02 08:28:14.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 04:28:14 np0005465604 nova_compute[260603]: 2025-10-02 08:28:14.644 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Successfully created port: 922a693b-1cb4-42e8-a97d-78973183c774 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:28:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 348 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 182 op/s
Oct  2 04:28:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:15Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:f8:c0 10.100.0.14
Oct  2 04:28:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:15Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:f8:c0 10.100.0.14
Oct  2 04:28:15 np0005465604 nova_compute[260603]: 2025-10-02 08:28:15.540 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:15 np0005465604 nova_compute[260603]: 2025-10-02 08:28:15.871 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updated VIF entry in instance network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:15 np0005465604 nova_compute[260603]: 2025-10-02 08:28:15.872 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:15 np0005465604 nova_compute[260603]: 2025-10-02 08:28:15.907 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:15 np0005465604 nova_compute[260603]: 2025-10-02 08:28:15.908 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:15 np0005465604 nova_compute[260603]: 2025-10-02 08:28:15.908 2 DEBUG nova.compute.manager [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:15 np0005465604 nova_compute[260603]: 2025-10-02 08:28:15.909 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:15 np0005465604 nova_compute[260603]: 2025-10-02 08:28:15.909 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:15 np0005465604 nova_compute[260603]: 2025-10-02 08:28:15.910 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.280 2 DEBUG nova.network.neutron [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updating instance_info_cache with network_info: [{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.299 2 DEBUG nova.network.neutron [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.302 2 DEBUG nova.network.neutron [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Successfully updated port: 2f45b100-9bc2-4853-87ff-324e74ddfee5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.309 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.310 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance network_info: |[{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.311 2 DEBUG oslo_concurrency.lockutils [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.311 2 DEBUG nova.network.neutron [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Refreshing network info cache for port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.317 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Start _get_guest_xml network_info=[{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.322 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.324 2 WARNING nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.335 2 INFO nova.compute.manager [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Took 1.93 seconds to deallocate network for instance.#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.345 2 DEBUG nova.virt.libvirt.host [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.346 2 DEBUG nova.virt.libvirt.host [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.351 2 DEBUG nova.virt.libvirt.host [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.352 2 DEBUG nova.virt.libvirt.host [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.352 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.353 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.353 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.354 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.354 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.354 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.355 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.355 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.355 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.356 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.356 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.357 2 DEBUG nova.virt.hardware [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.361 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.407 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.408 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.577 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.578 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.757 2 DEBUG oslo_concurrency.processutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/756598222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.815 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.831 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.857 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:16 np0005465604 nova_compute[260603]: 2025-10-02 08:28:16.861 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 348 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Oct  2 04:28:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565091287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.212 2 DEBUG oslo_concurrency.processutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.220 2 DEBUG nova.compute.provider_tree [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.241 2 DEBUG nova.scheduler.client.report [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.275 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1456522547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.298 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.301 2 DEBUG nova.virt.libvirt.vif [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-379096976',display_name='tempest-DeleteServersTestJSON-server-379096976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-379096976',id=47,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-f7cj6hb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:10Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=e3ae3c82-7eb4-4727-a846-92afca9a8330,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.301 2 DEBUG nova.network.os_vif_util [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.303 2 DEBUG nova.network.os_vif_util [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.305 2 DEBUG nova.objects.instance [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid e3ae3c82-7eb4-4727-a846-92afca9a8330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.308 2 INFO nova.scheduler.client.report [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Deleted allocations for instance 197184a1-4270-40c9-87b5-6eca7e832812#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.335 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  <uuid>e3ae3c82-7eb4-4727-a846-92afca9a8330</uuid>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  <name>instance-0000002f</name>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <nova:name>tempest-DeleteServersTestJSON-server-379096976</nova:name>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:28:16</nova:creationTime>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <nova:user uuid="1ac6f72f7366459a86c086737b89ea69">tempest-DeleteServersTestJSON-812177785-project-member</nova:user>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <nova:project uuid="f269abbe5769427dbf44c430d7529c04">tempest-DeleteServersTestJSON-812177785</nova:project>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <nova:port uuid="8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <entry name="serial">e3ae3c82-7eb4-4727-a846-92afca9a8330</entry>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <entry name="uuid">e3ae3c82-7eb4-4727-a846-92afca9a8330</entry>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/e3ae3c82-7eb4-4727-a846-92afca9a8330_disk">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:5a:62:72"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <target dev="tap8e7deb4e-9f"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/console.log" append="off"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:28:17 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:28:17 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:28:17 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:28:17 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.336 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Preparing to wait for external event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.337 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.337 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.337 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.338 2 DEBUG nova.virt.libvirt.vif [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-379096976',display_name='tempest-DeleteServersTestJSON-server-379096976',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-379096976',id=47,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-f7cj6hb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:10Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=e3ae3c82-7eb4-4727-a846-92afca9a8330,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.339 2 DEBUG nova.network.os_vif_util [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.340 2 DEBUG nova.network.os_vif_util [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.340 2 DEBUG os_vif [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.342 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e7deb4e-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.348 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e7deb4e-9f, col_values=(('external_ids', {'iface-id': '8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:62:72', 'vm-uuid': 'e3ae3c82-7eb4-4727-a846-92afca9a8330'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:17 np0005465604 NetworkManager[45129]: <info>  [1759393697.3854] manager: (tap8e7deb4e-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.398 2 INFO os_vif [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f')#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.438 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Successfully updated port: 922a693b-1cb4-42e8-a97d-78973183c774 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.473 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.474 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquired lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.474 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.498 2 DEBUG oslo_concurrency.lockutils [None req-656eee61-d471-496f-97de-e42f047db46c 3e67f389787b453faa1dfdb728caea35 d531839bafe441a391fb9161a54c74ee - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.511 2 DEBUG nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-vif-unplugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.511 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.512 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.512 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.512 2 DEBUG nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] No waiting events found dispatching network-vif-unplugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.513 2 WARNING nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received unexpected event network-vif-unplugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.513 2 DEBUG nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.513 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "197184a1-4270-40c9-87b5-6eca7e832812-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.514 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.514 2 DEBUG oslo_concurrency.lockutils [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "197184a1-4270-40c9-87b5-6eca7e832812-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.515 2 DEBUG nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] No waiting events found dispatching network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.515 2 WARNING nova.compute.manager [req-70c62dc1-1eb4-47f4-a66a-83c84f41f0b5 req-1158464b-f852-475b-8afd-a1c96f6e60b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received unexpected event network-vif-plugged-b70e7dbb-3605-4af5-a977-d1c35f1ec20a for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.525 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.526 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.527 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No VIF found with MAC fa:16:3e:5a:62:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.528 2 INFO nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Using config drive#033[00m
Oct  2 04:28:17 np0005465604 podman[310500]: 2025-10-02 08:28:17.545879964 +0000 UTC m=+0.099837953 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.560 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:17 np0005465604 nova_compute[260603]: 2025-10-02 08:28:17.660 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.313 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393683.310509, 84f8672c-7a2a-4307-a5c4-7e2968d84225 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.314 2 INFO nova.compute.manager [-] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.345 2 DEBUG nova.compute.manager [None req-7f681a8f-2b73-43f1-9003-d16ad385f01c - - - - - -] [instance: 84f8672c-7a2a-4307-a5c4-7e2968d84225] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.743 2 INFO nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Creating config drive at /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.753 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpatj6_4c_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.789 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated VIF entry in instance network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.790 2 DEBUG nova.network.neutron [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.814 2 DEBUG oslo_concurrency.lockutils [req-deae54c5-56da-47a5-93cd-011a3dcdb24b req-348c0ac8-fa1c-4cf9-bc7b-509d8f069bc0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.815 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.816 2 DEBUG nova.network.neutron [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.895 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpatj6_4c_" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.925 2 DEBUG nova.storage.rbd_utils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:18 np0005465604 nova_compute[260603]: 2025-10-02 08:28:18.929 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 315 op/s
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.042 2 DEBUG nova.network.neutron [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updated VIF entry in instance network info cache for port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.044 2 DEBUG nova.network.neutron [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updating instance_info_cache with network_info: [{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.060 2 WARNING nova.network.neutron [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.066 2 DEBUG oslo_concurrency.lockutils [req-39e6a42b-186d-4f3c-ab2f-1bb481621287 req-833e344b-1e93-453c-a81f-45a5a8c5694c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.091 2 DEBUG oslo_concurrency.processutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config e3ae3c82-7eb4-4727-a846-92afca9a8330_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.091 2 INFO nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Deleting local config drive /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330/disk.config because it was imported into RBD.#033[00m
Oct  2 04:28:19 np0005465604 kernel: tap8e7deb4e-9f: entered promiscuous mode
Oct  2 04:28:19 np0005465604 NetworkManager[45129]: <info>  [1759393699.1553] manager: (tap8e7deb4e-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Oct  2 04:28:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:19Z|00364|binding|INFO|Claiming lport 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f for this chassis.
Oct  2 04:28:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:19Z|00365|binding|INFO|8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f: Claiming fa:16:3e:5a:62:72 10.100.0.5
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.173 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:62:72 10.100.0.5'], port_security=['fa:16:3e:5a:62:72 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e3ae3c82-7eb4-4727-a846-92afca9a8330', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.175 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca bound to our chassis#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.179 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca#033[00m
Oct  2 04:28:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:19Z|00366|binding|INFO|Setting lport 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f ovn-installed in OVS
Oct  2 04:28:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:19Z|00367|binding|INFO|Setting lport 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f up in Southbound
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:19 np0005465604 systemd-udevd[310721]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.200 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[76f21afc-e2fb-4005-a5fa-15f51e606c26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.202 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa72ac8c9-11 in ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:28:19 np0005465604 NetworkManager[45129]: <info>  [1759393699.2033] device (tap8e7deb4e-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:28:19 np0005465604 NetworkManager[45129]: <info>  [1759393699.2039] device (tap8e7deb4e-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:28:19 np0005465604 systemd-machined[214636]: New machine qemu-51-instance-0000002f.
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.207 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa72ac8c9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.207 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44ec57a1-bbe6-4a6d-9b77-e82f1628c590]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.209 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[304cb2ef-a998-4d26-ade9-8cbcadc0d015]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:28:19 np0005465604 systemd[1]: Started Virtual Machine qemu-51-instance-0000002f.
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:28:19 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d29818d0-05ea-4b46-b458-db19a2e8c294 does not exist
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.230 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[025993cd-f3df-4989-b22d-b85d4f14294d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9a8f5757-0566-4845-b7ac-d93164472c71 does not exist
Oct  2 04:28:19 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d8f11a62-e0ad-4550-bc06-0b833418bf95 does not exist
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.258 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[afc2eebf-8167-4af2-998a-2d51c4d3980e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.292 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c57e9a-ae96-4688-b1b4-71f2ff485cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.298 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fe1a1e-f181-4336-b528-0ee601bc3766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 NetworkManager[45129]: <info>  [1759393699.3001] manager: (tapa72ac8c9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Oct  2 04:28:19 np0005465604 systemd-udevd[310725]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.345 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c766a2b5-6698-43e7-9cef-8a6d4bdc7130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.348 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2118a4d3-f5ea-4e1d-b0f4-056c5ef08e81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 NetworkManager[45129]: <info>  [1759393699.3716] device (tapa72ac8c9-10): carrier: link connected
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.380 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4f980688-f7d0-40ea-9c3f-d235c491da21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.406 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b423796-bacc-43bd-95b8-43a60816283f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457797, 'reachable_time': 42739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310804, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.428 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0508c9-a269-4909-b5be-b1b905a5cb84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:61d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 457797, 'tstamp': 457797}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310811, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.448 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a29bbe57-2079-42ef-9a36-948f51044a5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457797, 'reachable_time': 42739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310829, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.492 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5a386d-5343-45b4-8672-e9cc5113caba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.596 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[145d93e9-60ad-493c-9adb-325a17203961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.597 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.598 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.598 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa72ac8c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:19 np0005465604 kernel: tapa72ac8c9-10: entered promiscuous mode
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:19 np0005465604 NetworkManager[45129]: <info>  [1759393699.6381] manager: (tapa72ac8c9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.639 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa72ac8c9-10, col_values=(('external_ids', {'iface-id': 'f9acec59-0200-4a1d-84e4-06e67c730498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:19Z|00368|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.656 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.657 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1413d423-a8cd-4ce0-98b7-a30f7888d385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.657 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:28:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:19.658 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'env', 'PROCESS_TAG=haproxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.679 2 DEBUG nova.network.neutron [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.708 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Releasing lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.708 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance network_info: |[{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.712 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start _get_guest_xml network_info=[{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.721 2 WARNING nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.730 2 DEBUG nova.virt.libvirt.host [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.731 2 DEBUG nova.virt.libvirt.host [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.736 2 DEBUG nova.virt.libvirt.host [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.737 2 DEBUG nova.virt.libvirt.host [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.738 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.738 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.739 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.739 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.741 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.741 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.742 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.742 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.742 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.743 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.743 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.743 2 DEBUG nova.virt.hardware [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:28:19 np0005465604 nova_compute[260603]: 2025-10-02 08:28:19.747 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:28:19 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:28:20 np0005465604 podman[310982]: 2025-10-02 08:28:20.01994046 +0000 UTC m=+0.060231633 container create d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 04:28:20 np0005465604 podman[310998]: 2025-10-02 08:28:20.055768473 +0000 UTC m=+0.062974528 container create 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:28:20 np0005465604 systemd[1]: Started libpod-conmon-d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906.scope.
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.071 2 DEBUG nova.compute.manager [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Received event network-vif-deleted-b70e7dbb-3605-4af5-a977-d1c35f1ec20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.072 2 DEBUG nova.compute.manager [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.072 2 DEBUG nova.compute.manager [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-2f45b100-9bc2-4853-87ff-324e74ddfee5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.073 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:20 np0005465604 podman[310982]: 2025-10-02 08:28:19.989887736 +0000 UTC m=+0.030178929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:28:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:28:20 np0005465604 podman[310998]: 2025-10-02 08:28:20.014998686 +0000 UTC m=+0.022204761 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:28:20 np0005465604 systemd[1]: Started libpod-conmon-039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22.scope.
Oct  2 04:28:20 np0005465604 podman[310982]: 2025-10-02 08:28:20.131171757 +0000 UTC m=+0.171462980 container init d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 04:28:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:28:20 np0005465604 podman[310982]: 2025-10-02 08:28:20.145146471 +0000 UTC m=+0.185437614 container start d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:28:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ebb512a42e1b9f43784c36b0857c486bcadaa4002099f22dfeb9568e9eabfd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:20 np0005465604 podman[310982]: 2025-10-02 08:28:20.149849146 +0000 UTC m=+0.190140319 container attach d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:28:20 np0005465604 systemd[1]: libpod-d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906.scope: Deactivated successfully.
Oct  2 04:28:20 np0005465604 interesting_grothendieck[311013]: 167 167
Oct  2 04:28:20 np0005465604 podman[310982]: 2025-10-02 08:28:20.154521512 +0000 UTC m=+0.194812665 container died d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:28:20 np0005465604 conmon[311013]: conmon d62c579e4f5e2bc7db62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906.scope/container/memory.events
Oct  2 04:28:20 np0005465604 podman[310998]: 2025-10-02 08:28:20.170907361 +0000 UTC m=+0.178113446 container init 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 04:28:20 np0005465604 podman[310998]: 2025-10-02 08:28:20.179428026 +0000 UTC m=+0.186634081 container start 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 04:28:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592014135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a7efcecc1fe449763128402c7f4aced386a4d4fe9620b671511e9e5ccc8e3d91-merged.mount: Deactivated successfully.
Oct  2 04:28:20 np0005465604 podman[310982]: 2025-10-02 08:28:20.207091405 +0000 UTC m=+0.247382548 container remove d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_grothendieck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:28:20 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [NOTICE]   (311032) : New worker (311043) forked
Oct  2 04:28:20 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [NOTICE]   (311032) : Loading success.
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.229 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:20 np0005465604 systemd[1]: libpod-conmon-d62c579e4f5e2bc7db62747b9a75d9468a437e15e724e7dae3c52ed9493ad906.scope: Deactivated successfully.
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.257 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.262 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.342 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393700.3415005, e3ae3c82-7eb4-4727-a846-92afca9a8330 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.343 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] VM Started (Lifecycle Event)#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.367 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.374 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393700.3419397, e3ae3c82-7eb4-4727-a846-92afca9a8330 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.374 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.394 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.406 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:20 np0005465604 podman[311077]: 2025-10-02 08:28:20.430290632 +0000 UTC m=+0.042485971 container create 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.434 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:28:20 np0005465604 systemd[1]: Started libpod-conmon-715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3.scope.
Oct  2 04:28:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:28:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:20 np0005465604 podman[311077]: 2025-10-02 08:28:20.412733887 +0000 UTC m=+0.024929206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:28:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:20 np0005465604 podman[311077]: 2025-10-02 08:28:20.545412819 +0000 UTC m=+0.157608188 container init 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 04:28:20 np0005465604 podman[311077]: 2025-10-02 08:28:20.555279896 +0000 UTC m=+0.167475255 container start 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:28:20 np0005465604 podman[311077]: 2025-10-02 08:28:20.559775536 +0000 UTC m=+0.171970945 container attach 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 04:28:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045614169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.698 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.702 2 DEBUG nova.virt.libvirt.vif [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:12Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.702 2 DEBUG nova.network.os_vif_util [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.705 2 DEBUG nova.network.os_vif_util [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.709 2 DEBUG nova.objects.instance [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'pci_devices' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.730 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  <uuid>21802142-fcf7-4eb2-b43b-e0fa48cab4d6</uuid>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  <name>instance-00000030</name>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <nova:name>tempest-SecurityGroupsTestJSON-server-2008796594</nova:name>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:28:19</nova:creationTime>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <nova:user uuid="019cd25dce6249ce9c2cf326ec62df28">tempest-SecurityGroupsTestJSON-2081142325-project-member</nova:user>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <nova:project uuid="35a4ab7cf79e41f68a1ea888c2a3592e">tempest-SecurityGroupsTestJSON-2081142325</nova:project>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <nova:port uuid="922a693b-1cb4-42e8-a97d-78973183c774">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <entry name="serial">21802142-fcf7-4eb2-b43b-e0fa48cab4d6</entry>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <entry name="uuid">21802142-fcf7-4eb2-b43b-e0fa48cab4d6</entry>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:bd:10:ae"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <target dev="tap922a693b-1c"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/console.log" append="off"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:28:20 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:28:20 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:28:20 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:28:20 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.731 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Preparing to wait for external event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.731 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.732 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.732 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.734 2 DEBUG nova.virt.libvirt.vif [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGr
oupsTestJSON-2081142325-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:12Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.734 2 DEBUG nova.network.os_vif_util [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.736 2 DEBUG nova.network.os_vif_util [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.736 2 DEBUG os_vif [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap922a693b-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.746 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap922a693b-1c, col_values=(('external_ids', {'iface-id': '922a693b-1cb4-42e8-a97d-78973183c774', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:10:ae', 'vm-uuid': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:28:20 np0005465604 NetworkManager[45129]: <info>  [1759393700.7509] manager: (tap922a693b-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.761 2 INFO os_vif [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c')
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.839 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.840 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.840 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] No VIF found with MAC fa:16:3e:bd:10:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.842 2 INFO nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Using config drive
Oct  2 04:28:20 np0005465604 nova_compute[260603]: 2025-10-02 08:28:20.889 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:28:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 216 op/s
Oct  2 04:28:21 np0005465604 eager_noether[311112]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:28:21 np0005465604 eager_noether[311112]: --> relative data size: 1.0
Oct  2 04:28:21 np0005465604 eager_noether[311112]: --> All data devices are unavailable
Oct  2 04:28:21 np0005465604 systemd[1]: libpod-715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3.scope: Deactivated successfully.
Oct  2 04:28:21 np0005465604 podman[311077]: 2025-10-02 08:28:21.603847142 +0000 UTC m=+1.216042441 container died 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.673 2 INFO nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Creating config drive at /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.677 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78_4p3_s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:28:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f74824d06f280fc5f2bb86c9252908f2c8de21722b17ed63b102d28cc95cefe2-merged.mount: Deactivated successfully.
Oct  2 04:28:21 np0005465604 podman[311164]: 2025-10-02 08:28:21.732569833 +0000 UTC m=+0.098004008 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 04:28:21 np0005465604 podman[311077]: 2025-10-02 08:28:21.744965578 +0000 UTC m=+1.357160887 container remove 715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:28:21 np0005465604 systemd[1]: libpod-conmon-715b4c9cd245217c6ad02612153f0684c4687a133aecb436281669093b4f92c3.scope: Deactivated successfully.
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.809 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78_4p3_s" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.833 2 DEBUG nova.storage.rbd_utils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] rbd image 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.837 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.902 2 DEBUG nova.network.neutron [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.946 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.947 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.947 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.947 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.952 2 DEBUG nova.virt.libvirt.vif [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.952 2 DEBUG nova.network.os_vif_util [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.953 2 DEBUG nova.network.os_vif_util [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.954 2 DEBUG os_vif [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.959 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f45b100-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:21 np0005465604 nova_compute[260603]: 2025-10-02 08:28:21.959 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f45b100-9b, col_values=(('external_ids', {'iface-id': '2f45b100-9bc2-4853-87ff-324e74ddfee5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:c8:97', 'vm-uuid': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:21 np0005465604 NetworkManager[45129]: <info>  [1759393701.9940] manager: (tap2f45b100-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.014 2 INFO os_vif [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b')#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.015 2 DEBUG nova.virt.libvirt.vif [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.015 2 DEBUG nova.network.os_vif_util [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.016 2 DEBUG nova.network.os_vif_util [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.019 2 DEBUG nova.virt.libvirt.guest [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] attach device xml: <interface type="ethernet">
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:69:c8:97"/>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <target dev="tap2f45b100-9b"/>
Oct  2 04:28:22 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:28:22 np0005465604 nova_compute[260603]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 04:28:22 np0005465604 NetworkManager[45129]: <info>  [1759393702.0293] manager: (tap2f45b100-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Oct  2 04:28:22 np0005465604 systemd-udevd[310778]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:28:22 np0005465604 kernel: tap2f45b100-9b: entered promiscuous mode
Oct  2 04:28:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:22Z|00369|binding|INFO|Claiming lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 for this chassis.
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:22Z|00370|binding|INFO|2f45b100-9bc2-4853-87ff-324e74ddfee5: Claiming fa:16:3e:69:c8:97 10.100.0.5
Oct  2 04:28:22 np0005465604 NetworkManager[45129]: <info>  [1759393702.0466] device (tap2f45b100-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:28:22 np0005465604 NetworkManager[45129]: <info>  [1759393702.0476] device (tap2f45b100-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.048 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c8:97 10.100.0.5'], port_security=['fa:16:3e:69:c8:97 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2f45b100-9bc2-4853-87ff-324e74ddfee5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.050 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.051 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:28:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:28:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3393871243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:28:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:28:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3393871243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:28:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:22Z|00371|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 ovn-installed in OVS
Oct  2 04:28:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:22Z|00372|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 up in Southbound
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.067 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf990dc9-3e35-4a79-81e0-70790e2727e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.077 2 DEBUG oslo_concurrency.processutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config 21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.078 2 INFO nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Deleting local config drive /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/disk.config because it was imported into RBD.#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.106 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[00c77050-a377-45d4-8bc4-3efd715fe663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.109 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c677de0c-fb41-460a-b7d2-78b6be542be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 kernel: tap922a693b-1c: entered promiscuous mode
Oct  2 04:28:22 np0005465604 NetworkManager[45129]: <info>  [1759393702.1274] manager: (tap922a693b-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:22Z|00373|binding|INFO|Claiming lport 922a693b-1cb4-42e8-a97d-78973183c774 for this chassis.
Oct  2 04:28:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:22Z|00374|binding|INFO|922a693b-1cb4-42e8-a97d-78973183c774: Claiming fa:16:3e:bd:10:ae 10.100.0.7
Oct  2 04:28:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:28:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 17K writes, 70K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 17K writes, 5936 syncs, 3.03 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 47.61 MB, 0.08 MB/s#012Interval WAL: 11K writes, 4765 syncs, 2.47 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:28:22 np0005465604 NetworkManager[45129]: <info>  [1759393702.1396] device (tap922a693b-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:28:22 np0005465604 NetworkManager[45129]: <info>  [1759393702.1404] device (tap922a693b-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.144 2 DEBUG nova.virt.libvirt.driver [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.144 2 DEBUG nova.virt.libvirt.driver [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.145 2 DEBUG nova.virt.libvirt.driver [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:0b:59:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.145 2 DEBUG nova.virt.libvirt.driver [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:69:c8:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.145 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[46f95bb0-1cbd-46f6-be11-f7284c2ad179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:22Z|00375|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 ovn-installed in OVS
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:22Z|00376|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 up in Southbound
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.153 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:10:ae 10.100.0.7'], port_security=['fa:16:3e:bd:10:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=922a693b-1cb4-42e8-a97d-78973183c774) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.163 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[695d9731-e950-42bc-aa05-06dec29cd5cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311365, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 systemd-machined[214636]: New machine qemu-52-instance-00000030.
Oct  2 04:28:22 np0005465604 systemd[1]: Started Virtual Machine qemu-52-instance-00000030.
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.181 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf06144-e928-4aeb-baa9-cb95ecb5fc83]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311368, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311368, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.183 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.186 2 DEBUG nova.virt.libvirt.guest [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <nova:name>tempest-tempest.common.compute-instance-53387023</nova:name>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:28:22</nova:creationTime>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:28:22 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:    <nova:port uuid="136aeb8e-dedd-4cd8-a72d-1c4309716daf">
Oct  2 04:28:22 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:    <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct  2 04:28:22 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:22 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:28:22 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:28:22 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.186 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.186 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.187 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.191 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 922a693b-1cb4-42e8-a97d-78973183c774 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 unbound from our chassis#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.193 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.210 2 DEBUG oslo_concurrency.lockutils [None req-cefc5c62-5222-4ff1-9dd0-6deb61efdaec 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.215 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[408de17d-8ce4-4d6d-b483-1548817d7817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.243 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d8cc7a-1eb5-4d9b-abca-b21599845f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.246 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[37e6ff8d-9c6f-4476-9746-eb715062c0ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.269 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c43bddb3-2364-4107-b1e8-e02a99342527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.290 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0dd734-e8d1-44df-a25b-2dfb8d9964ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311403, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.305 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab860fd-99ef-4b1e-9073-0c908c499020]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453909, 'tstamp': 453909}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311406, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453913, 'tstamp': 453913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311406, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.306 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce7e8b6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcce7e8b6-90, col_values=(('external_ids', {'iface-id': '2a218cce-83be-4768-9f4e-7d61802765d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:22.310 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:22 np0005465604 podman[311419]: 2025-10-02 08:28:22.418619203 +0000 UTC m=+0.043889616 container create 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 04:28:22 np0005465604 systemd[1]: Started libpod-conmon-3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7.scope.
Oct  2 04:28:22 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:28:22 np0005465604 podman[311419]: 2025-10-02 08:28:22.400725086 +0000 UTC m=+0.025995519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:28:22 np0005465604 podman[311419]: 2025-10-02 08:28:22.510009722 +0000 UTC m=+0.135280185 container init 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:28:22 np0005465604 podman[311419]: 2025-10-02 08:28:22.517144474 +0000 UTC m=+0.142414887 container start 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 04:28:22 np0005465604 podman[311419]: 2025-10-02 08:28:22.520476887 +0000 UTC m=+0.145747310 container attach 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:28:22 np0005465604 thirsty_knuth[311435]: 167 167
Oct  2 04:28:22 np0005465604 podman[311419]: 2025-10-02 08:28:22.528333821 +0000 UTC m=+0.153604234 container died 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:28:22 np0005465604 systemd[1]: libpod-3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7.scope: Deactivated successfully.
Oct  2 04:28:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-694240d09dcdd3d202a9b83c5bdb68817147e9480559bb31723d3ad6cea5a108-merged.mount: Deactivated successfully.
Oct  2 04:28:22 np0005465604 podman[311419]: 2025-10-02 08:28:22.577937173 +0000 UTC m=+0.203207626 container remove 3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_knuth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:28:22 np0005465604 systemd[1]: libpod-conmon-3aed35b371707e64c877611dcfe4cb672decfdf9b7a0d633ca6f2c3a36bca6e7.scope: Deactivated successfully.
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.764 2 DEBUG nova.compute.manager [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.765 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.766 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.766 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.767 2 DEBUG nova.compute.manager [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Processing event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.767 2 DEBUG nova.compute.manager [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.768 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.768 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.769 2 DEBUG oslo_concurrency.lockutils [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.769 2 DEBUG nova.compute.manager [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] No waiting events found dispatching network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.769 2 WARNING nova.compute.manager [req-c640dc51-504a-4a85-bd8c-e130b5f8203e req-6ba4d827-9dcb-45c0-8738-832dd54cf945 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received unexpected event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.771 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.776 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393702.7757068, e3ae3c82-7eb4-4727-a846-92afca9a8330 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.776 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.779 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.784 2 INFO nova.virt.libvirt.driver [-] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance spawned successfully.#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.785 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.798 2 DEBUG nova.compute.manager [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.798 2 DEBUG oslo_concurrency.lockutils [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.799 2 DEBUG oslo_concurrency.lockutils [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.800 2 DEBUG oslo_concurrency.lockutils [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.800 2 DEBUG nova.compute.manager [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.800 2 WARNING nova.compute.manager [req-321d7bfc-499d-4bb9-835d-2955a4685499 req-8a70a5a2-5e31-4167-af6f-d4ad80dfc6fb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.803 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.811 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.817 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.818 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.818 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.819 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.820 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.821 2 DEBUG nova.virt.libvirt.driver [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.833 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:28:22 np0005465604 podman[311501]: 2025-10-02 08:28:22.839590465 +0000 UTC m=+0.050994506 container create 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 04:28:22 np0005465604 systemd[1]: Started libpod-conmon-65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1.scope.
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.893 2 INFO nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Took 12.04 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.894 2 DEBUG nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:22 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:28:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b0dcf8c44ec7e500648d7cc7adce6fdcd49ffb68474c2348f8c81072e9bb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b0dcf8c44ec7e500648d7cc7adce6fdcd49ffb68474c2348f8c81072e9bb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b0dcf8c44ec7e500648d7cc7adce6fdcd49ffb68474c2348f8c81072e9bb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a2b0dcf8c44ec7e500648d7cc7adce6fdcd49ffb68474c2348f8c81072e9bb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:22 np0005465604 podman[311501]: 2025-10-02 08:28:22.821134341 +0000 UTC m=+0.032538412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:28:22 np0005465604 podman[311501]: 2025-10-02 08:28:22.920386756 +0000 UTC m=+0.131790817 container init 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct  2 04:28:22 np0005465604 podman[311501]: 2025-10-02 08:28:22.929998784 +0000 UTC m=+0.141402825 container start 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:28:22 np0005465604 podman[311501]: 2025-10-02 08:28:22.932505532 +0000 UTC m=+0.143909573 container attach 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:28:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 222 op/s
Oct  2 04:28:22 np0005465604 nova_compute[260603]: 2025-10-02 08:28:22.984 2 INFO nova.compute.manager [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Took 13.21 seconds to build instance.#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.006 2 DEBUG oslo_concurrency.lockutils [None req-68bc7734-5188-4c63-9e29-8729055d97bc 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.182 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393703.1825626, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.183 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Started (Lifecycle Event)#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.208 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.212 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393703.1826618, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.212 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.231 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.235 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.254 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.450 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.450 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.472 2 DEBUG nova.objects.instance [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.495 2 DEBUG nova.virt.libvirt.vif [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.495 2 DEBUG nova.network.os_vif_util [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.496 2 DEBUG nova.network.os_vif_util [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.499 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.501 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.504 2 DEBUG nova.virt.libvirt.driver [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Attempting to detach device tap2f45b100-9b from instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.504 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:69:c8:97"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <target dev="tap2f45b100-9b"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.511 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.514 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface>not found in domain: <domain type='kvm' id='47'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <name>instance-0000002a</name>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <uuid>f13ff7c1-d7d3-443e-9f06-69f8c466af30</uuid>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:name>tempest-tempest.common.compute-instance-53387023</nova:name>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:28:22</nova:creationTime>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:port uuid="136aeb8e-dedd-4cd8-a72d-1c4309716daf">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='serial'>f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='uuid'>f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk' index='2'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config' index='1'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:0b:59:2a'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target dev='tap136aeb8e-de'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:69:c8:97'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target dev='tap2f45b100-9b'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='net1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <source path='/dev/pts/1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log' append='off'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <source path='/dev/pts/1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log' append='off'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c737,c790</label>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c737,c790</imagelabel>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.514 2 INFO nova.virt.libvirt.driver [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap2f45b100-9b from instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 from the persistent domain config.#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.514 2 DEBUG nova.virt.libvirt.driver [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] (1/8): Attempting to detach device tap2f45b100-9b with device alias net1 from instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.514 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:69:c8:97"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <target dev="tap2f45b100-9b"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 04:28:23 np0005465604 kernel: tap2f45b100-9b (unregistering): left promiscuous mode
Oct  2 04:28:23 np0005465604 NetworkManager[45129]: <info>  [1759393703.6249] device (tap2f45b100-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.636 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Received event <DeviceRemovedEvent: 1759393703.6363068, f13ff7c1-d7d3-443e-9f06-69f8c466af30 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 04:28:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:23Z|00377|binding|INFO|Releasing lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 from this chassis (sb_readonly=0)
Oct  2 04:28:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:23Z|00378|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 down in Southbound
Oct  2 04:28:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:23Z|00379|binding|INFO|Removing iface tap2f45b100-9b ovn-installed in OVS
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.651 2 DEBUG nova.virt.libvirt.driver [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Start waiting for the detach event from libvirt for device tap2f45b100-9b with device alias net1 for instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.652 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.651 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c8:97 10.100.0.5'], port_security=['fa:16:3e:69:c8:97 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2f45b100-9bc2-4853-87ff-324e74ddfee5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.653 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.655 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.662 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface>not found in domain: <domain type='kvm' id='47'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <name>instance-0000002a</name>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <uuid>f13ff7c1-d7d3-443e-9f06-69f8c466af30</uuid>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:name>tempest-tempest.common.compute-instance-53387023</nova:name>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:28:22</nova:creationTime>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:port uuid="136aeb8e-dedd-4cd8-a72d-1c4309716daf">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='serial'>f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='uuid'>f13ff7c1-d7d3-443e-9f06-69f8c466af30</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk' index='2'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/f13ff7c1-d7d3-443e-9f06-69f8c466af30_disk.config' index='1'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:0b:59:2a'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target dev='tap136aeb8e-de'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <source path='/dev/pts/1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log' append='off'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <source path='/dev/pts/1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30/console.log' append='off'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c737,c790</label>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c737,c790</imagelabel>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.662 2 INFO nova.virt.libvirt.driver [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap2f45b100-9b from instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 from the live domain config.#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.663 2 DEBUG nova.virt.libvirt.vif [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.664 2 DEBUG nova.network.os_vif_util [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.664 2 DEBUG nova.network.os_vif_util [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.665 2 DEBUG os_vif [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f45b100-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.680 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8e622b2a-c3e8-4c07-96db-22f3fcf46597]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.687 2 INFO os_vif [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b')#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.688 2 DEBUG nova.virt.libvirt.guest [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:name>tempest-tempest.common.compute-instance-53387023</nova:name>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:28:23</nova:creationTime>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    <nova:port uuid="136aeb8e-dedd-4cd8-a72d-1c4309716daf">
Oct  2 04:28:23 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:23 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:28:23 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.729 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[689e8f16-332c-4672-be97-5efac5ee471d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.734 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c70a41df-1a4c-4742-aa31-f4430821c928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:23 np0005465604 keen_kilby[311519]: {
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:    "0": [
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:        {
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "devices": [
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "/dev/loop3"
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            ],
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_name": "ceph_lv0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_size": "21470642176",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "name": "ceph_lv0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "tags": {
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.cluster_name": "ceph",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.crush_device_class": "",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.encrypted": "0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.osd_id": "0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.type": "block",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.vdo": "0"
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            },
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "type": "block",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "vg_name": "ceph_vg0"
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:        }
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:    ],
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:    "1": [
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:        {
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "devices": [
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "/dev/loop4"
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            ],
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_name": "ceph_lv1",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_size": "21470642176",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "name": "ceph_lv1",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "tags": {
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.cluster_name": "ceph",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.crush_device_class": "",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.encrypted": "0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.osd_id": "1",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.type": "block",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.vdo": "0"
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            },
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "type": "block",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "vg_name": "ceph_vg1"
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:        }
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:    ],
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:    "2": [
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:        {
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "devices": [
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "/dev/loop5"
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            ],
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_name": "ceph_lv2",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_size": "21470642176",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "name": "ceph_lv2",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "tags": {
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.cluster_name": "ceph",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.crush_device_class": "",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.encrypted": "0",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.osd_id": "2",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.type": "block",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:                "ceph.vdo": "0"
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            },
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "type": "block",
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:            "vg_name": "ceph_vg2"
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:        }
Oct  2 04:28:23 np0005465604 keen_kilby[311519]:    ]
Oct  2 04:28:23 np0005465604 keen_kilby[311519]: }
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.791 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb5ece2-4ea5-4c2e-8d10-c77f72387e95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:23 np0005465604 systemd[1]: libpod-65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1.scope: Deactivated successfully.
Oct  2 04:28:23 np0005465604 conmon[311519]: conmon 65d1e04b93ce5a77efcf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1.scope/container/memory.events
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.813 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a63b15-3f4c-4937-862e-2b536fa5cf1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311538, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.829 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa9c667b-92dd-4aeb-9cc9-5447f94770e6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311543, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311543, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.831 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:23 np0005465604 nova_compute[260603]: 2025-10-02 08:28:23.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.834 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.835 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.835 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:23.835 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:23 np0005465604 podman[311539]: 2025-10-02 08:28:23.857738614 +0000 UTC m=+0.037721262 container died 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:28:23 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2a2b0dcf8c44ec7e500648d7cc7adce6fdcd49ffb68474c2348f8c81072e9bb6-merged.mount: Deactivated successfully.
Oct  2 04:28:23 np0005465604 podman[311539]: 2025-10-02 08:28:23.909472852 +0000 UTC m=+0.089455480 container remove 65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 04:28:23 np0005465604 systemd[1]: libpod-conmon-65d1e04b93ce5a77efcf6a2492c983089bc15718eb45ed45f81cd1f50ff572c1.scope: Deactivated successfully.
Oct  2 04:28:24 np0005465604 podman[311699]: 2025-10-02 08:28:24.535954101 +0000 UTC m=+0.036585797 container create 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:28:24 np0005465604 systemd[1]: Started libpod-conmon-4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c.scope.
Oct  2 04:28:24 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:28:24 np0005465604 podman[311699]: 2025-10-02 08:28:24.613691408 +0000 UTC m=+0.114323114 container init 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:28:24 np0005465604 podman[311699]: 2025-10-02 08:28:24.52143598 +0000 UTC m=+0.022067716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:28:24 np0005465604 podman[311699]: 2025-10-02 08:28:24.620873591 +0000 UTC m=+0.121505337 container start 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 04:28:24 np0005465604 pensive_stonebraker[311715]: 167 167
Oct  2 04:28:24 np0005465604 systemd[1]: libpod-4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c.scope: Deactivated successfully.
Oct  2 04:28:24 np0005465604 podman[311699]: 2025-10-02 08:28:24.624502993 +0000 UTC m=+0.125134739 container attach 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:28:24 np0005465604 podman[311699]: 2025-10-02 08:28:24.624905556 +0000 UTC m=+0.125537282 container died 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 04:28:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2cfadec55263f5b1123564c1eda634ec668e07a3fb52b1d8c89fd64da04c4087-merged.mount: Deactivated successfully.
Oct  2 04:28:24 np0005465604 podman[311699]: 2025-10-02 08:28:24.668505941 +0000 UTC m=+0.169137657 container remove 4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_stonebraker, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:28:24 np0005465604 systemd[1]: libpod-conmon-4945cfd919bb01b92bcea90af6c291422b27b0f424871ceeb3acec6b610ae98c.scope: Deactivated successfully.
Oct  2 04:28:24 np0005465604 podman[311738]: 2025-10-02 08:28:24.89373377 +0000 UTC m=+0.053134422 container create 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:28:24 np0005465604 systemd[1]: Started libpod-conmon-0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768.scope.
Oct  2 04:28:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.8 MiB/s wr, 206 op/s
Oct  2 04:28:24 np0005465604 podman[311738]: 2025-10-02 08:28:24.861735076 +0000 UTC m=+0.021135778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:28:24 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:28:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4087ba807de9057ac8d4f505ee3b24b7da5b46592c5673a7afeef0ef9006f0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4087ba807de9057ac8d4f505ee3b24b7da5b46592c5673a7afeef0ef9006f0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4087ba807de9057ac8d4f505ee3b24b7da5b46592c5673a7afeef0ef9006f0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4087ba807de9057ac8d4f505ee3b24b7da5b46592c5673a7afeef0ef9006f0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:24 np0005465604 podman[311738]: 2025-10-02 08:28:24.977127482 +0000 UTC m=+0.136528154 container init 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 04:28:24 np0005465604 podman[311738]: 2025-10-02 08:28:24.989177016 +0000 UTC m=+0.148577668 container start 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:28:24 np0005465604 podman[311738]: 2025-10-02 08:28:24.992892711 +0000 UTC m=+0.152293363 container attach 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.236 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.237 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.238 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.238 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.238 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.239 2 WARNING nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.239 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.239 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.239 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.240 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.240 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Processing event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.240 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.241 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.241 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.241 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.242 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.243 2 WARNING nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.243 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.244 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.244 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.245 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.245 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.246 2 WARNING nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.247 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.247 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.248 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.248 2 DEBUG oslo_concurrency.lockutils [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.249 2 DEBUG nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.249 2 WARNING nova.compute.manager [req-bb4ae1a3-00c2-4bbb-ac8c-79339581a605 req-5dea041c-17f7-4431-9daf-edbd654860c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.256 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.261 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393705.2612765, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.262 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.265 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.269 2 INFO nova.virt.libvirt.driver [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance spawned successfully.#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.270 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.301 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.313 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.319 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.320 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.321 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.322 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.323 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.324 2 DEBUG nova.virt.libvirt.driver [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.367 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.409 2 INFO nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Took 12.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.410 2 DEBUG nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.485 2 INFO nova.compute.manager [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Took 13.72 seconds to build instance.#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.504 2 DEBUG oslo_concurrency.lockutils [None req-23209ea8-2351-4749-9504-5ccc39020814 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.603 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.633 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.633 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.634 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.635 2 DEBUG nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 2f45b100-9bc2-4853-87ff-324e74ddfee5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.637 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.638 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.639 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.640 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.674 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.676 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.676 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.677 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.678 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:25Z|00380|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct  2 04:28:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:25Z|00381|binding|INFO|Releasing lport 2a218cce-83be-4768-9f4e-7d61802765d4 from this chassis (sb_readonly=0)
Oct  2 04:28:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:25Z|00382|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.814 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.936 2 INFO nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.937 2 DEBUG nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.965 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.965 2 DEBUG nova.compute.manager [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-changed-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.966 2 DEBUG nova.compute.manager [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing instance network info cache due to event network-changed-922a693b-1cb4-42e8-a97d-78973183c774. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.966 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.966 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.967 2 DEBUG nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing network info cache for port 922a693b-1cb4-42e8-a97d-78973183c774 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.967 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:25 np0005465604 nova_compute[260603]: 2025-10-02 08:28:25.968 2 DEBUG nova.network.neutron [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]: {
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "osd_id": 2,
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "type": "bluestore"
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:    },
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "osd_id": 1,
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "type": "bluestore"
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:    },
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "osd_id": 0,
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:        "type": "bluestore"
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]:    }
Oct  2 04:28:26 np0005465604 clever_vaughan[311754]: }
Oct  2 04:28:26 np0005465604 systemd[1]: libpod-0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768.scope: Deactivated successfully.
Oct  2 04:28:26 np0005465604 podman[311738]: 2025-10-02 08:28:26.055862035 +0000 UTC m=+1.215262697 container died 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 04:28:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e4087ba807de9057ac8d4f505ee3b24b7da5b46592c5673a7afeef0ef9006f0e-merged.mount: Deactivated successfully.
Oct  2 04:28:26 np0005465604 podman[311738]: 2025-10-02 08:28:26.114647662 +0000 UTC m=+1.274048314 container remove 0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 04:28:26 np0005465604 systemd[1]: libpod-conmon-0be88514b6ee9255c8e2549640393bb7cc4d8b307a31c82a6e988da4c8012768.scope: Deactivated successfully.
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.136 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.137 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.137 2 INFO nova.compute.manager [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Shelving#033[00m
Oct  2 04:28:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2793012690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.170 2 DEBUG nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:28:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:28:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:28:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:28:26 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 422b3fd1-85ee-4046-a1cb-4882c15357c1 does not exist
Oct  2 04:28:26 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev fc74a7da-e589-4db7-8f44-0b84a91af12c does not exist
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.188 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.283 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.285 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.290 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.290 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.295 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.296 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.299 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.299 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.302 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.303 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:28:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 18K writes, 73K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 18K writes, 5973 syncs, 3.14 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 44.97 MB, 0.07 MB/s#012Interval WAL: 11K writes, 4472 syncs, 2.55 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.586 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.587 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3503MB free_disk=59.809844970703125GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.587 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.587 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.679 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance f13ff7c1-d7d3-443e-9f06-69f8c466af30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.679 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 05cc7244-c419-4c24-b995-95ca760837a4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.679 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 247e32e5-5f07-4db4-9e6f-dcfade745228 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.679 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance e3ae3c82-7eb4-4727-a846-92afca9a8330 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.680 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.680 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.680 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:28:26 np0005465604 nova_compute[260603]: 2025-10-02 08:28:26.823 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:26 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:28:26 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:28:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 174 op/s
Oct  2 04:28:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3096039856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:27 np0005465604 nova_compute[260603]: 2025-10-02 08:28:27.282 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:27 np0005465604 nova_compute[260603]: 2025-10-02 08:28:27.293 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:27 np0005465604 nova_compute[260603]: 2025-10-02 08:28:27.321 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:27 np0005465604 nova_compute[260603]: 2025-10-02 08:28:27.362 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:28:27 np0005465604 nova_compute[260603]: 2025-10-02 08:28:27.363 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:27 np0005465604 nova_compute[260603]: 2025-10-02 08:28:27.365 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:27 np0005465604 nova_compute[260603]: 2025-10-02 08:28:27.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:28:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:28:27
Oct  2 04:28:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:28:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:28:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta']
Oct  2 04:28:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:28:28 np0005465604 nova_compute[260603]: 2025-10-02 08:28:28.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:28 np0005465604 nova_compute[260603]: 2025-10-02 08:28:28.902 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393693.9021835, 197184a1-4270-40c9-87b5-6eca7e832812 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:28 np0005465604 nova_compute[260603]: 2025-10-02 08:28:28.903 2 INFO nova.compute.manager [-] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:28:28 np0005465604 nova_compute[260603]: 2025-10-02 08:28:28.928 2 DEBUG nova.compute.manager [None req-842e3b19-080d-4a21-b0e8-3757c7d0e8a3 - - - - - -] [instance: 197184a1-4270-40c9-87b5-6eca7e832812] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.6 MiB/s wr, 304 op/s
Oct  2 04:28:29 np0005465604 nova_compute[260603]: 2025-10-02 08:28:29.852 2 DEBUG nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updated VIF entry in instance network info cache for port 922a693b-1cb4-42e8-a97d-78973183c774. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:29 np0005465604 nova_compute[260603]: 2025-10-02 08:28:29.853 2 DEBUG nova.network.neutron [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:29 np0005465604 nova_compute[260603]: 2025-10-02 08:28:29.877 2 DEBUG nova.network.neutron [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:29 np0005465604 nova_compute[260603]: 2025-10-02 08:28:29.881 2 DEBUG oslo_concurrency.lockutils [req-6efc3b6e-60ed-4965-820c-f66d6b6c1682 req-1c48036e-18ac-47c4-8bd5-685971311327 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:29 np0005465604 nova_compute[260603]: 2025-10-02 08:28:29.890 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:29 np0005465604 nova_compute[260603]: 2025-10-02 08:28:29.922 2 DEBUG oslo_concurrency.lockutils [None req-38adb423-8fbc-4d49-b3e9-17eec125c75f 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-f13ff7c1-d7d3-443e-9f06-69f8c466af30-2f45b100-9bc2-4853-87ff-324e74ddfee5" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 6.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:30 np0005465604 nova_compute[260603]: 2025-10-02 08:28:30.376 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:30 np0005465604 nova_compute[260603]: 2025-10-02 08:28:30.377 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:28:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 43 KiB/s wr, 149 op/s
Oct  2 04:28:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:28:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 16K writes, 62K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 16K writes, 5152 syncs, 3.11 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9973 writes, 37K keys, 9973 commit groups, 1.0 writes per commit group, ingest: 39.85 MB, 0.07 MB/s#012Interval WAL: 9973 writes, 4074 syncs, 2.45 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:28:31 np0005465604 nova_compute[260603]: 2025-10-02 08:28:31.443 2 DEBUG nova.compute.manager [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-changed-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:31 np0005465604 nova_compute[260603]: 2025-10-02 08:28:31.443 2 DEBUG nova.compute.manager [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing instance network info cache due to event network-changed-922a693b-1cb4-42e8-a97d-78973183c774. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:31 np0005465604 nova_compute[260603]: 2025-10-02 08:28:31.443 2 DEBUG oslo_concurrency.lockutils [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:31 np0005465604 nova_compute[260603]: 2025-10-02 08:28:31.444 2 DEBUG oslo_concurrency.lockutils [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:31 np0005465604 nova_compute[260603]: 2025-10-02 08:28:31.444 2 DEBUG nova.network.neutron [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing network info cache for port 922a693b-1cb4-42e8-a97d-78973183c774 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 04:28:32 np0005465604 nova_compute[260603]: 2025-10-02 08:28:32.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:32 np0005465604 nova_compute[260603]: 2025-10-02 08:28:32.713 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:32 np0005465604 nova_compute[260603]: 2025-10-02 08:28:32.714 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:32 np0005465604 nova_compute[260603]: 2025-10-02 08:28:32.715 2 DEBUG nova.objects.instance [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 43 KiB/s wr, 149 op/s
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.828 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.829 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.829 2 INFO nova.compute.manager [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Rebooting instance#033[00m
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.852 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.887 2 DEBUG nova.compute.manager [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.888 2 DEBUG nova.compute.manager [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing instance network info cache due to event network-changed-136aeb8e-dedd-4cd8-a72d-1c4309716daf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.888 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.889 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.889 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Refreshing network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.916 2 DEBUG nova.objects.instance [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'pci_requests' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:33 np0005465604 nova_compute[260603]: 2025-10-02 08:28:33.951 2 DEBUG nova.network.neutron [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:28:34 np0005465604 nova_compute[260603]: 2025-10-02 08:28:34.386 2 DEBUG nova.policy [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d235dd68314a5d82ac247a9e9842d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84c161efb2ba4334845e823db8128b62', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:28:34 np0005465604 nova_compute[260603]: 2025-10-02 08:28:34.442 2 DEBUG nova.network.neutron [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updated VIF entry in instance network info cache for port 922a693b-1cb4-42e8-a97d-78973183c774. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:34 np0005465604 nova_compute[260603]: 2025-10-02 08:28:34.443 2 DEBUG nova.network.neutron [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:34 np0005465604 nova_compute[260603]: 2025-10-02 08:28:34.468 2 DEBUG oslo_concurrency.lockutils [req-165a9f2c-fecb-45e0-9815-43406ae6ac55 req-62f7eaee-2fd8-4358-88e8-770251e3b197 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:34 np0005465604 nova_compute[260603]: 2025-10-02 08:28:34.469 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquired lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:34 np0005465604 nova_compute[260603]: 2025-10-02 08:28:34.470 2 DEBUG nova.network.neutron [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:28:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:34.815 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:34.815 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:34.816 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 143 op/s
Oct  2 04:28:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:35Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:62:72 10.100.0.5
Oct  2 04:28:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:35Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:62:72 10.100.0.5
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.299 2 DEBUG nova.network.neutron [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Successfully updated port: 2f45b100-9bc2-4853-87ff-324e74ddfee5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.326 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.326 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.326 2 DEBUG nova.network.neutron [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.439 2 DEBUG nova.compute.manager [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-changed-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.440 2 DEBUG nova.compute.manager [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing instance network info cache due to event network-changed-2f45b100-9bc2-4853-87ff-324e74ddfee5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.440 2 DEBUG oslo_concurrency.lockutils [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.494 2 WARNING nova.network.neutron [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] fa1bff6d-19fb-4792-a261-4da1165d95a1 already exists in list: networks containing: ['fa1bff6d-19fb-4792-a261-4da1165d95a1']. ignoring it#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.801 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updated VIF entry in instance network info cache for port 136aeb8e-dedd-4cd8-a72d-1c4309716daf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.802 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [{"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.824 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f13ff7c1-d7d3-443e-9f06-69f8c466af30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.825 2 DEBUG nova.compute.manager [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.825 2 DEBUG nova.compute.manager [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing instance network info cache due to event network-changed-262f44f5-df56-4176-96b2-4819d8b7e258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.825 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.947 2 DEBUG nova.network.neutron [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.969 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Releasing lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:35 np0005465604 nova_compute[260603]: 2025-10-02 08:28:35.971 2 DEBUG nova.compute.manager [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:36 np0005465604 kernel: tap922a693b-1c (unregistering): left promiscuous mode
Oct  2 04:28:36 np0005465604 NetworkManager[45129]: <info>  [1759393716.1836] device (tap922a693b-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:28:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:36Z|00383|binding|INFO|Releasing lport 922a693b-1cb4-42e8-a97d-78973183c774 from this chassis (sb_readonly=0)
Oct  2 04:28:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:36Z|00384|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 down in Southbound
Oct  2 04:28:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:36Z|00385|binding|INFO|Removing iface tap922a693b-1c ovn-installed in OVS
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.210 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:10:ae 10.100.0.7'], port_security=['fa:16:3e:bd:10:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '474d8864-210a-46b9-a362-54729ada24f1 aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=922a693b-1cb4-42e8-a97d-78973183c774) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.216 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 922a693b-1cb4-42e8-a97d-78973183c774 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 unbound from our chassis#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.222 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.230 2 DEBUG nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.259 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cbfeaf58-c222-4ee5-bbf6-187a6639e4ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:36 np0005465604 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct  2 04:28:36 np0005465604 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000030.scope: Consumed 11.813s CPU time.
Oct  2 04:28:36 np0005465604 systemd-machined[214636]: Machine qemu-52-instance-00000030 terminated.
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.316 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[40ae5c6c-b789-4d30-8e4c-f975973db248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.322 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad147b8-3d76-4f88-ad68-1e059777019d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.359 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b2e06c-e37a-45a8-bcc6-733e4bb079a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.367 2 INFO nova.virt.libvirt.driver [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance destroyed successfully.#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.368 2 DEBUG nova.objects.instance [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'resources' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.389 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[181d1855-1e52-48c6-a814-4f1fc0366e81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311914, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.391 2 DEBUG nova.virt.libvirt.vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:36Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.392 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.393 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.394 2 DEBUG os_vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.396 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap922a693b-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.404 2 INFO os_vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c')#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.410 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0f4c7e-5b3f-4baf-be98-98e0c56caa96]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453909, 'tstamp': 453909}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311916, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453913, 'tstamp': 453913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311916, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.411 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.414 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce7e8b6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.415 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.415 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcce7e8b6-90, col_values=(('external_ids', {'iface-id': '2a218cce-83be-4768-9f4e-7d61802765d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:36.416 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.417 2 DEBUG nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start _get_guest_xml network_info=[{"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.424 2 WARNING nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.431 2 DEBUG nova.virt.libvirt.host [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.432 2 DEBUG nova.virt.libvirt.host [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.439 2 DEBUG nova.virt.libvirt.host [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.439 2 DEBUG nova.virt.libvirt.host [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.440 2 DEBUG nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.440 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.441 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.441 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.442 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.442 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.442 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.443 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.443 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.443 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.444 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.444 2 DEBUG nova.virt.hardware [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.444 2 DEBUG nova.objects.instance [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.465 2 DEBUG oslo_concurrency.processutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 372 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.7 KiB/s wr, 130 op/s
Oct  2 04:28:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833570645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:36 np0005465604 nova_compute[260603]: 2025-10-02 08:28:36.980 2 DEBUG oslo_concurrency.processutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.012 2 DEBUG oslo_concurrency.processutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2268203493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.516 2 DEBUG oslo_concurrency.processutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.517 2 DEBUG nova.virt.libvirt.vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:36Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.518 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.518 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.519 2 DEBUG nova.objects.instance [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'pci_devices' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.538 2 DEBUG nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <uuid>21802142-fcf7-4eb2-b43b-e0fa48cab4d6</uuid>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <name>instance-00000030</name>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <nova:name>tempest-SecurityGroupsTestJSON-server-2008796594</nova:name>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:28:36</nova:creationTime>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <nova:user uuid="019cd25dce6249ce9c2cf326ec62df28">tempest-SecurityGroupsTestJSON-2081142325-project-member</nova:user>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <nova:project uuid="35a4ab7cf79e41f68a1ea888c2a3592e">tempest-SecurityGroupsTestJSON-2081142325</nova:project>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <nova:port uuid="922a693b-1cb4-42e8-a97d-78973183c774">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <entry name="serial">21802142-fcf7-4eb2-b43b-e0fa48cab4d6</entry>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <entry name="uuid">21802142-fcf7-4eb2-b43b-e0fa48cab4d6</entry>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_disk.config">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:bd:10:ae"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <target dev="tap922a693b-1c"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6/console.log" append="off"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <input type="keyboard" bus="usb"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:28:37 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:28:37 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.539 2 DEBUG nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.539 2 DEBUG nova.virt.libvirt.driver [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.540 2 DEBUG nova.virt.libvirt.vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:36Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.540 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.540 2 DEBUG nova.network.os_vif_util [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.541 2 DEBUG os_vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap922a693b-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.545 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap922a693b-1c, col_values=(('external_ids', {'iface-id': '922a693b-1cb4-42e8-a97d-78973183c774', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:10:ae', 'vm-uuid': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 NetworkManager[45129]: <info>  [1759393717.5471] manager: (tap922a693b-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.551 2 INFO os_vif [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c')#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.572 2 DEBUG nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.573 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.573 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.573 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.574 2 DEBUG nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.574 2 WARNING nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.574 2 DEBUG nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.574 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.574 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.574 2 DEBUG oslo_concurrency.lockutils [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.575 2 DEBUG nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.575 2 WARNING nova.compute.manager [req-174add9e-7378-4eaf-badd-a1463d5afd58 req-338e4a11-786e-4f32-b8e1-2c4d58e1c3cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  2 04:28:37 np0005465604 kernel: tap922a693b-1c: entered promiscuous mode
Oct  2 04:28:37 np0005465604 NetworkManager[45129]: <info>  [1759393717.6277] manager: (tap922a693b-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/175)
Oct  2 04:28:37 np0005465604 systemd-udevd[311897]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:28:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:37Z|00386|binding|INFO|Claiming lport 922a693b-1cb4-42e8-a97d-78973183c774 for this chassis.
Oct  2 04:28:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:37Z|00387|binding|INFO|922a693b-1cb4-42e8-a97d-78973183c774: Claiming fa:16:3e:bd:10:ae 10.100.0.7
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.644 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:10:ae 10.100.0.7'], port_security=['fa:16:3e:bd:10:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '474d8864-210a-46b9-a362-54729ada24f1 aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=922a693b-1cb4-42e8-a97d-78973183c774) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.645 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 922a693b-1cb4-42e8-a97d-78973183c774 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 bound to our chassis#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.646 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9#033[00m
Oct  2 04:28:37 np0005465604 NetworkManager[45129]: <info>  [1759393717.6491] device (tap922a693b-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:28:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:37Z|00388|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 ovn-installed in OVS
Oct  2 04:28:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:37Z|00389|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 up in Southbound
Oct  2 04:28:37 np0005465604 NetworkManager[45129]: <info>  [1759393717.6520] device (tap922a693b-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:37Z|00390|binding|INFO|Releasing lport f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080 from this chassis (sb_readonly=0)
Oct  2 04:28:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:37Z|00391|binding|INFO|Releasing lport 2a218cce-83be-4768-9f4e-7d61802765d4 from this chassis (sb_readonly=0)
Oct  2 04:28:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:37Z|00392|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.666 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c379113-fe22-4565-b77c-c9323ba8bbef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:37 np0005465604 systemd-machined[214636]: New machine qemu-53-instance-00000030.
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.702 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3a405217-804b-4d3b-b071-db6b4802633e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:37 np0005465604 systemd[1]: Started Virtual Machine qemu-53-instance-00000030.
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.705 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad82583-cf8d-43d2-adee-fe8104ed2819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.741 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b9aa4f94-90ed-47f0-bca2-a76de6a22c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.764 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aedcc586-4f30-45ea-92d1-69fdaa34ecca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312000, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.791 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d23e9403-cb64-46fa-8ec3-21b9f21dc609]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453909, 'tstamp': 453909}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312004, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453913, 'tstamp': 453913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312004, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.793 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce7e8b6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.797 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.797 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcce7e8b6-90, col_values=(('external_ids', {'iface-id': '2a218cce-83be-4768-9f4e-7d61802765d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.797 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.882 2 DEBUG nova.network.neutron [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.916 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.917 2 DEBUG oslo_concurrency.lockutils [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.917 2 DEBUG nova.network.neutron [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing network info cache for port 2f45b100-9bc2-4853-87ff-324e74ddfee5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.920 2 DEBUG nova.virt.libvirt.vif [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.920 2 DEBUG nova.network.os_vif_util [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.921 2 DEBUG nova.network.os_vif_util [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.921 2 DEBUG os_vif [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.922 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.923 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f45b100-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f45b100-9b, col_values=(('external_ids', {'iface-id': '2f45b100-9bc2-4853-87ff-324e74ddfee5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:c8:97', 'vm-uuid': '247e32e5-5f07-4db4-9e6f-dcfade745228'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 NetworkManager[45129]: <info>  [1759393717.9276] manager: (tap2f45b100-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.934 2 INFO os_vif [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b')#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.935 2 DEBUG nova.virt.libvirt.vif [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.935 2 DEBUG nova.network.os_vif_util [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.936 2 DEBUG nova.network.os_vif_util [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.938 2 DEBUG nova.virt.libvirt.guest [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] attach device xml: <interface type="ethernet">
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:69:c8:97"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]:  <target dev="tap2f45b100-9b"/>
Oct  2 04:28:37 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:28:37 np0005465604 nova_compute[260603]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 04:28:37 np0005465604 kernel: tap2f45b100-9b: entered promiscuous mode
Oct  2 04:28:37 np0005465604 NetworkManager[45129]: <info>  [1759393717.9510] manager: (tap2f45b100-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/177)
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:37Z|00393|binding|INFO|Claiming lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 for this chassis.
Oct  2 04:28:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:37Z|00394|binding|INFO|2f45b100-9bc2-4853-87ff-324e74ddfee5: Claiming fa:16:3e:69:c8:97 10.100.0.5
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.958 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c8:97 10.100.0.5'], port_security=['fa:16:3e:69:c8:97 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '247e32e5-5f07-4db4-9e6f-dcfade745228', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2f45b100-9bc2-4853-87ff-324e74ddfee5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.959 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 bound to our chassis#033[00m
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.960 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:28:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:37Z|00395|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 ovn-installed in OVS
Oct  2 04:28:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:37Z|00396|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 up in Southbound
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 nova_compute[260603]: 2025-10-02 08:28:37.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:37 np0005465604 NetworkManager[45129]: <info>  [1759393717.9776] device (tap2f45b100-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:28:37 np0005465604 NetworkManager[45129]: <info>  [1759393717.9795] device (tap2f45b100-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:28:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:37.990 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3fa298-da09-4888-a91f-b4e6d2f71fff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.027 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d07349-73f7-4367-b447-43f5ebae2868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.030 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9c1fec-69ea-4b6e-8da3-f345fd720217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.045 2 DEBUG nova.virt.libvirt.driver [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.045 2 DEBUG nova.virt.libvirt.driver [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.045 2 DEBUG nova.virt.libvirt.driver [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:df:f8:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.046 2 DEBUG nova.virt.libvirt.driver [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] No VIF found with MAC fa:16:3e:69:c8:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.056 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bf49c3e3-1d78-450a-8c8b-e23576945ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.073 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86c248e9-73e9-4a85-a686-ea97e97f5fd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312019, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.085 2 DEBUG nova.virt.libvirt.guest [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:38 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:  <nova:name>tempest-tempest.common.compute-instance-1204621376</nova:name>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:28:38</nova:creationTime>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:28:38 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:    <nova:port uuid="262f44f5-df56-4176-96b2-4819d8b7e258">
Oct  2 04:28:38 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:    <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct  2 04:28:38 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:38 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:28:38 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:28:38 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.088 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[54f7f0c5-5eb5-4b4b-912f-be342f19eef4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312020, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312020, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.089 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.091 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.111 2 DEBUG oslo_concurrency.lockutils [None req-b5f4c661-acfa-4ac3-bcb3-6f389478c649 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:38 np0005465604 kernel: tap8e7deb4e-9f (unregistering): left promiscuous mode
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:28:38 np0005465604 NetworkManager[45129]: <info>  [1759393718.5055] device (tap8e7deb4e-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002974836902552582 of space, bias 1.0, pg target 0.8924510707657746 quantized to 32 (current 32)
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:38Z|00397|binding|INFO|Releasing lport 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f from this chassis (sb_readonly=0)
Oct  2 04:28:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:38Z|00398|binding|INFO|Setting lport 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f down in Southbound
Oct  2 04:28:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:38Z|00399|binding|INFO|Removing iface tap8e7deb4e-9f ovn-installed in OVS
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.519 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:62:72 10.100.0.5'], port_security=['fa:16:3e:5a:62:72 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e3ae3c82-7eb4-4727-a846-92afca9a8330', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.520 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca unbound from our chassis#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.521 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.522 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1d4528-569e-4e20-9d2d-d6905d0fec97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.522 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace which is not needed anymore#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:38 np0005465604 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Oct  2 04:28:38 np0005465604 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002f.scope: Consumed 13.295s CPU time.
Oct  2 04:28:38 np0005465604 systemd-machined[214636]: Machine qemu-51-instance-0000002f terminated.
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.655 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.656 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393718.6528234, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.656 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.657 2 DEBUG nova.compute.manager [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.660 2 INFO nova.virt.libvirt.driver [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance rebooted successfully.#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.661 2 DEBUG nova.compute.manager [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:38 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [NOTICE]   (311032) : haproxy version is 2.8.14-c23fe91
Oct  2 04:28:38 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [NOTICE]   (311032) : path to executable is /usr/sbin/haproxy
Oct  2 04:28:38 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [WARNING]  (311032) : Exiting Master process...
Oct  2 04:28:38 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [ALERT]    (311032) : Current worker (311043) exited with code 143 (Terminated)
Oct  2 04:28:38 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[311019]: [WARNING]  (311032) : All workers exited. Exiting... (0)
Oct  2 04:28:38 np0005465604 systemd[1]: libpod-039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22.scope: Deactivated successfully.
Oct  2 04:28:38 np0005465604 conmon[311019]: conmon 039a6ca8595be57c963d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22.scope/container/memory.events
Oct  2 04:28:38 np0005465604 podman[312083]: 2025-10-02 08:28:38.682544869 +0000 UTC m=+0.046913440 container died 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:28:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22-userdata-shm.mount: Deactivated successfully.
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.711 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay-14ebb512a42e1b9f43784c36b0857c486bcadaa4002099f22dfeb9568e9eabfd-merged.mount: Deactivated successfully.
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.716 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:38 np0005465604 podman[312083]: 2025-10-02 08:28:38.721967644 +0000 UTC m=+0.086336205 container cleanup 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:28:38 np0005465604 systemd[1]: libpod-conmon-039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22.scope: Deactivated successfully.
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.772 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.773 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393718.6546621, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.773 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Started (Lifecycle Event)#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.785 2 DEBUG oslo_concurrency.lockutils [None req-10a93950-648b-48f2-94f2-00a83f101778 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:38 np0005465604 podman[312112]: 2025-10-02 08:28:38.800001529 +0000 UTC m=+0.055710252 container remove 039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.803 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.806 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.806 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[666ec414-ad8b-4366-8ab3-96b09c1b607f]: (4, ('Thu Oct  2 08:28:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22)\n039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22\nThu Oct  2 08:28:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22)\n039a6ca8595be57c963d8bc6789c0b8d5a12de9f4e654ca783db5d5e51506c22\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.808 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d80f7c-47eb-417e-aeda-182841f989ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.809 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:38 np0005465604 kernel: tapa72ac8c9-10: left promiscuous mode
Oct  2 04:28:38 np0005465604 nova_compute[260603]: 2025-10-02 08:28:38.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.832 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea29e01-b04f-4f07-bb52-a77c0ee414a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.859 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[784ce043-8258-419a-8b79-4b5a5228d10e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.860 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2a69c024-79cf-4bc3-be69-db91f686ef70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.878 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1915ad5-be5a-4c6a-9026-7d55b498ee9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457788, 'reachable_time': 27285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312140, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 systemd[1]: run-netns-ovnmeta\x2da72ac8c9\x2d16ee\x2d4ec0\x2db23d\x2d2741fda000ca.mount: Deactivated successfully.
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.884 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:28:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:38.884 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0862626c-72dc-4e82-b026-0a5eb7a71428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 405 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 199 op/s
Oct  2 04:28:39 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:39Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:c8:97 10.100.0.5
Oct  2 04:28:39 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:39Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:c8:97 10.100.0.5
Oct  2 04:28:39 np0005465604 nova_compute[260603]: 2025-10-02 08:28:39.248 2 INFO nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 04:28:39 np0005465604 nova_compute[260603]: 2025-10-02 08:28:39.255 2 INFO nova.virt.libvirt.driver [-] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance destroyed successfully.#033[00m
Oct  2 04:28:39 np0005465604 nova_compute[260603]: 2025-10-02 08:28:39.256 2 DEBUG nova.objects.instance [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'numa_topology' on Instance uuid e3ae3c82-7eb4-4727-a846-92afca9a8330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:39 np0005465604 nova_compute[260603]: 2025-10-02 08:28:39.434 2 DEBUG nova.compute.manager [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-vif-unplugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:39 np0005465604 nova_compute[260603]: 2025-10-02 08:28:39.435 2 DEBUG oslo_concurrency.lockutils [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:39 np0005465604 nova_compute[260603]: 2025-10-02 08:28:39.435 2 DEBUG oslo_concurrency.lockutils [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:39 np0005465604 nova_compute[260603]: 2025-10-02 08:28:39.435 2 DEBUG oslo_concurrency.lockutils [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:39 np0005465604 nova_compute[260603]: 2025-10-02 08:28:39.436 2 DEBUG nova.compute.manager [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] No waiting events found dispatching network-vif-unplugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:39 np0005465604 nova_compute[260603]: 2025-10-02 08:28:39.436 2 WARNING nova.compute.manager [req-2c41d4b1-110e-49f0-a5ff-af91702c0802 req-2953245f-3bec-41a8-911c-e983383c0c30 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received unexpected event network-vif-unplugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f for instance with vm_state active and task_state shelving.#033[00m
Oct  2 04:28:39 np0005465604 nova_compute[260603]: 2025-10-02 08:28:39.731 2 INFO nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Beginning cold snapshot process#033[00m
Oct  2 04:28:39 np0005465604 nova_compute[260603]: 2025-10-02 08:28:39.903 2 DEBUG nova.virt.libvirt.imagebackend [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:28:40 np0005465604 nova_compute[260603]: 2025-10-02 08:28:40.190 2 DEBUG nova.storage.rbd_utils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] creating snapshot(72558ed2fc5944b8953d2fe6d0214bb6) on rbd image(e3ae3c82-7eb4-4727-a846-92afca9a8330_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:28:40 np0005465604 nova_compute[260603]: 2025-10-02 08:28:40.665 2 DEBUG nova.network.neutron [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updated VIF entry in instance network info cache for port 2f45b100-9bc2-4853-87ff-324e74ddfee5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:40 np0005465604 nova_compute[260603]: 2025-10-02 08:28:40.666 2 DEBUG nova.network.neutron [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:40 np0005465604 nova_compute[260603]: 2025-10-02 08:28:40.698 2 DEBUG oslo_concurrency.lockutils [req-c51a3079-1f73-43a3-9988-5ee04c191644 req-670ecb24-0d91-4a8e-b5f3-749d2f7ceacc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:40 np0005465604 nova_compute[260603]: 2025-10-02 08:28:40.698 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:40 np0005465604 nova_compute[260603]: 2025-10-02 08:28:40.699 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Refreshing network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 405 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct  2 04:28:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Oct  2 04:28:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Oct  2 04:28:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Oct  2 04:28:41 np0005465604 nova_compute[260603]: 2025-10-02 08:28:41.096 2 DEBUG nova.storage.rbd_utils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] cloning vms/e3ae3c82-7eb4-4727-a846-92afca9a8330_disk@72558ed2fc5944b8953d2fe6d0214bb6 to images/42c47313-5859-4db4-83db-08656e9d2bfd clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:28:41 np0005465604 nova_compute[260603]: 2025-10-02 08:28:41.205 2 DEBUG nova.storage.rbd_utils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] flattening images/42c47313-5859-4db4-83db-08656e9d2bfd flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:28:41 np0005465604 nova_compute[260603]: 2025-10-02 08:28:41.520 2 DEBUG nova.storage.rbd_utils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] removing snapshot(72558ed2fc5944b8953d2fe6d0214bb6) on rbd image(e3ae3c82-7eb4-4727-a846-92afca9a8330_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:28:42 np0005465604 podman[312265]: 2025-10-02 08:28:42.015128674 +0000 UTC m=+0.075007703 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 04:28:42 np0005465604 podman[312264]: 2025-10-02 08:28:42.023970848 +0000 UTC m=+0.085678993 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 04:28:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Oct  2 04:28:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Oct  2 04:28:42 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.057 2 DEBUG nova.compute.manager [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.057 2 DEBUG oslo_concurrency.lockutils [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.057 2 DEBUG oslo_concurrency.lockutils [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.058 2 DEBUG oslo_concurrency.lockutils [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.058 2 DEBUG nova.compute.manager [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.058 2 WARNING nova.compute.manager [req-96be292f-12a0-4a3c-b090-09153f6ec030 req-c0aa0378-f1c2-41fb-9121-1bbfa1ac4a9d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:28:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.092 2 DEBUG nova.storage.rbd_utils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] creating snapshot(snap) on rbd image(42c47313-5859-4db4-83db-08656e9d2bfd) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.124 2 DEBUG nova.compute.manager [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.125 2 DEBUG oslo_concurrency.lockutils [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.125 2 DEBUG oslo_concurrency.lockutils [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.126 2 DEBUG oslo_concurrency.lockutils [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.126 2 DEBUG nova.compute.manager [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] No waiting events found dispatching network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.127 2 WARNING nova.compute.manager [req-21d8582a-8b96-47bc-82c4-b7169d79cedb req-cfef9512-23f5-4f6c-9438-9555634a66fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received unexpected event network-vif-plugged-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.590 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.591 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.591 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.591 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.591 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.592 2 INFO nova.compute.manager [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Terminating instance#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.593 2 DEBUG nova.compute.manager [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:28:42 np0005465604 kernel: tap922a693b-1c (unregistering): left promiscuous mode
Oct  2 04:28:42 np0005465604 NetworkManager[45129]: <info>  [1759393722.6378] device (tap922a693b-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:28:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:42Z|00400|binding|INFO|Releasing lport 922a693b-1cb4-42e8-a97d-78973183c774 from this chassis (sb_readonly=0)
Oct  2 04:28:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:42Z|00401|binding|INFO|Setting lport 922a693b-1cb4-42e8-a97d-78973183c774 down in Southbound
Oct  2 04:28:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:42Z|00402|binding|INFO|Removing iface tap922a693b-1c ovn-installed in OVS
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.655 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:10:ae 10.100.0.7'], port_security=['fa:16:3e:bd:10:ae 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '21802142-fcf7-4eb2-b43b-e0fa48cab4d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '8', 'neutron:security_group_ids': '40bf9bb5-121b-4489-a712-e32650e47ab0 474d8864-210a-46b9-a362-54729ada24f1 aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=922a693b-1cb4-42e8-a97d-78973183c774) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.656 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 922a693b-1cb4-42e8-a97d-78973183c774 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 unbound from our chassis#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.657 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.671 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[798a5992-d077-42ab-945e-af1a1e5d48fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.703 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3c632d-dcb2-419a-860a-f25ce9fbadab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.706 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6985fa3b-f7c9-491c-842d-34fae300934b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:42 np0005465604 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct  2 04:28:42 np0005465604 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Consumed 4.865s CPU time.
Oct  2 04:28:42 np0005465604 systemd-machined[214636]: Machine qemu-53-instance-00000030 terminated.
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.735 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4367f060-b2bd-420d-b6ce-cb8b095e1a69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.749 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d82bf79-c19a-4ca3-b93a-99b502f4b29b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce7e8b6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:5a:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453893, 'reachable_time': 18032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312335, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.763 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a0dba7-a017-4eb6-a9a0-3f39752b010d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453909, 'tstamp': 453909}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312336, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcce7e8b6-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453913, 'tstamp': 453913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312336, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.764 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.772 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce7e8b6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.772 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.772 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcce7e8b6-90, col_values=(('external_ids', {'iface-id': '2a218cce-83be-4768-9f4e-7d61802765d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:42.773 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.831 2 INFO nova.virt.libvirt.driver [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance destroyed successfully.#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.831 2 DEBUG nova.objects.instance [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'resources' on Instance uuid 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.858 2 DEBUG nova.virt.libvirt.vif [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-2008796594',display_name='tempest-SecurityGroupsTestJSON-server-2008796594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-2008796594',id=48,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-limo2kg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:38Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=21802142-fcf7-4eb2-b43b-e0fa48cab4d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.859 2 DEBUG nova.network.os_vif_util [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "922a693b-1cb4-42e8-a97d-78973183c774", "address": "fa:16:3e:bd:10:ae", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap922a693b-1c", "ovs_interfaceid": "922a693b-1cb4-42e8-a97d-78973183c774", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.859 2 DEBUG nova.network.os_vif_util [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.860 2 DEBUG os_vif [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap922a693b-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:42 np0005465604 nova_compute[260603]: 2025-10-02 08:28:42.870 2 INFO os_vif [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:10:ae,bridge_name='br-int',has_traffic_filtering=True,id=922a693b-1cb4-42e8-a97d-78973183c774,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap922a693b-1c')#033[00m
Oct  2 04:28:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 3 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 298 active+clean; 459 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 6.8 MiB/s wr, 246 op/s
Oct  2 04:28:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct  2 04:28:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct  2 04:28:43 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.240 2 INFO nova.virt.libvirt.driver [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Deleting instance files /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_del#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.241 2 INFO nova.virt.libvirt.driver [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Deletion of /var/lib/nova/instances/21802142-fcf7-4eb2-b43b-e0fa48cab4d6_del complete#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.308 2 INFO nova.compute.manager [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.309 2 DEBUG oslo.service.loopingcall [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.310 2 DEBUG nova.compute.manager [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.310 2 DEBUG nova.network.neutron [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.646 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updated VIF entry in instance network info cache for port 262f44f5-df56-4176-96b2-4819d8b7e258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.647 2 DEBUG nova.network.neutron [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.665 2 DEBUG oslo_concurrency.lockutils [req-54d38805-d807-4276-bf66-895394b4fe5f req-6959a826-0931-4dcc-aa34-5611417ea951 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.700 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.700 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.716 2 DEBUG nova.objects.instance [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'flavor' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.733 2 DEBUG nova.virt.libvirt.vif [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.734 2 DEBUG nova.network.os_vif_util [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.734 2 DEBUG nova.network.os_vif_util [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.739 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.742 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.745 2 DEBUG nova.virt.libvirt.driver [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Attempting to detach device tap2f45b100-9b from instance 247e32e5-5f07-4db4-9e6f-dcfade745228 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.745 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:69:c8:97"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <target dev="tap2f45b100-9b"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.754 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.760 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface>not found in domain: <domain type='kvm' id='49'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <name>instance-0000002d</name>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <uuid>247e32e5-5f07-4db4-9e6f-dcfade745228</uuid>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:name>tempest-tempest.common.compute-instance-1204621376</nova:name>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:28:38</nova:creationTime>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:port uuid="262f44f5-df56-4176-96b2-4819d8b7e258">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='serial'>247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='uuid'>247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk' index='2'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config' index='1'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:df:f8:c0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target dev='tap262f44f5-df'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:69:c8:97'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target dev='tap2f45b100-9b'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='net1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <source path='/dev/pts/2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log' append='off'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/2'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <source path='/dev/pts/2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log' append='off'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c132,c415</label>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c132,c415</imagelabel>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.760 2 INFO nova.virt.libvirt.driver [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap2f45b100-9b from instance 247e32e5-5f07-4db4-9e6f-dcfade745228 from the persistent domain config.
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.760 2 DEBUG nova.virt.libvirt.driver [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] (1/8): Attempting to detach device tap2f45b100-9b with device alias net1 from instance 247e32e5-5f07-4db4-9e6f-dcfade745228 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.761 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] detach device xml: <interface type="ethernet">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:69:c8:97"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <target dev="tap2f45b100-9b"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  2 04:28:43 np0005465604 kernel: tap2f45b100-9b (unregistering): left promiscuous mode
Oct  2 04:28:43 np0005465604 NetworkManager[45129]: <info>  [1759393723.8336] device (tap2f45b100-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.845 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Received event <DeviceRemovedEvent: 1759393723.8448665, 247e32e5-5f07-4db4-9e6f-dcfade745228 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.848 2 DEBUG nova.virt.libvirt.driver [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Start waiting for the detach event from libvirt for device tap2f45b100-9b with device alias net1 for instance 247e32e5-5f07-4db4-9e6f-dcfade745228 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct  2 04:28:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:43Z|00403|binding|INFO|Releasing lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 from this chassis (sb_readonly=0)
Oct  2 04:28:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:43Z|00404|binding|INFO|Setting lport 2f45b100-9bc2-4853-87ff-324e74ddfee5 down in Southbound
Oct  2 04:28:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:43Z|00405|binding|INFO|Removing iface tap2f45b100-9b ovn-installed in OVS
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.848 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.853 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:69:c8:97"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2f45b100-9b"/></interface>not found in domain: <domain type='kvm' id='49'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <name>instance-0000002d</name>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <uuid>247e32e5-5f07-4db4-9e6f-dcfade745228</uuid>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:name>tempest-tempest.common.compute-instance-1204621376</nova:name>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:28:38</nova:creationTime>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:port uuid="262f44f5-df56-4176-96b2-4819d8b7e258">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:port uuid="2f45b100-9bc2-4853-87ff-324e74ddfee5">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='serial'>247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='uuid'>247e32e5-5f07-4db4-9e6f-dcfade745228</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk' index='2'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/247e32e5-5f07-4db4-9e6f-dcfade745228_disk.config' index='1'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:df:f8:c0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target dev='tap262f44f5-df'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <source path='/dev/pts/2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log' append='off'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/2'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <source path='/dev/pts/2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228/console.log' append='off'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c132,c415</label>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c132,c415</imagelabel>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.853 2 INFO nova.virt.libvirt.driver [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully detached device tap2f45b100-9b from instance 247e32e5-5f07-4db4-9e6f-dcfade745228 from the live domain config.#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.854 2 DEBUG nova.virt.libvirt.vif [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.854 2 DEBUG nova.network.os_vif_util [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.854 2 DEBUG nova.network.os_vif_util [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.855 2 DEBUG os_vif [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.856 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f45b100-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.858 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:c8:97 10.100.0.5'], port_security=['fa:16:3e:69:c8:97 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '247e32e5-5f07-4db4-9e6f-dcfade745228', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-738296238', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'a01b7cc0-efb1-487b-ba19-18e9f4f22f80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2f45b100-9bc2-4853-87ff-324e74ddfee5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.861 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis#033[00m
Oct  2 04:28:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.863 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.878 2 INFO os_vif [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b')#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.878 2 DEBUG nova.virt.libvirt.guest [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:name>tempest-tempest.common.compute-instance-1204621376</nova:name>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:28:43</nova:creationTime>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:user uuid="14d235dd68314a5d82ac247a9e9842d8">tempest-AttachInterfacesTestJSON-1907037263-project-member</nova:user>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:project uuid="84c161efb2ba4334845e823db8128b62">tempest-AttachInterfacesTestJSON-1907037263</nova:project>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    <nova:port uuid="262f44f5-df56-4176-96b2-4819d8b7e258">
Oct  2 04:28:43 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:28:43 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:28:43 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:28:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.893 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab2fe14-ba93-4504-8668-b3845c50aec8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.941 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bc790085-192c-4d83-ad0c-5e0368417dcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.947 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[66e5f779-7557-41a6-a830-ca520ea47a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.980 2 DEBUG nova.network.neutron [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:43 np0005465604 nova_compute[260603]: 2025-10-02 08:28:43.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:43.990 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e95648ff-6f1e-44e5-9ebb-5d1ba2f08023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.000 2 INFO nova.compute.manager [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Took 0.69 seconds to deallocate network for instance.#033[00m
Oct  2 04:28:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.017 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1aaad4d4-4602-4427-b7d8-f691e2ec8cb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 1042, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 1042, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312383, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.046 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8fe60e-71f8-4ed6-a1ff-2b3c53aaf651]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312384, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312384, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.047 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.056 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.056 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.056 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:44.057 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.057 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.057 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.197 2 DEBUG oslo_concurrency.processutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.382 2 DEBUG nova.compute.manager [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-changed-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.383 2 DEBUG nova.compute.manager [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing instance network info cache due to event network-changed-922a693b-1cb4-42e8-a97d-78973183c774. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.384 2 DEBUG oslo_concurrency.lockutils [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.384 2 DEBUG oslo_concurrency.lockutils [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.385 2 DEBUG nova.network.neutron [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Refreshing network info cache for port 922a693b-1cb4-42e8-a97d-78973183c774 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.509 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.510 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.511 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.511 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.512 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.512 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.513 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.513 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.514 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.514 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.515 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.515 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.516 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.516 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.517 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.517 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.518 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.518 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.519 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.519 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.520 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.520 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.521 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.521 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-unplugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.522 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.522 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.523 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.523 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.524 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] No waiting events found dispatching network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.524 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received unexpected event network-vif-plugged-922a693b-1cb4-42e8-a97d-78973183c774 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.525 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.526 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.526 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.526 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.526 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.527 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-unplugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.527 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.527 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.527 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.527 2 DEBUG oslo_concurrency.lockutils [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.528 2 DEBUG nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.528 2 WARNING nova.compute.manager [req-f58c4d04-f88f-4750-933b-dea16d66ba98 req-fea4afba-a564-436b-a7a2-f1696f31c50a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-plugged-2f45b100-9bc2-4853-87ff-324e74ddfee5 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.631 2 DEBUG nova.network.neutron [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:28:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1894505686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.690 2 DEBUG oslo_concurrency.processutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.696 2 DEBUG nova.compute.provider_tree [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.723 2 INFO nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Snapshot image upload complete#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.723 2 DEBUG nova.compute.manager [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.728 2 DEBUG nova.scheduler.client.report [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.750 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.779 2 INFO nova.scheduler.client.report [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Deleted allocations for instance 21802142-fcf7-4eb2-b43b-e0fa48cab4d6#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.790 2 INFO nova.compute.manager [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Shelve offloading#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.806 2 INFO nova.virt.libvirt.driver [-] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance destroyed successfully.#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.806 2 DEBUG nova.compute.manager [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.809 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.809 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.809 2 DEBUG nova.network.neutron [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:28:44 np0005465604 nova_compute[260603]: 2025-10-02 08:28:44.852 2 DEBUG oslo_concurrency.lockutils [None req-9bdc53f5-6dc4-460b-8199-ae97595832ae 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "21802142-fcf7-4eb2-b43b-e0fa48cab4d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 3 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 298 active+clean; 475 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct  2 04:28:45 np0005465604 nova_compute[260603]: 2025-10-02 08:28:45.146 2 DEBUG nova.network.neutron [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:45 np0005465604 nova_compute[260603]: 2025-10-02 08:28:45.172 2 DEBUG oslo_concurrency.lockutils [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-21802142-fcf7-4eb2-b43b-e0fa48cab4d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:45 np0005465604 nova_compute[260603]: 2025-10-02 08:28:45.173 2 DEBUG nova.compute.manager [req-7f783184-b976-4243-ad2a-29ea34ef4e4b req-093fa631-26d3-4356-ad34-c95711fa5d9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Received event network-vif-deleted-922a693b-1cb4-42e8-a97d-78973183c774 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:45 np0005465604 nova_compute[260603]: 2025-10-02 08:28:45.390 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:45 np0005465604 nova_compute[260603]: 2025-10-02 08:28:45.391 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquired lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:45 np0005465604 nova_compute[260603]: 2025-10-02 08:28:45.391 2 DEBUG nova.network.neutron [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:28:46 np0005465604 nova_compute[260603]: 2025-10-02 08:28:46.293 2 DEBUG nova.network.neutron [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updating instance_info_cache with network_info: [{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:46 np0005465604 nova_compute[260603]: 2025-10-02 08:28:46.317 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 3 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 298 active+clean; 475 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 305 op/s
Oct  2 04:28:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.251 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.251 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.252 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.253 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.253 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.255 2 INFO nova.compute.manager [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Terminating instance#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.258 2 DEBUG nova.compute.manager [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:28:47 np0005465604 kernel: tap262f44f5-df (unregistering): left promiscuous mode
Oct  2 04:28:47 np0005465604 NetworkManager[45129]: <info>  [1759393727.3257] device (tap262f44f5-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:28:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:47Z|00406|binding|INFO|Releasing lport 262f44f5-df56-4176-96b2-4819d8b7e258 from this chassis (sb_readonly=0)
Oct  2 04:28:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:47Z|00407|binding|INFO|Setting lport 262f44f5-df56-4176-96b2-4819d8b7e258 down in Southbound
Oct  2 04:28:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:47Z|00408|binding|INFO|Removing iface tap262f44f5-df ovn-installed in OVS
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.380 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:f8:c0 10.100.0.14'], port_security=['fa:16:3e:df:f8:c0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '247e32e5-5f07-4db4-9e6f-dcfade745228', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1616ad3a-ff5f-4423-8dd9-f2ff5717f8c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=262f44f5-df56-4176-96b2-4819d8b7e258) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.383 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 262f44f5-df56-4176-96b2-4819d8b7e258 in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.386 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa1bff6d-19fb-4792-a261-4da1165d95a1#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd47eb2b-8ab0-45b6-b459-d04cc972ff32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:47 np0005465604 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct  2 04:28:47 np0005465604 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002d.scope: Consumed 14.496s CPU time.
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.438 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4635f16b-92ae-47bb-838c-ce415e6c3a5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:47 np0005465604 systemd-machined[214636]: Machine qemu-49-instance-0000002d terminated.
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.444 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3e689c1e-780f-4929-a3e8-e21965177e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.485 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dadc372e-b1a1-40c4-99fa-a3df2fa38606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.501 2 INFO nova.virt.libvirt.driver [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Instance destroyed successfully.#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.502 2 DEBUG nova.objects.instance [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'resources' on Instance uuid 247e32e5-5f07-4db4-9e6f-dcfade745228 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.510 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[07dc5876-ba0c-466d-94d1-79f4005219f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa1bff6d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:c9:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 1042, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 1042, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454001, 'reachable_time': 43415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312428, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.520 2 DEBUG nova.virt.libvirt.vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.520 2 DEBUG nova.network.os_vif_util [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.521 2 DEBUG nova.network.os_vif_util [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.521 2 DEBUG os_vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.523 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap262f44f5-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.532 2 INFO os_vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:f8:c0,bridge_name='br-int',has_traffic_filtering=True,id=262f44f5-df56-4176-96b2-4819d8b7e258,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap262f44f5-df')#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.533 2 DEBUG nova.virt.libvirt.vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1204621376',display_name='tempest-tempest.common.compute-instance-1204621376',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1204621376',id=45,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-8sa12d1t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=247e32e5-5f07-4db4-9e6f-dcfade745228,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.533 2 DEBUG nova.network.os_vif_util [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "address": "fa:16:3e:69:c8:97", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f45b100-9b", "ovs_interfaceid": "2f45b100-9bc2-4853-87ff-324e74ddfee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.533 2 DEBUG nova.network.os_vif_util [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.534 2 DEBUG os_vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f45b100-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.536 2 INFO os_vif [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:c8:97,bridge_name='br-int',has_traffic_filtering=True,id=2f45b100-9bc2-4853-87ff-324e74ddfee5,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2f45b100-9b')#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.540 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[20b13bb4-edb0-40b4-8135-74d683fc8909]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454016, 'tstamp': 454016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312435, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa1bff6d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454020, 'tstamp': 454020}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312435, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.541 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.546 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa1bff6d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.546 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.546 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa1bff6d-10, col_values=(('external_ids', {'iface-id': 'f3bbefb0-d6f2-4ac2-ae19-0f58ef03c080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:47.546 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.881 2 INFO nova.virt.libvirt.driver [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Deleting instance files /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228_del#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.882 2 INFO nova.virt.libvirt.driver [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Deletion of /var/lib/nova/instances/247e32e5-5f07-4db4-9e6f-dcfade745228_del complete#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.934 2 INFO nova.compute.manager [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.934 2 DEBUG oslo.service.loopingcall [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.934 2 DEBUG nova.compute.manager [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:28:47 np0005465604 nova_compute[260603]: 2025-10-02 08:28:47.935 2 DEBUG nova.network.neutron [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:28:48 np0005465604 podman[312457]: 2025-10-02 08:28:48.018132247 +0000 UTC m=+0.071148121 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.040 2 INFO nova.network.neutron [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Port 2f45b100-9bc2-4853-87ff-324e74ddfee5 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.040 2 DEBUG nova.network.neutron [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [{"id": "262f44f5-df56-4176-96b2-4819d8b7e258", "address": "fa:16:3e:df:f8:c0", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap262f44f5-df", "ovs_interfaceid": "262f44f5-df56-4176-96b2-4819d8b7e258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.076 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Releasing lock "refresh_cache-247e32e5-5f07-4db4-9e6f-dcfade745228" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.106 2 DEBUG oslo_concurrency.lockutils [None req-58fe75ac-7f5c-4bd2-8615-0569680d2cab 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "interface-247e32e5-5f07-4db4-9e6f-dcfade745228-2f45b100-9bc2-4853-87ff-324e74ddfee5" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.279 2 INFO nova.virt.libvirt.driver [-] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Instance destroyed successfully.#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.279 2 DEBUG nova.objects.instance [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'resources' on Instance uuid e3ae3c82-7eb4-4727-a846-92afca9a8330 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.295 2 DEBUG nova.virt.libvirt.vif [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-379096976',display_name='tempest-DeleteServersTestJSON-server-379096976',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-379096976',id=47,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-f7cj6hb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member',shelved_at='2025-10-02T08:28:44.723439',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='42c47313-5859-4db4-83db-08656e9d2bfd'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:39Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=e3ae3c82-7eb4-4727-a846-92afca9a8330,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.296 2 DEBUG nova.network.os_vif_util [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.296 2 DEBUG nova.network.os_vif_util [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.296 2 DEBUG os_vif [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e7deb4e-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.303 2 INFO os_vif [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:62:72,bridge_name='br-int',has_traffic_filtering=True,id=8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e7deb4e-9f')#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.339 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.340 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.343 2 DEBUG nova.compute.manager [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-unplugged-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.343 2 DEBUG oslo_concurrency.lockutils [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.344 2 DEBUG oslo_concurrency.lockutils [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.344 2 DEBUG oslo_concurrency.lockutils [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.345 2 DEBUG nova.compute.manager [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-unplugged-262f44f5-df56-4176-96b2-4819d8b7e258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.345 2 DEBUG nova.compute.manager [req-7098ead1-d8ce-448f-933b-ef21417f48e4 req-cef535f4-f8af-497a-a584-80e05b15d5e8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-unplugged-262f44f5-df56-4176-96b2-4819d8b7e258 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.367 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.443 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.443 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.450 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.450 2 INFO nova.compute.claims [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.647 2 INFO nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Deleting instance files /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330_del#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.648 2 INFO nova.virt.libvirt.driver [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Deletion of /var/lib/nova/instances/e3ae3c82-7eb4-4727-a846-92afca9a8330_del complete#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.672 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.760 2 INFO nova.scheduler.client.report [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Deleted allocations for instance e3ae3c82-7eb4-4727-a846-92afca9a8330#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.837 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 407 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 5.9 MiB/s wr, 282 op/s
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.980 2 DEBUG nova.compute.manager [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Received event network-changed-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.981 2 DEBUG nova.compute.manager [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Refreshing instance network info cache due to event network-changed-8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.982 2 DEBUG oslo_concurrency.lockutils [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.982 2 DEBUG oslo_concurrency.lockutils [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:48 np0005465604 nova_compute[260603]: 2025-10-02 08:28:48.982 2 DEBUG nova.network.neutron [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Refreshing network info cache for port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620264730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.117 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.125 2 DEBUG nova.compute.provider_tree [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.148 2 DEBUG nova.scheduler.client.report [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.175 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.176 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.181 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.245 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.246 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.273 2 INFO nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.297 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.381 2 DEBUG oslo_concurrency.processutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.427 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.430 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.430 2 INFO nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Creating image(s)#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.466 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.502 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.538 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.543 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.616 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.617 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.617 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.618 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.640 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.643 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 331bfae3-95e5-4c18-96ca-56597994c6b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.682 2 DEBUG nova.policy [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b3acbcd92044b7a9df5bd8748ab7599', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '226ee42bc28344d49ab6b0000485ab4d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:28:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388165678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.916 2 DEBUG oslo_concurrency.processutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.923 2 DEBUG nova.compute.provider_tree [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.935 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 331bfae3-95e5-4c18-96ca-56597994c6b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:49 np0005465604 nova_compute[260603]: 2025-10-02 08:28:49.981 2 DEBUG nova.scheduler.client.report [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:50 np0005465604 nova_compute[260603]: 2025-10-02 08:28:50.035 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:50 np0005465604 nova_compute[260603]: 2025-10-02 08:28:50.047 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] resizing rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:28:50 np0005465604 nova_compute[260603]: 2025-10-02 08:28:50.116 2 DEBUG oslo_concurrency.lockutils [None req-2e95cd1e-5cf4-4fbe-8798-6d315db06b0d 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "e3ae3c82-7eb4-4727-a846-92afca9a8330" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 23.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:50 np0005465604 nova_compute[260603]: 2025-10-02 08:28:50.165 2 DEBUG nova.objects.instance [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lazy-loading 'migration_context' on Instance uuid 331bfae3-95e5-4c18-96ca-56597994c6b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:50 np0005465604 nova_compute[260603]: 2025-10-02 08:28:50.184 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:28:50 np0005465604 nova_compute[260603]: 2025-10-02 08:28:50.185 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Ensure instance console log exists: /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:28:50 np0005465604 nova_compute[260603]: 2025-10-02 08:28:50.186 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:50 np0005465604 nova_compute[260603]: 2025-10-02 08:28:50.186 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:50 np0005465604 nova_compute[260603]: 2025-10-02 08:28:50.187 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:50 np0005465604 nova_compute[260603]: 2025-10-02 08:28:50.846 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Successfully created port: e4363d00-4c59-46bb-ac20-98bde0cd0d4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:28:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 407 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.2 MiB/s wr, 251 op/s
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.001 2 DEBUG nova.compute.manager [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.001 2 DEBUG oslo_concurrency.lockutils [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.001 2 DEBUG oslo_concurrency.lockutils [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.002 2 DEBUG oslo_concurrency.lockutils [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.002 2 DEBUG nova.compute.manager [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] No waiting events found dispatching network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.002 2 WARNING nova.compute.manager [req-b385ac5b-5dae-4152-927d-8c558cdf88c0 req-a79e37b2-39e6-4a3f-9a77-6dbb0dc865aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received unexpected event network-vif-plugged-262f44f5-df56-4176-96b2-4819d8b7e258 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:28:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct  2 04:28:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct  2 04:28:51 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.149 2 DEBUG nova.network.neutron [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.180 2 INFO nova.compute.manager [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Took 3.25 seconds to deallocate network for instance.#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.248 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.249 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.377 2 DEBUG nova.network.neutron [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updated VIF entry in instance network info cache for port 8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.378 2 DEBUG nova.network.neutron [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Updating instance_info_cache with network_info: [{"id": "8e7deb4e-9f96-4dac-8b47-d7a0b6eb325f", "address": "fa:16:3e:5a:62:72", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": null, "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap8e7deb4e-9f", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.401 2 DEBUG oslo_concurrency.lockutils [req-2b1646f5-dd63-4d49-997f-3cd1a9849761 req-46f786bf-f64f-4e8e-a0f3-4551bdcf84a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-e3ae3c82-7eb4-4727-a846-92afca9a8330" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.414 2 DEBUG oslo_concurrency.processutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2927921251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.857 2 DEBUG nova.compute.manager [req-0341af7a-c006-4f80-ac95-66a1a14a4e3c req-27167bbb-f2d4-45db-a6ac-241ef46bf487 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Received event network-vif-deleted-262f44f5-df56-4176-96b2-4819d8b7e258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.859 2 DEBUG oslo_concurrency.processutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.870 2 DEBUG nova.compute.provider_tree [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.892 2 DEBUG nova.scheduler.client.report [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.912 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.966 2 INFO nova.scheduler.client.report [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Deleted allocations for instance 247e32e5-5f07-4db4-9e6f-dcfade745228#033[00m
Oct  2 04:28:51 np0005465604 nova_compute[260603]: 2025-10-02 08:28:51.986 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Successfully updated port: e4363d00-4c59-46bb-ac20-98bde0cd0d4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.034 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.034 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquired lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.035 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:28:52 np0005465604 podman[312728]: 2025-10-02 08:28:52.042897624 +0000 UTC m=+0.090497364 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.068 2 DEBUG oslo_concurrency.lockutils [None req-ca29fe3e-b5bd-4a65-92b6-ecad54055120 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "247e32e5-5f07-4db4-9e6f-dcfade745228" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct  2 04:28:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct  2 04:28:52 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.199 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.598 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.599 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.599 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.600 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.600 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.601 2 INFO nova.compute.manager [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Terminating instance#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.602 2 DEBUG nova.compute.manager [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:28:52 np0005465604 kernel: tap136aeb8e-de (unregistering): left promiscuous mode
Oct  2 04:28:52 np0005465604 NetworkManager[45129]: <info>  [1759393732.6546] device (tap136aeb8e-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:52Z|00409|binding|INFO|Releasing lport 136aeb8e-dedd-4cd8-a72d-1c4309716daf from this chassis (sb_readonly=0)
Oct  2 04:28:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:52Z|00410|binding|INFO|Setting lport 136aeb8e-dedd-4cd8-a72d-1c4309716daf down in Southbound
Oct  2 04:28:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:52Z|00411|binding|INFO|Removing iface tap136aeb8e-de ovn-installed in OVS
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.679 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:59:2a 10.100.0.3'], port_security=['fa:16:3e:0b:59:2a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f13ff7c1-d7d3-443e-9f06-69f8c466af30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84c161efb2ba4334845e823db8128b62', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1616ad3a-ff5f-4423-8dd9-f2ff5717f8c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc943591-0c90-4643-afef-bbae457695c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=136aeb8e-dedd-4cd8-a72d-1c4309716daf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.680 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 136aeb8e-dedd-4cd8-a72d-1c4309716daf in datapath fa1bff6d-19fb-4792-a261-4da1165d95a1 unbound from our chassis#033[00m
Oct  2 04:28:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.682 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa1bff6d-19fb-4792-a261-4da1165d95a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:28:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.686 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca3beb4-f86f-4054-b219-326fa13a5478]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.687 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 namespace which is not needed anymore#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:52 np0005465604 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct  2 04:28:52 np0005465604 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Consumed 14.759s CPU time.
Oct  2 04:28:52 np0005465604 systemd-machined[214636]: Machine qemu-47-instance-0000002a terminated.
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:52 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [NOTICE]   (308407) : haproxy version is 2.8.14-c23fe91
Oct  2 04:28:52 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [NOTICE]   (308407) : path to executable is /usr/sbin/haproxy
Oct  2 04:28:52 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [WARNING]  (308407) : Exiting Master process...
Oct  2 04:28:52 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [WARNING]  (308407) : Exiting Master process...
Oct  2 04:28:52 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [ALERT]    (308407) : Current worker (308409) exited with code 143 (Terminated)
Oct  2 04:28:52 np0005465604 neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1[308403]: [WARNING]  (308407) : All workers exited. Exiting... (0)
Oct  2 04:28:52 np0005465604 systemd[1]: libpod-2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57.scope: Deactivated successfully.
Oct  2 04:28:52 np0005465604 podman[312772]: 2025-10-02 08:28:52.84756557 +0000 UTC m=+0.052763822 container died 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.851 2 INFO nova.virt.libvirt.driver [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Instance destroyed successfully.#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.852 2 DEBUG nova.objects.instance [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lazy-loading 'resources' on Instance uuid f13ff7c1-d7d3-443e-9f06-69f8c466af30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.867 2 DEBUG nova.virt.libvirt.vif [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-53387023',display_name='tempest-tempest.common.compute-instance-53387023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-53387023',id=42,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD8IyJq8joLv9lWIYbWQFDZkcMO/r29NilVmR/7yJi+FETbGEiSdsOITCDxrJyXFT8Hb2MOYz+RQS7kpOu5FY7JcRMt/OvLwxzhS/kePVnWkRD5Idni3NGjK85kKpcgtRA==',key_name='tempest-keypair-215518840',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84c161efb2ba4334845e823db8128b62',ramdisk_id='',reservation_id='r-966e3isi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1907037263',owner_user_name='tempest-AttachInterfacesTestJSON-1907037263-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='14d235dd68314a5d82ac247a9e9842d8',uuid=f13ff7c1-d7d3-443e-9f06-69f8c466af30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.869 2 DEBUG nova.network.os_vif_util [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converting VIF {"id": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "address": "fa:16:3e:0b:59:2a", "network": {"id": "fa1bff6d-19fb-4792-a261-4da1165d95a1", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-145025832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84c161efb2ba4334845e823db8128b62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap136aeb8e-de", "ovs_interfaceid": "136aeb8e-dedd-4cd8-a72d-1c4309716daf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.871 2 DEBUG nova.network.os_vif_util [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.872 2 DEBUG os_vif [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap136aeb8e-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.887 2 INFO os_vif [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:59:2a,bridge_name='br-int',has_traffic_filtering=True,id=136aeb8e-dedd-4cd8-a72d-1c4309716daf,network=Network(fa1bff6d-19fb-4792-a261-4da1165d95a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap136aeb8e-de')#033[00m
Oct  2 04:28:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d474468b501cd6334914f2b4a9f295c2f9e45b4d98ca3d1d88fd0ebc5aaafcb2-merged.mount: Deactivated successfully.
Oct  2 04:28:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57-userdata-shm.mount: Deactivated successfully.
Oct  2 04:28:52 np0005465604 podman[312772]: 2025-10-02 08:28:52.910458544 +0000 UTC m=+0.115656796 container cleanup 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:28:52 np0005465604 systemd[1]: libpod-conmon-2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57.scope: Deactivated successfully.
Oct  2 04:28:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 291 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 1.2 MiB/s wr, 150 op/s
Oct  2 04:28:52 np0005465604 podman[312828]: 2025-10-02 08:28:52.982021078 +0000 UTC m=+0.045908488 container remove 2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 04:28:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.991 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8b77ab02-25f0-4cdf-9edf-f0d545705e9c]: (4, ('Thu Oct  2 08:28:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 (2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57)\n2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57\nThu Oct  2 08:28:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 (2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57)\n2a913f0f44115368732dc0d686d964dd02abd811868dd6a73db3d94dc6116a57\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.993 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba9f73e-9b0e-4cfa-8ddf-9ccfc52921f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:52.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa1bff6d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:52 np0005465604 nova_compute[260603]: 2025-10-02 08:28:52.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:52 np0005465604 kernel: tapfa1bff6d-10: left promiscuous mode
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.023 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e00d242f-6d13-469d-97a9-8a8b13484c3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.053 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca9ca9b-e0f3-4d72-b58f-b11d86c46a76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.054 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9fae1cd-31fd-4485-80c7-b9f6e84b4532]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.071 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9d438b-e294-4618-bc79-e714d8118b9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453990, 'reachable_time': 19448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312846, 'error': None, 'target': 'ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:53 np0005465604 systemd[1]: run-netns-ovnmeta\x2dfa1bff6d\x2d19fb\x2d4792\x2da261\x2d4da1165d95a1.mount: Deactivated successfully.
Oct  2 04:28:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.079 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fa1bff6d-19fb-4792-a261-4da1165d95a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:28:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:53.080 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[2e00b9e3-7a64-45e8-9167-17e76e859e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.202 2 DEBUG nova.network.neutron [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Updating instance_info_cache with network_info: [{"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.230 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Releasing lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.231 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Instance network_info: |[{"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.234 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Start _get_guest_xml network_info=[{"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.240 2 WARNING nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.246 2 DEBUG nova.virt.libvirt.host [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.247 2 DEBUG nova.virt.libvirt.host [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.253 2 DEBUG nova.virt.libvirt.host [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.254 2 DEBUG nova.virt.libvirt.host [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.255 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.255 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.256 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.256 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.257 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.257 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.257 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.258 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.259 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.259 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.259 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.260 2 DEBUG nova.virt.hardware [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.264 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.314 2 INFO nova.virt.libvirt.driver [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Deleting instance files /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30_del#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.315 2 INFO nova.virt.libvirt.driver [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Deletion of /var/lib/nova/instances/f13ff7c1-d7d3-443e-9f06-69f8c466af30_del complete#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.392 2 INFO nova.compute.manager [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.394 2 DEBUG oslo.service.loopingcall [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.394 2 DEBUG nova.compute.manager [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.395 2 DEBUG nova.network.neutron [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.405 2 DEBUG nova.compute.manager [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-unplugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.405 2 DEBUG oslo_concurrency.lockutils [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.406 2 DEBUG oslo_concurrency.lockutils [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.406 2 DEBUG oslo_concurrency.lockutils [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.407 2 DEBUG nova.compute.manager [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-unplugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.407 2 DEBUG nova.compute.manager [req-5a23cd2c-e559-4b45-b9ee-55d26f8e8c09 req-b0263ddb-01c0-4178-b9a2-9b2b4cc1456f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-unplugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:28:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937517978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.737 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.758 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.761 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.829 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393718.7640457, e3ae3c82-7eb4-4727-a846-92afca9a8330 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.830 2 INFO nova.compute.manager [-] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:28:53 np0005465604 nova_compute[260603]: 2025-10-02 08:28:53.859 2 DEBUG nova.compute.manager [None req-662b77da-1cad-4d6d-a213-5ac0cbc1df56 - - - - - -] [instance: e3ae3c82-7eb4-4727-a846-92afca9a8330] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:28:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1727180314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.161 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.164 2 DEBUG nova.virt.libvirt.vif [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1491987986',display_name='tempest-InstanceActionsNegativeTestJSON-server-1491987986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1491987986',id=49,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='226ee42bc28344d49ab6b0000485ab4d',ramdisk_id='',reservation_id='r-w4z7ku8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-472656503',owner_user
_name='tempest-InstanceActionsNegativeTestJSON-472656503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:49Z,user_data=None,user_id='2b3acbcd92044b7a9df5bd8748ab7599',uuid=331bfae3-95e5-4c18-96ca-56597994c6b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.165 2 DEBUG nova.network.os_vif_util [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converting VIF {"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.166 2 DEBUG nova.network.os_vif_util [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.168 2 DEBUG nova.objects.instance [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lazy-loading 'pci_devices' on Instance uuid 331bfae3-95e5-4c18-96ca-56597994c6b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.200 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  <uuid>331bfae3-95e5-4c18-96ca-56597994c6b7</uuid>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  <name>instance-00000031</name>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1491987986</nova:name>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:28:53</nova:creationTime>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <nova:user uuid="2b3acbcd92044b7a9df5bd8748ab7599">tempest-InstanceActionsNegativeTestJSON-472656503-project-member</nova:user>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <nova:project uuid="226ee42bc28344d49ab6b0000485ab4d">tempest-InstanceActionsNegativeTestJSON-472656503</nova:project>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <nova:port uuid="e4363d00-4c59-46bb-ac20-98bde0cd0d4f">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <entry name="serial">331bfae3-95e5-4c18-96ca-56597994c6b7</entry>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <entry name="uuid">331bfae3-95e5-4c18-96ca-56597994c6b7</entry>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/331bfae3-95e5-4c18-96ca-56597994c6b7_disk">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:ba:e0:ec"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <target dev="tape4363d00-4c"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/console.log" append="off"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:28:54 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:28:54 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:28:54 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:28:54 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.203 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Preparing to wait for external event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.203 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.204 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.205 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.206 2 DEBUG nova.virt.libvirt.vif [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1491987986',display_name='tempest-InstanceActionsNegativeTestJSON-server-1491987986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1491987986',id=49,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='226ee42bc28344d49ab6b0000485ab4d',ramdisk_id='',reservation_id='r-w4z7ku8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-472656503',
owner_user_name='tempest-InstanceActionsNegativeTestJSON-472656503-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:49Z,user_data=None,user_id='2b3acbcd92044b7a9df5bd8748ab7599',uuid=331bfae3-95e5-4c18-96ca-56597994c6b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.207 2 DEBUG nova.network.os_vif_util [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converting VIF {"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.208 2 DEBUG nova.network.os_vif_util [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.209 2 DEBUG os_vif [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.212 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.213 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4363d00-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.219 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4363d00-4c, col_values=(('external_ids', {'iface-id': 'e4363d00-4c59-46bb-ac20-98bde0cd0d4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:e0:ec', 'vm-uuid': '331bfae3-95e5-4c18-96ca-56597994c6b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:54 np0005465604 NetworkManager[45129]: <info>  [1759393734.2222] manager: (tape4363d00-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.227 2 INFO os_vif [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c')#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.234 2 DEBUG nova.compute.manager [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-changed-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.235 2 DEBUG nova.compute.manager [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Refreshing instance network info cache due to event network-changed-e4363d00-4c59-46bb-ac20-98bde0cd0d4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.236 2 DEBUG oslo_concurrency.lockutils [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.236 2 DEBUG oslo_concurrency.lockutils [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.237 2 DEBUG nova.network.neutron [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Refreshing network info cache for port e4363d00-4c59-46bb-ac20-98bde0cd0d4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.299 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.299 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.300 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] No VIF found with MAC fa:16:3e:ba:e0:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.300 2 INFO nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Using config drive#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.324 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:54.446 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:54.447 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.493 2 DEBUG nova.network.neutron [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.509 2 INFO nova.compute.manager [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Took 1.11 seconds to deallocate network for instance.#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.598 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.598 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:54 np0005465604 nova_compute[260603]: 2025-10-02 08:28:54.714 2 DEBUG oslo_concurrency.processutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 208 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 2.7 MiB/s wr, 231 op/s
Oct  2 04:28:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140997525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.135 2 DEBUG oslo_concurrency.processutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.142 2 DEBUG nova.compute.provider_tree [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.166 2 DEBUG nova.scheduler.client.report [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.187 2 INFO nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Creating config drive at /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.192 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_6lyi017 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.227 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.267 2 INFO nova.scheduler.client.report [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Deleted allocations for instance f13ff7c1-d7d3-443e-9f06-69f8c466af30#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.328 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_6lyi017" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.349 2 DEBUG nova.storage.rbd_utils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] rbd image 331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.352 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config 331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.386 2 DEBUG oslo_concurrency.lockutils [None req-a2329845-f502-4925-b78e-af52ee1c4042 14d235dd68314a5d82ac247a9e9842d8 84c161efb2ba4334845e823db8128b62 - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.498 2 DEBUG oslo_concurrency.processutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config 331bfae3-95e5-4c18-96ca-56597994c6b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.499 2 INFO nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Deleting local config drive /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7/disk.config because it was imported into RBD.#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.533 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.533 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.534 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.534 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.534 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.536 2 INFO nova.compute.manager [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Terminating instance#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.537 2 DEBUG nova.compute.manager [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:28:55 np0005465604 kernel: tape4363d00-4c: entered promiscuous mode
Oct  2 04:28:55 np0005465604 systemd-udevd[312751]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:28:55 np0005465604 NetworkManager[45129]: <info>  [1759393735.5593] manager: (tape4363d00-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Oct  2 04:28:55 np0005465604 NetworkManager[45129]: <info>  [1759393735.5715] device (tape4363d00-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:28:55 np0005465604 NetworkManager[45129]: <info>  [1759393735.5722] device (tape4363d00-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:28:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:55Z|00412|binding|INFO|Claiming lport e4363d00-4c59-46bb-ac20-98bde0cd0d4f for this chassis.
Oct  2 04:28:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:55Z|00413|binding|INFO|e4363d00-4c59-46bb-ac20-98bde0cd0d4f: Claiming fa:16:3e:ba:e0:ec 10.100.0.5
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.610 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e0:ec 10.100.0.5'], port_security=['fa:16:3e:ba:e0:ec 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '331bfae3-95e5-4c18-96ca-56597994c6b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '226ee42bc28344d49ab6b0000485ab4d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cadeb1ae-9c39-4da5-a2f0-8ecfaebb7175', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20da51ff-5df7-43b0-b08f-68d65fa498f7, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e4363d00-4c59-46bb-ac20-98bde0cd0d4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.612 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e4363d00-4c59-46bb-ac20-98bde0cd0d4f in datapath 6b09ac7d-d406-46ea-b446-c4810bf0f847 bound to our chassis#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.614 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6b09ac7d-d406-46ea-b446-c4810bf0f847#033[00m
Oct  2 04:28:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:55Z|00414|binding|INFO|Setting lport e4363d00-4c59-46bb-ac20-98bde0cd0d4f ovn-installed in OVS
Oct  2 04:28:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:55Z|00415|binding|INFO|Setting lport e4363d00-4c59-46bb-ac20-98bde0cd0d4f up in Southbound
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.627 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2af9c8d5-192f-4e1d-adfe-b1ba311617e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.628 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6b09ac7d-d1 in ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.631 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6b09ac7d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.631 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a3dc3ae6-3f6b-42f6-a3ec-2a8ec0daac09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.632 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f6539a-ec12-4ce6-97cc-a0295002b566]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 systemd-machined[214636]: New machine qemu-54-instance-00000031.
Oct  2 04:28:55 np0005465604 kernel: tape978c120-9b (unregistering): left promiscuous mode
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.647 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e2dd1344-3f4b-4da6-8ad9-1b8c4aa97b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 systemd[1]: Started Virtual Machine qemu-54-instance-00000031.
Oct  2 04:28:55 np0005465604 NetworkManager[45129]: <info>  [1759393735.6546] device (tape978c120-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:55Z|00416|binding|INFO|Releasing lport e978c120-9b3a-4a48-b553-c38b05073ad9 from this chassis (sb_readonly=0)
Oct  2 04:28:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:55Z|00417|binding|INFO|Setting lport e978c120-9b3a-4a48-b553-c38b05073ad9 down in Southbound
Oct  2 04:28:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:55Z|00418|binding|INFO|Removing iface tape978c120-9b ovn-installed in OVS
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.673 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:a6:e3 10.100.0.10'], port_security=['fa:16:3e:86:a6:e3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '05cc7244-c419-4c24-b995-95ca760837a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35a4ab7cf79e41f68a1ea888c2a3592e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2cf2a9c1-4dd0-4e0e-854c-d9add7d70aee 36042788-300a-4f5b-b3a8-970ef5264604 aceef71f-a2e6-4998-bc1f-5a8f9213efeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e031551-92b2-44b9-87f8-368034b7a542, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e978c120-9b3a-4a48-b553-c38b05073ad9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.675 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b0316b-2246-4a48-8d7b-d578f3576d89]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:55 np0005465604 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct  2 04:28:55 np0005465604 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002b.scope: Consumed 15.563s CPU time.
Oct  2 04:28:55 np0005465604 systemd-machined[214636]: Machine qemu-46-instance-0000002b terminated.
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.713 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8e8679-ec1e-4450-bb86-e3fed4b22ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 NetworkManager[45129]: <info>  [1759393735.7203] manager: (tap6b09ac7d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/180)
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.718 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cc79099a-024d-46f3-be60-52892be807fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 systemd-udevd[313024]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.753 2 DEBUG nova.compute.manager [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.753 2 DEBUG oslo_concurrency.lockutils [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.754 2 DEBUG oslo_concurrency.lockutils [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.754 2 DEBUG oslo_concurrency.lockutils [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f13ff7c1-d7d3-443e-9f06-69f8c466af30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.754 2 DEBUG nova.compute.manager [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] No waiting events found dispatching network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.754 2 WARNING nova.compute.manager [req-7ca00351-a487-48d9-bf61-9188953e29bf req-154d7cee-e649-4070-b4a0-3ce6959eb250 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received unexpected event network-vif-plugged-136aeb8e-dedd-4cd8-a72d-1c4309716daf for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.758 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[919c83a9-45df-4c67-94df-fd8d19a4c93c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 NetworkManager[45129]: <info>  [1759393735.7602] manager: (tape978c120-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/181)
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.762 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c982de3f-1bd6-4040-8693-d79022ba70d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.784 2 INFO nova.virt.libvirt.driver [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Instance destroyed successfully.#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.784 2 DEBUG nova.objects.instance [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lazy-loading 'resources' on Instance uuid 05cc7244-c419-4c24-b995-95ca760837a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:55 np0005465604 NetworkManager[45129]: <info>  [1759393735.7907] device (tap6b09ac7d-d0): carrier: link connected
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.795 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[84a3ba84-0d3e-44c2-8cbf-bcfbcd0333ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.801 2 DEBUG nova.virt.libvirt.vif [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:27:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1736454109',display_name='tempest-SecurityGroupsTestJSON-server-1736454109',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1736454109',id=43,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:27:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35a4ab7cf79e41f68a1ea888c2a3592e',ramdisk_id='',reservation_id='r-hb3fhgan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-2081142325',owner_user_name='tempest-SecurityGroupsTestJSON-2081142325-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:27:41Z,user_data=None,user_id='019cd25dce6249ce9c2cf326ec62df28',uuid=05cc7244-c419-4c24-b995-95ca760837a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.802 2 DEBUG nova.network.os_vif_util [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converting VIF {"id": "e978c120-9b3a-4a48-b553-c38b05073ad9", "address": "fa:16:3e:86:a6:e3", "network": {"id": "cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-967600401-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35a4ab7cf79e41f68a1ea888c2a3592e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape978c120-9b", "ovs_interfaceid": "e978c120-9b3a-4a48-b553-c38b05073ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.802 2 DEBUG nova.network.os_vif_util [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.802 2 DEBUG os_vif [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.804 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape978c120-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.812 2 INFO os_vif [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:86:a6:e3,bridge_name='br-int',has_traffic_filtering=True,id=e978c120-9b3a-4a48-b553-c38b05073ad9,network=Network(cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape978c120-9b')#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.828 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc3a670-a7f2-40a3-8437-1e64170ab926]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b09ac7d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:3f:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461439, 'reachable_time': 19762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313056, 'error': None, 'target': 'ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.846 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2f30d6-5d23-4be5-aa88-3fa70ad62cca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5a:3fa5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461439, 'tstamp': 461439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313072, 'error': None, 'target': 'ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.864 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7bbd48c8-c311-4252-957c-68841abf9e48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b09ac7d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:3f:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461439, 'reachable_time': 19762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313073, 'error': None, 'target': 'ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.894 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5799c030-3448-476b-8028-9c1e898a9e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.961 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[da1512d0-c11a-4f8a-a26b-644763a43b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.963 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b09ac7d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.963 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.964 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b09ac7d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:55 np0005465604 NetworkManager[45129]: <info>  [1759393735.9664] manager: (tap6b09ac7d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Oct  2 04:28:55 np0005465604 kernel: tap6b09ac7d-d0: entered promiscuous mode
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.974 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6b09ac7d-d0, col_values=(('external_ids', {'iface-id': '90b3a048-b044-4d5f-b423-1da282a65ad4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:28:55Z|00419|binding|INFO|Releasing lport 90b3a048-b044-4d5f-b423-1da282a65ad4 from this chassis (sb_readonly=0)
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.980 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6b09ac7d-d406-46ea-b446-c4810bf0f847.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6b09ac7d-d406-46ea-b446-c4810bf0f847.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.980 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[684dc2cd-3b8e-42ec-9bd4-c2809021e208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.981 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-6b09ac7d-d406-46ea-b446-c4810bf0f847
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/6b09ac7d-d406-46ea-b446-c4810bf0f847.pid.haproxy
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 6b09ac7d-d406-46ea-b446-c4810bf0f847
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:28:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:55.982 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'env', 'PROCESS_TAG=haproxy-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6b09ac7d-d406-46ea-b446-c4810bf0f847.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:28:55 np0005465604 nova_compute[260603]: 2025-10-02 08:28:55.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.203 2 INFO nova.virt.libvirt.driver [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Deleting instance files /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4_del#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.204 2 INFO nova.virt.libvirt.driver [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Deletion of /var/lib/nova/instances/05cc7244-c419-4c24-b995-95ca760837a4_del complete#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.289 2 INFO nova.compute.manager [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.292 2 DEBUG oslo.service.loopingcall [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.292 2 DEBUG nova.compute.manager [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.293 2 DEBUG nova.network.neutron [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:28:56 np0005465604 podman[313151]: 2025-10-02 08:28:56.364852294 +0000 UTC m=+0.045589797 container create cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 04:28:56 np0005465604 systemd[1]: Started libpod-conmon-cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc.scope.
Oct  2 04:28:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:28:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d04520e0e8b2823fcf24f311ce97671288cd92dcafceef9e2538eda23350bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:28:56 np0005465604 podman[313151]: 2025-10-02 08:28:56.427652966 +0000 UTC m=+0.108390489 container init cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:28:56 np0005465604 podman[313151]: 2025-10-02 08:28:56.433479937 +0000 UTC m=+0.114217430 container start cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:28:56 np0005465604 podman[313151]: 2025-10-02 08:28:56.341510849 +0000 UTC m=+0.022248372 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:28:56 np0005465604 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [NOTICE]   (313170) : New worker (313172) forked
Oct  2 04:28:56 np0005465604 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [NOTICE]   (313170) : Loading success.
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.494 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e978c120-9b3a-4a48-b553-c38b05073ad9 in datapath cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 unbound from our chassis#033[00m
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.496 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.496 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f1dfa740-136a-4529-9079-ec2705e7d2cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.497 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 namespace which is not needed anymore#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.570 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393736.5706172, 331bfae3-95e5-4c18-96ca-56597994c6b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.571 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] VM Started (Lifecycle Event)#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.607 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.610 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393736.570777, 331bfae3-95e5-4c18-96ca-56597994c6b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.611 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:28:56 np0005465604 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [NOTICE]   (308332) : haproxy version is 2.8.14-c23fe91
Oct  2 04:28:56 np0005465604 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [NOTICE]   (308332) : path to executable is /usr/sbin/haproxy
Oct  2 04:28:56 np0005465604 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [WARNING]  (308332) : Exiting Master process...
Oct  2 04:28:56 np0005465604 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [WARNING]  (308332) : Exiting Master process...
Oct  2 04:28:56 np0005465604 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [ALERT]    (308332) : Current worker (308334) exited with code 143 (Terminated)
Oct  2 04:28:56 np0005465604 neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9[308328]: [WARNING]  (308332) : All workers exited. Exiting... (0)
Oct  2 04:28:56 np0005465604 systemd[1]: libpod-685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96.scope: Deactivated successfully.
Oct  2 04:28:56 np0005465604 podman[313198]: 2025-10-02 08:28:56.625722811 +0000 UTC m=+0.044626318 container died 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.641 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.644 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96-userdata-shm.mount: Deactivated successfully.
Oct  2 04:28:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f4204cebb47775c2a9fbcd07d14a4cbaafc0d93423478e2e45ec500726da6ccd-merged.mount: Deactivated successfully.
Oct  2 04:28:56 np0005465604 podman[313198]: 2025-10-02 08:28:56.655468255 +0000 UTC m=+0.074371762 container cleanup 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.668 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:28:56 np0005465604 systemd[1]: libpod-conmon-685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96.scope: Deactivated successfully.
Oct  2 04:28:56 np0005465604 podman[313227]: 2025-10-02 08:28:56.741571851 +0000 UTC m=+0.057908430 container remove 685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.748 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52832537-23b3-4845-854e-75e2d44cfc86]: (4, ('Thu Oct  2 08:28:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 (685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96)\n685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96\nThu Oct  2 08:28:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 (685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96)\n685a04089c1759f6c7e3ebf11b07a5d0435cab01df2e103402f01a10400cff96\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.750 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[036df5e2-9da1-4eaf-8611-e0e031ca71ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.750 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce7e8b6-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:56 np0005465604 kernel: tapcce7e8b6-90: left promiscuous mode
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.777 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee4ec7f-3d40-4f8d-abc4-ce2e9e33506b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.810 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2a78ee65-71e4-493b-8a6a-691f385fd06b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.811 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78b6fc57-22a1-4b45-bc81-ff6df1eb6a0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:56 np0005465604 nova_compute[260603]: 2025-10-02 08:28:56.818 2 DEBUG nova.compute.manager [req-00013593-556a-4aa2-9c5b-4e26bcd8c990 req-b03e9c77-a7b0-4217-997f-6dba96024bba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Received event network-vif-deleted-136aeb8e-dedd-4cd8-a72d-1c4309716daf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.828 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf97633-1249-4b34-a8cd-43f97fc12144]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453885, 'reachable_time': 23874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313242, 'error': None, 'target': 'ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.831 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cce7e8b6-93a5-4ed0-b6da-abfd081fb2e9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:28:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:56.831 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee47c42-46da-4bcb-9223-91a6bd76b1a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:28:56 np0005465604 systemd[1]: run-netns-ovnmeta\x2dcce7e8b6\x2d93a5\x2d4ed0\x2db6da\x2dabfd081fb2e9.mount: Deactivated successfully.
Oct  2 04:28:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 208 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 2.7 MiB/s wr, 181 op/s
Oct  2 04:28:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:28:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct  2 04:28:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct  2 04:28:57 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.177 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.178 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.202 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.307 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.308 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.319 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.320 2 INFO nova.compute.claims [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:28:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:28:57.449 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.460 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.830 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393722.8281705, 21802142-fcf7-4eb2-b43b-e0fa48cab4d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.831 2 INFO nova.compute.manager [-] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.859 2 DEBUG nova.compute.manager [None req-8e5eda7b-779b-45c1-b6d7-d40de5090fb8 - - - - - -] [instance: 21802142-fcf7-4eb2-b43b-e0fa48cab4d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.882 2 DEBUG nova.network.neutron [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Updated VIF entry in instance network info cache for port e4363d00-4c59-46bb-ac20-98bde0cd0d4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.883 2 DEBUG nova.network.neutron [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Updating instance_info_cache with network_info: [{"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.904 2 DEBUG oslo_concurrency.lockutils [req-52ffce3c-3b0c-4719-a790-bab67e8e2539 req-cebc4762-dec8-42b5-894f-4f23645a3101 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-331bfae3-95e5-4c18-96ca-56597994c6b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:28:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694643309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.937 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.946 2 DEBUG nova.compute.provider_tree [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:57 np0005465604 nova_compute[260603]: 2025-10-02 08:28:57.976 2 DEBUG nova.scheduler.client.report [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.045 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.046 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.111 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.112 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.120 2 DEBUG nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-unplugged-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.121 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.122 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.122 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.122 2 DEBUG nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] No waiting events found dispatching network-vif-unplugged-e978c120-9b3a-4a48-b553-c38b05073ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.123 2 DEBUG nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-unplugged-e978c120-9b3a-4a48-b553-c38b05073ad9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.123 2 DEBUG nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.124 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05cc7244-c419-4c24-b995-95ca760837a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.124 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.125 2 DEBUG oslo_concurrency.lockutils [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.125 2 DEBUG nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] No waiting events found dispatching network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.126 2 WARNING nova.compute.manager [req-136c925b-00ef-408a-9c92-dbac1b6cc62b req-776cc274-2f98-4680-857e-bc878254e1e5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received unexpected event network-vif-plugged-e978c120-9b3a-4a48-b553-c38b05073ad9 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.128 2 DEBUG nova.network.neutron [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.147 2 INFO nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.166 2 INFO nova.compute.manager [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Took 1.87 seconds to deallocate network for instance.#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.173 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.252 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.253 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.298 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.300 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.301 2 INFO nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Creating image(s)#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.332 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.363 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.396 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.400 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.440 2 DEBUG nova.policy [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ac6f72f7366459a86c086737b89ea69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f269abbe5769427dbf44c430d7529c04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.471 2 DEBUG oslo_concurrency.processutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.502 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.504 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.506 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.507 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.538 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.543 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.835 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.908 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] resizing rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:28:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:28:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245333166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.947 2 DEBUG oslo_concurrency.processutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:28:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 88 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 2.7 MiB/s wr, 253 op/s
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.959 2 DEBUG nova.compute.provider_tree [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.972 2 DEBUG nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.973 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.974 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.974 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.975 2 DEBUG nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Processing event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.975 2 DEBUG nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.975 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.976 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.976 2 DEBUG oslo_concurrency.lockutils [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.976 2 DEBUG nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] No waiting events found dispatching network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.976 2 WARNING nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received unexpected event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.977 2 DEBUG nova.compute.manager [req-949c311c-6f01-4194-8a9f-166a366481e7 req-7111c2c0-3a36-458b-a889-09bd01a2f82f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Received event network-vif-deleted-e978c120-9b3a-4a48-b553-c38b05073ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:28:58 np0005465604 nova_compute[260603]: 2025-10-02 08:28:58.978 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.015 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393738.980666, 331bfae3-95e5-4c18-96ca-56597994c6b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.015 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.019 2 DEBUG nova.scheduler.client.report [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.023 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.028 2 DEBUG nova.objects.instance [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'migration_context' on Instance uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.031 2 INFO nova.virt.libvirt.driver [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Instance spawned successfully.#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.032 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.049 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.052 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.057 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.067 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.067 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.068 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.068 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.069 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.069 2 DEBUG nova.virt.libvirt.driver [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.100 2 INFO nova.scheduler.client.report [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Deleted allocations for instance 05cc7244-c419-4c24-b995-95ca760837a4#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.119 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.127 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.127 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Ensure instance console log exists: /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.128 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.128 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.128 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.140 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Successfully created port: f6ec21c0-188d-4c89-8b6b-a64a6d85f131 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.175 2 INFO nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Took 9.75 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.176 2 DEBUG nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.187 2 DEBUG oslo_concurrency.lockutils [None req-b6612807-9bb6-4e5d-93e8-d399144f3614 019cd25dce6249ce9c2cf326ec62df28 35a4ab7cf79e41f68a1ea888c2a3592e - - default default] Lock "05cc7244-c419-4c24-b995-95ca760837a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.246 2 INFO nova.compute.manager [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Took 10.83 seconds to build instance.#033[00m
Oct  2 04:28:59 np0005465604 nova_compute[260603]: 2025-10-02 08:28:59.267 2 DEBUG oslo_concurrency.lockutils [None req-f8f20a4a-de66-41c5-acfc-7112a666a2bb 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.138 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.139 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.139 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.140 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.140 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.141 2 INFO nova.compute.manager [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Terminating instance#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.142 2 DEBUG nova.compute.manager [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:29:00 np0005465604 kernel: tape4363d00-4c (unregistering): left promiscuous mode
Oct  2 04:29:00 np0005465604 NetworkManager[45129]: <info>  [1759393740.1784] device (tape4363d00-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:29:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:00Z|00420|binding|INFO|Releasing lport e4363d00-4c59-46bb-ac20-98bde0cd0d4f from this chassis (sb_readonly=0)
Oct  2 04:29:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:00Z|00421|binding|INFO|Setting lport e4363d00-4c59-46bb-ac20-98bde0cd0d4f down in Southbound
Oct  2 04:29:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:00Z|00422|binding|INFO|Removing iface tape4363d00-4c ovn-installed in OVS
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.194 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:e0:ec 10.100.0.5'], port_security=['fa:16:3e:ba:e0:ec 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '331bfae3-95e5-4c18-96ca-56597994c6b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '226ee42bc28344d49ab6b0000485ab4d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cadeb1ae-9c39-4da5-a2f0-8ecfaebb7175', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20da51ff-5df7-43b0-b08f-68d65fa498f7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e4363d00-4c59-46bb-ac20-98bde0cd0d4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.195 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e4363d00-4c59-46bb-ac20-98bde0cd0d4f in datapath 6b09ac7d-d406-46ea-b446-c4810bf0f847 unbound from our chassis#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.196 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6b09ac7d-d406-46ea-b446-c4810bf0f847, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.197 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b4825376-ec17-459a-8751-3c45a1a59f58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.197 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847 namespace which is not needed anymore#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:00 np0005465604 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Deactivated successfully.
Oct  2 04:29:00 np0005465604 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Consumed 1.999s CPU time.
Oct  2 04:29:00 np0005465604 systemd-machined[214636]: Machine qemu-54-instance-00000031 terminated.
Oct  2 04:29:00 np0005465604 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [NOTICE]   (313170) : haproxy version is 2.8.14-c23fe91
Oct  2 04:29:00 np0005465604 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [NOTICE]   (313170) : path to executable is /usr/sbin/haproxy
Oct  2 04:29:00 np0005465604 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [WARNING]  (313170) : Exiting Master process...
Oct  2 04:29:00 np0005465604 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [WARNING]  (313170) : Exiting Master process...
Oct  2 04:29:00 np0005465604 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [ALERT]    (313170) : Current worker (313172) exited with code 143 (Terminated)
Oct  2 04:29:00 np0005465604 neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847[313166]: [WARNING]  (313170) : All workers exited. Exiting... (0)
Oct  2 04:29:00 np0005465604 systemd[1]: libpod-cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc.scope: Deactivated successfully.
Oct  2 04:29:00 np0005465604 conmon[313166]: conmon cd51998dccadb453ae7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc.scope/container/memory.events
Oct  2 04:29:00 np0005465604 podman[313479]: 2025-10-02 08:29:00.327222281 +0000 UTC m=+0.050221192 container died cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 04:29:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc-userdata-shm.mount: Deactivated successfully.
Oct  2 04:29:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e2d04520e0e8b2823fcf24f311ce97671288cd92dcafceef9e2538eda23350bf-merged.mount: Deactivated successfully.
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.377 2 INFO nova.virt.libvirt.driver [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Instance destroyed successfully.#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.377 2 DEBUG nova.objects.instance [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lazy-loading 'resources' on Instance uuid 331bfae3-95e5-4c18-96ca-56597994c6b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:00 np0005465604 podman[313479]: 2025-10-02 08:29:00.377945827 +0000 UTC m=+0.100944718 container cleanup cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:29:00 np0005465604 systemd[1]: libpod-conmon-cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc.scope: Deactivated successfully.
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.401 2 DEBUG nova.virt.libvirt.vif [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1491987986',display_name='tempest-InstanceActionsNegativeTestJSON-server-1491987986',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1491987986',id=49,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:28:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='226ee42bc28344d49ab6b0000485ab4d',ramdisk_id='',reservation_id='r-w4z7ku8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-472656503',owner_user_name='tempest-InstanceActionsNegativeTestJSON-472656503-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:28:59Z,user_data=None,user_id='2b3acbcd92044b7a9df5bd8748ab7599',uuid=331bfae3-95e5-4c18-96ca-56597994c6b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.402 2 DEBUG nova.network.os_vif_util [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converting VIF {"id": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "address": "fa:16:3e:ba:e0:ec", "network": {"id": "6b09ac7d-d406-46ea-b446-c4810bf0f847", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1915975651-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "226ee42bc28344d49ab6b0000485ab4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4363d00-4c", "ovs_interfaceid": "e4363d00-4c59-46bb-ac20-98bde0cd0d4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.403 2 DEBUG nova.network.os_vif_util [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.403 2 DEBUG os_vif [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4363d00-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.414 2 INFO os_vif [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:e0:ec,bridge_name='br-int',has_traffic_filtering=True,id=e4363d00-4c59-46bb-ac20-98bde0cd0d4f,network=Network(6b09ac7d-d406-46ea-b446-c4810bf0f847),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4363d00-4c')#033[00m
Oct  2 04:29:00 np0005465604 podman[313517]: 2025-10-02 08:29:00.467601623 +0000 UTC m=+0.058387095 container remove cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.474 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10875fd3-687a-4e99-a911-efc7131aff42]: (4, ('Thu Oct  2 08:29:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847 (cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc)\ncd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc\nThu Oct  2 08:29:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847 (cd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc)\ncd51998dccadb453ae7e7bd0b1c9a4ff9f5b8d46bfbebb42901ba39cb1b757fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.476 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78fcdded-ec99-44d5-9b29-b5c1398a1691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.478 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b09ac7d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:00 np0005465604 kernel: tap6b09ac7d-d0: left promiscuous mode
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.498 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[41314d30-92e7-4978-b1c8-a135a6da2ff2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f69c8d0-321a-4617-93c7-6333702677d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.524 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e51b9cb8-7372-4f5a-8754-d5cc84caa84d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.539 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d0586570-cca0-45be-b644-cd67993fe6ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461431, 'reachable_time': 15981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313551, 'error': None, 'target': 'ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:00 np0005465604 systemd[1]: run-netns-ovnmeta\x2d6b09ac7d\x2dd406\x2d46ea\x2db446\x2dc4810bf0f847.mount: Deactivated successfully.
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.542 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6b09ac7d-d406-46ea-b446-c4810bf0f847 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:29:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:00.542 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e8159a-0d47-40b4-9e82-99540346a73f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.719 2 INFO nova.virt.libvirt.driver [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Deleting instance files /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7_del#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.721 2 INFO nova.virt.libvirt.driver [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Deletion of /var/lib/nova/instances/331bfae3-95e5-4c18-96ca-56597994c6b7_del complete#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.781 2 INFO nova.compute.manager [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.782 2 DEBUG oslo.service.loopingcall [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.782 2 DEBUG nova.compute.manager [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.782 2 DEBUG nova.network.neutron [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:29:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 88 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 2.4 MiB/s wr, 224 op/s
Oct  2 04:29:00 np0005465604 nova_compute[260603]: 2025-10-02 08:29:00.983 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Successfully updated port: f6ec21c0-188d-4c89-8b6b-a64a6d85f131 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.000 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.001 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.001 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.244 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.356 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-unplugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.358 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.358 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.359 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.359 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] No waiting events found dispatching network-vif-unplugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.360 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-unplugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.360 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.361 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.361 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.361 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.362 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] No waiting events found dispatching network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.362 2 WARNING nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received unexpected event network-vif-plugged-e4363d00-4c59-46bb-ac20-98bde0cd0d4f for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.362 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-changed-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.363 2 DEBUG nova.compute.manager [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Refreshing instance network info cache due to event network-changed-f6ec21c0-188d-4c89-8b6b-a64a6d85f131. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:29:01 np0005465604 nova_compute[260603]: 2025-10-02 08:29:01.363 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.070 2 DEBUG nova.network.neutron [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.097 2 INFO nova.compute.manager [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Took 1.32 seconds to deallocate network for instance.#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.161 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.162 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.222 2 DEBUG oslo_concurrency.processutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.500 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393727.4986176, 247e32e5-5f07-4db4-9e6f-dcfade745228 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.500 2 INFO nova.compute.manager [-] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.524 2 DEBUG nova.compute.manager [None req-7fecf3d3-42cb-42fa-9a4f-9d25ac9985bf - - - - - -] [instance: 247e32e5-5f07-4db4-9e6f-dcfade745228] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722373738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.655 2 DEBUG oslo_concurrency.processutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.660 2 DEBUG nova.compute.provider_tree [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.692 2 DEBUG nova.scheduler.client.report [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.720 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.757 2 INFO nova.scheduler.client.report [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Deleted allocations for instance 331bfae3-95e5-4c18-96ca-56597994c6b7#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.817 2 DEBUG nova.network.neutron [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Updating instance_info_cache with network_info: [{"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.855 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.856 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance network_info: |[{"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.856 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.856 2 DEBUG nova.network.neutron [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Refreshing network info cache for port f6ec21c0-188d-4c89-8b6b-a64a6d85f131 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.858 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Start _get_guest_xml network_info=[{"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.862 2 WARNING nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.867 2 DEBUG nova.virt.libvirt.host [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.868 2 DEBUG nova.virt.libvirt.host [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.871 2 DEBUG nova.virt.libvirt.host [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.872 2 DEBUG nova.virt.libvirt.host [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.872 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.873 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.873 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.874 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.874 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.874 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.875 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.875 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.875 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.876 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.876 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.876 2 DEBUG nova.virt.hardware [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.882 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:02 np0005465604 nova_compute[260603]: 2025-10-02 08:29:02.932 2 DEBUG oslo_concurrency.lockutils [None req-64aea2e0-19bc-48b9-a5e6-3259e436efb2 2b3acbcd92044b7a9df5bd8748ab7599 226ee42bc28344d49ab6b0000485ab4d - - default default] Lock "331bfae3-95e5-4c18-96ca-56597994c6b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 98 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 210 op/s
Oct  2 04:29:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3733634119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.421 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.443 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.447 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.889 2 DEBUG nova.compute.manager [req-7ffd1886-6356-45e9-ba3b-46cdfeb63700 req-e8c367d7-39a3-49e0-93eb-b6fc12bddbad 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Received event network-vif-deleted-e4363d00-4c59-46bb-ac20-98bde0cd0d4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940109715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.946 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.947 2 DEBUG nova.virt.libvirt.vif [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-17603736',display_name='tempest-DeleteServersTestJSON-server-17603736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-17603736',id=50,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-jo24t19c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:58Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=640bb5bf-5ae3-455f-82e7-3e6d647a0fbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.948 2 DEBUG nova.network.os_vif_util [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.949 2 DEBUG nova.network.os_vif_util [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.950 2 DEBUG nova.objects.instance [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.968 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  <uuid>640bb5bf-5ae3-455f-82e7-3e6d647a0fbf</uuid>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  <name>instance-00000032</name>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <nova:name>tempest-DeleteServersTestJSON-server-17603736</nova:name>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:29:02</nova:creationTime>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <nova:user uuid="1ac6f72f7366459a86c086737b89ea69">tempest-DeleteServersTestJSON-812177785-project-member</nova:user>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <nova:project uuid="f269abbe5769427dbf44c430d7529c04">tempest-DeleteServersTestJSON-812177785</nova:project>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <nova:port uuid="f6ec21c0-188d-4c89-8b6b-a64a6d85f131">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <entry name="serial">640bb5bf-5ae3-455f-82e7-3e6d647a0fbf</entry>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <entry name="uuid">640bb5bf-5ae3-455f-82e7-3e6d647a0fbf</entry>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:5f:37:2c"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <target dev="tapf6ec21c0-18"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/console.log" append="off"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:29:03 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:29:03 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:29:03 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:29:03 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.970 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Preparing to wait for external event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.971 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.972 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.973 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.975 2 DEBUG nova.virt.libvirt.vif [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-17603736',display_name='tempest-DeleteServersTestJSON-server-17603736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-17603736',id=50,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-jo24t19c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON
-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:28:58Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=640bb5bf-5ae3-455f-82e7-3e6d647a0fbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.975 2 DEBUG nova.network.os_vif_util [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.977 2 DEBUG nova.network.os_vif_util [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.978 2 DEBUG os_vif [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.981 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6ec21c0-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.988 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6ec21c0-18, col_values=(('external_ids', {'iface-id': 'f6ec21c0-188d-4c89-8b6b-a64a6d85f131', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:37:2c', 'vm-uuid': '640bb5bf-5ae3-455f-82e7-3e6d647a0fbf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:03 np0005465604 NetworkManager[45129]: <info>  [1759393743.9919] manager: (tapf6ec21c0-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:03 np0005465604 nova_compute[260603]: 2025-10-02 08:29:03.998 2 INFO os_vif [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18')#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.076 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.077 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.077 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No VIF found with MAC fa:16:3e:5f:37:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.078 2 INFO nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Using config drive#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.103 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.462 2 INFO nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Creating config drive at /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.467 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7jsjyrfy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.606 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7jsjyrfy" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.646 2 DEBUG nova.storage.rbd_utils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.651 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.828 2 DEBUG oslo_concurrency.processutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.831 2 INFO nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Deleting local config drive /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf/disk.config because it was imported into RBD.#033[00m
Oct  2 04:29:04 np0005465604 kernel: tapf6ec21c0-18: entered promiscuous mode
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:04Z|00423|binding|INFO|Claiming lport f6ec21c0-188d-4c89-8b6b-a64a6d85f131 for this chassis.
Oct  2 04:29:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:04Z|00424|binding|INFO|f6ec21c0-188d-4c89-8b6b-a64a6d85f131: Claiming fa:16:3e:5f:37:2c 10.100.0.12
Oct  2 04:29:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.912 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:37:2c 10.100.0.12'], port_security=['fa:16:3e:5f:37:2c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '640bb5bf-5ae3-455f-82e7-3e6d647a0fbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f6ec21c0-188d-4c89-8b6b-a64a6d85f131) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:04 np0005465604 NetworkManager[45129]: <info>  [1759393744.9136] manager: (tapf6ec21c0-18): new Tun device (/org/freedesktop/NetworkManager/Devices/184)
Oct  2 04:29:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.914 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f6ec21c0-188d-4c89-8b6b-a64a6d85f131 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca bound to our chassis#033[00m
Oct  2 04:29:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.916 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca#033[00m
Oct  2 04:29:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:04Z|00425|binding|INFO|Setting lport f6ec21c0-188d-4c89-8b6b-a64a6d85f131 ovn-installed in OVS
Oct  2 04:29:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:04Z|00426|binding|INFO|Setting lport f6ec21c0-188d-4c89-8b6b-a64a6d85f131 up in Southbound
Oct  2 04:29:04 np0005465604 nova_compute[260603]: 2025-10-02 08:29:04.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.936 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94325111-df38-4853-bc93-088abbe6dced]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.936 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa72ac8c9-11 in ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:29:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.939 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa72ac8c9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:29:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.939 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3490c72-d15b-45ca-a645-0201fed9a9df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.940 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eaea69aa-83c5-4a8c-8de7-c29cbd2fde5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:04 np0005465604 systemd-machined[214636]: New machine qemu-55-instance-00000032.
Oct  2 04:29:04 np0005465604 systemd-udevd[313711]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:29:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 88 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 193 op/s
Oct  2 04:29:04 np0005465604 NetworkManager[45129]: <info>  [1759393744.9560] device (tapf6ec21c0-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:29:04 np0005465604 NetworkManager[45129]: <info>  [1759393744.9575] device (tapf6ec21c0-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:29:04 np0005465604 systemd[1]: Started Virtual Machine qemu-55-instance-00000032.
Oct  2 04:29:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.965 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[df5bf083-e58f-40b8-98b2-7ecc536d0dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:04.987 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[da90d58c-efc2-46ae-9753-272811c1bc40]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.028 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[90d024cb-4f48-4582-b6d7-26d0b1ffad8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.037 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[131208ae-5100-499c-8a03-2b7825d32564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 systemd-udevd[313714]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:29:05 np0005465604 NetworkManager[45129]: <info>  [1759393745.0395] manager: (tapa72ac8c9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/185)
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.087 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3e27de11-1fa8-4278-9baa-c80d0f5616f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.093 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0b29bf1a-9f10-4199-9732-143e50d68f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 NetworkManager[45129]: <info>  [1759393745.1211] device (tapa72ac8c9-10): carrier: link connected
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.132 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0c206b-b703-4dd3-a8c8-2f13d87f5547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.170 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c65ae93e-4e29-4a68-96c3-e421ba9cf2be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462372, 'reachable_time': 18026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313743, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.196 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e52a0c06-aab0-46a9-a57b-e35176cde302]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:61d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462372, 'tstamp': 462372}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313744, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.223 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[35523270-628e-4b76-9dc9-eac6f3af613e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462372, 'reachable_time': 18026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313745, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.258 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a6631673-82ea-4c3f-b7be-7e05891e48e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.259 2 DEBUG nova.compute.manager [req-98dcca3f-9603-4b5a-b009-20a0831580cf req-e3c7bfa8-1b65-4c2b-b354-68daf80b1a9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.260 2 DEBUG oslo_concurrency.lockutils [req-98dcca3f-9603-4b5a-b009-20a0831580cf req-e3c7bfa8-1b65-4c2b-b354-68daf80b1a9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.262 2 DEBUG oslo_concurrency.lockutils [req-98dcca3f-9603-4b5a-b009-20a0831580cf req-e3c7bfa8-1b65-4c2b-b354-68daf80b1a9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.262 2 DEBUG oslo_concurrency.lockutils [req-98dcca3f-9603-4b5a-b009-20a0831580cf req-e3c7bfa8-1b65-4c2b-b354-68daf80b1a9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.263 2 DEBUG nova.compute.manager [req-98dcca3f-9603-4b5a-b009-20a0831580cf req-e3c7bfa8-1b65-4c2b-b354-68daf80b1a9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Processing event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.354 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd75ab5a-77f7-4c7d-8cbb-582d5bc171e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.356 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.357 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.358 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa72ac8c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:05 np0005465604 NetworkManager[45129]: <info>  [1759393745.3625] manager: (tapa72ac8c9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Oct  2 04:29:05 np0005465604 kernel: tapa72ac8c9-10: entered promiscuous mode
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.369 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa72ac8c9-10, col_values=(('external_ids', {'iface-id': 'f9acec59-0200-4a1d-84e4-06e67c730498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:05 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:05Z|00427|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.375 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0aecff28-da39-4e25-a3ca-1f8b41ece230]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.385 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:29:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:05.386 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'env', 'PROCESS_TAG=haproxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.512 2 DEBUG nova.network.neutron [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Updated VIF entry in instance network info cache for port f6ec21c0-188d-4c89-8b6b-a64a6d85f131. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.513 2 DEBUG nova.network.neutron [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Updating instance_info_cache with network_info: [{"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:05 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:05Z|00428|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.534 2 DEBUG oslo_concurrency.lockutils [req-35d70c90-ecd5-4fee-ae21-c00dcbdab5fc req-7e935f90-3ed7-48d3-a5b8-98b444ec21a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:05 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:05Z|00429|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct  2 04:29:05 np0005465604 nova_compute[260603]: 2025-10-02 08:29:05.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:05 np0005465604 podman[313817]: 2025-10-02 08:29:05.858620728 +0000 UTC m=+0.098286476 container create e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:29:05 np0005465604 podman[313817]: 2025-10-02 08:29:05.790570313 +0000 UTC m=+0.030236101 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:29:05 np0005465604 systemd[1]: Started libpod-conmon-e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f.scope.
Oct  2 04:29:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:29:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7366a1c716223f76f05604a8db394b41e4fc284cf97ee714770130aac3014769/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:05 np0005465604 podman[313817]: 2025-10-02 08:29:05.967050877 +0000 UTC m=+0.206716685 container init e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:29:05 np0005465604 podman[313817]: 2025-10-02 08:29:05.972795196 +0000 UTC m=+0.212460964 container start e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:29:05 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [NOTICE]   (313834) : New worker (313836) forked
Oct  2 04:29:05 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [NOTICE]   (313834) : Loading success.
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.044 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393746.0439987, 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.045 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] VM Started (Lifecycle Event)#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.049 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.054 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.058 2 INFO nova.virt.libvirt.driver [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance spawned successfully.#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.059 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.076 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.085 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.092 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.093 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.093 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.094 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.095 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.096 2 DEBUG nova.virt.libvirt.driver [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.107 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.108 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393746.0443287, 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.109 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.131 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.137 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393746.053895, 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.137 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.155 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.163 2 INFO nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Took 7.86 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.164 2 DEBUG nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.165 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.191 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.220 2 INFO nova.compute.manager [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Took 8.94 seconds to build instance.#033[00m
Oct  2 04:29:06 np0005465604 nova_compute[260603]: 2025-10-02 08:29:06.235 2 DEBUG oslo_concurrency.lockutils [None req-e58aa393-8d6b-47ba-a0be-aea2d94bdcf1 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 88 MiB data, 476 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 193 op/s
Oct  2 04:29:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:07 np0005465604 nova_compute[260603]: 2025-10-02 08:29:07.380 2 DEBUG nova.compute.manager [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:07 np0005465604 nova_compute[260603]: 2025-10-02 08:29:07.381 2 DEBUG oslo_concurrency.lockutils [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:07 np0005465604 nova_compute[260603]: 2025-10-02 08:29:07.383 2 DEBUG oslo_concurrency.lockutils [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:07 np0005465604 nova_compute[260603]: 2025-10-02 08:29:07.383 2 DEBUG oslo_concurrency.lockutils [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:07 np0005465604 nova_compute[260603]: 2025-10-02 08:29:07.384 2 DEBUG nova.compute.manager [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] No waiting events found dispatching network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:07 np0005465604 nova_compute[260603]: 2025-10-02 08:29:07.384 2 WARNING nova.compute.manager [req-50b296bb-7452-4bf5-94dd-fac8cbfca419 req-6d3075c3-55e7-4915-a4d0-85f5b857b213 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received unexpected event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:29:07 np0005465604 nova_compute[260603]: 2025-10-02 08:29:07.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:07 np0005465604 nova_compute[260603]: 2025-10-02 08:29:07.846 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393732.8450491, f13ff7c1-d7d3-443e-9f06-69f8c466af30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:07 np0005465604 nova_compute[260603]: 2025-10-02 08:29:07.846 2 INFO nova.compute.manager [-] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:29:07 np0005465604 nova_compute[260603]: 2025-10-02 08:29:07.879 2 DEBUG nova.compute.manager [None req-db802f98-76db-495f-83d9-912f59dec966 - - - - - -] [instance: f13ff7c1-d7d3-443e-9f06-69f8c466af30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 237 op/s
Oct  2 04:29:09 np0005465604 nova_compute[260603]: 2025-10-02 08:29:09.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:09 np0005465604 nova_compute[260603]: 2025-10-02 08:29:09.157 2 DEBUG oslo_concurrency.lockutils [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:09 np0005465604 nova_compute[260603]: 2025-10-02 08:29:09.158 2 DEBUG oslo_concurrency.lockutils [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:09 np0005465604 nova_compute[260603]: 2025-10-02 08:29:09.159 2 DEBUG nova.compute.manager [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:09 np0005465604 nova_compute[260603]: 2025-10-02 08:29:09.164 2 DEBUG nova.compute.manager [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 04:29:09 np0005465604 nova_compute[260603]: 2025-10-02 08:29:09.166 2 DEBUG nova.objects.instance [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'flavor' on Instance uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:09 np0005465604 nova_compute[260603]: 2025-10-02 08:29:09.190 2 DEBUG nova.virt.libvirt.driver [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:29:10 np0005465604 nova_compute[260603]: 2025-10-02 08:29:10.782 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393735.7808106, 05cc7244-c419-4c24-b995-95ca760837a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:10 np0005465604 nova_compute[260603]: 2025-10-02 08:29:10.783 2 INFO nova.compute.manager [-] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:29:10 np0005465604 nova_compute[260603]: 2025-10-02 08:29:10.812 2 DEBUG nova.compute.manager [None req-ea9283d0-0150-4dd2-aeec-b442a86b2a39 - - - - - -] [instance: 05cc7244-c419-4c24-b995-95ca760837a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Oct  2 04:29:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:12 np0005465604 nova_compute[260603]: 2025-10-02 08:29:12.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Oct  2 04:29:13 np0005465604 podman[313846]: 2025-10-02 08:29:13.057504824 +0000 UTC m=+0.108277376 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:29:13 np0005465604 podman[313845]: 2025-10-02 08:29:13.108973373 +0000 UTC m=+0.165042139 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 04:29:13 np0005465604 nova_compute[260603]: 2025-10-02 08:29:13.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:29:13 np0005465604 nova_compute[260603]: 2025-10-02 08:29:13.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:29:14 np0005465604 nova_compute[260603]: 2025-10-02 08:29:14.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:14 np0005465604 nova_compute[260603]: 2025-10-02 08:29:14.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:29:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 422 KiB/s wr, 113 op/s
Oct  2 04:29:15 np0005465604 nova_compute[260603]: 2025-10-02 08:29:15.375 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393740.3743992, 331bfae3-95e5-4c18-96ca-56597994c6b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:15 np0005465604 nova_compute[260603]: 2025-10-02 08:29:15.376 2 INFO nova.compute.manager [-] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:29:15 np0005465604 nova_compute[260603]: 2025-10-02 08:29:15.397 2 DEBUG nova.compute.manager [None req-72bcb7af-3afd-4534-ad47-fdb44da7f3e7 - - - - - -] [instance: 331bfae3-95e5-4c18-96ca-56597994c6b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:16 np0005465604 nova_compute[260603]: 2025-10-02 08:29:16.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:29:16 np0005465604 nova_compute[260603]: 2025-10-02 08:29:16.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:29:16 np0005465604 nova_compute[260603]: 2025-10-02 08:29:16.586 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:29:16 np0005465604 nova_compute[260603]: 2025-10-02 08:29:16.588 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:29:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 88 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 04:29:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:17Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:37:2c 10.100.0.12
Oct  2 04:29:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:17Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:37:2c 10.100.0.12
Oct  2 04:29:17 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct  2 04:29:17 np0005465604 nova_compute[260603]: 2025-10-02 08:29:17.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:18 np0005465604 nova_compute[260603]: 2025-10-02 08:29:18.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:29:18 np0005465604 nova_compute[260603]: 2025-10-02 08:29:18.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:29:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 115 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct  2 04:29:18 np0005465604 nova_compute[260603]: 2025-10-02 08:29:18.960 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 04:29:18 np0005465604 nova_compute[260603]: 2025-10-02 08:29:18.961 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:19 np0005465604 nova_compute[260603]: 2025-10-02 08:29:19.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:19 np0005465604 podman[313891]: 2025-10-02 08:29:19.097206029 +0000 UTC m=+0.146202936 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 04:29:19 np0005465604 nova_compute[260603]: 2025-10-02 08:29:19.238 2 DEBUG nova.virt.libvirt.driver [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:29:19 np0005465604 nova_compute[260603]: 2025-10-02 08:29:19.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:29:19 np0005465604 nova_compute[260603]: 2025-10-02 08:29:19.560 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:19 np0005465604 nova_compute[260603]: 2025-10-02 08:29:19.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:19 np0005465604 nova_compute[260603]: 2025-10-02 08:29:19.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:19 np0005465604 nova_compute[260603]: 2025-10-02 08:29:19.561 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:29:19 np0005465604 nova_compute[260603]: 2025-10-02 08:29:19.562 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/635095801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.014 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.287 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.287 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.511 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.512 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4075MB free_disk=59.94316864013672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.512 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.512 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.624 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.624 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.624 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.656 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.682 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.683 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.699 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.728 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 04:29:20 np0005465604 nova_compute[260603]: 2025-10-02 08:29:20.773 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 115 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.133 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.134 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.166 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.204799) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761204865, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2382, "num_deletes": 512, "total_data_size": 3276056, "memory_usage": 3348240, "flush_reason": "Manual Compaction"}
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761224310, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3218090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28078, "largest_seqno": 30459, "table_properties": {"data_size": 3207875, "index_size": 6005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 24708, "raw_average_key_size": 19, "raw_value_size": 3185100, "raw_average_value_size": 2546, "num_data_blocks": 265, "num_entries": 1251, "num_filter_entries": 1251, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759393556, "oldest_key_time": 1759393556, "file_creation_time": 1759393761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 19567 microseconds, and 12506 cpu microseconds.
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.224373) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3218090 bytes OK
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.224402) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.226936) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.226962) EVENT_LOG_v1 {"time_micros": 1759393761226953, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.226987) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3264990, prev total WAL file size 3264990, number of live WAL files 2.
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.228398) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3142KB)], [62(8447KB)]
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761228461, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11868188, "oldest_snapshot_seqno": -1}
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.264 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364771817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5506 keys, 10252855 bytes, temperature: kUnknown
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761275421, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10252855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10211747, "index_size": 26214, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 138090, "raw_average_key_size": 25, "raw_value_size": 10108555, "raw_average_value_size": 1835, "num_data_blocks": 1073, "num_entries": 5506, "num_filter_entries": 5506, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759393761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.275716) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10252855 bytes
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.277138) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 252.2 rd, 217.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 6547, records dropped: 1041 output_compression: NoCompression
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.277166) EVENT_LOG_v1 {"time_micros": 1759393761277153, "job": 34, "event": "compaction_finished", "compaction_time_micros": 47051, "compaction_time_cpu_micros": 28159, "output_level": 6, "num_output_files": 1, "total_output_size": 10252855, "num_input_records": 6547, "num_output_records": 5506, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761278218, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393761280830, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.228316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.280875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.280881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.280884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.280887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:21.280890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.283 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.290 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.308 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.333 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.333 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.334 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.343 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.344 2 INFO nova.compute.claims [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.471 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:21 np0005465604 kernel: tapf6ec21c0-18 (unregistering): left promiscuous mode
Oct  2 04:29:21 np0005465604 NetworkManager[45129]: <info>  [1759393761.5634] device (tapf6ec21c0-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:29:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:21Z|00430|binding|INFO|Releasing lport f6ec21c0-188d-4c89-8b6b-a64a6d85f131 from this chassis (sb_readonly=0)
Oct  2 04:29:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:21Z|00431|binding|INFO|Setting lport f6ec21c0-188d-4c89-8b6b-a64a6d85f131 down in Southbound
Oct  2 04:29:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:21Z|00432|binding|INFO|Removing iface tapf6ec21c0-18 ovn-installed in OVS
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.580 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:37:2c 10.100.0.12'], port_security=['fa:16:3e:5f:37:2c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '640bb5bf-5ae3-455f-82e7-3e6d647a0fbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f6ec21c0-188d-4c89-8b6b-a64a6d85f131) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.583 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f6ec21c0-188d-4c89-8b6b-a64a6d85f131 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca unbound from our chassis#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.585 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.586 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72218843-8cdb-4b80-9e9f-a417af50e71e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.589 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace which is not needed anymore#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:21 np0005465604 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Deactivated successfully.
Oct  2 04:29:21 np0005465604 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000032.scope: Consumed 12.628s CPU time.
Oct  2 04:29:21 np0005465604 systemd-machined[214636]: Machine qemu-55-instance-00000032 terminated.
Oct  2 04:29:21 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [NOTICE]   (313834) : haproxy version is 2.8.14-c23fe91
Oct  2 04:29:21 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [NOTICE]   (313834) : path to executable is /usr/sbin/haproxy
Oct  2 04:29:21 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [WARNING]  (313834) : Exiting Master process...
Oct  2 04:29:21 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [ALERT]    (313834) : Current worker (313836) exited with code 143 (Terminated)
Oct  2 04:29:21 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[313830]: [WARNING]  (313834) : All workers exited. Exiting... (0)
Oct  2 04:29:21 np0005465604 systemd[1]: libpod-e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f.scope: Deactivated successfully.
Oct  2 04:29:21 np0005465604 podman[313999]: 2025-10-02 08:29:21.769369413 +0000 UTC m=+0.065094084 container died e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 04:29:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7366a1c716223f76f05604a8db394b41e4fc284cf97ee714770130aac3014769-merged.mount: Deactivated successfully.
Oct  2 04:29:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f-userdata-shm.mount: Deactivated successfully.
Oct  2 04:29:21 np0005465604 podman[313999]: 2025-10-02 08:29:21.809739648 +0000 UTC m=+0.105464249 container cleanup e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:29:21 np0005465604 systemd[1]: libpod-conmon-e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f.scope: Deactivated successfully.
Oct  2 04:29:21 np0005465604 podman[314034]: 2025-10-02 08:29:21.883788779 +0000 UTC m=+0.040082487 container remove e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.889 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[134e0441-a6f1-4555-96e8-9ca1377dfe85]: (4, ('Thu Oct  2 08:29:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f)\ne2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f\nThu Oct  2 08:29:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (e2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f)\ne2036f590eeafa7c0d2cb1c40289a79fd3b595fa67421740a63fa31543e78a8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.892 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0b810f-655f-40d1-89dd-a6b13a4d1fe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.893 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:21 np0005465604 kernel: tapa72ac8c9-10: left promiscuous mode
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.926 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ebef7566-83f7-439b-8bbf-e2e2298397f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976252696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.954 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.960 2 DEBUG nova.compute.provider_tree [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.961 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ed2e13-4459-48e9-bfae-f0bfb39832ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.964 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a09c6a56-f296-4376-975e-d88f7ac3bfde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:21 np0005465604 nova_compute[260603]: 2025-10-02 08:29:21.979 2 DEBUG nova.scheduler.client.report [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.980 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b960da6a-a273-4d04-95c1-5c96656975ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462361, 'reachable_time': 25182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314057, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.983 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:29:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:21.983 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1f7f8b92-b220-4168-93de-a6ee1ed4dd7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:21 np0005465604 systemd[1]: run-netns-ovnmeta\x2da72ac8c9\x2d16ee\x2d4ec0\x2db23d\x2d2741fda000ca.mount: Deactivated successfully.
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.004 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.005 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.050 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.050 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:29:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:29:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/70845979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:29:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:29:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/70845979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.071 2 INFO nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:29:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.098 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.216 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.218 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.218 2 INFO nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Creating image(s)#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.251 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.283 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.312 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.317 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.372 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.373 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.377 2 INFO nova.virt.libvirt.driver [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.382 2 INFO nova.virt.libvirt.driver [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance destroyed successfully.#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.383 2 DEBUG nova.objects.instance [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'numa_topology' on Instance uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.413 2 DEBUG nova.policy [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1d66932c11043b5b90140cd2dde53d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.420 2 DEBUG nova.compute.manager [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.426 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.428 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.430 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.431 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.468 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.473 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9924ce7f-b701-4560-b2c5-67f673b45807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.654 2 DEBUG oslo_concurrency.lockutils [None req-849fed42-075e-4502-a35a-fbc5694d4f11 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.655 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.656 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.656 2 INFO nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] During sync_power_state the instance has a pending task (powering-off). Skip.#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.656 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.799 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9924ce7f-b701-4560-b2c5-67f673b45807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:22 np0005465604 nova_compute[260603]: 2025-10-02 08:29:22.873 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] resizing rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:29:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 218 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.008 2 DEBUG nova.objects.instance [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'migration_context' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:23 np0005465604 podman[314206]: 2025-10-02 08:29:23.019196153 +0000 UTC m=+0.082854755 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2)
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.025 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.026 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Ensure instance console log exists: /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.027 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.028 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.028 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.087 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.088 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.114 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.201 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.202 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.213 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.213 2 INFO nova.compute.claims [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.454 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.729 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Successfully created port: bf9cdb7f-4cda-403b-b27e-12385e93db02 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:29:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/754519048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.904 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.913 2 DEBUG nova.compute.provider_tree [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.936 2 DEBUG nova.scheduler.client.report [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.964 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:23 np0005465604 nova_compute[260603]: 2025-10-02 08:29:23.965 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.013 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.014 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.043 2 INFO nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.070 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.166 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.168 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.169 2 INFO nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Creating image(s)#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.200 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.233 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.259 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.264 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.365 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.366 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.366 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.367 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.389 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.393 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.712 2 DEBUG nova.policy [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1d66932c11043b5b90140cd2dde53d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.726 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.826 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] resizing rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.947 2 DEBUG nova.objects.instance [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'migration_context' on Instance uuid 797fde07-e88a-4d6e-a1a3-25e22c66097c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 137 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.8 MiB/s wr, 74 op/s
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.966 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.967 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Ensure instance console log exists: /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.967 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.968 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:24 np0005465604 nova_compute[260603]: 2025-10-02 08:29:24.968 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.010 2 DEBUG nova.compute.manager [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-vif-unplugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.011 2 DEBUG oslo_concurrency.lockutils [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.011 2 DEBUG oslo_concurrency.lockutils [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.012 2 DEBUG oslo_concurrency.lockutils [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.012 2 DEBUG nova.compute.manager [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] No waiting events found dispatching network-vif-unplugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.012 2 WARNING nova.compute.manager [req-c0784f4f-e9f4-4118-a4e7-c554de27e72a req-5d696ec3-dd26-4c4a-ae03-d81c66786988 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received unexpected event network-vif-unplugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.598 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.599 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.599 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.600 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.601 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.603 2 INFO nova.compute.manager [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Terminating instance#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.606 2 DEBUG nova.compute.manager [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.618 2 INFO nova.virt.libvirt.driver [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Instance destroyed successfully.#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.619 2 DEBUG nova.objects.instance [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'resources' on Instance uuid 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.641 2 DEBUG nova.virt.libvirt.vif [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-17603736',display_name='tempest-DeleteServersTestJSON-server-17603736',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-17603736',id=50,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-jo24t19c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:22Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=640bb5bf-5ae3-455f-82e7-3e6d647a0fbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.642 2 DEBUG nova.network.os_vif_util [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "address": "fa:16:3e:5f:37:2c", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6ec21c0-18", "ovs_interfaceid": "f6ec21c0-188d-4c89-8b6b-a64a6d85f131", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.643 2 DEBUG nova.network.os_vif_util [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.644 2 DEBUG os_vif [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6ec21c0-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:25 np0005465604 nova_compute[260603]: 2025-10-02 08:29:25.656 2 INFO os_vif [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:37:2c,bridge_name='br-int',has_traffic_filtering=True,id=f6ec21c0-188d-4c89-8b6b-a64a6d85f131,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6ec21c0-18')#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.003 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.003 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.045 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.163 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.165 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.174 2 INFO nova.virt.libvirt.driver [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Deleting instance files /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_del#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.175 2 INFO nova.virt.libvirt.driver [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Deletion of /var/lib/nova/instances/640bb5bf-5ae3-455f-82e7-3e6d647a0fbf_del complete#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.184 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.185 2 INFO nova.compute.claims [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.270 2 INFO nova.compute.manager [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.271 2 DEBUG oslo.service.loopingcall [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.271 2 DEBUG nova.compute.manager [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.272 2 DEBUG nova.network.neutron [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.326 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Successfully updated port: bf9cdb7f-4cda-403b-b27e-12385e93db02 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.348 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.348 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquired lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.348 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.407 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.452 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Successfully created port: 29a765f0-6b44-4aad-9974-a0845658d5f2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.472 2 DEBUG nova.compute.manager [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-changed-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.473 2 DEBUG nova.compute.manager [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Refreshing instance network info cache due to event network-changed-bf9cdb7f-4cda-403b-b27e-12385e93db02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.473 2 DEBUG oslo_concurrency.lockutils [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.522 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.616 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.616 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.634 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.659 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.660 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.693 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.715 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.715 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.742 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.753 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.801 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023867281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.824 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:29:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:29:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.836 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.841 2 DEBUG nova.compute.provider_tree [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.856 2 DEBUG nova.scheduler.client.report [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.876 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.877 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.879 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.886 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.886 2 INFO nova.compute.claims [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.941 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.941 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:29:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 137 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.8 MiB/s wr, 74 op/s
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.969 2 DEBUG nova.network.neutron [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:29:26 np0005465604 nova_compute[260603]: 2025-10-02 08:29:26.980 2 INFO nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.004 2 INFO nova.compute.manager [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Took 0.73 seconds to deallocate network for instance.
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.009 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.066 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.109 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.110 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.111 2 INFO nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Creating image(s)
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.130 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.152 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.175 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.178 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.213 2 DEBUG nova.policy [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1d66932c11043b5b90140cd2dde53d2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.217 2 DEBUG nova.compute.manager [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.217 2 DEBUG oslo_concurrency.lockutils [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.218 2 DEBUG oslo_concurrency.lockutils [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.218 2 DEBUG oslo_concurrency.lockutils [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.218 2 DEBUG nova.compute.manager [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] No waiting events found dispatching network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.218 2 WARNING nova.compute.manager [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received unexpected event network-vif-plugged-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 for instance with vm_state deleted and task_state None.
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.218 2 DEBUG nova.compute.manager [req-cd3304b8-4295-4254-acf7-6d36fc490eb4 req-1489b479-df13-4829-8dc5-b2b99b5d3804 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Received event network-vif-deleted-f6ec21c0-188d-4c89-8b6b-a64a6d85f131 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.268 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.269 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.270 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.270 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.307 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.313 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.359 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.624 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.675 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] resizing rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 715c637f-14fa-413c-8aab-2b6750112854 does not exist
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d85788bb-6dda-4eb0-a595-c34742f8858d does not exist
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 174529aa-a377-4cad-9796-ba5426119614 does not exist
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.770 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Successfully created port: 257d115c-e196-4921-a9d3-942604825516 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.781 2 DEBUG nova.objects.instance [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'migration_context' on Instance uuid d15c7c6a-e6a1-4538-9db0-ee1aef10f38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.800 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.800 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Ensure instance console log exists: /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.801 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.801 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.801 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996189806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.842 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.847 2 DEBUG nova.compute.provider_tree [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.853 2 DEBUG nova.network.neutron [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updating instance_info_cache with network_info: [{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.870 2 DEBUG nova.scheduler.client.report [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.874 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Releasing lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.875 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance network_info: |[{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.875 2 DEBUG oslo_concurrency.lockutils [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.875 2 DEBUG nova.network.neutron [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Refreshing network info cache for port bf9cdb7f-4cda-403b-b27e-12385e93db02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.878 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start _get_guest_xml network_info=[{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.882 2 WARNING nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.887 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.888 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.890 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.892 2 DEBUG nova.virt.libvirt.host [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.892 2 DEBUG nova.virt.libvirt.host [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.896 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.897 2 INFO nova.compute.claims [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.899 2 DEBUG nova.virt.libvirt.host [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.900 2 DEBUG nova.virt.libvirt.host [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.900 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.900 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.901 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.901 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.901 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.901 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.902 2 DEBUG nova.virt.hardware [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.905 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:29:27
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'images', 'vms']
Oct  2 04:29:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.987 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:29:27 np0005465604 nova_compute[260603]: 2025-10-02 08:29:27.988 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.017 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.035 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.159 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.160 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.161 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Creating image(s)#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.182 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.204 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.225 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:29:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:29:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.230 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:28 np0005465604 podman[315103]: 2025-10-02 08:29:28.268702509 +0000 UTC m=+0.054666259 container create d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.298 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:28 np0005465604 systemd[1]: Started libpod-conmon-d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7.scope.
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.328 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.329 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.329 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.330 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:28 np0005465604 podman[315103]: 2025-10-02 08:29:28.244635772 +0000 UTC m=+0.030599542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:29:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.359 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272301993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.367 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 73e8c7a5-4621-4f07-824a-b81ea314a672_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:28 np0005465604 podman[315103]: 2025-10-02 08:29:28.374328722 +0000 UTC m=+0.160292512 container init d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:29:28 np0005465604 podman[315103]: 2025-10-02 08:29:28.387856353 +0000 UTC m=+0.173820093 container start d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:29:28 np0005465604 podman[315103]: 2025-10-02 08:29:28.391454534 +0000 UTC m=+0.177418374 container attach d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:29:28 np0005465604 xenodochial_vaughan[315144]: 167 167
Oct  2 04:29:28 np0005465604 systemd[1]: libpod-d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7.scope: Deactivated successfully.
Oct  2 04:29:28 np0005465604 podman[315103]: 2025-10-02 08:29:28.396821701 +0000 UTC m=+0.182785441 container died d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.398 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-63e6b5bd3f72aa4371caad0a29cbb36dbd70856ebf5c8d0742d4f56e42d5784a-merged.mount: Deactivated successfully.
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.432 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.440 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:28 np0005465604 podman[315103]: 2025-10-02 08:29:28.445517514 +0000 UTC m=+0.231481254 container remove d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:29:28 np0005465604 systemd[1]: libpod-conmon-d918aa201f0d3ed6fa00bb44c58b9f6919e0a3df41f4612989c35529397c2ba7.scope: Deactivated successfully.
Oct  2 04:29:28 np0005465604 podman[315243]: 2025-10-02 08:29:28.642036912 +0000 UTC m=+0.040367096 container create 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.648 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 73e8c7a5-4621-4f07-824a-b81ea314a672_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:28 np0005465604 systemd[1]: Started libpod-conmon-32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b.scope.
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.685 2 DEBUG nova.policy [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1057882eff8f490d837773415bf65a8a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.688 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Successfully updated port: 29a765f0-6b44-4aad-9974-a0845658d5f2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:29:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:29:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:28 np0005465604 podman[315243]: 2025-10-02 08:29:28.627135998 +0000 UTC m=+0.025466202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:29:28 np0005465604 podman[315243]: 2025-10-02 08:29:28.729316594 +0000 UTC m=+0.127646798 container init 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:29:28 np0005465604 podman[315243]: 2025-10-02 08:29:28.743332789 +0000 UTC m=+0.141663003 container start 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 04:29:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815389092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:28 np0005465604 podman[315243]: 2025-10-02 08:29:28.749314885 +0000 UTC m=+0.147645149 container attach 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.760 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.760 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquired lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.760 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.766 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] resizing rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.811 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.818 2 DEBUG nova.compute.provider_tree [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.849 2 DEBUG nova.scheduler.client.report [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.858 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'migration_context' on Instance uuid 73e8c7a5-4621-4f07-824a-b81ea314a672 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.874 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.875 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Ensure instance console log exists: /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.875 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.875 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.875 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.877 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.878 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.880 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.887 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.887 2 INFO nova.compute.claims [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:29:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941853479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.933 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.935 2 DEBUG nova.virt.libvirt.vif [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:22Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.935 2 DEBUG nova.network.os_vif_util [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.935 2 DEBUG nova.network.os_vif_util [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.936 2 DEBUG nova.objects.instance [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 143 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 6.2 MiB/s wr, 214 op/s
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.969 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.970 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.975 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  <uuid>9924ce7f-b701-4560-b2c5-67f673b45807</uuid>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  <name>instance-00000033</name>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-769762436</nova:name>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:29:27</nova:creationTime>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <nova:user uuid="c1d66932c11043b5b90140cd2dde53d2">tempest-ListServerFiltersTestJSON-1545892750-project-member</nova:user>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <nova:project uuid="e7c4373fe01a4a14bea07af6dba4d170">tempest-ListServerFiltersTestJSON-1545892750</nova:project>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <nova:port uuid="bf9cdb7f-4cda-403b-b27e-12385e93db02">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <entry name="serial">9924ce7f-b701-4560-b2c5-67f673b45807</entry>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <entry name="uuid">9924ce7f-b701-4560-b2c5-67f673b45807</entry>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9924ce7f-b701-4560-b2c5-67f673b45807_disk">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9924ce7f-b701-4560-b2c5-67f673b45807_disk.config">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:64:b2:ea"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <target dev="tapbf9cdb7f-4c"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/console.log" append="off"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:29:28 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:29:28 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:29:28 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:29:28 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.975 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Preparing to wait for external event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.975 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.975 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.976 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.976 2 DEBUG nova.virt.libvirt.vif [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:22Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.976 2 DEBUG nova.network.os_vif_util [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.977 2 DEBUG nova.network.os_vif_util [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.977 2 DEBUG os_vif [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf9cdb7f-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.988 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf9cdb7f-4c, col_values=(('external_ids', {'iface-id': 'bf9cdb7f-4cda-403b-b27e-12385e93db02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:b2:ea', 'vm-uuid': '9924ce7f-b701-4560-b2c5-67f673b45807'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:28 np0005465604 NetworkManager[45129]: <info>  [1759393768.9909] manager: (tapbf9cdb7f-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.991 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:29:28 np0005465604 nova_compute[260603]: 2025-10-02 08:29:28.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.002 2 INFO os_vif [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c')#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.019 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.074 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.074 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.074 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No VIF found with MAC fa:16:3e:64:b2:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.075 2 INFO nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Using config drive#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.098 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.231 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.232 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.232 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Creating image(s)#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.261 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.287 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.315 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.319 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.376 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.379 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.416 2 DEBUG nova.compute.manager [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Received event network-changed-29a765f0-6b44-4aad-9974-a0845658d5f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.417 2 DEBUG nova.compute.manager [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Refreshing instance network info cache due to event network-changed-29a765f0-6b44-4aad-9974-a0845658d5f2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.417 2 DEBUG oslo_concurrency.lockutils [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.425 2 DEBUG nova.policy [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1057882eff8f490d837773415bf65a8a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.449 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.450 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.451 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.451 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.482 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.490 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.787 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Successfully created port: b4dfefd6-6971-4450-ae0e-50f4bf7eaafa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.795 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Successfully updated port: 257d115c-e196-4921-a9d3-942604825516 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.804 2 INFO nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Creating config drive at /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.814 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2ebn_xy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702013294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.865 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.900 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.901 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquired lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.901 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.907 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.909 2 DEBUG nova.compute.manager [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Received event network-changed-257d115c-e196-4921-a9d3-942604825516 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.910 2 DEBUG nova.compute.manager [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Refreshing instance network info cache due to event network-changed-257d115c-e196-4921-a9d3-942604825516. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.910 2 DEBUG oslo_concurrency.lockutils [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.957 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] resizing rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:29:29 np0005465604 musing_proskuriakova[315293]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:29:29 np0005465604 nova_compute[260603]: 2025-10-02 08:29:29.991 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2ebn_xy" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:29 np0005465604 musing_proskuriakova[315293]: --> relative data size: 1.0
Oct  2 04:29:29 np0005465604 musing_proskuriakova[315293]: --> All data devices are unavailable
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.038 2 DEBUG nova.storage.rbd_utils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 9924ce7f-b701-4560-b2c5-67f673b45807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:30 np0005465604 systemd[1]: libpod-32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b.scope: Deactivated successfully.
Oct  2 04:29:30 np0005465604 systemd[1]: libpod-32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b.scope: Consumed 1.185s CPU time.
Oct  2 04:29:30 np0005465604 podman[315243]: 2025-10-02 08:29:30.046296761 +0000 UTC m=+1.444626985 container died 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.054 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config 9924ce7f-b701-4560-b2c5-67f673b45807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b11469fe5d7bf15f097cad835b739b661232c08db8c1119178e44476bd11be85-merged.mount: Deactivated successfully.
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.103 2 DEBUG nova.compute.provider_tree [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.117 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:29:30 np0005465604 podman[315243]: 2025-10-02 08:29:30.121448267 +0000 UTC m=+1.519778461 container remove 32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_proskuriakova, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.126 2 DEBUG nova.scheduler.client.report [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:30 np0005465604 systemd[1]: libpod-conmon-32456b31ca7cf98c49f9725fd1b616dadbfe2c0f04e7bb1d3491125cbb01eb5b.scope: Deactivated successfully.
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.180 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.181 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.184 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 3.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.196 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'migration_context' on Instance uuid f7005e7b-8982-4d23-b12a-4b67c90a6c89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.261 2 DEBUG oslo_concurrency.processutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config 9924ce7f-b701-4560-b2c5-67f673b45807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.262 2 INFO nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Deleting local config drive /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/disk.config because it was imported into RBD.#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.271 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.271 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Ensure instance console log exists: /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.272 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.272 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.272 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:30 np0005465604 kernel: tapbf9cdb7f-4c: entered promiscuous mode
Oct  2 04:29:30 np0005465604 NetworkManager[45129]: <info>  [1759393770.3416] manager: (tapbf9cdb7f-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:30Z|00433|binding|INFO|Claiming lport bf9cdb7f-4cda-403b-b27e-12385e93db02 for this chassis.
Oct  2 04:29:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:30Z|00434|binding|INFO|bf9cdb7f-4cda-403b-b27e-12385e93db02: Claiming fa:16:3e:64:b2:ea 10.100.0.12
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:30 np0005465604 systemd-udevd[315711]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:29:30 np0005465604 systemd-machined[214636]: New machine qemu-56-instance-00000033.
Oct  2 04:29:30 np0005465604 NetworkManager[45129]: <info>  [1759393770.4062] device (tapbf9cdb7f-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:30 np0005465604 NetworkManager[45129]: <info>  [1759393770.4082] device (tapbf9cdb7f-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:29:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:30Z|00435|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 ovn-installed in OVS
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:30 np0005465604 systemd[1]: Started Virtual Machine qemu-56-instance-00000033.
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.434 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:b2:ea 10.100.0.12'], port_security=['fa:16:3e:64:b2:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9924ce7f-b701-4560-b2c5-67f673b45807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf9cdb7f-4cda-403b-b27e-12385e93db02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:30Z|00436|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 up in Southbound
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.435 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf9cdb7f-4cda-403b-b27e-12385e93db02 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e bound to our chassis#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.436 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.455 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c06b27a-d163-49fc-bb2e-f77bf0dd0cd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.456 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00da8a36-b1 in ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.459 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00da8a36-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.460 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7a0183-0c9e-4948-9a21-f685ce673af3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.461 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[330ad6d0-e37c-4ed6-b560-5ee7838481c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.478 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[3384d561-bc7a-4bde-b26d-2b53fe4b4071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.482 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.483 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.504 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.511 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c8313f-3341-44eb-b037-932b0b6fbb62]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.532 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.545 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[87871e6a-b867-462b-a556-90a75ee8ca00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.554 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9667d6f-3833-490e-af8d-81723e456638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 NetworkManager[45129]: <info>  [1759393770.5554] manager: (tap00da8a36-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/189)
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.602 2 DEBUG oslo_concurrency.processutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.614 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a4317239-601a-4c99-b25c-a43e920bdce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.621 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d6c09f-8f91-43f9-89cd-43278949b101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 NetworkManager[45129]: <info>  [1759393770.6523] device (tap00da8a36-b0): carrier: link connected
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.659 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[70e12a1f-c702-4af6-8b32-d21ac456977c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.659 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.662 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.662 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Creating image(s)#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.681 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c48daeb2-fba2-4eda-b685-cdad2093a6fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315791, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.693 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.701 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4e292ce7-f777-45b2-9f64-26744ea3eb0c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:d8ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464925, 'tstamp': 464925}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315815, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.725 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8284814e-b81c-42f1-bf09-177abb61ba60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315822, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.740 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.760 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[103faacb-318a-4f2d-9c7b-fc14f2db8c9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.799 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.821 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.835 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[263d1fd8-6052-47f8-a073-5d0ff84877e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.837 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.837 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.838 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:30 np0005465604 NetworkManager[45129]: <info>  [1759393770.8412] manager: (tap00da8a36-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Oct  2 04:29:30 np0005465604 kernel: tap00da8a36-b0: entered promiscuous mode
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.847 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:30Z|00437|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.873 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00da8a36-bc54-4cc1-a0e2-53333358378e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00da8a36-bc54-4cc1-a0e2-53333358378e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.878 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0c24ebc1-24fc-47e3-84f2-892829947f8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.878 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-00da8a36-bc54-4cc1-a0e2-53333358378e
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/00da8a36-bc54-4cc1-a0e2-53333358378e.pid.haproxy
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 00da8a36-bc54-4cc1-a0e2-53333358378e
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:29:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:30.880 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'env', 'PROCESS_TAG=haproxy-00da8a36-bc54-4cc1-a0e2-53333358378e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00da8a36-bc54-4cc1-a0e2-53333358378e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.919 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.921 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.922 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.923 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:30 np0005465604 podman[315943]: 2025-10-02 08:29:30.923573163 +0000 UTC m=+0.047459595 container create 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 04:29:30 np0005465604 systemd[1]: Started libpod-conmon-50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef.scope.
Oct  2 04:29:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 143 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 127 KiB/s rd, 4.1 MiB/s wr, 167 op/s
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.966 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:30 np0005465604 nova_compute[260603]: 2025-10-02 08:29:30.982 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:30 np0005465604 podman[315943]: 2025-10-02 08:29:30.899854776 +0000 UTC m=+0.023741238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:29:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:29:31 np0005465604 podman[315943]: 2025-10-02 08:29:31.015559483 +0000 UTC m=+0.139445955 container init 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:29:31 np0005465604 podman[315943]: 2025-10-02 08:29:31.025057657 +0000 UTC m=+0.148944089 container start 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:29:31 np0005465604 podman[315943]: 2025-10-02 08:29:31.028491134 +0000 UTC m=+0.152377586 container attach 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:29:31 np0005465604 nifty_lewin[315986]: 167 167
Oct  2 04:29:31 np0005465604 systemd[1]: libpod-50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef.scope: Deactivated successfully.
Oct  2 04:29:31 np0005465604 podman[315943]: 2025-10-02 08:29:31.046788583 +0000 UTC m=+0.170675015 container died 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct  2 04:29:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e0b72e1e8c3ccde314c102a04cb41605891c0de5a87b7085408eaef46a8be132-merged.mount: Deactivated successfully.
Oct  2 04:29:31 np0005465604 podman[315943]: 2025-10-02 08:29:31.085567468 +0000 UTC m=+0.209453890 container remove 50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct  2 04:29:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1103214818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.094 2 DEBUG nova.policy [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1057882eff8f490d837773415bf65a8a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.118 2 DEBUG oslo_concurrency.processutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:31 np0005465604 systemd[1]: libpod-conmon-50e78a195eb478624b567642804fa7439ef2524a466c063cc7de61786bfcf1ef.scope: Deactivated successfully.
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.131 2 DEBUG nova.compute.provider_tree [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.153 2 DEBUG nova.scheduler.client.report [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.200 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.232 2 INFO nova.scheduler.client.report [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Deleted allocations for instance 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf#033[00m
Oct  2 04:29:31 np0005465604 podman[316057]: 2025-10-02 08:29:31.272261199 +0000 UTC m=+0.044538325 container create 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:29:31 np0005465604 podman[316047]: 2025-10-02 08:29:31.285255984 +0000 UTC m=+0.069202392 container create b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.285 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:31 np0005465604 systemd[1]: Started libpod-conmon-57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7.scope.
Oct  2 04:29:31 np0005465604 systemd[1]: Started libpod-conmon-b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2.scope.
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.330 2 DEBUG oslo_concurrency.lockutils [None req-fbcf119d-3520-478b-b48f-969064e133cf 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "640bb5bf-5ae3-455f-82e7-3e6d647a0fbf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:31 np0005465604 podman[316047]: 2025-10-02 08:29:31.250520484 +0000 UTC m=+0.034466932 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:29:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:29:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:29:31 np0005465604 podman[316057]: 2025-10-02 08:29:31.252676681 +0000 UTC m=+0.024953837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:29:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757accabf38b3355c6e67e0051ac348d909cefcd9f797154530e12690d3d714a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b5c5ae73f0794dadb32620ddb0a1a034413642c4424df120944407778f4d1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757accabf38b3355c6e67e0051ac348d909cefcd9f797154530e12690d3d714a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757accabf38b3355c6e67e0051ac348d909cefcd9f797154530e12690d3d714a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757accabf38b3355c6e67e0051ac348d909cefcd9f797154530e12690d3d714a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:31 np0005465604 podman[316047]: 2025-10-02 08:29:31.380878995 +0000 UTC m=+0.164825453 container init b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:29:31 np0005465604 podman[316057]: 2025-10-02 08:29:31.386024535 +0000 UTC m=+0.158301691 container init 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:29:31 np0005465604 podman[316047]: 2025-10-02 08:29:31.389918256 +0000 UTC m=+0.173864684 container start b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:29:31 np0005465604 podman[316057]: 2025-10-02 08:29:31.394304563 +0000 UTC m=+0.166581679 container start 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 04:29:31 np0005465604 podman[316057]: 2025-10-02 08:29:31.39710544 +0000 UTC m=+0.169382586 container attach 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.416 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] resizing rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:29:31 np0005465604 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [NOTICE]   (316129) : New worker (316132) forked
Oct  2 04:29:31 np0005465604 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [NOTICE]   (316129) : Loading success.
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.513 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'migration_context' on Instance uuid f56dc5d2-b1f8-42ef-882c-62bcbd600954 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.529 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.529 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Ensure instance console log exists: /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.529 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.530 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.530 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.550 2 DEBUG nova.network.neutron [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Updating instance_info_cache with network_info: [{"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.555 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393771.5548983, 9924ce7f-b701-4560-b2c5-67f673b45807 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.556 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Started (Lifecycle Event)#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.577 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.581 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393771.5559416, 9924ce7f-b701-4560-b2c5-67f673b45807 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.583 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.585 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Releasing lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.585 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Instance network_info: |[{"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.586 2 DEBUG oslo_concurrency.lockutils [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.586 2 DEBUG nova.network.neutron [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Refreshing network info cache for port 29a765f0-6b44-4aad-9974-a0845658d5f2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.589 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Start _get_guest_xml network_info=[{"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'eeb8c9a4-e143-4b44-a997-e04d544bc537'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.594 2 WARNING nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.601 2 DEBUG nova.virt.libvirt.host [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.601 2 DEBUG nova.virt.libvirt.host [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.604 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.612 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.615 2 DEBUG nova.virt.libvirt.host [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.616 2 DEBUG nova.virt.libvirt.host [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.616 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.616 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.617 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.617 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.617 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.617 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.618 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.618 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.618 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.618 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.618 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.619 2 DEBUG nova.virt.hardware [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.622 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.656 2 DEBUG nova.network.neutron [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Updating instance_info_cache with network_info: [{"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.658 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Successfully created port: 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.661 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.694 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Releasing lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.695 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Instance network_info: |[{"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.695 2 DEBUG oslo_concurrency.lockutils [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.695 2 DEBUG nova.network.neutron [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Refreshing network info cache for port 257d115c-e196-4921-a9d3-942604825516 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.698 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Start _get_guest_xml network_info=[{"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.703 2 WARNING nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.707 2 DEBUG nova.virt.libvirt.host [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.708 2 DEBUG nova.virt.libvirt.host [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.713 2 DEBUG nova.virt.libvirt.host [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.714 2 DEBUG nova.virt.libvirt.host [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.714 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.714 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2bd3a3ae-55dd-4609-80ca-4b6ab15f763d',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.715 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.716 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.716 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.716 2 DEBUG nova.virt.hardware [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.718 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.869 2 DEBUG nova.network.neutron [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updated VIF entry in instance network info cache for port bf9cdb7f-4cda-403b-b27e-12385e93db02. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.869 2 DEBUG nova.network.neutron [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updating instance_info_cache with network_info: [{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:31 np0005465604 nova_compute[260603]: 2025-10-02 08:29:31.885 2 DEBUG oslo_concurrency.lockutils [req-d68fa802-3acd-4562-b587-74a8088ca50d req-13c19283-0f3e-46f0-b876-edf9a734e8d6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272438370' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.073 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.111 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971208537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.118 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]: {
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:    "0": [
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:        {
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "devices": [
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "/dev/loop3"
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            ],
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_name": "ceph_lv0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_size": "21470642176",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "name": "ceph_lv0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "tags": {
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.cluster_name": "ceph",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.crush_device_class": "",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.encrypted": "0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.osd_id": "0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.type": "block",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.vdo": "0"
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            },
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "type": "block",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "vg_name": "ceph_vg0"
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:        }
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:    ],
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:    "1": [
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:        {
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "devices": [
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "/dev/loop4"
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            ],
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_name": "ceph_lv1",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_size": "21470642176",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "name": "ceph_lv1",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "tags": {
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.cluster_name": "ceph",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.crush_device_class": "",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.encrypted": "0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.osd_id": "1",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.type": "block",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.vdo": "0"
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            },
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "type": "block",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "vg_name": "ceph_vg1"
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:        }
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:    ],
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:    "2": [
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:        {
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "devices": [
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "/dev/loop5"
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            ],
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_name": "ceph_lv2",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_size": "21470642176",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "name": "ceph_lv2",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "tags": {
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.cluster_name": "ceph",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.crush_device_class": "",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.encrypted": "0",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.osd_id": "2",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.type": "block",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:                "ceph.vdo": "0"
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            },
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "type": "block",
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:            "vg_name": "ceph_vg2"
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:        }
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]:    ]
Oct  2 04:29:32 np0005465604 peaceful_bell[316101]: }
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.172 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:32 np0005465604 systemd[1]: libpod-57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7.scope: Deactivated successfully.
Oct  2 04:29:32 np0005465604 podman[316057]: 2025-10-02 08:29:32.179642258 +0000 UTC m=+0.951919474 container died 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:29:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-757accabf38b3355c6e67e0051ac348d909cefcd9f797154530e12690d3d714a-merged.mount: Deactivated successfully.
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.222 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.234 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:32 np0005465604 podman[316057]: 2025-10-02 08:29:32.237256698 +0000 UTC m=+1.009533824 container remove 57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 04:29:32 np0005465604 systemd[1]: libpod-conmon-57493232e335dfc5ba82f6e7d0537d5a1b27e5669b40a9c87d2c75ed4b94b1c7.scope: Deactivated successfully.
Oct  2 04:29:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3092069418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.598 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.601 2 DEBUG nova.virt.libvirt.vif [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1372609845',display_name='tempest-ListServerFiltersTestJSON-instance-1372609845',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1372609845',id=52,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-2sfo7if8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:24Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=797fde07-e88a-4d6e-a1a3-25e22c66097c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.601 2 DEBUG nova.network.os_vif_util [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.602 2 DEBUG nova.network.os_vif_util [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.603 2 DEBUG nova.objects.instance [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'pci_devices' on Instance uuid 797fde07-e88a-4d6e-a1a3-25e22c66097c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.622 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <uuid>797fde07-e88a-4d6e-a1a3-25e22c66097c</uuid>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <name>instance-00000034</name>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1372609845</nova:name>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:29:31</nova:creationTime>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:user uuid="c1d66932c11043b5b90140cd2dde53d2">tempest-ListServerFiltersTestJSON-1545892750-project-member</nova:user>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:project uuid="e7c4373fe01a4a14bea07af6dba4d170">tempest-ListServerFiltersTestJSON-1545892750</nova:project>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:port uuid="29a765f0-6b44-4aad-9974-a0845658d5f2">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="serial">797fde07-e88a-4d6e-a1a3-25e22c66097c</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="uuid">797fde07-e88a-4d6e-a1a3-25e22c66097c</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/797fde07-e88a-4d6e-a1a3-25e22c66097c_disk">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:c6:7e:1d"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <target dev="tap29a765f0-6b"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/console.log" append="off"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:29:32 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:29:32 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.623 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Preparing to wait for external event network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.624 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.624 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.624 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.625 2 DEBUG nova.virt.libvirt.vif [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1372609845',display_name='tempest-ListServerFiltersTestJSON-instance-1372609845',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1372609845',id=52,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-2sfo7if8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:24Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=797fde07-e88a-4d6e-a1a3-25e22c66097c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.625 2 DEBUG nova.network.os_vif_util [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.626 2 DEBUG nova.network.os_vif_util [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.627 2 DEBUG os_vif [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.635 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29a765f0-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29a765f0-6b, col_values=(('external_ids', {'iface-id': '29a765f0-6b44-4aad-9974-a0845658d5f2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:7e:1d', 'vm-uuid': '797fde07-e88a-4d6e-a1a3-25e22c66097c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:32 np0005465604 NetworkManager[45129]: <info>  [1759393772.6394] manager: (tap29a765f0-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.652 2 INFO os_vif [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b')#033[00m
Oct  2 04:29:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4232591236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.695 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.697 2 DEBUG nova.virt.libvirt.vif [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1454521037',display_name='tempest-ListServerFiltersTestJSON-instance-1454521037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1454521037',id=53,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-w2kc410q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:27Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=d15c7c6a-e6a1-4538-9db0-ee1aef10f38b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.698 2 DEBUG nova.network.os_vif_util [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.699 2 DEBUG nova.network.os_vif_util [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.700 2 DEBUG nova.objects.instance [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'pci_devices' on Instance uuid d15c7c6a-e6a1-4538-9db0-ee1aef10f38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.723 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <uuid>d15c7c6a-e6a1-4538-9db0-ee1aef10f38b</uuid>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <name>instance-00000035</name>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <memory>196608</memory>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1454521037</nova:name>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:29:31</nova:creationTime>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.micro">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:memory>192</nova:memory>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:user uuid="c1d66932c11043b5b90140cd2dde53d2">tempest-ListServerFiltersTestJSON-1545892750-project-member</nova:user>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:project uuid="e7c4373fe01a4a14bea07af6dba4d170">tempest-ListServerFiltersTestJSON-1545892750</nova:project>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <nova:port uuid="257d115c-e196-4921-a9d3-942604825516">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="serial">d15c7c6a-e6a1-4538-9db0-ee1aef10f38b</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="uuid">d15c7c6a-e6a1-4538-9db0-ee1aef10f38b</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:5f:eb:54"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <target dev="tap257d115c-e1"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/console.log" append="off"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:29:32 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:29:32 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:29:32 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:29:32 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.726 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Preparing to wait for external event network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.726 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.726 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.727 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.728 2 DEBUG nova.virt.libvirt.vif [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1454521037',display_name='tempest-ListServerFiltersTestJSON-instance-1454521037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1454521037',id=53,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-w2kc410q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:27Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=d15c7c6a-e6a1-4538-9db0-ee1aef10f38b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.729 2 DEBUG nova.network.os_vif_util [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.729 2 DEBUG nova.network.os_vif_util [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.730 2 DEBUG os_vif [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap257d115c-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap257d115c-e1, col_values=(('external_ids', {'iface-id': '257d115c-e196-4921-a9d3-942604825516', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:eb:54', 'vm-uuid': 'd15c7c6a-e6a1-4538-9db0-ee1aef10f38b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:32 np0005465604 NetworkManager[45129]: <info>  [1759393772.7430] manager: (tap257d115c-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.751 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.751 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.751 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No VIF found with MAC fa:16:3e:c6:7e:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.752 2 INFO nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Using config drive#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.772 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.777 2 INFO os_vif [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1')#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.830 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.830 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.831 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] No VIF found with MAC fa:16:3e:5f:eb:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.831 2 INFO nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Using config drive#033[00m
Oct  2 04:29:32 np0005465604 nova_compute[260603]: 2025-10-02 08:29:32.852 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 280 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 9.1 MiB/s wr, 251 op/s
Oct  2 04:29:33 np0005465604 podman[316496]: 2025-10-02 08:29:33.002605572 +0000 UTC m=+0.055502386 container create 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 04:29:33 np0005465604 systemd[1]: Started libpod-conmon-1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2.scope.
Oct  2 04:29:33 np0005465604 podman[316496]: 2025-10-02 08:29:32.975459949 +0000 UTC m=+0.028356823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:29:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:29:33 np0005465604 podman[316496]: 2025-10-02 08:29:33.095265892 +0000 UTC m=+0.148162766 container init 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.099 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Successfully updated port: b4dfefd6-6971-4450-ae0e-50f4bf7eaafa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:29:33 np0005465604 podman[316496]: 2025-10-02 08:29:33.107492662 +0000 UTC m=+0.160389436 container start 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:29:33 np0005465604 podman[316496]: 2025-10-02 08:29:33.111088564 +0000 UTC m=+0.163985398 container attach 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:29:33 np0005465604 amazing_burnell[316512]: 167 167
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.118 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.120 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquired lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.120 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:29:33 np0005465604 systemd[1]: libpod-1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2.scope: Deactivated successfully.
Oct  2 04:29:33 np0005465604 podman[316496]: 2025-10-02 08:29:33.125608075 +0000 UTC m=+0.178504889 container died 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 04:29:33 np0005465604 systemd[1]: var-lib-containers-storage-overlay-371426c3d7a95a6f166309f0d3659297f1d2a84105b79a789093f79b7b5adc5e-merged.mount: Deactivated successfully.
Oct  2 04:29:33 np0005465604 podman[316496]: 2025-10-02 08:29:33.171306986 +0000 UTC m=+0.224203790 container remove 1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_burnell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 04:29:33 np0005465604 systemd[1]: libpod-conmon-1556d461f69eb7ad0b4697c77fbfd92444c58e49a8f327bccf8fde23e855d0f2.scope: Deactivated successfully.
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.315 2 DEBUG nova.network.neutron [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Updated VIF entry in instance network info cache for port 257d115c-e196-4921-a9d3-942604825516. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.316 2 DEBUG nova.network.neutron [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Updating instance_info_cache with network_info: [{"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.334 2 DEBUG oslo_concurrency.lockutils [req-5204a8b8-b5de-4a7d-8928-ae104050194b req-6368c015-1248-4232-878c-d3444c6bbe86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:33 np0005465604 podman[316537]: 2025-10-02 08:29:33.369936958 +0000 UTC m=+0.049394646 container create 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:29:33 np0005465604 systemd[1]: Started libpod-conmon-4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885.scope.
Oct  2 04:29:33 np0005465604 podman[316537]: 2025-10-02 08:29:33.347976936 +0000 UTC m=+0.027434634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:29:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.471 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:29:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16c9ed56f150ff1f4e4f213b40602c9b286539335c094af45554ba9764125c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16c9ed56f150ff1f4e4f213b40602c9b286539335c094af45554ba9764125c51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16c9ed56f150ff1f4e4f213b40602c9b286539335c094af45554ba9764125c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16c9ed56f150ff1f4e4f213b40602c9b286539335c094af45554ba9764125c51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.496 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Successfully created port: 12abaaed-2f93-40bd-bddd-8143c3709480 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:29:33 np0005465604 podman[316537]: 2025-10-02 08:29:33.507023878 +0000 UTC m=+0.186481566 container init 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 04:29:33 np0005465604 podman[316537]: 2025-10-02 08:29:33.519113754 +0000 UTC m=+0.198571452 container start 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:29:33 np0005465604 podman[316537]: 2025-10-02 08:29:33.523324615 +0000 UTC m=+0.202782283 container attach 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.661 2 INFO nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Creating config drive at /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.673 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx3qjrauc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.723 2 INFO nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Creating config drive at /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.733 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcyxobxao execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.831 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx3qjrauc" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.872 2 DEBUG nova.storage.rbd_utils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.876 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.920 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcyxobxao" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.960 2 DEBUG nova.storage.rbd_utils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] rbd image d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:33 np0005465604 nova_compute[260603]: 2025-10-02 08:29:33.967 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.038 2 DEBUG nova.compute.manager [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-changed-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.039 2 DEBUG nova.compute.manager [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Refreshing instance network info cache due to event network-changed-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.039 2 DEBUG oslo_concurrency.lockutils [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.053 2 DEBUG oslo_concurrency.processutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config 797fde07-e88a-4d6e-a1a3-25e22c66097c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.054 2 INFO nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Deleting local config drive /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c/disk.config because it was imported into RBD.#033[00m
Oct  2 04:29:34 np0005465604 kernel: tap29a765f0-6b: entered promiscuous mode
Oct  2 04:29:34 np0005465604 NetworkManager[45129]: <info>  [1759393774.1380] manager: (tap29a765f0-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/193)
Oct  2 04:29:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:34Z|00438|binding|INFO|Claiming lport 29a765f0-6b44-4aad-9974-a0845658d5f2 for this chassis.
Oct  2 04:29:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:34Z|00439|binding|INFO|29a765f0-6b44-4aad-9974-a0845658d5f2: Claiming fa:16:3e:c6:7e:1d 10.100.0.9
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.150 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:7e:1d 10.100.0.9'], port_security=['fa:16:3e:c6:7e:1d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '797fde07-e88a-4d6e-a1a3-25e22c66097c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=29a765f0-6b44-4aad-9974-a0845658d5f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.154 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 29a765f0-6b44-4aad-9974-a0845658d5f2 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e bound to our chassis#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.158 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.163 2 DEBUG oslo_concurrency.processutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.163 2 INFO nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Deleting local config drive /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b/disk.config because it was imported into RBD.#033[00m
Oct  2 04:29:34 np0005465604 systemd-udevd[316650]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.176 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f8a143-f295-4a29-94ec-15475f3fcc7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 NetworkManager[45129]: <info>  [1759393774.1813] device (tap29a765f0-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:29:34 np0005465604 NetworkManager[45129]: <info>  [1759393774.1839] device (tap29a765f0-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:29:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:34Z|00440|binding|INFO|Setting lport 29a765f0-6b44-4aad-9974-a0845658d5f2 ovn-installed in OVS
Oct  2 04:29:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:34Z|00441|binding|INFO|Setting lport 29a765f0-6b44-4aad-9974-a0845658d5f2 up in Southbound
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:34 np0005465604 systemd-machined[214636]: New machine qemu-57-instance-00000034.
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.215 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ad6f8e-32b5-45db-9a64-d9b2ef1b9333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 systemd[1]: Started Virtual Machine qemu-57-instance-00000034.
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.220 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[88b8536b-dfd0-4b8d-9c17-e9c7ebc86ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 kernel: tap257d115c-e1: entered promiscuous mode
Oct  2 04:29:34 np0005465604 NetworkManager[45129]: <info>  [1759393774.2456] manager: (tap257d115c-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Oct  2 04:29:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:34Z|00442|binding|INFO|Claiming lport 257d115c-e196-4921-a9d3-942604825516 for this chassis.
Oct  2 04:29:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:34Z|00443|binding|INFO|257d115c-e196-4921-a9d3-942604825516: Claiming fa:16:3e:5f:eb:54 10.100.0.8
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:34 np0005465604 systemd-udevd[316658]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.257 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:eb:54 10.100.0.8'], port_security=['fa:16:3e:5f:eb:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd15c7c6a-e6a1-4538-9db0-ee1aef10f38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=257d115c-e196-4921-a9d3-942604825516) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.260 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[47bb788b-6173-495e-9462-435777df8e5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 NetworkManager[45129]: <info>  [1759393774.2635] device (tap257d115c-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:29:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:34Z|00444|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 ovn-installed in OVS
Oct  2 04:29:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:34Z|00445|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 up in Southbound
Oct  2 04:29:34 np0005465604 NetworkManager[45129]: <info>  [1759393774.2666] device (tap257d115c-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.279 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8af3b069-1979-4115-b39c-25c00277c75c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316684, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 systemd-machined[214636]: New machine qemu-58-instance-00000035.
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.300 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f6adc80b-ff4d-4e2b-b59b-19098ad43f0a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316687, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316687, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.302 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:34 np0005465604 systemd[1]: Started Virtual Machine qemu-58-instance-00000035.
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.312 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.312 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.312 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.313 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.314 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 257d115c-e196-4921-a9d3-942604825516 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.316 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.333 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f250bc14-0a13-474b-8ac0-4990e079f03a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.386 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cd098071-f7dc-401f-8ad7-0db9be26e045]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.389 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8156778a-849b-46eb-a4d4-818c30f797e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.430 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9f81915a-c77f-437a-ad81-297d921a75d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.461 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1efa0550-5c7b-40e5-87b1-769885cf7ca0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316717, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.481 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Successfully updated port: 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.484 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4778c77b-bace-451e-a172-37a9791db2b5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316721, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316721, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.486 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.489 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.489 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.490 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.490 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]: {
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "osd_id": 2,
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "type": "bluestore"
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:    },
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "osd_id": 1,
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "type": "bluestore"
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:    },
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "osd_id": 0,
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:        "type": "bluestore"
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]:    }
Oct  2 04:29:34 np0005465604 romantic_chebyshev[316553]: }
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.508 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.508 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquired lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.508 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:29:34 np0005465604 systemd[1]: libpod-4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885.scope: Deactivated successfully.
Oct  2 04:29:34 np0005465604 podman[316537]: 2025-10-02 08:29:34.533878049 +0000 UTC m=+1.213335747 container died 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 04:29:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-16c9ed56f150ff1f4e4f213b40602c9b286539335c094af45554ba9764125c51-merged.mount: Deactivated successfully.
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.609 2 DEBUG nova.network.neutron [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Updated VIF entry in instance network info cache for port 29a765f0-6b44-4aad-9974-a0845658d5f2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:29:34 np0005465604 podman[316537]: 2025-10-02 08:29:34.610679486 +0000 UTC m=+1.290137164 container remove 4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chebyshev, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.611 2 DEBUG nova.network.neutron [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Updating instance_info_cache with network_info: [{"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:34 np0005465604 systemd[1]: libpod-conmon-4b1748282e1fe0a2a05cd57c35c86cfecda2fc95c403d3672067774734209885.scope: Deactivated successfully.
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.635 2 DEBUG oslo_concurrency.lockutils [req-3782669d-5506-4b51-b412-9603e5cac2f8 req-1893256e-7ff1-43f9-ab0d-331d7899ade4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-797fde07-e88a-4d6e-a1a3-25e22c66097c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:29:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:29:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:29:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:29:34 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a8fe8d3c-ecd0-40ab-ae7a-e858f5222f60 does not exist
Oct  2 04:29:34 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4fe53165-8757-410d-852d-c154b96651a3 does not exist
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.668 2 DEBUG nova.compute.manager [req-2fae3b60-8b6b-4548-bc62-ce19f1f2c958 req-8345ce5f-4e0f-4aa2-af3d-65a90bcdd339 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Received event network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.669 2 DEBUG oslo_concurrency.lockutils [req-2fae3b60-8b6b-4548-bc62-ce19f1f2c958 req-8345ce5f-4e0f-4aa2-af3d-65a90bcdd339 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.670 2 DEBUG oslo_concurrency.lockutils [req-2fae3b60-8b6b-4548-bc62-ce19f1f2c958 req-8345ce5f-4e0f-4aa2-af3d-65a90bcdd339 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.670 2 DEBUG oslo_concurrency.lockutils [req-2fae3b60-8b6b-4548-bc62-ce19f1f2c958 req-8345ce5f-4e0f-4aa2-af3d-65a90bcdd339 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.670 2 DEBUG nova.compute.manager [req-2fae3b60-8b6b-4548-bc62-ce19f1f2c958 req-8345ce5f-4e0f-4aa2-af3d-65a90bcdd339 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Processing event network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.801 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.815 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.816 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:34.817 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.862 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.862 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.888 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:29:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 319 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 11 MiB/s wr, 259 op/s
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.975 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.976 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.988 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:29:34 np0005465604 nova_compute[260603]: 2025-10-02 08:29:34.988 2 INFO nova.compute.claims [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.156 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Updating instance_info_cache with network_info: [{"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.174 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Releasing lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.174 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Instance network_info: |[{"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.175 2 DEBUG oslo_concurrency.lockutils [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.175 2 DEBUG nova.network.neutron [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Refreshing network info cache for port b4dfefd6-6971-4450-ae0e-50f4bf7eaafa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.178 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Start _get_guest_xml network_info=[{"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.183 2 WARNING nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.192 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.193 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.197 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.197 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.198 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.198 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.199 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.199 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.199 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.199 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.199 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.200 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.200 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.200 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.201 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.201 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.204 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.246 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.551 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393775.5501964, 797fde07-e88a-4d6e-a1a3-25e22c66097c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.553 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] VM Started (Lifecycle Event)#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.560 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.568 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.577 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.597 2 INFO nova.virt.libvirt.driver [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Instance spawned successfully.#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.598 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.601 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.621 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.622 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393775.55034, 797fde07-e88a-4d6e-a1a3-25e22c66097c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.622 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.626 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.626 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.627 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.627 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.627 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.628 2 DEBUG nova.virt.libvirt.driver [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:29:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.656 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.665 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393775.570418, 797fde07-e88a-4d6e-a1a3-25e22c66097c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864614248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.665 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:29:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/122715789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.685 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.690 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.692 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.717 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.722 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.754 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.756 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.757 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393775.5980434, d15c7c6a-e6a1-4538-9db0-ee1aef10f38b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.757 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] VM Started (Lifecycle Event)#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.761 2 INFO nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Took 11.59 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.761 2 DEBUG nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.773 2 DEBUG nova.compute.provider_tree [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.796 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.801 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393775.5988185, d15c7c6a-e6a1-4538-9db0-ee1aef10f38b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.801 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.809 2 DEBUG nova.scheduler.client.report [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.828 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.831 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.846 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.846 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.870 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.893 2 INFO nova.compute.manager [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Took 12.72 seconds to build instance.#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.907 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.908 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.915 2 DEBUG oslo_concurrency.lockutils [None req-3dd45ded-0991-4315-8103-fb728ace9982 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.929 2 INFO nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:29:35 np0005465604 nova_compute[260603]: 2025-10-02 08:29:35.949 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.043 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Successfully updated port: 12abaaed-2f93-40bd-bddd-8143c3709480 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.069 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.071 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.072 2 INFO nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Creating image(s)#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.105 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.142 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3923769575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.176 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.182 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.238 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.239 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquired lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.239 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.241 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.244 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-1',id=54,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:28Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=73e8c7a5-4621-4f07-824a-b81ea314a672,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.244 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.245 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.247 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73e8c7a5-4621-4f07-824a-b81ea314a672 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.263 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.263 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.264 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.264 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.294 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.299 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 923e00cc-7494-46f3-93e2-3c223705aff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.351 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  <uuid>73e8c7a5-4621-4f07-824a-b81ea314a672</uuid>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  <name>instance-00000036</name>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1504413762-1</nova:name>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:29:35</nova:creationTime>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <nova:user uuid="1057882eff8f490d837773415bf65a8a">tempest-ListServersNegativeTestJSON-21742049-project-member</nova:user>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <nova:project uuid="f6f9056bf44b4bd8859c73e3cb645683">tempest-ListServersNegativeTestJSON-21742049</nova:project>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <nova:port uuid="b4dfefd6-6971-4450-ae0e-50f4bf7eaafa">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <entry name="serial">73e8c7a5-4621-4f07-824a-b81ea314a672</entry>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <entry name="uuid">73e8c7a5-4621-4f07-824a-b81ea314a672</entry>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/73e8c7a5-4621-4f07-824a-b81ea314a672_disk">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:6e:74:a7"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <target dev="tapb4dfefd6-69"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/console.log" append="off"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:29:36 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:29:36 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:29:36 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:29:36 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.354 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Preparing to wait for external event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.354 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.354 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.355 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.356 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-1',id=54,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:28Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=73e8c7a5-4621-4f07-824a-b81ea314a672,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.356 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.357 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.358 2 DEBUG os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.359 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4dfefd6-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4dfefd6-69, col_values=(('external_ids', {'iface-id': 'b4dfefd6-6971-4450-ae0e-50f4bf7eaafa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:74:a7', 'vm-uuid': '73e8c7a5-4621-4f07-824a-b81ea314a672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:36 np0005465604 NetworkManager[45129]: <info>  [1759393776.4182] manager: (tapb4dfefd6-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.428 2 INFO os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69')#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.512 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.513 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.513 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No VIF found with MAC fa:16:3e:6e:74:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.514 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Using config drive#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.580 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.591 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.625 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 923e00cc-7494-46f3-93e2-3c223705aff1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.717 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] resizing rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.776 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-changed-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.777 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Refreshing instance network info cache due to event network-changed-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.777 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.780 2 DEBUG nova.policy [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e8f0a6fb1d224a979db4b4a738bbf453', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f942883d5794a5c8e3cd2b5ef44a863', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.851 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393761.8258083, 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.852 2 INFO nova.compute.manager [-] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.861 2 DEBUG nova.objects.instance [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lazy-loading 'migration_context' on Instance uuid 923e00cc-7494-46f3-93e2-3c223705aff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.882 2 DEBUG nova.compute.manager [None req-b69d4143-5029-41a0-bc13-2744d0febebf - - - - - -] [instance: 640bb5bf-5ae3-455f-82e7-3e6d647a0fbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.890 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.891 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Ensure instance console log exists: /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.891 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.892 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:36 np0005465604 nova_compute[260603]: 2025-10-02 08:29:36.892 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 319 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 10 MiB/s wr, 246 op/s
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.091 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.092 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.118 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.164 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Updating instance_info_cache with network_info: [{"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.188 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.189 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.190 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Releasing lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.191 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Instance network_info: |[{"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.192 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.192 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Refreshing network info cache for port 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.198 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Start _get_guest_xml network_info=[{"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.208 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.209 2 INFO nova.compute.claims [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.214 2 WARNING nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.225 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.226 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.230 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.231 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.231 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.232 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.233 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.233 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.234 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.234 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.235 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.235 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.236 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.236 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.239 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.240 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.244 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.305 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Received event network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.306 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.306 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.307 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.307 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] No waiting events found dispatching network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.307 2 WARNING nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Received unexpected event network-vif-plugged-29a765f0-6b44-4aad-9974-a0845658d5f2 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.307 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Received event network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.308 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.308 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.308 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.309 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Processing event network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.309 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Received event network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.309 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.310 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.310 2 DEBUG oslo_concurrency.lockutils [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.310 2 DEBUG nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] No waiting events found dispatching network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.310 2 WARNING nova.compute.manager [req-67f06de5-b41d-440d-8978-0a9b0e0b5514 req-05cfabb8-1215-4879-bc3d-bc10edd65b75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Received unexpected event network-vif-plugged-257d115c-e196-4921-a9d3-942604825516 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.312 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.326 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393777.3258932, d15c7c6a-e6a1-4538-9db0-ee1aef10f38b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.326 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.345 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.346 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.350 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.354 2 INFO nova.virt.libvirt.driver [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Instance spawned successfully.#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.354 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.378 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.394 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.395 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.395 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.396 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.396 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.397 2 DEBUG nova.virt.libvirt.driver [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.464 2 INFO nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Took 10.35 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.465 2 DEBUG nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.492 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.539 2 INFO nova.compute.manager [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Took 11.41 seconds to build instance.#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.554 2 DEBUG oslo_concurrency.lockutils [None req-a2acd5e8-e6b5-45b9-82e0-b874acd03fa6 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915258954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.747 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Creating config drive at /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.754 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp323k3a08 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.791 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.821 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.826 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.857 2 DEBUG nova.network.neutron [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Updating instance_info_cache with network_info: [{"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.879 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Releasing lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.879 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Instance network_info: |[{"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.882 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Start _get_guest_xml network_info=[{"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.890 2 WARNING nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.894 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.894 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.898 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.898 2 DEBUG nova.virt.libvirt.host [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.899 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.899 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.900 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.900 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.900 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.901 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.901 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.901 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.902 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.902 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.902 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.902 2 DEBUG nova.virt.hardware [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.906 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300001568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.941 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp323k3a08" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.983 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image 73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:37 np0005465604 nova_compute[260603]: 2025-10-02 08:29:37.990 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config 73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.028 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.033 2 DEBUG nova.compute.provider_tree [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.048 2 DEBUG nova.scheduler.client.report [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.067 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.068 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.131 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.132 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.144 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config 73e8c7a5-4621-4f07-824a-b81ea314a672_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.145 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Deleting local config drive /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672/disk.config because it was imported into RBD.#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.154 2 INFO nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.171 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:29:38 np0005465604 kernel: tapb4dfefd6-69: entered promiscuous mode
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:38 np0005465604 NetworkManager[45129]: <info>  [1759393778.1997] manager: (tapb4dfefd6-69): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:38Z|00446|binding|INFO|Claiming lport b4dfefd6-6971-4450-ae0e-50f4bf7eaafa for this chassis.
Oct  2 04:29:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:38Z|00447|binding|INFO|b4dfefd6-6971-4450-ae0e-50f4bf7eaafa: Claiming fa:16:3e:6e:74:a7 10.100.0.11
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.213 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:74:a7 10.100.0.11'], port_security=['fa:16:3e:6e:74:a7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '73e8c7a5-4621-4f07-824a-b81ea314a672', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.215 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b4dfefd6-6971-4450-ae0e-50f4bf7eaafa in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c bound to our chassis#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.219 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e6563dd-5ecf-4759-9df8-5b501617e75c#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.233 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[697f74a1-4952-4ded-9d5c-05af5c75e447]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.234 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9e6563dd-51 in ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.236 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9e6563dd-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.236 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c4274756-d7e2-44d5-8999-8fed106c7c3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 systemd-udevd[317295]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.240 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42608d45-57a3-47a8-95bc-69b6fd209522]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 NetworkManager[45129]: <info>  [1759393778.2509] device (tapb4dfefd6-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:29:38 np0005465604 NetworkManager[45129]: <info>  [1759393778.2518] device (tapb4dfefd6-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.257 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a111766b-f760-45db-99b7-dd36ed7b35fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 systemd-machined[214636]: New machine qemu-59-instance-00000036.
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.274 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c2792f12-d1f5-41ad-8866-ee4a115c8c3c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 systemd[1]: Started Virtual Machine qemu-59-instance-00000036.
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.285 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.287 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.288 2 INFO nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Creating image(s)#033[00m
Oct  2 04:29:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:38Z|00448|binding|INFO|Setting lport b4dfefd6-6971-4450-ae0e-50f4bf7eaafa ovn-installed in OVS
Oct  2 04:29:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:38Z|00449|binding|INFO|Setting lport b4dfefd6-6971-4450-ae0e-50f4bf7eaafa up in Southbound
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.304 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c90ac602-6dd0-40b2-951a-afe285d64390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 NetworkManager[45129]: <info>  [1759393778.3120] manager: (tap9e6563dd-50): new Veth device (/org/freedesktop/NetworkManager/Devices/197)
Oct  2 04:29:38 np0005465604 systemd-udevd[317299]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.311 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7a867b90-71f4-4776-8b9a-a46d996cd8e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1586218149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.350 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ef57ab7d-ba64-41eb-9ef7-da7159684e1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2406393232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.354 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf05ea2-376d-4b89-af0d-ee4241fcf9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.365 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:38 np0005465604 NetworkManager[45129]: <info>  [1759393778.3778] device (tap9e6563dd-50): carrier: link connected
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.384 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[335eae55-5bf3-408f-8085-b52b1610c3c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.401 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf5dd87-f6d1-4d09-929f-75bf020b64ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317350, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.416 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.427 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2226a4b7-03b9-4673-8b58-c4d2cdd194de]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:8ba1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465698, 'tstamp': 465698}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317367, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.451 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.454 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[70c71518-408f-4175-901f-ed345f157622]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317370, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.457 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[11fd5dad-039b-445b-8c2f-294303710565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.510 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.511 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002078080740867357 of space, bias 1.0, pg target 0.6234242222602071 quantized to 32 (current 32)
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.546 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.555 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.593 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b099e310-b47a-4b6f-957c-f77e801f6505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.594 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.594 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.595 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e6563dd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:38 np0005465604 kernel: tap9e6563dd-50: entered promiscuous mode
Oct  2 04:29:38 np0005465604 NetworkManager[45129]: <info>  [1759393778.6358] manager: (tap9e6563dd-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.637 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-2',id=55,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:29Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f7005e7b-8982-4d23-b12a-4b67c90a6c89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.637 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e6563dd-50, col_values=(('external_ids', {'iface-id': '39e267b1-3dd0-4688-b6e5-f1bccf651722'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.638 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:38Z|00450|binding|INFO|Releasing lport 39e267b1-3dd0-4688-b6e5-f1bccf651722 from this chassis (sb_readonly=0)
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.639 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.640 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9e6563dd-5ecf-4759-9df8-5b501617e75c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9e6563dd-5ecf-4759-9df8-5b501617e75c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.641 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'pci_devices' on Instance uuid f7005e7b-8982-4d23-b12a-4b67c90a6c89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.641 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5aae81-f333-4fe1-89db-4f27bba944c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.642 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-9e6563dd-5ecf-4759-9df8-5b501617e75c
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/9e6563dd-5ecf-4759-9df8-5b501617e75c.pid.haproxy
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 9e6563dd-5ecf-4759-9df8-5b501617e75c
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:29:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:38.642 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'env', 'PROCESS_TAG=haproxy-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9e6563dd-5ecf-4759-9df8-5b501617e75c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.650 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.651 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.652 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.652 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.679 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.684 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d634251d-b484-4af7-b102-fe8015603660_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.723 2 DEBUG nova.policy [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1ac6f72f7366459a86c086737b89ea69', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f269abbe5769427dbf44c430d7529c04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.727 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  <uuid>f7005e7b-8982-4d23-b12a-4b67c90a6c89</uuid>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  <name>instance-00000037</name>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1504413762-2</nova:name>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:29:37</nova:creationTime>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <nova:user uuid="1057882eff8f490d837773415bf65a8a">tempest-ListServersNegativeTestJSON-21742049-project-member</nova:user>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <nova:project uuid="f6f9056bf44b4bd8859c73e3cb645683">tempest-ListServersNegativeTestJSON-21742049</nova:project>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <nova:port uuid="5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <entry name="serial">f7005e7b-8982-4d23-b12a-4b67c90a6c89</entry>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <entry name="uuid">f7005e7b-8982-4d23-b12a-4b67c90a6c89</entry>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:95:86:2c"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <target dev="tap5bbef33f-36"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/console.log" append="off"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:29:38 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:29:38 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:29:38 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:29:38 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.732 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Preparing to wait for external event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.733 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.734 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.734 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.736 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-2',id=55,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:29Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f7005e7b-8982-4d23-b12a-4b67c90a6c89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.736 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.737 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.737 2 DEBUG os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.739 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bbef33f-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.743 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5bbef33f-36, col_values=(('external_ids', {'iface-id': '5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:86:2c', 'vm-uuid': 'f7005e7b-8982-4d23-b12a-4b67c90a6c89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:38 np0005465604 NetworkManager[45129]: <info>  [1759393778.7454] manager: (tap5bbef33f-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.751 2 INFO os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36')#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.810 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Successfully created port: 044e4f76-db30-47b6-b277-8c3a13743b9c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.831 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.831 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.831 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No VIF found with MAC fa:16:3e:95:86:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.832 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Using config drive#033[00m
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.888 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 366 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 12 MiB/s wr, 359 op/s
Oct  2 04:29:38 np0005465604 nova_compute[260603]: 2025-10-02 08:29:38.978 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d634251d-b484-4af7-b102-fe8015603660_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4076390817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.080 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] resizing rbd image d634251d-b484-4af7-b102-fe8015603660_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.127 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.129 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-3',id=56,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:30Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f56dc5d2-b1f8-42ef-882c-62bcbd600954,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.131 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.132 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.136 2 DEBUG nova.objects.instance [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'pci_devices' on Instance uuid f56dc5d2-b1f8-42ef-882c-62bcbd600954 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:39 np0005465604 podman[317596]: 2025-10-02 08:29:39.140761825 +0000 UTC m=+0.052453321 container create 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.157 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  <uuid>f56dc5d2-b1f8-42ef-882c-62bcbd600954</uuid>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  <name>instance-00000038</name>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1504413762-3</nova:name>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:29:37</nova:creationTime>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <nova:user uuid="1057882eff8f490d837773415bf65a8a">tempest-ListServersNegativeTestJSON-21742049-project-member</nova:user>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <nova:project uuid="f6f9056bf44b4bd8859c73e3cb645683">tempest-ListServersNegativeTestJSON-21742049</nova:project>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <nova:port uuid="12abaaed-2f93-40bd-bddd-8143c3709480">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <entry name="serial">f56dc5d2-b1f8-42ef-882c-62bcbd600954</entry>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <entry name="uuid">f56dc5d2-b1f8-42ef-882c-62bcbd600954</entry>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:24:bc:66"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <target dev="tap12abaaed-2f"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/console.log" append="off"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:29:39 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:29:39 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:29:39 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:29:39 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.159 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Preparing to wait for external event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.159 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.160 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.160 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.160 2 DEBUG nova.virt.libvirt.vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-3',id=56,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:30Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f56dc5d2-b1f8-42ef-882c-62bcbd600954,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.161 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.161 2 DEBUG nova.network.os_vif_util [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.162 2 DEBUG os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.162 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.163 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:39 np0005465604 systemd[1]: Started libpod-conmon-2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce.scope.
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.197 2 DEBUG nova.network.neutron [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Updated VIF entry in instance network info cache for port b4dfefd6-6971-4450-ae0e-50f4bf7eaafa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.198 2 DEBUG nova.network.neutron [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Updating instance_info_cache with network_info: [{"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12abaaed-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12abaaed-2f, col_values=(('external_ids', {'iface-id': '12abaaed-2f93-40bd-bddd-8143c3709480', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:bc:66', 'vm-uuid': 'f56dc5d2-b1f8-42ef-882c-62bcbd600954'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:39 np0005465604 NetworkManager[45129]: <info>  [1759393779.2021] manager: (tap12abaaed-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b410b66f213cbfe1b811aab1f58708a1f7a49c73b1959803809bb9cdb00d8fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:39 np0005465604 podman[317596]: 2025-10-02 08:29:39.116269284 +0000 UTC m=+0.027960810 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.219 2 DEBUG nova.objects.instance [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'migration_context' on Instance uuid d634251d-b484-4af7-b102-fe8015603660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.222 2 INFO os_vif [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f')#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.223 2 DEBUG oslo_concurrency.lockutils [req-2a848cbb-184f-4be6-99fc-52bd8319b943 req-28dabd72-a5ea-49ff-bf05-e66f3cbf668f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-73e8c7a5-4621-4f07-824a-b81ea314a672" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:39 np0005465604 podman[317596]: 2025-10-02 08:29:39.225347904 +0000 UTC m=+0.137039430 container init 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:29:39 np0005465604 podman[317596]: 2025-10-02 08:29:39.23230856 +0000 UTC m=+0.144000066 container start 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.237 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.237 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Ensure instance console log exists: /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.237 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.238 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.238 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:39 np0005465604 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [NOTICE]   (317653) : New worker (317655) forked
Oct  2 04:29:39 np0005465604 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [NOTICE]   (317653) : Loading success.
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.268 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.268 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.268 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] No VIF found with MAC fa:16:3e:24:bc:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.269 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Using config drive#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.292 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.325 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393779.3247273, 73e8c7a5-4621-4f07-824a-b81ea314a672 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.325 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] VM Started (Lifecycle Event)#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.352 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.355 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393779.3248746, 73e8c7a5-4621-4f07-824a-b81ea314a672 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.355 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.376 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.379 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.398 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.975 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Creating config drive at /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config#033[00m
Oct  2 04:29:39 np0005465604 nova_compute[260603]: 2025-10-02 08:29:39.981 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3cc5plsn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.049 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Creating config drive at /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.063 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxvqz63i5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.148 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3cc5plsn" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.181 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.184 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.227 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxvqz63i5" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.263 2 DEBUG nova.storage.rbd_utils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] rbd image f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.272 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.345 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config f7005e7b-8982-4d23-b12a-4b67c90a6c89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.346 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Deleting local config drive /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89/disk.config because it was imported into RBD.#033[00m
Oct  2 04:29:40 np0005465604 systemd-udevd[317312]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:29:40 np0005465604 NetworkManager[45129]: <info>  [1759393780.3961] manager: (tap5bbef33f-36): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Oct  2 04:29:40 np0005465604 kernel: tap5bbef33f-36: entered promiscuous mode
Oct  2 04:29:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:40Z|00451|binding|INFO|Claiming lport 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c for this chassis.
Oct  2 04:29:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:40Z|00452|binding|INFO|5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c: Claiming fa:16:3e:95:86:2c 10.100.0.5
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:40 np0005465604 NetworkManager[45129]: <info>  [1759393780.4077] device (tap5bbef33f-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:29:40 np0005465604 NetworkManager[45129]: <info>  [1759393780.4087] device (tap5bbef33f-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:29:40 np0005465604 systemd-machined[214636]: New machine qemu-60-instance-00000037.
Oct  2 04:29:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:40Z|00453|binding|INFO|Setting lport 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c ovn-installed in OVS
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:40 np0005465604 systemd[1]: Started Virtual Machine qemu-60-instance-00000037.
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.460 2 DEBUG oslo_concurrency.processutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config f56dc5d2-b1f8-42ef-882c-62bcbd600954_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.460 2 INFO nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Deleting local config drive /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954/disk.config because it was imported into RBD.#033[00m
Oct  2 04:29:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:40Z|00454|binding|INFO|Setting lport 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c up in Southbound
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.464 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:86:2c 10.100.0.5'], port_security=['fa:16:3e:95:86:2c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f7005e7b-8982-4d23-b12a-4b67c90a6c89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.466 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c bound to our chassis#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.468 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e6563dd-5ecf-4759-9df8-5b501617e75c#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.484 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b726f5-66e9-474f-a992-5a3fa297053e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.490 2 DEBUG nova.compute.manager [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.491 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.491 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.491 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.492 2 DEBUG nova.compute.manager [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Processing event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.492 2 DEBUG nova.compute.manager [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.492 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.492 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.492 2 DEBUG oslo_concurrency.lockutils [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.493 2 DEBUG nova.compute.manager [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] No waiting events found dispatching network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.493 2 WARNING nova.compute.manager [req-3121f8a5-bc50-4674-901f-dc6df0bbba67 req-e27beb50-f556-4650-8ae1-523ea1afbd31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received unexpected event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.493 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.500 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393780.4994123, 73e8c7a5-4621-4f07-824a-b81ea314a672 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.501 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.511 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8b594f-4c4e-43ed-bb4f-63a5d931ffd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.511 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Updated VIF entry in instance network info cache for port 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.513 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Updating instance_info_cache with network_info: [{"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.516 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.515 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[934494fb-84ea-4ab5-8da5-d9ce24a71155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.519 2 INFO nova.virt.libvirt.driver [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Instance spawned successfully.#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.519 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.522 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:40 np0005465604 kernel: tap12abaaed-2f: entered promiscuous mode
Oct  2 04:29:40 np0005465604 NetworkManager[45129]: <info>  [1759393780.5267] manager: (tap12abaaed-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Oct  2 04:29:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:40Z|00455|binding|INFO|Claiming lport 12abaaed-2f93-40bd-bddd-8143c3709480 for this chassis.
Oct  2 04:29:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:40Z|00456|binding|INFO|12abaaed-2f93-40bd-bddd-8143c3709480: Claiming fa:16:3e:24:bc:66 10.100.0.4
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.531 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f7005e7b-8982-4d23-b12a-4b67c90a6c89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.531 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.531 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.532 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.532 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.532 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Processing event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.532 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.532 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.533 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.533 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.533 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.534 2 WARNING nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.534 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-changed-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.534 2 DEBUG nova.compute.manager [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Refreshing instance network info cache due to event network-changed-12abaaed-2f93-40bd-bddd-8143c3709480. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.534 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.534 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.535 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Refreshing network info cache for port 12abaaed-2f93-40bd-bddd-8143c3709480 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.537 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:29:40 np0005465604 NetworkManager[45129]: <info>  [1759393780.5398] device (tap12abaaed-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.540 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:bc:66 10.100.0.4'], port_security=['fa:16:3e:24:bc:66 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f56dc5d2-b1f8-42ef-882c-62bcbd600954', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=12abaaed-2f93-40bd-bddd-8143c3709480) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:40 np0005465604 NetworkManager[45129]: <info>  [1759393780.5409] device (tap12abaaed-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.546 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:40Z|00457|binding|INFO|Setting lport 12abaaed-2f93-40bd-bddd-8143c3709480 ovn-installed in OVS
Oct  2 04:29:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:40Z|00458|binding|INFO|Setting lport 12abaaed-2f93-40bd-bddd-8143c3709480 up in Southbound
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.552 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.560 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[56ddc0a2-4692-4766-a067-3c2ee07454e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.566 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.566 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.567 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.568 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.569 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.570 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.573 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.574 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393780.5458484, 9924ce7f-b701-4560-b2c5-67f673b45807 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.574 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.576 2 INFO nova.virt.libvirt.driver [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance spawned successfully.#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.576 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:29:40 np0005465604 systemd-machined[214636]: New machine qemu-61-instance-00000038.
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.585 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94dd12b2-60e9-4056-aa9f-cd8e9b381d44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317802, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 systemd[1]: Started Virtual Machine qemu-61-instance-00000038.
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.601 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.607 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.611 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.611 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.612 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.614 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9026513e-df0c-4444-8016-f3641be34731]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465714, 'tstamp': 465714}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317804, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465718, 'tstamp': 465718}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317804, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.615 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.616 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.618 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e6563dd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.618 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.618 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e6563dd-50, col_values=(('external_ids', {'iface-id': '39e267b1-3dd0-4688-b6e5-f1bccf651722'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.619 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.620 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 12abaaed-2f93-40bd-bddd-8143c3709480 in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c unbound from our chassis#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.622 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e6563dd-5ecf-4759-9df8-5b501617e75c#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.623 2 DEBUG nova.virt.libvirt.driver [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.632 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.638 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[153d5d02-5cba-487b-822f-e77cf83ef058]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.650 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Took 12.49 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.650 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.682 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[11d4c6df-cbd9-4735-a1bf-79d31ba7b315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.685 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4709a2b4-8c3d-456b-bc26-995c284d3fbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.711 2 INFO nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Took 18.49 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.710 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1c244396-b74f-4fb9-be61-d176d1ec8c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.713 2 DEBUG nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.736 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Took 14.03 seconds to build instance.#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.741 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b4a461-04b1-434a-a808-2692c439c1dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317849, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.773 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dff2b997-c2a0-4699-b70a-1ba7124d5d79]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465714, 'tstamp': 465714}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317854, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465718, 'tstamp': 465718}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317854, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.774 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.774 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e6563dd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.828 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e6563dd-50, col_values=(('external_ids', {'iface-id': '39e267b1-3dd0-4688-b6e5-f1bccf651722'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:40.829 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.835 2 INFO nova.compute.manager [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Took 19.61 seconds to build instance.#033[00m
Oct  2 04:29:40 np0005465604 nova_compute[260603]: 2025-10-02 08:29:40.859 2 DEBUG oslo_concurrency.lockutils [None req-77fdf50a-3bbc-4540-82cc-81b67b7ba59d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 366 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 8.4 MiB/s wr, 219 op/s
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.268 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393781.2684371, f7005e7b-8982-4d23-b12a-4b67c90a6c89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.269 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] VM Started (Lifecycle Event)#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.285 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Successfully created port: f79edf8d-90b8-47b7-b366-244c63439a64 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.295 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.299 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393781.2685282, f7005e7b-8982-4d23-b12a-4b67c90a6c89 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.300 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.322 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.326 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.353 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.589 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393781.5886009, f56dc5d2-b1f8-42ef-882c-62bcbd600954 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.592 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] VM Started (Lifecycle Event)#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.617 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.620 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393781.5889506, f56dc5d2-b1f8-42ef-882c-62bcbd600954 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.621 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.640 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.645 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:41 np0005465604 nova_compute[260603]: 2025-10-02 08:29:41.663 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.099915) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782099965, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 502, "num_deletes": 252, "total_data_size": 421686, "memory_usage": 432016, "flush_reason": "Manual Compaction"}
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782104216, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 353741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30460, "largest_seqno": 30961, "table_properties": {"data_size": 350975, "index_size": 738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7655, "raw_average_key_size": 20, "raw_value_size": 345241, "raw_average_value_size": 943, "num_data_blocks": 32, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759393762, "oldest_key_time": 1759393762, "file_creation_time": 1759393782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 4347 microseconds, and 1930 cpu microseconds.
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.104266) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 353741 bytes OK
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.104286) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.105806) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.105822) EVENT_LOG_v1 {"time_micros": 1759393782105817, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.105842) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 418688, prev total WAL file size 418688, number of live WAL files 2.
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.106295) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303030' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(345KB)], [65(10012KB)]
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782106331, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 10606596, "oldest_snapshot_seqno": -1}
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5361 keys, 7332008 bytes, temperature: kUnknown
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782140791, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7332008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7296564, "index_size": 20934, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 135376, "raw_average_key_size": 25, "raw_value_size": 7200534, "raw_average_value_size": 1343, "num_data_blocks": 853, "num_entries": 5361, "num_filter_entries": 5361, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759393782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.140977) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7332008 bytes
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.142286) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 307.2 rd, 212.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.8 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(50.7) write-amplify(20.7) OK, records in: 5872, records dropped: 511 output_compression: NoCompression
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.142306) EVENT_LOG_v1 {"time_micros": 1759393782142296, "job": 36, "event": "compaction_finished", "compaction_time_micros": 34524, "compaction_time_cpu_micros": 17932, "output_level": 6, "num_output_files": 1, "total_output_size": 7332008, "num_input_records": 5872, "num_output_records": 5361, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782142478, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.143 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Successfully updated port: f79edf8d-90b8-47b7-b366-244c63439a64 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759393782144580, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.106251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.144706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.144711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.144742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.144755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:42 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:29:42.144756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.160 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.160 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquired lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.161 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.169 2 DEBUG nova.compute.manager [req-4c07f50d-f83f-432d-8ffb-e09fdce0693e req-6467b941-2dc0-4110-96b0-7d59283fe7bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.169 2 DEBUG oslo_concurrency.lockutils [req-4c07f50d-f83f-432d-8ffb-e09fdce0693e req-6467b941-2dc0-4110-96b0-7d59283fe7bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.170 2 DEBUG oslo_concurrency.lockutils [req-4c07f50d-f83f-432d-8ffb-e09fdce0693e req-6467b941-2dc0-4110-96b0-7d59283fe7bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.170 2 DEBUG oslo_concurrency.lockutils [req-4c07f50d-f83f-432d-8ffb-e09fdce0693e req-6467b941-2dc0-4110-96b0-7d59283fe7bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.170 2 DEBUG nova.compute.manager [req-4c07f50d-f83f-432d-8ffb-e09fdce0693e req-6467b941-2dc0-4110-96b0-7d59283fe7bc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Processing event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.171 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.180 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393782.179207, f7005e7b-8982-4d23-b12a-4b67c90a6c89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.180 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.184 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.200 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Successfully updated port: 044e4f76-db30-47b6-b277-8c3a13743b9c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.202 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.209 2 INFO nova.virt.libvirt.driver [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Instance spawned successfully.#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.210 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.212 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.223 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.223 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquired lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.223 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.261 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.271 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.273 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.273 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.274 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.275 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.275 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.368 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Took 13.14 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.369 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.455 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Took 15.68 seconds to build instance.#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.484 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.595 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.624 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.795 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.796 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.797 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.798 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.799 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Processing event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.799 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.800 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.801 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.801 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.802 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] No waiting events found dispatching network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.802 2 WARNING nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received unexpected event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.803 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-changed-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.804 2 DEBUG nova.compute.manager [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Refreshing instance network info cache due to event network-changed-f79edf8d-90b8-47b7-b366-244c63439a64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.804 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.811 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.816 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393782.8157337, f56dc5d2-b1f8-42ef-882c-62bcbd600954 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.816 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.820 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.828 2 INFO nova.virt.libvirt.driver [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Instance spawned successfully.#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.829 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.837 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.840 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.850 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.851 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.851 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.852 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.852 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.853 2 DEBUG nova.virt.libvirt.driver [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.860 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.919 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Took 12.26 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:29:42 np0005465604 nova_compute[260603]: 2025-10-02 08:29:42.920 2 DEBUG nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 383 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 8.8 MiB/s wr, 399 op/s
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.035 2 INFO nova.compute.manager [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Took 16.24 seconds to build instance.#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.060 2 DEBUG oslo_concurrency.lockutils [None req-67a8a1c0-db1d-4d67-9b41-ccce7b53c627 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.065 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Updated VIF entry in instance network info cache for port 12abaaed-2f93-40bd-bddd-8143c3709480. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.066 2 DEBUG nova.network.neutron [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Updating instance_info_cache with network_info: [{"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.089 2 DEBUG oslo_concurrency.lockutils [req-8609a9d1-b4c9-48cb-b88f-2e0eaaa85df4 req-40bbcf41-46ef-47c4-ac86-d97dcfa47452 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f56dc5d2-b1f8-42ef-882c-62bcbd600954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.668 2 DEBUG nova.network.neutron [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Updating instance_info_cache with network_info: [{"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.692 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Releasing lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.693 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Instance network_info: |[{"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.693 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.693 2 DEBUG nova.network.neutron [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Refreshing network info cache for port f79edf8d-90b8-47b7-b366-244c63439a64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.696 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Start _get_guest_xml network_info=[{"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.702 2 WARNING nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.709 2 DEBUG nova.virt.libvirt.host [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.710 2 DEBUG nova.virt.libvirt.host [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.720 2 DEBUG nova.virt.libvirt.host [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.721 2 DEBUG nova.virt.libvirt.host [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.721 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.721 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.722 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.722 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.722 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.722 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.722 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.723 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.723 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.723 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.723 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.723 2 DEBUG nova.virt.hardware [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.728 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.853 2 DEBUG nova.network.neutron [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updating instance_info_cache with network_info: [{"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.873 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Releasing lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.874 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Instance network_info: |[{"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.876 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Start _get_guest_xml network_info=[{"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.903 2 WARNING nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.921 2 DEBUG nova.virt.libvirt.host [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.937 2 DEBUG nova.virt.libvirt.host [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.952 2 DEBUG nova.virt.libvirt.host [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.953 2 DEBUG nova.virt.libvirt.host [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.953 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.953 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.954 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.954 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.954 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.954 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.955 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.955 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.955 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.955 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.956 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.966 2 DEBUG nova.virt.hardware [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:29:43 np0005465604 nova_compute[260603]: 2025-10-02 08:29:43.983 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:44 np0005465604 podman[317913]: 2025-10-02 08:29:44.07135164 +0000 UTC m=+0.136428260 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:29:44 np0005465604 podman[317914]: 2025-10-02 08:29:44.102292812 +0000 UTC m=+0.165680189 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.269 2 DEBUG nova.compute.manager [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.269 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.270 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.270 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.270 2 DEBUG nova.compute.manager [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] No waiting events found dispatching network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.270 2 WARNING nova.compute.manager [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received unexpected event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c for instance with vm_state active and task_state None.#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.270 2 DEBUG nova.compute.manager [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-changed-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.270 2 DEBUG nova.compute.manager [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Refreshing instance network info cache due to event network-changed-044e4f76-db30-47b6-b277-8c3a13743b9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.271 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.271 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.271 2 DEBUG nova.network.neutron [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Refreshing network info cache for port 044e4f76-db30-47b6-b277-8c3a13743b9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:29:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1027909258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.447 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.719s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.619 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.647 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2952174159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.859 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.876s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:44 np0005465604 nova_compute[260603]: 2025-10-02 08:29:44.943 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 413 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.2 MiB/s wr, 390 op/s
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.002 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.237 2 DEBUG nova.network.neutron [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Updated VIF entry in instance network info cache for port f79edf8d-90b8-47b7-b366-244c63439a64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.238 2 DEBUG nova.network.neutron [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Updating instance_info_cache with network_info: [{"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.261 2 DEBUG oslo_concurrency.lockutils [req-78c1620a-a6d8-4d52-a4aa-07a38f94f2c2 req-55ddf5ab-b82b-483d-a72c-55396681a360 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d634251d-b484-4af7-b102-fe8015603660" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236559884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.455 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.808s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.457 2 DEBUG nova.virt.libvirt.vif [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-916266897',display_name='tempest-DeleteServersTestJSON-server-916266897',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-916266897',id=58,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-aeqxqe0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177
785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:38Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=d634251d-b484-4af7-b102-fe8015603660,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.458 2 DEBUG nova.network.os_vif_util [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.458 2 DEBUG nova.network.os_vif_util [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.461 2 DEBUG nova.objects.instance [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid d634251d-b484-4af7-b102-fe8015603660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.474 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <uuid>d634251d-b484-4af7-b102-fe8015603660</uuid>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <name>instance-0000003a</name>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:name>tempest-DeleteServersTestJSON-server-916266897</nova:name>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:29:43</nova:creationTime>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:user uuid="1ac6f72f7366459a86c086737b89ea69">tempest-DeleteServersTestJSON-812177785-project-member</nova:user>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:project uuid="f269abbe5769427dbf44c430d7529c04">tempest-DeleteServersTestJSON-812177785</nova:project>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:port uuid="f79edf8d-90b8-47b7-b366-244c63439a64">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="serial">d634251d-b484-4af7-b102-fe8015603660</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="uuid">d634251d-b484-4af7-b102-fe8015603660</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d634251d-b484-4af7-b102-fe8015603660_disk">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d634251d-b484-4af7-b102-fe8015603660_disk.config">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:a8:48:7d"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <target dev="tapf79edf8d-90"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/console.log" append="off"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:29:45 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:29:45 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.474 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Preparing to wait for external event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.474 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.475 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.475 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.476 2 DEBUG nova.virt.libvirt.vif [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-916266897',display_name='tempest-DeleteServersTestJSON-server-916266897',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-916266897',id=58,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-aeqxqe0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJ
SON-812177785-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:38Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=d634251d-b484-4af7-b102-fe8015603660,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.477 2 DEBUG nova.network.os_vif_util [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.477 2 DEBUG nova.network.os_vif_util [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.477 2 DEBUG os_vif [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.479 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.481 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf79edf8d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf79edf8d-90, col_values=(('external_ids', {'iface-id': 'f79edf8d-90b8-47b7-b366-244c63439a64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:48:7d', 'vm-uuid': 'd634251d-b484-4af7-b102-fe8015603660'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:45 np0005465604 NetworkManager[45129]: <info>  [1759393785.4842] manager: (tapf79edf8d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.491 2 INFO os_vif [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90')#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.577 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.578 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.579 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] No VIF found with MAC fa:16:3e:a8:48:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.579 2 INFO nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Using config drive#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.716 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:29:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596397078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.781 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.777s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.808 2 DEBUG nova.virt.libvirt.vif [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-568591211',display_name='tempest-ServersTestManualDisk-server-568591211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-568591211',id=57,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARlZgY/0EkzEMepYNnm4b01nWIMecq2XG9MuajTSjQZi/SZ8DIEdLBXsx3DCy0ARgTpk4vDQEJ3TsL+ZNLPSjyILPCIRt4tiYIZmsXwTZOquFcYjN59rB2JnY5UB/nfNw==',key_name='tempest-keypair-510296598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f942883d5794a5c8e3cd2b5ef44a863',ramdisk_id='',reservation_id='r-9syab2uj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-2010618382',owner_user_name='tempest-ServersTestManualDisk-2010618382-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8f0a6fb1d224a979db4b4a738bbf453',uuid=923e00cc-7494-46f3-93e2-3c223705aff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.808 2 DEBUG nova.network.os_vif_util [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converting VIF {"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.809 2 DEBUG nova.network.os_vif_util [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.810 2 DEBUG nova.objects.instance [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lazy-loading 'pci_devices' on Instance uuid 923e00cc-7494-46f3-93e2-3c223705aff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.827 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <uuid>923e00cc-7494-46f3-93e2-3c223705aff1</uuid>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <name>instance-00000039</name>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersTestManualDisk-server-568591211</nova:name>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:29:43</nova:creationTime>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:user uuid="e8f0a6fb1d224a979db4b4a738bbf453">tempest-ServersTestManualDisk-2010618382-project-member</nova:user>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:project uuid="1f942883d5794a5c8e3cd2b5ef44a863">tempest-ServersTestManualDisk-2010618382</nova:project>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <nova:port uuid="044e4f76-db30-47b6-b277-8c3a13743b9c">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="serial">923e00cc-7494-46f3-93e2-3c223705aff1</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="uuid">923e00cc-7494-46f3-93e2-3c223705aff1</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/923e00cc-7494-46f3-93e2-3c223705aff1_disk">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/923e00cc-7494-46f3-93e2-3c223705aff1_disk.config">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:46:b0:76"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <target dev="tap044e4f76-db"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/console.log" append="off"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:29:45 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:29:45 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:29:45 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:29:45 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.827 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Preparing to wait for external event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.828 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.828 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.828 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.829 2 DEBUG nova.virt.libvirt.vif [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-568591211',display_name='tempest-ServersTestManualDisk-server-568591211',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-568591211',id=57,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARlZgY/0EkzEMepYNnm4b01nWIMecq2XG9MuajTSjQZi/SZ8DIEdLBXsx3DCy0ARgTpk4vDQEJ3TsL+ZNLPSjyILPCIRt4tiYIZmsXwTZOquFcYjN59rB2JnY5UB/nfNw==',key_name='tempest-keypair-510296598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f942883d5794a5c8e3cd2b5ef44a863',ramdisk_id='',reservation_id='r-9syab2uj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-2010618382',owner_user_name='tempest-ServersTestManualDisk-2010618382-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:29:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8f0a6fb1d224a979db4b4a738bbf453',uuid=923e00cc-7494-46f3-93e2-3c223705aff1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.829 2 DEBUG nova.network.os_vif_util [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converting VIF {"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.829 2 DEBUG nova.network.os_vif_util [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.830 2 DEBUG os_vif [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.830 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap044e4f76-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap044e4f76-db, col_values=(('external_ids', {'iface-id': '044e4f76-db30-47b6-b277-8c3a13743b9c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:b0:76', 'vm-uuid': '923e00cc-7494-46f3-93e2-3c223705aff1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:45 np0005465604 NetworkManager[45129]: <info>  [1759393785.8355] manager: (tap044e4f76-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.849 2 INFO os_vif [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db')#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.892 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.893 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.893 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] No VIF found with MAC fa:16:3e:46:b0:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.893 2 INFO nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Using config drive#033[00m
Oct  2 04:29:45 np0005465604 nova_compute[260603]: 2025-10-02 08:29:45.952 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.281 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.282 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.282 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.283 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.283 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.285 2 INFO nova.compute.manager [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Terminating instance#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.289 2 DEBUG nova.compute.manager [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.313 2 INFO nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Creating config drive at /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.320 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph69hzfdu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:46 np0005465604 kernel: tapb4dfefd6-69 (unregistering): left promiscuous mode
Oct  2 04:29:46 np0005465604 NetworkManager[45129]: <info>  [1759393786.3431] device (tapb4dfefd6-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.360 2 DEBUG nova.network.neutron [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updated VIF entry in instance network info cache for port 044e4f76-db30-47b6-b277-8c3a13743b9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.361 2 DEBUG nova.network.neutron [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updating instance_info_cache with network_info: [{"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:46Z|00459|binding|INFO|Releasing lport b4dfefd6-6971-4450-ae0e-50f4bf7eaafa from this chassis (sb_readonly=0)
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:46Z|00460|binding|INFO|Setting lport b4dfefd6-6971-4450-ae0e-50f4bf7eaafa down in Southbound
Oct  2 04:29:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:46Z|00461|binding|INFO|Removing iface tapb4dfefd6-69 ovn-installed in OVS
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.383 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:74:a7 10.100.0.11'], port_security=['fa:16:3e:6e:74:a7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '73e8c7a5-4621-4f07-824a-b81ea314a672', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.384 162357 INFO neutron.agent.ovn.metadata.agent [-] Port b4dfefd6-6971-4450-ae0e-50f4bf7eaafa in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c unbound from our chassis#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.385 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e6563dd-5ecf-4759-9df8-5b501617e75c#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.388 2 DEBUG oslo_concurrency.lockutils [req-46168867-623c-4485-ab8c-65f09b1bb275 req-07dd991c-10b0-4747-a2f5-a7deae4f21d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:46 np0005465604 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct  2 04:29:46 np0005465604 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000036.scope: Consumed 6.070s CPU time.
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.405 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[515e0ba2-bbbd-4597-9eb2-8388e7ecb0a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 systemd-machined[214636]: Machine qemu-59-instance-00000036 terminated.
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.447 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4dbb6bf5-687f-4d36-8c18-8a6fae037e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.450 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[13d1d85e-4a5d-4af6-b891-02b55293b1dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.476 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[02e2d79b-92c8-450b-a895-fec01b792a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.484 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph69hzfdu" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.492 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[40105d36-5fd3-4ec1-a8ec-a608b1bf6cd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318128, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.510 2 DEBUG nova.storage.rbd_utils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] rbd image d634251d-b484-4af7-b102-fe8015603660_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.513 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dbed3c6b-eab3-4b4b-9763-af1a8037e1fe]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465714, 'tstamp': 465714}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318142, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465718, 'tstamp': 465718}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318142, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.514 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.533 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e6563dd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.533 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.534 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e6563dd-50, col_values=(('external_ids', {'iface-id': '39e267b1-3dd0-4688-b6e5-f1bccf651722'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.534 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.539 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config d634251d-b484-4af7-b102-fe8015603660_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.576 2 INFO nova.virt.libvirt.driver [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Instance destroyed successfully.#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.576 2 DEBUG nova.objects.instance [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'resources' on Instance uuid 73e8c7a5-4621-4f07-824a-b81ea314a672 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.602 2 DEBUG nova.virt.libvirt.vif [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-1',id=54,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mode
l='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:40Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=73e8c7a5-4621-4f07-824a-b81ea314a672,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.603 2 DEBUG nova.network.os_vif_util [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "address": "fa:16:3e:6e:74:a7", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4dfefd6-69", "ovs_interfaceid": "b4dfefd6-6971-4450-ae0e-50f4bf7eaafa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.608 2 DEBUG nova.network.os_vif_util [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.612 2 DEBUG os_vif [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4dfefd6-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.625 2 INFO nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Creating config drive at /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.630 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdps5pjcr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.669 2 INFO os_vif [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:74:a7,bridge_name='br-int',has_traffic_filtering=True,id=b4dfefd6-6971-4450-ae0e-50f4bf7eaafa,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4dfefd6-69')#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.691 2 DEBUG oslo_concurrency.processutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config d634251d-b484-4af7-b102-fe8015603660_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.692 2 INFO nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Deleting local config drive /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660/disk.config because it was imported into RBD.#033[00m
Oct  2 04:29:46 np0005465604 systemd-udevd[318119]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:29:46 np0005465604 NetworkManager[45129]: <info>  [1759393786.7530] manager: (tapf79edf8d-90): new Tun device (/org/freedesktop/NetworkManager/Devices/205)
Oct  2 04:29:46 np0005465604 kernel: tapf79edf8d-90: entered promiscuous mode
Oct  2 04:29:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:46Z|00462|binding|INFO|Claiming lport f79edf8d-90b8-47b7-b366-244c63439a64 for this chassis.
Oct  2 04:29:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:46Z|00463|binding|INFO|f79edf8d-90b8-47b7-b366-244c63439a64: Claiming fa:16:3e:a8:48:7d 10.100.0.4
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.765 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:48:7d 10.100.0.4'], port_security=['fa:16:3e:a8:48:7d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd634251d-b484-4af7-b102-fe8015603660', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f79edf8d-90b8-47b7-b366-244c63439a64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:46 np0005465604 NetworkManager[45129]: <info>  [1759393786.7657] device (tapf79edf8d-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.766 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f79edf8d-90b8-47b7-b366-244c63439a64 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca bound to our chassis#033[00m
Oct  2 04:29:46 np0005465604 NetworkManager[45129]: <info>  [1759393786.7671] device (tapf79edf8d-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.767 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.774 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdps5pjcr" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.778 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f03f37d5-abcb-450d-bafc-21978ede8894]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.781 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa72ac8c9-11 in ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:29:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:46Z|00464|binding|INFO|Setting lport f79edf8d-90b8-47b7-b366-244c63439a64 ovn-installed in OVS
Oct  2 04:29:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:46Z|00465|binding|INFO|Setting lport f79edf8d-90b8-47b7-b366-244c63439a64 up in Southbound
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.784 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa72ac8c9-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.784 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bb04f3-37e0-4c4f-a18f-9ffb6840d133]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.785 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa57311-9e10-4d87-98b6-42058a1f4e69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.806 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ac203fec-8bfb-42df-a9e5-c07d2f8bf95e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 systemd-machined[214636]: New machine qemu-62-instance-0000003a.
Oct  2 04:29:46 np0005465604 systemd[1]: Started Virtual Machine qemu-62-instance-0000003a.
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.827 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0605fa8b-3b81-4624-889c-7b1013cb830f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.874 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b19788cc-ec39-486e-98dd-73b53e64ba50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.881 2 DEBUG nova.storage.rbd_utils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] rbd image 923e00cc-7494-46f3-93e2-3c223705aff1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:29:46 np0005465604 NetworkManager[45129]: <info>  [1759393786.8878] manager: (tapa72ac8c9-10): new Veth device (/org/freedesktop/NetworkManager/Devices/206)
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.889 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1893aaa6-3a88-4317-b12f-87dee05acfed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.907 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config 923e00cc-7494-46f3-93e2-3c223705aff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.933 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7e0e5a-17c7-4109-9001-a657c6ff00a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.938 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3cff5d34-11db-4c76-828e-e36ac39f18c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.950 2 DEBUG nova.compute.manager [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-unplugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.950 2 DEBUG oslo_concurrency.lockutils [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.950 2 DEBUG oslo_concurrency.lockutils [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.950 2 DEBUG oslo_concurrency.lockutils [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.950 2 DEBUG nova.compute.manager [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] No waiting events found dispatching network-vif-unplugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:46 np0005465604 nova_compute[260603]: 2025-10-02 08:29:46.951 2 DEBUG nova.compute.manager [req-22703e42-a9c9-44f2-a836-fc4f010dc609 req-3f4f2867-8352-4b36-aa11-80a5d2274629 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-unplugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:29:46 np0005465604 NetworkManager[45129]: <info>  [1759393786.9590] device (tapa72ac8c9-10): carrier: link connected
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.968 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[08f3f2f7-1604-40fa-919d-fd5fe8d4d327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 413 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 3.6 MiB/s wr, 367 op/s
Oct  2 04:29:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:46.992 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcec84b-468c-483e-9c32-1c2aab03784b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466556, 'reachable_time': 43237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318278, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.011 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ebfb8e3b-cd37-495f-98e3-108e617bf7f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3b:61d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466556, 'tstamp': 466556}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318293, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.039 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1a9fc6-8feb-4319-8d5c-9476052a180a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa72ac8c9-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3b:61:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466556, 'reachable_time': 43237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318294, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.114 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29e9d93a-544c-4603-8d81-57d65dfdcb47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.169 2 DEBUG oslo_concurrency.processutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config 923e00cc-7494-46f3-93e2-3c223705aff1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.170 2 INFO nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Deleting local config drive /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1/disk.config because it was imported into RBD.#033[00m
Oct  2 04:29:47 np0005465604 virtqemud[260328]: End of file while reading data: Input/output error
Oct  2 04:29:47 np0005465604 kernel: tap044e4f76-db: entered promiscuous mode
Oct  2 04:29:47 np0005465604 NetworkManager[45129]: <info>  [1759393787.2172] manager: (tap044e4f76-db): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Oct  2 04:29:47 np0005465604 systemd-udevd[318259]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:47Z|00466|binding|INFO|Claiming lport 044e4f76-db30-47b6-b277-8c3a13743b9c for this chassis.
Oct  2 04:29:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:47Z|00467|binding|INFO|044e4f76-db30-47b6-b277-8c3a13743b9c: Claiming fa:16:3e:46:b0:76 10.100.0.11
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.217 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bca16ceb-0240-4267-8cb5-3e8725f80477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.226 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.226 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.226 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa72ac8c9-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:47 np0005465604 NetworkManager[45129]: <info>  [1759393787.2289] manager: (tapa72ac8c9-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.230 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:b0:76 10.100.0.11'], port_security=['fa:16:3e:46:b0:76 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '923e00cc-7494-46f3-93e2-3c223705aff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bd8f146-d090-40d8-8651-21c92934a6ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f942883d5794a5c8e3cd2b5ef44a863', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3c9d772-e37a-48f1-89cb-39eaf88eda56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dec8975f-9e7e-451f-957f-04aee213c5b3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=044e4f76-db30-47b6-b277-8c3a13743b9c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:47 np0005465604 NetworkManager[45129]: <info>  [1759393787.2337] device (tap044e4f76-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:29:47 np0005465604 NetworkManager[45129]: <info>  [1759393787.2355] device (tap044e4f76-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:29:47 np0005465604 systemd-machined[214636]: New machine qemu-63-instance-00000039.
Oct  2 04:29:47 np0005465604 systemd[1]: Started Virtual Machine qemu-63-instance-00000039.
Oct  2 04:29:47 np0005465604 kernel: tapa72ac8c9-10: entered promiscuous mode
Oct  2 04:29:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:47Z|00468|binding|INFO|Setting lport 044e4f76-db30-47b6-b277-8c3a13743b9c ovn-installed in OVS
Oct  2 04:29:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:47Z|00469|binding|INFO|Setting lport 044e4f76-db30-47b6-b277-8c3a13743b9c up in Southbound
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:47Z|00470|binding|INFO|Releasing lport f9acec59-0200-4a1d-84e4-06e67c730498 from this chassis (sb_readonly=0)
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.364 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa72ac8c9-10, col_values=(('external_ids', {'iface-id': 'f9acec59-0200-4a1d-84e4-06e67c730498'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.366 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.367 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0338394d-2bdd-4b16-939d-d93e7430c12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.368 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.pid.haproxy
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID a72ac8c9-16ee-4ec0-b23d-2741fda000ca
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:29:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:47.368 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'env', 'PROCESS_TAG=haproxy-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a72ac8c9-16ee-4ec0-b23d-2741fda000ca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.494 2 INFO nova.virt.libvirt.driver [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Deleting instance files /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672_del#033[00m
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.495 2 INFO nova.virt.libvirt.driver [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Deletion of /var/lib/nova/instances/73e8c7a5-4621-4f07-824a-b81ea314a672_del complete#033[00m
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.554 2 INFO nova.compute.manager [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Took 1.26 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.555 2 DEBUG oslo.service.loopingcall [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.556 2 DEBUG nova.compute.manager [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.556 2 DEBUG nova.network.neutron [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:29:47 np0005465604 nova_compute[260603]: 2025-10-02 08:29:47.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:47 np0005465604 podman[318391]: 2025-10-02 08:29:47.795266697 +0000 UTC m=+0.099307478 container create 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:29:47 np0005465604 podman[318391]: 2025-10-02 08:29:47.759113153 +0000 UTC m=+0.063153954 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:29:47 np0005465604 systemd[1]: Started libpod-conmon-7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8.scope.
Oct  2 04:29:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:29:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aa27ff810503b5c16a82c7c0b62fcacee12457623b0ef10cc50f690d346bfd3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:47 np0005465604 podman[318391]: 2025-10-02 08:29:47.949623104 +0000 UTC m=+0.253663905 container init 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:29:47 np0005465604 podman[318391]: 2025-10-02 08:29:47.960638986 +0000 UTC m=+0.264679767 container start 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:29:47 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [NOTICE]   (318442) : New worker (318447) forked
Oct  2 04:29:47 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [NOTICE]   (318442) : Loading success.
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.108 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 044e4f76-db30-47b6-b277-8c3a13743b9c in datapath 9bd8f146-d090-40d8-8651-21c92934a6ff unbound from our chassis#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.110 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bd8f146-d090-40d8-8651-21c92934a6ff#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.126 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[937d314a-ab7e-4f27-a259-b75ebc6dc340]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.131 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bd8f146-d1 in ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.134 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bd8f146-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.134 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0abc4e6-5354-4c8b-919c-ea6c4a3f4285]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.137 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22cf968d-d4a0-48ac-bea8-006b7ade18de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.149 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[c42892ee-32d5-4f7f-bd28-02fe36d942e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.174 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ad20b7-d96b-4bf4-af75-04ac907597e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.220 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bd70fa6a-8162-40da-8e96-6781d37f9ee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 NetworkManager[45129]: <info>  [1759393788.2309] manager: (tap9bd8f146-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.231 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fa5d2b-4952-424f-bc81-da31c8597f28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.253 2 DEBUG nova.network.neutron [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.275 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9902383e-75af-441b-8030-48faf64a9fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.278 2 INFO nova.compute.manager [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Took 0.72 seconds to deallocate network for instance.#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.282 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[be3d8a3d-c18f-4e26-933c-49b5e7ff3463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.303 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393788.301911, d634251d-b484-4af7-b102-fe8015603660 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.303 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] VM Started (Lifecycle Event)#033[00m
Oct  2 04:29:48 np0005465604 NetworkManager[45129]: <info>  [1759393788.3185] device (tap9bd8f146-d0): carrier: link connected
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.323 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d67dc4-f282-4558-b596-6524ccfecf65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.347 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.348 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.358 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.362 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393788.3021214, d634251d-b484-4af7-b102-fe8015603660 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.363 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.362 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc8d37b-4982-4f51-a400-28018773fea4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bd8f146-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:55:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466692, 'reachable_time': 25414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318470, 'error': None, 'target': 'ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.379 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b429a62e-5e4a-476c-a1c3-d23fad46ba7a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:5540'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 466692, 'tstamp': 466692}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318471, 'error': None, 'target': 'ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.392 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.396 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.404 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3f8c56-4129-4eab-b9b1-835297a5dde1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bd8f146-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:55:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466692, 'reachable_time': 25414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318472, 'error': None, 'target': 'ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.416 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[026b1444-a0f9-4a2f-95bf-fb3858276952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.510 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e3bca4d1-be01-4ba8-998a-2d20fb91b470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.511 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bd8f146-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.511 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.512 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bd8f146-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:48 np0005465604 NetworkManager[45129]: <info>  [1759393788.5145] manager: (tap9bd8f146-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Oct  2 04:29:48 np0005465604 kernel: tap9bd8f146-d0: entered promiscuous mode
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.516 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bd8f146-d0, col_values=(('external_ids', {'iface-id': '21940774-b256-44e1-b604-ddb5b66aa4a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:48Z|00471|binding|INFO|Releasing lport 21940774-b256-44e1-b604-ddb5b66aa4a6 from this chassis (sb_readonly=0)
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.535 2 DEBUG oslo_concurrency.processutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.547 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bd8f146-d090-40d8-8651-21c92934a6ff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bd8f146-d090-40d8-8651-21c92934a6ff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.549 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62b2fba1-417e-4beb-8f2c-d5215a9c556f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.550 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-9bd8f146-d090-40d8-8651-21c92934a6ff
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/9bd8f146-d090-40d8-8651-21c92934a6ff.pid.haproxy
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 9bd8f146-d090-40d8-8651-21c92934a6ff
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:29:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:48.550 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff', 'env', 'PROCESS_TAG=haproxy-9bd8f146-d090-40d8-8651-21c92934a6ff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bd8f146-d090-40d8-8651-21c92934a6ff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:29:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:48Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:7e:1d 10.100.0.9
Oct  2 04:29:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:48Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:7e:1d 10.100.0.9
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.616 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393788.6118574, 923e00cc-7494-46f3-93e2-3c223705aff1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.617 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] VM Started (Lifecycle Event)#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.639 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.648 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393788.611991, 923e00cc-7494-46f3-93e2-3c223705aff1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.649 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.667 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.687 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:29:48 np0005465604 nova_compute[260603]: 2025-10-02 08:29:48.712 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:29:48 np0005465604 podman[318521]: 2025-10-02 08:29:48.938283558 +0000 UTC m=+0.044906687 container create 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:29:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 388 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 5.6 MiB/s wr, 553 op/s
Oct  2 04:29:48 np0005465604 systemd[1]: Started libpod-conmon-7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21.scope.
Oct  2 04:29:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:29:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db0e7b9516bd1a447408cbffb5be27ab3618c8243a87ad12b0787ded02cc8cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:29:49 np0005465604 podman[318521]: 2025-10-02 08:29:48.917787231 +0000 UTC m=+0.024410370 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:29:49 np0005465604 podman[318521]: 2025-10-02 08:29:49.02234447 +0000 UTC m=+0.128967599 container init 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 04:29:49 np0005465604 podman[318521]: 2025-10-02 08:29:49.031968599 +0000 UTC m=+0.138591728 container start 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:29:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1086178548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.068 2 DEBUG oslo_concurrency.processutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:49 np0005465604 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [NOTICE]   (318539) : New worker (318543) forked
Oct  2 04:29:49 np0005465604 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [NOTICE]   (318539) : Loading success.
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.084 2 DEBUG nova.compute.provider_tree [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.101 2 DEBUG nova.scheduler.client.report [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.117 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.166 2 INFO nova.scheduler.client.report [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Deleted allocations for instance 73e8c7a5-4621-4f07-824a-b81ea314a672#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.223 2 DEBUG oslo_concurrency.lockutils [None req-9e36b654-2b47-465b-b9ce-842ee0b47052 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.331 2 DEBUG nova.compute.manager [req-60dfc5e9-b385-45c2-a24e-46ae03f21b98 req-aca5cc43-6fc8-4bdf-8da2-b7bb1d7e9670 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-deleted-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.499 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.499 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.499 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.499 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "73e8c7a5-4621-4f07-824a-b81ea314a672-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.500 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] No waiting events found dispatching network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.500 2 WARNING nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Received unexpected event network-vif-plugged-b4dfefd6-6971-4450-ae0e-50f4bf7eaafa for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.500 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.500 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.501 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.501 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.502 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Processing event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.502 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.503 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.503 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.504 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.505 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] No waiting events found dispatching network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.505 2 WARNING nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received unexpected event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 for instance with vm_state building and task_state spawning.
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.505 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.505 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.506 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.507 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.508 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Processing event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.509 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.509 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.509 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.509 2 DEBUG oslo_concurrency.lockutils [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.509 2 DEBUG nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] No waiting events found dispatching network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.510 2 WARNING nova.compute.manager [req-dccd3926-57ca-4aee-bd5f-272509088e19 req-40acdc44-037b-4d2f-abb1-e2e6d068f420 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received unexpected event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c for instance with vm_state building and task_state spawning.
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.510 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.511 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.515 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393789.515318, d634251d-b484-4af7-b102-fe8015603660 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.515 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] VM Resumed (Lifecycle Event)
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.517 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.523 2 INFO nova.virt.libvirt.driver [-] [instance: d634251d-b484-4af7-b102-fe8015603660] Instance spawned successfully.
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.524 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.527 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.532 2 INFO nova.virt.libvirt.driver [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Instance spawned successfully.
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.533 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.542 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.554 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.562 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.563 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.564 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.564 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.567 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.567 2 DEBUG nova.virt.libvirt.driver [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.573 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.577 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.578 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.578 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.579 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.579 2 DEBUG nova.virt.libvirt.driver [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.598 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.598 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393789.5168705, 923e00cc-7494-46f3-93e2-3c223705aff1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.599 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] VM Resumed (Lifecycle Event)
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.647 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.651 2 INFO nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Took 11.37 seconds to spawn the instance on the hypervisor.
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.652 2 DEBUG nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.652 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.678 2 INFO nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Took 13.61 seconds to spawn the instance on the hypervisor.
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.678 2 DEBUG nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.685 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.792 2 INFO nova.compute.manager [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Took 12.62 seconds to build instance.
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.796 2 INFO nova.compute.manager [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Took 14.84 seconds to build instance.
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.812 2 DEBUG oslo_concurrency.lockutils [None req-057f1986-af01-482c-bf5a-113d833edb37 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:49 np0005465604 nova_compute[260603]: 2025-10-02 08:29:49.813 2 DEBUG oslo_concurrency.lockutils [None req-a144a71e-5ec6-4e99-b397-df3f6ed975af e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:49 np0005465604 podman[318553]: 2025-10-02 08:29:49.991058914 +0000 UTC m=+0.059631034 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:29:50 np0005465604 nova_compute[260603]: 2025-10-02 08:29:50.085 2 DEBUG oslo_concurrency.lockutils [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:50 np0005465604 nova_compute[260603]: 2025-10-02 08:29:50.086 2 DEBUG oslo_concurrency.lockutils [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:50 np0005465604 nova_compute[260603]: 2025-10-02 08:29:50.086 2 DEBUG nova.compute.manager [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:29:50 np0005465604 nova_compute[260603]: 2025-10-02 08:29:50.090 2 DEBUG nova.compute.manager [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct  2 04:29:50 np0005465604 nova_compute[260603]: 2025-10-02 08:29:50.090 2 DEBUG nova.objects.instance [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'flavor' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:29:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:50Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:eb:54 10.100.0.8
Oct  2 04:29:50 np0005465604 nova_compute[260603]: 2025-10-02 08:29:50.119 2 DEBUG nova.virt.libvirt.driver [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  2 04:29:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:50Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:eb:54 10.100.0.8
Oct  2 04:29:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 388 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 3.8 MiB/s wr, 440 op/s
Oct  2 04:29:51 np0005465604 nova_compute[260603]: 2025-10-02 08:29:51.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.029 2 DEBUG nova.objects.instance [None req-fe056b8f-a117-43cf-9e05-c65576ab3523 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'pci_devices' on Instance uuid d634251d-b484-4af7-b102-fe8015603660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.049 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393792.0496888, d634251d-b484-4af7-b102-fe8015603660 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.050 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] VM Paused (Lifecycle Event)
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.072 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.095 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:29:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.122 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] During sync_power_state the instance has a pending task (suspending). Skip.
Oct  2 04:29:52 np0005465604 kernel: tapf79edf8d-90 (unregistering): left promiscuous mode
Oct  2 04:29:52 np0005465604 NetworkManager[45129]: <info>  [1759393792.4103] device (tapf79edf8d-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:29:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:52Z|00472|binding|INFO|Releasing lport f79edf8d-90b8-47b7-b366-244c63439a64 from this chassis (sb_readonly=0)
Oct  2 04:29:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:52Z|00473|binding|INFO|Setting lport f79edf8d-90b8-47b7-b366-244c63439a64 down in Southbound
Oct  2 04:29:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:52Z|00474|binding|INFO|Removing iface tapf79edf8d-90 ovn-installed in OVS
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.486 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:48:7d 10.100.0.4'], port_security=['fa:16:3e:a8:48:7d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd634251d-b484-4af7-b102-fe8015603660', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f269abbe5769427dbf44c430d7529c04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b320e93c-1f71-4645-afad-b813a2d6e776', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f111dc74-9aea-42f7-b927-5ba41d56cb94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f79edf8d-90b8-47b7-b366-244c63439a64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.487 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f79edf8d-90b8-47b7-b366-244c63439a64 in datapath a72ac8c9-16ee-4ec0-b23d-2741fda000ca unbound from our chassis
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.490 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a72ac8c9-16ee-4ec0-b23d-2741fda000ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.492 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fca171eb-9d2a-4da7-bda5-c806008d110e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.493 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca namespace which is not needed anymore
Oct  2 04:29:52 np0005465604 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Oct  2 04:29:52 np0005465604 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000003a.scope: Consumed 3.489s CPU time.
Oct  2 04:29:52 np0005465604 systemd-machined[214636]: Machine qemu-62-instance-0000003a terminated.
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.595 2 DEBUG nova.compute.manager [None req-fe056b8f-a117-43cf-9e05-c65576ab3523 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:29:52 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [NOTICE]   (318442) : haproxy version is 2.8.14-c23fe91
Oct  2 04:29:52 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [NOTICE]   (318442) : path to executable is /usr/sbin/haproxy
Oct  2 04:29:52 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [WARNING]  (318442) : Exiting Master process...
Oct  2 04:29:52 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [WARNING]  (318442) : Exiting Master process...
Oct  2 04:29:52 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [ALERT]    (318442) : Current worker (318447) exited with code 143 (Terminated)
Oct  2 04:29:52 np0005465604 neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca[318433]: [WARNING]  (318442) : All workers exited. Exiting... (0)
Oct  2 04:29:52 np0005465604 systemd[1]: libpod-7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8.scope: Deactivated successfully.
Oct  2 04:29:52 np0005465604 podman[318596]: 2025-10-02 08:29:52.643859924 +0000 UTC m=+0.061474381 container died 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:29:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8-userdata-shm.mount: Deactivated successfully.
Oct  2 04:29:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2aa27ff810503b5c16a82c7c0b62fcacee12457623b0ef10cc50f690d346bfd3-merged.mount: Deactivated successfully.
Oct  2 04:29:52 np0005465604 podman[318596]: 2025-10-02 08:29:52.695128147 +0000 UTC m=+0.112742624 container cleanup 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:29:52 np0005465604 systemd[1]: libpod-conmon-7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8.scope: Deactivated successfully.
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.720 2 DEBUG nova.compute.manager [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-vif-unplugged-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.723 2 DEBUG oslo_concurrency.lockutils [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.723 2 DEBUG oslo_concurrency.lockutils [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.724 2 DEBUG oslo_concurrency.lockutils [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.725 2 DEBUG nova.compute.manager [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] No waiting events found dispatching network-vif-unplugged-f79edf8d-90b8-47b7-b366-244c63439a64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.725 2 WARNING nova.compute.manager [req-6c97c494-2049-4bf4-8189-4a89548c6c19 req-e491bb10-0980-4f39-ae83-7d8212eabf2b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received unexpected event network-vif-unplugged-f79edf8d-90b8-47b7-b366-244c63439a64 for instance with vm_state suspended and task_state None.
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:52 np0005465604 podman[318633]: 2025-10-02 08:29:52.77564751 +0000 UTC m=+0.042907475 container remove 7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.781 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd60fcf-0e96-45e4-bcc0-7105e89a237c]: (4, ('Thu Oct  2 08:29:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8)\n7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8\nThu Oct  2 08:29:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca (7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8)\n7b2da4a8c2c3ad04be6ea3425e27260d73531333bb3fc671ed992a8697434cf8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.787 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ba72bc-b481-489f-8a0c-a1d2ddfa5b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.791 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa72ac8c9-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:52 np0005465604 kernel: tapa72ac8c9-10: left promiscuous mode
Oct  2 04:29:52 np0005465604 nova_compute[260603]: 2025-10-02 08:29:52.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.825 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0a447018-dbf1-4ef8-8046-a6d4d45eff5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.845 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b76c75da-0015-4c92-8f52-166bfe6a7cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.846 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef2cec5-dc99-4599-94cd-8a6370e59125]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.865 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd9329c-2236-4318-89ca-9f0057530d07]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466547, 'reachable_time': 41808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318649, 'error': None, 'target': 'ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:52 np0005465604 systemd[1]: run-netns-ovnmeta\x2da72ac8c9\x2d16ee\x2d4ec0\x2db23d\x2d2741fda000ca.mount: Deactivated successfully.
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.868 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a72ac8c9-16ee-4ec0-b23d-2741fda000ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 04:29:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:52.868 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb59dbf-3cdb-443d-8f74-722de555ebce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:29:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 424 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.2 MiB/s wr, 586 op/s
Oct  2 04:29:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:53Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:b2:ea 10.100.0.12
Oct  2 04:29:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:53Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:b2:ea 10.100.0.12
Oct  2 04:29:54 np0005465604 podman[318653]: 2025-10-02 08:29:54.012673622 +0000 UTC m=+0.072220995 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:29:54 np0005465604 NetworkManager[45129]: <info>  [1759393794.2959] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:54 np0005465604 NetworkManager[45129]: <info>  [1759393794.2967] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00475|binding|INFO|Releasing lport 21940774-b256-44e1-b604-ddb5b66aa4a6 from this chassis (sb_readonly=0)
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00476|binding|INFO|Releasing lport 39e267b1-3dd0-4688-b6e5-f1bccf651722 from this chassis (sb_readonly=0)
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00477|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00478|binding|INFO|Releasing lport 21940774-b256-44e1-b604-ddb5b66aa4a6 from this chassis (sb_readonly=0)
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00479|binding|INFO|Releasing lport 39e267b1-3dd0-4688-b6e5-f1bccf651722 from this chassis (sb_readonly=0)
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00480|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.377 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.378 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.379 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.380 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.380 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.383 2 INFO nova.compute.manager [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Terminating instance
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.386 2 DEBUG nova.compute.manager [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.394 2 INFO nova.virt.libvirt.driver [-] [instance: d634251d-b484-4af7-b102-fe8015603660] Instance destroyed successfully.
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.396 2 DEBUG nova.objects.instance [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lazy-loading 'resources' on Instance uuid d634251d-b484-4af7-b102-fe8015603660 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.414 2 DEBUG nova.virt.libvirt.vif [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-916266897',display_name='tempest-DeleteServersTestJSON-server-916266897',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-916266897',id=58,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f269abbe5769427dbf44c430d7529c04',ramdisk_id='',reservation_id='r-aeqxqe0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-812177785',owner_user_name='tempest-DeleteServersTestJSON-812177785-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:52Z,user_data=None,user_id='1ac6f72f7366459a86c086737b89ea69',uuid=d634251d-b484-4af7-b102-fe8015603660,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.416 2 DEBUG nova.network.os_vif_util [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converting VIF {"id": "f79edf8d-90b8-47b7-b366-244c63439a64", "address": "fa:16:3e:a8:48:7d", "network": {"id": "a72ac8c9-16ee-4ec0-b23d-2741fda000ca", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1814217064-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f269abbe5769427dbf44c430d7529c04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf79edf8d-90", "ovs_interfaceid": "f79edf8d-90b8-47b7-b366-244c63439a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.418 2 DEBUG nova.network.os_vif_util [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.419 2 DEBUG os_vif [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf79edf8d-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.435 2 INFO os_vif [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:48:7d,bridge_name='br-int',has_traffic_filtering=True,id=f79edf8d-90b8-47b7-b366-244c63439a64,network=Network(a72ac8c9-16ee-4ec0-b23d-2741fda000ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf79edf8d-90')#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.484 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.485 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.485 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.486 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.486 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.488 2 INFO nova.compute.manager [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Terminating instance#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.489 2 DEBUG nova.compute.manager [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:29:54 np0005465604 kernel: tap5bbef33f-36 (unregistering): left promiscuous mode
Oct  2 04:29:54 np0005465604 NetworkManager[45129]: <info>  [1759393794.5457] device (tap5bbef33f-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00481|binding|INFO|Releasing lport 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c from this chassis (sb_readonly=0)
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00482|binding|INFO|Setting lport 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c down in Southbound
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00483|binding|INFO|Removing iface tap5bbef33f-36 ovn-installed in OVS
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.561 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:86:2c 10.100.0.5'], port_security=['fa:16:3e:95:86:2c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f7005e7b-8982-4d23-b12a-4b67c90a6c89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.563 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c unbound from our chassis#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.567 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e6563dd-5ecf-4759-9df8-5b501617e75c#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.584 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.585 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.585 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.586 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.586 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.587 2 INFO nova.compute.manager [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Terminating instance#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.588 2 DEBUG nova.compute.manager [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.589 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a4ff29-015d-4b65-b74a-891c04c430ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:54 np0005465604 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000037.scope: Deactivated successfully.
Oct  2 04:29:54 np0005465604 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000037.scope: Consumed 11.669s CPU time.
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.616 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[92e30d0d-95d8-43a6-b3af-ec76e40a3749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:54 np0005465604 systemd-machined[214636]: Machine qemu-60-instance-00000037 terminated.
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.619 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf16824-c8b2-40e1-ba74-5efd0ae2d846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:54 np0005465604 kernel: tap12abaaed-2f (unregistering): left promiscuous mode
Oct  2 04:29:54 np0005465604 NetworkManager[45129]: <info>  [1759393794.6281] device (tap12abaaed-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.651 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ad41fd69-5c70-48c2-987d-0d97ea140e8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.667 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e0f39e-13b9-4f09-9a62-c7ad4e10cef6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e6563dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:8b:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465698, 'reachable_time': 20297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318706, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.682 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3402e64f-ed83-46f4-967b-f63dfcad78ab]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465714, 'tstamp': 465714}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318707, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9e6563dd-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465718, 'tstamp': 465718}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318707, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.689 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00484|binding|INFO|Releasing lport 12abaaed-2f93-40bd-bddd-8143c3709480 from this chassis (sb_readonly=0)
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00485|binding|INFO|Setting lport 12abaaed-2f93-40bd-bddd-8143c3709480 down in Southbound
Oct  2 04:29:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:29:54Z|00486|binding|INFO|Removing iface tap12abaaed-2f ovn-installed in OVS
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.698 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:bc:66 10.100.0.4'], port_security=['fa:16:3e:24:bc:66 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f56dc5d2-b1f8-42ef-882c-62bcbd600954', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6f9056bf44b4bd8859c73e3cb645683', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4b10ecc-e572-4092-8dce-9b7247cd181c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=92074f81-fcf1-4b9d-a09c-c34c3c0535f5, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=12abaaed-2f93-40bd-bddd-8143c3709480) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct  2 04:29:54 np0005465604 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000038.scope: Consumed 11.931s CPU time.
Oct  2 04:29:54 np0005465604 systemd-machined[214636]: Machine qemu-61-instance-00000038 terminated.
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.718 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e6563dd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.718 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.718 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e6563dd-50, col_values=(('external_ids', {'iface-id': '39e267b1-3dd0-4688-b6e5-f1bccf651722'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.719 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.719 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 12abaaed-2f93-40bd-bddd-8143c3709480 in datapath 9e6563dd-5ecf-4759-9df8-5b501617e75c unbound from our chassis#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.720 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9e6563dd-5ecf-4759-9df8-5b501617e75c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.721 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[751843c6-0d79-4449-b069-3199f51f769f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:54.721 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c namespace which is not needed anymore#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.734 2 INFO nova.virt.libvirt.driver [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Instance destroyed successfully.#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.734 2 DEBUG nova.objects.instance [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'resources' on Instance uuid f7005e7b-8982-4d23-b12a-4b67c90a6c89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.754 2 DEBUG nova.virt.libvirt.vif [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-2',id=55,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T08:29:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mode
l='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:42Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f7005e7b-8982-4d23-b12a-4b67c90a6c89,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.755 2 DEBUG nova.network.os_vif_util [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "address": "fa:16:3e:95:86:2c", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bbef33f-36", "ovs_interfaceid": "5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.755 2 DEBUG nova.network.os_vif_util [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.756 2 DEBUG os_vif [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.758 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bbef33f-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.766 2 INFO os_vif [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:86:2c,bridge_name='br-int',has_traffic_filtering=True,id=5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bbef33f-36')#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.831 2 INFO nova.virt.libvirt.driver [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Instance destroyed successfully.#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.831 2 DEBUG nova.objects.instance [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lazy-loading 'resources' on Instance uuid f56dc5d2-b1f8-42ef-882c-62bcbd600954 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.848 2 DEBUG nova.virt.libvirt.vif [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1504413762',display_name='tempest-ListServersNegativeTestJSON-server-1504413762-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1504413762-3',id=56,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-10-02T08:29:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6f9056bf44b4bd8859c73e3cb645683',ramdisk_id='',reservation_id='r-5qqg2590',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mode
l='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-21742049',owner_user_name='tempest-ListServersNegativeTestJSON-21742049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:43Z,user_data=None,user_id='1057882eff8f490d837773415bf65a8a',uuid=f56dc5d2-b1f8-42ef-882c-62bcbd600954,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.849 2 DEBUG nova.network.os_vif_util [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converting VIF {"id": "12abaaed-2f93-40bd-bddd-8143c3709480", "address": "fa:16:3e:24:bc:66", "network": {"id": "9e6563dd-5ecf-4759-9df8-5b501617e75c", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-346011279-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6f9056bf44b4bd8859c73e3cb645683", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12abaaed-2f", "ovs_interfaceid": "12abaaed-2f93-40bd-bddd-8143c3709480", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.849 2 DEBUG nova.network.os_vif_util [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.850 2 DEBUG os_vif [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.851 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12abaaed-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.857 2 INFO os_vif [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:bc:66,bridge_name='br-int',has_traffic_filtering=True,id=12abaaed-2f93-40bd-bddd-8143c3709480,network=Network(9e6563dd-5ecf-4759-9df8-5b501617e75c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12abaaed-2f')#033[00m
Oct  2 04:29:54 np0005465604 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [NOTICE]   (317653) : haproxy version is 2.8.14-c23fe91
Oct  2 04:29:54 np0005465604 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [NOTICE]   (317653) : path to executable is /usr/sbin/haproxy
Oct  2 04:29:54 np0005465604 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [WARNING]  (317653) : Exiting Master process...
Oct  2 04:29:54 np0005465604 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [WARNING]  (317653) : Exiting Master process...
Oct  2 04:29:54 np0005465604 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [ALERT]    (317653) : Current worker (317655) exited with code 143 (Terminated)
Oct  2 04:29:54 np0005465604 neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c[317636]: [WARNING]  (317653) : All workers exited. Exiting... (0)
Oct  2 04:29:54 np0005465604 systemd[1]: libpod-2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce.scope: Deactivated successfully.
Oct  2 04:29:54 np0005465604 podman[318760]: 2025-10-02 08:29:54.883203045 +0000 UTC m=+0.061511743 container died 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.887 2 DEBUG nova.compute.manager [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-changed-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.888 2 DEBUG nova.compute.manager [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Refreshing instance network info cache due to event network-changed-044e4f76-db30-47b6-b277-8c3a13743b9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.888 2 DEBUG oslo_concurrency.lockutils [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.888 2 DEBUG oslo_concurrency.lockutils [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.888 2 DEBUG nova.network.neutron [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Refreshing network info cache for port 044e4f76-db30-47b6-b277-8c3a13743b9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:29:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce-userdata-shm.mount: Deactivated successfully.
Oct  2 04:29:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4b410b66f213cbfe1b811aab1f58708a1f7a49c73b1959803809bb9cdb00d8fa-merged.mount: Deactivated successfully.
Oct  2 04:29:54 np0005465604 podman[318760]: 2025-10-02 08:29:54.930304699 +0000 UTC m=+0.108613387 container cleanup 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.937 2 INFO nova.virt.libvirt.driver [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Deleting instance files /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660_del#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.937 2 INFO nova.virt.libvirt.driver [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Deletion of /var/lib/nova/instances/d634251d-b484-4af7-b102-fe8015603660_del complete#033[00m
Oct  2 04:29:54 np0005465604 systemd[1]: libpod-conmon-2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce.scope: Deactivated successfully.
Oct  2 04:29:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 454 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 7.7 MiB/s wr, 533 op/s
Oct  2 04:29:54 np0005465604 podman[318816]: 2025-10-02 08:29:54.996807575 +0000 UTC m=+0.041111158 container remove 2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.997 2 INFO nova.compute.manager [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:29:54 np0005465604 nova_compute[260603]: 2025-10-02 08:29:54.998 2 DEBUG oslo.service.loopingcall [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.001 2 DEBUG nova.compute.manager [-] [instance: d634251d-b484-4af7-b102-fe8015603660] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.004 2 DEBUG nova.network.neutron [-] [instance: d634251d-b484-4af7-b102-fe8015603660] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:29:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.002 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[71dda71e-0a5a-4c6c-8594-cec8c3844a84]: (4, ('Thu Oct  2 08:29:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c (2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce)\n2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce\nThu Oct  2 08:29:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c (2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce)\n2b3b496263adc40db68bf583a98236067133231c779f99a2a3a24922c91e26ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.006 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b648ecb1-59e0-40cf-8fe8-535d470fb562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.007 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6563dd-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:55 np0005465604 kernel: tap9e6563dd-50: left promiscuous mode
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:29:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.030 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2049ee3e-9d39-4620-84b9-9db427db117e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.047 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f74fe7f6-6685-4828-b927-c496fadd8fde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.053 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[60d01837-5055-429a-adac-9c80f93fdcee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.071 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7030ff61-70c6-4ab4-a9ea-5e1b3a2d91f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465690, 'reachable_time': 31338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318830, 'error': None, 'target': 'ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.077 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9e6563dd-5ecf-4759-9df8-5b501617e75c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:29:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.077 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8379a2-3ddb-4dd7-9dbb-26229db6f13c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:29:55 np0005465604 systemd[1]: run-netns-ovnmeta\x2d9e6563dd\x2d5ecf\x2d4759\x2d9df8\x2d5b501617e75c.mount: Deactivated successfully.
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.086 2 DEBUG nova.compute.manager [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.086 2 DEBUG oslo_concurrency.lockutils [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d634251d-b484-4af7-b102-fe8015603660-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.086 2 DEBUG oslo_concurrency.lockutils [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.086 2 DEBUG oslo_concurrency.lockutils [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.086 2 DEBUG nova.compute.manager [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] No waiting events found dispatching network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.087 2 WARNING nova.compute.manager [req-41f7da62-ba46-46d3-a64c-3da21ce37a46 req-4be9d7e2-95ef-4c2d-b5ae-0c9f27875fd5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received unexpected event network-vif-plugged-f79edf8d-90b8-47b7-b366-244c63439a64 for instance with vm_state suspended and task_state deleting.
Oct  2 04:29:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.105 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:29:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:55.106 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.184 2 INFO nova.virt.libvirt.driver [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Deleting instance files /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89_del
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.185 2 INFO nova.virt.libvirt.driver [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Deletion of /var/lib/nova/instances/f7005e7b-8982-4d23-b12a-4b67c90a6c89_del complete
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.234 2 INFO nova.compute.manager [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Took 0.74 seconds to destroy the instance on the hypervisor.
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.235 2 DEBUG oslo.service.loopingcall [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.235 2 DEBUG nova.compute.manager [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.235 2 DEBUG nova.network.neutron [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.285 2 INFO nova.virt.libvirt.driver [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Deleting instance files /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954_del
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.286 2 INFO nova.virt.libvirt.driver [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Deletion of /var/lib/nova/instances/f56dc5d2-b1f8-42ef-882c-62bcbd600954_del complete
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.335 2 INFO nova.compute.manager [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Took 0.75 seconds to destroy the instance on the hypervisor.
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.335 2 DEBUG oslo.service.loopingcall [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.338 2 DEBUG nova.compute.manager [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.339 2 DEBUG nova.network.neutron [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.901 2 DEBUG nova.network.neutron [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.919 2 INFO nova.compute.manager [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Took 0.58 seconds to deallocate network for instance.
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.979 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.979 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:55 np0005465604 nova_compute[260603]: 2025-10-02 08:29:55.980 2 DEBUG nova.network.neutron [-] [instance: d634251d-b484-4af7-b102-fe8015603660] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.000 2 INFO nova.compute.manager [-] [instance: d634251d-b484-4af7-b102-fe8015603660] Took 1.00 seconds to deallocate network for instance.
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.074 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.076 2 DEBUG nova.network.neutron [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.091 2 INFO nova.compute.manager [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Took 0.86 seconds to deallocate network for instance.
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.131 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.177 2 DEBUG oslo_concurrency.processutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:29:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419304229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.653 2 DEBUG oslo_concurrency.processutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.672 2 DEBUG nova.compute.provider_tree [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.704 2 DEBUG nova.scheduler.client.report [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.746 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.750 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.781 2 INFO nova.scheduler.client.report [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Deleted allocations for instance f56dc5d2-b1f8-42ef-882c-62bcbd600954
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.872 2 DEBUG oslo_concurrency.lockutils [None req-d331b12a-4ca7-4467-94c4-37ed51f85f17 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.942 2 DEBUG oslo_concurrency.processutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:29:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 454 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 6.4 MiB/s wr, 458 op/s
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.992 2 DEBUG nova.network.neutron [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updated VIF entry in instance network info cache for port 044e4f76-db30-47b6-b277-8c3a13743b9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 04:29:56 np0005465604 nova_compute[260603]: 2025-10-02 08:29:56.993 2 DEBUG nova.network.neutron [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updating instance_info_cache with network_info: [{"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.032 2 DEBUG oslo_concurrency.lockutils [req-ca4885fd-5a71-44aa-a226-9deeb8fe919f req-0c277ee5-160b-4278-9630-789f4414eac6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-923e00cc-7494-46f3-93e2-3c223705aff1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.098 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-vif-unplugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.099 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.099 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.100 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.100 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] No waiting events found dispatching network-vif-unplugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.100 2 WARNING nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received unexpected event network-vif-unplugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c for instance with vm_state deleted and task_state None.
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.101 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.101 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.101 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.102 2 DEBUG oslo_concurrency.lockutils [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.102 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] No waiting events found dispatching network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.102 2 WARNING nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received unexpected event network-vif-plugged-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c for instance with vm_state deleted and task_state None.
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.103 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d634251d-b484-4af7-b102-fe8015603660] Received event network-vif-deleted-f79edf8d-90b8-47b7-b366-244c63439a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.103 2 DEBUG nova.compute.manager [req-5e07bf1d-22aa-4b64-82e6-851789e6ac6f req-e1e9decb-8fc4-479c-a552-be046a335325 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Received event network-vif-deleted-5bbef33f-36ca-42a3-bf09-dd9cadbb3d1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.170 2 DEBUG nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-vif-unplugged-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.171 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.171 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.171 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.172 2 DEBUG nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] No waiting events found dispatching network-vif-unplugged-12abaaed-2f93-40bd-bddd-8143c3709480 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.172 2 WARNING nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received unexpected event network-vif-unplugged-12abaaed-2f93-40bd-bddd-8143c3709480 for instance with vm_state deleted and task_state None.
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.172 2 DEBUG nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.173 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.173 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.173 2 DEBUG oslo_concurrency.lockutils [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f56dc5d2-b1f8-42ef-882c-62bcbd600954-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.174 2 DEBUG nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] No waiting events found dispatching network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.174 2 WARNING nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received unexpected event network-vif-plugged-12abaaed-2f93-40bd-bddd-8143c3709480 for instance with vm_state deleted and task_state None.
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.174 2 DEBUG nova.compute.manager [req-507c4d88-9353-4a4d-99d1-3fb961660e2e req-50b59b2e-136e-4d15-8bb8-cac282eb1431 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Received event network-vif-deleted-12abaaed-2f93-40bd-bddd-8143c3709480 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:29:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3279043566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.421 2 DEBUG oslo_concurrency.processutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.429 2 DEBUG nova.compute.provider_tree [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.459 2 DEBUG nova.scheduler.client.report [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.495 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.498 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.527 2 INFO nova.scheduler.client.report [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Deleted allocations for instance d634251d-b484-4af7-b102-fe8015603660
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.630 2 DEBUG oslo_concurrency.lockutils [None req-f7c7990c-259b-45da-be49-3785e3498fb5 1ac6f72f7366459a86c086737b89ea69 f269abbe5769427dbf44c430d7529c04 - - default default] Lock "d634251d-b484-4af7-b102-fe8015603660" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.636 2 DEBUG oslo_concurrency.processutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:29:57 np0005465604 nova_compute[260603]: 2025-10-02 08:29:57.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:29:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:29:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589170335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:29:58 np0005465604 nova_compute[260603]: 2025-10-02 08:29:58.076 2 DEBUG oslo_concurrency.processutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:29:58 np0005465604 nova_compute[260603]: 2025-10-02 08:29:58.084 2 DEBUG nova.compute.provider_tree [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:29:58 np0005465604 nova_compute[260603]: 2025-10-02 08:29:58.104 2 DEBUG nova.scheduler.client.report [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:29:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:29:58.108 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:29:58 np0005465604 nova_compute[260603]: 2025-10-02 08:29:58.132 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:58 np0005465604 nova_compute[260603]: 2025-10-02 08:29:58.169 2 INFO nova.scheduler.client.report [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Deleted allocations for instance f7005e7b-8982-4d23-b12a-4b67c90a6c89#033[00m
Oct  2 04:29:58 np0005465604 nova_compute[260603]: 2025-10-02 08:29:58.240 2 DEBUG oslo_concurrency.lockutils [None req-360b0e7c-eb17-41d1-aa5e-9c1a99166366 1057882eff8f490d837773415bf65a8a f6f9056bf44b4bd8859c73e3cb645683 - - default default] Lock "f7005e7b-8982-4d23-b12a-4b67c90a6c89" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:29:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 326 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.5 MiB/s wr, 597 op/s
Oct  2 04:29:59 np0005465604 nova_compute[260603]: 2025-10-02 08:29:59.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:00 np0005465604 nova_compute[260603]: 2025-10-02 08:30:00.181 2 DEBUG nova.virt.libvirt.driver [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:30:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 326 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.5 MiB/s wr, 412 op/s
Oct  2 04:30:01 np0005465604 nova_compute[260603]: 2025-10-02 08:30:01.572 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393786.532904, 73e8c7a5-4621-4f07-824a-b81ea314a672 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:01 np0005465604 nova_compute[260603]: 2025-10-02 08:30:01.572 2 INFO nova.compute.manager [-] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:30:01 np0005465604 nova_compute[260603]: 2025-10-02 08:30:01.603 2 DEBUG nova.compute.manager [None req-275a379f-29bc-4047-b8ee-485d0f2990e4 - - - - - -] [instance: 73e8c7a5-4621-4f07-824a-b81ea314a672] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:01Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:b0:76 10.100.0.11
Oct  2 04:30:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:01Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:b0:76 10.100.0.11
Oct  2 04:30:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:02Z|00487|binding|INFO|Releasing lport 21940774-b256-44e1-b604-ddb5b66aa4a6 from this chassis (sb_readonly=0)
Oct  2 04:30:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:02Z|00488|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:02 np0005465604 kernel: tapbf9cdb7f-4c (unregistering): left promiscuous mode
Oct  2 04:30:02 np0005465604 NetworkManager[45129]: <info>  [1759393802.4946] device (tapbf9cdb7f-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:30:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:02Z|00489|binding|INFO|Releasing lport bf9cdb7f-4cda-403b-b27e-12385e93db02 from this chassis (sb_readonly=0)
Oct  2 04:30:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:02Z|00490|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 down in Southbound
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:02Z|00491|binding|INFO|Removing iface tapbf9cdb7f-4c ovn-installed in OVS
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.517 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:b2:ea 10.100.0.12'], port_security=['fa:16:3e:64:b2:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9924ce7f-b701-4560-b2c5-67f673b45807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf9cdb7f-4cda-403b-b27e-12385e93db02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.520 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf9cdb7f-4cda-403b-b27e-12385e93db02 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.523 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e#033[00m
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.548 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3326ba05-7658-481c-9a3d-25db008f53df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:02 np0005465604 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct  2 04:30:02 np0005465604 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Consumed 13.000s CPU time.
Oct  2 04:30:02 np0005465604 systemd-machined[214636]: Machine qemu-56-instance-00000033 terminated.
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.583 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[63437c9d-6234-4f06-a830-2ac85233fcb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.588 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3d483035-fc0c-4f3c-9957-83c48dd97e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.628 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ee67fa10-0086-4e30-b42e-5426d8c32416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.646 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fee7dc-cafd-4d3f-b53d-6f5f8feeaba8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 9, 'rx_bytes': 1042, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 9, 'rx_bytes': 1042, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318909, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.664 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[120d11cc-63fa-4563-81a0-882e1a1dc00c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318910, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318910, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.665 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.671 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.672 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.672 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:02.673 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.889 2 DEBUG nova.compute.manager [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.889 2 DEBUG oslo_concurrency.lockutils [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.889 2 DEBUG oslo_concurrency.lockutils [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.890 2 DEBUG oslo_concurrency.lockutils [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.890 2 DEBUG nova.compute.manager [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:30:02 np0005465604 nova_compute[260603]: 2025-10-02 08:30:02.890 2 WARNING nova.compute.manager [req-94ad25f1-291b-4b33-a466-897ac87e9413 req-aa17d4af-9a86-4e39-918b-cec7ef114a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state active and task_state powering-off.#033[00m
Oct  2 04:30:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 342 MiB data, 657 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.8 MiB/s wr, 443 op/s
Oct  2 04:30:03 np0005465604 nova_compute[260603]: 2025-10-02 08:30:03.195 2 INFO nova.virt.libvirt.driver [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 04:30:03 np0005465604 nova_compute[260603]: 2025-10-02 08:30:03.204 2 INFO nova.virt.libvirt.driver [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance destroyed successfully.#033[00m
Oct  2 04:30:03 np0005465604 nova_compute[260603]: 2025-10-02 08:30:03.204 2 DEBUG nova.objects.instance [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:03 np0005465604 nova_compute[260603]: 2025-10-02 08:30:03.222 2 DEBUG nova.compute.manager [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:03 np0005465604 nova_compute[260603]: 2025-10-02 08:30:03.268 2 DEBUG oslo_concurrency.lockutils [None req-d0767568-1a49-4cc2-b8ac-1ce10a350d98 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:03Z|00492|binding|INFO|Releasing lport 21940774-b256-44e1-b604-ddb5b66aa4a6 from this chassis (sb_readonly=0)
Oct  2 04:30:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:03Z|00493|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct  2 04:30:03 np0005465604 nova_compute[260603]: 2025-10-02 08:30:03.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.463 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'flavor' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.489 2 DEBUG oslo_concurrency.lockutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.490 2 DEBUG oslo_concurrency.lockutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquired lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.490 2 DEBUG nova.network.neutron [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.491 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'info_cache' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.979 2 DEBUG nova.compute.manager [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.979 2 DEBUG oslo_concurrency.lockutils [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 353 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.2 MiB/s wr, 321 op/s
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.979 2 DEBUG oslo_concurrency.lockutils [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.980 2 DEBUG oslo_concurrency.lockutils [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.980 2 DEBUG nova.compute.manager [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:30:04 np0005465604 nova_compute[260603]: 2025-10-02 08:30:04.980 2 WARNING nova.compute.manager [req-9c985cab-cd11-45d4-8b6e-6c1c5b67062a req-f516e8f0-ce6b-4ebd-9103-dbda5287a931 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  2 04:30:06 np0005465604 nova_compute[260603]: 2025-10-02 08:30:06.917 2 DEBUG nova.network.neutron [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updating instance_info_cache with network_info: [{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:30:06 np0005465604 nova_compute[260603]: 2025-10-02 08:30:06.944 2 DEBUG oslo_concurrency.lockutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Releasing lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:30:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 353 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 4.2 MiB/s wr, 195 op/s
Oct  2 04:30:06 np0005465604 nova_compute[260603]: 2025-10-02 08:30:06.981 2 INFO nova.virt.libvirt.driver [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance destroyed successfully.#033[00m
Oct  2 04:30:06 np0005465604 nova_compute[260603]: 2025-10-02 08:30:06.981 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:06 np0005465604 nova_compute[260603]: 2025-10-02 08:30:06.997 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'resources' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.014 2 DEBUG nova.virt.libvirt.vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:30:03Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.014 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.015 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.015 2 DEBUG os_vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9cdb7f-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.050 2 INFO os_vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c')#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.063 2 DEBUG nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start _get_guest_xml network_info=[{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.070 2 WARNING nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.078 2 DEBUG nova.virt.libvirt.host [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.081 2 DEBUG nova.virt.libvirt.host [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.086 2 DEBUG nova.virt.libvirt.host [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.086 2 DEBUG nova.virt.libvirt.host [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.087 2 DEBUG nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.087 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.088 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.088 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.088 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.088 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.088 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.089 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.089 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.089 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.089 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.090 2 DEBUG nova.virt.hardware [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.090 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.106 2 DEBUG oslo_concurrency.processutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.597 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393792.59524, d634251d-b484-4af7-b102-fe8015603660 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.598 2 INFO nova.compute.manager [-] [instance: d634251d-b484-4af7-b102-fe8015603660] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.616 2 DEBUG nova.compute.manager [None req-3e2d5ced-789f-4425-a47a-06fc4239fca1 - - - - - -] [instance: d634251d-b484-4af7-b102-fe8015603660] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:30:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/235783216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.653 2 DEBUG oslo_concurrency.processutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.694 2 DEBUG oslo_concurrency.processutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:07 np0005465604 nova_compute[260603]: 2025-10-02 08:30:07.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:30:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2263374193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.219 2 DEBUG oslo_concurrency.processutils [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.221 2 DEBUG nova.virt.libvirt.vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:30:03Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.222 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.225 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.228 2 DEBUG nova.objects.instance [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.254 2 DEBUG nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  <uuid>9924ce7f-b701-4560-b2c5-67f673b45807</uuid>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  <name>instance-00000033</name>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-769762436</nova:name>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:30:07</nova:creationTime>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <nova:user uuid="c1d66932c11043b5b90140cd2dde53d2">tempest-ListServerFiltersTestJSON-1545892750-project-member</nova:user>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <nova:project uuid="e7c4373fe01a4a14bea07af6dba4d170">tempest-ListServerFiltersTestJSON-1545892750</nova:project>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <nova:port uuid="bf9cdb7f-4cda-403b-b27e-12385e93db02">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <entry name="serial">9924ce7f-b701-4560-b2c5-67f673b45807</entry>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <entry name="uuid">9924ce7f-b701-4560-b2c5-67f673b45807</entry>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9924ce7f-b701-4560-b2c5-67f673b45807_disk">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9924ce7f-b701-4560-b2c5-67f673b45807_disk.config">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:64:b2:ea"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <target dev="tapbf9cdb7f-4c"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807/console.log" append="off"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <input type="keyboard" bus="usb"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:30:08 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:30:08 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:30:08 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:30:08 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.255 2 DEBUG nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.256 2 DEBUG nova.virt.libvirt.driver [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.257 2 DEBUG nova.virt.libvirt.vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:30:03Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.258 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.259 2 DEBUG nova.network.os_vif_util [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.259 2 DEBUG os_vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.262 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.265 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf9cdb7f-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf9cdb7f-4c, col_values=(('external_ids', {'iface-id': 'bf9cdb7f-4cda-403b-b27e-12385e93db02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:b2:ea', 'vm-uuid': '9924ce7f-b701-4560-b2c5-67f673b45807'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:08 np0005465604 NetworkManager[45129]: <info>  [1759393808.3004] manager: (tapbf9cdb7f-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.307 2 INFO os_vif [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c')#033[00m
Oct  2 04:30:08 np0005465604 kernel: tapbf9cdb7f-4c: entered promiscuous mode
Oct  2 04:30:08 np0005465604 NetworkManager[45129]: <info>  [1759393808.3966] manager: (tapbf9cdb7f-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Oct  2 04:30:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:08Z|00494|binding|INFO|Claiming lport bf9cdb7f-4cda-403b-b27e-12385e93db02 for this chassis.
Oct  2 04:30:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:08Z|00495|binding|INFO|bf9cdb7f-4cda-403b-b27e-12385e93db02: Claiming fa:16:3e:64:b2:ea 10.100.0.12
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.407 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:b2:ea 10.100.0.12'], port_security=['fa:16:3e:64:b2:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9924ce7f-b701-4560-b2c5-67f673b45807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf9cdb7f-4cda-403b-b27e-12385e93db02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.409 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf9cdb7f-4cda-403b-b27e-12385e93db02 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e bound to our chassis#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.410 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:08Z|00496|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 ovn-installed in OVS
Oct  2 04:30:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:08Z|00497|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 up in Southbound
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:08 np0005465604 systemd-udevd[318999]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.441 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2398ff9d-15af-43e6-bf20-48aac04d5f1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:08 np0005465604 systemd-machined[214636]: New machine qemu-64-instance-00000033.
Oct  2 04:30:08 np0005465604 NetworkManager[45129]: <info>  [1759393808.4489] device (tapbf9cdb7f-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:30:08 np0005465604 NetworkManager[45129]: <info>  [1759393808.4498] device (tapbf9cdb7f-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:30:08 np0005465604 systemd[1]: Started Virtual Machine qemu-64-instance-00000033.
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.479 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb12ec3-5158-4106-b160-412ba9b741f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.482 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3129e11e-8951-4311-8c58-1ade1c256642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.519 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c26e93-e1f9-42d2-8129-fd3c16842cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.537 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[668f5f30-3d37-4b3f-b851-6b6d8ed1eaed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 11, 'rx_bytes': 1042, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 11, 'rx_bytes': 1042, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319012, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.555 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca9dcdd-baa7-4e60-b58e-8d5a5d519b57]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319013, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319013, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.557 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.560 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.561 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.561 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:08.562 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.738 2 DEBUG nova.compute.manager [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.738 2 DEBUG oslo_concurrency.lockutils [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.739 2 DEBUG oslo_concurrency.lockutils [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.739 2 DEBUG oslo_concurrency.lockutils [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.740 2 DEBUG nova.compute.manager [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:30:08 np0005465604 nova_compute[260603]: 2025-10-02 08:30:08.740 2 WARNING nova.compute.manager [req-e0bb8932-a187-4b2e-af61-1b40a46086eb req-926ba8e5-f3c3-47c1-9195-f4a32df11125 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  2 04:30:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 359 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 585 KiB/s rd, 4.3 MiB/s wr, 208 op/s
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.350 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 9924ce7f-b701-4560-b2c5-67f673b45807 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.351 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393809.3501732, 9924ce7f-b701-4560-b2c5-67f673b45807 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.352 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.354 2 DEBUG nova.compute.manager [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.359 2 INFO nova.virt.libvirt.driver [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance rebooted successfully.#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.360 2 DEBUG nova.compute.manager [None req-a9249652-c6ac-4fe4-91b7-d4649a0dbd86 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.403 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.407 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.450 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393809.3504496, 9924ce7f-b701-4560-b2c5-67f673b45807 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.451 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Started (Lifecycle Event)#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.474 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.478 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.729 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393794.7278774, f7005e7b-8982-4d23-b12a-4b67c90a6c89 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.729 2 INFO nova.compute.manager [-] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.760 2 DEBUG nova.compute.manager [None req-8f520af6-859b-472f-bb89-acdf64d0bd8b - - - - - -] [instance: f7005e7b-8982-4d23-b12a-4b67c90a6c89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.820 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393794.819385, f56dc5d2-b1f8-42ef-882c-62bcbd600954 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.820 2 INFO nova.compute.manager [-] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:30:09 np0005465604 nova_compute[260603]: 2025-10-02 08:30:09.846 2 DEBUG nova.compute.manager [None req-0a6d91b2-fad1-465f-b653-216d724df4e4 - - - - - -] [instance: f56dc5d2-b1f8-42ef-882c-62bcbd600954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:10 np0005465604 nova_compute[260603]: 2025-10-02 08:30:10.839 2 DEBUG nova.compute.manager [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:10 np0005465604 nova_compute[260603]: 2025-10-02 08:30:10.840 2 DEBUG oslo_concurrency.lockutils [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:10 np0005465604 nova_compute[260603]: 2025-10-02 08:30:10.844 2 DEBUG oslo_concurrency.lockutils [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:10 np0005465604 nova_compute[260603]: 2025-10-02 08:30:10.844 2 DEBUG oslo_concurrency.lockutils [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:10 np0005465604 nova_compute[260603]: 2025-10-02 08:30:10.844 2 DEBUG nova.compute.manager [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:30:10 np0005465604 nova_compute[260603]: 2025-10-02 08:30:10.845 2 WARNING nova.compute.manager [req-f8d69990-c5d8-424d-9fe1-e513bdc74162 req-fa2e7a92-fdf4-4684-ad17-d69383b1694a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:30:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 359 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 296 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct  2 04:30:11 np0005465604 nova_compute[260603]: 2025-10-02 08:30:11.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.234 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.235 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.236 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.237 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.237 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.239 2 INFO nova.compute.manager [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Terminating instance#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.241 2 DEBUG nova.compute.manager [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:30:12 np0005465604 kernel: tap044e4f76-db (unregistering): left promiscuous mode
Oct  2 04:30:12 np0005465604 NetworkManager[45129]: <info>  [1759393812.3014] device (tap044e4f76-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:30:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:12Z|00498|binding|INFO|Releasing lport 044e4f76-db30-47b6-b277-8c3a13743b9c from this chassis (sb_readonly=0)
Oct  2 04:30:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:12Z|00499|binding|INFO|Setting lport 044e4f76-db30-47b6-b277-8c3a13743b9c down in Southbound
Oct  2 04:30:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:12Z|00500|binding|INFO|Removing iface tap044e4f76-db ovn-installed in OVS
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.367 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:b0:76 10.100.0.11'], port_security=['fa:16:3e:46:b0:76 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '923e00cc-7494-46f3-93e2-3c223705aff1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bd8f146-d090-40d8-8651-21c92934a6ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f942883d5794a5c8e3cd2b5ef44a863', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3c9d772-e37a-48f1-89cb-39eaf88eda56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dec8975f-9e7e-451f-957f-04aee213c5b3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=044e4f76-db30-47b6-b277-8c3a13743b9c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.370 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 044e4f76-db30-47b6-b277-8c3a13743b9c in datapath 9bd8f146-d090-40d8-8651-21c92934a6ff unbound from our chassis#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.372 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bd8f146-d090-40d8-8651-21c92934a6ff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.374 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ab0738-8727-45c5-b366-f8b88a987041]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.375 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff namespace which is not needed anymore#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:12 np0005465604 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000039.scope: Deactivated successfully.
Oct  2 04:30:12 np0005465604 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000039.scope: Consumed 13.706s CPU time.
Oct  2 04:30:12 np0005465604 systemd-machined[214636]: Machine qemu-63-instance-00000039 terminated.
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.489 2 INFO nova.virt.libvirt.driver [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Instance destroyed successfully.#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.491 2 DEBUG nova.objects.instance [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lazy-loading 'resources' on Instance uuid 923e00cc-7494-46f3-93e2-3c223705aff1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.503 2 DEBUG nova.virt.libvirt.vif [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-568591211',display_name='tempest-ServersTestManualDisk-server-568591211',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-568591211',id=57,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARlZgY/0EkzEMepYNnm4b01nWIMecq2XG9MuajTSjQZi/SZ8DIEdLBXsx3DCy0ARgTpk4vDQEJ3TsL+ZNLPSjyILPCIRt4tiYIZmsXwTZOquFcYjN59rB2JnY5UB/nfNw==',key_name='tempest-keypair-510296598',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f942883d5794a5c8e3cd2b5ef44a863',ramdisk_id='',reservation_id='r-9syab2uj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-2010618382',owner_user_name='tempest-ServersTestManualDisk-2010618382-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8f0a6fb1d224a979db4b4a738bbf453',uuid=923e00cc-7494-46f3-93e2-3c223705aff1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.504 2 DEBUG nova.network.os_vif_util [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converting VIF {"id": "044e4f76-db30-47b6-b277-8c3a13743b9c", "address": "fa:16:3e:46:b0:76", "network": {"id": "9bd8f146-d090-40d8-8651-21c92934a6ff", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1127124626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f942883d5794a5c8e3cd2b5ef44a863", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap044e4f76-db", "ovs_interfaceid": "044e4f76-db30-47b6-b277-8c3a13743b9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.505 2 DEBUG nova.network.os_vif_util [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.506 2 DEBUG os_vif [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap044e4f76-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.522 2 INFO os_vif [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:b0:76,bridge_name='br-int',has_traffic_filtering=True,id=044e4f76-db30-47b6-b277-8c3a13743b9c,network=Network(9bd8f146-d090-40d8-8651-21c92934a6ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap044e4f76-db')#033[00m
Oct  2 04:30:12 np0005465604 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [NOTICE]   (318539) : haproxy version is 2.8.14-c23fe91
Oct  2 04:30:12 np0005465604 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [NOTICE]   (318539) : path to executable is /usr/sbin/haproxy
Oct  2 04:30:12 np0005465604 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [WARNING]  (318539) : Exiting Master process...
Oct  2 04:30:12 np0005465604 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [ALERT]    (318539) : Current worker (318543) exited with code 143 (Terminated)
Oct  2 04:30:12 np0005465604 neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff[318535]: [WARNING]  (318539) : All workers exited. Exiting... (0)
Oct  2 04:30:12 np0005465604 systemd[1]: libpod-7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21.scope: Deactivated successfully.
Oct  2 04:30:12 np0005465604 podman[319088]: 2025-10-02 08:30:12.584874196 +0000 UTC m=+0.070419876 container died 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:30:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21-userdata-shm.mount: Deactivated successfully.
Oct  2 04:30:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6db0e7b9516bd1a447408cbffb5be27ab3618c8243a87ad12b0787ded02cc8cb-merged.mount: Deactivated successfully.
Oct  2 04:30:12 np0005465604 podman[319088]: 2025-10-02 08:30:12.6330101 +0000 UTC m=+0.118555780 container cleanup 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 04:30:12 np0005465604 systemd[1]: libpod-conmon-7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21.scope: Deactivated successfully.
Oct  2 04:30:12 np0005465604 podman[319136]: 2025-10-02 08:30:12.718700479 +0000 UTC m=+0.055324988 container remove 7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.737 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30d1a35f-7911-4248-ad93-fe8b8f86d085]: (4, ('Thu Oct  2 08:30:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff (7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21)\n7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21\nThu Oct  2 08:30:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff (7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21)\n7fdced4b1c27c3caf7e98bb4b87b8a37bcc2f66811e4dd593a5f69a75baafe21\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.741 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22bb530c-a976-4aa7-93a3-c0efaadc1389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.742 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bd8f146-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:12 np0005465604 kernel: tap9bd8f146-d0: left promiscuous mode
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.782 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[919cdca2-7073-4cd2-8af7-fce4f6e3d769]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.815 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9415c8c2-60bc-414c-8e03-51fac194f181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.816 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30085d62-466f-4911-a2ea-1664f15f649e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.832 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6160b4ad-43e0-4c2c-b3d8-7759e6aa3243]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 466681, 'reachable_time': 19997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319152, 'error': None, 'target': 'ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:12 np0005465604 systemd[1]: run-netns-ovnmeta\x2d9bd8f146\x2dd090\x2d40d8\x2d8651\x2d21c92934a6ff.mount: Deactivated successfully.
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.836 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bd8f146-d090-40d8-8651-21c92934a6ff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:30:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:12.837 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4df70d9b-0d82-4877-8dae-7822f5704e12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.931 2 INFO nova.virt.libvirt.driver [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Deleting instance files /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1_del#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.932 2 INFO nova.virt.libvirt.driver [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Deletion of /var/lib/nova/instances/923e00cc-7494-46f3-93e2-3c223705aff1_del complete#033[00m
Oct  2 04:30:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 359 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 114 op/s
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.993 2 INFO nova.compute.manager [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.994 2 DEBUG oslo.service.loopingcall [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.994 2 DEBUG nova.compute.manager [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:30:12 np0005465604 nova_compute[260603]: 2025-10-02 08:30:12.994 2 DEBUG nova.network.neutron [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:30:13 np0005465604 nova_compute[260603]: 2025-10-02 08:30:13.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:13 np0005465604 nova_compute[260603]: 2025-10-02 08:30:13.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:30:13 np0005465604 nova_compute[260603]: 2025-10-02 08:30:13.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:30:14 np0005465604 nova_compute[260603]: 2025-10-02 08:30:14.531 2 DEBUG nova.network.neutron [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:30:14 np0005465604 nova_compute[260603]: 2025-10-02 08:30:14.549 2 INFO nova.compute.manager [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Took 1.56 seconds to deallocate network for instance.#033[00m
Oct  2 04:30:14 np0005465604 nova_compute[260603]: 2025-10-02 08:30:14.616 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:14 np0005465604 nova_compute[260603]: 2025-10-02 08:30:14.617 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:14 np0005465604 nova_compute[260603]: 2025-10-02 08:30:14.639 2 DEBUG nova.compute.manager [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:14 np0005465604 nova_compute[260603]: 2025-10-02 08:30:14.640 2 DEBUG oslo_concurrency.lockutils [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:14 np0005465604 nova_compute[260603]: 2025-10-02 08:30:14.641 2 DEBUG oslo_concurrency.lockutils [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:14 np0005465604 nova_compute[260603]: 2025-10-02 08:30:14.641 2 DEBUG oslo_concurrency.lockutils [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:14 np0005465604 nova_compute[260603]: 2025-10-02 08:30:14.642 2 DEBUG nova.compute.manager [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] No waiting events found dispatching network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:30:14 np0005465604 nova_compute[260603]: 2025-10-02 08:30:14.643 2 WARNING nova.compute.manager [req-abdf0e5a-2c3a-45bc-b711-0580f5ae6b3c req-c32b2372-2fc2-4d6c-8c4f-0a22f8ead708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received unexpected event network-vif-plugged-044e4f76-db30-47b6-b277-8c3a13743b9c for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:30:14 np0005465604 nova_compute[260603]: 2025-10-02 08:30:14.756 2 DEBUG oslo_concurrency.processutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 330 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 816 KiB/s wr, 109 op/s
Oct  2 04:30:15 np0005465604 podman[319172]: 2025-10-02 08:30:15.007931742 +0000 UTC m=+0.068513477 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 04:30:15 np0005465604 podman[319156]: 2025-10-02 08:30:15.043298339 +0000 UTC m=+0.107865388 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 04:30:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:30:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2641797910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:30:15 np0005465604 nova_compute[260603]: 2025-10-02 08:30:15.246 2 DEBUG oslo_concurrency.processutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:15 np0005465604 nova_compute[260603]: 2025-10-02 08:30:15.257 2 DEBUG nova.compute.provider_tree [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:30:15 np0005465604 nova_compute[260603]: 2025-10-02 08:30:15.282 2 DEBUG nova.scheduler.client.report [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:30:15 np0005465604 nova_compute[260603]: 2025-10-02 08:30:15.321 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:15 np0005465604 nova_compute[260603]: 2025-10-02 08:30:15.370 2 INFO nova.scheduler.client.report [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Deleted allocations for instance 923e00cc-7494-46f3-93e2-3c223705aff1#033[00m
Oct  2 04:30:15 np0005465604 nova_compute[260603]: 2025-10-02 08:30:15.470 2 DEBUG oslo_concurrency.lockutils [None req-5d56a43d-a9d0-4f5b-a359-4e5c5f5fc4db e8f0a6fb1d224a979db4b4a738bbf453 1f942883d5794a5c8e3cd2b5ef44a863 - - default default] Lock "923e00cc-7494-46f3-93e2-3c223705aff1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.443 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.444 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.445 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.445 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.446 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.448 2 INFO nova.compute.manager [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Terminating instance#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.450 2 DEBUG nova.compute.manager [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:30:16 np0005465604 kernel: tap257d115c-e1 (unregistering): left promiscuous mode
Oct  2 04:30:16 np0005465604 NetworkManager[45129]: <info>  [1759393816.5076] device (tap257d115c-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00501|binding|INFO|Releasing lport 257d115c-e196-4921-a9d3-942604825516 from this chassis (sb_readonly=0)
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00502|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 down in Southbound
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00503|binding|INFO|Removing iface tap257d115c-e1 ovn-installed in OVS
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.523 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.524 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.524 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.530 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:eb:54 10.100.0.8'], port_security=['fa:16:3e:5f:eb:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd15c7c6a-e6a1-4538-9db0-ee1aef10f38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=257d115c-e196-4921-a9d3-942604825516) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.532 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 257d115c-e196-4921-a9d3-942604825516 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.534 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.560 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.570 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fdadb1eb-509b-40d5-b440-8ec1a22dc547]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct  2 04:30:16 np0005465604 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000035.scope: Consumed 14.736s CPU time.
Oct  2 04:30:16 np0005465604 systemd-machined[214636]: Machine qemu-58-instance-00000035 terminated.
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.620 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1e194f6b-2695-4ca1-b40f-bbba76f298d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.625 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3027cdd1-e1fe-49b2-94ec-6a4a178edb49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.666 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7a1ea9-6cec-4c76-a634-67f5b35e86af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 kernel: tap257d115c-e1: entered promiscuous mode
Oct  2 04:30:16 np0005465604 kernel: tap257d115c-e1 (unregistering): left promiscuous mode
Oct  2 04:30:16 np0005465604 NetworkManager[45129]: <info>  [1759393816.6810] manager: (tap257d115c-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00504|binding|INFO|Claiming lport 257d115c-e196-4921-a9d3-942604825516 for this chassis.
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00505|binding|INFO|257d115c-e196-4921-a9d3-942604825516: Claiming fa:16:3e:5f:eb:54 10.100.0.8
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.692 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[edf2a864-7f09-44e0-a3af-38031d80ba36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 1042, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 1042, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319234, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.696 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:eb:54 10.100.0.8'], port_security=['fa:16:3e:5f:eb:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd15c7c6a-e6a1-4538-9db0-ee1aef10f38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=257d115c-e196-4921-a9d3-942604825516) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.703 2 INFO nova.virt.libvirt.driver [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Instance destroyed successfully.#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.704 2 DEBUG nova.objects.instance [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'resources' on Instance uuid d15c7c6a-e6a1-4538-9db0-ee1aef10f38b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.714 2 DEBUG nova.compute.manager [req-e0edecc2-162c-428c-9316-9775539f2cdd req-66a0ef84-8678-4468-9c4e-e37bd2a110ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Received event network-vif-deleted-044e4f76-db30-47b6-b277-8c3a13743b9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00506|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 ovn-installed in OVS
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00507|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 up in Southbound
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.728 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[24c826f5-de3b-4aaa-9a1c-08b11031df2d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319238, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319238, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.731 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00508|binding|INFO|Releasing lport 257d115c-e196-4921-a9d3-942604825516 from this chassis (sb_readonly=1)
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00509|if_status|INFO|Dropped 2 log messages in last 460 seconds (most recently, 460 seconds ago) due to excessive rate
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00510|if_status|INFO|Not setting lport 257d115c-e196-4921-a9d3-942604825516 down as sb is readonly
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00511|binding|INFO|Removing iface tap257d115c-e1 ovn-installed in OVS
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00512|binding|INFO|Releasing lport 257d115c-e196-4921-a9d3-942604825516 from this chassis (sb_readonly=0)
Oct  2 04:30:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:16Z|00513|binding|INFO|Setting lport 257d115c-e196-4921-a9d3-942604825516 down in Southbound
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.743 2 DEBUG nova.virt.libvirt.vif [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1454521037',display_name='tempest-ListServerFiltersTestJSON-instance-1454521037',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1454521037',id=53,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-w2kc410q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:37Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=d15c7c6a-e6a1-4538-9db0-ee1aef10f38b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.743 2 DEBUG nova.network.os_vif_util [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "257d115c-e196-4921-a9d3-942604825516", "address": "fa:16:3e:5f:eb:54", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap257d115c-e1", "ovs_interfaceid": "257d115c-e196-4921-a9d3-942604825516", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.744 2 DEBUG nova.network.os_vif_util [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.743 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:eb:54 10.100.0.8'], port_security=['fa:16:3e:5f:eb:54 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd15c7c6a-e6a1-4538-9db0-ee1aef10f38b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=257d115c-e196-4921-a9d3-942604825516) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.744 2 DEBUG os_vif [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap257d115c-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.754 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.755 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.756 2 INFO os_vif [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:eb:54,bridge_name='br-int',has_traffic_filtering=True,id=257d115c-e196-4921-a9d3-942604825516,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap257d115c-e1')#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.756 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.758 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.763 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 257d115c-e196-4921-a9d3-942604825516 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.768 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.791 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df2f9ba5-6cfa-4381-a1b8-40046b689564]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.824 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d39a7fe9-1771-4ec9-bb4b-3d3c9b5c14e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.829 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b8517928-63e5-42cb-a8c4-820fe2dc8389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.875 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5557d033-3a63-41e8-8988-f0f34b6f47a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.904 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30dbc1d4-b304-4c96-a6ed-047653c0e11a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 1042, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 15, 'rx_bytes': 1042, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319267, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.925 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.926 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.926 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.926 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.929 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c568536-6555-47dc-9534-e8430bf0ef5c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319268, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319268, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.932 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:16 np0005465604 nova_compute[260603]: 2025-10-02 08:30:16.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.936 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.937 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.938 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.939 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.941 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 257d115c-e196-4921-a9d3-942604825516 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.944 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e#033[00m
Oct  2 04:30:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:16.965 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8bc286-7841-402f-bc74-f1c919e30b2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 330 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 91 KiB/s wr, 84 op/s
Oct  2 04:30:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.011 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b9180a20-583b-4221-96eb-d43dcd3a2c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.015 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf3d34e-1567-4c26-bab9-6378a1cf921d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.056 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e5febcb1-626a-4ddb-a33b-7618941e45dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.082 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb2cd09c-c09f-4e23-a62b-d635ae1c900a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 17, 'rx_bytes': 1042, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 17, 'rx_bytes': 1042, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319275, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.116 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f3e2de-09aa-4d93-8711-f2f36d5443eb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319276, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319276, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.120 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:17 np0005465604 nova_compute[260603]: 2025-10-02 08:30:17.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:17 np0005465604 nova_compute[260603]: 2025-10-02 08:30:17.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.158 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.159 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.159 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:17.160 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:17 np0005465604 nova_compute[260603]: 2025-10-02 08:30:17.213 2 INFO nova.virt.libvirt.driver [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Deleting instance files /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_del#033[00m
Oct  2 04:30:17 np0005465604 nova_compute[260603]: 2025-10-02 08:30:17.214 2 INFO nova.virt.libvirt.driver [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Deletion of /var/lib/nova/instances/d15c7c6a-e6a1-4538-9db0-ee1aef10f38b_del complete#033[00m
Oct  2 04:30:17 np0005465604 nova_compute[260603]: 2025-10-02 08:30:17.267 2 INFO nova.compute.manager [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:30:17 np0005465604 nova_compute[260603]: 2025-10-02 08:30:17.267 2 DEBUG oslo.service.loopingcall [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:30:17 np0005465604 nova_compute[260603]: 2025-10-02 08:30:17.268 2 DEBUG nova.compute.manager [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:30:17 np0005465604 nova_compute[260603]: 2025-10-02 08:30:17.269 2 DEBUG nova.network.neutron [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:30:17 np0005465604 nova_compute[260603]: 2025-10-02 08:30:17.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 200 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 94 KiB/s wr, 137 op/s
Oct  2 04:30:19 np0005465604 nova_compute[260603]: 2025-10-02 08:30:19.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:19 np0005465604 nova_compute[260603]: 2025-10-02 08:30:19.661 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updating instance_info_cache with network_info: [{"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:30:19 np0005465604 nova_compute[260603]: 2025-10-02 08:30:19.682 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-9924ce7f-b701-4560-b2c5-67f673b45807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:30:19 np0005465604 nova_compute[260603]: 2025-10-02 08:30:19.682 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:30:19 np0005465604 nova_compute[260603]: 2025-10-02 08:30:19.682 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:30:19 np0005465604 nova_compute[260603]: 2025-10-02 08:30:19.683 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:30:19 np0005465604 nova_compute[260603]: 2025-10-02 08:30:19.683 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.087 2 DEBUG nova.network.neutron [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.111 2 INFO nova.compute.manager [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Took 2.84 seconds to deallocate network for instance.#033[00m
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.160 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.161 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.210 2 DEBUG nova.compute.manager [req-08dfcf65-d8f2-4140-8a17-e2b1a9cd04e8 req-552964e0-f955-4779-a021-2dbbe7c945eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Received event network-vif-deleted-257d115c-e196-4921-a9d3-942604825516 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.250 2 DEBUG oslo_concurrency.processutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:30:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997612194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.686 2 DEBUG oslo_concurrency.processutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.697 2 DEBUG nova.compute.provider_tree [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.725 2 DEBUG nova.scheduler.client.report [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.754 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.812 2 INFO nova.scheduler.client.report [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Deleted allocations for instance d15c7c6a-e6a1-4538-9db0-ee1aef10f38b#033[00m
Oct  2 04:30:20 np0005465604 nova_compute[260603]: 2025-10-02 08:30:20.893 2 DEBUG oslo_concurrency.lockutils [None req-53d9f36b-2be7-4231-bf35-701f6faf0c0d c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "d15c7c6a-e6a1-4538-9db0-ee1aef10f38b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 200 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 124 op/s
Oct  2 04:30:21 np0005465604 podman[319299]: 2025-10-02 08:30:21.019335351 +0000 UTC m=+0.084248915 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:30:21 np0005465604 nova_compute[260603]: 2025-10-02 08:30:21.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:30:21 np0005465604 nova_compute[260603]: 2025-10-02 08:30:21.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:30:21 np0005465604 nova_compute[260603]: 2025-10-02 08:30:21.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:21 np0005465604 nova_compute[260603]: 2025-10-02 08:30:21.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:21 np0005465604 nova_compute[260603]: 2025-10-02 08:30:21.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:21 np0005465604 nova_compute[260603]: 2025-10-02 08:30:21.541 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:30:21 np0005465604 nova_compute[260603]: 2025-10-02 08:30:21.541 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:21 np0005465604 nova_compute[260603]: 2025-10-02 08:30:21.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:21 np0005465604 nova_compute[260603]: 2025-10-02 08:30:21.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:30:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028904931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.007 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.012 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.012 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.013 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.013 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.013 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.015 2 INFO nova.compute.manager [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Terminating instance#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.016 2 DEBUG nova.compute.manager [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:30:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:30:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4012285713' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:30:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:30:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4012285713' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:30:22 np0005465604 kernel: tap29a765f0-6b (unregistering): left promiscuous mode
Oct  2 04:30:22 np0005465604 NetworkManager[45129]: <info>  [1759393822.0744] device (tap29a765f0-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:30:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:22Z|00514|binding|INFO|Releasing lport 29a765f0-6b44-4aad-9974-a0845658d5f2 from this chassis (sb_readonly=0)
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:22Z|00515|binding|INFO|Setting lport 29a765f0-6b44-4aad-9974-a0845658d5f2 down in Southbound
Oct  2 04:30:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:22Z|00516|binding|INFO|Removing iface tap29a765f0-6b ovn-installed in OVS
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.093 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:7e:1d 10.100.0.9'], port_security=['fa:16:3e:c6:7e:1d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '797fde07-e88a-4d6e-a1a3-25e22c66097c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=29a765f0-6b44-4aad-9974-a0845658d5f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.095 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 29a765f0-6b44-4aad-9974-a0845658d5f2 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.096 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00da8a36-bc54-4cc1-a0e2-53333358378e#033[00m
Oct  2 04:30:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.119 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[885e4ccc-095c-4691-b242-1ac1b9f1b8c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:22 np0005465604 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Deactivated successfully.
Oct  2 04:30:22 np0005465604 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Consumed 14.500s CPU time.
Oct  2 04:30:22 np0005465604 systemd-machined[214636]: Machine qemu-57-instance-00000034 terminated.
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.161 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fed22bdf-4a44-4642-9f1a-a8148c410abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.165 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b14a84-c69e-4853-b0cb-5a67643daf49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.196 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4a0bba-f711-4031-a419-862c04aa2c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.215 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e6ee78-973c-4bf3-82e3-b87e3d6cbc91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00da8a36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:d8:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 19, 'rx_bytes': 1042, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 19, 'rx_bytes': 1042, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464925, 'reachable_time': 15528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319352, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.236 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4515dc7-d4d3-4b56-856b-08bb1c11d1f2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464939, 'tstamp': 464939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319353, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap00da8a36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464943, 'tstamp': 464943}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319353, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.238 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.248 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00da8a36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.249 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.249 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00da8a36-b0, col_values=(('external_ids', {'iface-id': 'bd053665-7e00-4f6a-95af-9d9c3c0e8cc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:22.250 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.255 2 INFO nova.virt.libvirt.driver [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Instance destroyed successfully.#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.256 2 DEBUG nova.objects.instance [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'resources' on Instance uuid 797fde07-e88a-4d6e-a1a3-25e22c66097c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:22Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:b2:ea 10.100.0.12
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.270 2 DEBUG nova.virt.libvirt.vif [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1372609845',display_name='tempest-ListServerFiltersTestJSON-instance-1372609845',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1372609845',id=52,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-2sfo7if8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:29:35Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=797fde07-e88a-4d6e-a1a3-25e22c66097c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.271 2 DEBUG nova.network.os_vif_util [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "29a765f0-6b44-4aad-9974-a0845658d5f2", "address": "fa:16:3e:c6:7e:1d", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29a765f0-6b", "ovs_interfaceid": "29a765f0-6b44-4aad-9974-a0845658d5f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.272 2 DEBUG nova.network.os_vif_util [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.272 2 DEBUG os_vif [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.274 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29a765f0-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.279 2 INFO os_vif [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:7e:1d,bridge_name='br-int',has_traffic_filtering=True,id=29a765f0-6b44-4aad-9974-a0845658d5f2,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29a765f0-6b')#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.343 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.343 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.348 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.348 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.534 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.535 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4028MB free_disk=59.897247314453125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.536 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.536 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.594 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 9924ce7f-b701-4560-b2c5-67f673b45807 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.594 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 797fde07-e88a-4d6e-a1a3-25e22c66097c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.595 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.595 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.632 2 INFO nova.virt.libvirt.driver [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Deleting instance files /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c_del#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.633 2 INFO nova.virt.libvirt.driver [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Deletion of /var/lib/nova/instances/797fde07-e88a-4d6e-a1a3-25e22c66097c_del complete#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.682 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.725 2 INFO nova.compute.manager [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.726 2 DEBUG oslo.service.loopingcall [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.728 2 DEBUG nova.compute.manager [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.728 2 DEBUG nova.network.neutron [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:30:22 np0005465604 nova_compute[260603]: 2025-10-02 08:30:22.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 200 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 28 KiB/s wr, 146 op/s
Oct  2 04:30:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:30:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454869559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:30:23 np0005465604 nova_compute[260603]: 2025-10-02 08:30:23.116 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:23 np0005465604 nova_compute[260603]: 2025-10-02 08:30:23.122 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:30:23 np0005465604 nova_compute[260603]: 2025-10-02 08:30:23.145 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:30:23 np0005465604 nova_compute[260603]: 2025-10-02 08:30:23.168 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:30:23 np0005465604 nova_compute[260603]: 2025-10-02 08:30:23.169 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:24 np0005465604 nova_compute[260603]: 2025-10-02 08:30:24.170 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:30:24 np0005465604 nova_compute[260603]: 2025-10-02 08:30:24.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:30:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 169 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 18 KiB/s wr, 128 op/s
Oct  2 04:30:25 np0005465604 podman[319402]: 2025-10-02 08:30:25.024841697 +0000 UTC m=+0.079490018 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.241 2 DEBUG nova.network.neutron [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.278 2 INFO nova.compute.manager [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Took 2.55 seconds to deallocate network for instance.#033[00m
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.358 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.359 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.369 2 DEBUG nova.compute.manager [req-dd1c297f-eef1-475f-ad7f-fb08536025bb req-a7614d00-c43e-4c9c-a55b-74fa08c134b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Received event network-vif-deleted-29a765f0-6b44-4aad-9974-a0845658d5f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.463 2 DEBUG oslo_concurrency.processutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.520 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.521 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.548 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.645 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:30:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/142773208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.969 2 DEBUG oslo_concurrency.processutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:25 np0005465604 nova_compute[260603]: 2025-10-02 08:30:25.976 2 DEBUG nova.compute.provider_tree [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.009 2 DEBUG nova.scheduler.client.report [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.066 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.068 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.074 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.074 2 INFO nova.compute.claims [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.106 2 INFO nova.scheduler.client.report [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Deleted allocations for instance 797fde07-e88a-4d6e-a1a3-25e22c66097c#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.210 2 DEBUG oslo_concurrency.lockutils [None req-4260f2b2-086a-4e32-a83e-4d22ca9da5a4 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "797fde07-e88a-4d6e-a1a3-25e22c66097c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.271 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:26 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:26Z|00517|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.418 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.419 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.419 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.419 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.420 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.421 2 INFO nova.compute.manager [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Terminating instance#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.422 2 DEBUG nova.compute.manager [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:30:26 np0005465604 kernel: tapbf9cdb7f-4c (unregistering): left promiscuous mode
Oct  2 04:30:26 np0005465604 NetworkManager[45129]: <info>  [1759393826.5260] device (tapbf9cdb7f-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.582 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.582 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:26 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:26Z|00518|binding|INFO|Releasing lport bd053665-7e00-4f6a-95af-9d9c3c0e8cc0 from this chassis (sb_readonly=0)
Oct  2 04:30:26 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:26Z|00519|binding|INFO|Releasing lport bf9cdb7f-4cda-403b-b27e-12385e93db02 from this chassis (sb_readonly=0)
Oct  2 04:30:26 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:26Z|00520|binding|INFO|Removing iface tapbf9cdb7f-4c ovn-installed in OVS
Oct  2 04:30:26 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:26Z|00521|binding|INFO|Setting lport bf9cdb7f-4cda-403b-b27e-12385e93db02 down in Southbound
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.593 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:b2:ea 10.100.0.12'], port_security=['fa:16:3e:64:b2:ea 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '9924ce7f-b701-4560-b2c5-67f673b45807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00da8a36-bc54-4cc1-a0e2-53333358378e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c4373fe01a4a14bea07af6dba4d170', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ac6b1d92-a53f-4bb8-a013-111cc626de5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fb59212-36d9-4b55-9eba-338879c3e95c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bf9cdb7f-4cda-403b-b27e-12385e93db02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.594 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bf9cdb7f-4cda-403b-b27e-12385e93db02 in datapath 00da8a36-bc54-4cc1-a0e2-53333358378e unbound from our chassis#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.595 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00da8a36-bc54-4cc1-a0e2-53333358378e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.596 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d89edd1-2cc2-4a15-818d-4217e50f89ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.596 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e namespace which is not needed anymore#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.600 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:30:26 np0005465604 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct  2 04:30:26 np0005465604 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000033.scope: Consumed 12.865s CPU time.
Oct  2 04:30:26 np0005465604 systemd-machined[214636]: Machine qemu-64-instance-00000033 terminated.
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:26 np0005465604 NetworkManager[45129]: <info>  [1759393826.6525] manager: (tapbf9cdb7f-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.671 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.674 2 INFO nova.virt.libvirt.driver [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Instance destroyed successfully.#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.675 2 DEBUG nova.objects.instance [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lazy-loading 'resources' on Instance uuid 9924ce7f-b701-4560-b2c5-67f673b45807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.690 2 DEBUG nova.virt.libvirt.vif [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-769762436',display_name='tempest-ListServerFiltersTestJSON-instance-769762436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-769762436',id=51,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:29:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e7c4373fe01a4a14bea07af6dba4d170',ramdisk_id='',reservation_id='r-t76hsctw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1545892750',owner_user_name='tempest-ListServerFiltersTestJSON-1545892750-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:30:09Z,user_data=None,user_id='c1d66932c11043b5b90140cd2dde53d2',uuid=9924ce7f-b701-4560-b2c5-67f673b45807,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.691 2 DEBUG nova.network.os_vif_util [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converting VIF {"id": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "address": "fa:16:3e:64:b2:ea", "network": {"id": "00da8a36-bc54-4cc1-a0e2-53333358378e", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-244426060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c4373fe01a4a14bea07af6dba4d170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf9cdb7f-4c", "ovs_interfaceid": "bf9cdb7f-4cda-403b-b27e-12385e93db02", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.692 2 DEBUG nova.network.os_vif_util [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.693 2 DEBUG os_vif [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9cdb7f-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.705 2 INFO os_vif [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:b2:ea,bridge_name='br-int',has_traffic_filtering=True,id=bf9cdb7f-4cda-403b-b27e-12385e93db02,network=Network(00da8a36-bc54-4cc1-a0e2-53333358378e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf9cdb7f-4c')#033[00m
Oct  2 04:30:26 np0005465604 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [NOTICE]   (316129) : haproxy version is 2.8.14-c23fe91
Oct  2 04:30:26 np0005465604 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [NOTICE]   (316129) : path to executable is /usr/sbin/haproxy
Oct  2 04:30:26 np0005465604 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [WARNING]  (316129) : Exiting Master process...
Oct  2 04:30:26 np0005465604 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [WARNING]  (316129) : Exiting Master process...
Oct  2 04:30:26 np0005465604 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [ALERT]    (316129) : Current worker (316132) exited with code 143 (Terminated)
Oct  2 04:30:26 np0005465604 neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e[316103]: [WARNING]  (316129) : All workers exited. Exiting... (0)
Oct  2 04:30:26 np0005465604 systemd[1]: libpod-b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2.scope: Deactivated successfully.
Oct  2 04:30:26 np0005465604 podman[319497]: 2025-10-02 08:30:26.757790509 +0000 UTC m=+0.059275280 container died b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:30:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:30:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887373202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:30:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2-userdata-shm.mount: Deactivated successfully.
Oct  2 04:30:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay-71b5c5ae73f0794dadb32620ddb0a1a034413642c4424df120944407778f4d1d-merged.mount: Deactivated successfully.
Oct  2 04:30:26 np0005465604 podman[319497]: 2025-10-02 08:30:26.810713581 +0000 UTC m=+0.112198312 container cleanup b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:30:26 np0005465604 systemd[1]: libpod-conmon-b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2.scope: Deactivated successfully.
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.828 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.836 2 DEBUG nova.compute.provider_tree [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.853 2 DEBUG nova.scheduler.client.report [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:30:26 np0005465604 podman[319545]: 2025-10-02 08:30:26.885550683 +0000 UTC m=+0.047621789 container remove b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.891 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.892 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.893 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[00869e8b-b4c1-4a16-a442-8f4bf95b155c]: (4, ('Thu Oct  2 08:30:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e (b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2)\nb95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2\nThu Oct  2 08:30:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e (b95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2)\nb95a39cb958ba976f807e5f869244e88dd5594cec55919a133f527331a82c9a2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.895 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[907639ba-812a-45ae-97dd-3cb703b6e183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.896 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00da8a36-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:26 np0005465604 kernel: tap00da8a36-b0: left promiscuous mode
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.902 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.911 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.912 2 INFO nova.compute.claims [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.918 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d43f8430-4631-4187-93cb-8ca60f20b9e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.939 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc972fe-c3d1-4fe1-80d4-804976f3a757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.940 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9844fcb-9d94-47cc-866b-b1afec9beba3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.953 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.953 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.959 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[43ad2611-113e-49e1-8ee9-77626a868680]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464914, 'reachable_time': 19441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319561, 'error': None, 'target': 'ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:26 np0005465604 systemd[1]: run-netns-ovnmeta\x2d00da8a36\x2dbc54\x2d4cc1\x2da0e2\x2d53333358378e.mount: Deactivated successfully.
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.964 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00da8a36-bc54-4cc1-a0e2-53333358378e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:30:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:26.964 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3a6807-593a-47f3-95dd-96eabf79c5b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:26 np0005465604 nova_compute[260603]: 2025-10-02 08:30:26.978 2 INFO nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:30:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 169 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 568 KiB/s rd, 16 KiB/s wr, 102 op/s
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.017 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.082 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.133 2 INFO nova.virt.libvirt.driver [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Deleting instance files /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807_del#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.134 2 INFO nova.virt.libvirt.driver [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Deletion of /var/lib/nova/instances/9924ce7f-b701-4560-b2c5-67f673b45807_del complete#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.176 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.178 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.179 2 INFO nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating image(s)#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.209 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.245 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.278 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.283 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.327 2 INFO nova.compute.manager [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.329 2 DEBUG oslo.service.loopingcall [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.329 2 DEBUG nova.compute.manager [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.329 2 DEBUG nova.network.neutron [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.368 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.369 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.370 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.370 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.393 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.397 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.486 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393812.485308, 923e00cc-7494-46f3-93e2-3c223705aff1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.486 2 INFO nova.compute.manager [-] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.513 2 DEBUG nova.compute.manager [None req-f71b9520-9335-4048-a3c8-ba898a877ede - - - - - -] [instance: 923e00cc-7494-46f3-93e2-3c223705aff1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:30:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:30:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/111856958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.555 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.566 2 DEBUG nova.compute.provider_tree [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.583 2 DEBUG nova.scheduler.client.report [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.607 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.608 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.636 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.660 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.661 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.692 2 INFO nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.700 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] resizing rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.729 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.790 2 DEBUG nova.objects.instance [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'migration_context' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.827 2 DEBUG nova.policy [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '116b114f14f84e4cbd6cc966e29d82e7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.877 2 DEBUG nova.compute.manager [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.878 2 DEBUG oslo_concurrency.lockutils [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.878 2 DEBUG oslo_concurrency.lockutils [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.878 2 DEBUG oslo_concurrency.lockutils [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.879 2 DEBUG nova.compute.manager [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.879 2 DEBUG nova.compute.manager [req-285c9386-6a4c-41c7-ba98-45bf267a3b54 req-b3a58807-d8d5-44ee-8cca-86a5bf52076d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-unplugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 04:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.934 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.935 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Ensure instance console log exists: /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.935 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.936 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:30:27 np0005465604 nova_compute[260603]: 2025-10-02 08:30:27.936 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:30:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:30:27
Oct  2 04:30:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:30:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:30:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Oct  2 04:30:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.071 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.073 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.074 2 INFO nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Creating image(s)
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.108 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.144 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.178 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.187 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.273 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.274 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.275 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.276 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.313 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.319 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.572 2 DEBUG nova.policy [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e7ed2cfbcca04b4ca0a07910a0319456', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '649aece3b28a477fa6e0d1dc7b1d5ade', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.580 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.668 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] resizing rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.789 2 DEBUG nova.objects.instance [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lazy-loading 'migration_context' on Instance uuid 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.813 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.814 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Ensure instance console log exists: /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.815 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.815 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:30:28 np0005465604 nova_compute[260603]: 2025-10-02 08:30:28.816 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:30:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 50 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 330 KiB/s wr, 154 op/s
Oct  2 04:30:29 np0005465604 nova_compute[260603]: 2025-10-02 08:30:29.376 2 DEBUG nova.network.neutron [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:30:29 np0005465604 nova_compute[260603]: 2025-10-02 08:30:29.400 2 INFO nova.compute.manager [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Took 2.07 seconds to deallocate network for instance.
Oct  2 04:30:29 np0005465604 nova_compute[260603]: 2025-10-02 08:30:29.452 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:30:29 np0005465604 nova_compute[260603]: 2025-10-02 08:30:29.452 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:30:29 np0005465604 nova_compute[260603]: 2025-10-02 08:30:29.580 2 DEBUG oslo_concurrency.processutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:30:29 np0005465604 nova_compute[260603]: 2025-10-02 08:30:29.843 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Successfully created port: 4298d267-ede8-417b-9e26-a2533908497f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:30:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:30:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443963866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.061 2 DEBUG nova.compute.manager [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.062 2 DEBUG oslo_concurrency.lockutils [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.062 2 DEBUG oslo_concurrency.lockutils [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.063 2 DEBUG oslo_concurrency.lockutils [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.063 2 DEBUG nova.compute.manager [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] No waiting events found dispatching network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.064 2 WARNING nova.compute.manager [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received unexpected event network-vif-plugged-bf9cdb7f-4cda-403b-b27e-12385e93db02 for instance with vm_state deleted and task_state None.
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.064 2 DEBUG nova.compute.manager [req-518f7afb-d432-4dbc-8e21-bd0ddd509bba req-b44205e6-006a-41c5-8d76-c75d91514ab0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Received event network-vif-deleted-bf9cdb7f-4cda-403b-b27e-12385e93db02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.066 2 DEBUG oslo_concurrency.processutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.075 2 DEBUG nova.compute.provider_tree [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.105 2 DEBUG nova.scheduler.client.report [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.134 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.175 2 INFO nova.scheduler.client.report [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Deleted allocations for instance 9924ce7f-b701-4560-b2c5-67f673b45807
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.260 2 DEBUG oslo_concurrency.lockutils [None req-90d2a0e1-3f26-4df1-a5e2-c2e320a83980 c1d66932c11043b5b90140cd2dde53d2 e7c4373fe01a4a14bea07af6dba4d170 - - default default] Lock "9924ce7f-b701-4560-b2c5-67f673b45807" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:30:30 np0005465604 nova_compute[260603]: 2025-10-02 08:30:30.268 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Successfully created port: 25be6c2d-0038-4133-8d21-845a3220d33e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:30:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 50 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 327 KiB/s wr, 101 op/s
Oct  2 04:30:31 np0005465604 nova_compute[260603]: 2025-10-02 08:30:31.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:31 np0005465604 nova_compute[260603]: 2025-10-02 08:30:31.702 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393816.699861, d15c7c6a-e6a1-4538-9db0-ee1aef10f38b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:31 np0005465604 nova_compute[260603]: 2025-10-02 08:30:31.702 2 INFO nova.compute.manager [-] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:30:31 np0005465604 nova_compute[260603]: 2025-10-02 08:30:31.726 2 DEBUG nova.compute.manager [None req-ebb709b4-588d-46f0-9d40-523e9bf7cd7e - - - - - -] [instance: d15c7c6a-e6a1-4538-9db0-ee1aef10f38b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:31 np0005465604 nova_compute[260603]: 2025-10-02 08:30:31.866 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Successfully updated port: 4298d267-ede8-417b-9e26-a2533908497f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:30:31 np0005465604 nova_compute[260603]: 2025-10-02 08:30:31.902 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:30:31 np0005465604 nova_compute[260603]: 2025-10-02 08:30:31.903 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquired lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:30:31 np0005465604 nova_compute[260603]: 2025-10-02 08:30:31.903 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:30:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:32 np0005465604 nova_compute[260603]: 2025-10-02 08:30:32.455 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Successfully updated port: 25be6c2d-0038-4133-8d21-845a3220d33e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:30:32 np0005465604 nova_compute[260603]: 2025-10-02 08:30:32.479 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:30:32 np0005465604 nova_compute[260603]: 2025-10-02 08:30:32.480 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquired lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:30:32 np0005465604 nova_compute[260603]: 2025-10-02 08:30:32.480 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:30:32 np0005465604 nova_compute[260603]: 2025-10-02 08:30:32.522 2 DEBUG nova.compute.manager [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-changed-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:32 np0005465604 nova_compute[260603]: 2025-10-02 08:30:32.523 2 DEBUG nova.compute.manager [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Refreshing instance network info cache due to event network-changed-4298d267-ede8-417b-9e26-a2533908497f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:30:32 np0005465604 nova_compute[260603]: 2025-10-02 08:30:32.523 2 DEBUG oslo_concurrency.lockutils [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:30:32 np0005465604 nova_compute[260603]: 2025-10-02 08:30:32.556 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:30:32 np0005465604 nova_compute[260603]: 2025-10-02 08:30:32.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:32 np0005465604 nova_compute[260603]: 2025-10-02 08:30:32.883 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:30:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 118 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 600 KiB/s rd, 2.9 MiB/s wr, 151 op/s
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.897 2 DEBUG nova.network.neutron [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Updating instance_info_cache with network_info: [{"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.932 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Releasing lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.933 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance network_info: |[{"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.934 2 DEBUG oslo_concurrency.lockutils [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.935 2 DEBUG nova.network.neutron [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Refreshing network info cache for port 4298d267-ede8-417b-9e26-a2533908497f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.940 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start _get_guest_xml network_info=[{"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.948 2 WARNING nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.956 2 DEBUG nova.virt.libvirt.host [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.957 2 DEBUG nova.virt.libvirt.host [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.971 2 DEBUG nova.virt.libvirt.host [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.972 2 DEBUG nova.virt.libvirt.host [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.973 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.973 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.974 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.975 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.975 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.976 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.976 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.977 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.977 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.978 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.978 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.979 2 DEBUG nova.virt.hardware [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:30:33 np0005465604 nova_compute[260603]: 2025-10-02 08:30:33.984 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.057 2 DEBUG nova.network.neutron [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Updating instance_info_cache with network_info: [{"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.086 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Releasing lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.087 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Instance network_info: |[{"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.092 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Start _get_guest_xml network_info=[{"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.103 2 WARNING nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.108 2 DEBUG nova.virt.libvirt.host [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.109 2 DEBUG nova.virt.libvirt.host [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.116 2 DEBUG nova.virt.libvirt.host [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.117 2 DEBUG nova.virt.libvirt.host [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.118 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.118 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.119 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.120 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.121 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.121 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.122 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.122 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.123 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.123 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.124 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.124 2 DEBUG nova.virt.hardware [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.130 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:30:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1846897373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.485 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.518 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.523 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:30:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2278444661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.619 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.656 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:34 np0005465604 nova_compute[260603]: 2025-10-02 08:30:34.661 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:34.816 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:34.817 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:34.819 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:30:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1539084807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:30:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 3.6 MiB/s wr, 132 op/s
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.010 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.013 2 DEBUG nova.virt.libvirt.vif [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfi
gTestJSON-1277806880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:27Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.013 2 DEBUG nova.network.os_vif_util [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.014 2 DEBUG nova.network.os_vif_util [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.015 2 DEBUG nova.objects.instance [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_devices' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.031 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <uuid>f8f36f36-817a-4e64-8c57-c211cfc7b0ba</uuid>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <name>instance-0000003b</name>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-949284758</nova:name>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:30:33</nova:creationTime>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:user uuid="116b114f14f84e4cbd6cc966e29d82e7">tempest-ServerDiskConfigTestJSON-1277806880-project-member</nova:user>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:project uuid="bce7493292bb47cfb7168bca89f78f4a">tempest-ServerDiskConfigTestJSON-1277806880</nova:project>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:port uuid="4298d267-ede8-417b-9e26-a2533908497f">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="serial">f8f36f36-817a-4e64-8c57-c211cfc7b0ba</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="uuid">f8f36f36-817a-4e64-8c57-c211cfc7b0ba</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:50:87:db"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <target dev="tap4298d267-ed"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/console.log" append="off"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:30:35 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:30:35 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.031 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Preparing to wait for external event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.031 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.032 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.032 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.032 2 DEBUG nova.virt.libvirt.vif [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:27Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.033 2 DEBUG nova.network.os_vif_util [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.033 2 DEBUG nova.network.os_vif_util [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.033 2 DEBUG os_vif [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4298d267-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.038 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4298d267-ed, col_values=(('external_ids', {'iface-id': '4298d267-ede8-417b-9e26-a2533908497f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:87:db', 'vm-uuid': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:35 np0005465604 NetworkManager[45129]: <info>  [1759393835.0813] manager: (tap4298d267-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.087 2 INFO os_vif [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed')#033[00m
Oct  2 04:30:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:30:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262397527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.131 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.132 2 DEBUG nova.virt.libvirt.vif [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-275620590',display_name='tempest-ServerMetadataTestJSON-server-275620590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-275620590',id=60,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='649aece3b28a477fa6e0d1dc7b1d5ade',ramdisk_id='',reservation_id='r-h0hzwcr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1153289932',owner_user_name='tempest-ServerMetadataTestJSON-1153289932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:27Z,user_data=None,user_id='e7ed2cfbcca04b4ca0a07910a0319456',uuid=3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.132 2 DEBUG nova.network.os_vif_util [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converting VIF {"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.133 2 DEBUG nova.network.os_vif_util [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.133 2 DEBUG nova.objects.instance [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lazy-loading 'pci_devices' on Instance uuid 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.152 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <uuid>3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5</uuid>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <name>instance-0000003c</name>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerMetadataTestJSON-server-275620590</nova:name>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:30:34</nova:creationTime>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:user uuid="e7ed2cfbcca04b4ca0a07910a0319456">tempest-ServerMetadataTestJSON-1153289932-project-member</nova:user>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:project uuid="649aece3b28a477fa6e0d1dc7b1d5ade">tempest-ServerMetadataTestJSON-1153289932</nova:project>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <nova:port uuid="25be6c2d-0038-4133-8d21-845a3220d33e">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="serial">3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="uuid">3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:75:65:1d"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <target dev="tap25be6c2d-00"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/console.log" append="off"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:30:35 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:30:35 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:30:35 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:30:35 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.153 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Preparing to wait for external event network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.153 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.153 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.154 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.154 2 DEBUG nova.virt.libvirt.vif [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-275620590',display_name='tempest-ServerMetadataTestJSON-server-275620590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-275620590',id=60,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='649aece3b28a477fa6e0d1dc7b1d5ade',ramdisk_id='',reservation_id='r-h0hzwcr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1153289932',owner_user_name='tempest-ServerMetadataTestJSON-1153289932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:27Z,user_data=None,user_id='e7ed2cfbcca04b4ca0a07910a0319456',uuid=3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.155 2 DEBUG nova.network.os_vif_util [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converting VIF {"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.155 2 DEBUG nova.network.os_vif_util [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.155 2 DEBUG os_vif [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.156 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.157 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25be6c2d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25be6c2d-00, col_values=(('external_ids', {'iface-id': '25be6c2d-0038-4133-8d21-845a3220d33e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:65:1d', 'vm-uuid': '3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:35 np0005465604 NetworkManager[45129]: <info>  [1759393835.1630] manager: (tap25be6c2d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.164 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.164 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.164 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No VIF found with MAC fa:16:3e:50:87:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.164 2 INFO nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Using config drive#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.184 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.192 2 INFO os_vif [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00')#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.206 2 DEBUG nova.compute.manager [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Received event network-changed-25be6c2d-0038-4133-8d21-845a3220d33e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.207 2 DEBUG nova.compute.manager [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Refreshing instance network info cache due to event network-changed-25be6c2d-0038-4133-8d21-845a3220d33e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.207 2 DEBUG oslo_concurrency.lockutils [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.207 2 DEBUG oslo_concurrency.lockutils [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.207 2 DEBUG nova.network.neutron [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Refreshing network info cache for port 25be6c2d-0038-4133-8d21-845a3220d33e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.250 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.250 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.250 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] No VIF found with MAC fa:16:3e:75:65:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.251 2 INFO nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Using config drive#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.273 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.595 2 INFO nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Creating config drive at /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.605 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_vsolfl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:35 np0005465604 podman[320276]: 2025-10-02 08:30:35.631807732 +0000 UTC m=+0.076823065 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:30:35 np0005465604 podman[320276]: 2025-10-02 08:30:35.717802679 +0000 UTC m=+0.162818012 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.768 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_vsolfl" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.809 2 DEBUG nova.storage.rbd_utils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] rbd image 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.814 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.859 2 INFO nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating config drive at /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.867 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7myp0roh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.902 2 DEBUG nova.network.neutron [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Updated VIF entry in instance network info cache for port 4298d267-ede8-417b-9e26-a2533908497f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.903 2 DEBUG nova.network.neutron [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Updating instance_info_cache with network_info: [{"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.928 2 DEBUG oslo_concurrency.lockutils [req-54f39cde-040d-4444-b84e-fe3308c2086e req-01cbe5e8-2f3b-4d55-8121-0d6bb6a64706 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f8f36f36-817a-4e64-8c57-c211cfc7b0ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.977 2 DEBUG oslo_concurrency.processutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:35 np0005465604 nova_compute[260603]: 2025-10-02 08:30:35.977 2 INFO nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Deleting local config drive /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5/disk.config because it was imported into RBD.#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.001 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7myp0roh" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.034 2 DEBUG nova.storage.rbd_utils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:36 np0005465604 NetworkManager[45129]: <info>  [1759393836.0431] manager: (tap25be6c2d-00): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.044 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:36 np0005465604 kernel: tap25be6c2d-00: entered promiscuous mode
Oct  2 04:30:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:36Z|00522|binding|INFO|Claiming lport 25be6c2d-0038-4133-8d21-845a3220d33e for this chassis.
Oct  2 04:30:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:36Z|00523|binding|INFO|25be6c2d-0038-4133-8d21-845a3220d33e: Claiming fa:16:3e:75:65:1d 10.100.0.7
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.069 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:65:1d 10.100.0.7'], port_security=['fa:16:3e:75:65:1d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-688aa430-74b8-4500-af80-d797f0ec4310', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '649aece3b28a477fa6e0d1dc7b1d5ade', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4630aa5e-be12-47d4-ae0b-793f01b010a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f759ed2-fdc4-4b2b-ac94-db6a43d5407a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=25be6c2d-0038-4133-8d21-845a3220d33e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:36 np0005465604 systemd-udevd[320447]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.072 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 25be6c2d-0038-4133-8d21-845a3220d33e in datapath 688aa430-74b8-4500-af80-d797f0ec4310 bound to our chassis#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.074 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 688aa430-74b8-4500-af80-d797f0ec4310#033[00m
Oct  2 04:30:36 np0005465604 systemd-machined[214636]: New machine qemu-65-instance-0000003c.
Oct  2 04:30:36 np0005465604 NetworkManager[45129]: <info>  [1759393836.0833] device (tap25be6c2d-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:30:36 np0005465604 NetworkManager[45129]: <info>  [1759393836.0841] device (tap25be6c2d-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.088 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10b5e7da-7d19-4f0a-b382-1d4eaa512a2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:36 np0005465604 systemd[1]: Started Virtual Machine qemu-65-instance-0000003c.
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.094 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap688aa430-71 in ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.096 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap688aa430-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.096 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ed73cd6f-5bbc-49e8-ae9d-946c5376c3b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.097 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e143579d-7493-4990-abcf-09de9824c001]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.110 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[fe64ec47-86ea-4d42-b9d6-42f6d01e0110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.136 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[573de149-5ee9-4290-9a19-9d19d4a4ea53]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:36Z|00524|binding|INFO|Setting lport 25be6c2d-0038-4133-8d21-845a3220d33e ovn-installed in OVS
Oct  2 04:30:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:36Z|00525|binding|INFO|Setting lport 25be6c2d-0038-4133-8d21-845a3220d33e up in Southbound
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.171 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2a751374-dd5a-49c9-abb5-6316fced5fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.206 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[975a9da8-21eb-4255-bec9-4d025760a370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 NetworkManager[45129]: <info>  [1759393836.2072] manager: (tap688aa430-70): new Veth device (/org/freedesktop/NetworkManager/Devices/220)
Oct  2 04:30:36 np0005465604 systemd-udevd[320455]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.246 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a5cb6cc7-366d-4c52-81f5-2c0400d5246d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.250 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1649689d-d3bf-4a5f-87e1-796a0c049961]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 NetworkManager[45129]: <info>  [1759393836.2744] device (tap688aa430-70): carrier: link connected
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.284 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f5d3d0-473d-4022-8d1e-0cfea245e2e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.311 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29463ace-95bb-4150-a04a-1a33406ebb0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap688aa430-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:29:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471487, 'reachable_time': 36328, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320532, 'error': None, 'target': 'ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.326 2 DEBUG oslo_concurrency.processutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.326 2 INFO nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deleting local config drive /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config because it was imported into RBD.#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.335 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9bac593f-5bb6-4ffe-94f4-edbb2f8f536a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:296f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471487, 'tstamp': 471487}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320543, 'error': None, 'target': 'ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.360 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2eef818-fb9a-433d-b4c3-0e34287fde55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap688aa430-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:29:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471487, 'reachable_time': 36328, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320546, 'error': None, 'target': 'ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.373 2 DEBUG nova.compute.manager [req-48bab894-943e-4c22-98bb-6b6068582de8 req-d4cb7112-c970-4ac6-bc6d-cb4486929a27 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Received event network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.373 2 DEBUG oslo_concurrency.lockutils [req-48bab894-943e-4c22-98bb-6b6068582de8 req-d4cb7112-c970-4ac6-bc6d-cb4486929a27 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.374 2 DEBUG oslo_concurrency.lockutils [req-48bab894-943e-4c22-98bb-6b6068582de8 req-d4cb7112-c970-4ac6-bc6d-cb4486929a27 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.374 2 DEBUG oslo_concurrency.lockutils [req-48bab894-943e-4c22-98bb-6b6068582de8 req-d4cb7112-c970-4ac6-bc6d-cb4486929a27 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.374 2 DEBUG nova.compute.manager [req-48bab894-943e-4c22-98bb-6b6068582de8 req-d4cb7112-c970-4ac6-bc6d-cb4486929a27 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Processing event network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:30:36 np0005465604 NetworkManager[45129]: <info>  [1759393836.3893] manager: (tap4298d267-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/221)
Oct  2 04:30:36 np0005465604 kernel: tap4298d267-ed: entered promiscuous mode
Oct  2 04:30:36 np0005465604 systemd-udevd[320505]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:36Z|00526|binding|INFO|Claiming lport 4298d267-ede8-417b-9e26-a2533908497f for this chassis.
Oct  2 04:30:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:36Z|00527|binding|INFO|4298d267-ede8-417b-9e26-a2533908497f: Claiming fa:16:3e:50:87:db 10.100.0.3
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.405 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:87:db 10.100.0.3'], port_security=['fa:16:3e:50:87:db 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4298d267-ede8-417b-9e26-a2533908497f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:36 np0005465604 NetworkManager[45129]: <info>  [1759393836.4070] device (tap4298d267-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:30:36 np0005465604 NetworkManager[45129]: <info>  [1759393836.4076] device (tap4298d267-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.410 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc17135-475f-4079-8ffa-2fe012fb007a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 systemd-machined[214636]: New machine qemu-66-instance-0000003b.
Oct  2 04:30:36 np0005465604 systemd[1]: Started Virtual Machine qemu-66-instance-0000003b.
Oct  2 04:30:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.477 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce507f4-108d-4c93-9734-7d53e642610e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.479 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap688aa430-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.479 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.480 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap688aa430-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:36 np0005465604 NetworkManager[45129]: <info>  [1759393836.4833] manager: (tap688aa430-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Oct  2 04:30:36 np0005465604 kernel: tap688aa430-70: entered promiscuous mode
Oct  2 04:30:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.486 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap688aa430-70, col_values=(('external_ids', {'iface-id': '0d737469-5d43-4396-a0ae-3cb6fa3fbe67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:36Z|00528|binding|INFO|Releasing lport 0d737469-5d43-4396-a0ae-3cb6fa3fbe67 from this chassis (sb_readonly=0)
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.492 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/688aa430-74b8-4500-af80-d797f0ec4310.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/688aa430-74b8-4500-af80-d797f0ec4310.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.495 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1420c5ba-3082-43ea-9d82-5322267954e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.496 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-688aa430-74b8-4500-af80-d797f0ec4310
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/688aa430-74b8-4500-af80-d797f0ec4310.pid.haproxy
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 688aa430-74b8-4500-af80-d797f0ec4310
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:30:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:36.498 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310', 'env', 'PROCESS_TAG=haproxy-688aa430-74b8-4500-af80-d797f0ec4310', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/688aa430-74b8-4500-af80-d797f0ec4310.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:30:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:36Z|00529|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f ovn-installed in OVS
Oct  2 04:30:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:36Z|00530|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f up in Southbound
Oct  2 04:30:36 np0005465604 nova_compute[260603]: 2025-10-02 08:30:36.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:36 np0005465604 podman[320798]: 2025-10-02 08:30:36.916844745 +0000 UTC m=+0.085505144 container create cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:30:36 np0005465604 podman[320798]: 2025-10-02 08:30:36.862054515 +0000 UTC m=+0.030714934 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:30:36 np0005465604 systemd[1]: Started libpod-conmon-cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3.scope.
Oct  2 04:30:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.6 MiB/s wr, 105 op/s
Oct  2 04:30:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:30:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327ede22813a349afb9df738f72940c2bb921ee83034aa092f6f10580f45e456/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:37 np0005465604 podman[320798]: 2025-10-02 08:30:37.016413475 +0000 UTC m=+0.185073924 container init cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:30:37 np0005465604 podman[320798]: 2025-10-02 08:30:37.022768582 +0000 UTC m=+0.191428991 container start cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 04:30:37 np0005465604 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [NOTICE]   (320830) : New worker (320832) forked
Oct  2 04:30:37 np0005465604 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [NOTICE]   (320830) : Loading success.
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.095 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.096 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.0958679, 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.097 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] VM Started (Lifecycle Event)#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.102 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.108 2 INFO nova.virt.libvirt.driver [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Instance spawned successfully.#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.108 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.114 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.120 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.141 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.143 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.144 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.144 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.145 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.145 2 DEBUG nova.virt.libvirt.driver [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.155 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.155 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.0959647, 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.155 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.142 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4298d267-ede8-417b-9e26-a2533908497f in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b unbound from our chassis#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.159 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8df0af1-1767-419a-8500-c28fbf45ae4b#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.173 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[89649e2c-a249-4a55-8738-1b6007d01b35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.174 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8df0af1-11 in ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.177 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8df0af1-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.177 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cabbcd90-fb6d-4c0a-8b16-26d06604179d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.178 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[90835da8-279a-456f-b169-7940c2c704d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.184 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.193 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[28cbd4a4-3456-4d0e-b8e8-a6cb24a3d514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.198 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.1014993, 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.198 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.211 2 INFO nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Took 9.14 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.211 2 DEBUG nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.218 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.225 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.225 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2408ac-3a41-4b00-b2fb-4f628f241dfc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.251 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393822.2506952, 797fde07-e88a-4d6e-a1a3-25e22c66097c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.252 2 INFO nova.compute.manager [-] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.256 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.275 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac58d8b-da1e-4787-b491-0821e2326a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.279 2 DEBUG nova.compute.manager [None req-8af441cb-d89e-430a-9b66-06914f7fbad8 - - - - - -] [instance: 797fde07-e88a-4d6e-a1a3-25e22c66097c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:37 np0005465604 NetworkManager[45129]: <info>  [1759393837.2893] manager: (tapf8df0af1-10): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.288 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b2cc22b7-77da-4861-a932-b40ceb6dceee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.296 2 INFO nova.compute.manager [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Took 10.65 seconds to build instance.#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.317 2 DEBUG oslo_concurrency.lockutils [None req-7dd3c827-3839-4605-b150-0e92b704ae5e e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.358 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e5f8f7-f554-4fa9-ac7b-951df42c468a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.364 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[48e24c3b-ed53-4dab-aec3-1755671199ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:30:37 np0005465604 NetworkManager[45129]: <info>  [1759393837.4027] device (tapf8df0af1-10): carrier: link connected
Oct  2 04:30:37 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ac7bf907-dff9-4c4f-82dc-5e154a2f8001 does not exist
Oct  2 04:30:37 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b93ef181-3e31-4bef-8650-78da064594ab does not exist
Oct  2 04:30:37 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9f39f035-97f1-42af-9939-f69711c5730a does not exist
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.419 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3d9c3d-f5ba-4737-8600-f5adb26cfb31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.450 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e6186142-e491-441a-80dc-ea7784f2a1a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471600, 'reachable_time': 26145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320873, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.476 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.4763658, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.477 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Started (Lifecycle Event)#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.480 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[290ba45e-3131-46a5-9562-04780cc29c95]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:6ddc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471600, 'tstamp': 471600}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320892, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:30:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.499 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.504 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.4772487, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.505 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.513 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[59232c45-77ad-427d-9298-9e7b6fdfd8dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471600, 'reachable_time': 26145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320895, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.521 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.527 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.552 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.570 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9d2263-6672-4b30-b485-4664a3c3b028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.588 2 DEBUG nova.compute.manager [req-4a91ecc1-c153-4b05-b58e-77c4cbc81742 req-33dd4861-2751-4875-8d18-6afbc11cdd5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.588 2 DEBUG oslo_concurrency.lockutils [req-4a91ecc1-c153-4b05-b58e-77c4cbc81742 req-33dd4861-2751-4875-8d18-6afbc11cdd5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.588 2 DEBUG oslo_concurrency.lockutils [req-4a91ecc1-c153-4b05-b58e-77c4cbc81742 req-33dd4861-2751-4875-8d18-6afbc11cdd5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.588 2 DEBUG oslo_concurrency.lockutils [req-4a91ecc1-c153-4b05-b58e-77c4cbc81742 req-33dd4861-2751-4875-8d18-6afbc11cdd5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.589 2 DEBUG nova.compute.manager [req-4a91ecc1-c153-4b05-b58e-77c4cbc81742 req-33dd4861-2751-4875-8d18-6afbc11cdd5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Processing event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.589 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.593 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393837.5928373, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.593 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.594 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.597 2 INFO nova.virt.libvirt.driver [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance spawned successfully.#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.597 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.613 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.619 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.624 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.624 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.625 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.625 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.625 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.626 2 DEBUG nova.virt.libvirt.driver [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.648 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.655 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a855b2c8-8bac-4af4-8583-08b7e005eb8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.657 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.657 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.658 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8df0af1-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:37 np0005465604 kernel: tapf8df0af1-10: entered promiscuous mode
Oct  2 04:30:37 np0005465604 NetworkManager[45129]: <info>  [1759393837.6607] manager: (tapf8df0af1-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.670 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8df0af1-10, col_values=(('external_ids', {'iface-id': '1405e724-f2f6-4a95-8848-550131e62910'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:37Z|00531|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.685 2 INFO nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Took 10.51 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.685 2 DEBUG nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.695 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.696 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0506ba88-9a55-4872-9409-439aacec2cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.697 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:30:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:37.698 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'env', 'PROCESS_TAG=haproxy-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8df0af1-1767-419a-8500-c28fbf45ae4b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.732 2 DEBUG nova.network.neutron [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Updated VIF entry in instance network info cache for port 25be6c2d-0038-4133-8d21-845a3220d33e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.732 2 DEBUG nova.network.neutron [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Updating instance_info_cache with network_info: [{"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.745 2 INFO nova.compute.manager [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Took 12.12 seconds to build instance.#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.747 2 DEBUG oslo_concurrency.lockutils [req-eaf1349a-6b0b-405c-80bf-c48e73446118 req-b55a702b-049a-4fa0-b71e-fc7f58e8c6a1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.763 2 DEBUG oslo_concurrency.lockutils [None req-dc6383c5-159c-44cb-a9ea-3401991e212f 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:37 np0005465604 nova_compute[260603]: 2025-10-02 08:30:37.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:38 np0005465604 podman[321036]: 2025-10-02 08:30:38.072975809 +0000 UTC m=+0.044589115 container create b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:30:38 np0005465604 podman[321048]: 2025-10-02 08:30:38.108406419 +0000 UTC m=+0.063084159 container create 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:30:38 np0005465604 systemd[1]: Started libpod-conmon-b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72.scope.
Oct  2 04:30:38 np0005465604 podman[321036]: 2025-10-02 08:30:38.053912327 +0000 UTC m=+0.025525663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:30:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:30:38 np0005465604 systemd[1]: Started libpod-conmon-720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e.scope.
Oct  2 04:30:38 np0005465604 podman[321036]: 2025-10-02 08:30:38.168824603 +0000 UTC m=+0.140437939 container init b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:30:38 np0005465604 podman[321048]: 2025-10-02 08:30:38.072263367 +0000 UTC m=+0.026941117 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:30:38 np0005465604 podman[321036]: 2025-10-02 08:30:38.178413801 +0000 UTC m=+0.150027137 container start b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 04:30:38 np0005465604 podman[321036]: 2025-10-02 08:30:38.182343522 +0000 UTC m=+0.153956868 container attach b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct  2 04:30:38 np0005465604 cranky_visvesvaraya[321069]: 167 167
Oct  2 04:30:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:30:38 np0005465604 podman[321036]: 2025-10-02 08:30:38.185423528 +0000 UTC m=+0.157036844 container died b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 04:30:38 np0005465604 systemd[1]: libpod-b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72.scope: Deactivated successfully.
Oct  2 04:30:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11063ab83b9a9c00fd7baf0f7f968c64e020d49f1d6528b03e58b1a6ea92a56d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:38 np0005465604 podman[321048]: 2025-10-02 08:30:38.209794344 +0000 UTC m=+0.164472114 container init 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:30:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay-347c5ee217ac598c0db47c598a2201ee99a86436c0f65e31b6a74989a96c1561-merged.mount: Deactivated successfully.
Oct  2 04:30:38 np0005465604 podman[321048]: 2025-10-02 08:30:38.222729625 +0000 UTC m=+0.177407365 container start 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:30:38 np0005465604 podman[321036]: 2025-10-02 08:30:38.245510353 +0000 UTC m=+0.217123669 container remove b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_visvesvaraya, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:30:38 np0005465604 systemd[1]: libpod-conmon-b37c231dbb09e6a981bd83af859088df07b38ce73b3487d78d6f353e6f9fbc72.scope: Deactivated successfully.
Oct  2 04:30:38 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [NOTICE]   (321092) : New worker (321096) forked
Oct  2 04:30:38 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [NOTICE]   (321092) : Loading success.
Oct  2 04:30:38 np0005465604 podman[321110]: 2025-10-02 08:30:38.429382628 +0000 UTC m=+0.042342805 container create 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 04:30:38 np0005465604 systemd[1]: Started libpod-conmon-5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c.scope.
Oct  2 04:30:38 np0005465604 nova_compute[260603]: 2025-10-02 08:30:38.499 2 DEBUG nova.compute.manager [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Received event network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:38 np0005465604 nova_compute[260603]: 2025-10-02 08:30:38.500 2 DEBUG oslo_concurrency.lockutils [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:38 np0005465604 nova_compute[260603]: 2025-10-02 08:30:38.500 2 DEBUG oslo_concurrency.lockutils [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:38 np0005465604 nova_compute[260603]: 2025-10-02 08:30:38.500 2 DEBUG oslo_concurrency.lockutils [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:38 np0005465604 nova_compute[260603]: 2025-10-02 08:30:38.500 2 DEBUG nova.compute.manager [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] No waiting events found dispatching network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:30:38 np0005465604 nova_compute[260603]: 2025-10-02 08:30:38.500 2 WARNING nova.compute.manager [req-ccfb574b-c422-4714-a5f5-b5720ac5c97e req-a59d3502-50d7-408b-aa95-572dfb1aa13b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Received unexpected event network-vif-plugged-25be6c2d-0038-4133-8d21-845a3220d33e for instance with vm_state active and task_state None.#033[00m
Oct  2 04:30:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:30:38 np0005465604 podman[321110]: 2025-10-02 08:30:38.412169314 +0000 UTC m=+0.025129481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:30:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:30:38 np0005465604 podman[321110]: 2025-10-02 08:30:38.529293508 +0000 UTC m=+0.142253705 container init 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006919304917952725 of space, bias 1.0, pg target 0.20757914753858175 quantized to 32 (current 32)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:30:38 np0005465604 podman[321110]: 2025-10-02 08:30:38.536331586 +0000 UTC m=+0.149291763 container start 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 04:30:38 np0005465604 podman[321110]: 2025-10-02 08:30:38.539738832 +0000 UTC m=+0.152699059 container attach 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct  2 04:30:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 3.6 MiB/s wr, 139 op/s
Oct  2 04:30:39 np0005465604 nifty_brattain[321126]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:30:39 np0005465604 nifty_brattain[321126]: --> relative data size: 1.0
Oct  2 04:30:39 np0005465604 nifty_brattain[321126]: --> All data devices are unavailable
Oct  2 04:30:39 np0005465604 systemd[1]: libpod-5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c.scope: Deactivated successfully.
Oct  2 04:30:39 np0005465604 systemd[1]: libpod-5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c.scope: Consumed 1.066s CPU time.
Oct  2 04:30:39 np0005465604 conmon[321126]: conmon 5c010b6eab3482eedd6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c.scope/container/memory.events
Oct  2 04:30:39 np0005465604 podman[321110]: 2025-10-02 08:30:39.690561491 +0000 UTC m=+1.303521668 container died 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:30:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-64c1538d29820be91413492c99f9f55b30d09a9bd9748e10957d3c8fe998d053-merged.mount: Deactivated successfully.
Oct  2 04:30:39 np0005465604 podman[321110]: 2025-10-02 08:30:39.750387117 +0000 UTC m=+1.363347294 container remove 5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:30:39 np0005465604 systemd[1]: libpod-conmon-5c010b6eab3482eedd6c98d94f7e578184479d9b6e5c9f48e5ed1f41503b4a4c.scope: Deactivated successfully.
Oct  2 04:30:39 np0005465604 nova_compute[260603]: 2025-10-02 08:30:39.844 2 DEBUG nova.compute.manager [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:39 np0005465604 nova_compute[260603]: 2025-10-02 08:30:39.845 2 DEBUG oslo_concurrency.lockutils [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:39 np0005465604 nova_compute[260603]: 2025-10-02 08:30:39.845 2 DEBUG oslo_concurrency.lockutils [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:39 np0005465604 nova_compute[260603]: 2025-10-02 08:30:39.846 2 DEBUG oslo_concurrency.lockutils [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:39 np0005465604 nova_compute[260603]: 2025-10-02 08:30:39.846 2 DEBUG nova.compute.manager [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] No waiting events found dispatching network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:30:39 np0005465604 nova_compute[260603]: 2025-10-02 08:30:39.846 2 WARNING nova.compute.manager [req-6e4cf3e0-19fb-4bf0-9eea-c323d22bace1 req-9fcc895a-e6ab-43e1-9e39-9b8bc2fc7356 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received unexpected event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f for instance with vm_state active and task_state None.#033[00m
Oct  2 04:30:40 np0005465604 nova_compute[260603]: 2025-10-02 08:30:40.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:40 np0005465604 podman[321306]: 2025-10-02 08:30:40.591764725 +0000 UTC m=+0.061240261 container create 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:30:40 np0005465604 systemd[1]: Started libpod-conmon-29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504.scope.
Oct  2 04:30:40 np0005465604 podman[321306]: 2025-10-02 08:30:40.572203878 +0000 UTC m=+0.041679414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:30:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:30:40 np0005465604 podman[321306]: 2025-10-02 08:30:40.682392387 +0000 UTC m=+0.151867953 container init 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:30:40 np0005465604 podman[321306]: 2025-10-02 08:30:40.694038789 +0000 UTC m=+0.163514325 container start 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:30:40 np0005465604 podman[321306]: 2025-10-02 08:30:40.698423505 +0000 UTC m=+0.167899061 container attach 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 04:30:40 np0005465604 great_chaplygin[321322]: 167 167
Oct  2 04:30:40 np0005465604 podman[321306]: 2025-10-02 08:30:40.700895091 +0000 UTC m=+0.170370627 container died 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:30:40 np0005465604 systemd[1]: libpod-29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504.scope: Deactivated successfully.
Oct  2 04:30:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8130fa34f2286e2ac0ef4d32cb8a84a7e6388fb0f4468b1186d3952b7b025add-merged.mount: Deactivated successfully.
Oct  2 04:30:40 np0005465604 podman[321306]: 2025-10-02 08:30:40.74082448 +0000 UTC m=+0.210300026 container remove 29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:30:40 np0005465604 systemd[1]: libpod-conmon-29f9c27e0da8c88288be680119e124517ed7e0749ef9b069d29e767517830504.scope: Deactivated successfully.
Oct  2 04:30:40 np0005465604 podman[321345]: 2025-10-02 08:30:40.973869431 +0000 UTC m=+0.060444707 container create d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:30:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 3.3 MiB/s wr, 87 op/s
Oct  2 04:30:41 np0005465604 systemd[1]: Started libpod-conmon-d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99.scope.
Oct  2 04:30:41 np0005465604 podman[321345]: 2025-10-02 08:30:40.950939649 +0000 UTC m=+0.037514955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:30:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:30:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb6d57200532b35fcb517989e3bc088c95c38d97c3a6a22dfd5298854a4ab46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb6d57200532b35fcb517989e3bc088c95c38d97c3a6a22dfd5298854a4ab46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb6d57200532b35fcb517989e3bc088c95c38d97c3a6a22dfd5298854a4ab46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb6d57200532b35fcb517989e3bc088c95c38d97c3a6a22dfd5298854a4ab46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:41 np0005465604 podman[321345]: 2025-10-02 08:30:41.097010912 +0000 UTC m=+0.183586178 container init d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:30:41 np0005465604 podman[321345]: 2025-10-02 08:30:41.110134259 +0000 UTC m=+0.196709515 container start d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:30:41 np0005465604 podman[321345]: 2025-10-02 08:30:41.112839753 +0000 UTC m=+0.199415029 container attach d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:30:41 np0005465604 nova_compute[260603]: 2025-10-02 08:30:41.544 2 INFO nova.compute.manager [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Rebuilding instance#033[00m
Oct  2 04:30:41 np0005465604 nova_compute[260603]: 2025-10-02 08:30:41.668 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393826.6670313, 9924ce7f-b701-4560-b2c5-67f673b45807 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:41 np0005465604 nova_compute[260603]: 2025-10-02 08:30:41.669 2 INFO nova.compute.manager [-] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:30:41 np0005465604 nova_compute[260603]: 2025-10-02 08:30:41.700 2 DEBUG nova.compute.manager [None req-6fdb6802-d71c-49bb-89c1-cb233ca7cee8 - - - - - -] [instance: 9924ce7f-b701-4560-b2c5-67f673b45807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]: {
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:    "0": [
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:        {
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "devices": [
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "/dev/loop3"
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            ],
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_name": "ceph_lv0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_size": "21470642176",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "name": "ceph_lv0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "tags": {
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.cluster_name": "ceph",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.crush_device_class": "",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.encrypted": "0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.osd_id": "0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.type": "block",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.vdo": "0"
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            },
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "type": "block",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "vg_name": "ceph_vg0"
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:        }
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:    ],
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:    "1": [
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:        {
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "devices": [
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "/dev/loop4"
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            ],
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_name": "ceph_lv1",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_size": "21470642176",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "name": "ceph_lv1",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "tags": {
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.cluster_name": "ceph",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.crush_device_class": "",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.encrypted": "0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.osd_id": "1",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.type": "block",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.vdo": "0"
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            },
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "type": "block",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "vg_name": "ceph_vg1"
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:        }
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:    ],
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:    "2": [
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:        {
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "devices": [
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "/dev/loop5"
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            ],
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_name": "ceph_lv2",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_size": "21470642176",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "name": "ceph_lv2",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "tags": {
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.cluster_name": "ceph",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.crush_device_class": "",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.encrypted": "0",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.osd_id": "2",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.type": "block",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:                "ceph.vdo": "0"
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            },
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "type": "block",
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:            "vg_name": "ceph_vg2"
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:        }
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]:    ]
Oct  2 04:30:41 np0005465604 cranky_kepler[321362]: }
Oct  2 04:30:41 np0005465604 nova_compute[260603]: 2025-10-02 08:30:41.916 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'trusted_certs' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:41 np0005465604 nova_compute[260603]: 2025-10-02 08:30:41.933 2 DEBUG nova.compute.manager [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:41 np0005465604 systemd[1]: libpod-d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99.scope: Deactivated successfully.
Oct  2 04:30:41 np0005465604 nova_compute[260603]: 2025-10-02 08:30:41.984 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_requests' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:41 np0005465604 podman[321371]: 2025-10-02 08:30:41.997160213 +0000 UTC m=+0.033483600 container died d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.004 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_devices' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.016 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'resources' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-eeb6d57200532b35fcb517989e3bc088c95c38d97c3a6a22dfd5298854a4ab46-merged.mount: Deactivated successfully.
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.030 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'migration_context' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.044 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.049 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:30:42 np0005465604 podman[321371]: 2025-10-02 08:30:42.076457503 +0000 UTC m=+0.112780870 container remove d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 04:30:42 np0005465604 systemd[1]: libpod-conmon-d7cd6cee8e6dbe21681e28cfeee432e3fd306ebe574a40ac3f68a6c727304b99.scope: Deactivated successfully.
Oct  2 04:30:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.737 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.737 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.737 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.738 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.738 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.739 2 INFO nova.compute.manager [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Terminating instance#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.740 2 DEBUG nova.compute.manager [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:42 np0005465604 kernel: tap25be6c2d-00 (unregistering): left promiscuous mode
Oct  2 04:30:42 np0005465604 NetworkManager[45129]: <info>  [1759393842.8000] device (tap25be6c2d-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:42Z|00532|binding|INFO|Releasing lport 25be6c2d-0038-4133-8d21-845a3220d33e from this chassis (sb_readonly=0)
Oct  2 04:30:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:42Z|00533|binding|INFO|Setting lport 25be6c2d-0038-4133-8d21-845a3220d33e down in Southbound
Oct  2 04:30:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:42Z|00534|binding|INFO|Removing iface tap25be6c2d-00 ovn-installed in OVS
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:42.818 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:65:1d 10.100.0.7'], port_security=['fa:16:3e:75:65:1d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-688aa430-74b8-4500-af80-d797f0ec4310', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '649aece3b28a477fa6e0d1dc7b1d5ade', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4630aa5e-be12-47d4-ae0b-793f01b010a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f759ed2-fdc4-4b2b-ac94-db6a43d5407a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=25be6c2d-0038-4133-8d21-845a3220d33e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:42.819 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 25be6c2d-0038-4133-8d21-845a3220d33e in datapath 688aa430-74b8-4500-af80-d797f0ec4310 unbound from our chassis#033[00m
Oct  2 04:30:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:42.821 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 688aa430-74b8-4500-af80-d797f0ec4310, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:30:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:42.823 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72bc6529-a3b9-4283-96c4-a030ddc74d57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:42.824 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310 namespace which is not needed anymore#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:42 np0005465604 podman[321523]: 2025-10-02 08:30:42.83370616 +0000 UTC m=+0.065257326 container create 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 04:30:42 np0005465604 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Oct  2 04:30:42 np0005465604 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003c.scope: Consumed 6.398s CPU time.
Oct  2 04:30:42 np0005465604 systemd-machined[214636]: Machine qemu-65-instance-0000003c terminated.
Oct  2 04:30:42 np0005465604 systemd[1]: Started libpod-conmon-97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46.scope.
Oct  2 04:30:42 np0005465604 podman[321523]: 2025-10-02 08:30:42.793353028 +0000 UTC m=+0.024904284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:30:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:30:42 np0005465604 podman[321523]: 2025-10-02 08:30:42.926716796 +0000 UTC m=+0.158267972 container init 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:30:42 np0005465604 podman[321523]: 2025-10-02 08:30:42.936655134 +0000 UTC m=+0.168206310 container start 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:30:42 np0005465604 podman[321523]: 2025-10-02 08:30:42.940078821 +0000 UTC m=+0.171630007 container attach 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 04:30:42 np0005465604 heuristic_carver[321552]: 167 167
Oct  2 04:30:42 np0005465604 systemd[1]: libpod-97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46.scope: Deactivated successfully.
Oct  2 04:30:42 np0005465604 conmon[321552]: conmon 97f9ace54191eb034604 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46.scope/container/memory.events
Oct  2 04:30:42 np0005465604 podman[321523]: 2025-10-02 08:30:42.945516379 +0000 UTC m=+0.177067545 container died 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:30:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5ca7b1103246e96db7afcd9e3e00de1bfc189cf0af50443656553b0c7c50bea5-merged.mount: Deactivated successfully.
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.980 2 INFO nova.virt.libvirt.driver [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Instance destroyed successfully.#033[00m
Oct  2 04:30:42 np0005465604 nova_compute[260603]: 2025-10-02 08:30:42.982 2 DEBUG nova.objects.instance [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lazy-loading 'resources' on Instance uuid 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:42 np0005465604 podman[321523]: 2025-10-02 08:30:42.994143658 +0000 UTC m=+0.225694824 container remove 97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_carver, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:30:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 134 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.3 MiB/s wr, 180 op/s
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.000 2 DEBUG nova.virt.libvirt.vif [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:30:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-275620590',display_name='tempest-ServerMetadataTestJSON-server-275620590',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-275620590',id=60,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:30:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='649aece3b28a477fa6e0d1dc7b1d5ade',ramdisk_id='',reservation_id='r-h0hzwcr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-1153289932',owner_user_name='tempest-ServerMetadataTestJSON-1153289932-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:30:42Z,user_data=None,user_id='e7ed2cfbcca04b4ca0a07910a0319456',uuid=3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.000 2 DEBUG nova.network.os_vif_util [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converting VIF {"id": "25be6c2d-0038-4133-8d21-845a3220d33e", "address": "fa:16:3e:75:65:1d", "network": {"id": "688aa430-74b8-4500-af80-d797f0ec4310", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1945555922-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "649aece3b28a477fa6e0d1dc7b1d5ade", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25be6c2d-00", "ovs_interfaceid": "25be6c2d-0038-4133-8d21-845a3220d33e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.001 2 DEBUG nova.network.os_vif_util [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.001 2 DEBUG os_vif [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25be6c2d-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:30:43 np0005465604 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [NOTICE]   (320830) : haproxy version is 2.8.14-c23fe91
Oct  2 04:30:43 np0005465604 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [NOTICE]   (320830) : path to executable is /usr/sbin/haproxy
Oct  2 04:30:43 np0005465604 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [WARNING]  (320830) : Exiting Master process...
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.012 2 INFO os_vif [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=25be6c2d-0038-4133-8d21-845a3220d33e,network=Network(688aa430-74b8-4500-af80-d797f0ec4310),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25be6c2d-00')#033[00m
Oct  2 04:30:43 np0005465604 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [ALERT]    (320830) : Current worker (320832) exited with code 143 (Terminated)
Oct  2 04:30:43 np0005465604 neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310[320825]: [WARNING]  (320830) : All workers exited. Exiting... (0)
Oct  2 04:30:43 np0005465604 systemd[1]: libpod-cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3.scope: Deactivated successfully.
Oct  2 04:30:43 np0005465604 podman[321563]: 2025-10-02 08:30:43.024046146 +0000 UTC m=+0.083629296 container died cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:30:43 np0005465604 systemd[1]: libpod-conmon-97f9ace54191eb0346040176b3907acd6d07bf6c67785ba255dad3beb2b97e46.scope: Deactivated successfully.
Oct  2 04:30:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3-userdata-shm.mount: Deactivated successfully.
Oct  2 04:30:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-327ede22813a349afb9df738f72940c2bb921ee83034aa092f6f10580f45e456-merged.mount: Deactivated successfully.
Oct  2 04:30:43 np0005465604 podman[321563]: 2025-10-02 08:30:43.070185118 +0000 UTC m=+0.129768268 container cleanup cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:30:43 np0005465604 systemd[1]: libpod-conmon-cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3.scope: Deactivated successfully.
Oct  2 04:30:43 np0005465604 podman[321638]: 2025-10-02 08:30:43.168408995 +0000 UTC m=+0.058368641 container remove cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 04:30:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.180 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a3cf56dd-ac9f-438c-9852-b46d546f827c]: (4, ('Thu Oct  2 08:30:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310 (cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3)\ncf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3\nThu Oct  2 08:30:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310 (cf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3)\ncf96258f123ff04725fc1b6335b42a38e49716be56d497c9efc78340536077b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.187 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[97b7ba86-3027-460b-9070-cc298bf8e678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.188 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap688aa430-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:43 np0005465604 kernel: tap688aa430-70: left promiscuous mode
Oct  2 04:30:43 np0005465604 podman[321652]: 2025-10-02 08:30:43.209579323 +0000 UTC m=+0.069866509 container create 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.225 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6d1563-04aa-4e39-84f4-ffa04b60d83d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.255 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[75487c4b-c519-4b6d-b2a5-878cffd2dd49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.258 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[74ee4902-7ad6-419a-9f14-3d0dfd02530f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:43 np0005465604 podman[321652]: 2025-10-02 08:30:43.180936205 +0000 UTC m=+0.041223431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:30:43 np0005465604 systemd[1]: Started libpod-conmon-53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4.scope.
Oct  2 04:30:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.281 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd630b2-fea2-4d17-b910-17f0c3170ee3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471476, 'reachable_time': 36929, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321676, 'error': None, 'target': 'ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:43 np0005465604 systemd[1]: run-netns-ovnmeta\x2d688aa430\x2d74b8\x2d4500\x2daf80\x2dd797f0ec4310.mount: Deactivated successfully.
Oct  2 04:30:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.284 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-688aa430-74b8-4500-af80-d797f0ec4310 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:30:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:43.285 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ec0e0d-362b-4d6f-a92a-4289263e0446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:30:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c484d04a127be9860bf60edec6feefb1e8284f2f553ba7d14241aecacb659b35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c484d04a127be9860bf60edec6feefb1e8284f2f553ba7d14241aecacb659b35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c484d04a127be9860bf60edec6feefb1e8284f2f553ba7d14241aecacb659b35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c484d04a127be9860bf60edec6feefb1e8284f2f553ba7d14241aecacb659b35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:30:43 np0005465604 podman[321652]: 2025-10-02 08:30:43.341165556 +0000 UTC m=+0.201452762 container init 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:30:43 np0005465604 podman[321652]: 2025-10-02 08:30:43.34997257 +0000 UTC m=+0.210259756 container start 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:30:43 np0005465604 podman[321652]: 2025-10-02 08:30:43.354339345 +0000 UTC m=+0.214626551 container attach 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.464 2 INFO nova.virt.libvirt.driver [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Deleting instance files /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_del#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.465 2 INFO nova.virt.libvirt.driver [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Deletion of /var/lib/nova/instances/3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5_del complete#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.519 2 INFO nova.compute.manager [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.520 2 DEBUG oslo.service.loopingcall [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.521 2 DEBUG nova.compute.manager [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:30:43 np0005465604 nova_compute[260603]: 2025-10-02 08:30:43.521 2 DEBUG nova.network.neutron [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:30:44 np0005465604 nova_compute[260603]: 2025-10-02 08:30:44.217 2 DEBUG nova.network.neutron [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:30:44 np0005465604 nova_compute[260603]: 2025-10-02 08:30:44.242 2 INFO nova.compute.manager [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Took 0.72 seconds to deallocate network for instance.#033[00m
Oct  2 04:30:44 np0005465604 nova_compute[260603]: 2025-10-02 08:30:44.297 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:44 np0005465604 nova_compute[260603]: 2025-10-02 08:30:44.299 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:44 np0005465604 nova_compute[260603]: 2025-10-02 08:30:44.320 2 DEBUG nova.compute.manager [req-a2a15143-9e97-4fe2-bc77-275ab77c45b0 req-6c2215cf-e4a8-4ced-9585-2addd2173fe4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Received event network-vif-deleted-25be6c2d-0038-4133-8d21-845a3220d33e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:44 np0005465604 nova_compute[260603]: 2025-10-02 08:30:44.374 2 DEBUG oslo_concurrency.processutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]: {
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "osd_id": 2,
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "type": "bluestore"
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:    },
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "osd_id": 1,
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "type": "bluestore"
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:    },
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "osd_id": 0,
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:        "type": "bluestore"
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]:    }
Oct  2 04:30:44 np0005465604 modest_dubinsky[321677]: }
Oct  2 04:30:44 np0005465604 systemd[1]: libpod-53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4.scope: Deactivated successfully.
Oct  2 04:30:44 np0005465604 conmon[321677]: conmon 53815ef94f8085ed38d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4.scope/container/memory.events
Oct  2 04:30:44 np0005465604 systemd[1]: libpod-53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4.scope: Consumed 1.134s CPU time.
Oct  2 04:30:44 np0005465604 podman[321652]: 2025-10-02 08:30:44.480433667 +0000 UTC m=+1.340720863 container died 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:30:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c484d04a127be9860bf60edec6feefb1e8284f2f553ba7d14241aecacb659b35-merged.mount: Deactivated successfully.
Oct  2 04:30:44 np0005465604 podman[321652]: 2025-10-02 08:30:44.551430029 +0000 UTC m=+1.411717215 container remove 53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:30:44 np0005465604 systemd[1]: libpod-conmon-53815ef94f8085ed38d298dd5a564a4c39b0eb66812714be096ad009031724f4.scope: Deactivated successfully.
Oct  2 04:30:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:30:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:30:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:30:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:30:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 05bef98e-e718-4019-99eb-f7a32da2ad06 does not exist
Oct  2 04:30:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev dde8e290-268c-4bf2-a03f-fbf75d4771e0 does not exist
Oct  2 04:30:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:30:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191673915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:30:44 np0005465604 nova_compute[260603]: 2025-10-02 08:30:44.916 2 DEBUG oslo_concurrency.processutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:44 np0005465604 nova_compute[260603]: 2025-10-02 08:30:44.924 2 DEBUG nova.compute.provider_tree [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:30:44 np0005465604 nova_compute[260603]: 2025-10-02 08:30:44.942 2 DEBUG nova.scheduler.client.report [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:30:44 np0005465604 nova_compute[260603]: 2025-10-02 08:30:44.961 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:44 np0005465604 nova_compute[260603]: 2025-10-02 08:30:44.987 2 INFO nova.scheduler.client.report [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Deleted allocations for instance 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5#033[00m
Oct  2 04:30:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 118 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 668 KiB/s wr, 151 op/s
Oct  2 04:30:45 np0005465604 nova_compute[260603]: 2025-10-02 08:30:45.054 2 DEBUG oslo_concurrency.lockutils [None req-1e3da77d-3b29-479e-a908-6ba7e61a7791 e7ed2cfbcca04b4ca0a07910a0319456 649aece3b28a477fa6e0d1dc7b1d5ade - - default default] Lock "3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:30:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:30:46 np0005465604 podman[321796]: 2025-10-02 08:30:46.04392259 +0000 UTC m=+0.092013695 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 04:30:46 np0005465604 podman[321795]: 2025-10-02 08:30:46.120547867 +0000 UTC m=+0.167605441 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:30:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 118 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 148 op/s
Oct  2 04:30:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:47 np0005465604 nova_compute[260603]: 2025-10-02 08:30:47.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:48 np0005465604 nova_compute[260603]: 2025-10-02 08:30:48.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 88 MiB data, 500 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 174 op/s
Oct  2 04:30:49 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:49Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:87:db 10.100.0.3
Oct  2 04:30:49 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:49Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:87:db 10.100.0.3
Oct  2 04:30:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:50Z|00535|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct  2 04:30:50 np0005465604 nova_compute[260603]: 2025-10-02 08:30:50.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 88 MiB data, 500 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.2 KiB/s wr, 139 op/s
Oct  2 04:30:52 np0005465604 podman[321840]: 2025-10-02 08:30:52.020480055 +0000 UTC m=+0.083729649 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_managed=true, config_id=iscsid)
Oct  2 04:30:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:52 np0005465604 nova_compute[260603]: 2025-10-02 08:30:52.125 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:30:52 np0005465604 nova_compute[260603]: 2025-10-02 08:30:52.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 107 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.3 MiB/s wr, 186 op/s
Oct  2 04:30:53 np0005465604 nova_compute[260603]: 2025-10-02 08:30:53.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:54 np0005465604 kernel: tap4298d267-ed (unregistering): left promiscuous mode
Oct  2 04:30:54 np0005465604 NetworkManager[45129]: <info>  [1759393854.4478] device (tap4298d267-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:30:54 np0005465604 nova_compute[260603]: 2025-10-02 08:30:54.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:54Z|00536|binding|INFO|Releasing lport 4298d267-ede8-417b-9e26-a2533908497f from this chassis (sb_readonly=0)
Oct  2 04:30:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:54Z|00537|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f down in Southbound
Oct  2 04:30:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:54Z|00538|binding|INFO|Removing iface tap4298d267-ed ovn-installed in OVS
Oct  2 04:30:54 np0005465604 nova_compute[260603]: 2025-10-02 08:30:54.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.468 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:87:db 10.100.0.3'], port_security=['fa:16:3e:50:87:db 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4298d267-ede8-417b-9e26-a2533908497f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.469 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4298d267-ede8-417b-9e26-a2533908497f in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b unbound from our chassis#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.470 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8df0af1-1767-419a-8500-c28fbf45ae4b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.471 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72267fe3-4f79-4985-b842-f9fc565746e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.472 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace which is not needed anymore#033[00m
Oct  2 04:30:54 np0005465604 nova_compute[260603]: 2025-10-02 08:30:54.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:54 np0005465604 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Oct  2 04:30:54 np0005465604 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003b.scope: Consumed 13.241s CPU time.
Oct  2 04:30:54 np0005465604 systemd-machined[214636]: Machine qemu-66-instance-0000003b terminated.
Oct  2 04:30:54 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [NOTICE]   (321092) : haproxy version is 2.8.14-c23fe91
Oct  2 04:30:54 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [NOTICE]   (321092) : path to executable is /usr/sbin/haproxy
Oct  2 04:30:54 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [WARNING]  (321092) : Exiting Master process...
Oct  2 04:30:54 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [ALERT]    (321092) : Current worker (321096) exited with code 143 (Terminated)
Oct  2 04:30:54 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[321074]: [WARNING]  (321092) : All workers exited. Exiting... (0)
Oct  2 04:30:54 np0005465604 systemd[1]: libpod-720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e.scope: Deactivated successfully.
Oct  2 04:30:54 np0005465604 podman[321885]: 2025-10-02 08:30:54.612638618 +0000 UTC m=+0.048266329 container died 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:30:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e-userdata-shm.mount: Deactivated successfully.
Oct  2 04:30:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-11063ab83b9a9c00fd7baf0f7f968c64e020d49f1d6528b03e58b1a6ea92a56d-merged.mount: Deactivated successfully.
Oct  2 04:30:54 np0005465604 podman[321885]: 2025-10-02 08:30:54.657542641 +0000 UTC m=+0.093170342 container cleanup 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 04:30:54 np0005465604 systemd[1]: libpod-conmon-720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e.scope: Deactivated successfully.
Oct  2 04:30:54 np0005465604 podman[321916]: 2025-10-02 08:30:54.755309885 +0000 UTC m=+0.064894485 container remove 720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.762 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f202f875-cd52-41be-b5da-2b4d300f1c3e]: (4, ('Thu Oct  2 08:30:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e)\n720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e\nThu Oct  2 08:30:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e)\n720e44202bf82205c942947d6c3ca0092e1c722123b4549acaa0eeac5877e45e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.764 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[006692f7-9d43-4719-8251-b3dfaed7f7bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.765 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:54 np0005465604 nova_compute[260603]: 2025-10-02 08:30:54.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:54 np0005465604 kernel: tapf8df0af1-10: left promiscuous mode
Oct  2 04:30:54 np0005465604 nova_compute[260603]: 2025-10-02 08:30:54.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.806 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ba934cb2-3057-4846-9a51-d7e88ef3be72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.835 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[796c070c-6089-4bc1-9064-5b6d0ab7e0f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.836 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9778050f-8acf-4f7c-a724-d147b515c554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.855 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[925e8405-6588-45a4-a032-e8cac74b0853]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471586, 'reachable_time': 30620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321946, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.857 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:30:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:54.857 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b7809e-a41f-4cd1-9d4d-708bd06aff5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:54 np0005465604 systemd[1]: run-netns-ovnmeta\x2df8df0af1\x2d1767\x2d419a\x2d8500\x2dc28fbf45ae4b.mount: Deactivated successfully.
Oct  2 04:30:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 121 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 927 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.143 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.148 2 INFO nova.virt.libvirt.driver [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance destroyed successfully.#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.152 2 INFO nova.virt.libvirt.driver [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance destroyed successfully.#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.152 2 DEBUG nova.virt.libvirt.vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:30:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'
},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:41Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.153 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.153 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.153 2 DEBUG os_vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.155 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4298d267-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.209 2 INFO os_vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed')#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.238 2 DEBUG nova.compute.manager [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-unplugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.239 2 DEBUG oslo_concurrency.lockutils [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.240 2 DEBUG oslo_concurrency.lockutils [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.240 2 DEBUG oslo_concurrency.lockutils [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.241 2 DEBUG nova.compute.manager [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] No waiting events found dispatching network-vif-unplugged-4298d267-ede8-417b-9e26-a2533908497f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.241 2 WARNING nova.compute.manager [req-ec30be26-2eb8-4b60-b885-90e3dcd6798e req-b754e21a-2f8d-4405-a3c7-beba2c3a6a58 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received unexpected event network-vif-unplugged-4298d267-ede8-417b-9e26-a2533908497f for instance with vm_state active and task_state rebuilding.#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.579 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.579 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.589 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.589 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.599 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.603 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.677 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deleting instance files /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_del#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.678 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deletion of /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_del complete#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.705 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.706 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.707 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.717 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.717 2 INFO nova.compute.claims [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.813 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.814 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating image(s)#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.841 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.871 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.899 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:55 np0005465604 nova_compute[260603]: 2025-10-02 08:30:55.904 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.004 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.005 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.005 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.006 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:56 np0005465604 podman[322020]: 2025-10-02 08:30:56.035078454 +0000 UTC m=+0.089185699 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.040 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.048 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.089 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.424 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.489 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] resizing rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:30:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:30:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777630118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.561 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.568 2 DEBUG nova.compute.provider_tree [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.585 2 DEBUG nova.scheduler.client.report [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.636 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.637 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.640 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.646 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.647 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Ensure instance console log exists: /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.648 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.648 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.649 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.652 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start _get_guest_xml network_info=[{"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.655 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.656 2 INFO nova.compute.claims [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.662 2 WARNING nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.667 2 DEBUG nova.virt.libvirt.host [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.668 2 DEBUG nova.virt.libvirt.host [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.671 2 DEBUG nova.virt.libvirt.host [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.671 2 DEBUG nova.virt.libvirt.host [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.672 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.672 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.673 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.673 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.674 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.674 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.674 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.675 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.675 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.676 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.676 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.676 2 DEBUG nova.virt.hardware [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.677 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'vcpu_model' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.702 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.740 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.740 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.766 2 INFO nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.781 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.855 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.894 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.896 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.897 2 INFO nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Creating image(s)#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.916 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.936 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.957 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:56 np0005465604 nova_compute[260603]: 2025-10-02 08:30:56.960 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 121 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.004 2 DEBUG nova.policy [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9020ed38b31d46f88625374b2a76aef6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.041 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.042 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.043 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.043 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:57.056 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:57.058 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.066 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.069 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49e7e668-b62c-4e35-a4e2-bba540000961_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:30:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/153148737' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:30:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.125 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.149 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.153 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.312 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49e7e668-b62c-4e35-a4e2-bba540000961_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:30:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469259123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.356 2 DEBUG nova.compute.manager [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.357 2 DEBUG oslo_concurrency.lockutils [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.357 2 DEBUG oslo_concurrency.lockutils [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.357 2 DEBUG oslo_concurrency.lockutils [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.357 2 DEBUG nova.compute.manager [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] No waiting events found dispatching network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.358 2 WARNING nova.compute.manager [req-277084fa-f941-40ab-8ed9-c5014e463e53 req-2798be94-0b82-4f17-83c6-6aac2dfa609a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received unexpected event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.358 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.392 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] resizing rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.421 2 DEBUG nova.compute.provider_tree [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.441 2 DEBUG nova.scheduler.client.report [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.473 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.474 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.481 2 DEBUG nova.objects.instance [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'migration_context' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.501 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.501 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Ensure instance console log exists: /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.502 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.502 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.502 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.519 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.519 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.534 2 INFO nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.555 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:30:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:30:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644203822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.642 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.644 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.645 2 INFO nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Creating image(s)#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.666 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.689 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.713 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.718 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.760 2 DEBUG nova.policy [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'db9a3b1e6d93495f8c849658ffc4e535', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.763 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.766 2 DEBUG nova.virt.libvirt.vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:30:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-Serve
rDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:55Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.766 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.768 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.772 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  <uuid>f8f36f36-817a-4e64-8c57-c211cfc7b0ba</uuid>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  <name>instance-0000003b</name>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-949284758</nova:name>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:30:56</nova:creationTime>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <nova:user uuid="116b114f14f84e4cbd6cc966e29d82e7">tempest-ServerDiskConfigTestJSON-1277806880-project-member</nova:user>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <nova:project uuid="bce7493292bb47cfb7168bca89f78f4a">tempest-ServerDiskConfigTestJSON-1277806880</nova:project>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <nova:port uuid="4298d267-ede8-417b-9e26-a2533908497f">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <entry name="serial">f8f36f36-817a-4e64-8c57-c211cfc7b0ba</entry>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <entry name="uuid">f8f36f36-817a-4e64-8c57-c211cfc7b0ba</entry>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:50:87:db"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <target dev="tap4298d267-ed"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/console.log" append="off"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:30:57 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:30:57 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:30:57 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:30:57 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.773 2 DEBUG nova.compute.manager [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Preparing to wait for external event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.774 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.775 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.775 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.776 2 DEBUG nova.virt.libvirt.vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:30:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:55Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.777 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.778 2 DEBUG nova.network.os_vif_util [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.778 2 DEBUG os_vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.781 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4298d267-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.788 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4298d267-ed, col_values=(('external_ids', {'iface-id': '4298d267-ede8-417b-9e26-a2533908497f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:87:db', 'vm-uuid': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:30:57 np0005465604 NetworkManager[45129]: <info>  [1759393857.8165] manager: (tap4298d267-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.818 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.819 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.820 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.820 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.844 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.850 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 501f8cba-892f-489d-81b5-abb8669f49eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.902 2 INFO os_vif [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed')#033[00m
Oct  2 04:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.978 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.979 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.980 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No VIF found with MAC fa:16:3e:50:87:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:30:57 np0005465604 nova_compute[260603]: 2025-10-02 08:30:57.980 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Using config drive#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.014 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.025 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393842.974334, 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.025 2 INFO nova.compute.manager [-] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.054 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'ec2_ids' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.061 2 DEBUG nova.compute.manager [None req-acbd6688-02d9-4129-ad6b-0ca392ab7a37 - - - - - -] [instance: 3aa4fbdb-4e93-4b1f-ad65-6f8f8902f8e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.091 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'keypairs' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.166 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 501f8cba-892f-489d-81b5-abb8669f49eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.228 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Successfully created port: 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.236 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] resizing rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.328 2 DEBUG nova.objects.instance [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 501f8cba-892f-489d-81b5-abb8669f49eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.345 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.346 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Ensure instance console log exists: /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.346 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.346 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.347 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:30:58 np0005465604 nova_compute[260603]: 2025-10-02 08:30:58.774 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully created port: e19eb16b-f042-4e4d-922b-7057ad6ebb1c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:30:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 141 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 6.1 MiB/s wr, 177 op/s
Oct  2 04:30:59 np0005465604 nova_compute[260603]: 2025-10-02 08:30:59.126 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Creating config drive at /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config#033[00m
Oct  2 04:30:59 np0005465604 nova_compute[260603]: 2025-10-02 08:30:59.135 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl8es79rk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:59 np0005465604 nova_compute[260603]: 2025-10-02 08:30:59.291 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl8es79rk" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:59 np0005465604 nova_compute[260603]: 2025-10-02 08:30:59.330 2 DEBUG nova.storage.rbd_utils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:30:59 np0005465604 nova_compute[260603]: 2025-10-02 08:30:59.335 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:30:59 np0005465604 nova_compute[260603]: 2025-10-02 08:30:59.595 2 DEBUG oslo_concurrency.processutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config f8f36f36-817a-4e64-8c57-c211cfc7b0ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:30:59 np0005465604 nova_compute[260603]: 2025-10-02 08:30:59.596 2 INFO nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deleting local config drive /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba/disk.config because it was imported into RBD.#033[00m
Oct  2 04:30:59 np0005465604 kernel: tap4298d267-ed: entered promiscuous mode
Oct  2 04:30:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:59Z|00539|binding|INFO|Claiming lport 4298d267-ede8-417b-9e26-a2533908497f for this chassis.
Oct  2 04:30:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:59Z|00540|binding|INFO|4298d267-ede8-417b-9e26-a2533908497f: Claiming fa:16:3e:50:87:db 10.100.0.3
Oct  2 04:30:59 np0005465604 NetworkManager[45129]: <info>  [1759393859.6926] manager: (tap4298d267-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Oct  2 04:30:59 np0005465604 nova_compute[260603]: 2025-10-02 08:30:59.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.703 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:87:db 10.100.0.3'], port_security=['fa:16:3e:50:87:db 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4298d267-ede8-417b-9e26-a2533908497f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.705 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4298d267-ede8-417b-9e26-a2533908497f in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b bound to our chassis#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.708 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8df0af1-1767-419a-8500-c28fbf45ae4b#033[00m
Oct  2 04:30:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:59Z|00541|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f ovn-installed in OVS
Oct  2 04:30:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:30:59Z|00542|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f up in Southbound
Oct  2 04:30:59 np0005465604 nova_compute[260603]: 2025-10-02 08:30:59.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.736 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fcac67d9-5c17-487a-913a-c51229c3e24c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.737 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8df0af1-11 in ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.742 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8df0af1-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.743 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e23db3a9-f275-485b-b73f-566970156f7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:59 np0005465604 systemd-udevd[322666]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[772bb1d2-98fc-4766-8e34-08ab2589630b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:59 np0005465604 nova_compute[260603]: 2025-10-02 08:30:59.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:30:59 np0005465604 systemd-machined[214636]: New machine qemu-67-instance-0000003b.
Oct  2 04:30:59 np0005465604 NetworkManager[45129]: <info>  [1759393859.7638] device (tap4298d267-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:30:59 np0005465604 NetworkManager[45129]: <info>  [1759393859.7647] device (tap4298d267-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.767 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[03b84f7a-c2d8-4044-9738-1f0911dda344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:59 np0005465604 systemd[1]: Started Virtual Machine qemu-67-instance-0000003b.
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.796 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30895c0b-16d6-41e1-bc81-0b65f075516d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:59 np0005465604 nova_compute[260603]: 2025-10-02 08:30:59.802 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully created port: e8f18c99-1964-43d6-a955-7b5064c53b3a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.843 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3f109b-f5d9-41e7-8e4e-bb6141e3abb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.850 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91392d24-0b16-4165-9a2e-d310adef5460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:59 np0005465604 NetworkManager[45129]: <info>  [1759393859.8516] manager: (tapf8df0af1-10): new Veth device (/org/freedesktop/NetworkManager/Devices/227)
Oct  2 04:30:59 np0005465604 systemd-udevd[322669]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.902 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[be34082f-cc4e-4892-9c8c-7507f9f82cfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.906 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[551b291f-dcc4-4864-9c47-a57c42dc90fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:59 np0005465604 NetworkManager[45129]: <info>  [1759393859.9347] device (tapf8df0af1-10): carrier: link connected
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.943 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d05d5267-6216-4f82-ad53-19d817afb547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.964 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5bd29e-2f3f-4321-8c66-9550ef727ec9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473853, 'reachable_time': 22677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322698, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:30:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:30:59.981 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9044d75b-3c2f-4e32-8058-332a146b0a00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:6ddc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473853, 'tstamp': 473853}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322699, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.000 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bee5e8b6-7cec-4876-8605-0275b0ebb724]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473853, 'reachable_time': 22677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322700, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.038 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[80ac5f32-97c4-4d38-8ebe-e8dd1f6f0a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.104 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[379497a1-0cd5-4cd3-b048-1eb287c6cd2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.106 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.106 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.107 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8df0af1-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:00 np0005465604 NetworkManager[45129]: <info>  [1759393860.1105] manager: (tapf8df0af1-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Oct  2 04:31:00 np0005465604 kernel: tapf8df0af1-10: entered promiscuous mode
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.114 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8df0af1-10, col_values=(('external_ids', {'iface-id': '1405e724-f2f6-4a95-8848-550131e62910'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:00Z|00543|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.116 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.117 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b18eecc0-d590-4e91-9a1e-12ba52842387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.119 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:31:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:00.120 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'env', 'PROCESS_TAG=haproxy-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8df0af1-1767-419a-8500-c28fbf45ae4b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.225 2 DEBUG nova.compute.manager [req-6c16e03e-fe13-4d52-89da-676a1b23ca6a req-cb90e0ac-68f9-41b7-a495-d741fbddd900 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.226 2 DEBUG oslo_concurrency.lockutils [req-6c16e03e-fe13-4d52-89da-676a1b23ca6a req-cb90e0ac-68f9-41b7-a495-d741fbddd900 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.226 2 DEBUG oslo_concurrency.lockutils [req-6c16e03e-fe13-4d52-89da-676a1b23ca6a req-cb90e0ac-68f9-41b7-a495-d741fbddd900 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.226 2 DEBUG oslo_concurrency.lockutils [req-6c16e03e-fe13-4d52-89da-676a1b23ca6a req-cb90e0ac-68f9-41b7-a495-d741fbddd900 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.226 2 DEBUG nova.compute.manager [req-6c16e03e-fe13-4d52-89da-676a1b23ca6a req-cb90e0ac-68f9-41b7-a495-d741fbddd900 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Processing event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.501 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Successfully updated port: 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.518 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.518 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquired lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.519 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.520 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully created port: 35092937-9590-42d1-a022-549b740da3c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:31:00 np0005465604 podman[322774]: 2025-10-02 08:31:00.557267802 +0000 UTC m=+0.062211841 container create 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:31:00 np0005465604 systemd[1]: Started libpod-conmon-32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d.scope.
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.606 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for f8f36f36-817a-4e64-8c57-c211cfc7b0ba due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.607 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393860.6058617, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.607 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Started (Lifecycle Event)#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.610 2 DEBUG nova.compute.manager [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.613 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:31:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.619 2 INFO nova.virt.libvirt.driver [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance spawned successfully.#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.619 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:31:00 np0005465604 podman[322774]: 2025-10-02 08:31:00.525841796 +0000 UTC m=+0.030785865 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:31:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10f76ee94a6c0000dab8a51cc1abe21546c7714b833add5e7deed29110e43c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.635 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:00 np0005465604 podman[322774]: 2025-10-02 08:31:00.641951369 +0000 UTC m=+0.146895448 container init 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.642 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.646 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.646 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.647 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.647 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.648 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.648 2 DEBUG nova.virt.libvirt.driver [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:00 np0005465604 podman[322774]: 2025-10-02 08:31:00.654397216 +0000 UTC m=+0.159341255 container start 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.669 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.670 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393860.6059682, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.670 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:31:00 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [NOTICE]   (322793) : New worker (322795) forked
Oct  2 04:31:00 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [NOTICE]   (322793) : Loading success.
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.692 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.696 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393860.6121714, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.697 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.700 2 DEBUG nova.compute.manager [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.730 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.732 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.749 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.756 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.770 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.771 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.771 2 DEBUG nova.objects.instance [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:31:00 np0005465604 nova_compute[260603]: 2025-10-02 08:31:00.831 2 DEBUG oslo_concurrency.lockutils [None req-71dd3448-a09d-428e-a633-68fc01288416 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 141 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 6.1 MiB/s wr, 151 op/s
Oct  2 04:31:01 np0005465604 nova_compute[260603]: 2025-10-02 08:31:01.201 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully updated port: e19eb16b-f042-4e4d-922b-7057ad6ebb1c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:31:01 np0005465604 nova_compute[260603]: 2025-10-02 08:31:01.326 2 DEBUG nova.compute.manager [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-changed-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:01 np0005465604 nova_compute[260603]: 2025-10-02 08:31:01.327 2 DEBUG nova.compute.manager [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing instance network info cache due to event network-changed-e19eb16b-f042-4e4d-922b-7057ad6ebb1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:31:01 np0005465604 nova_compute[260603]: 2025-10-02 08:31:01.328 2 DEBUG oslo_concurrency.lockutils [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:31:01 np0005465604 nova_compute[260603]: 2025-10-02 08:31:01.328 2 DEBUG oslo_concurrency.lockutils [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:01 np0005465604 nova_compute[260603]: 2025-10-02 08:31:01.328 2 DEBUG nova.network.neutron [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing network info cache for port e19eb16b-f042-4e4d-922b-7057ad6ebb1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:31:01 np0005465604 nova_compute[260603]: 2025-10-02 08:31:01.675 2 DEBUG nova.network.neutron [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:31:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.213 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully updated port: e8f18c99-1964-43d6-a955-7b5064c53b3a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.654 2 DEBUG nova.network.neutron [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.678 2 DEBUG oslo_concurrency.lockutils [req-e7bb1c63-3ceb-4acd-8da2-05e63fd4599a req-3bf12e70-188e-4906-86eb-62c003b50f65 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.742 2 DEBUG nova.network.neutron [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.776 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Releasing lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.776 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance network_info: |[{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.780 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Start _get_guest_xml network_info=[{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.785 2 WARNING nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.791 2 DEBUG nova.virt.libvirt.host [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.792 2 DEBUG nova.virt.libvirt.host [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.795 2 DEBUG nova.virt.libvirt.host [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.795 2 DEBUG nova.virt.libvirt.host [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.796 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.796 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.796 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.796 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.797 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.797 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.797 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.797 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.797 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.798 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.798 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.798 2 DEBUG nova.virt.hardware [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.800 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.910 2 DEBUG nova.compute.manager [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.910 2 DEBUG nova.compute.manager [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing instance network info cache due to event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.911 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.911 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:02 np0005465604 nova_compute[260603]: 2025-10-02 08:31:02.911 2 DEBUG nova.network.neutron [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:31:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 180 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.5 MiB/s wr, 227 op/s
Oct  2 04:31:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335375807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.276 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.307 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.311 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.429 2 DEBUG nova.compute.manager [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-changed-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.429 2 DEBUG nova.compute.manager [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing instance network info cache due to event network-changed-e8f18c99-1964-43d6-a955-7b5064c53b3a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.430 2 DEBUG oslo_concurrency.lockutils [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.430 2 DEBUG oslo_concurrency.lockutils [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.430 2 DEBUG nova.network.neutron [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing network info cache for port e8f18c99-1964-43d6-a955-7b5064c53b3a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.647 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.650 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.650 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.651 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.651 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.654 2 INFO nova.compute.manager [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Terminating instance#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.656 2 DEBUG nova.compute.manager [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.671 2 DEBUG nova.network.neutron [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:31:03 np0005465604 kernel: tap4298d267-ed (unregistering): left promiscuous mode
Oct  2 04:31:03 np0005465604 NetworkManager[45129]: <info>  [1759393863.7062] device (tap4298d267-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:03Z|00544|binding|INFO|Releasing lport 4298d267-ede8-417b-9e26-a2533908497f from this chassis (sb_readonly=0)
Oct  2 04:31:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:03Z|00545|binding|INFO|Setting lport 4298d267-ede8-417b-9e26-a2533908497f down in Southbound
Oct  2 04:31:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:03Z|00546|binding|INFO|Removing iface tap4298d267-ed ovn-installed in OVS
Oct  2 04:31:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:03.723 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:87:db 10.100.0.3'], port_security=['fa:16:3e:50:87:db 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f8f36f36-817a-4e64-8c57-c211cfc7b0ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4298d267-ede8-417b-9e26-a2533908497f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:31:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:03.727 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4298d267-ede8-417b-9e26-a2533908497f in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b unbound from our chassis#033[00m
Oct  2 04:31:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:03.730 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8df0af1-1767-419a-8500-c28fbf45ae4b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:31:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:03.732 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3afa9b-ca94-455a-9add-fb0441c8c894]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:03.733 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace which is not needed anymore#033[00m
Oct  2 04:31:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856367483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:03 np0005465604 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:03 np0005465604 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Consumed 3.816s CPU time.
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.768 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.769 2 DEBUG nova.virt.libvirt.vif [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1525105387',display_name='tempest-ServerActionsTestOtherB-server-1525105387',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1525105387',id=61,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmeM0VXrLDuUgkvEcAKZLUawTz1v6B+3eBASOGcaRFlmF3ztxSFLOGPfQ5nMbtxqx6ZDoxMiinSb16iJLQrCBe+IsSQFaYXfW47SOpCqDkfThajOwmApFonqiBjUHNfHQ==',key_name='tempest-keypair-1019763483',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-fypdkou3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9020ed38b31d46f88625374b2a76aef6',uuid=49e7e668-b62c-4e35-a4e2-bba540000961,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.770 2 DEBUG nova.network.os_vif_util [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:03 np0005465604 systemd-machined[214636]: Machine qemu-67-instance-0000003b terminated.
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.771 2 DEBUG nova.network.os_vif_util [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.773 2 DEBUG nova.objects.instance [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'pci_devices' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.791 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  <uuid>49e7e668-b62c-4e35-a4e2-bba540000961</uuid>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  <name>instance-0000003d</name>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerActionsTestOtherB-server-1525105387</nova:name>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:31:02</nova:creationTime>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <nova:user uuid="9020ed38b31d46f88625374b2a76aef6">tempest-ServerActionsTestOtherB-1644249004-project-member</nova:user>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <nova:project uuid="eda0caa41e4740148ab99d5ebf9e27ba">tempest-ServerActionsTestOtherB-1644249004</nova:project>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <nova:port uuid="37e9c33f-0ff9-4138-a7b5-989ba3c016a0">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <entry name="serial">49e7e668-b62c-4e35-a4e2-bba540000961</entry>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <entry name="uuid">49e7e668-b62c-4e35-a4e2-bba540000961</entry>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk.config">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:19:cc:f7"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <target dev="tap37e9c33f-0f"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/console.log" append="off"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:31:03 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:31:03 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:31:03 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:31:03 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.792 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Preparing to wait for external event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.792 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.792 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.793 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.794 2 DEBUG nova.virt.libvirt.vif [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1525105387',display_name='tempest-ServerActionsTestOtherB-server-1525105387',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1525105387',id=61,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMmeM0VXrLDuUgkvEcAKZLUawTz1v6B+3eBASOGcaRFlmF3ztxSFLOGPfQ5nMbtxqx6ZDoxMiinSb16iJLQrCBe+IsSQFaYXfW47SOpCqDkfThajOwmApFonqiBjUHNfHQ==',key_name='tempest-keypair-1019763483',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eda0caa41e4740148ab99d5ebf9e27ba',ramdisk_id='',reservation_id='r-fypdkou3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1644249004',owner_user_name='tempest-ServerActionsTestOtherB-1644249004-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9020ed38b31d46f88625374b2a76aef6',uuid=49e7e668-b62c-4e35-a4e2-bba540000961,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.794 2 DEBUG nova.network.os_vif_util [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converting VIF {"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.795 2 DEBUG nova.network.os_vif_util [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.795 2 DEBUG os_vif [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.797 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.797 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37e9c33f-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37e9c33f-0f, col_values=(('external_ids', {'iface-id': '37e9c33f-0ff9-4138-a7b5-989ba3c016a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:cc:f7', 'vm-uuid': '49e7e668-b62c-4e35-a4e2-bba540000961'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:03 np0005465604 NetworkManager[45129]: <info>  [1759393863.8056] manager: (tap37e9c33f-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.821 2 INFO os_vif [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:cc:f7,bridge_name='br-int',has_traffic_filtering=True,id=37e9c33f-0ff9-4138-a7b5-989ba3c016a0,network=Network(ef30d863-af60-49d9-b5d2-5e4f20c70d56),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37e9c33f-0f')#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.865 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Successfully updated port: 35092937-9590-42d1-a022-549b740da3c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.884 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.904 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:31:03 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [NOTICE]   (322793) : haproxy version is 2.8.14-c23fe91
Oct  2 04:31:03 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [NOTICE]   (322793) : path to executable is /usr/sbin/haproxy
Oct  2 04:31:03 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [WARNING]  (322793) : Exiting Master process...
Oct  2 04:31:03 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [WARNING]  (322793) : Exiting Master process...
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.904 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.907 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No VIF found with MAC fa:16:3e:19:cc:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.908 2 INFO nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Using config drive#033[00m
Oct  2 04:31:03 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [ALERT]    (322793) : Current worker (322795) exited with code 143 (Terminated)
Oct  2 04:31:03 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[322789]: [WARNING]  (322793) : All workers exited. Exiting... (0)
Oct  2 04:31:03 np0005465604 systemd[1]: libpod-32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d.scope: Deactivated successfully.
Oct  2 04:31:03 np0005465604 podman[322893]: 2025-10-02 08:31:03.915958299 +0000 UTC m=+0.057502416 container stop 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.946 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:03 np0005465604 podman[322893]: 2025-10-02 08:31:03.948448167 +0000 UTC m=+0.089992274 container died 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.961 2 INFO nova.virt.libvirt.driver [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Instance destroyed successfully.#033[00m
Oct  2 04:31:03 np0005465604 nova_compute[260603]: 2025-10-02 08:31:03.961 2 DEBUG nova.objects.instance [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'resources' on Instance uuid f8f36f36-817a-4e64-8c57-c211cfc7b0ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d-userdata-shm.mount: Deactivated successfully.
Oct  2 04:31:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d10f76ee94a6c0000dab8a51cc1abe21546c7714b833add5e7deed29110e43c9-merged.mount: Deactivated successfully.
Oct  2 04:31:04 np0005465604 podman[322893]: 2025-10-02 08:31:04.001353478 +0000 UTC m=+0.142897665 container cleanup 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:31:04 np0005465604 systemd[1]: libpod-conmon-32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d.scope: Deactivated successfully.
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.029 2 DEBUG nova.network.neutron [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.037 2 DEBUG nova.virt.libvirt.vif [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-949284758',display_name='tempest-ServerDiskConfigTestJSON-server-949284758',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-949284758',id=59,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-xg0hxkh4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:00Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=f8f36f36-817a-4e64-8c57-c211cfc7b0ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.038 2 DEBUG nova.network.os_vif_util [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "4298d267-ede8-417b-9e26-a2533908497f", "address": "fa:16:3e:50:87:db", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4298d267-ed", "ovs_interfaceid": "4298d267-ede8-417b-9e26-a2533908497f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.038 2 DEBUG nova.network.os_vif_util [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.038 2 DEBUG os_vif [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4298d267-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.050 2 INFO os_vif [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:87:db,bridge_name='br-int',has_traffic_filtering=True,id=4298d267-ede8-417b-9e26-a2533908497f,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4298d267-ed')#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.065 2 DEBUG oslo_concurrency.lockutils [req-144c0295-3bed-415f-b05d-64288ab91e6b req-2a2a288c-34d4-4add-9783-ee7d1a8e9d37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.066 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquired lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.066 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:31:04 np0005465604 podman[322954]: 2025-10-02 08:31:04.100427663 +0000 UTC m=+0.061466819 container remove 32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.108 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fcacdd73-2636-4040-95d9-3a9bdf7852c7]: (4, ('Thu Oct  2 08:31:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d)\n32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d\nThu Oct  2 08:31:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d)\n32e6287d3dd8a134a98ba6e2f1418e5dfe808f0569a60af8f2fc90bcb510603d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.110 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[08c74384-2432-4126-9a84-e130c5dc506e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.111 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:04 np0005465604 kernel: tapf8df0af1-10: left promiscuous mode
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.163 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5c19d63f-6020-4aff-8ff4-f812053d48c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.163 2 DEBUG nova.network.neutron [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updated VIF entry in instance network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.164 2 DEBUG nova.network.neutron [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.186 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.187 2 DEBUG nova.compute.manager [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.187 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.187 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.187 2 DEBUG oslo_concurrency.lockutils [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.188 2 DEBUG nova.compute.manager [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] No waiting events found dispatching network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.188 2 WARNING nova.compute.manager [req-74e2f7a8-8cfa-44f9-aee8-a78b3a8bebb3 req-7a954903-8340-414c-b253-fe21d401bdce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received unexpected event network-vif-plugged-4298d267-ede8-417b-9e26-a2533908497f for instance with vm_state active and task_state None.#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.188 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd03c606-6a5c-4980-a75a-d24ad1c36aa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.189 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[11196ebf-9165-44db-b12a-95e813680119]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.217 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[da406a0a-59d8-4519-9ed2-72aed500bf5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473843, 'reachable_time': 32128, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322992, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.220 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.220 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9deea0ab-fe13-4984-ab39-30256039d530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:04 np0005465604 systemd[1]: run-netns-ovnmeta\x2df8df0af1\x2d1767\x2d419a\x2d8500\x2dc28fbf45ae4b.mount: Deactivated successfully.
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.324 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.404 2 INFO nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Creating config drive at /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.410 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfnanibgs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.505 2 INFO nova.virt.libvirt.driver [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deleting instance files /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_del#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.506 2 INFO nova.virt.libvirt.driver [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deletion of /var/lib/nova/instances/f8f36f36-817a-4e64-8c57-c211cfc7b0ba_del complete#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.567 2 INFO nova.compute.manager [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Took 0.91 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.568 2 DEBUG oslo.service.loopingcall [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.569 2 DEBUG nova.compute.manager [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.569 2 DEBUG nova.network.neutron [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.574 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfnanibgs" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.613 2 DEBUG nova.storage.rbd_utils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] rbd image 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.618 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.844 2 DEBUG oslo_concurrency.processutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config 49e7e668-b62c-4e35-a4e2-bba540000961_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.845 2 INFO nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Deleting local config drive /var/lib/nova/instances/49e7e668-b62c-4e35-a4e2-bba540000961/disk.config because it was imported into RBD.#033[00m
Oct  2 04:31:04 np0005465604 kernel: tap37e9c33f-0f: entered promiscuous mode
Oct  2 04:31:04 np0005465604 systemd-udevd[322870]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:31:04 np0005465604 NetworkManager[45129]: <info>  [1759393864.9344] manager: (tap37e9c33f-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Oct  2 04:31:04 np0005465604 nova_compute[260603]: 2025-10-02 08:31:04.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:04Z|00547|binding|INFO|Claiming lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 for this chassis.
Oct  2 04:31:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:04Z|00548|binding|INFO|37e9c33f-0ff9-4138-a7b5-989ba3c016a0: Claiming fa:16:3e:19:cc:f7 10.100.0.9
Oct  2 04:31:04 np0005465604 NetworkManager[45129]: <info>  [1759393864.9470] device (tap37e9c33f-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:31:04 np0005465604 NetworkManager[45129]: <info>  [1759393864.9515] device (tap37e9c33f-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.950 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:cc:f7 10.100.0.9'], port_security=['fa:16:3e:19:cc:f7 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '49e7e668-b62c-4e35-a4e2-bba540000961', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eda0caa41e4740148ab99d5ebf9e27ba', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd09bd0a7-8be4-487a-8b24-ba3d4c0378f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04113540-c60b-4329-960e-cb06bfeb56f0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=37e9c33f-0ff9-4138-a7b5-989ba3c016a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.953 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 in datapath ef30d863-af60-49d9-b5d2-5e4f20c70d56 bound to our chassis#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.957 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef30d863-af60-49d9-b5d2-5e4f20c70d56#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.969 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[024eb167-2ff0-445f-84f7-b645fcbffed4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.970 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef30d863-a1 in ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.974 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef30d863-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.974 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5c141cd5-db37-4dca-a171-357dd6771cae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.975 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ef506798-7dd1-4385-af46-35831517ec7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:04 np0005465604 systemd-machined[214636]: New machine qemu-68-instance-0000003d.
Oct  2 04:31:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:04.990 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9318e707-d45c-45d5-984d-329f332e6f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 180 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.1 MiB/s wr, 200 op/s
Oct  2 04:31:05 np0005465604 systemd[1]: Started Virtual Machine qemu-68-instance-0000003d.
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.027 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb4927b-1d42-48a9-89e4-1c2e8c193ad8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:05 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:05Z|00549|binding|INFO|Setting lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 ovn-installed in OVS
Oct  2 04:31:05 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:05Z|00550|binding|INFO|Setting lport 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 up in Southbound
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.084 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[90baf83a-f980-4fcf-be0c-fa253f630eb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.110 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4607e4-87c8-4503-b590-8ded41214557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 NetworkManager[45129]: <info>  [1759393865.1173] manager: (tapef30d863-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/231)
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.183 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[45ed2812-3bd9-4da6-970e-dedfcf365ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.191 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8b355f-4640-43e8-81f1-d7e34c760bff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 NetworkManager[45129]: <info>  [1759393865.2214] device (tapef30d863-a0): carrier: link connected
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.225 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5f28985d-f4f7-4699-a65c-5df55915a133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.259 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d791496d-bb9e-4ec2-b424-1b7395ee3546]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef30d863-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:1b:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474382, 'reachable_time': 21085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323077, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.283 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1db327aa-cb02-4d5f-a1ba-dd43d0f0a464]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:1bde'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474382, 'tstamp': 474382}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323078, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.294 2 DEBUG nova.network.neutron [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.307 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[046a5cc4-72e8-4f5c-9c60-e7831ac3be8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef30d863-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:1b:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474382, 'reachable_time': 21085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323079, 'error': None, 'target': 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.334 2 INFO nova.compute.manager [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Took 0.76 seconds to deallocate network for instance.#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.355 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3115be-04ad-438f-bcd1-a7565ca14680]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.403 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.404 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.410 2 DEBUG nova.compute.manager [req-4c6cedc1-8dc7-48ae-8441-14e3ae6370a8 req-dfb5cee7-0a98-4edd-b0f7-bcda60c58876 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Received event network-vif-deleted-4298d267-ede8-417b-9e26-a2533908497f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.429 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[708f284d-b0ad-4335-9320-224366944935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.431 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef30d863-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.431 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.432 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef30d863-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:05 np0005465604 NetworkManager[45129]: <info>  [1759393865.4362] manager: (tapef30d863-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Oct  2 04:31:05 np0005465604 kernel: tapef30d863-a0: entered promiscuous mode
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.440 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef30d863-a0, col_values=(('external_ids', {'iface-id': 'd143de50-fc80-43b6-82e2-6651430a4a42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:05 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:05Z|00551|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.470 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef30d863-af60-49d9-b5d2-5e4f20c70d56.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef30d863-af60-49d9-b5d2-5e4f20c70d56.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.471 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d21b3b9f-0b82-40be-b950-09af882c02ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.473 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-ef30d863-af60-49d9-b5d2-5e4f20c70d56
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/ef30d863-af60-49d9-b5d2-5e4f20c70d56.pid.haproxy
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID ef30d863-af60-49d9-b5d2-5e4f20c70d56
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:31:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:05.474 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'env', 'PROCESS_TAG=haproxy-ef30d863-af60-49d9-b5d2-5e4f20c70d56', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef30d863-af60-49d9-b5d2-5e4f20c70d56.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.521 2 DEBUG nova.compute.manager [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-changed-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.521 2 DEBUG nova.compute.manager [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing instance network info cache due to event network-changed-35092937-9590-42d1-a022-549b740da3c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.522 2 DEBUG oslo_concurrency.lockutils [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:31:05 np0005465604 nova_compute[260603]: 2025-10-02 08:31:05.575 2 DEBUG oslo_concurrency.processutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:05 np0005465604 podman[323173]: 2025-10-02 08:31:05.964782291 +0000 UTC m=+0.080538769 container create 67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:31:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:31:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1473150419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:31:06 np0005465604 podman[323173]: 2025-10-02 08:31:05.924255074 +0000 UTC m=+0.040011562 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:31:06 np0005465604 systemd[1]: Started libpod-conmon-67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d.scope.
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.030 2 DEBUG oslo_concurrency.processutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.047 2 DEBUG nova.compute.provider_tree [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:31:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:06.061 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:31:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f999397e7e358a7668d8bd32b5ebbd495e6797cdde869d93f7da4a88468be70b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.085 2 DEBUG nova.scheduler.client.report [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:31:06 np0005465604 podman[323173]: 2025-10-02 08:31:06.100227084 +0000 UTC m=+0.215983632 container init 67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 04:31:06 np0005465604 podman[323173]: 2025-10-02 08:31:06.107900353 +0000 UTC m=+0.223656830 container start 67fa9bae3ec411fbb8329753e45f9194b60eb6b983eb2da40d66071ae3da774d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.110 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:06 np0005465604 neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56[323190]: [NOTICE]   (323194) : New worker (323196) forked
Oct  2 04:31:06 np0005465604 neutron-haproxy-ovnmeta-ef30d863-af60-49d9-b5d2-5e4f20c70d56[323190]: [NOTICE]   (323194) : Loading success.
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.159 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393866.1584084, 49e7e668-b62c-4e35-a4e2-bba540000961 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.160 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] VM Started (Lifecycle Event)#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.179 2 INFO nova.scheduler.client.report [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Deleted allocations for instance f8f36f36-817a-4e64-8c57-c211cfc7b0ba#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.203 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.217 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393866.1587245, 49e7e668-b62c-4e35-a4e2-bba540000961 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.217 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.247 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.253 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.270 2 DEBUG oslo_concurrency.lockutils [None req-d737c45c-a8f1-4f07-b22c-74112c8e555e 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "f8f36f36-817a-4e64-8c57-c211cfc7b0ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.279 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.362 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.362 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.379 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.484 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.485 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.495 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.495 2 INFO nova.compute.claims [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:31:06 np0005465604 nova_compute[260603]: 2025-10-02 08:31:06.663 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 180 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 182 op/s
Oct  2 04:31:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:31:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:31:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2368568234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.164 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.173 2 DEBUG nova.compute.provider_tree [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.198 2 DEBUG nova.scheduler.client.report [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.217 2 DEBUG nova.network.neutron [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updating instance_info_cache with network_info: [{"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.224 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.225 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.248 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Releasing lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.249 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance network_info: |[{"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.250 2 DEBUG oslo_concurrency.lockutils [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.250 2 DEBUG nova.network.neutron [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Refreshing network info cache for port 35092937-9590-42d1-a022-549b740da3c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.258 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Start _get_guest_xml network_info=[{"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.265 2 WARNING nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.272 2 DEBUG nova.virt.libvirt.host [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.273 2 DEBUG nova.virt.libvirt.host [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.282 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.283 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.293 2 DEBUG nova.virt.libvirt.host [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.294 2 DEBUG nova.virt.libvirt.host [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.295 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.295 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.296 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.296 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.297 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.297 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.297 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.298 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.298 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.299 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.299 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.299 2 DEBUG nova.virt.hardware [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.303 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.349 2 INFO nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.371 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.468 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.470 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.471 2 INFO nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Creating image(s)#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.513 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.563 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.593 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.598 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.639 2 DEBUG nova.policy [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '116b114f14f84e4cbd6cc966e29d82e7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.687 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.688 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.688 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.689 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.715 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.718 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/271026909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.887 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.943 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.948 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:07 np0005465604 nova_compute[260603]: 2025-10-02 08:31:07.998 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.085 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] resizing rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.215 2 DEBUG nova.objects.instance [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'migration_context' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.231 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.231 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Ensure instance console log exists: /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.232 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.232 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.232 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499274081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.410 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.412 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.412 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.413 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.415 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.415 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.416 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.417 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.418 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.419 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.420 2 DEBUG nova.objects.instance [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 501f8cba-892f-489d-81b5-abb8669f49eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.423 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Successfully created port: 9835ad6c-8ea8-4a79-8f07-042186ea7c71 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.442 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  <uuid>501f8cba-892f-489d-81b5-abb8669f49eb</uuid>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  <name>instance-0000003e</name>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersTestMultiNic-server-1736583523</nova:name>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:31:07</nova:creationTime>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <nova:user uuid="db9a3b1e6d93495f8c849658ffc4e535">tempest-ServersTestMultiNic-670565182-project-member</nova:user>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <nova:project uuid="62c4ff42369740eebbf14969f4d8d2e5">tempest-ServersTestMultiNic-670565182</nova:project>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <nova:port uuid="e19eb16b-f042-4e4d-922b-7057ad6ebb1c">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.81" ipVersion="4"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <nova:port uuid="e8f18c99-1964-43d6-a955-7b5064c53b3a">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.1.99" ipVersion="4"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <nova:port uuid="35092937-9590-42d1-a022-549b740da3c5">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.194" ipVersion="4"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <entry name="serial">501f8cba-892f-489d-81b5-abb8669f49eb</entry>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <entry name="uuid">501f8cba-892f-489d-81b5-abb8669f49eb</entry>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/501f8cba-892f-489d-81b5-abb8669f49eb_disk">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/501f8cba-892f-489d-81b5-abb8669f49eb_disk.config">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:c1:26:fb"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <target dev="tape19eb16b-f0"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:28:89:45"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <target dev="tape8f18c99-19"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:7b:5d:8f"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <target dev="tap35092937-95"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/console.log" append="off"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:31:08 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:31:08 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:31:08 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:31:08 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.443 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Preparing to wait for external event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.444 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.444 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.444 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.445 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Preparing to wait for external event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.445 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.445 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.446 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.446 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Preparing to wait for external event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.446 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.447 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.447 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.448 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670
565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.449 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.450 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.450 2 DEBUG os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.458 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape19eb16b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape19eb16b-f0, col_values=(('external_ids', {'iface-id': 'e19eb16b-f042-4e4d-922b-7057ad6ebb1c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:26:fb', 'vm-uuid': '501f8cba-892f-489d-81b5-abb8669f49eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 NetworkManager[45129]: <info>  [1759393868.4622] manager: (tape19eb16b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.472 2 INFO os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0')#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.473 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670
565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.474 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.474 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.475 2 DEBUG os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8f18c99-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape8f18c99-19, col_values=(('external_ids', {'iface-id': 'e8f18c99-1964-43d6-a955-7b5064c53b3a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:89:45', 'vm-uuid': '501f8cba-892f-489d-81b5-abb8669f49eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 NetworkManager[45129]: <info>  [1759393868.4851] manager: (tape8f18c99-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.496 2 INFO os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19')#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.496 2 DEBUG nova.virt.libvirt.vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:30:57Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.497 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.497 2 DEBUG nova.network.os_vif_util [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.498 2 DEBUG os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.499 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.499 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35092937-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35092937-95, col_values=(('external_ids', {'iface-id': '35092937-9590-42d1-a022-549b740da3c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:5d:8f', 'vm-uuid': '501f8cba-892f-489d-81b5-abb8669f49eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 NetworkManager[45129]: <info>  [1759393868.5049] manager: (tap35092937-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.513 2 INFO os_vif [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95')#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.587 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.587 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.587 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No VIF found with MAC fa:16:3e:c1:26:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.587 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No VIF found with MAC fa:16:3e:28:89:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.588 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] No VIF found with MAC fa:16:3e:7b:5d:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.588 2 INFO nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Using config drive#033[00m
Oct  2 04:31:08 np0005465604 nova_compute[260603]: 2025-10-02 08:31:08.619 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.001 2 INFO nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Creating config drive at /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config#033[00m
Oct  2 04:31:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 151 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 231 op/s
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.011 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ptmyv4j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.058 2 DEBUG nova.network.neutron [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updated VIF entry in instance network info cache for port 35092937-9590-42d1-a022-549b740da3c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.060 2 DEBUG nova.network.neutron [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updating instance_info_cache with network_info: [{"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.094 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Successfully updated port: 9835ad6c-8ea8-4a79-8f07-042186ea7c71 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.099 2 DEBUG oslo_concurrency.lockutils [req-d069c3e1-cd26-4a31-910d-104f751f39a9 req-6098dd7f-5f26-442d-ae17-7d75e0aecf23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-501f8cba-892f-489d-81b5-abb8669f49eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.112 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.112 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquired lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.112 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.162 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ptmyv4j" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.207 2 DEBUG nova.storage.rbd_utils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 501f8cba-892f-489d-81b5-abb8669f49eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.214 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config 501f8cba-892f-489d-81b5-abb8669f49eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.282 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.372 2 DEBUG oslo_concurrency.processutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config 501f8cba-892f-489d-81b5-abb8669f49eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.373 2 INFO nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Deleting local config drive /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb/disk.config because it was imported into RBD.#033[00m
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.4580] manager: (tape19eb16b-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Oct  2 04:31:09 np0005465604 kernel: tape19eb16b-f0: entered promiscuous mode
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00552|binding|INFO|Claiming lport e19eb16b-f042-4e4d-922b-7057ad6ebb1c for this chassis.
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00553|binding|INFO|e19eb16b-f042-4e4d-922b-7057ad6ebb1c: Claiming fa:16:3e:c1:26:fb 10.100.0.81
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.4866] manager: (tape8f18c99-19): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Oct  2 04:31:09 np0005465604 systemd-udevd[323536]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:31:09 np0005465604 systemd-udevd[323538]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.5069] manager: (tap35092937-95): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Oct  2 04:31:09 np0005465604 systemd-udevd[323539]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.5260] device (tape19eb16b-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.5282] device (tape19eb16b-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:31:09 np0005465604 systemd-machined[214636]: New machine qemu-69-instance-0000003e.
Oct  2 04:31:09 np0005465604 kernel: tape8f18c99-19: entered promiscuous mode
Oct  2 04:31:09 np0005465604 kernel: tap35092937-95: entered promiscuous mode
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.5579] device (tape8f18c99-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.5590] device (tap35092937-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.552 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:26:fb 10.100.0.81'], port_security=['fa:16:3e:c1:26:fb 10.100.0.81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.81/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cff19d1e-1358-4314-bab4-67f6fde7eba9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e19eb16b-f042-4e4d-922b-7057ad6ebb1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.555 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e19eb16b-f042-4e4d-922b-7057ad6ebb1c in datapath 54248606-6cdd-4d53-9b28-14d8ac1cf290 bound to our chassis#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.558 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54248606-6cdd-4d53-9b28-14d8ac1cf290#033[00m
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00554|binding|INFO|Claiming lport e8f18c99-1964-43d6-a955-7b5064c53b3a for this chassis.
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00555|binding|INFO|e8f18c99-1964-43d6-a955-7b5064c53b3a: Claiming fa:16:3e:28:89:45 10.100.1.99
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00556|binding|INFO|Claiming lport 35092937-9590-42d1-a022-549b740da3c5 for this chassis.
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00557|binding|INFO|35092937-9590-42d1-a022-549b740da3c5: Claiming fa:16:3e:7b:5d:8f 10.100.0.194
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.5610] device (tape8f18c99-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.5613] device (tap35092937-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:09 np0005465604 systemd[1]: Started Virtual Machine qemu-69-instance-0000003e.
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00558|binding|INFO|Setting lport e19eb16b-f042-4e4d-922b-7057ad6ebb1c ovn-installed in OVS
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00559|binding|INFO|Setting lport e19eb16b-f042-4e4d-922b-7057ad6ebb1c up in Southbound
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.581 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:5d:8f 10.100.0.194'], port_security=['fa:16:3e:7b:5d:8f 10.100.0.194'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.194/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cff19d1e-1358-4314-bab4-67f6fde7eba9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=35092937-9590-42d1-a022-549b740da3c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.585 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:89:45 10.100.1.99'], port_security=['fa:16:3e:28:89:45 10.100.1.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.99/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1454d03-261a-47dd-a4d8-470427544a20, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e8f18c99-1964-43d6-a955-7b5064c53b3a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.580 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9785981-53d8-4692-a170-25d4e3904663]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.588 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap54248606-61 in ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.594 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap54248606-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.594 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[84875de8-df16-437d-a48a-5338826caa3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.596 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79766b68-cb32-46da-b735-612f095bff81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.616 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0db73085-3809-4144-80c6-974b42ac99d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00560|binding|INFO|Setting lport e8f18c99-1964-43d6-a955-7b5064c53b3a ovn-installed in OVS
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00561|binding|INFO|Setting lport e8f18c99-1964-43d6-a955-7b5064c53b3a up in Southbound
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00562|binding|INFO|Setting lport 35092937-9590-42d1-a022-549b740da3c5 ovn-installed in OVS
Oct  2 04:31:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:09Z|00563|binding|INFO|Setting lport 35092937-9590-42d1-a022-549b740da3c5 up in Southbound
Oct  2 04:31:09 np0005465604 nova_compute[260603]: 2025-10-02 08:31:09.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.644 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c8dea69b-ad4d-4c00-bdbd-3bd30ef7fcc1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.690 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5bad9d-f3ce-42da-8811-59c1d5542569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.7006] manager: (tap54248606-60): new Veth device (/org/freedesktop/NetworkManager/Devices/239)
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.699 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[887e7741-5395-4a15-9ecb-01f4f9b01c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.753 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[019c3dfe-f08a-4d51-9fd9-60dfdef00a63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.759 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5a4f79-2822-4e0e-aff6-74221e993fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.7992] device (tap54248606-60): carrier: link connected
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.810 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2e14ae67-f6e3-414d-a2f8-0ae6e3e078e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.843 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2446f7e8-48cb-4376-89bb-604332c7fea9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54248606-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:e5:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474840, 'reachable_time': 41953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323578, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.868 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c3cac456-6643-4e1a-976f-0e4deb1bced3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:e517'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474840, 'tstamp': 474840}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323579, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.891 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7addc0a9-0676-48f2-89c4-4bf4a679c4d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54248606-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:e5:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474840, 'reachable_time': 41953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323580, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.925 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0d4b3a-d37b-4f1c-9e06-72864c41b8a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.992 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50ebc949-74f3-430c-b1d8-40653892074b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54248606-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:09.995 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54248606-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:09 np0005465604 NetworkManager[45129]: <info>  [1759393869.9988] manager: (tap54248606-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Oct  2 04:31:10 np0005465604 kernel: tap54248606-60: entered promiscuous mode
Oct  2 04:31:10 np0005465604 nova_compute[260603]: 2025-10-02 08:31:10.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.003 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54248606-60, col_values=(('external_ids', {'iface-id': 'dacfca36-fe1e-4001-8669-9c1cc2cd3f3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:10Z|00564|binding|INFO|Releasing lport dacfca36-fe1e-4001-8669-9c1cc2cd3f3a from this chassis (sb_readonly=0)
Oct  2 04:31:10 np0005465604 nova_compute[260603]: 2025-10-02 08:31:10.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:10 np0005465604 nova_compute[260603]: 2025-10-02 08:31:10.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:10 np0005465604 nova_compute[260603]: 2025-10-02 08:31:10.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.043 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54248606-6cdd-4d53-9b28-14d8ac1cf290.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54248606-6cdd-4d53-9b28-14d8ac1cf290.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.044 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52cddb3a-917e-4fcd-9ac2-d7e3f8093bb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.046 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-54248606-6cdd-4d53-9b28-14d8ac1cf290
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/54248606-6cdd-4d53-9b28-14d8ac1cf290.pid.haproxy
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 54248606-6cdd-4d53-9b28-14d8ac1cf290
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.049 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'env', 'PROCESS_TAG=haproxy-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/54248606-6cdd-4d53-9b28-14d8ac1cf290.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:31:10 np0005465604 podman[323642]: 2025-10-02 08:31:10.4963252 +0000 UTC m=+0.063037407 container create 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:31:10 np0005465604 systemd[1]: Started libpod-conmon-595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723.scope.
Oct  2 04:31:10 np0005465604 podman[323642]: 2025-10-02 08:31:10.465498322 +0000 UTC m=+0.032210589 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:31:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:31:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b45d26d074aac14704b1e922d565a27ad4c485c96934af760c5e755ad0d98b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:31:10 np0005465604 podman[323642]: 2025-10-02 08:31:10.598403467 +0000 UTC m=+0.165115714 container init 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:31:10 np0005465604 podman[323642]: 2025-10-02 08:31:10.604410074 +0000 UTC m=+0.171122291 container start 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:31:10 np0005465604 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [NOTICE]   (323674) : New worker (323676) forked
Oct  2 04:31:10 np0005465604 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [NOTICE]   (323674) : Loading success.
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.662 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 35092937-9590-42d1-a022-549b740da3c5 in datapath 54248606-6cdd-4d53-9b28-14d8ac1cf290 unbound from our chassis#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.668 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54248606-6cdd-4d53-9b28-14d8ac1cf290#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.687 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff50d5d2-d59a-4d7f-bad2-6bb20882d3de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.740 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[695a121c-5ea7-4c5a-8218-9396f2cd7380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.746 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ec8e24-e9d2-4da1-bfb2-fc9b40d978fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:10 np0005465604 nova_compute[260603]: 2025-10-02 08:31:10.782 2 DEBUG nova.compute.manager [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-changed-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:10 np0005465604 nova_compute[260603]: 2025-10-02 08:31:10.782 2 DEBUG nova.compute.manager [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Refreshing instance network info cache due to event network-changed-9835ad6c-8ea8-4a79-8f07-042186ea7c71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:31:10 np0005465604 nova_compute[260603]: 2025-10-02 08:31:10.782 2 DEBUG oslo_concurrency.lockutils [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.803 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[68a9a901-14d5-46a9-ac94-7e602eea7e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.832 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52c4ace8-261e-4398-8bc5-bf67c126d21b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54248606-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:e5:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474840, 'reachable_time': 41953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323690, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.864 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff163c40-6132-46d9-a63b-3257c2c31d1d]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap54248606-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474855, 'tstamp': 474855}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323691, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap54248606-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474858, 'tstamp': 474858}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323691, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.868 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54248606-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:10 np0005465604 nova_compute[260603]: 2025-10-02 08:31:10.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:10 np0005465604 nova_compute[260603]: 2025-10-02 08:31:10.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.874 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54248606-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.874 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.875 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54248606-60, col_values=(('external_ids', {'iface-id': 'dacfca36-fe1e-4001-8669-9c1cc2cd3f3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.876 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.878 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e8f18c99-1964-43d6-a955-7b5064c53b3a in datapath 47336fa6-fcc3-40f8-ae02-9bae73a94c41 unbound from our chassis#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.882 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 47336fa6-fcc3-40f8-ae02-9bae73a94c41#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.902 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d8529b04-95ab-49d9-8857-301031b0e220]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.904 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap47336fa6-f1 in ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.907 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap47336fa6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.907 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aac9da2c-6993-4ecd-a79d-e54813786bf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.909 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3a010a-faa6-453b-bf83-446b71a2174c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.934 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d496d465-2a0b-4be9-9c25-6398d94428c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:10.971 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c549d6-fff7-42dc-9c6f-9657dcf8dc53]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 151 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 143 op/s
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.021 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e25f55-32e8-4162-9722-0148d48a4432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 NetworkManager[45129]: <info>  [1759393871.0304] manager: (tap47336fa6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/241)
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.029 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62eebb07-1fb9-4c31-a6f9-63d2e9aabfe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.064 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0710aad2-ace0-4bb7-aa85-78b82ffb38b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.068 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[df2b97cd-a8da-4664-8353-3e620eec6660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 NetworkManager[45129]: <info>  [1759393871.0992] device (tap47336fa6-f0): carrier: link connected
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.108 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6be0a0e3-40d5-4d3d-a394-db72fd93d80d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.121 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393871.1210468, 501f8cba-892f-489d-81b5-abb8669f49eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.122 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] VM Started (Lifecycle Event)#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.137 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[350155ee-d679-4eb3-8860-900205c99aeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47336fa6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:1c:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474970, 'reachable_time': 28655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323702, 'error': None, 'target': 'ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.145 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.153 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393871.1212702, 501f8cba-892f-489d-81b5-abb8669f49eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.154 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.161 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6a12ce5d-0494-468f-9253-d3927754fd7b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8d:1c72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474970, 'tstamp': 474970}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323703, 'error': None, 'target': 'ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.173 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.178 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.191 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[623c41ad-f106-4698-b5f8-09464bc671eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47336fa6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8d:1c:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474970, 'reachable_time': 28655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323704, 'error': None, 'target': 'ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.214 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.237 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aad1545e-1ab7-4e51-8ab8-8347882c4023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.337 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebc3217-2c45-4526-b288-b96111529d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.339 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47336fa6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.339 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.340 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47336fa6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:11 np0005465604 kernel: tap47336fa6-f0: entered promiscuous mode
Oct  2 04:31:11 np0005465604 NetworkManager[45129]: <info>  [1759393871.3921] manager: (tap47336fa6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.398 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap47336fa6-f0, col_values=(('external_ids', {'iface-id': 'a5ae140a-c6f5-41f2-b9e0-fa0f8fcd807d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:11Z|00565|binding|INFO|Releasing lport a5ae140a-c6f5-41f2-b9e0-fa0f8fcd807d from this chassis (sb_readonly=0)
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.401 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/47336fa6-fcc3-40f8-ae02-9bae73a94c41.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/47336fa6-fcc3-40f8-ae02-9bae73a94c41.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.403 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e948143c-8667-4cf6-9ef6-44b33ad2202d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.404 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-47336fa6-fcc3-40f8-ae02-9bae73a94c41
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/47336fa6-fcc3-40f8-ae02-9bae73a94c41.pid.haproxy
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 47336fa6-fcc3-40f8-ae02-9bae73a94c41
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:31:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:11.405 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'env', 'PROCESS_TAG=haproxy-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/47336fa6-fcc3-40f8-ae02-9bae73a94c41.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:31:11 np0005465604 nova_compute[260603]: 2025-10-02 08:31:11.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:11 np0005465604 podman[323737]: 2025-10-02 08:31:11.900827309 +0000 UTC m=+0.075966917 container create 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:31:11 np0005465604 systemd[1]: Started libpod-conmon-2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f.scope.
Oct  2 04:31:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:31:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927c4f8d9e26b4f276f849fe227297392dc8511c85e75a6e69adb62038a5f909/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:31:11 np0005465604 podman[323737]: 2025-10-02 08:31:11.87117825 +0000 UTC m=+0.046317868 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:31:11 np0005465604 podman[323737]: 2025-10-02 08:31:11.977802898 +0000 UTC m=+0.152942486 container init 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 04:31:11 np0005465604 podman[323737]: 2025-10-02 08:31:11.988507001 +0000 UTC m=+0.163646569 container start 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:31:12 np0005465604 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [NOTICE]   (323756) : New worker (323758) forked
Oct  2 04:31:12 np0005465604 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [NOTICE]   (323756) : Loading success.
Oct  2 04:31:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.403 2 DEBUG nova.network.neutron [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Updating instance_info_cache with network_info: [{"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.447 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Releasing lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.448 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance network_info: |[{"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.448 2 DEBUG oslo_concurrency.lockutils [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.448 2 DEBUG nova.network.neutron [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Refreshing network info cache for port 9835ad6c-8ea8-4a79-8f07-042186ea7c71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.452 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Start _get_guest_xml network_info=[{"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.458 2 WARNING nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.465 2 DEBUG nova.virt.libvirt.host [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.466 2 DEBUG nova.virt.libvirt.host [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.471 2 DEBUG nova.virt.libvirt.host [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.472 2 DEBUG nova.virt.libvirt.host [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.472 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.473 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.473 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.474 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.474 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.474 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.474 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.475 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.475 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.475 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.476 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.476 2 DEBUG nova.virt.hardware [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.479 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:12 np0005465604 nova_compute[260603]: 2025-10-02 08:31:12.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.003 2 DEBUG nova.compute.manager [req-d7ad9db1-8dc2-43d8-96b3-bf8c38a8c644 req-48d17e08-ff57-41fa-be18-678038079bef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.004 2 DEBUG oslo_concurrency.lockutils [req-d7ad9db1-8dc2-43d8-96b3-bf8c38a8c644 req-48d17e08-ff57-41fa-be18-678038079bef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.005 2 DEBUG oslo_concurrency.lockutils [req-d7ad9db1-8dc2-43d8-96b3-bf8c38a8c644 req-48d17e08-ff57-41fa-be18-678038079bef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.006 2 DEBUG oslo_concurrency.lockutils [req-d7ad9db1-8dc2-43d8-96b3-bf8c38a8c644 req-48d17e08-ff57-41fa-be18-678038079bef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.006 2 DEBUG nova.compute.manager [req-d7ad9db1-8dc2-43d8-96b3-bf8c38a8c644 req-48d17e08-ff57-41fa-be18-678038079bef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Processing event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:31:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4171617591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.007 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:31:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 181 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 164 op/s
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.014 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.016 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393873.0143142, 49e7e668-b62c-4e35-a4e2-bba540000961 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.016 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.024 2 INFO nova.virt.libvirt.driver [-] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Instance spawned successfully.#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.025 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.030 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.066 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.074 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.136 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.140 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.141 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.141 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.142 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.142 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Processing event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.142 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.143 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.143 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.143 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.143 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No event matching network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c in dict_keys([('network-vif-plugged', 'e8f18c99-1964-43d6-a955-7b5064c53b3a'), ('network-vif-plugged', '35092937-9590-42d1-a022-549b740da3c5')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.144 2 WARNING nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.144 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.144 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.145 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.145 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.145 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Processing event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.145 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.146 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.146 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.146 2 DEBUG oslo_concurrency.lockutils [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.147 2 DEBUG nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No event matching network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a in dict_keys([('network-vif-plugged', '35092937-9590-42d1-a022-549b740da3c5')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.147 2 WARNING nova.compute.manager [req-a1b05f81-8c61-4573-ae9a-46c04edabc3b req-da2cf74c-31b6-42e3-b5e8-61424fba3368 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.152 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.157 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.157 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.158 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.158 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.159 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.159 2 DEBUG nova.virt.libvirt.driver [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.196 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.246 2 INFO nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Took 16.35 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.246 2 DEBUG nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.345 2 INFO nova.compute.manager [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Took 17.68 seconds to build instance.#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.394 2 DEBUG oslo_concurrency.lockutils [None req-cc09455f-583e-455f-b8df-047aa7acbb86 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:31:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80577841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.605 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.607 2 DEBUG nova.virt.libvirt.vif [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-361192799',display_name='tempest-ServerDiskConfigTestJSON-server-361192799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-361192799',id=63,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-bmy2yvfw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:07Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.608 2 DEBUG nova.network.os_vif_util [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.609 2 DEBUG nova.network.os_vif_util [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.611 2 DEBUG nova.objects.instance [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_devices' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.627 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  <uuid>07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf</uuid>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  <name>instance-0000003f</name>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-361192799</nova:name>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:31:12</nova:creationTime>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <nova:user uuid="116b114f14f84e4cbd6cc966e29d82e7">tempest-ServerDiskConfigTestJSON-1277806880-project-member</nova:user>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <nova:project uuid="bce7493292bb47cfb7168bca89f78f4a">tempest-ServerDiskConfigTestJSON-1277806880</nova:project>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <nova:port uuid="9835ad6c-8ea8-4a79-8f07-042186ea7c71">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <entry name="serial">07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf</entry>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <entry name="uuid">07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf</entry>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:5b:e9:fe"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <target dev="tap9835ad6c-8e"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/console.log" append="off"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:31:13 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:31:13 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:31:13 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:31:13 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.632 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Preparing to wait for external event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.632 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.633 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.633 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.634 2 DEBUG nova.virt.libvirt.vif [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-361192799',display_name='tempest-ServerDiskConfigTestJSON-server-361192799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-361192799',id=63,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-bmy2yvfw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:07Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.634 2 DEBUG nova.network.os_vif_util [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.635 2 DEBUG nova.network.os_vif_util [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.635 2 DEBUG os_vif [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9835ad6c-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9835ad6c-8e, col_values=(('external_ids', {'iface-id': '9835ad6c-8ea8-4a79-8f07-042186ea7c71', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:e9:fe', 'vm-uuid': '07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:13 np0005465604 NetworkManager[45129]: <info>  [1759393873.6797] manager: (tap9835ad6c-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.694 2 INFO os_vif [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e')#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.762 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.763 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.773 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.774 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.774 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] No VIF found with MAC fa:16:3e:5b:e9:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.775 2 INFO nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Using config drive#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.800 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.809 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.917 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.919 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.929 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:31:13 np0005465604 nova_compute[260603]: 2025-10-02 08:31:13.930 2 INFO nova.compute.claims [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.137 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.198 2 DEBUG nova.network.neutron [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Updated VIF entry in instance network info cache for port 9835ad6c-8ea8-4a79-8f07-042186ea7c71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.199 2 DEBUG nova.network.neutron [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Updating instance_info_cache with network_info: [{"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.219 2 DEBUG oslo_concurrency.lockutils [req-4764e0ce-39a6-478e-b52c-c799434661f0 req-af705b61-e1b1-4303-861a-4122b01ab52e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.424 2 INFO nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Creating config drive at /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.431 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp37e4p8wm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:31:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544996907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.579 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp37e4p8wm" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.618 2 DEBUG nova.storage.rbd_utils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.624 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.662 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.672 2 DEBUG nova.compute.provider_tree [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.700 2 DEBUG nova.scheduler.client.report [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.741 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.743 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.799 2 DEBUG oslo_concurrency.processutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.800 2 INFO nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Deleting local config drive /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/disk.config because it was imported into RBD.#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.807 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.807 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.832 2 INFO nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.854 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:31:14 np0005465604 kernel: tap9835ad6c-8e: entered promiscuous mode
Oct  2 04:31:14 np0005465604 NetworkManager[45129]: <info>  [1759393874.8908] manager: (tap9835ad6c-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Oct  2 04:31:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:14Z|00566|binding|INFO|Claiming lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 for this chassis.
Oct  2 04:31:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:14Z|00567|binding|INFO|9835ad6c-8ea8-4a79-8f07-042186ea7c71: Claiming fa:16:3e:5b:e9:fe 10.100.0.10
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.920 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:e9:fe 10.100.0.10'], port_security=['fa:16:3e:5b:e9:fe 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9835ad6c-8ea8-4a79-8f07-042186ea7c71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:31:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.922 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9835ad6c-8ea8-4a79-8f07-042186ea7c71 in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b bound to our chassis#033[00m
Oct  2 04:31:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.923 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8df0af1-1767-419a-8500-c28fbf45ae4b#033[00m
Oct  2 04:31:14 np0005465604 systemd-udevd[323924]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:31:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.945 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df91c45b-0265-4e74-9ff1-37b235649bb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.946 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8df0af1-11 in ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:31:14 np0005465604 NetworkManager[45129]: <info>  [1759393874.9497] device (tap9835ad6c-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:31:14 np0005465604 NetworkManager[45129]: <info>  [1759393874.9510] device (tap9835ad6c-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:31:14 np0005465604 systemd-machined[214636]: New machine qemu-70-instance-0000003f.
Oct  2 04:31:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.951 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8df0af1-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:31:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.951 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c5dbd8-c2dc-490a-b8dc-153b5f71d04a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.958 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50d6054f-e08e-4bd8-a10b-23dbea561264]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:14 np0005465604 systemd[1]: Started Virtual Machine qemu-70-instance-0000003f.
Oct  2 04:31:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:14.972 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[48b4e526-65bd-4c86-9ea0-840b8d53f60b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.977 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.979 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:31:14 np0005465604 nova_compute[260603]: 2025-10-02 08:31:14.979 2 INFO nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Creating image(s)#033[00m
Oct  2 04:31:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:14Z|00568|binding|INFO|Setting lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 ovn-installed in OVS
Oct  2 04:31:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:14Z|00569|binding|INFO|Setting lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 up in Southbound
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.005 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df0e8256-7fdb-4121-9396-a774066d99ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 181 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 485 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.015 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.039 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c80ac2-0b2b-4235-af0b-80eb64053571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.049 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[297f9e83-8297-4d79-9c77-9c62392cc143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 systemd-udevd[323928]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:31:15 np0005465604 NetworkManager[45129]: <info>  [1759393875.0506] manager: (tapf8df0af1-10): new Veth device (/org/freedesktop/NetworkManager/Devices/245)
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.100 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4378227-1644-4378-be18-af5c36a0dd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.100 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.104 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f00b2915-370e-4d7c-b593-4781ab456e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 NetworkManager[45129]: <info>  [1759393875.1292] device (tapf8df0af1-10): carrier: link connected
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.135 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d6151db3-7e98-40ce-8c29-067306f7ce67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.143 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.153 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.158 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e4f3f2-d804-4ffa-94d0-7730744b5ab0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475373, 'reachable_time': 32653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324012, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.176 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[048a21e0-d5e2-4a25-b723-50013ce1794b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:6ddc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475373, 'tstamp': 475373}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324013, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.194 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f7943369-bdd3-4371-9989-1124adab16ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8df0af1-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:6d:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475373, 'reachable_time': 32653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324015, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.207 2 DEBUG nova.policy [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33ee6781337742479d7b4b078ad6a221', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.228 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aaac8e0e-6e4b-4257-8b76-68493bf94b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.264 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.265 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.265 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.266 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49e7e668-b62c-4e35-a4e2-bba540000961-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.266 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] No waiting events found dispatching network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.266 2 WARNING nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received unexpected event network-vif-plugged-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.266 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.267 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.267 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.267 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.267 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Processing event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.268 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.268 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.268 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.268 2 DEBUG oslo_concurrency.lockutils [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.268 2 DEBUG nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.269 2 WARNING nova.compute.manager [req-6c856d72-593d-4a03-93f6-5fd0c6dd68dd req-17e31325-7b6c-499f-a744-cee47933aa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.269 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.270 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.270 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.271 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.271 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.305 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a5165a57-7027-41bb-8c32-233d20b41055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.306 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.306 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.307 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8df0af1-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.307 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:15 np0005465604 NetworkManager[45129]: <info>  [1759393875.3087] manager: (tapf8df0af1-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Oct  2 04:31:15 np0005465604 kernel: tapf8df0af1-10: entered promiscuous mode
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.311 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8df0af1-10, col_values=(('external_ids', {'iface-id': '1405e724-f2f6-4a95-8848-550131e62910'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:15Z|00570|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.314 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.315 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3a1a5d-c157-4f6d-adb4-e9c14e661d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.316 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/f8df0af1-1767-419a-8500-c28fbf45ae4b.pid.haproxy
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID f8df0af1-1767-419a-8500-c28fbf45ae4b
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:31:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:15.316 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'env', 'PROCESS_TAG=haproxy-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8df0af1-1767-419a-8500-c28fbf45ae4b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.333 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7ac34b0c-8ced-417d-9442-8fda77804a34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.381 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393875.2741497, 501f8cba-892f-489d-81b5-abb8669f49eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.381 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.389 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.392 2 DEBUG nova.compute.manager [req-30fcc749-c83e-48ee-bdfa-0de8bcf446e5 req-f78080fc-d5da-46c0-b676-3ccc0c0d3149 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.393 2 DEBUG oslo_concurrency.lockutils [req-30fcc749-c83e-48ee-bdfa-0de8bcf446e5 req-f78080fc-d5da-46c0-b676-3ccc0c0d3149 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.393 2 DEBUG oslo_concurrency.lockutils [req-30fcc749-c83e-48ee-bdfa-0de8bcf446e5 req-f78080fc-d5da-46c0-b676-3ccc0c0d3149 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.394 2 DEBUG oslo_concurrency.lockutils [req-30fcc749-c83e-48ee-bdfa-0de8bcf446e5 req-f78080fc-d5da-46c0-b676-3ccc0c0d3149 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.394 2 DEBUG nova.compute.manager [req-30fcc749-c83e-48ee-bdfa-0de8bcf446e5 req-f78080fc-d5da-46c0-b676-3ccc0c0d3149 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Processing event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.410 2 INFO nova.virt.libvirt.driver [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance spawned successfully.
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.411 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.419 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.426 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.442 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.443 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.443 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.444 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.444 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.445 2 DEBUG nova.virt.libvirt.driver [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.454 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.510 2 INFO nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Took 17.87 seconds to spawn the instance on the hypervisor.
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.511 2 DEBUG nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.692 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7ac34b0c-8ced-417d-9442-8fda77804a34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:31:15 np0005465604 podman[324124]: 2025-10-02 08:31:15.755939591 +0000 UTC m=+0.052687986 container create f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 04:31:15 np0005465604 systemd[1]: Started libpod-conmon-f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0.scope.
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.816 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] resizing rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:31:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:31:15 np0005465604 podman[324124]: 2025-10-02 08:31:15.729340106 +0000 UTC m=+0.026088521 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:31:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b436514243685187aa6cdcdd36d5f1493da17b7b370783efb1b916d0e9b66972/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:31:15 np0005465604 podman[324124]: 2025-10-02 08:31:15.850523906 +0000 UTC m=+0.147272321 container init f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 04:31:15 np0005465604 podman[324124]: 2025-10-02 08:31:15.856522532 +0000 UTC m=+0.153270927 container start f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 04:31:15 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [NOTICE]   (324199) : New worker (324201) forked
Oct  2 04:31:15 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [NOTICE]   (324199) : Loading success.
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.954 2 INFO nova.compute.manager [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Took 20.29 seconds to build instance.
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.966 2 DEBUG nova.objects.instance [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 7ac34b0c-8ced-417d-9442-8fda77804a34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.993 2 DEBUG oslo_concurrency.lockutils [None req-4064e2de-c5e6-461c-8c3c-1471815d4c6c db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.994 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.995 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Ensure instance console log exists: /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.995 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.995 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:31:15 np0005465604 nova_compute[260603]: 2025-10-02 08:31:15.996 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.215 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.216 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393876.2144597, 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.216 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] VM Started (Lifecycle Event)
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.222 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.225 2 INFO nova.virt.libvirt.driver [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance spawned successfully.
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.225 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.251 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.255 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.260 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.260 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.261 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.261 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.261 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.261 2 DEBUG nova.virt.libvirt.driver [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.280 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.280 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393876.2179103, 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.281 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] VM Paused (Lifecycle Event)
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.310 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.316 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393876.221003, 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.316 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] VM Resumed (Lifecycle Event)
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.324 2 INFO nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Took 8.85 seconds to spawn the instance on the hypervisor.
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.325 2 DEBUG nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.340 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.346 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.391 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.414 2 INFO nova.compute.manager [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Took 9.97 seconds to build instance.
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.430 2 DEBUG oslo_concurrency.lockutils [None req-1c1395e4-29b8-4ec3-bd7c-46f22a1b7e0a 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:31:16 np0005465604 nova_compute[260603]: 2025-10-02 08:31:16.965 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Successfully created port: 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:31:16 np0005465604 podman[324229]: 2025-10-02 08:31:16.985569426 +0000 UTC m=+0.054348878 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 04:31:17 np0005465604 podman[324228]: 2025-10-02 08:31:17.010002354 +0000 UTC m=+0.079069145 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:31:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 181 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct  2 04:31:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:31:17 np0005465604 NetworkManager[45129]: <info>  [1759393877.1289] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:17 np0005465604 NetworkManager[45129]: <info>  [1759393877.1296] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:17Z|00571|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct  2 04:31:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:17Z|00572|binding|INFO|Releasing lport dacfca36-fe1e-4001-8669-9c1cc2cd3f3a from this chassis (sb_readonly=0)
Oct  2 04:31:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:17Z|00573|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct  2 04:31:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:17Z|00574|binding|INFO|Releasing lport a5ae140a-c6f5-41f2-b9e0-fa0f8fcd807d from this chassis (sb_readonly=0)
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.532 2 DEBUG nova.compute.manager [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.533 2 DEBUG oslo_concurrency.lockutils [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.533 2 DEBUG oslo_concurrency.lockutils [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.533 2 DEBUG oslo_concurrency.lockutils [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.533 2 DEBUG nova.compute.manager [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] No waiting events found dispatching network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.534 2 WARNING nova.compute.manager [req-2bcda2f4-4832-4a27-943e-dfa2e1e29d60 req-bfcbba90-7c95-4b19-a841-cc3a9c5f8c88 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received unexpected event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 for instance with vm_state active and task_state None.
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.546 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.546 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:31:17 np0005465604 nova_compute[260603]: 2025-10-02 08:31:17.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:18 np0005465604 nova_compute[260603]: 2025-10-02 08:31:18.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:18 np0005465604 nova_compute[260603]: 2025-10-02 08:31:18.727 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Successfully updated port: 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:31:18 np0005465604 nova_compute[260603]: 2025-10-02 08:31:18.742 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:31:18 np0005465604 nova_compute[260603]: 2025-10-02 08:31:18.742 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquired lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:18 np0005465604 nova_compute[260603]: 2025-10-02 08:31:18.742 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:31:18 np0005465604 nova_compute[260603]: 2025-10-02 08:31:18.954 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393863.9093144, f8f36f36-817a-4e64-8c57-c211cfc7b0ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:18 np0005465604 nova_compute[260603]: 2025-10-02 08:31:18.954 2 INFO nova.compute.manager [-] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:31:18 np0005465604 nova_compute[260603]: 2025-10-02 08:31:18.976 2 DEBUG nova.compute.manager [None req-7524d67c-749c-445b-819f-9cefbd7fb6cd - - - - - -] [instance: f8f36f36-817a-4e64-8c57-c211cfc7b0ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 227 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.6 MiB/s wr, 293 op/s
Oct  2 04:31:19 np0005465604 nova_compute[260603]: 2025-10-02 08:31:19.655 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:31:19 np0005465604 nova_compute[260603]: 2025-10-02 08:31:19.841 2 DEBUG nova.compute.manager [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received event network-changed-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:31:19 np0005465604 nova_compute[260603]: 2025-10-02 08:31:19.842 2 DEBUG nova.compute.manager [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Refreshing instance network info cache due to event network-changed-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:31:19 np0005465604 nova_compute[260603]: 2025-10-02 08:31:19.842 2 DEBUG oslo_concurrency.lockutils [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:31:19 np0005465604 nova_compute[260603]: 2025-10-02 08:31:19.929 2 DEBUG nova.compute.manager [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Received event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:31:19 np0005465604 nova_compute[260603]: 2025-10-02 08:31:19.930 2 DEBUG nova.compute.manager [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing instance network info cache due to event network-changed-37e9c33f-0ff9-4138-a7b5-989ba3c016a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:31:19 np0005465604 nova_compute[260603]: 2025-10-02 08:31:19.930 2 DEBUG oslo_concurrency.lockutils [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:31:19 np0005465604 nova_compute[260603]: 2025-10-02 08:31:19.930 2 DEBUG oslo_concurrency.lockutils [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:31:19 np0005465604 nova_compute[260603]: 2025-10-02 08:31:19.930 2 DEBUG nova.network.neutron [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Refreshing network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:31:20 np0005465604 nova_compute[260603]: 2025-10-02 08:31:20.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:31:20 np0005465604 nova_compute[260603]: 2025-10-02 08:31:20.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:31:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 227 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.4 MiB/s wr, 244 op/s
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.063 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.064 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.064 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.064 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.064 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.065 2 INFO nova.compute.manager [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Terminating instance
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.066 2 DEBUG nova.compute.manager [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:31:21 np0005465604 kernel: tape19eb16b-f0 (unregistering): left promiscuous mode
Oct  2 04:31:21 np0005465604 NetworkManager[45129]: <info>  [1759393881.1323] device (tape19eb16b-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:21Z|00575|binding|INFO|Releasing lport e19eb16b-f042-4e4d-922b-7057ad6ebb1c from this chassis (sb_readonly=0)
Oct  2 04:31:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:21Z|00576|binding|INFO|Setting lport e19eb16b-f042-4e4d-922b-7057ad6ebb1c down in Southbound
Oct  2 04:31:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:21Z|00577|binding|INFO|Removing iface tape19eb16b-f0 ovn-installed in OVS
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.149 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:26:fb 10.100.0.81'], port_security=['fa:16:3e:c1:26:fb 10.100.0.81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.81/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cff19d1e-1358-4314-bab4-67f6fde7eba9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e19eb16b-f042-4e4d-922b-7057ad6ebb1c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.151 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e19eb16b-f042-4e4d-922b-7057ad6ebb1c in datapath 54248606-6cdd-4d53-9b28-14d8ac1cf290 unbound from our chassis
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.152 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54248606-6cdd-4d53-9b28-14d8ac1cf290
Oct  2 04:31:21 np0005465604 kernel: tape8f18c99-19 (unregistering): left promiscuous mode
Oct  2 04:31:21 np0005465604 NetworkManager[45129]: <info>  [1759393881.1670] device (tape8f18c99-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:21Z|00578|binding|INFO|Releasing lport e8f18c99-1964-43d6-a955-7b5064c53b3a from this chassis (sb_readonly=0)
Oct  2 04:31:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:21Z|00579|binding|INFO|Setting lport e8f18c99-1964-43d6-a955-7b5064c53b3a down in Southbound
Oct  2 04:31:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:21Z|00580|binding|INFO|Removing iface tape8f18c99-19 ovn-installed in OVS
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.183 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:89:45 10.100.1.99'], port_security=['fa:16:3e:28:89:45 10.100.1.99'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.99/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1454d03-261a-47dd-a4d8-470427544a20, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e8f18c99-1964-43d6-a955-7b5064c53b3a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[793eec77-9522-45d8-8a03-54a8cd612764]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:31:21 np0005465604 kernel: tap35092937-95 (unregistering): left promiscuous mode
Oct  2 04:31:21 np0005465604 NetworkManager[45129]: <info>  [1759393881.1918] device (tap35092937-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.215 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[86240cf4-7e30-4299-bb1f-bc83ae3721e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:21Z|00581|binding|INFO|Releasing lport 35092937-9590-42d1-a022-549b740da3c5 from this chassis (sb_readonly=0)
Oct  2 04:31:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:21Z|00582|binding|INFO|Setting lport 35092937-9590-42d1-a022-549b740da3c5 down in Southbound
Oct  2 04:31:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:21Z|00583|binding|INFO|Removing iface tap35092937-95 ovn-installed in OVS
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.220 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[06867620-f8d0-4f45-be82-69d726f25652]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.225 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:5d:8f 10.100.0.194'], port_security=['fa:16:3e:7b:5d:8f 10.100.0.194'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.194/24', 'neutron:device_id': '501f8cba-892f-489d-81b5-abb8669f49eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1a0928fe-c098-4b26-b16c-3388d3dcf9da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cff19d1e-1358-4314-bab4-67f6fde7eba9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=35092937-9590-42d1-a022-549b740da3c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.248 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2b78b8-d6eb-4385-8c4b-08faa42d4675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:31:21 np0005465604 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Oct  2 04:31:21 np0005465604 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Consumed 7.255s CPU time.
Oct  2 04:31:21 np0005465604 systemd-machined[214636]: Machine qemu-69-instance-0000003e terminated.
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.273 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c845db1d-95ff-4326-979d-548bdcd8d663]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54248606-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:e5:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474840, 'reachable_time': 41953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324294, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:31:21 np0005465604 NetworkManager[45129]: <info>  [1759393881.2936] manager: (tape8f18c99-19): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.298 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e68c287-4272-4e0f-8e70-d09545e35a44]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap54248606-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474855, 'tstamp': 474855}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324299, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap54248606-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474858, 'tstamp': 474858}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324299, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.300 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54248606-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:21 np0005465604 NetworkManager[45129]: <info>  [1759393881.3044] manager: (tap35092937-95): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.322 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54248606-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.322 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.322 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54248606-60, col_values=(('external_ids', {'iface-id': 'dacfca36-fe1e-4001-8669-9c1cc2cd3f3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.323 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.325 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e8f18c99-1964-43d6-a955-7b5064c53b3a in datapath 47336fa6-fcc3-40f8-ae02-9bae73a94c41 unbound from our chassis
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.330 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 47336fa6-fcc3-40f8-ae02-9bae73a94c41, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.331 2 INFO nova.virt.libvirt.driver [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Instance destroyed successfully.
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.333 2 DEBUG nova.objects.instance [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lazy-loading 'resources' on Instance uuid 501f8cba-892f-489d-81b5-abb8669f49eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.331 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e25339-faf6-4232-a0f4-3210b807165f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.336 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41 namespace which is not needed anymore
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.351 2 DEBUG nova.virt.libvirt.vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:15Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.352 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "address": "fa:16:3e:c1:26:fb", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.81", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape19eb16b-f0", "ovs_interfaceid": "e19eb16b-f042-4e4d-922b-7057ad6ebb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.352 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.353 2 DEBUG os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.355 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape19eb16b-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.367 2 INFO os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:26:fb,bridge_name='br-int',has_traffic_filtering=True,id=e19eb16b-f042-4e4d-922b-7057ad6ebb1c,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape19eb16b-f0')#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.368 2 DEBUG nova.virt.libvirt.vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:15Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.369 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "address": "fa:16:3e:28:89:45", "network": {"id": "47336fa6-fcc3-40f8-ae02-9bae73a94c41", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-274124049", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.99", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8f18c99-19", "ovs_interfaceid": "e8f18c99-1964-43d6-a955-7b5064c53b3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.369 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.370 2 DEBUG os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8f18c99-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.382 2 INFO os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:89:45,bridge_name='br-int',has_traffic_filtering=True,id=e8f18c99-1964-43d6-a955-7b5064c53b3a,network=Network(47336fa6-fcc3-40f8-ae02-9bae73a94c41),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8f18c99-19')#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.383 2 DEBUG nova.virt.libvirt.vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:30:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1736583523',display_name='tempest-ServersTestMultiNic-server-1736583523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1736583523',id=62,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='62c4ff42369740eebbf14969f4d8d2e5',ramdisk_id='',reservation_id='r-4l4bd126',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-670565182',owner_user_name='tempest-ServersTestMultiNic-670565182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:31:15Z,user_data=None,user_id='db9a3b1e6d93495f8c849658ffc4e535',uuid=501f8cba-892f-489d-81b5-abb8669f49eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.384 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converting VIF {"id": "35092937-9590-42d1-a022-549b740da3c5", "address": "fa:16:3e:7b:5d:8f", "network": {"id": "54248606-6cdd-4d53-9b28-14d8ac1cf290", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1272890804", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62c4ff42369740eebbf14969f4d8d2e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35092937-95", "ovs_interfaceid": "35092937-9590-42d1-a022-549b740da3c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.386 2 DEBUG nova.network.os_vif_util [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.386 2 DEBUG os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.387 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35092937-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.393 2 INFO os_vif [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:5d:8f,bridge_name='br-int',has_traffic_filtering=True,id=35092937-9590-42d1-a022-549b740da3c5,network=Network(54248606-6cdd-4d53-9b28-14d8ac1cf290),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35092937-95')#033[00m
Oct  2 04:31:21 np0005465604 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [NOTICE]   (323756) : haproxy version is 2.8.14-c23fe91
Oct  2 04:31:21 np0005465604 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [NOTICE]   (323756) : path to executable is /usr/sbin/haproxy
Oct  2 04:31:21 np0005465604 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [WARNING]  (323756) : Exiting Master process...
Oct  2 04:31:21 np0005465604 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [ALERT]    (323756) : Current worker (323758) exited with code 143 (Terminated)
Oct  2 04:31:21 np0005465604 neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41[323752]: [WARNING]  (323756) : All workers exited. Exiting... (0)
Oct  2 04:31:21 np0005465604 systemd[1]: libpod-2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f.scope: Deactivated successfully.
Oct  2 04:31:21 np0005465604 conmon[323752]: conmon 2366353275d0610a8c46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f.scope/container/memory.events
Oct  2 04:31:21 np0005465604 podman[324365]: 2025-10-02 08:31:21.486250608 +0000 UTC m=+0.052274624 container died 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 04:31:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f-userdata-shm.mount: Deactivated successfully.
Oct  2 04:31:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-927c4f8d9e26b4f276f849fe227297392dc8511c85e75a6e69adb62038a5f909-merged.mount: Deactivated successfully.
Oct  2 04:31:21 np0005465604 podman[324365]: 2025-10-02 08:31:21.540867432 +0000 UTC m=+0.106891488 container cleanup 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:31:21 np0005465604 systemd[1]: libpod-conmon-2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f.scope: Deactivated successfully.
Oct  2 04:31:21 np0005465604 podman[324396]: 2025-10-02 08:31:21.606344284 +0000 UTC m=+0.040778626 container remove 2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c48069-016e-45a8-8af1-98bf2b592f93]: (4, ('Thu Oct  2 08:31:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41 (2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f)\n2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f\nThu Oct  2 08:31:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41 (2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f)\n2366353275d0610a8c4624d23a60b73fbde1a526a8102aa15c13ecc458f6cd0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.618 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2c905e70-ca2b-45c0-bf30-e8b53679f623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.620 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47336fa6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 kernel: tap47336fa6-f0: left promiscuous mode
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.640 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[040caf88-7692-45e3-8d04-39b907b4a3e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.668 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9391dca-cb24-41a2-bece-70508a452f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.669 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94e7037c-3b53-45b4-9df5-c8eb34e2be4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.688 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b60d4c08-6f8b-409f-a555-fab9174e1608]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474961, 'reachable_time': 32642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324412, 'error': None, 'target': 'ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 systemd[1]: run-netns-ovnmeta\x2d47336fa6\x2dfcc3\x2d40f8\x2dae02\x2d9bae73a94c41.mount: Deactivated successfully.
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.694 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-47336fa6-fcc3-40f8-ae02-9bae73a94c41 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.694 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d7a0a6-c4cb-4cba-b8ae-7832e3eb072d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.695 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 35092937-9590-42d1-a022-549b740da3c5 in datapath 54248606-6cdd-4d53-9b28-14d8ac1cf290 unbound from our chassis#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.698 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54248606-6cdd-4d53-9b28-14d8ac1cf290, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.699 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[967ed653-9ed2-43ba-9205-ea190ea65599]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.700 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290 namespace which is not needed anymore#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.743 2 DEBUG nova.network.neutron [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Updating instance_info_cache with network_info: [{"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.781 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Releasing lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.782 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Instance network_info: |[{"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.783 2 DEBUG oslo_concurrency.lockutils [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.784 2 DEBUG nova.network.neutron [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Refreshing network info cache for port 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.789 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Start _get_guest_xml network_info=[{"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.798 2 WARNING nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.807 2 INFO nova.virt.libvirt.driver [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Deleting instance files /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb_del#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.807 2 INFO nova.virt.libvirt.driver [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Deletion of /var/lib/nova/instances/501f8cba-892f-489d-81b5-abb8669f49eb_del complete#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.816 2 DEBUG nova.virt.libvirt.host [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.817 2 DEBUG nova.virt.libvirt.host [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.823 2 DEBUG nova.virt.libvirt.host [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.824 2 DEBUG nova.virt.libvirt.host [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.825 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.825 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.826 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.826 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.826 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.827 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.827 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.828 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.829 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.829 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.829 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.830 2 DEBUG nova.virt.hardware [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:31:21 np0005465604 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [NOTICE]   (323674) : haproxy version is 2.8.14-c23fe91
Oct  2 04:31:21 np0005465604 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [NOTICE]   (323674) : path to executable is /usr/sbin/haproxy
Oct  2 04:31:21 np0005465604 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [WARNING]  (323674) : Exiting Master process...
Oct  2 04:31:21 np0005465604 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [ALERT]    (323674) : Current worker (323676) exited with code 143 (Terminated)
Oct  2 04:31:21 np0005465604 neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290[323670]: [WARNING]  (323674) : All workers exited. Exiting... (0)
Oct  2 04:31:21 np0005465604 systemd[1]: libpod-595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723.scope: Deactivated successfully.
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.834 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:21 np0005465604 podman[324429]: 2025-10-02 08:31:21.84212804 +0000 UTC m=+0.045485672 container died 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:31:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723-userdata-shm.mount: Deactivated successfully.
Oct  2 04:31:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c8b45d26d074aac14704b1e922d565a27ad4c485c96934af760c5e755ad0d98b-merged.mount: Deactivated successfully.
Oct  2 04:31:21 np0005465604 podman[324429]: 2025-10-02 08:31:21.874207365 +0000 UTC m=+0.077565007 container cleanup 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 04:31:21 np0005465604 systemd[1]: libpod-conmon-595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723.scope: Deactivated successfully.
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.902 2 INFO nova.compute.manager [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.903 2 DEBUG oslo.service.loopingcall [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.903 2 DEBUG nova.compute.manager [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.904 2 DEBUG nova.network.neutron [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:31:21 np0005465604 podman[324457]: 2025-10-02 08:31:21.93685335 +0000 UTC m=+0.043242033 container remove 595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.943 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[392cb454-988e-4af7-a9f7-8a27e8b862e4]: (4, ('Thu Oct  2 08:31:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290 (595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723)\n595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723\nThu Oct  2 08:31:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290 (595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723)\n595aaf163b77bbd2e25fccc081dd11dfaaf7a58d870180dd4e8718584afc1723\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.946 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a5af82-5257-42c0-963c-b0a824ea0b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.947 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54248606-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 kernel: tap54248606-60: left promiscuous mode
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.958 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[704b96af-268e-482a-93d7-4936d2ef526f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 nova_compute[260603]: 2025-10-02 08:31:21.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.985 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[39788b8a-1f7f-48c2-91f1-1b1dc2366711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:21.990 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1696df6b-2582-47b1-b1df-f7f21367f681]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:22.013 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e680a89-9ce1-47c5-b4c2-1a46337a3292]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474828, 'reachable_time': 27722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324491, 'error': None, 'target': 'ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:22.016 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-54248606-6cdd-4d53-9b28-14d8ac1cf290 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:31:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:22.016 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d93aee32-7b0b-4e40-b63b-16477c57df91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:31:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/936054651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:31:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:31:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/936054651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:31:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:31:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925441020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.295 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.343 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.355 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.485 2 DEBUG nova.compute.manager [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.486 2 DEBUG oslo_concurrency.lockutils [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.486 2 DEBUG oslo_concurrency.lockutils [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.487 2 DEBUG oslo_concurrency.lockutils [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.487 2 DEBUG nova.compute.manager [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-unplugged-35092937-9590-42d1-a022-549b740da3c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.488 2 DEBUG nova.compute.manager [req-a685ac75-82cc-4c9b-a5d8-b457bf83ecde req-9a13193a-0ced-4139-90da-6e106f14646d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-35092937-9590-42d1-a022-549b740da3c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.517 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:31:22 np0005465604 systemd[1]: run-netns-ovnmeta\x2d54248606\x2d6cdd\x2d4d53\x2d9b28\x2d14d8ac1cf290.mount: Deactivated successfully.
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.523 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.524 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.524 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.525 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.525 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-unplugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.528 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.530 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.530 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.532 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.533 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.534 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.535 2 WARNING nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-e19eb16b-f042-4e4d-922b-7057ad6ebb1c for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.536 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.536 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.537 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.538 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.538 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-unplugged-e8f18c99-1964-43d6-a955-7b5064c53b3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.538 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-unplugged-e8f18c99-1964-43d6-a955-7b5064c53b3a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.539 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.539 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.540 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.540 2 DEBUG oslo_concurrency.lockutils [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.541 2 DEBUG nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.541 2 WARNING nova.compute.manager [req-847c5a8d-4bdb-4b34-a5d5-790983ff2468 req-cccb9ace-c538-44d0-be9b-f0fcc6e7f508 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-e8f18c99-1964-43d6-a955-7b5064c53b3a for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.543 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:31:22 np0005465604 podman[324513]: 2025-10-02 08:31:22.545343031 +0000 UTC m=+0.095765373 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.584 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.585 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.586 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.586 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.587 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.666 2 DEBUG nova.network.neutron [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updated VIF entry in instance network info cache for port 37e9c33f-0ff9-4138-a7b5-989ba3c016a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.667 2 DEBUG nova.network.neutron [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Updating instance_info_cache with network_info: [{"id": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "address": "fa:16:3e:19:cc:f7", "network": {"id": "ef30d863-af60-49d9-b5d2-5e4f20c70d56", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1474698811-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eda0caa41e4740148ab99d5ebf9e27ba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37e9c33f-0f", "ovs_interfaceid": "37e9c33f-0ff9-4138-a7b5-989ba3c016a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.690 2 DEBUG oslo_concurrency.lockutils [req-ca325238-17df-456f-bac2-b440ac6afe51 req-9dfa3c86-4687-4ccc-919a-7c363be98409 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-49e7e668-b62c-4e35-a4e2-bba540000961" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943899668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.886 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.889 2 DEBUG nova.virt.libvirt.vif [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1230108710',display_name='tempest-₡-1230108710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1230108710',id=64,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-djhvanl3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=No
ne,updated_at=2025-10-02T08:31:14Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=7ac34b0c-8ced-417d-9442-8fda77804a34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.890 2 DEBUG nova.network.os_vif_util [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.891 2 DEBUG nova.network.os_vif_util [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:fa:94,bridge_name='br-int',has_traffic_filtering=True,id=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fbf8f14-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.894 2 DEBUG nova.objects.instance [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ac34b0c-8ced-417d-9442-8fda77804a34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.912 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  <uuid>7ac34b0c-8ced-417d-9442-8fda77804a34</uuid>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  <name>instance-00000040</name>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <nova:name>tempest-₡-1230108710</nova:name>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:31:21</nova:creationTime>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <nova:user uuid="33ee6781337742479d7b4b078ad6a221">tempest-ServersTestJSON-520437589-project-member</nova:user>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <nova:project uuid="f6678937d40d4004ad15e1e9eef6f9c7">tempest-ServersTestJSON-520437589</nova:project>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <nova:port uuid="2fbf8f14-9d1f-4042-9f0b-1abcc448ea97">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <entry name="serial">7ac34b0c-8ced-417d-9442-8fda77804a34</entry>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <entry name="uuid">7ac34b0c-8ced-417d-9442-8fda77804a34</entry>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7ac34b0c-8ced-417d-9442-8fda77804a34_disk">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:23:fa:94"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <target dev="tap2fbf8f14-9d"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/console.log" append="off"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:31:22 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:31:22 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:31:22 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:31:22 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.924 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Preparing to wait for external event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.925 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.925 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.926 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.927 2 DEBUG nova.virt.libvirt.vif [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1230108710',display_name='tempest-₡-1230108710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1230108710',id=64,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-djhvanl3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:14Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=7ac34b0c-8ced-417d-9442-8fda77804a34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.928 2 DEBUG nova.network.os_vif_util [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.930 2 DEBUG nova.network.os_vif_util [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:fa:94,bridge_name='br-int',has_traffic_filtering=True,id=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fbf8f14-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.931 2 DEBUG os_vif [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:fa:94,bridge_name='br-int',has_traffic_filtering=True,id=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fbf8f14-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.933 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.934 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fbf8f14-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.942 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fbf8f14-9d, col_values=(('external_ids', {'iface-id': '2fbf8f14-9d1f-4042-9f0b-1abcc448ea97', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:fa:94', 'vm-uuid': '7ac34b0c-8ced-417d-9442-8fda77804a34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:22 np0005465604 NetworkManager[45129]: <info>  [1759393882.9450] manager: (tap2fbf8f14-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:22 np0005465604 nova_compute[260603]: 2025-10-02 08:31:22.955 2 INFO os_vif [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:fa:94,bridge_name='br-int',has_traffic_filtering=True,id=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fbf8f14-9d')#033[00m
Oct  2 04:31:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 197 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.4 MiB/s wr, 267 op/s
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.028 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.029 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.030 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No VIF found with MAC fa:16:3e:23:fa:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.031 2 INFO nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Using config drive#033[00m
Oct  2 04:31:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:31:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1055812350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.070 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.081 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.157 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.158 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.171 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.172 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.181 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.182 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.419 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.421 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3790MB free_disk=59.90481185913086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.421 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.421 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.499 2 INFO nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Creating config drive at /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.505 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp13cwefln execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.565 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 49e7e668-b62c-4e35-a4e2-bba540000961 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.566 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 501f8cba-892f-489d-81b5-abb8669f49eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.566 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.566 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 7ac34b0c-8ced-417d-9442-8fda77804a34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.566 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.566 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.646 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp13cwefln" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.669 2 DEBUG nova.storage.rbd_utils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.678 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config 7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.760 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.843 2 DEBUG nova.network.neutron [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Updated VIF entry in instance network info cache for port 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.845 2 DEBUG nova.network.neutron [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Updating instance_info_cache with network_info: [{"id": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "address": "fa:16:3e:23:fa:94", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fbf8f14-9d", "ovs_interfaceid": "2fbf8f14-9d1f-4042-9f0b-1abcc448ea97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.849 2 DEBUG oslo_concurrency.processutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config 7ac34b0c-8ced-417d-9442-8fda77804a34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.850 2 INFO nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Deleting local config drive /var/lib/nova/instances/7ac34b0c-8ced-417d-9442-8fda77804a34/disk.config because it was imported into RBD.#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.876 2 DEBUG oslo_concurrency.lockutils [req-90c8ec13-2688-4182-9155-2666a8c06c98 req-6c51897f-e0b8-488b-a711-4591f4327b64 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7ac34b0c-8ced-417d-9442-8fda77804a34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:23 np0005465604 kernel: tap2fbf8f14-9d: entered promiscuous mode
Oct  2 04:31:23 np0005465604 NetworkManager[45129]: <info>  [1759393883.9115] manager: (tap2fbf8f14-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Oct  2 04:31:23 np0005465604 systemd-udevd[324277]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:23Z|00584|binding|INFO|Claiming lport 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 for this chassis.
Oct  2 04:31:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:23Z|00585|binding|INFO|2fbf8f14-9d1f-4042-9f0b-1abcc448ea97: Claiming fa:16:3e:23:fa:94 10.100.0.3
Oct  2 04:31:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.923 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:fa:94 10.100.0.3'], port_security=['fa:16:3e:23:fa:94 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7ac34b0c-8ced-417d-9442-8fda77804a34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37ace2bf-5f15-459a-8ff0-2d519c6d736b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8fddda6a-0229-4e87-ab37-0eab3f7af64e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2fbf8f14-9d1f-4042-9f0b-1abcc448ea97) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:31:23 np0005465604 NetworkManager[45129]: <info>  [1759393883.9265] device (tap2fbf8f14-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:31:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.926 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 in datapath 1e3507cf-e1b2-456e-8cff-b075c2a55621 bound to our chassis#033[00m
Oct  2 04:31:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.928 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e3507cf-e1b2-456e-8cff-b075c2a55621#033[00m
Oct  2 04:31:23 np0005465604 NetworkManager[45129]: <info>  [1759393883.9316] device (tap2fbf8f14-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:31:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.942 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f673837e-8ec7-47fe-b540-2c853ed8c4b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.944 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e3507cf-e1 in ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:23Z|00586|binding|INFO|Setting lport 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 ovn-installed in OVS
Oct  2 04:31:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.946 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e3507cf-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:31:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:23Z|00587|binding|INFO|Setting lport 2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 up in Southbound
Oct  2 04:31:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.946 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[55abab2d-1188-4d19-ad5c-2a23e1b9cd1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.952 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d7dd2e2-a542-46eb-aacb-ae38826fbd3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:23 np0005465604 nova_compute[260603]: 2025-10-02 08:31:23.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:23 np0005465604 systemd-machined[214636]: New machine qemu-71-instance-00000040.
Oct  2 04:31:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.969 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b8194ffa-d5c7-4b82-8c78-8d9e6e7332fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:23 np0005465604 systemd[1]: Started Virtual Machine qemu-71-instance-00000040.
Oct  2 04:31:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:23.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f5bb126b-e735-4883-bbff-2459dc6c6d3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.022 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[58984ce4-29cf-4675-85ee-7f38827678b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 NetworkManager[45129]: <info>  [1759393884.0279] manager: (tap1e3507cf-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/253)
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.028 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[63fa85d9-fbfb-4162-91ac-8d86ba8e2a40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.064 2 INFO nova.compute.manager [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Rebuilding instance#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.073 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ce4f94-0530-4104-a2a1-2f73a732990e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.076 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[95da95d7-a967-4146-915d-b65e949ca934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct  2 04:31:24 np0005465604 NetworkManager[45129]: <info>  [1759393884.1012] device (tap1e3507cf-e0): carrier: link connected
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.109 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b96316-d9c4-467f-b626-321f7833b789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.127 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b989e7a4-dd72-42d1-827d-85eef2492aa0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324700, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.146 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f41a0940-08d3-488e-bd46-90377685a3b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:88a2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476270, 'tstamp': 476270}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324701, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.171 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bf16af-4fe4-4873-a5b7-903d287d0522]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e3507cf-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:88:a2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476270, 'reachable_time': 28558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324702, 'error': None, 'target': 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.222 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ee12306c-da42-4b13-95fd-c16ade7d97ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:31:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051485142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.274 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.283 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.302 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.305 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[997b47a6-dfa4-46f6-9863-27980477b169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.307 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e3507cf-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.307 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.308 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e3507cf-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:24 np0005465604 NetworkManager[45129]: <info>  [1759393884.3106] manager: (tap1e3507cf-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:24 np0005465604 kernel: tap1e3507cf-e0: entered promiscuous mode
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.317 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e3507cf-e0, col_values=(('external_ids', {'iface-id': '8d9038d5-8bd6-460b-aca0-b6f7422e177a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:24Z|00588|binding|INFO|Releasing lport 8d9038d5-8bd6-460b-aca0-b6f7422e177a from this chassis (sb_readonly=0)
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.322 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e3507cf-e1b2-456e-8cff-b075c2a55621.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e3507cf-e1b2-456e-8cff-b075c2a55621.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.324 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[35d5538e-f9e2-463e-8d3b-6492e33ae80e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.326 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/1e3507cf-e1b2-456e-8cff-b075c2a55621.pid.haproxy
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 1e3507cf-e1b2-456e-8cff-b075c2a55621
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:31:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:24.327 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'env', 'PROCESS_TAG=haproxy-1e3507cf-e1b2-456e-8cff-b075c2a55621', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e3507cf-e1b2-456e-8cff-b075c2a55621.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.335 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.338 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.338 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.349 2 DEBUG nova.compute.manager [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.396 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_requests' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.406 2 DEBUG nova.network.neutron [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.408 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'pci_devices' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.422 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'resources' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.424 2 INFO nova.compute.manager [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Took 2.52 seconds to deallocate network for instance.#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.436 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'migration_context' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.445 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.449 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.470 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.471 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.578 2 DEBUG oslo_concurrency.processutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.631 2 DEBUG nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.632 2 DEBUG oslo_concurrency.lockutils [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.632 2 DEBUG oslo_concurrency.lockutils [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.632 2 DEBUG oslo_concurrency.lockutils [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.632 2 DEBUG nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] No waiting events found dispatching network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.632 2 WARNING nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received unexpected event network-vif-plugged-35092937-9590-42d1-a022-549b740da3c5 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.633 2 DEBUG nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-deleted-35092937-9590-42d1-a022-549b740da3c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.633 2 DEBUG nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-deleted-e8f18c99-1964-43d6-a955-7b5064c53b3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.633 2 DEBUG nova.compute.manager [req-1b7d5811-a1e3-4212-b29a-bbd4233e5ffe req-917cad72-68e2-43f3-88a0-4d52d7aa9938 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Received event network-vif-deleted-e19eb16b-f042-4e4d-922b-7057ad6ebb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.662 2 DEBUG nova.compute.manager [req-f375afe3-6e2e-46df-a600-438e7e75b150 req-17e053cf-f40f-4959-b444-3d266f427b45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.662 2 DEBUG oslo_concurrency.lockutils [req-f375afe3-6e2e-46df-a600-438e7e75b150 req-17e053cf-f40f-4959-b444-3d266f427b45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.662 2 DEBUG oslo_concurrency.lockutils [req-f375afe3-6e2e-46df-a600-438e7e75b150 req-17e053cf-f40f-4959-b444-3d266f427b45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.662 2 DEBUG oslo_concurrency.lockutils [req-f375afe3-6e2e-46df-a600-438e7e75b150 req-17e053cf-f40f-4959-b444-3d266f427b45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.663 2 DEBUG nova.compute.manager [req-f375afe3-6e2e-46df-a600-438e7e75b150 req-17e053cf-f40f-4959-b444-3d266f427b45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Processing event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:31:24 np0005465604 podman[324779]: 2025-10-02 08:31:24.787373399 +0000 UTC m=+0.064277266 container create 1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:31:24 np0005465604 podman[324779]: 2025-10-02 08:31:24.755129238 +0000 UTC m=+0.032033125 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:31:24 np0005465604 systemd[1]: Started libpod-conmon-1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34.scope.
Oct  2 04:31:24 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:31:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61431590530a1e10271e0cde98940632ec3d957e216da4e4fc4080e4734f710/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:31:24 np0005465604 podman[324779]: 2025-10-02 08:31:24.890063135 +0000 UTC m=+0.166967042 container init 1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 04:31:24 np0005465604 podman[324779]: 2025-10-02 08:31:24.894993288 +0000 UTC m=+0.171897175 container start 1dc540d273002a46be8e30a56e76bb6f0041e52dcd3c86d6991e4b7655d45b34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 04:31:24 np0005465604 neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621[324813]: [NOTICE]   (324817) : New worker (324819) forked
Oct  2 04:31:24 np0005465604 neutron-haproxy-ovnmeta-1e3507cf-e1b2-456e-8cff-b075c2a55621[324813]: [NOTICE]   (324817) : Loading success.
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.928 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393884.9278488, 7ac34b0c-8ced-417d-9442-8fda77804a34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.928 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] VM Started (Lifecycle Event)#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.931 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.934 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.937 2 INFO nova.virt.libvirt.driver [-] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Instance spawned successfully.#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.937 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.950 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.955 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.958 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.958 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.959 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.959 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.959 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.960 2 DEBUG nova.virt.libvirt.driver [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.987 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.988 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393884.9280937, 7ac34b0c-8ced-417d-9442-8fda77804a34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:24 np0005465604 nova_compute[260603]: 2025-10-02 08:31:24.988 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.009 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.015 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759393884.9335034, 7ac34b0c-8ced-417d-9442-8fda77804a34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.015 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:31:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 181 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 258 op/s
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.021 2 INFO nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Took 10.04 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.021 2 DEBUG nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.034 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.037 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.059 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:31:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:31:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835659606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.093 2 INFO nova.compute.manager [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Took 11.21 seconds to build instance.#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.096 2 DEBUG oslo_concurrency.processutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.103 2 DEBUG nova.compute.provider_tree [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.116 2 DEBUG oslo_concurrency.lockutils [None req-7367bff3-3911-4f5c-8386-ac24fb5e074c 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.134 2 DEBUG nova.scheduler.client.report [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.164 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.198 2 INFO nova.scheduler.client.report [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Deleted allocations for instance 501f8cba-892f-489d-81b5-abb8669f49eb#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.267 2 DEBUG oslo_concurrency.lockutils [None req-5a946de4-099b-4ef7-8288-1ad1b6c24806 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "501f8cba-892f-489d-81b5-abb8669f49eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:25 np0005465604 nova_compute[260603]: 2025-10-02 08:31:25.314 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:31:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:25Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:cc:f7 10.100.0.9
Oct  2 04:31:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:25Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:cc:f7 10.100.0.9
Oct  2 04:31:26 np0005465604 nova_compute[260603]: 2025-10-02 08:31:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 181 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 254 op/s
Oct  2 04:31:27 np0005465604 podman[324830]: 2025-10-02 08:31:27.046084594 +0000 UTC m=+0.096345150 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:31:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:31:27 np0005465604 nova_compute[260603]: 2025-10-02 08:31:27.387 2 DEBUG nova.compute.manager [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:27 np0005465604 nova_compute[260603]: 2025-10-02 08:31:27.388 2 DEBUG oslo_concurrency.lockutils [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:27 np0005465604 nova_compute[260603]: 2025-10-02 08:31:27.388 2 DEBUG oslo_concurrency.lockutils [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:27 np0005465604 nova_compute[260603]: 2025-10-02 08:31:27.388 2 DEBUG oslo_concurrency.lockutils [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7ac34b0c-8ced-417d-9442-8fda77804a34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:27 np0005465604 nova_compute[260603]: 2025-10-02 08:31:27.388 2 DEBUG nova.compute.manager [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] No waiting events found dispatching network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:27 np0005465604 nova_compute[260603]: 2025-10-02 08:31:27.389 2 WARNING nova.compute.manager [req-ae0a3333-c879-4178-8e19-dd081cad16c0 req-1998214d-8798-4376-b1a3-3829366a7a75 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7ac34b0c-8ced-417d-9442-8fda77804a34] Received unexpected event network-vif-plugged-2fbf8f14-9d1f-4042-9f0b-1abcc448ea97 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:31:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:27Z|00589|binding|INFO|Releasing lport d143de50-fc80-43b6-82e2-6651430a4a42 from this chassis (sb_readonly=0)
Oct  2 04:31:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:27Z|00590|binding|INFO|Releasing lport 1405e724-f2f6-4a95-8848-550131e62910 from this chassis (sb_readonly=0)
Oct  2 04:31:27 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:27Z|00591|binding|INFO|Releasing lport 8d9038d5-8bd6-460b-aca0-b6f7422e177a from this chassis (sb_readonly=0)
Oct  2 04:31:27 np0005465604 nova_compute[260603]: 2025-10-02 08:31:27.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:27 np0005465604 nova_compute[260603]: 2025-10-02 08:31:27.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:31:27 np0005465604 nova_compute[260603]: 2025-10-02 08:31:27.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:31:27
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'images']
Oct  2 04:31:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:31:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:28Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:e9:fe 10.100.0.10
Oct  2 04:31:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:28Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:e9:fe 10.100.0.10
Oct  2 04:31:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 236 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 6.0 MiB/s wr, 434 op/s
Oct  2 04:31:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 236 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 214 op/s
Oct  2 04:31:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.138 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "24339ad4-fec2-43f8-8da3-5e433206a1cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.138 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.156 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.246 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.247 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.252 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.253 2 INFO nova.compute.claims [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.412 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:31:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846241726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.892 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.899 2 DEBUG nova.compute.provider_tree [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.923 2 DEBUG nova.scheduler.client.report [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.947 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:32 np0005465604 nova_compute[260603]: 2025-10-02 08:31:32.948 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.004 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.004 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:31:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 244 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 234 op/s
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.035 2 INFO nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.091 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.209 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.213 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.214 2 INFO nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Creating image(s)#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.252 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.291 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.329 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.334 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.436 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.438 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.440 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.440 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.472 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.476 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.704 2 DEBUG nova.policy [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33ee6781337742479d7b4b078ad6a221', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6678937d40d4004ad15e1e9eef6f9c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.713 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.763 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] resizing rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.850 2 DEBUG nova.objects.instance [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 24339ad4-fec2-43f8-8da3-5e433206a1cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.868 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.868 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Ensure instance console log exists: /var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.868 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.869 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:33 np0005465604 nova_compute[260603]: 2025-10-02 08:31:33.869 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:34 np0005465604 nova_compute[260603]: 2025-10-02 08:31:34.512 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:31:34 np0005465604 nova_compute[260603]: 2025-10-02 08:31:34.552 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Successfully created port: a246f438-6334-440d-931b-f177dc6cadd6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:31:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:34.817 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:34.818 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:34.819 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 246 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Oct  2 04:31:35 np0005465604 nova_compute[260603]: 2025-10-02 08:31:35.866 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Successfully updated port: a246f438-6334-440d-931b-f177dc6cadd6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:31:35 np0005465604 nova_compute[260603]: 2025-10-02 08:31:35.898 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "refresh_cache-24339ad4-fec2-43f8-8da3-5e433206a1cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:31:35 np0005465604 nova_compute[260603]: 2025-10-02 08:31:35.899 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquired lock "refresh_cache-24339ad4-fec2-43f8-8da3-5e433206a1cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:35 np0005465604 nova_compute[260603]: 2025-10-02 08:31:35.900 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:31:36 np0005465604 nova_compute[260603]: 2025-10-02 08:31:36.111 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:31:36 np0005465604 nova_compute[260603]: 2025-10-02 08:31:36.320 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759393881.3189375, 501f8cba-892f-489d-81b5-abb8669f49eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:31:36 np0005465604 nova_compute[260603]: 2025-10-02 08:31:36.321 2 INFO nova.compute.manager [-] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:31:36 np0005465604 nova_compute[260603]: 2025-10-02 08:31:36.345 2 DEBUG nova.compute.manager [None req-e3181f8c-8480-464a-8e20-1e364f78e91f - - - - - -] [instance: 501f8cba-892f-489d-81b5-abb8669f49eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:36 np0005465604 nova_compute[260603]: 2025-10-02 08:31:36.811 2 DEBUG nova.compute.manager [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Received event network-changed-a246f438-6334-440d-931b-f177dc6cadd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:36 np0005465604 nova_compute[260603]: 2025-10-02 08:31:36.811 2 DEBUG nova.compute.manager [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Refreshing instance network info cache due to event network-changed-a246f438-6334-440d-931b-f177dc6cadd6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:31:36 np0005465604 nova_compute[260603]: 2025-10-02 08:31:36.812 2 DEBUG oslo_concurrency.lockutils [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-24339ad4-fec2-43f8-8da3-5e433206a1cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:31:36 np0005465604 kernel: tap9835ad6c-8e (unregistering): left promiscuous mode
Oct  2 04:31:36 np0005465604 NetworkManager[45129]: <info>  [1759393896.8362] device (tap9835ad6c-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:31:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:36Z|00592|binding|INFO|Releasing lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 from this chassis (sb_readonly=0)
Oct  2 04:31:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:36Z|00593|binding|INFO|Setting lport 9835ad6c-8ea8-4a79-8f07-042186ea7c71 down in Southbound
Oct  2 04:31:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:36Z|00594|binding|INFO|Removing iface tap9835ad6c-8e ovn-installed in OVS
Oct  2 04:31:36 np0005465604 nova_compute[260603]: 2025-10-02 08:31:36.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:36.856 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:e9:fe 10.100.0.10'], port_security=['fa:16:3e:5b:e9:fe 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bce7493292bb47cfb7168bca89f78f4a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '70456544-9d56-4c7b-b40d-eb25e5a572db', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7313fcfc-7f82-4668-88be-657e0435d03f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9835ad6c-8ea8-4a79-8f07-042186ea7c71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:31:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:36.857 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9835ad6c-8ea8-4a79-8f07-042186ea7c71 in datapath f8df0af1-1767-419a-8500-c28fbf45ae4b unbound from our chassis#033[00m
Oct  2 04:31:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:36.858 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8df0af1-1767-419a-8500-c28fbf45ae4b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:31:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:36.860 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b132e394-c7ca-416f-97a7-bf7a1976a5b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:36.861 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b namespace which is not needed anymore#033[00m
Oct  2 04:31:36 np0005465604 nova_compute[260603]: 2025-10-02 08:31:36.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:36 np0005465604 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Oct  2 04:31:36 np0005465604 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Consumed 13.036s CPU time.
Oct  2 04:31:36 np0005465604 systemd-machined[214636]: Machine qemu-70-instance-0000003f terminated.
Oct  2 04:31:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:36Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:fa:94 10.100.0.3
Oct  2 04:31:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:31:36Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:fa:94 10.100.0.3
Oct  2 04:31:36 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [NOTICE]   (324199) : haproxy version is 2.8.14-c23fe91
Oct  2 04:31:36 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [NOTICE]   (324199) : path to executable is /usr/sbin/haproxy
Oct  2 04:31:36 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [WARNING]  (324199) : Exiting Master process...
Oct  2 04:31:36 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [ALERT]    (324199) : Current worker (324201) exited with code 143 (Terminated)
Oct  2 04:31:36 np0005465604 neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b[324177]: [WARNING]  (324199) : All workers exited. Exiting... (0)
Oct  2 04:31:36 np0005465604 systemd[1]: libpod-f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0.scope: Deactivated successfully.
Oct  2 04:31:37 np0005465604 podman[325060]: 2025-10-02 08:31:37.004431724 +0000 UTC m=+0.047208227 container died f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.015 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "49564059-b2ef-4053-bedd-56a9afb53d2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.016 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "49564059-b2ef-4053-bedd-56a9afb53d2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 246 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 200 op/s
Oct  2 04:31:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0-userdata-shm.mount: Deactivated successfully.
Oct  2 04:31:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b436514243685187aa6cdcdd36d5f1493da17b7b370783efb1b916d0e9b66972-merged.mount: Deactivated successfully.
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.038 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:31:37 np0005465604 podman[325060]: 2025-10-02 08:31:37.048933725 +0000 UTC m=+0.091710228 container cleanup f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 04:31:37 np0005465604 systemd[1]: libpod-conmon-f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0.scope: Deactivated successfully.
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.116 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.117 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.123 2 DEBUG nova.virt.hardware [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.124 2 INFO nova.compute.claims [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:31:37 np0005465604 podman[325091]: 2025-10-02 08:31:37.134170918 +0000 UTC m=+0.057648139 container remove f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:31:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.141 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f46476-c3d6-4a9b-8fc3-5db64f4828ed]: (4, ('Thu Oct  2 08:31:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0)\nf67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0\nThu Oct  2 08:31:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b (f67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0)\nf67ec7161d22ee8e267c005849e4cbc8b564956fd601583967e0e853ed060ca0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.143 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42d1409d-f2ea-44ef-a6d0-285ccbd4ac45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.144 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8df0af1-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:37 np0005465604 kernel: tapf8df0af1-10: left promiscuous mode
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.169 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea363ee-cefa-44d8-b663-5665f90bba8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.193 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2460229f-fea2-4034-8e8e-a3d6ff8674fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.195 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[34301e52-6a75-443f-adad-e0aded512e51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.218 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[74cf0b9c-afb7-43f5-be8e-7ba707e14d60]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475363, 'reachable_time': 34562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325116, 'error': None, 'target': 'ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:37 np0005465604 systemd[1]: run-netns-ovnmeta\x2df8df0af1\x2d1767\x2d419a\x2d8500\x2dc28fbf45ae4b.mount: Deactivated successfully.
Oct  2 04:31:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.222 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8df0af1-1767-419a-8500-c28fbf45ae4b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:31:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:31:37.222 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ecf796-322e-4e51-8926-89755515adb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.247 2 DEBUG nova.network.neutron [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Updating instance_info_cache with network_info: [{"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.263 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Releasing lock "refresh_cache-24339ad4-fec2-43f8-8da3-5e433206a1cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.263 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Instance network_info: |[{"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.264 2 DEBUG oslo_concurrency.lockutils [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-24339ad4-fec2-43f8-8da3-5e433206a1cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.264 2 DEBUG nova.network.neutron [req-ae3e1a43-f67d-420e-a248-88968561e338 req-5765bccc-9e99-4358-bf17-71556065e49d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Refreshing network info cache for port a246f438-6334-440d-931b-f177dc6cadd6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.266 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Start _get_guest_xml network_info=[{"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.272 2 WARNING nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.278 2 DEBUG nova.virt.libvirt.host [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.279 2 DEBUG nova.virt.libvirt.host [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.284 2 DEBUG nova.virt.libvirt.host [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.285 2 DEBUG nova.virt.libvirt.host [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.285 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.285 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.286 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.286 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.286 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.286 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.286 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.287 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.287 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.287 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.288 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.288 2 DEBUG nova.virt.hardware [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.290 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.316 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.372 2 DEBUG nova.compute.manager [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.425 2 DEBUG nova.compute.manager [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-unplugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.425 2 DEBUG oslo_concurrency.lockutils [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.425 2 DEBUG oslo_concurrency.lockutils [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.426 2 DEBUG oslo_concurrency.lockutils [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.426 2 DEBUG nova.compute.manager [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] No waiting events found dispatching network-vif-unplugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.426 2 WARNING nova.compute.manager [req-a1a1a55a-638f-4ea7-ae07-d5a42a95f97f req-c335f758-98c2-42ae-9660-6f916cd9cdeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received unexpected event network-vif-unplugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 for instance with vm_state active and task_state rebuilding.#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.428 2 INFO nova.compute.manager [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] instance snapshotting#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.429 2 DEBUG nova.objects.instance [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] Lazy-loading 'flavor' on Instance uuid 49e7e668-b62c-4e35-a4e2-bba540000961 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.529 2 INFO nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.536 2 INFO nova.virt.libvirt.driver [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance destroyed successfully.#033[00m
Oct  2 04:31:37 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.573 2 INFO nova.virt.libvirt.driver [-] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Instance destroyed successfully.#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.574 2 DEBUG nova.virt.libvirt.vif [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:31:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-361192799',display_name='tempest-ServerDiskConfigTestJSON-server-361192799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-361192799',id=63,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-bmy2yvfw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member
'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:23Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.575 2 DEBUG nova.network.os_vif_util [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.576 2 DEBUG nova.network.os_vif_util [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.576 2 DEBUG os_vif [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.578 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9835ad6c-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.585 2 INFO os_vif [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e')#033[00m
Oct  2 04:31:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2435144527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.739 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.766 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.770 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2753295579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.804 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.808 2 INFO nova.virt.libvirt.driver [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] [instance: 49e7e668-b62c-4e35-a4e2-bba540000961] Beginning live snapshot process#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.814 2 DEBUG nova.compute.provider_tree [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.855 2 DEBUG nova.scheduler.client.report [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.883 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.883 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.945 2 INFO nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Deleting instance files /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_del#033[00m
Oct  2 04:31:37 np0005465604 nova_compute[260603]: 2025-10-02 08:31:37.946 2 INFO nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Deletion of /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_del complete#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.002 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.002 2 DEBUG nova.network.neutron [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.012 2 DEBUG nova.virt.libvirt.imagebackend [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.049 2 INFO nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.078 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.163 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.164 2 INFO nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Creating image(s)#033[00m
Oct  2 04:31:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3293481748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.193 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.218 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.241 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.245 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.274 2 DEBUG nova.compute.manager [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.275 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.276 2 INFO nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Creating image(s)#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.295 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.318 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.350 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.357 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.397 2 DEBUG nova.storage.rbd_utils [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(823eb30c50ed45c796f53778c84fb7ca) on rbd image(49e7e668-b62c-4e35-a4e2-bba540000961_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.449 2 DEBUG nova.policy [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'db9a3b1e6d93495f8c849658ffc4e535', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '62c4ff42369740eebbf14969f4d8d2e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.458 2 DEBUG oslo_concurrency.processutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.462 2 DEBUG nova.virt.libvirt.vif [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-26328652',display_name='tempest-ServersTestJSON-server-26328652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-26328652',id=65,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-j0hil4jt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:33Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=24339ad4-fec2-43f8-8da3-5e433206a1cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.463 2 DEBUG nova.network.os_vif_util [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.465 2 DEBUG nova.network.os_vif_util [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:eb:75,bridge_name='br-int',has_traffic_filtering=True,id=a246f438-6334-440d-931b-f177dc6cadd6,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa246f438-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.467 2 DEBUG nova.objects.instance [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 24339ad4-fec2-43f8-8da3-5e433206a1cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.469 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.470 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.472 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.473 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.474 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.500 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.504 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.542 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.544 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.545 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018637800555121444 of space, bias 1.0, pg target 0.5591340166536433 quantized to 32 (current 32)
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:31:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.572 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.576 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49564059-b2ef-4053-bedd-56a9afb53d2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.631 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  <uuid>24339ad4-fec2-43f8-8da3-5e433206a1cc</uuid>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  <name>instance-00000041</name>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersTestJSON-server-26328652</nova:name>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:31:37</nova:creationTime>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <nova:user uuid="33ee6781337742479d7b4b078ad6a221">tempest-ServersTestJSON-520437589-project-member</nova:user>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <nova:project uuid="f6678937d40d4004ad15e1e9eef6f9c7">tempest-ServersTestJSON-520437589</nova:project>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <nova:port uuid="a246f438-6334-440d-931b-f177dc6cadd6">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <entry name="serial">24339ad4-fec2-43f8-8da3-5e433206a1cc</entry>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <entry name="uuid">24339ad4-fec2-43f8-8da3-5e433206a1cc</entry>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/24339ad4-fec2-43f8-8da3-5e433206a1cc_disk">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/24339ad4-fec2-43f8-8da3-5e433206a1cc_disk.config">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:16:eb:75"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <target dev="tapa246f438-63"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/24339ad4-fec2-43f8-8da3-5e433206a1cc/console.log" append="off"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:31:38 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:31:38 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:31:38 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:31:38 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.632 2 DEBUG nova.compute.manager [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Preparing to wait for external event network-vif-plugged-a246f438-6334-440d-931b-f177dc6cadd6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.632 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Acquiring lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.633 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.633 2 DEBUG oslo_concurrency.lockutils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Lock "24339ad4-fec2-43f8-8da3-5e433206a1cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.634 2 DEBUG nova.virt.libvirt.vif [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:31:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-26328652',display_name='tempest-ServersTestJSON-server-26328652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-26328652',id=65,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6678937d40d4004ad15e1e9eef6f9c7',ramdisk_id='',reservation_id='r-j0hil4jt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-520437589',owner_user_name='tempest-ServersTestJSON-520437589-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:33Z,user_data=None,user_id='33ee6781337742479d7b4b078ad6a221',uuid=24339ad4-fec2-43f8-8da3-5e433206a1cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.635 2 DEBUG nova.network.os_vif_util [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converting VIF {"id": "a246f438-6334-440d-931b-f177dc6cadd6", "address": "fa:16:3e:16:eb:75", "network": {"id": "1e3507cf-e1b2-456e-8cff-b075c2a55621", "bridge": "br-int", "label": "tempest-ServersTestJSON-1591370672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6678937d40d4004ad15e1e9eef6f9c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa246f438-63", "ovs_interfaceid": "a246f438-6334-440d-931b-f177dc6cadd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.636 2 DEBUG nova.network.os_vif_util [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:eb:75,bridge_name='br-int',has_traffic_filtering=True,id=a246f438-6334-440d-931b-f177dc6cadd6,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa246f438-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.636 2 DEBUG os_vif [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:eb:75,bridge_name='br-int',has_traffic_filtering=True,id=a246f438-6334-440d-931b-f177dc6cadd6,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa246f438-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa246f438-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa246f438-63, col_values=(('external_ids', {'iface-id': 'a246f438-6334-440d-931b-f177dc6cadd6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:eb:75', 'vm-uuid': '24339ad4-fec2-43f8-8da3-5e433206a1cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:38 np0005465604 NetworkManager[45129]: <info>  [1759393898.6486] manager: (tapa246f438-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.661 2 INFO os_vif [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:eb:75,bridge_name='br-int',has_traffic_filtering=True,id=a246f438-6334-440d-931b-f177dc6cadd6,network=Network(1e3507cf-e1b2-456e-8cff-b075c2a55621),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa246f438-63')#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.729 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.729 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.729 2 DEBUG nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] No VIF found with MAC fa:16:3e:16:eb:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.730 2 INFO nova.virt.libvirt.driver [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] [instance: 24339ad4-fec2-43f8-8da3-5e433206a1cc] Using config drive#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.754 2 DEBUG nova.storage.rbd_utils [None req-1ec8c5a2-0d72-4870-b294-3c1d0cae21ab 33ee6781337742479d7b4b078ad6a221 f6678937d40d4004ad15e1e9eef6f9c7 - - default default] rbd image 24339ad4-fec2-43f8-8da3-5e433206a1cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.816 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.901 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] resizing rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:31:38 np0005465604 nova_compute[260603]: 2025-10-02 08:31:38.935 2 DEBUG oslo_concurrency.processutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49564059-b2ef-4053-bedd-56a9afb53d2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.016 2 DEBUG nova.storage.rbd_utils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] resizing rbd image 49564059-b2ef-4053-bedd-56a9afb53d2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:31:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 301 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.2 MiB/s wr, 313 op/s
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.047 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.047 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Ensure instance console log exists: /var/lib/nova/instances/07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.048 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.048 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.048 2 DEBUG oslo_concurrency.lockutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.050 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Start _get_guest_xml network_info=[{"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.053 2 WARNING nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.065 2 DEBUG nova.virt.libvirt.host [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.066 2 DEBUG nova.virt.libvirt.host [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.068 2 DEBUG nova.virt.libvirt.host [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.069 2 DEBUG nova.virt.libvirt.host [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.069 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.069 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.069 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.070 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.071 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.071 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.071 2 DEBUG nova.virt.hardware [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.071 2 DEBUG nova.objects.instance [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct  2 04:31:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct  2 04:31:39 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.111 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.148 2 DEBUG nova.objects.instance [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 49564059-b2ef-4053-bedd-56a9afb53d2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.167 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.168 2 DEBUG nova.virt.libvirt.driver [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] [instance: 49564059-b2ef-4053-bedd-56a9afb53d2c] Ensure instance console log exists: /var/lib/nova/instances/49564059-b2ef-4053-bedd-56a9afb53d2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.168 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.169 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.169 2 DEBUG oslo_concurrency.lockutils [None req-e240c72b-84d8-44cc-934e-6fa3d7938ba5 db9a3b1e6d93495f8c849658ffc4e535 62c4ff42369740eebbf14969f4d8d2e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.176 2 DEBUG nova.storage.rbd_utils [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] cloning vms/49e7e668-b62c-4e35-a4e2-bba540000961_disk@823eb30c50ed45c796f53778c84fb7ca to images/14d99014-bbf1-4f02-9b3b-cd6971c554c9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.279 2 DEBUG nova.storage.rbd_utils [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] flattening images/14d99014-bbf1-4f02-9b3b-cd6971c554c9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:31:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/926281068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.622 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.653 2 DEBUG nova.storage.rbd_utils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] rbd image 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.658 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.736 2 DEBUG nova.storage.rbd_utils [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] removing snapshot(823eb30c50ed45c796f53778c84fb7ca) on rbd image(49e7e668-b62c-4e35-a4e2-bba540000961_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.902 2 DEBUG nova.compute.manager [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.904 2 DEBUG oslo_concurrency.lockutils [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.904 2 DEBUG oslo_concurrency.lockutils [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.905 2 DEBUG oslo_concurrency.lockutils [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.905 2 DEBUG nova.compute.manager [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] No waiting events found dispatching network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:31:39 np0005465604 nova_compute[260603]: 2025-10-02 08:31:39.906 2 WARNING nova.compute.manager [req-590f767a-f101-4a38-9fc2-0c3cac2c6582 req-79c50fba-87e6-4226-89e0-6a165eb424e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] Received unexpected event network-vif-plugged-9835ad6c-8ea8-4a79-8f07-042186ea7c71 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 04:31:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct  2 04:31:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct  2 04:31:40 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct  2 04:31:40 np0005465604 nova_compute[260603]: 2025-10-02 08:31:40.141 2 DEBUG nova.storage.rbd_utils [None req-9c55e2da-d6f0-450b-bd72-5c910d370467 9020ed38b31d46f88625374b2a76aef6 eda0caa41e4740148ab99d5ebf9e27ba - - default default] creating snapshot(snap) on rbd image(14d99014-bbf1-4f02-9b3b-cd6971c554c9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:31:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:31:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2092238596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:31:40 np0005465604 nova_compute[260603]: 2025-10-02 08:31:40.193 2 DEBUG oslo_concurrency.processutils [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:31:40 np0005465604 nova_compute[260603]: 2025-10-02 08:31:40.195 2 DEBUG nova.virt.libvirt.vif [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:31:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-361192799',display_name='tempest-ServerDiskConfigTestJSON-server-361192799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-361192799',id=63,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:31:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bce7493292bb47cfb7168bca89f78f4a',ramdisk_id='',reservation_id='r-bmy2yvfw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1277806880',owner_user_name='tempest-ServerDiskConfigTestJSON-1277806880-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:31:38Z,user_data=None,user_id='116b114f14f84e4cbd6cc966e29d82e7',uuid=07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:31:40 np0005465604 nova_compute[260603]: 2025-10-02 08:31:40.196 2 DEBUG nova.network.os_vif_util [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converting VIF {"id": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "address": "fa:16:3e:5b:e9:fe", "network": {"id": "f8df0af1-1767-419a-8500-c28fbf45ae4b", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-638520864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bce7493292bb47cfb7168bca89f78f4a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9835ad6c-8e", "ovs_interfaceid": "9835ad6c-8ea8-4a79-8f07-042186ea7c71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:31:40 np0005465604 nova_compute[260603]: 2025-10-02 08:31:40.197 2 DEBUG nova.network.os_vif_util [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:e9:fe,bridge_name='br-int',has_traffic_filtering=True,id=9835ad6c-8ea8-4a79-8f07-042186ea7c71,network=Network(f8df0af1-1767-419a-8500-c28fbf45ae4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9835ad6c-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:31:40 np0005465604 nova_compute[260603]: 2025-10-02 08:31:40.201 2 DEBUG nova.virt.libvirt.driver [None req-42906384-c521-47bd-b8d7-bdaed36c0a4d 116b114f14f84e4cbd6cc966e29d82e7 bce7493292bb47cfb7168bca89f78f4a - - default default] [instance: 07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:31:40 np0005465604 nova_compute[260603]:  <uuid>07f0d1c8-d0a7-4a2b-a2cb-a019e95a9aaf</uuid>
Oct  2 04:31:40 np0005465604 nova_compute[260603]:  <name>instance-0000003f</name>
Oct  2 04:31:40 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:31:40 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:31:40 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:31:40 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:31:40 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:31:40 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-361192799</nova:name>
Oct  2 04:31:40 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:31:39</nova:creationTime>
Oct  2 04:36:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:36:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398107989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:36:28 np0005465604 nova_compute[260603]: 2025-10-02 08:36:28.990 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:28 np0005465604 nova_compute[260603]: 2025-10-02 08:36:28.997 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:36:29 np0005465604 nova_compute[260603]: 2025-10-02 08:36:29.017 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:36:29 np0005465604 nova_compute[260603]: 2025-10-02 08:36:29.042 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:36:29 np0005465604 nova_compute[260603]: 2025-10-02 08:36:29.043 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 682 B/s wr, 96 op/s
Oct  2 04:36:29 np0005465604 rsyslogd[1004]: imjournal: 16821 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  2 04:36:29 np0005465604 ovn_controller[152344]: 2025-10-02T08:36:29Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:af:04 10.100.0.8
Oct  2 04:36:30 np0005465604 podman[347661]: 2025-10-02 08:36:30.006469707 +0000 UTC m=+0.066323455 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct  2 04:36:30 np0005465604 nova_compute[260603]: 2025-10-02 08:36:30.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:31 np0005465604 nova_compute[260603]: 2025-10-02 08:36:31.043 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:36:31 np0005465604 nova_compute[260603]: 2025-10-02 08:36:31.044 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:36:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 123 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct  2 04:36:31 np0005465604 nova_compute[260603]: 2025-10-02 08:36:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:36:32 np0005465604 nova_compute[260603]: 2025-10-02 08:36:32.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:36:32 np0005465604 nova_compute[260603]: 2025-10-02 08:36:32.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:36:33 np0005465604 podman[347683]: 2025-10-02 08:36:33.038725674 +0000 UTC m=+0.102635936 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:36:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 123 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 9.7 KiB/s wr, 109 op/s
Oct  2 04:36:33 np0005465604 nova_compute[260603]: 2025-10-02 08:36:33.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:34 np0005465604 nova_compute[260603]: 2025-10-02 08:36:34.679 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:34 np0005465604 nova_compute[260603]: 2025-10-02 08:36:34.680 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:34 np0005465604 nova_compute[260603]: 2025-10-02 08:36:34.723 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:36:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:34.821 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:34 np0005465604 nova_compute[260603]: 2025-10-02 08:36:34.859 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:34 np0005465604 nova_compute[260603]: 2025-10-02 08:36:34.860 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:34 np0005465604 nova_compute[260603]: 2025-10-02 08:36:34.868 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:36:34 np0005465604 nova_compute[260603]: 2025-10-02 08:36:34.869 2 INFO nova.compute.claims [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.025 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:36:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 123 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 9.7 KiB/s wr, 43 op/s
Oct  2 04:36:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:36:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2537084274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.495 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.504 2 DEBUG nova.compute.provider_tree [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.530 2 DEBUG nova.scheduler.client.report [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.562 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.564 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.846 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.847 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.872 2 INFO nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.890 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.977 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.979 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:36:35 np0005465604 nova_compute[260603]: 2025-10-02 08:36:35.980 2 INFO nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Creating image(s)#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.020 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.060 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.114 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.121 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.192 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.193 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.194 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.194 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.220 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.225 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.474 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.525 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] resizing rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.621 2 DEBUG nova.policy [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cbbd2bff5ed749af8443f40670db21e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0fb52febddf74ed0a2a55eb14b67cd8f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.633 2 DEBUG nova.objects.instance [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lazy-loading 'migration_context' on Instance uuid 92c7bc30-8758-43e7-804c-9e0ef04a4a77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.661 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.661 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Ensure instance console log exists: /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.662 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.663 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:36 np0005465604 nova_compute[260603]: 2025-10-02 08:36:36.664 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:37 np0005465604 nova_compute[260603]: 2025-10-02 08:36:37.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 137 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 486 KiB/s wr, 57 op/s
Oct  2 04:36:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:36:37 np0005465604 nova_compute[260603]: 2025-10-02 08:36:37.448 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Successfully created port: bac9c5c0-ed2b-471f-b459-79324411faf4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:36:38 np0005465604 nova_compute[260603]: 2025-10-02 08:36:38.287 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Successfully updated port: bac9c5c0-ed2b-471f-b459-79324411faf4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:36:38 np0005465604 nova_compute[260603]: 2025-10-02 08:36:38.311 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:36:38 np0005465604 nova_compute[260603]: 2025-10-02 08:36:38.311 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquired lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:36:38 np0005465604 nova_compute[260603]: 2025-10-02 08:36:38.312 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:36:38 np0005465604 nova_compute[260603]: 2025-10-02 08:36:38.512 2 DEBUG nova.compute.manager [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-changed-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:36:38 np0005465604 nova_compute[260603]: 2025-10-02 08:36:38.513 2 DEBUG nova.compute.manager [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Refreshing instance network info cache due to event network-changed-bac9c5c0-ed2b-471f-b459-79324411faf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:36:38 np0005465604 nova_compute[260603]: 2025-10-02 08:36:38.514 2 DEBUG oslo_concurrency.lockutils [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:36:38 np0005465604 nova_compute[260603]: 2025-10-02 08:36:38.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008500169914371296 of space, bias 1.0, pg target 0.2550050974311389 quantized to 32 (current 32)
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:36:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:36:38 np0005465604 nova_compute[260603]: 2025-10-02 08:36:38.855 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:36:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.468 2 DEBUG nova.network.neutron [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Updating instance_info_cache with network_info: [{"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.496 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Releasing lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.497 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Instance network_info: |[{"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.497 2 DEBUG oslo_concurrency.lockutils [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.498 2 DEBUG nova.network.neutron [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Refreshing network info cache for port bac9c5c0-ed2b-471f-b459-79324411faf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.503 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Start _get_guest_xml network_info=[{"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.509 2 WARNING nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.514 2 DEBUG nova.virt.libvirt.host [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.515 2 DEBUG nova.virt.libvirt.host [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.519 2 DEBUG nova.virt.libvirt.host [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.520 2 DEBUG nova.virt.libvirt.host [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.522 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.522 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.523 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.524 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.524 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.525 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.525 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.526 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.527 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.527 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.528 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.528 2 DEBUG nova.virt.hardware [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:36:40 np0005465604 nova_compute[260603]: 2025-10-02 08:36:40.534 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:36:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:36:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4158205129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.012 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.034 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.038 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:36:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct  2 04:36:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:36:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901209340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.470 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.473 2 DEBUG nova.virt.libvirt.vif [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:36:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-527322298',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-527322298',id=90,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0fb52febddf74ed0a2a55eb14b67cd8f',ramdisk_id='',reservation_id='r-epjg0x69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1766965232',owner_user_name='tempest-ServerTagsTestJSON-1766965232-project-member'},tags=TagList,task_state='spawning
',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:36:35Z,user_data=None,user_id='cbbd2bff5ed749af8443f40670db21e1',uuid=92c7bc30-8758-43e7-804c-9e0ef04a4a77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.474 2 DEBUG nova.network.os_vif_util [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converting VIF {"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.476 2 DEBUG nova.network.os_vif_util [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.478 2 DEBUG nova.objects.instance [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lazy-loading 'pci_devices' on Instance uuid 92c7bc30-8758-43e7-804c-9e0ef04a4a77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:36:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:36:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 7895 writes, 35K keys, 7895 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 7895 writes, 7895 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1573 writes, 7355 keys, 1573 commit groups, 1.0 writes per commit group, ingest: 9.40 MB, 0.02 MB/s#012Interval WAL: 1573 writes, 1573 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    145.6      0.29              0.16        21    0.014       0      0       0.0       0.0#012  L6      1/0    9.30 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.6    201.9    165.8      0.93              0.52        20    0.046    101K    11K       0.0       0.0#012 Sum      1/0    9.30 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.6    153.3    161.0      1.22              0.68        41    0.030    101K    11K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   5.7    176.1    179.7      0.29              0.17        10    0.029     31K   3133       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    201.9    165.8      0.93              0.52        20    0.046    101K    11K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    148.1      0.29              0.16        20    0.014       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.042, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.19 GB write, 0.07 MB/s write, 0.18 GB read, 0.06 MB/s read, 1.2 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 22.05 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000222 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1439,21.26 MB,6.99214%) FilterBlock(42,291.92 KB,0.0937763%) IndexBlock(42,522.70 KB,0.167912%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.509 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  <uuid>92c7bc30-8758-43e7-804c-9e0ef04a4a77</uuid>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  <name>instance-0000005a</name>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerTagsTestJSON-server-527322298</nova:name>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:36:40</nova:creationTime>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <nova:user uuid="cbbd2bff5ed749af8443f40670db21e1">tempest-ServerTagsTestJSON-1766965232-project-member</nova:user>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <nova:project uuid="0fb52febddf74ed0a2a55eb14b67cd8f">tempest-ServerTagsTestJSON-1766965232</nova:project>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <nova:port uuid="bac9c5c0-ed2b-471f-b459-79324411faf4">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <entry name="serial">92c7bc30-8758-43e7-804c-9e0ef04a4a77</entry>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <entry name="uuid">92c7bc30-8758-43e7-804c-9e0ef04a4a77</entry>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:2a:8c:25"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <target dev="tapbac9c5c0-ed"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/console.log" append="off"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:36:41 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:36:41 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:36:41 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:36:41 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.510 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Preparing to wait for external event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.511 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.511 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.512 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.513 2 DEBUG nova.virt.libvirt.vif [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:36:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-527322298',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-527322298',id=90,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0fb52febddf74ed0a2a55eb14b67cd8f',ramdisk_id='',reservation_id='r-epjg0x69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1766965232',owner_user_name='tempest-ServerTagsTestJSON-1766965232-project-member'},tags=TagList,task_state
='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:36:35Z,user_data=None,user_id='cbbd2bff5ed749af8443f40670db21e1',uuid=92c7bc30-8758-43e7-804c-9e0ef04a4a77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.514 2 DEBUG nova.network.os_vif_util [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converting VIF {"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.515 2 DEBUG nova.network.os_vif_util [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.516 2 DEBUG os_vif [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbac9c5c0-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.525 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbac9c5c0-ed, col_values=(('external_ids', {'iface-id': 'bac9c5c0-ed2b-471f-b459-79324411faf4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:8c:25', 'vm-uuid': '92c7bc30-8758-43e7-804c-9e0ef04a4a77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:41 np0005465604 NetworkManager[45129]: <info>  [1759394201.5283] manager: (tapbac9c5c0-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.541 2 INFO os_vif [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed')#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.608 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.609 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.609 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] No VIF found with MAC fa:16:3e:2a:8c:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.610 2 INFO nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Using config drive#033[00m
Oct  2 04:36:41 np0005465604 nova_compute[260603]: 2025-10-02 08:36:41.645 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:36:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:36:42 np0005465604 nova_compute[260603]: 2025-10-02 08:36:42.719 2 INFO nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Creating config drive at /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config#033[00m
Oct  2 04:36:42 np0005465604 nova_compute[260603]: 2025-10-02 08:36:42.726 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6knqjm7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:36:42 np0005465604 nova_compute[260603]: 2025-10-02 08:36:42.894 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk6knqjm7" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:42 np0005465604 nova_compute[260603]: 2025-10-02 08:36:42.922 2 DEBUG nova.storage.rbd_utils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] rbd image 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:36:42 np0005465604 nova_compute[260603]: 2025-10-02 08:36:42.926 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.129 2 DEBUG oslo_concurrency.processutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config 92c7bc30-8758-43e7-804c-9e0ef04a4a77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.131 2 INFO nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Deleting local config drive /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77/disk.config because it was imported into RBD.#033[00m
Oct  2 04:36:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct  2 04:36:43 np0005465604 kernel: tapbac9c5c0-ed: entered promiscuous mode
Oct  2 04:36:43 np0005465604 NetworkManager[45129]: <info>  [1759394203.2059] manager: (tapbac9c5c0-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:36:43Z|00876|binding|INFO|Claiming lport bac9c5c0-ed2b-471f-b459-79324411faf4 for this chassis.
Oct  2 04:36:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:36:43Z|00877|binding|INFO|bac9c5c0-ed2b-471f-b459-79324411faf4: Claiming fa:16:3e:2a:8c:25 10.100.0.7
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.217 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:8c:25 10.100.0.7'], port_security=['fa:16:3e:2a:8c:25 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '92c7bc30-8758-43e7-804c-9e0ef04a4a77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fb52febddf74ed0a2a55eb14b67cd8f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f2de6c7-a80e-44b4-89d4-b4ed8f0a0be3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0e85455-5dea-43e0-94fe-8e8ae5134610, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bac9c5c0-ed2b-471f-b459-79324411faf4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.220 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bac9c5c0-ed2b-471f-b459-79324411faf4 in datapath 7312fb31-cf2e-459a-94af-ddc1a56ed03f bound to our chassis#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.223 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7312fb31-cf2e-459a-94af-ddc1a56ed03f#033[00m
Oct  2 04:36:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:36:43Z|00878|binding|INFO|Setting lport bac9c5c0-ed2b-471f-b459-79324411faf4 up in Southbound
Oct  2 04:36:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:36:43Z|00879|binding|INFO|Setting lport bac9c5c0-ed2b-471f-b459-79324411faf4 ovn-installed in OVS
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.243 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[860363f6-69c0-4cca-97e1-e7a96b42a399]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.244 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7312fb31-c1 in ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.247 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7312fb31-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.247 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29cae41a-2c83-40dd-9361-b1acfe5989ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.248 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4a119b67-9c13-4c23-be75-1d2d91703081]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.262 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[475d6f72-16d5-4a86-8b51-3802686333f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 systemd-machined[214636]: New machine qemu-109-instance-0000005a.
Oct  2 04:36:43 np0005465604 systemd[1]: Started Virtual Machine qemu-109-instance-0000005a.
Oct  2 04:36:43 np0005465604 systemd-udevd[348030]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.296 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9f34ef4a-e641-47ec-bca7-debd3d0bf75d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 NetworkManager[45129]: <info>  [1759394203.3083] device (tapbac9c5c0-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:36:43 np0005465604 NetworkManager[45129]: <info>  [1759394203.3111] device (tapbac9c5c0-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.337 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a9b863-1541-4217-9961-c952cf9b0e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 NetworkManager[45129]: <info>  [1759394203.3479] manager: (tap7312fb31-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/355)
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.346 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6dae86-5348-4717-b013-c60951920944]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.390 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[377a79e0-64a6-42fc-a9ad-bcc99f554326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.392 2 DEBUG nova.network.neutron [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Updated VIF entry in instance network info cache for port bac9c5c0-ed2b-471f-b459-79324411faf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.393 2 DEBUG nova.network.neutron [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Updating instance_info_cache with network_info: [{"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.394 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[24fcb211-8b8d-4c55-808c-83846a068422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.421 2 DEBUG oslo_concurrency.lockutils [req-360a9e2b-2884-4d1c-98e6-b747ab7987dc req-dae0c1ac-0a76-4437-b3b2-74616e73d192 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-92c7bc30-8758-43e7-804c-9e0ef04a4a77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:36:43 np0005465604 NetworkManager[45129]: <info>  [1759394203.4233] device (tap7312fb31-c0): carrier: link connected
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.430 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ef715973-4371-4ee9-ae66-d4ddb7afa7ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.456 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b6f3df-da36-4afb-95e3-f15ae9bd0a4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7312fb31-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:f5:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508202, 'reachable_time': 17939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348060, 'error': None, 'target': 'ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.473 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df787fb8-f28d-40a0-8918-8ea9ec2bc1d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:f5ef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508202, 'tstamp': 508202}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348061, 'error': None, 'target': 'ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.500 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1b54c5-b6f8-4bcf-b469-da4459fd2cbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7312fb31-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:f5:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508202, 'reachable_time': 17939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348062, 'error': None, 'target': 'ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.542 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5b1794-e6cb-4d3a-ad66-df01d31c2f20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.621 2 DEBUG nova.compute.manager [req-dbfec227-5fd7-4d60-a202-5dcb57387995 req-a4260d72-b183-48e0-bd3b-bf5b7ed93357 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.622 2 DEBUG oslo_concurrency.lockutils [req-dbfec227-5fd7-4d60-a202-5dcb57387995 req-a4260d72-b183-48e0-bd3b-bf5b7ed93357 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.622 2 DEBUG oslo_concurrency.lockutils [req-dbfec227-5fd7-4d60-a202-5dcb57387995 req-a4260d72-b183-48e0-bd3b-bf5b7ed93357 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.622 2 DEBUG oslo_concurrency.lockutils [req-dbfec227-5fd7-4d60-a202-5dcb57387995 req-a4260d72-b183-48e0-bd3b-bf5b7ed93357 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.623 2 DEBUG nova.compute.manager [req-dbfec227-5fd7-4d60-a202-5dcb57387995 req-a4260d72-b183-48e0-bd3b-bf5b7ed93357 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Processing event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.643 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[55fb9a1b-644d-4d59-be6e-fad5f8abe05e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.646 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7312fb31-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.647 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.648 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7312fb31-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:43 np0005465604 kernel: tap7312fb31-c0: entered promiscuous mode
Oct  2 04:36:43 np0005465604 NetworkManager[45129]: <info>  [1759394203.6519] manager: (tap7312fb31-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.660 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7312fb31-c0, col_values=(('external_ids', {'iface-id': 'c49b84dc-2380-45f9-ba6d-c1552a7a185c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:36:43Z|00880|binding|INFO|Releasing lport c49b84dc-2380-45f9-ba6d-c1552a7a185c from this chassis (sb_readonly=0)
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.665 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7312fb31-cf2e-459a-94af-ddc1a56ed03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7312fb31-cf2e-459a-94af-ddc1a56ed03f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.666 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d856b31d-ca54-400c-bfe8-0cc0a6ff08c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.668 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-7312fb31-cf2e-459a-94af-ddc1a56ed03f
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/7312fb31-cf2e-459a-94af-ddc1a56ed03f.pid.haproxy
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 7312fb31-cf2e-459a-94af-ddc1a56ed03f
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:36:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:43.670 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'env', 'PROCESS_TAG=haproxy-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7312fb31-cf2e-459a-94af-ddc1a56ed03f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:36:43 np0005465604 nova_compute[260603]: 2025-10-02 08:36:43.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:44 np0005465604 podman[348137]: 2025-10-02 08:36:44.137903029 +0000 UTC m=+0.077365486 container create fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 04:36:44 np0005465604 systemd[1]: Started libpod-conmon-fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda.scope.
Oct  2 04:36:44 np0005465604 podman[348137]: 2025-10-02 08:36:44.093663459 +0000 UTC m=+0.033126006 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:36:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:36:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d296f85a914a5844763b98d0871ccd31024d21220b2eea41f5c25dcb94cf2e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:36:44 np0005465604 podman[348137]: 2025-10-02 08:36:44.229752269 +0000 UTC m=+0.169214756 container init fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:36:44 np0005465604 podman[348137]: 2025-10-02 08:36:44.235636426 +0000 UTC m=+0.175098873 container start fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 04:36:44 np0005465604 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [NOTICE]   (348156) : New worker (348158) forked
Oct  2 04:36:44 np0005465604 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [NOTICE]   (348156) : Loading success.
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.387 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.389 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394204.3862288, 92c7bc30-8758-43e7-804c-9e0ef04a4a77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.390 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] VM Started (Lifecycle Event)#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.397 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.404 2 INFO nova.virt.libvirt.driver [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Instance spawned successfully.#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.405 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.417 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.431 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.435 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.436 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.436 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.436 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.437 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.437 2 DEBUG nova.virt.libvirt.driver [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.465 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.466 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394204.3867424, 92c7bc30-8758-43e7-804c-9e0ef04a4a77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.466 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.501 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.506 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394204.3959439, 92c7bc30-8758-43e7-804c-9e0ef04a4a77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.506 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.516 2 INFO nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Took 8.54 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.516 2 DEBUG nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.525 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.529 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.552 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.578 2 INFO nova.compute.manager [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Took 9.76 seconds to build instance.#033[00m
Oct  2 04:36:44 np0005465604 nova_compute[260603]: 2025-10-02 08:36:44.597 2 DEBUG oslo_concurrency.lockutils [None req-7f34dcb7-e1d6-4461-a2fe-3a5360fbb755 cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  2 04:36:45 np0005465604 nova_compute[260603]: 2025-10-02 08:36:45.774 2 DEBUG nova.compute.manager [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:36:45 np0005465604 nova_compute[260603]: 2025-10-02 08:36:45.774 2 DEBUG oslo_concurrency.lockutils [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:45 np0005465604 nova_compute[260603]: 2025-10-02 08:36:45.774 2 DEBUG oslo_concurrency.lockutils [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:45 np0005465604 nova_compute[260603]: 2025-10-02 08:36:45.774 2 DEBUG oslo_concurrency.lockutils [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:45 np0005465604 nova_compute[260603]: 2025-10-02 08:36:45.775 2 DEBUG nova.compute.manager [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] No waiting events found dispatching network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:36:45 np0005465604 nova_compute[260603]: 2025-10-02 08:36:45.775 2 WARNING nova.compute.manager [req-e5b0487e-b3d4-4645-ae61-d1e6bc7fbbab req-88b684a8-b4b5-4016-940b-839e974c7382 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received unexpected event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:36:46 np0005465604 nova_compute[260603]: 2025-10-02 08:36:46.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 169 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 557 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct  2 04:36:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:36:48 np0005465604 nova_compute[260603]: 2025-10-02 08:36:48.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 87 op/s
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.433 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.434 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.434 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.435 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.435 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.437 2 INFO nova.compute.manager [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Terminating instance#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.439 2 DEBUG nova.compute.manager [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:36:49 np0005465604 kernel: tapbac9c5c0-ed (unregistering): left promiscuous mode
Oct  2 04:36:49 np0005465604 NetworkManager[45129]: <info>  [1759394209.4999] device (tapbac9c5c0-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:49 np0005465604 ovn_controller[152344]: 2025-10-02T08:36:49Z|00881|binding|INFO|Releasing lport bac9c5c0-ed2b-471f-b459-79324411faf4 from this chassis (sb_readonly=0)
Oct  2 04:36:49 np0005465604 ovn_controller[152344]: 2025-10-02T08:36:49Z|00882|binding|INFO|Setting lport bac9c5c0-ed2b-471f-b459-79324411faf4 down in Southbound
Oct  2 04:36:49 np0005465604 ovn_controller[152344]: 2025-10-02T08:36:49Z|00883|binding|INFO|Removing iface tapbac9c5c0-ed ovn-installed in OVS
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.523 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:8c:25 10.100.0.7'], port_security=['fa:16:3e:2a:8c:25 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '92c7bc30-8758-43e7-804c-9e0ef04a4a77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fb52febddf74ed0a2a55eb14b67cd8f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f2de6c7-a80e-44b4-89d4-b4ed8f0a0be3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d0e85455-5dea-43e0-94fe-8e8ae5134610, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bac9c5c0-ed2b-471f-b459-79324411faf4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:36:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.526 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bac9c5c0-ed2b-471f-b459-79324411faf4 in datapath 7312fb31-cf2e-459a-94af-ddc1a56ed03f unbound from our chassis#033[00m
Oct  2 04:36:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.528 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7312fb31-cf2e-459a-94af-ddc1a56ed03f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:36:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.529 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[716f8a5b-7d08-4da0-96e1-3468f37306d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.530 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f namespace which is not needed anymore#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:49 np0005465604 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Oct  2 04:36:49 np0005465604 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000005a.scope: Consumed 6.206s CPU time.
Oct  2 04:36:49 np0005465604 systemd-machined[214636]: Machine qemu-109-instance-0000005a terminated.
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.673 2 INFO nova.virt.libvirt.driver [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Instance destroyed successfully.#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.673 2 DEBUG nova.objects.instance [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lazy-loading 'resources' on Instance uuid 92c7bc30-8758-43e7-804c-9e0ef04a4a77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.687 2 DEBUG nova.virt.libvirt.vif [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:36:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-527322298',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-527322298',id=90,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:36:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0fb52febddf74ed0a2a55eb14b67cd8f',ramdisk_id='',reservation_id='r-epjg0x69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1766965232',owner_user_name='tempest-ServerTagsTestJSON-1766965232-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:36:44Z,user_data=None,user_id='cbbd2bff5ed749af8443f40670db21e1',uuid=92c7bc30-8758-43e7-804c-9e0ef04a4a77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.687 2 DEBUG nova.network.os_vif_util [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converting VIF {"id": "bac9c5c0-ed2b-471f-b459-79324411faf4", "address": "fa:16:3e:2a:8c:25", "network": {"id": "7312fb31-cf2e-459a-94af-ddc1a56ed03f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-691912246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fb52febddf74ed0a2a55eb14b67cd8f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbac9c5c0-ed", "ovs_interfaceid": "bac9c5c0-ed2b-471f-b459-79324411faf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.688 2 DEBUG nova.network.os_vif_util [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.688 2 DEBUG os_vif [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbac9c5c0-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.735 2 INFO os_vif [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=bac9c5c0-ed2b-471f-b459-79324411faf4,network=Network(7312fb31-cf2e-459a-94af-ddc1a56ed03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbac9c5c0-ed')#033[00m
Oct  2 04:36:49 np0005465604 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [NOTICE]   (348156) : haproxy version is 2.8.14-c23fe91
Oct  2 04:36:49 np0005465604 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [NOTICE]   (348156) : path to executable is /usr/sbin/haproxy
Oct  2 04:36:49 np0005465604 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [WARNING]  (348156) : Exiting Master process...
Oct  2 04:36:49 np0005465604 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [ALERT]    (348156) : Current worker (348158) exited with code 143 (Terminated)
Oct  2 04:36:49 np0005465604 neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f[348152]: [WARNING]  (348156) : All workers exited. Exiting... (0)
Oct  2 04:36:49 np0005465604 systemd[1]: libpod-fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda.scope: Deactivated successfully.
Oct  2 04:36:49 np0005465604 podman[348194]: 2025-10-02 08:36:49.777389207 +0000 UTC m=+0.097138990 container died fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:36:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda-userdata-shm.mount: Deactivated successfully.
Oct  2 04:36:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4d296f85a914a5844763b98d0871ccd31024d21220b2eea41f5c25dcb94cf2e1-merged.mount: Deactivated successfully.
Oct  2 04:36:49 np0005465604 podman[348194]: 2025-10-02 08:36:49.836228416 +0000 UTC m=+0.155978179 container cleanup fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 04:36:49 np0005465604 systemd[1]: libpod-conmon-fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda.scope: Deactivated successfully.
Oct  2 04:36:49 np0005465604 podman[348248]: 2025-10-02 08:36:49.943346496 +0000 UTC m=+0.076156481 container remove fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:36:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.957 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[845bd698-aa58-456a-9ce7-e871f57122bb]: (4, ('Thu Oct  2 08:36:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f (fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda)\nfb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda\nThu Oct  2 08:36:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f (fb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda)\nfb28265f09aebba8fe3951ef49e11f154cbb7362efb0b96bd593480508f04eda\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.960 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[298418ab-ba39-4827-bf07-c5a816097b02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.962 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7312fb31-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:49 np0005465604 kernel: tap7312fb31-c0: left promiscuous mode
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:49.969 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7ce7d0-0fd4-450f-8c8e-3e77a154f5a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:49 np0005465604 nova_compute[260603]: 2025-10-02 08:36:49.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:50.000 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba7d1e0-e0d0-4c63-b9cd-b3c282d9da09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:50.002 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8362bafd-29e6-4d4e-84cc-f56cf3ad53bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.024 2 DEBUG nova.compute.manager [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-unplugged-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.024 2 DEBUG oslo_concurrency.lockutils [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.024 2 DEBUG oslo_concurrency.lockutils [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.024 2 DEBUG oslo_concurrency.lockutils [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.025 2 DEBUG nova.compute.manager [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] No waiting events found dispatching network-vif-unplugged-bac9c5c0-ed2b-471f-b459-79324411faf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.025 2 DEBUG nova.compute.manager [req-78f28ee2-0377-4386-b8ba-87a11a5a5da7 req-b5e5267e-2a0f-40ee-ace5-195e75c46801 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-unplugged-bac9c5c0-ed2b-471f-b459-79324411faf4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:36:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:50.028 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[415d8758-84a4-44e9-bc6d-daa4f1f0d051]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508193, 'reachable_time': 25566, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348264, 'error': None, 'target': 'ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:50.031 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7312fb31-cf2e-459a-94af-ddc1a56ed03f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:36:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:36:50.031 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[319e5e74-594c-4822-849a-23587d15bf11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:36:50 np0005465604 systemd[1]: run-netns-ovnmeta\x2d7312fb31\x2dcf2e\x2d459a\x2d94af\x2dddc1a56ed03f.mount: Deactivated successfully.
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.152 2 INFO nova.virt.libvirt.driver [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Deleting instance files /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77_del#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.153 2 INFO nova.virt.libvirt.driver [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Deletion of /var/lib/nova/instances/92c7bc30-8758-43e7-804c-9e0ef04a4a77_del complete#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.206 2 INFO nova.compute.manager [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.206 2 DEBUG oslo.service.loopingcall [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.206 2 DEBUG nova.compute.manager [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.206 2 DEBUG nova.network.neutron [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.929 2 DEBUG nova.network.neutron [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.944 2 INFO nova.compute.manager [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Took 0.74 seconds to deallocate network for instance.#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.988 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:50 np0005465604 nova_compute[260603]: 2025-10-02 08:36:50.988 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:51 np0005465604 nova_compute[260603]: 2025-10-02 08:36:51.067 2 DEBUG oslo_concurrency.processutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:36:51 np0005465604 nova_compute[260603]: 2025-10-02 08:36:51.112 2 DEBUG nova.compute.manager [req-c7eb78dc-54df-4c70-8bd4-33dd4a5d7159 req-025ce5ed-df52-430a-b11f-d3709e7c3e63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-deleted-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:36:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Oct  2 04:36:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:36:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448324435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:36:51 np0005465604 nova_compute[260603]: 2025-10-02 08:36:51.531 2 DEBUG oslo_concurrency.processutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:51 np0005465604 nova_compute[260603]: 2025-10-02 08:36:51.545 2 DEBUG nova.compute.provider_tree [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:36:51 np0005465604 nova_compute[260603]: 2025-10-02 08:36:51.574 2 DEBUG nova.scheduler.client.report [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:36:51 np0005465604 nova_compute[260603]: 2025-10-02 08:36:51.604 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:51 np0005465604 nova_compute[260603]: 2025-10-02 08:36:51.644 2 INFO nova.scheduler.client.report [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Deleted allocations for instance 92c7bc30-8758-43e7-804c-9e0ef04a4a77#033[00m
Oct  2 04:36:51 np0005465604 nova_compute[260603]: 2025-10-02 08:36:51.729 2 DEBUG oslo_concurrency.lockutils [None req-ff4ddd8d-9c6a-4ebe-8a75-30e9b773485b cbbd2bff5ed749af8443f40670db21e1 0fb52febddf74ed0a2a55eb14b67cd8f - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:36:52 np0005465604 nova_compute[260603]: 2025-10-02 08:36:52.238 2 DEBUG nova.compute.manager [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:36:52 np0005465604 nova_compute[260603]: 2025-10-02 08:36:52.239 2 DEBUG oslo_concurrency.lockutils [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:52 np0005465604 nova_compute[260603]: 2025-10-02 08:36:52.239 2 DEBUG oslo_concurrency.lockutils [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:52 np0005465604 nova_compute[260603]: 2025-10-02 08:36:52.240 2 DEBUG oslo_concurrency.lockutils [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "92c7bc30-8758-43e7-804c-9e0ef04a4a77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:52 np0005465604 nova_compute[260603]: 2025-10-02 08:36:52.240 2 DEBUG nova.compute.manager [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] No waiting events found dispatching network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:36:52 np0005465604 nova_compute[260603]: 2025-10-02 08:36:52.241 2 WARNING nova.compute.manager [req-beb0af33-c51c-41ca-afcd-6ab81a683d40 req-55f89cd7-1a68-4ba3-99c4-8a6a0503bcf5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Received unexpected event network-vif-plugged-bac9c5c0-ed2b-471f-b459-79324411faf4 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:36:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 101 op/s
Oct  2 04:36:53 np0005465604 nova_compute[260603]: 2025-10-02 08:36:53.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:54 np0005465604 podman[348288]: 2025-10-02 08:36:54.048275771 +0000 UTC m=+0.098974216 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:36:54 np0005465604 podman[348287]: 2025-10-02 08:36:54.137378429 +0000 UTC m=+0.190679392 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 04:36:54 np0005465604 nova_compute[260603]: 2025-10-02 08:36:54.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:36:55Z|00884|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct  2 04:36:55 np0005465604 nova_compute[260603]: 2025-10-02 08:36:55.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 100 op/s
Oct  2 04:36:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 100 op/s
Oct  2 04:36:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:36:57 np0005465604 nova_compute[260603]: 2025-10-02 08:36:57.528 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:57 np0005465604 nova_compute[260603]: 2025-10-02 08:36:57.529 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:57 np0005465604 nova_compute[260603]: 2025-10-02 08:36:57.549 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:36:57 np0005465604 nova_compute[260603]: 2025-10-02 08:36:57.620 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:57 np0005465604 nova_compute[260603]: 2025-10-02 08:36:57.621 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:57 np0005465604 nova_compute[260603]: 2025-10-02 08:36:57.632 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:36:57 np0005465604 nova_compute[260603]: 2025-10-02 08:36:57.632 2 INFO nova.compute.claims [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:36:57 np0005465604 nova_compute[260603]: 2025-10-02 08:36:57.749 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:36:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:36:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3238598947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.237 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.244 2 DEBUG nova.compute.provider_tree [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.275 2 DEBUG nova.scheduler.client.report [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.297 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.297 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.351 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.352 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.386 2 INFO nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.408 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.501 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.503 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.503 2 INFO nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating image(s)#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.525 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.548 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.572 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.575 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.677 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.679 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.680 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.680 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.715 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.720 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7392e1c1-40db-4b57-8ed8-278f89402f65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.771 2 DEBUG nova.policy [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb1b3a5ae9514259b27a0b7a28f23cda', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:36:58 np0005465604 nova_compute[260603]: 2025-10-02 08:36:58.985 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7392e1c1-40db-4b57-8ed8-278f89402f65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:36:59 np0005465604 nova_compute[260603]: 2025-10-02 08:36:59.042 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] resizing rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:36:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 79 op/s
Oct  2 04:36:59 np0005465604 nova_compute[260603]: 2025-10-02 08:36:59.151 2 DEBUG nova.objects.instance [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'migration_context' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:36:59 np0005465604 nova_compute[260603]: 2025-10-02 08:36:59.200 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:36:59 np0005465604 nova_compute[260603]: 2025-10-02 08:36:59.201 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Ensure instance console log exists: /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:36:59 np0005465604 nova_compute[260603]: 2025-10-02 08:36:59.201 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:36:59 np0005465604 nova_compute[260603]: 2025-10-02 08:36:59.202 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:36:59 np0005465604 nova_compute[260603]: 2025-10-02 08:36:59.202 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:36:59 np0005465604 nova_compute[260603]: 2025-10-02 08:36:59.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:36:59 np0005465604 nova_compute[260603]: 2025-10-02 08:36:59.875 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Successfully created port: d38707db-5f0b-4ebe-80b9-ed84810b2c21 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:37:01 np0005465604 podman[348523]: 2025-10-02 08:37:01.033165077 +0000 UTC m=+0.085573403 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 04:37:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 123 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct  2 04:37:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.326 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Successfully updated port: d38707db-5f0b-4ebe-80b9-ed84810b2c21 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.348 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.348 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquired lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.349 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.431 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.431 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.456 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.567 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.568 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.576 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.576 2 INFO nova.compute.claims [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.596 2 DEBUG nova.compute.manager [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-changed-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.597 2 DEBUG nova.compute.manager [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Refreshing instance network info cache due to event network-changed-d38707db-5f0b-4ebe-80b9-ed84810b2c21. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.597 2 DEBUG oslo_concurrency.lockutils [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.760 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:02 np0005465604 nova_compute[260603]: 2025-10-02 08:37:02.861 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:37:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct  2 04:37:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:37:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/343413180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.248 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.256 2 DEBUG nova.compute.provider_tree [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:37:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct  2 04:37:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct  2 04:37:03 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.316 2 DEBUG nova.scheduler.client.report [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.353 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.354 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.413 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.438 2 INFO nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.460 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.576 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.578 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.579 2 INFO nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating image(s)#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.615 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.645 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.678 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.685 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.795 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.797 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.797 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.798 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.825 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.829 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.922 2 DEBUG nova.network.neutron [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Updating instance_info_cache with network_info: [{"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.979 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Releasing lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.980 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance network_info: |[{"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.981 2 DEBUG oslo_concurrency.lockutils [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.982 2 DEBUG nova.network.neutron [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Refreshing network info cache for port d38707db-5f0b-4ebe-80b9-ed84810b2c21 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.988 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start _get_guest_xml network_info=[{"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:37:03 np0005465604 nova_compute[260603]: 2025-10-02 08:37:03.999 2 WARNING nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.006 2 DEBUG nova.virt.libvirt.host [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.007 2 DEBUG nova.virt.libvirt.host [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.017 2 DEBUG nova.virt.libvirt.host [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.017 2 DEBUG nova.virt.libvirt.host [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.018 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.018 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.019 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.019 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.019 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.020 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.020 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.020 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.021 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.021 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.021 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.021 2 DEBUG nova.virt.hardware [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.025 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:04 np0005465604 podman[348656]: 2025-10-02 08:37:04.039499483 +0000 UTC m=+0.107490882 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251001)
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.118 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.195 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] resizing rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.294 2 DEBUG nova.objects.instance [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'migration_context' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.314 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.314 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Ensure instance console log exists: /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.315 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.315 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.315 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.317 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.322 2 WARNING nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.330 2 DEBUG nova.virt.libvirt.host [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.330 2 DEBUG nova.virt.libvirt.host [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.333 2 DEBUG nova.virt.libvirt.host [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.334 2 DEBUG nova.virt.libvirt.host [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.334 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.334 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.335 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.335 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.336 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.336 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.336 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.336 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.337 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.337 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.337 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.337 2 DEBUG nova.virt.hardware [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.341 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:37:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3287600729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.514 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.552 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.563 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.672 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394209.6714199, 92c7bc30-8758-43e7-804c-9e0ef04a4a77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.673 2 INFO nova.compute.manager [-] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.712 2 DEBUG nova.compute.manager [None req-f53031e8-781b-44a6-b839-468693ff08d0 - - - - - -] [instance: 92c7bc30-8758-43e7-804c-9e0ef04a4a77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:37:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2191010326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.771 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.793 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:04 np0005465604 nova_compute[260603]: 2025-10-02 08:37:04.796 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:37:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3706953851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.025 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.028 2 DEBUG nova.virt.libvirt.vif [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-tempest.common.compute-instance-1274366950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTe
stJSON-1407264397-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:36:58Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.029 2 DEBUG nova.network.os_vif_util [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.030 2 DEBUG nova.network.os_vif_util [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.032 2 DEBUG nova.objects.instance [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 169 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Oct  2 04:37:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:37:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/248856262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.203 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.205 2 DEBUG nova.objects.instance [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'pci_devices' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.255 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <uuid>496c1bb3-a098-41ab-ac67-5a6a89a0de53</uuid>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <name>instance-0000005c</name>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerShowV254Test-server-2038843268</nova:name>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:37:04</nova:creationTime>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:user uuid="9aaa9a9ec2564ed3a346216a96231feb">tempest-ServerShowV254Test-348861658-project-member</nova:user>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:project uuid="850f950eaa1d49239f8913dfee5dc44e">tempest-ServerShowV254Test-348861658</nova:project>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="serial">496c1bb3-a098-41ab-ac67-5a6a89a0de53</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="uuid">496c1bb3-a098-41ab-ac67-5a6a89a0de53</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/console.log" append="off"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:37:05 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:37:05 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.260 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <uuid>7392e1c1-40db-4b57-8ed8-278f89402f65</uuid>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <name>instance-0000005b</name>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:name>tempest-tempest.common.compute-instance-1274366950</nova:name>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:37:04</nova:creationTime>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:user uuid="bb1b3a5ae9514259b27a0b7a28f23cda">tempest-ServerActionsTestJSON-1407264397-project-member</nova:user>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:project uuid="b43ebc87104041aba179e47c5e6ecc5f">tempest-ServerActionsTestJSON-1407264397</nova:project>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <nova:port uuid="d38707db-5f0b-4ebe-80b9-ed84810b2c21">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="serial">7392e1c1-40db-4b57-8ed8-278f89402f65</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="uuid">7392e1c1-40db-4b57-8ed8-278f89402f65</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7392e1c1-40db-4b57-8ed8-278f89402f65_disk">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:fe:e4:85"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <target dev="tapd38707db-5f"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/console.log" append="off"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:37:05 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:37:05 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:37:05 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:37:05 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.263 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Preparing to wait for external event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.264 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.265 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.265 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.266 2 DEBUG nova.virt.libvirt.vif [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-tempest.common.compute-instance-1274366950',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:36:58Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.267 2 DEBUG nova.network.os_vif_util [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.268 2 DEBUG nova.network.os_vif_util [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.269 2 DEBUG os_vif [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.275 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.276 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.282 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd38707db-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd38707db-5f, col_values=(('external_ids', {'iface-id': 'd38707db-5f0b-4ebe-80b9-ed84810b2c21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:e4:85', 'vm-uuid': '7392e1c1-40db-4b57-8ed8-278f89402f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:05 np0005465604 NetworkManager[45129]: <info>  [1759394225.2911] manager: (tapd38707db-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.302 2 INFO os_vif [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f')#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.369 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.370 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.370 2 INFO nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Using config drive#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.388 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.398 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.399 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.399 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No VIF found with MAC fa:16:3e:fe:e4:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.399 2 INFO nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Using config drive#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.418 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.742 2 INFO nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating config drive at /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.750 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnid9j9j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.883 2 DEBUG nova.network.neutron [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Updated VIF entry in instance network info cache for port d38707db-5f0b-4ebe-80b9-ed84810b2c21. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.884 2 DEBUG nova.network.neutron [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Updating instance_info_cache with network_info: [{"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.921 2 DEBUG oslo_concurrency.lockutils [req-610d8056-05b5-4284-b4af-3a1a20de0dc4 req-87124b6e-94b0-42c9-96c7-acddb5f5c188 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7392e1c1-40db-4b57-8ed8-278f89402f65" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.923 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnid9j9j" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.951 2 DEBUG nova.storage.rbd_utils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:05 np0005465604 nova_compute[260603]: 2025-10-02 08:37:05.955 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.117 2 INFO nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating config drive at /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.127 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptcupm8gt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.171 2 DEBUG oslo_concurrency.processutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.173 2 INFO nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deleting local config drive /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config because it was imported into RBD.#033[00m
Oct  2 04:37:06 np0005465604 systemd-machined[214636]: New machine qemu-110-instance-0000005c.
Oct  2 04:37:06 np0005465604 systemd[1]: Started Virtual Machine qemu-110-instance-0000005c.
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.279 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptcupm8gt" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct  2 04:37:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct  2 04:37:06 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.347 2 DEBUG nova.storage.rbd_utils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.352 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.532 2 DEBUG oslo_concurrency.processutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.534 2 INFO nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deleting local config drive /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config because it was imported into RBD.#033[00m
Oct  2 04:37:06 np0005465604 kernel: tapd38707db-5f: entered promiscuous mode
Oct  2 04:37:06 np0005465604 NetworkManager[45129]: <info>  [1759394226.6193] manager: (tapd38707db-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Oct  2 04:37:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:06Z|00885|binding|INFO|Claiming lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 for this chassis.
Oct  2 04:37:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:06Z|00886|binding|INFO|d38707db-5f0b-4ebe-80b9-ed84810b2c21: Claiming fa:16:3e:fe:e4:85 10.100.0.10
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.639 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.641 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.642 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0#033[00m
Oct  2 04:37:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:06Z|00887|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 ovn-installed in OVS
Oct  2 04:37:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:06Z|00888|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 up in Southbound
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:06 np0005465604 systemd-machined[214636]: New machine qemu-111-instance-0000005b.
Oct  2 04:37:06 np0005465604 systemd-udevd[349023]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.676 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[19f01d48-7bc0-4926-becb-7d469fa4c194]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:06 np0005465604 systemd[1]: Started Virtual Machine qemu-111-instance-0000005b.
Oct  2 04:37:06 np0005465604 NetworkManager[45129]: <info>  [1759394226.6869] device (tapd38707db-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:37:06 np0005465604 NetworkManager[45129]: <info>  [1759394226.6877] device (tapd38707db-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.725 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b43cd2-3b34-4ebe-8db4-2a046aeb1162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.730 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[20125f4c-bf96-4edb-b75a-62c68b5e261f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.767 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c51c3f7e-e353-438e-b531-5d780cb1fd45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.792 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[925d3d1c-3713-41e2-ba2a-f6242add3954]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349036, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.813 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[374c748a-7c42-4dda-8239-bc9f4d6af9c9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349037, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349037, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.816 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.820 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.820 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.821 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:06.822 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.979 2 DEBUG nova.compute.manager [req-e584d5aa-f1a6-4620-a37e-0620501ad72f req-da352786-3a04-4d23-afd6-abe10abfbd8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.980 2 DEBUG oslo_concurrency.lockutils [req-e584d5aa-f1a6-4620-a37e-0620501ad72f req-da352786-3a04-4d23-afd6-abe10abfbd8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.980 2 DEBUG oslo_concurrency.lockutils [req-e584d5aa-f1a6-4620-a37e-0620501ad72f req-da352786-3a04-4d23-afd6-abe10abfbd8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.981 2 DEBUG oslo_concurrency.lockutils [req-e584d5aa-f1a6-4620-a37e-0620501ad72f req-da352786-3a04-4d23-afd6-abe10abfbd8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:06 np0005465604 nova_compute[260603]: 2025-10-02 08:37:06.981 2 DEBUG nova.compute.manager [req-e584d5aa-f1a6-4620-a37e-0620501ad72f req-da352786-3a04-4d23-afd6-abe10abfbd8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Processing event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:37:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 186 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 4.1 MiB/s wr, 101 op/s
Oct  2 04:37:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:07.329 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:07.331 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.594 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394227.5941772, 496c1bb3-a098-41ab-ac67-5a6a89a0de53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.595 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.597 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.597 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.598 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.601 2 INFO nova.virt.libvirt.driver [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance spawned successfully.#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.601 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.603 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.606 2 INFO nova.virt.libvirt.driver [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance spawned successfully.#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.606 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.620 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.625 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.634 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.634 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.635 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.636 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.637 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.638 2 DEBUG nova.virt.libvirt.driver [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.647 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.647 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394227.5954683, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.648 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Started (Lifecycle Event)#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.651 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.652 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.653 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.653 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.654 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.655 2 DEBUG nova.virt.libvirt.driver [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.685 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.689 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.719 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.719 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394227.5955641, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.720 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.730 2 INFO nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Took 4.15 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.731 2 DEBUG nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.739 2 INFO nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Took 9.24 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.739 2 DEBUG nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.766 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.770 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394227.5955925, 496c1bb3-a098-41ab-ac67-5a6a89a0de53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.771 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] VM Started (Lifecycle Event)#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.792 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.796 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.812 2 INFO nova.compute.manager [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Took 5.27 seconds to build instance.#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.817 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394227.6017065, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.818 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.820 2 INFO nova.compute.manager [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Took 10.23 seconds to build instance.#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.834 2 DEBUG oslo_concurrency.lockutils [None req-24ed5d15-9b9d-4700-8641-5c7759f0e7c6 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.836 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.837 2 DEBUG oslo_concurrency.lockutils [None req-b46edb38-8280-4e66-95e7-e6c85ebd8b18 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:07 np0005465604 nova_compute[260603]: 2025-10-02 08:37:07.839 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:37:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct  2 04:37:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct  2 04:37:08 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct  2 04:37:08 np0005465604 nova_compute[260603]: 2025-10-02 08:37:08.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct  2 04:37:09 np0005465604 nova_compute[260603]: 2025-10-02 08:37:09.296 2 DEBUG nova.compute.manager [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:09 np0005465604 nova_compute[260603]: 2025-10-02 08:37:09.297 2 DEBUG oslo_concurrency.lockutils [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:09 np0005465604 nova_compute[260603]: 2025-10-02 08:37:09.297 2 DEBUG oslo_concurrency.lockutils [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:09 np0005465604 nova_compute[260603]: 2025-10-02 08:37:09.297 2 DEBUG oslo_concurrency.lockutils [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:09 np0005465604 nova_compute[260603]: 2025-10-02 08:37:09.297 2 DEBUG nova.compute.manager [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:37:09 np0005465604 nova_compute[260603]: 2025-10-02 08:37:09.298 2 WARNING nova.compute.manager [req-34d7fdd0-bb03-49ac-8cf9-14ef30de3f34 req-4be4d0fd-9e2c-4de3-a7e6-df994dd88dd9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:37:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:09.333 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:10 np0005465604 nova_compute[260603]: 2025-10-02 08:37:10.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct  2 04:37:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct  2 04:37:10 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct  2 04:37:10 np0005465604 nova_compute[260603]: 2025-10-02 08:37:10.530 2 INFO nova.compute.manager [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Rebuilding instance#033[00m
Oct  2 04:37:10 np0005465604 nova_compute[260603]: 2025-10-02 08:37:10.824 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:10 np0005465604 nova_compute[260603]: 2025-10-02 08:37:10.841 2 DEBUG nova.compute.manager [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:10 np0005465604 nova_compute[260603]: 2025-10-02 08:37:10.891 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'pci_requests' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:10 np0005465604 nova_compute[260603]: 2025-10-02 08:37:10.904 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'pci_devices' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:10 np0005465604 nova_compute[260603]: 2025-10-02 08:37:10.921 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'resources' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:10 np0005465604 nova_compute[260603]: 2025-10-02 08:37:10.923 2 INFO nova.compute.manager [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Rebuilding instance#033[00m
Oct  2 04:37:10 np0005465604 nova_compute[260603]: 2025-10-02 08:37:10.932 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'migration_context' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:10 np0005465604 nova_compute[260603]: 2025-10-02 08:37:10.943 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:37:10 np0005465604 nova_compute[260603]: 2025-10-02 08:37:10.945 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:37:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct  2 04:37:11 np0005465604 nova_compute[260603]: 2025-10-02 08:37:11.576 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:11 np0005465604 nova_compute[260603]: 2025-10-02 08:37:11.599 2 DEBUG nova.compute.manager [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:11 np0005465604 nova_compute[260603]: 2025-10-02 08:37:11.665 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_requests' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:11 np0005465604 nova_compute[260603]: 2025-10-02 08:37:11.679 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:11 np0005465604 nova_compute[260603]: 2025-10-02 08:37:11.693 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'resources' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:11 np0005465604 nova_compute[260603]: 2025-10-02 08:37:11.704 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'migration_context' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:11 np0005465604 nova_compute[260603]: 2025-10-02 08:37:11.714 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:37:11 np0005465604 nova_compute[260603]: 2025-10-02 08:37:11.716 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:37:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.2 MiB/s wr, 435 op/s
Oct  2 04:37:13 np0005465604 nova_compute[260603]: 2025-10-02 08:37:13.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.3 MiB/s wr, 321 op/s
Oct  2 04:37:15 np0005465604 nova_compute[260603]: 2025-10-02 08:37:15.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 216 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.6 KiB/s wr, 242 op/s
Oct  2 04:37:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct  2 04:37:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct  2 04:37:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct  2 04:37:17 np0005465604 nova_compute[260603]: 2025-10-02 08:37:17.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:37:17 np0005465604 nova_compute[260603]: 2025-10-02 08:37:17.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:37:18 np0005465604 nova_compute[260603]: 2025-10-02 08:37:18.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 221 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.2 MiB/s wr, 266 op/s
Oct  2 04:37:20 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:20Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:e4:85 10.100.0.10
Oct  2 04:37:20 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:20Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:e4:85 10.100.0.10
Oct  2 04:37:20 np0005465604 nova_compute[260603]: 2025-10-02 08:37:20.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:20 np0005465604 nova_compute[260603]: 2025-10-02 08:37:20.983 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:37:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 221 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 234 op/s
Oct  2 04:37:21 np0005465604 nova_compute[260603]: 2025-10-02 08:37:21.760 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:37:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:37:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2474133518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:37:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:37:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2474133518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:37:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Oct  2 04:37:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Oct  2 04:37:22 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Oct  2 04:37:22 np0005465604 nova_compute[260603]: 2025-10-02 08:37:22.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:37:22 np0005465604 nova_compute[260603]: 2025-10-02 08:37:22.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:37:22 np0005465604 nova_compute[260603]: 2025-10-02 08:37:22.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:37:22 np0005465604 nova_compute[260603]: 2025-10-02 08:37:22.720 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:37:22 np0005465604 nova_compute[260603]: 2025-10-02 08:37:22.721 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:37:22 np0005465604 nova_compute[260603]: 2025-10-02 08:37:22.721 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:37:22 np0005465604 nova_compute[260603]: 2025-10-02 08:37:22.721 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 393 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 948 KiB/s rd, 20 MiB/s wr, 248 op/s
Oct  2 04:37:23 np0005465604 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct  2 04:37:23 np0005465604 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005c.scope: Consumed 12.882s CPU time.
Oct  2 04:37:23 np0005465604 systemd-machined[214636]: Machine qemu-110-instance-0000005c terminated.
Oct  2 04:37:23 np0005465604 nova_compute[260603]: 2025-10-02 08:37:23.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.002 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.010 2 INFO nova.virt.libvirt.driver [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance destroyed successfully.#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.018 2 INFO nova.virt.libvirt.driver [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance destroyed successfully.#033[00m
Oct  2 04:37:24 np0005465604 kernel: tapd38707db-5f (unregistering): left promiscuous mode
Oct  2 04:37:24 np0005465604 NetworkManager[45129]: <info>  [1759394244.1034] device (tapd38707db-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00889|binding|INFO|Releasing lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 from this chassis (sb_readonly=0)
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00890|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 down in Southbound
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00891|binding|INFO|Removing iface tapd38707db-5f ovn-installed in OVS
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct  2 04:37:24 np0005465604 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005b.scope: Consumed 12.644s CPU time.
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.211 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.213 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.214 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0#033[00m
Oct  2 04:37:24 np0005465604 systemd-machined[214636]: Machine qemu-111-instance-0000005b terminated.
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.235 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fb2c20-8481-49f6-8141-f651bf8d72e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 podman[349145]: 2025-10-02 08:37:24.268232941 +0000 UTC m=+0.104565482 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.282 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[be09bfdf-c164-47e1-9b51-204ddee4b6e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.286 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9811c8a9-5393-4eca-b1b5-317539df905e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Oct  2 04:37:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Oct  2 04:37:24 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.323 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e2882a7c-6336-4838-ac1e-4188ab9db160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 kernel: tapd38707db-5f: entered promiscuous mode
Oct  2 04:37:24 np0005465604 NetworkManager[45129]: <info>  [1759394244.3314] manager: (tapd38707db-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Oct  2 04:37:24 np0005465604 systemd-udevd[349124]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:37:24 np0005465604 kernel: tapd38707db-5f (unregistering): left promiscuous mode
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00892|binding|INFO|Claiming lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 for this chassis.
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00893|binding|INFO|d38707db-5f0b-4ebe-80b9-ed84810b2c21: Claiming fa:16:3e:fe:e4:85 10.100.0.10
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.345 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.353 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e472d6-362b-4188-ba6c-9682cd4dbcd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349196, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00894|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 ovn-installed in OVS
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00895|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 up in Southbound
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00896|binding|INFO|Releasing lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 from this chassis (sb_readonly=1)
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00897|if_status|INFO|Dropped 2 log messages in last 71 seconds (most recently, 71 seconds ago) due to excessive rate
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00898|if_status|INFO|Not setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 down as sb is readonly
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00899|binding|INFO|Removing iface tapd38707db-5f ovn-installed in OVS
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00900|binding|INFO|Releasing lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 from this chassis (sb_readonly=1)
Oct  2 04:37:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:24Z|00901|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 down in Southbound
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.371 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86b66655-1442-48b1-8e5c-79fea0d55621]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349204, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349204, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.379 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.385 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.389 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.390 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.391 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.391 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.393 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.395 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0#033[00m
Oct  2 04:37:24 np0005465604 podman[349163]: 2025-10-02 08:37:24.415470957 +0000 UTC m=+0.172856777 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.419 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[118f5336-aa4c-45b3-a985-3ad205d3f6dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.453 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ca634f86-2450-4ae4-a6d2-4b1d2fcc3dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.457 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3a701505-ab06-4bf4-b671-97ec9617d183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.495 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4e9757-49fd-4185-827f-a2cacbb84804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[299a06c3-830b-4f35-b8c1-a7d0f61881a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349211, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.547 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[39f40b36-a98b-4318-98c7-8da2b7162665]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349212, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349212, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.550 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.558 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.558 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.559 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.560 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.561 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.563 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.589 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[06dc09ef-93cc-4c59-9c7f-a04b02bd4210]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.597 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deleting instance files /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53_del#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.598 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deletion of /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53_del complete#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.642 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[45225baa-7fa1-459a-9a92-8cb65d9f32fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.650 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[abc87c89-bb6a-49f4-b4c5-b1504ed121b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.700 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[36c4d5a1-d49a-42d3-a510-66e248f8fad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.729 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0113ecf4-5817-4557-817e-7738cd31af6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349219, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.755 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[736d6ba7-ffec-41c9-ba34-e5a699f14fe8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349220, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349220, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.757 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.766 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.766 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.767 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:24.768 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.777 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.788 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.789 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating image(s)#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.820 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.854 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.887 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.892 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.952 2 DEBUG nova.compute.manager [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.953 2 DEBUG oslo_concurrency.lockutils [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.954 2 DEBUG oslo_concurrency.lockutils [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.955 2 DEBUG oslo_concurrency.lockutils [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.955 2 DEBUG nova.compute.manager [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.956 2 WARNING nova.compute.manager [req-18df3d01-4e39-48d1-b711-4c9928f02f88 req-ce10e4a4-4fb0-4fe3-a8ce-24376d24542a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuilding.#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.974 2 INFO nova.virt.libvirt.driver [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance destroyed successfully.#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.984 2 INFO nova.virt.libvirt.driver [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance destroyed successfully.#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.986 2 DEBUG nova.virt.libvirt.vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-ServerActionsTestJSON-server-35828613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:37:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-projec
t-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:37:10Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.987 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.989 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.990 2 DEBUG os_vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.993 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd38707db-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:24 np0005465604 nova_compute[260603]: 2025-10-02 08:37:24.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.001 2 INFO os_vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f')#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.026 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.027 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.028 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.028 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.055 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.058 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 393 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 950 KiB/s rd, 21 MiB/s wr, 241 op/s
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.160 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.192 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.192 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.193 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.193 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.193 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:37:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Oct  2 04:37:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Oct  2 04:37:25 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.345 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.419 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] resizing rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.520 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.521 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Ensure instance console log exists: /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.522 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.522 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.522 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.524 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.532 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deleting instance files /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65_del#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.533 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deletion of /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65_del complete#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.539 2 WARNING nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.547 2 DEBUG nova.virt.libvirt.host [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.548 2 DEBUG nova.virt.libvirt.host [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.552 2 DEBUG nova.virt.libvirt.host [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.553 2 DEBUG nova.virt.libvirt.host [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.553 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.553 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.554 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.554 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.554 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.555 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.555 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.555 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.555 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.556 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.556 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.556 2 DEBUG nova.virt.hardware [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.557 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.586 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.760 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.761 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating image(s)#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.800 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.836 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.873 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.878 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.985 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.987 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.988 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:25 np0005465604 nova_compute[260603]: 2025-10-02 08:37:25.988 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.019 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.024 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 7392e1c1-40db-4b57-8ed8-278f89402f65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:37:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644899086' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.078 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.100 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.106 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.336 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 7392e1c1-40db-4b57-8ed8-278f89402f65_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.395 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] resizing rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.480 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.481 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Ensure instance console log exists: /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.481 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.482 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.482 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.484 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start _get_guest_xml network_info=[{"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.487 2 WARNING nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.492 2 DEBUG nova.virt.libvirt.host [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.493 2 DEBUG nova.virt.libvirt.host [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.497 2 DEBUG nova.virt.libvirt.host [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.498 2 DEBUG nova.virt.libvirt.host [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.498 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.498 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.499 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.499 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.499 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.499 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.499 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.500 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.500 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.500 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.500 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.500 2 DEBUG nova.virt.hardware [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.501 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.523 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:37:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:37:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317023788' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.610 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.614 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  <uuid>496c1bb3-a098-41ab-ac67-5a6a89a0de53</uuid>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  <name>instance-0000005c</name>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerShowV254Test-server-2038843268</nova:name>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:37:25</nova:creationTime>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:        <nova:user uuid="9aaa9a9ec2564ed3a346216a96231feb">tempest-ServerShowV254Test-348861658-project-member</nova:user>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:        <nova:project uuid="850f950eaa1d49239f8913dfee5dc44e">tempest-ServerShowV254Test-348861658</nova:project>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <entry name="serial">496c1bb3-a098-41ab-ac67-5a6a89a0de53</entry>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <entry name="uuid">496c1bb3-a098-41ab-ac67-5a6a89a0de53</entry>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/console.log" append="off"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:37:26 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:37:26 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:37:26 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:37:26 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.681 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.682 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.683 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Using config drive
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.710 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.744 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:37:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:37:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4056181963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.956 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.981 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:37:26 np0005465604 nova_compute[260603]: 2025-10-02 08:37:26.987 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.023 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Creating config drive at /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.028 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ac04uqg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 27727d66-c55f-47a8-9684-365a6cf14b9c does not exist
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e2769864-10ed-43ce-822c-7e6767a4bfe5 does not exist
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a5d5f412-b811-46fb-87de-661777682ee6 does not exist
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.072 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.073 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.073 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.073 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.074 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.074 2 WARNING nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuild_spawning.
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.074 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.074 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.075 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.075 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.076 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.076 2 WARNING nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuild_spawning.
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.076 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.076 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.077 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.077 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.077 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.078 2 WARNING nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuild_spawning.
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.078 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.078 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.078 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.079 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.079 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.079 2 WARNING nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-unplugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuild_spawning.
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.079 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.080 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.080 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.080 2 DEBUG oslo_concurrency.lockutils [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.081 2 DEBUG nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.081 2 WARNING nova.compute.manager [req-2eb4a934-9470-4093-8a4f-c01f43803997 req-9c9febdc-92ea-4d5f-8cb8-6ef160540653 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 332 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 28 MiB/s wr, 390 op/s
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.177 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ac04uqg" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.199 2 DEBUG nova.storage.rbd_utils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] rbd image 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.216 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.394 2 DEBUG oslo_concurrency.processutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config 496c1bb3-a098-41ab-ac67-5a6a89a0de53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.397 2 INFO nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deleting local config drive /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53/disk.config because it was imported into RBD.#033[00m
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:37:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2706149099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.459 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.460 2 DEBUG nova.virt.libvirt.vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-ServerActionsTestJSON-server-35828613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:37:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:37:25Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.461 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.462 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.464 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  <uuid>7392e1c1-40db-4b57-8ed8-278f89402f65</uuid>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  <name>instance-0000005b</name>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerActionsTestJSON-server-35828613</nova:name>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:37:26</nova:creationTime>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <nova:user uuid="bb1b3a5ae9514259b27a0b7a28f23cda">tempest-ServerActionsTestJSON-1407264397-project-member</nova:user>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <nova:project uuid="b43ebc87104041aba179e47c5e6ecc5f">tempest-ServerActionsTestJSON-1407264397</nova:project>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <nova:port uuid="d38707db-5f0b-4ebe-80b9-ed84810b2c21">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <entry name="serial">7392e1c1-40db-4b57-8ed8-278f89402f65</entry>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <entry name="uuid">7392e1c1-40db-4b57-8ed8-278f89402f65</entry>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7392e1c1-40db-4b57-8ed8-278f89402f65_disk">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:fe:e4:85"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <target dev="tapd38707db-5f"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/console.log" append="off"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:37:27 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:37:27 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:37:27 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:37:27 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.465 2 DEBUG nova.compute.manager [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Preparing to wait for external event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.465 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.465 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.465 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.466 2 DEBUG nova.virt.libvirt.vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-ServerActionsTestJSON-server-35828613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:37:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:37:25Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.466 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.466 2 DEBUG nova.network.os_vif_util [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.467 2 DEBUG os_vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.467 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.468 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd38707db-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd38707db-5f, col_values=(('external_ids', {'iface-id': 'd38707db-5f0b-4ebe-80b9-ed84810b2c21', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:e4:85', 'vm-uuid': '7392e1c1-40db-4b57-8ed8-278f89402f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:27 np0005465604 NetworkManager[45129]: <info>  [1759394247.4752] manager: (tapd38707db-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.479 2 INFO os_vif [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f')#033[00m
Oct  2 04:37:27 np0005465604 systemd-machined[214636]: New machine qemu-112-instance-0000005c.
Oct  2 04:37:27 np0005465604 systemd[1]: Started Virtual Machine qemu-112-instance-0000005c.
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.526 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.526 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.526 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] No VIF found with MAC fa:16:3e:fe:e4:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.527 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Using config drive#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.562 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.592 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.598 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.599 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.599 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.600 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.600 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:27 np0005465604 nova_compute[260603]: 2025-10-02 08:37:27.651 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'keypairs' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:27 np0005465604 podman[350078]: 2025-10-02 08:37:27.869372136 +0000 UTC m=+0.070219761 container create 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:37:27 np0005465604 systemd[1]: Started libpod-conmon-7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9.scope.
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:37:27 np0005465604 podman[350078]: 2025-10-02 08:37:27.841111926 +0000 UTC m=+0.041959621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:37:27 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:37:27
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'images', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Oct  2 04:37:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:37:27 np0005465604 podman[350078]: 2025-10-02 08:37:27.973560468 +0000 UTC m=+0.174408173 container init 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:37:27 np0005465604 podman[350078]: 2025-10-02 08:37:27.985778335 +0000 UTC m=+0.186625950 container start 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:37:27 np0005465604 podman[350078]: 2025-10-02 08:37:27.989106445 +0000 UTC m=+0.189954140 container attach 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:37:27 np0005465604 gifted_montalcini[350092]: 167 167
Oct  2 04:37:27 np0005465604 systemd[1]: libpod-7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9.scope: Deactivated successfully.
Oct  2 04:37:27 np0005465604 podman[350078]: 2025-10-02 08:37:27.998327032 +0000 UTC m=+0.199174677 container died 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:37:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-56a8e728249b23ef4eec9e44572f8a2d6f70da0dab88b3f9087941af4bb8a191-merged.mount: Deactivated successfully.
Oct  2 04:37:28 np0005465604 podman[350078]: 2025-10-02 08:37:28.049587683 +0000 UTC m=+0.250435298 container remove 7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:37:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:37:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/831022876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:37:28 np0005465604 systemd[1]: libpod-conmon-7d9a81b7ee52bfc1654da465ec38064dd401240d26bf42d458d05615e51cb0a9.scope: Deactivated successfully.
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.097 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.170 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.170 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.174 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.175 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.182 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.183 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.191 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Creating config drive at /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.197 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0iw9s75 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:37:28 np0005465604 podman[350157]: 2025-10-02 08:37:28.345162837 +0000 UTC m=+0.110955256 container create 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.355 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0iw9s75" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:28 np0005465604 podman[350157]: 2025-10-02 08:37:28.272677997 +0000 UTC m=+0.038470456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.384 2 DEBUG nova.storage.rbd_utils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] rbd image 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.386 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:28 np0005465604 systemd[1]: Started libpod-conmon-39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c.scope.
Oct  2 04:37:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:37:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.533 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.535 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3605MB free_disk=59.872135162353516GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.536 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.536 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:28 np0005465604 podman[350157]: 2025-10-02 08:37:28.552853248 +0000 UTC m=+0.318645707 container init 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:37:28 np0005465604 podman[350157]: 2025-10-02 08:37:28.559854079 +0000 UTC m=+0.325646488 container start 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:37:28 np0005465604 podman[350157]: 2025-10-02 08:37:28.57089231 +0000 UTC m=+0.336684769 container attach 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.622 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ba2cf934-ce76-4de7-a495-285f144bdab7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.622 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 7392e1c1-40db-4b57-8ed8-278f89402f65 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.622 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 496c1bb3-a098-41ab-ac67-5a6a89a0de53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.623 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.623 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.689 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.743 2 DEBUG oslo_concurrency.processutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config 7392e1c1-40db-4b57-8ed8-278f89402f65_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.744 2 INFO nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deleting local config drive /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65/disk.config because it was imported into RBD.#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.802 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 496c1bb3-a098-41ab-ac67-5a6a89a0de53 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.803 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394248.8022242, 496c1bb3-a098-41ab-ac67-5a6a89a0de53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.804 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.807 2 DEBUG nova.compute.manager [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:37:28 np0005465604 kernel: tapd38707db-5f: entered promiscuous mode
Oct  2 04:37:28 np0005465604 NetworkManager[45129]: <info>  [1759394248.8106] manager: (tapd38707db-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/361)
Oct  2 04:37:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:28Z|00902|binding|INFO|Claiming lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 for this chassis.
Oct  2 04:37:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:28Z|00903|binding|INFO|d38707db-5f0b-4ebe-80b9-ed84810b2c21: Claiming fa:16:3e:fe:e4:85 10.100.0.10
Oct  2 04:37:28 np0005465604 systemd-udevd[350175]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.808 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.821 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.824 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis#033[00m
Oct  2 04:37:28 np0005465604 NetworkManager[45129]: <info>  [1759394248.8263] device (tapd38707db-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:37:28 np0005465604 NetworkManager[45129]: <info>  [1759394248.8269] device (tapd38707db-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.828 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.834 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:28Z|00904|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 ovn-installed in OVS
Oct  2 04:37:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:28Z|00905|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 up in Southbound
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.846 2 INFO nova.virt.libvirt.driver [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance spawned successfully.#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.847 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.849 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b39923-3e9d-41e7-8e57-ccf24a708536]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.853 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:37:28 np0005465604 systemd-machined[214636]: New machine qemu-113-instance-0000005b.
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.872 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.872 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394248.8034272, 496c1bb3-a098-41ab-ac67-5a6a89a0de53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.873 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] VM Started (Lifecycle Event)#033[00m
Oct  2 04:37:28 np0005465604 systemd[1]: Started Virtual Machine qemu-113-instance-0000005b.
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.881 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.881 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.882 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.883 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.884 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.885 2 DEBUG nova.virt.libvirt.driver [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.887 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[901289eb-b05d-4225-870e-46b30eb461c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.890 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6a8b9f-2892-4f71-854b-dcbd6ee9525a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.893 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.921 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.928 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4a413369-fbe4-41c3-a412-cb99b97714c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.929 2 DEBUG nova.compute.manager [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.941 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.944 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c98053e-b8e7-4c0b-92c3-ef799171b223]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350269, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.960 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b03b36-5135-43a3-a1aa-b01c0fcb5479]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350271, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350271, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.961 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:28 np0005465604 nova_compute[260603]: 2025-10-02 08:37:28.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.964 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.964 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.965 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:28.965 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.052 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.129 2 DEBUG nova.compute.manager [req-9d2c9ab0-d460-4144-af1b-cdf5a88eba36 req-b16a99c8-02e2-40ec-bbb8-17009b19e40e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.129 2 DEBUG oslo_concurrency.lockutils [req-9d2c9ab0-d460-4144-af1b-cdf5a88eba36 req-b16a99c8-02e2-40ec-bbb8-17009b19e40e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.130 2 DEBUG oslo_concurrency.lockutils [req-9d2c9ab0-d460-4144-af1b-cdf5a88eba36 req-b16a99c8-02e2-40ec-bbb8-17009b19e40e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.130 2 DEBUG oslo_concurrency.lockutils [req-9d2c9ab0-d460-4144-af1b-cdf5a88eba36 req-b16a99c8-02e2-40ec-bbb8-17009b19e40e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.131 2 DEBUG nova.compute.manager [req-9d2c9ab0-d460-4144-af1b-cdf5a88eba36 req-b16a99c8-02e2-40ec-bbb8-17009b19e40e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Processing event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:37:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 215 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 931 KiB/s rd, 22 MiB/s wr, 453 op/s
Oct  2 04:37:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:37:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410219014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.192 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.197 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.220 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.241 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.242 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.242 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.242 2 DEBUG nova.objects.instance [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  2 04:37:29 np0005465604 nova_compute[260603]: 2025-10-02 08:37:29.330 2 DEBUG oslo_concurrency.lockutils [None req-1ed840b5-db57-44fb-8e7c-fd291ad38574 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:37:29 np0005465604 magical_bhabha[350200]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:37:29 np0005465604 magical_bhabha[350200]: --> relative data size: 1.0
Oct  2 04:37:29 np0005465604 magical_bhabha[350200]: --> All data devices are unavailable
Oct  2 04:37:29 np0005465604 systemd[1]: libpod-39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c.scope: Deactivated successfully.
Oct  2 04:37:29 np0005465604 systemd[1]: libpod-39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c.scope: Consumed 1.073s CPU time.
Oct  2 04:37:29 np0005465604 podman[350157]: 2025-10-02 08:37:29.781767685 +0000 UTC m=+1.547560094 container died 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:37:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8feb11e72207a9f80912923d99f61a955ba70479c19cf9fc0bb767e926381a94-merged.mount: Deactivated successfully.
Oct  2 04:37:29 np0005465604 podman[350157]: 2025-10-02 08:37:29.832029395 +0000 UTC m=+1.597821804 container remove 39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:37:29 np0005465604 systemd[1]: libpod-conmon-39f1595a9bc0dbe01aba7ec41e1922ddef6984879e65922bc1b85385942fd59c.scope: Deactivated successfully.
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.127 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 7392e1c1-40db-4b57-8ed8-278f89402f65 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.128 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394250.1268086, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.128 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Started (Lifecycle Event)
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.130 2 DEBUG nova.compute.manager [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.133 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.135 2 INFO nova.virt.libvirt.driver [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance spawned successfully.
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.136 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.159 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.164 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.168 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.169 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.169 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.169 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.169 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.170 2 DEBUG nova.virt.libvirt.driver [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.211 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.211 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394250.127031, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.212 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Paused (Lifecycle Event)
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.253 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.258 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394250.1326697, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.258 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Resumed (Lifecycle Event)
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.259 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.259 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.259 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.260 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.260 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.261 2 INFO nova.compute.manager [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Terminating instance
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.261 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "refresh_cache-496c1bb3-a098-41ab-ac67-5a6a89a0de53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.261 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquired lock "refresh_cache-496c1bb3-a098-41ab-ac67-5a6a89a0de53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.262 2 DEBUG nova.network.neutron [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.264 2 DEBUG nova.compute.manager [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.276 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.283 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.311 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.327 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.327 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.328 2 DEBUG nova.objects.instance [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.394 2 DEBUG oslo_concurrency.lockutils [None req-2bcf7a8f-c238-4908-8adf-dec4da9b1559 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:37:30 np0005465604 podman[350491]: 2025-10-02 08:37:30.410399159 +0000 UTC m=+0.039225500 container create 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.436 2 DEBUG nova.network.neutron [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:37:30 np0005465604 systemd[1]: Started libpod-conmon-7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572.scope.
Oct  2 04:37:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:37:30 np0005465604 podman[350491]: 2025-10-02 08:37:30.49030642 +0000 UTC m=+0.119132781 container init 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 04:37:30 np0005465604 podman[350491]: 2025-10-02 08:37:30.396069788 +0000 UTC m=+0.024896149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:37:30 np0005465604 podman[350491]: 2025-10-02 08:37:30.497163937 +0000 UTC m=+0.125990278 container start 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:37:30 np0005465604 podman[350491]: 2025-10-02 08:37:30.500078434 +0000 UTC m=+0.128904785 container attach 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:37:30 np0005465604 magical_darwin[350508]: 167 167
Oct  2 04:37:30 np0005465604 systemd[1]: libpod-7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572.scope: Deactivated successfully.
Oct  2 04:37:30 np0005465604 podman[350491]: 2025-10-02 08:37:30.505229049 +0000 UTC m=+0.134055390 container died 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:37:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4bab54815eddc5b1bff8affe604d76688e211f71ca5efc053ca0cad5190539de-merged.mount: Deactivated successfully.
Oct  2 04:37:30 np0005465604 podman[350491]: 2025-10-02 08:37:30.539903531 +0000 UTC m=+0.168729872 container remove 7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:37:30 np0005465604 systemd[1]: libpod-conmon-7188ffea9819cad35a68443e41a663d03b8640d1590a50dfabfe1d1fdc3ef572.scope: Deactivated successfully.
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.750 2 DEBUG nova.network.neutron [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.768 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Releasing lock "refresh_cache-496c1bb3-a098-41ab-ac67-5a6a89a0de53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.768 2 DEBUG nova.compute.manager [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:37:30 np0005465604 podman[350530]: 2025-10-02 08:37:30.794623407 +0000 UTC m=+0.053928992 container create 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:37:30 np0005465604 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct  2 04:37:30 np0005465604 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005c.scope: Consumed 3.148s CPU time.
Oct  2 04:37:30 np0005465604 systemd-machined[214636]: Machine qemu-112-instance-0000005c terminated.
Oct  2 04:37:30 np0005465604 systemd[1]: Started libpod-conmon-7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade.scope.
Oct  2 04:37:30 np0005465604 podman[350530]: 2025-10-02 08:37:30.776635406 +0000 UTC m=+0.035941021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:37:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:37:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a6d437798136b24479957da6ad03829e71e4a800fb19aae4309c06abcdffc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a6d437798136b24479957da6ad03829e71e4a800fb19aae4309c06abcdffc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a6d437798136b24479957da6ad03829e71e4a800fb19aae4309c06abcdffc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a6d437798136b24479957da6ad03829e71e4a800fb19aae4309c06abcdffc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:30 np0005465604 podman[350530]: 2025-10-02 08:37:30.949838002 +0000 UTC m=+0.209143667 container init 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 04:37:30 np0005465604 podman[350530]: 2025-10-02 08:37:30.959313427 +0000 UTC m=+0.218619042 container start 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:37:30 np0005465604 podman[350530]: 2025-10-02 08:37:30.965277616 +0000 UTC m=+0.224583241 container attach 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.995 2 INFO nova.virt.libvirt.driver [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance destroyed successfully.#033[00m
Oct  2 04:37:30 np0005465604 nova_compute[260603]: 2025-10-02 08:37:30.996 2 DEBUG nova.objects.instance [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lazy-loading 'resources' on Instance uuid 496c1bb3-a098-41ab-ac67-5a6a89a0de53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 215 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 5.3 MiB/s wr, 240 op/s
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.205 2 DEBUG nova.compute.manager [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.207 2 DEBUG oslo_concurrency.lockutils [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.207 2 DEBUG oslo_concurrency.lockutils [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.207 2 DEBUG oslo_concurrency.lockutils [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.208 2 DEBUG nova.compute.manager [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] No waiting events found dispatching network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.208 2 WARNING nova.compute.manager [req-7261d9ff-f01e-43e3-bb83-9e3f8a5bee25 req-872c95b1-44a4-44dc-973d-bba1f4f258ed 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received unexpected event network-vif-plugged-d38707db-5f0b-4ebe-80b9-ed84810b2c21 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.379 2 INFO nova.virt.libvirt.driver [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deleting instance files /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53_del#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.380 2 INFO nova.virt.libvirt.driver [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deletion of /var/lib/nova/instances/496c1bb3-a098-41ab-ac67-5a6a89a0de53_del complete#033[00m
Oct  2 04:37:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct  2 04:37:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct  2 04:37:31 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.473 2 INFO nova.compute.manager [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.475 2 DEBUG oslo.service.loopingcall [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.475 2 DEBUG nova.compute.manager [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.476 2 DEBUG nova.network.neutron [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.683 2 DEBUG nova.network.neutron [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.711 2 DEBUG nova.network.neutron [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.737 2 INFO nova.compute.manager [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Took 0.26 seconds to deallocate network for instance.#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.801 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.801 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]: {
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:    "0": [
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:        {
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "devices": [
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "/dev/loop3"
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            ],
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_name": "ceph_lv0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_size": "21470642176",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "name": "ceph_lv0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "tags": {
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.cluster_name": "ceph",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.crush_device_class": "",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.encrypted": "0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.osd_id": "0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.type": "block",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.vdo": "0"
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            },
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "type": "block",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "vg_name": "ceph_vg0"
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:        }
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:    ],
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:    "1": [
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:        {
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "devices": [
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "/dev/loop4"
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            ],
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_name": "ceph_lv1",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_size": "21470642176",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "name": "ceph_lv1",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "tags": {
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.cluster_name": "ceph",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.crush_device_class": "",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.encrypted": "0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.osd_id": "1",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.type": "block",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.vdo": "0"
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            },
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "type": "block",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "vg_name": "ceph_vg1"
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:        }
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:    ],
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:    "2": [
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:        {
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "devices": [
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "/dev/loop5"
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            ],
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_name": "ceph_lv2",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_size": "21470642176",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "name": "ceph_lv2",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "tags": {
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.cluster_name": "ceph",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.crush_device_class": "",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.encrypted": "0",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.osd_id": "2",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.type": "block",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:                "ceph.vdo": "0"
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            },
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "type": "block",
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:            "vg_name": "ceph_vg2"
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:        }
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]:    ]
Oct  2 04:37:31 np0005465604 interesting_clarke[350546]: }
Oct  2 04:37:31 np0005465604 systemd[1]: libpod-7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade.scope: Deactivated successfully.
Oct  2 04:37:31 np0005465604 podman[350530]: 2025-10-02 08:37:31.884673769 +0000 UTC m=+1.143979364 container died 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:37:31 np0005465604 nova_compute[260603]: 2025-10-02 08:37:31.889 2 DEBUG oslo_concurrency.processutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d8a6d437798136b24479957da6ad03829e71e4a800fb19aae4309c06abcdffc6-merged.mount: Deactivated successfully.
Oct  2 04:37:31 np0005465604 podman[350530]: 2025-10-02 08:37:31.975653223 +0000 UTC m=+1.234958798 container remove 7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 04:37:31 np0005465604 systemd[1]: libpod-conmon-7b0bbe1de230d5d8e03db66248a23a3514c61875b0c8c31e212662a9d113fade.scope: Deactivated successfully.
Oct  2 04:37:32 np0005465604 podman[350576]: 2025-10-02 08:37:32.051032589 +0000 UTC m=+0.124101691 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:37:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct  2 04:37:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct  2 04:37:32 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct  2 04:37:32 np0005465604 nova_compute[260603]: 2025-10-02 08:37:32.246 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:37:32 np0005465604 nova_compute[260603]: 2025-10-02 08:37:32.247 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:37:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:37:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389264728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:37:32 np0005465604 nova_compute[260603]: 2025-10-02 08:37:32.357 2 DEBUG oslo_concurrency.processutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:32 np0005465604 nova_compute[260603]: 2025-10-02 08:37:32.368 2 DEBUG nova.compute.provider_tree [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:37:32 np0005465604 nova_compute[260603]: 2025-10-02 08:37:32.393 2 DEBUG nova.scheduler.client.report [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:37:32 np0005465604 nova_compute[260603]: 2025-10-02 08:37:32.423 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:32 np0005465604 nova_compute[260603]: 2025-10-02 08:37:32.450 2 INFO nova.scheduler.client.report [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Deleted allocations for instance 496c1bb3-a098-41ab-ac67-5a6a89a0de53#033[00m
Oct  2 04:37:32 np0005465604 nova_compute[260603]: 2025-10-02 08:37:32.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:37:32 np0005465604 nova_compute[260603]: 2025-10-02 08:37:32.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:32 np0005465604 nova_compute[260603]: 2025-10-02 08:37:32.551 2 DEBUG oslo_concurrency.lockutils [None req-e0203612-85cb-4676-99ab-5e66db8c2a7d 9aaa9a9ec2564ed3a346216a96231feb 850f950eaa1d49239f8913dfee5dc44e - - default default] Lock "496c1bb3-a098-41ab-ac67-5a6a89a0de53" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:32 np0005465604 podman[350769]: 2025-10-02 08:37:32.752951766 +0000 UTC m=+0.058610392 container create 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:37:32 np0005465604 systemd[1]: Started libpod-conmon-2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2.scope.
Oct  2 04:37:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:37:32 np0005465604 podman[350769]: 2025-10-02 08:37:32.725440359 +0000 UTC m=+0.031099015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:37:32 np0005465604 podman[350769]: 2025-10-02 08:37:32.831051123 +0000 UTC m=+0.136709759 container init 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 04:37:32 np0005465604 podman[350769]: 2025-10-02 08:37:32.843429625 +0000 UTC m=+0.149088271 container start 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:37:32 np0005465604 podman[350769]: 2025-10-02 08:37:32.847350313 +0000 UTC m=+0.153008969 container attach 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:37:32 np0005465604 serene_satoshi[350785]: 167 167
Oct  2 04:37:32 np0005465604 systemd[1]: libpod-2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2.scope: Deactivated successfully.
Oct  2 04:37:32 np0005465604 podman[350769]: 2025-10-02 08:37:32.854074375 +0000 UTC m=+0.159733041 container died 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:37:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-dca145ce764075954a7b3b63b2154e4b1287168fae6fac3162a2877c75f72546-merged.mount: Deactivated successfully.
Oct  2 04:37:32 np0005465604 podman[350769]: 2025-10-02 08:37:32.893222342 +0000 UTC m=+0.198880968 container remove 2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_satoshi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:37:32 np0005465604 systemd[1]: libpod-conmon-2b79b681c6e03eafcdfbe774b68298cf58a45d59bfee24befd84d794dc4a4ba2.scope: Deactivated successfully.
Oct  2 04:37:33 np0005465604 podman[350811]: 2025-10-02 08:37:33.095389098 +0000 UTC m=+0.048194599 container create d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 04:37:33 np0005465604 systemd[1]: Started libpod-conmon-d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6.scope.
Oct  2 04:37:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:37:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 169 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 5.5 MiB/s wr, 544 op/s
Oct  2 04:37:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bed1c61775948368e50fbd0ed7cb4ada3bfadda67f9d1bf2717941d216769587/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bed1c61775948368e50fbd0ed7cb4ada3bfadda67f9d1bf2717941d216769587/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bed1c61775948368e50fbd0ed7cb4ada3bfadda67f9d1bf2717941d216769587/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bed1c61775948368e50fbd0ed7cb4ada3bfadda67f9d1bf2717941d216769587/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:33 np0005465604 podman[350811]: 2025-10-02 08:37:33.073392747 +0000 UTC m=+0.026198278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:37:33 np0005465604 podman[350811]: 2025-10-02 08:37:33.185088184 +0000 UTC m=+0.137893705 container init d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 04:37:33 np0005465604 podman[350811]: 2025-10-02 08:37:33.196007472 +0000 UTC m=+0.148812973 container start d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 04:37:33 np0005465604 podman[350811]: 2025-10-02 08:37:33.199164827 +0000 UTC m=+0.151970348 container attach d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:37:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct  2 04:37:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct  2 04:37:33 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.570 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.571 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.572 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.572 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.573 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.575 2 INFO nova.compute.manager [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Terminating instance#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.577 2 DEBUG nova.compute.manager [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:37:33 np0005465604 kernel: tapd38707db-5f (unregistering): left promiscuous mode
Oct  2 04:37:33 np0005465604 NetworkManager[45129]: <info>  [1759394253.6201] device (tapd38707db-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:33 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:33Z|00906|binding|INFO|Releasing lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 from this chassis (sb_readonly=0)
Oct  2 04:37:33 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:33Z|00907|binding|INFO|Setting lport d38707db-5f0b-4ebe-80b9-ed84810b2c21 down in Southbound
Oct  2 04:37:33 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:33Z|00908|binding|INFO|Removing iface tapd38707db-5f ovn-installed in OVS
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.690 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e4:85 10.100.0.10'], port_security=['fa:16:3e:fe:e4:85 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7392e1c1-40db-4b57-8ed8-278f89402f65', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd6efb5fb-5780-4bec-a02c-71fa342fd128', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d38707db-5f0b-4ebe-80b9-ed84810b2c21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.691 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d38707db-5f0b-4ebe-80b9-ed84810b2c21 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.692 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.708 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2162cd-54fb-414b-b845-adb864aedb78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:33 np0005465604 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct  2 04:37:33 np0005465604 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005b.scope: Consumed 4.606s CPU time.
Oct  2 04:37:33 np0005465604 systemd-machined[214636]: Machine qemu-113-instance-0000005b terminated.
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.733 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[821c5697-508e-453a-aad0-a904712eb3a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.735 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[beccf5e2-3c1c-43df-b1fd-a53055d96599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.760 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5f072fe4-65d0-4b98-863d-2edbb07064a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.775 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d577d7a8-5d99-435d-9360-e00813d2759a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505581, 'reachable_time': 28024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350844, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.789 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2874b4-58d8-419e-8fa8-0da56b42371f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505595, 'tstamp': 505595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350845, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74f187c2-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505598, 'tstamp': 505598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350845, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.790 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:33 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:33.797 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.821 2 INFO nova.virt.libvirt.driver [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Instance destroyed successfully.#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.821 2 DEBUG nova.objects.instance [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'resources' on Instance uuid 7392e1c1-40db-4b57-8ed8-278f89402f65 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.838 2 DEBUG nova.virt.libvirt.vif [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:36:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1274366950',display_name='tempest-ServerActionsTestJSON-server-35828613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1274366950',id=91,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:37:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-14rph60n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',i
mage_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:37:30Z,user_data=None,user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=7392e1c1-40db-4b57-8ed8-278f89402f65,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.838 2 DEBUG nova.network.os_vif_util [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "address": "fa:16:3e:fe:e4:85", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38707db-5f", "ovs_interfaceid": "d38707db-5f0b-4ebe-80b9-ed84810b2c21", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.839 2 DEBUG nova.network.os_vif_util [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.840 2 DEBUG os_vif [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.842 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd38707db-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:37:33 np0005465604 nova_compute[260603]: 2025-10-02 08:37:33.848 2 INFO os_vif [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e4:85,bridge_name='br-int',has_traffic_filtering=True,id=d38707db-5f0b-4ebe-80b9-ed84810b2c21,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38707db-5f')#033[00m
Oct  2 04:37:34 np0005465604 nova_compute[260603]: 2025-10-02 08:37:34.149 2 INFO nova.virt.libvirt.driver [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deleting instance files /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65_del#033[00m
Oct  2 04:37:34 np0005465604 nova_compute[260603]: 2025-10-02 08:37:34.150 2 INFO nova.virt.libvirt.driver [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deletion of /var/lib/nova/instances/7392e1c1-40db-4b57-8ed8-278f89402f65_del complete#033[00m
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]: {
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "osd_id": 2,
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "type": "bluestore"
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:    },
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "osd_id": 1,
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "type": "bluestore"
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:    },
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "osd_id": 0,
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:        "type": "bluestore"
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]:    }
Oct  2 04:37:34 np0005465604 gracious_rhodes[350828]: }
Oct  2 04:37:34 np0005465604 nova_compute[260603]: 2025-10-02 08:37:34.235 2 INFO nova.compute.manager [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:37:34 np0005465604 nova_compute[260603]: 2025-10-02 08:37:34.235 2 DEBUG oslo.service.loopingcall [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:37:34 np0005465604 nova_compute[260603]: 2025-10-02 08:37:34.236 2 DEBUG nova.compute.manager [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:37:34 np0005465604 nova_compute[260603]: 2025-10-02 08:37:34.237 2 DEBUG nova.network.neutron [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:37:34 np0005465604 systemd[1]: libpod-d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6.scope: Deactivated successfully.
Oct  2 04:37:34 np0005465604 systemd[1]: libpod-d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6.scope: Consumed 1.005s CPU time.
Oct  2 04:37:34 np0005465604 podman[350906]: 2025-10-02 08:37:34.297161298 +0000 UTC m=+0.034565170 container died d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  2 04:37:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bed1c61775948368e50fbd0ed7cb4ada3bfadda67f9d1bf2717941d216769587-merged.mount: Deactivated successfully.
Oct  2 04:37:34 np0005465604 podman[350905]: 2025-10-02 08:37:34.350256164 +0000 UTC m=+0.072009865 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:37:34 np0005465604 podman[350906]: 2025-10-02 08:37:34.357302306 +0000 UTC m=+0.094706098 container remove d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rhodes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:37:34 np0005465604 systemd[1]: libpod-conmon-d4bf59be0ca5c9bdb3471aacbede4632efab25d2f7bd2c6184a4c69d1e1f39f6.scope: Deactivated successfully.
Oct  2 04:37:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:37:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct  2 04:37:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:37:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:37:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct  2 04:37:34 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct  2 04:37:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:37:34 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 443f6d8c-4c0f-4150-b827-96866203a4f6 does not exist
Oct  2 04:37:34 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a3478dbf-b48e-46e5-a8d7-4f200d03c11b does not exist
Oct  2 04:37:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:34.824 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:34 np0005465604 nova_compute[260603]: 2025-10-02 08:37:34.966 2 DEBUG nova.network.neutron [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:37:34 np0005465604 nova_compute[260603]: 2025-10-02 08:37:34.999 2 INFO nova.compute.manager [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Took 0.76 seconds to deallocate network for instance.#033[00m
Oct  2 04:37:35 np0005465604 nova_compute[260603]: 2025-10-02 08:37:35.075 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:35 np0005465604 nova_compute[260603]: 2025-10-02 08:37:35.076 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:35 np0005465604 nova_compute[260603]: 2025-10-02 08:37:35.150 2 DEBUG nova.compute.manager [req-6a0314f8-4df6-4ee6-9e40-975e2bb4bc36 req-14a01032-6603-41d6-84ac-adb778c87965 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Received event network-vif-deleted-d38707db-5f0b-4ebe-80b9-ed84810b2c21 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:35 np0005465604 nova_compute[260603]: 2025-10-02 08:37:35.154 2 DEBUG oslo_concurrency.processutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 169 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 73 KiB/s wr, 584 op/s
Oct  2 04:37:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:37:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:37:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct  2 04:37:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct  2 04:37:35 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct  2 04:37:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:37:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336699256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:37:35 np0005465604 nova_compute[260603]: 2025-10-02 08:37:35.629 2 DEBUG oslo_concurrency.processutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:35 np0005465604 nova_compute[260603]: 2025-10-02 08:37:35.639 2 DEBUG nova.compute.provider_tree [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:37:35 np0005465604 nova_compute[260603]: 2025-10-02 08:37:35.664 2 DEBUG nova.scheduler.client.report [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:37:35 np0005465604 nova_compute[260603]: 2025-10-02 08:37:35.692 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:35 np0005465604 nova_compute[260603]: 2025-10-02 08:37:35.715 2 INFO nova.scheduler.client.report [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Deleted allocations for instance 7392e1c1-40db-4b57-8ed8-278f89402f65#033[00m
Oct  2 04:37:35 np0005465604 nova_compute[260603]: 2025-10-02 08:37:35.806 2 DEBUG oslo_concurrency.lockutils [None req-b03e8aae-e9f4-40d9-afaa-26ed34021261 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "7392e1c1-40db-4b57-8ed8-278f89402f65" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct  2 04:37:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct  2 04:37:36 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct  2 04:37:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 163 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 11 KiB/s wr, 130 op/s
Oct  2 04:37:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001020440088396171 of space, bias 1.0, pg target 0.30613202651885135 quantized to 32 (current 32)
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670665250374422 of space, bias 1.0, pg target 0.20011995751123268 quantized to 32 (current 32)
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:37:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:37:38 np0005465604 nova_compute[260603]: 2025-10-02 08:37:38.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:38 np0005465604 nova_compute[260603]: 2025-10-02 08:37:38.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:39 np0005465604 nova_compute[260603]: 2025-10-02 08:37:39.142 2 DEBUG oslo_concurrency.lockutils [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:39 np0005465604 nova_compute[260603]: 2025-10-02 08:37:39.142 2 DEBUG oslo_concurrency.lockutils [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:39 np0005465604 nova_compute[260603]: 2025-10-02 08:37:39.143 2 DEBUG nova.compute.manager [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:39 np0005465604 nova_compute[260603]: 2025-10-02 08:37:39.148 2 DEBUG nova.compute.manager [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 04:37:39 np0005465604 nova_compute[260603]: 2025-10-02 08:37:39.149 2 DEBUG nova.objects.instance [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'flavor' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 123 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 15 KiB/s wr, 160 op/s
Oct  2 04:37:39 np0005465604 nova_compute[260603]: 2025-10-02 08:37:39.178 2 DEBUG nova.virt.libvirt.driver [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:37:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct  2 04:37:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct  2 04:37:39 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct  2 04:37:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct  2 04:37:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct  2 04:37:40 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct  2 04:37:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 123 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 15 KiB/s wr, 161 op/s
Oct  2 04:37:41 np0005465604 kernel: tap961da5ba-b0 (unregistering): left promiscuous mode
Oct  2 04:37:41 np0005465604 NetworkManager[45129]: <info>  [1759394261.4106] device (tap961da5ba-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:37:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:41Z|00909|binding|INFO|Releasing lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 from this chassis (sb_readonly=0)
Oct  2 04:37:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:41Z|00910|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 down in Southbound
Oct  2 04:37:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:41Z|00911|binding|INFO|Removing iface tap961da5ba-b0 ovn-installed in OVS
Oct  2 04:37:41 np0005465604 nova_compute[260603]: 2025-10-02 08:37:41.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.425 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.427 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.428 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74f187c2-780c-418d-98eb-b25294872ab0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.430 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a15901dc-8a63-400b-bc92-e63989db388f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.430 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace which is not needed anymore#033[00m
Oct  2 04:37:41 np0005465604 nova_compute[260603]: 2025-10-02 08:37:41.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:41 np0005465604 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct  2 04:37:41 np0005465604 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000055.scope: Consumed 16.521s CPU time.
Oct  2 04:37:41 np0005465604 systemd-machined[214636]: Machine qemu-108-instance-00000055 terminated.
Oct  2 04:37:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct  2 04:37:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct  2 04:37:41 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct  2 04:37:41 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [NOTICE]   (346641) : haproxy version is 2.8.14-c23fe91
Oct  2 04:37:41 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [NOTICE]   (346641) : path to executable is /usr/sbin/haproxy
Oct  2 04:37:41 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [WARNING]  (346641) : Exiting Master process...
Oct  2 04:37:41 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [ALERT]    (346641) : Current worker (346652) exited with code 143 (Terminated)
Oct  2 04:37:41 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[346621]: [WARNING]  (346641) : All workers exited. Exiting... (0)
Oct  2 04:37:41 np0005465604 systemd[1]: libpod-7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd.scope: Deactivated successfully.
Oct  2 04:37:41 np0005465604 podman[351037]: 2025-10-02 08:37:41.621502308 +0000 UTC m=+0.055106148 container died 7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:37:41 np0005465604 nova_compute[260603]: 2025-10-02 08:37:41.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:41 np0005465604 nova_compute[260603]: 2025-10-02 08:37:41.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd-userdata-shm.mount: Deactivated successfully.
Oct  2 04:37:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b3ae6c2f20a3b91c5f145f721853a053d0238e20c8cff6c32cd1844fef5ff94f-merged.mount: Deactivated successfully.
Oct  2 04:37:41 np0005465604 podman[351037]: 2025-10-02 08:37:41.668655394 +0000 UTC m=+0.102259234 container cleanup 7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:37:41 np0005465604 systemd[1]: libpod-conmon-7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd.scope: Deactivated successfully.
Oct  2 04:37:41 np0005465604 podman[351075]: 2025-10-02 08:37:41.750851405 +0000 UTC m=+0.049929622 container remove 7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.764 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[73ba634b-aea0-4a72-827d-351270be1954]: (4, ('Thu Oct  2 08:37:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd)\n7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd\nThu Oct  2 08:37:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd)\n7817537c2dda061ece4640d7c251d403d63ccd912c9fa9a04bfa775f737711cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.766 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5a28dd97-c062-4f93-934f-d22f71a2d170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.767 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:41 np0005465604 nova_compute[260603]: 2025-10-02 08:37:41.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:41 np0005465604 kernel: tap74f187c2-70: left promiscuous mode
Oct  2 04:37:41 np0005465604 nova_compute[260603]: 2025-10-02 08:37:41.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.797 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[402a0f93-d8c6-4972-8d00-fcb4b77edaa2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.829 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[527c3bde-5aaa-43f0-a002-3cd8e00884b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.830 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3d19bb-f061-4e92-88a0-b54e739825d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.847 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[00cd76f0-47f0-4082-b4a1-23a1803c804b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505571, 'reachable_time': 20585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351095, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.850 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:37:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:41.850 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb67c22-d2f9-4f89-8d36-d10602c1c1f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:41 np0005465604 systemd[1]: run-netns-ovnmeta\x2d74f187c2\x2d780c\x2d418d\x2d98eb\x2db25294872ab0.mount: Deactivated successfully.
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.045 2 DEBUG nova.compute.manager [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.046 2 DEBUG oslo_concurrency.lockutils [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.046 2 DEBUG oslo_concurrency.lockutils [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.046 2 DEBUG oslo_concurrency.lockutils [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.046 2 DEBUG nova.compute.manager [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.046 2 WARNING nova.compute.manager [req-8f5c0a48-c155-4569-9870-103b13aebee5 req-96a0d589-159f-4f66-b3d7-5c148486dba0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state powering-off.#033[00m
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.198 2 INFO nova.virt.libvirt.driver [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.203 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance destroyed successfully.#033[00m
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.203 2 DEBUG nova.objects.instance [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'numa_topology' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.222 2 DEBUG nova.compute.manager [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.279 2 DEBUG oslo_concurrency.lockutils [None req-42bf42d0-c42a-4c15-8d2a-27f02e6f9947 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct  2 04:37:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct  2 04:37:42 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct  2 04:37:42 np0005465604 nova_compute[260603]: 2025-10-02 08:37:42.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 123 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 30 KiB/s wr, 217 op/s
Oct  2 04:37:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct  2 04:37:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct  2 04:37:43 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct  2 04:37:43 np0005465604 nova_compute[260603]: 2025-10-02 08:37:43.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:43 np0005465604 nova_compute[260603]: 2025-10-02 08:37:43.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:43 np0005465604 nova_compute[260603]: 2025-10-02 08:37:43.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:43 np0005465604 nova_compute[260603]: 2025-10-02 08:37:43.923 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'flavor' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:43 np0005465604 nova_compute[260603]: 2025-10-02 08:37:43.955 2 DEBUG oslo_concurrency.lockutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:37:43 np0005465604 nova_compute[260603]: 2025-10-02 08:37:43.956 2 DEBUG oslo_concurrency.lockutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:37:43 np0005465604 nova_compute[260603]: 2025-10-02 08:37:43.957 2 DEBUG nova.network.neutron [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:37:43 np0005465604 nova_compute[260603]: 2025-10-02 08:37:43.957 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'info_cache' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:44 np0005465604 nova_compute[260603]: 2025-10-02 08:37:44.141 2 DEBUG nova.compute.manager [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:44 np0005465604 nova_compute[260603]: 2025-10-02 08:37:44.142 2 DEBUG oslo_concurrency.lockutils [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:44 np0005465604 nova_compute[260603]: 2025-10-02 08:37:44.142 2 DEBUG oslo_concurrency.lockutils [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:44 np0005465604 nova_compute[260603]: 2025-10-02 08:37:44.142 2 DEBUG oslo_concurrency.lockutils [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:44 np0005465604 nova_compute[260603]: 2025-10-02 08:37:44.142 2 DEBUG nova.compute.manager [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:37:44 np0005465604 nova_compute[260603]: 2025-10-02 08:37:44.143 2 WARNING nova.compute.manager [req-8f0cd429-d06a-407a-b5d6-d6b0f03a9806 req-3506c745-ec78-424c-ac0e-1003b80447fc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  2 04:37:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct  2 04:37:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct  2 04:37:44 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct  2 04:37:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 123 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 30 KiB/s wr, 217 op/s
Oct  2 04:37:45 np0005465604 nova_compute[260603]: 2025-10-02 08:37:45.993 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394250.9924216, 496c1bb3-a098-41ab-ac67-5a6a89a0de53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:45 np0005465604 nova_compute[260603]: 2025-10-02 08:37:45.994 2 INFO nova.compute.manager [-] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:37:46 np0005465604 nova_compute[260603]: 2025-10-02 08:37:46.040 2 DEBUG nova.compute.manager [None req-5593822c-8a4d-4219-b08f-e6bf1280780f - - - - - -] [instance: 496c1bb3-a098-41ab-ac67-5a6a89a0de53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:46 np0005465604 nova_compute[260603]: 2025-10-02 08:37:46.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:46 np0005465604 nova_compute[260603]: 2025-10-02 08:37:46.988 2 DEBUG nova.network.neutron [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.013 2 DEBUG oslo_concurrency.lockutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.042 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance destroyed successfully.#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.043 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'numa_topology' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.060 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'resources' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.077 2 DEBUG nova.virt.libvirt.vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:37:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.078 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.079 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.080 2 DEBUG os_vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap961da5ba-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.091 2 INFO os_vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.101 2 DEBUG nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Start _get_guest_xml network_info=[{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.111 2 WARNING nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.118 2 DEBUG nova.virt.libvirt.host [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.119 2 DEBUG nova.virt.libvirt.host [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.123 2 DEBUG nova.virt.libvirt.host [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.124 2 DEBUG nova.virt.libvirt.host [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.125 2 DEBUG nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.125 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.126 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.127 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.127 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.128 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.129 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.129 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.130 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.131 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.131 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.132 2 DEBUG nova.virt.hardware [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.133 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'vcpu_model' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.150 2 DEBUG oslo_concurrency.processutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 21 KiB/s wr, 176 op/s
Oct  2 04:37:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct  2 04:37:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct  2 04:37:47 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct  2 04:37:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:37:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3736315215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.651 2 DEBUG oslo_concurrency.processutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:47 np0005465604 nova_compute[260603]: 2025-10-02 08:37:47.696 2 DEBUG oslo_concurrency.processutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:37:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532120028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.147 2 DEBUG oslo_concurrency.processutils [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.150 2 DEBUG nova.virt.libvirt.vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:37:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.150 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.151 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.153 2 DEBUG nova.objects.instance [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.176 2 DEBUG nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  <uuid>ba2cf934-ce76-4de7-a495-285f144bdab7</uuid>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  <name>instance-00000055</name>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerActionsTestJSON-server-276449458</nova:name>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:37:47</nova:creationTime>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <nova:user uuid="bb1b3a5ae9514259b27a0b7a28f23cda">tempest-ServerActionsTestJSON-1407264397-project-member</nova:user>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <nova:project uuid="b43ebc87104041aba179e47c5e6ecc5f">tempest-ServerActionsTestJSON-1407264397</nova:project>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <nova:port uuid="961da5ba-b0ac-4a87-a74c-26d0d2d2bf50">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <entry name="serial">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <entry name="uuid">ba2cf934-ce76-4de7-a495-285f144bdab7</entry>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ba2cf934-ce76-4de7-a495-285f144bdab7_disk.config">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:bb:af:04"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <target dev="tap961da5ba-b0"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7/console.log" append="off"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <input type="keyboard" bus="usb"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:37:48 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:37:48 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:37:48 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:37:48 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.178 2 DEBUG nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.178 2 DEBUG nova.virt.libvirt.driver [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.179 2 DEBUG nova.virt.libvirt.vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:37:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.180 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.180 2 DEBUG nova.network.os_vif_util [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.181 2 DEBUG os_vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.182 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap961da5ba-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap961da5ba-b0, col_values=(('external_ids', {'iface-id': '961da5ba-b0ac-4a87-a74c-26d0d2d2bf50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:af:04', 'vm-uuid': 'ba2cf934-ce76-4de7-a495-285f144bdab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:48 np0005465604 NetworkManager[45129]: <info>  [1759394268.1897] manager: (tap961da5ba-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.194 2 INFO os_vif [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')#033[00m
Oct  2 04:37:48 np0005465604 kernel: tap961da5ba-b0: entered promiscuous mode
Oct  2 04:37:48 np0005465604 NetworkManager[45129]: <info>  [1759394268.2807] manager: (tap961da5ba-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/363)
Oct  2 04:37:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:48Z|00912|binding|INFO|Claiming lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for this chassis.
Oct  2 04:37:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:48Z|00913|binding|INFO|961da5ba-b0ac-4a87-a74c-26d0d2d2bf50: Claiming fa:16:3e:bb:af:04 10.100.0.8
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.291 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.292 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.294 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0#033[00m
Oct  2 04:37:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:48Z|00914|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 ovn-installed in OVS
Oct  2 04:37:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:48Z|00915|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 up in Southbound
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.305 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[374c73d4-b76f-418e-a729-834eab29eea6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.306 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74f187c2-71 in ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.310 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74f187c2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.310 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[291ace97-7082-4582-b631-16d5255f400f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.311 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7d8223-b469-477c-9e89-f89716179c65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:48 np0005465604 systemd-udevd[351175]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.322 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1a219caf-1341-4325-88da-5fb0e2648cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 systemd-machined[214636]: New machine qemu-114-instance-00000055.
Oct  2 04:37:48 np0005465604 NetworkManager[45129]: <info>  [1759394268.3322] device (tap961da5ba-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:37:48 np0005465604 NetworkManager[45129]: <info>  [1759394268.3330] device (tap961da5ba-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:37:48 np0005465604 systemd[1]: Started Virtual Machine qemu-114-instance-00000055.
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.345 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e7c006-e16a-40b9-9e84-24ca0ccee3c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.373 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d73d2187-c9db-4805-8c13-17b7404a04f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 systemd-udevd[351178]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:37:48 np0005465604 NetworkManager[45129]: <info>  [1759394268.3856] manager: (tap74f187c2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/364)
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.386 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd06fdd-16cb-4eac-815c-6c0a21834df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.416 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[85c5a2d2-d263-4a25-84da-b8b9a9ac471f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.420 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a837c333-959f-4c34-99e8-52fa48e3f127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 NetworkManager[45129]: <info>  [1759394268.4399] device (tap74f187c2-70): carrier: link connected
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.444 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4eeadc-dfa8-4c30-ab2c-ba124cec1f64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.461 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6995bb1e-4913-439d-8a44-04f6977554d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514704, 'reachable_time': 30969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351206, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.476 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[81fc2d1c-3651-4c44-bd2a-7d0d8ed92268]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:7f62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514704, 'tstamp': 514704}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351207, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.498 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0f525473-097e-472b-bce4-b5e23e0c9e12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514704, 'reachable_time': 30969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351208, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.535 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[985fe9cd-4ff9-4d9d-a93c-6a3bbc25d3e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.615 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ef73a8-c506-48cc-b3d0-a32803da2500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.616 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.617 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.617 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:48 np0005465604 NetworkManager[45129]: <info>  [1759394268.6608] manager: (tap74f187c2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Oct  2 04:37:48 np0005465604 kernel: tap74f187c2-70: entered promiscuous mode
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.670 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:48Z|00916|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.674 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.675 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5f655b-0b5f-4650-bbc4-abadffb92f7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.676 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-74f187c2-780c-418d-98eb-b25294872ab0
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 74f187c2-780c-418d-98eb-b25294872ab0
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:37:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:48.677 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'env', 'PROCESS_TAG=haproxy-74f187c2-780c-418d-98eb-b25294872ab0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74f187c2-780c-418d-98eb-b25294872ab0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.820 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394253.8185918, 7392e1c1-40db-4b57-8ed8-278f89402f65 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.820 2 INFO nova.compute.manager [-] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:37:48 np0005465604 nova_compute[260603]: 2025-10-02 08:37:48.847 2 DEBUG nova.compute.manager [None req-e2513301-b215-46df-b754-814b56162aa1 - - - - - -] [instance: 7392e1c1-40db-4b57-8ed8-278f89402f65] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:49 np0005465604 podman[351241]: 2025-10-02 08:37:49.032938703 +0000 UTC m=+0.064723206 container create 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:37:49 np0005465604 systemd[1]: Started libpod-conmon-694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d.scope.
Oct  2 04:37:49 np0005465604 podman[351241]: 2025-10-02 08:37:49.002159178 +0000 UTC m=+0.033943751 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:37:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:37:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12325eae80981c183471de2cbf2c2a6d257a59fa83ba595ae5891a5266d01cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:37:49 np0005465604 podman[351241]: 2025-10-02 08:37:49.126252688 +0000 UTC m=+0.158037231 container init 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 04:37:49 np0005465604 podman[351241]: 2025-10-02 08:37:49.131553857 +0000 UTC m=+0.163338360 container start 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 04:37:49 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [NOTICE]   (351303) : New worker (351305) forked
Oct  2 04:37:49 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [NOTICE]   (351303) : Loading success.
Oct  2 04:37:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 5.5 KiB/s wr, 112 op/s
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.284 2 DEBUG nova.compute.manager [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.286 2 DEBUG oslo_concurrency.lockutils [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.286 2 DEBUG oslo_concurrency.lockutils [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.287 2 DEBUG oslo_concurrency.lockutils [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.287 2 DEBUG nova.compute.manager [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.288 2 WARNING nova.compute.manager [req-d71d85ac-1c6a-4969-b5a2-1f705d46043a req-ce328010-5903-4086-a57a-6783b89f5aac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  2 04:37:49 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:49Z|00917|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.510 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for ba2cf934-ce76-4de7-a495-285f144bdab7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.510 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394269.5089588, ba2cf934-ce76-4de7-a495-285f144bdab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.510 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.512 2 DEBUG nova.compute.manager [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.515 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance rebooted successfully.#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.516 2 DEBUG nova.compute.manager [None req-af20d817-9fb0-4e23-ac54-0bd9366bc642 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.559 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.563 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.595 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.596 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394269.5118883, ba2cf934-ce76-4de7-a495-285f144bdab7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.596 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Started (Lifecycle Event)#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.626 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:49 np0005465604 nova_compute[260603]: 2025-10-02 08:37:49.629 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:37:51 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:51Z|00918|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct  2 04:37:51 np0005465604 nova_compute[260603]: 2025-10-02 08:37:51.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 4.3 KiB/s wr, 88 op/s
Oct  2 04:37:52 np0005465604 nova_compute[260603]: 2025-10-02 08:37:52.064 2 DEBUG nova.compute.manager [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:52 np0005465604 nova_compute[260603]: 2025-10-02 08:37:52.065 2 DEBUG oslo_concurrency.lockutils [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:52 np0005465604 nova_compute[260603]: 2025-10-02 08:37:52.065 2 DEBUG oslo_concurrency.lockutils [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:52 np0005465604 nova_compute[260603]: 2025-10-02 08:37:52.066 2 DEBUG oslo_concurrency.lockutils [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:52 np0005465604 nova_compute[260603]: 2025-10-02 08:37:52.066 2 DEBUG nova.compute.manager [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:37:52 np0005465604 nova_compute[260603]: 2025-10-02 08:37:52.067 2 WARNING nova.compute.manager [req-ab0fea8c-12e6-450b-af80-458d52ae0f25 req-d014f8d3-9444-4de9-a01e-48f66861abe6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:37:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct  2 04:37:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct  2 04:37:52 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct  2 04:37:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.2 KiB/s wr, 188 op/s
Oct  2 04:37:53 np0005465604 nova_compute[260603]: 2025-10-02 08:37:53.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:53 np0005465604 nova_compute[260603]: 2025-10-02 08:37:53.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.201 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.201 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.223 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.303 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.304 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.310 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.310 2 INFO nova.compute.claims [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.429 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:37:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4223145461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.866 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.872 2 DEBUG nova.compute.provider_tree [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.894 2 DEBUG nova.scheduler.client.report [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.925 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.926 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.976 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.977 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:37:54 np0005465604 podman[351337]: 2025-10-02 08:37:54.985773888 +0000 UTC m=+0.057333814 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 04:37:54 np0005465604 nova_compute[260603]: 2025-10-02 08:37:54.997 2 INFO nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.015 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:37:55 np0005465604 podman[351336]: 2025-10-02 08:37:55.022171872 +0000 UTC m=+0.094764309 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.152 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.153 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.154 2 INFO nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating image(s)#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.173 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 123 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.1 KiB/s wr, 172 op/s
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.193 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.210 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.213 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.304 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.305 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.306 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.306 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.324 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.326 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.546 2 DEBUG nova.policy [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '15da3bbf2c9f49b68e7a7e0ccd557067', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b85786f28a064d75924559acd4f6137e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.584 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.646 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] resizing rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.754 2 DEBUG nova.objects.instance [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'migration_context' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.790 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.791 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Ensure instance console log exists: /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.792 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.792 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.792 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.799 2 DEBUG nova.objects.instance [None req-8ed03c22-56ca-4879-8127-9e0fcfb5a370 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.827 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394275.8274956, ba2cf934-ce76-4de7-a495-285f144bdab7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.828 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.860 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.863 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:37:55 np0005465604 nova_compute[260603]: 2025-10-02 08:37:55.892 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 04:37:56 np0005465604 kernel: tap961da5ba-b0 (unregistering): left promiscuous mode
Oct  2 04:37:56 np0005465604 NetworkManager[45129]: <info>  [1759394276.2619] device (tap961da5ba-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:37:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:56Z|00919|binding|INFO|Releasing lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 from this chassis (sb_readonly=0)
Oct  2 04:37:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:56Z|00920|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 down in Southbound
Oct  2 04:37:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:37:56Z|00921|binding|INFO|Removing iface tap961da5ba-b0 ovn-installed in OVS
Oct  2 04:37:56 np0005465604 nova_compute[260603]: 2025-10-02 08:37:56.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.276 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '12', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.277 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.278 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74f187c2-780c-418d-98eb-b25294872ab0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.279 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7a22604b-9dbc-4500-b869-a5a3b060cdff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.279 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace which is not needed anymore#033[00m
Oct  2 04:37:56 np0005465604 nova_compute[260603]: 2025-10-02 08:37:56.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:56 np0005465604 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct  2 04:37:56 np0005465604 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000055.scope: Consumed 7.892s CPU time.
Oct  2 04:37:56 np0005465604 systemd-machined[214636]: Machine qemu-114-instance-00000055 terminated.
Oct  2 04:37:56 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [NOTICE]   (351303) : haproxy version is 2.8.14-c23fe91
Oct  2 04:37:56 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [NOTICE]   (351303) : path to executable is /usr/sbin/haproxy
Oct  2 04:37:56 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [WARNING]  (351303) : Exiting Master process...
Oct  2 04:37:56 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [WARNING]  (351303) : Exiting Master process...
Oct  2 04:37:56 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [ALERT]    (351303) : Current worker (351305) exited with code 143 (Terminated)
Oct  2 04:37:56 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351298]: [WARNING]  (351303) : All workers exited. Exiting... (0)
Oct  2 04:37:56 np0005465604 systemd[1]: libpod-694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d.scope: Deactivated successfully.
Oct  2 04:37:56 np0005465604 podman[351575]: 2025-10-02 08:37:56.406732695 +0000 UTC m=+0.044077456 container died 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:37:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d-userdata-shm.mount: Deactivated successfully.
Oct  2 04:37:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f12325eae80981c183471de2cbf2c2a6d257a59fa83ba595ae5891a5266d01cc-merged.mount: Deactivated successfully.
Oct  2 04:37:56 np0005465604 podman[351575]: 2025-10-02 08:37:56.443720027 +0000 UTC m=+0.081064828 container cleanup 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:37:56 np0005465604 nova_compute[260603]: 2025-10-02 08:37:56.458 2 DEBUG nova.compute.manager [None req-8ed03c22-56ca-4879-8127-9e0fcfb5a370 bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:37:56 np0005465604 systemd[1]: libpod-conmon-694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d.scope: Deactivated successfully.
Oct  2 04:37:56 np0005465604 podman[351613]: 2025-10-02 08:37:56.516288578 +0000 UTC m=+0.047558901 container remove 694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.526 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4803f5-eff9-4e4a-a333-3cc8b8882f2f]: (4, ('Thu Oct  2 08:37:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d)\n694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d\nThu Oct  2 08:37:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d)\n694976c214a4547a9147655b7abb559217611dc444a0a376b1cefec9d7e6360d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.528 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e62015-192c-4beb-b9b3-0cb4123d2503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.529 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:37:56 np0005465604 kernel: tap74f187c2-70: left promiscuous mode
Oct  2 04:37:56 np0005465604 nova_compute[260603]: 2025-10-02 08:37:56.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:56 np0005465604 nova_compute[260603]: 2025-10-02 08:37:56.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.552 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[661101d1-ca48-4e20-9d41-60158f8b64da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.584 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[263c85aa-db56-4ce5-a415-d4fdcb5faf50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.585 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[679fd53a-ee06-45b0-89fe-0651c043d70e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.599 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4595b304-09fe-4adf-84cd-88ea667a18c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514697, 'reachable_time': 30369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351635, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:56 np0005465604 systemd[1]: run-netns-ovnmeta\x2d74f187c2\x2d780c\x2d418d\x2d98eb\x2db25294872ab0.mount: Deactivated successfully.
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.601 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:37:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:37:56.601 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0aeba379-e5d4-4e5f-9483-0bb93697920e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:37:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 138 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 566 KiB/s wr, 153 op/s
Oct  2 04:37:57 np0005465604 nova_compute[260603]: 2025-10-02 08:37:57.181 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Successfully created port: 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:37:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:37:58 np0005465604 nova_compute[260603]: 2025-10-02 08:37:58.179 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Successfully updated port: 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:37:58 np0005465604 nova_compute[260603]: 2025-10-02 08:37:58.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:58 np0005465604 nova_compute[260603]: 2025-10-02 08:37:58.197 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:37:58 np0005465604 nova_compute[260603]: 2025-10-02 08:37:58.197 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:37:58 np0005465604 nova_compute[260603]: 2025-10-02 08:37:58.198 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:37:58 np0005465604 nova_compute[260603]: 2025-10-02 08:37:58.522 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:37:58 np0005465604 nova_compute[260603]: 2025-10-02 08:37:58.698 2 DEBUG nova.compute.manager [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:37:58 np0005465604 nova_compute[260603]: 2025-10-02 08:37:58.699 2 DEBUG nova.compute.manager [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing instance network info cache due to event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:37:58 np0005465604 nova_compute[260603]: 2025-10-02 08:37:58.699 2 DEBUG oslo_concurrency.lockutils [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:37:58 np0005465604 nova_compute[260603]: 2025-10-02 08:37:58.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:37:59 np0005465604 nova_compute[260603]: 2025-10-02 08:37:59.019 2 INFO nova.compute.manager [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Resuming#033[00m
Oct  2 04:37:59 np0005465604 nova_compute[260603]: 2025-10-02 08:37:59.021 2 DEBUG nova.objects.instance [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'flavor' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:37:59 np0005465604 nova_compute[260603]: 2025-10-02 08:37:59.072 2 DEBUG oslo_concurrency.lockutils [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:37:59 np0005465604 nova_compute[260603]: 2025-10-02 08:37:59.072 2 DEBUG oslo_concurrency.lockutils [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquired lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:37:59 np0005465604 nova_compute[260603]: 2025-10-02 08:37:59.073 2 DEBUG nova.network.neutron [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:37:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.293 2 DEBUG nova.network.neutron [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.321 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.322 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance network_info: |[{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.322 2 DEBUG oslo_concurrency.lockutils [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.323 2 DEBUG nova.network.neutron [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.326 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start _get_guest_xml network_info=[{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.332 2 WARNING nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.339 2 DEBUG nova.virt.libvirt.host [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.339 2 DEBUG nova.virt.libvirt.host [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.344 2 DEBUG nova.virt.libvirt.host [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.345 2 DEBUG nova.virt.libvirt.host [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.346 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.346 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.347 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.347 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.347 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.348 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.348 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.348 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.348 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.349 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.349 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.349 2 DEBUG nova.virt.hardware [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.352 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.558 2 DEBUG nova.compute.manager [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.559 2 DEBUG oslo_concurrency.lockutils [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.559 2 DEBUG oslo_concurrency.lockutils [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.560 2 DEBUG oslo_concurrency.lockutils [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.560 2 DEBUG nova.compute.manager [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.560 2 WARNING nova.compute.manager [req-81b45cc7-f346-4b6b-8838-fc6aa348b383 req-d7f10036-3ed5-4946-8eb0-8af70db5779c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-unplugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state suspended and task_state resuming.#033[00m
Oct  2 04:38:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:38:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141647230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.807 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.841 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:00 np0005465604 nova_compute[260603]: 2025-10-02 08:38:00.847 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.267 2 DEBUG nova.network.neutron [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [{"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:38:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:38:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1349762954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.285 2 DEBUG oslo_concurrency.lockutils [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Releasing lock "refresh_cache-ba2cf934-ce76-4de7-a495-285f144bdab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.293 2 DEBUG nova.virt.libvirt.vif [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:37:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.293 2 DEBUG nova.network.os_vif_util [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.294 2 DEBUG nova.network.os_vif_util [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.294 2 DEBUG os_vif [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.295 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.298 2 DEBUG nova.virt.libvirt.vif [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:37:55Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.298 2 DEBUG nova.network.os_vif_util [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.299 2 DEBUG nova.network.os_vif_util [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.300 2 DEBUG nova.objects.instance [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_devices' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap961da5ba-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap961da5ba-b0, col_values=(('external_ids', {'iface-id': '961da5ba-b0ac-4a87-a74c-26d0d2d2bf50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:af:04', 'vm-uuid': 'ba2cf934-ce76-4de7-a495-285f144bdab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.305 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.305 2 INFO os_vif [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.315 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  <uuid>fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</uuid>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  <name>instance-0000005d</name>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersNegativeTestJSON-server-1260783938</nova:name>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:38:00</nova:creationTime>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <nova:user uuid="15da3bbf2c9f49b68e7a7e0ccd557067">tempest-ServersNegativeTestJSON-2088330606-project-member</nova:user>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <nova:project uuid="b85786f28a064d75924559acd4f6137e">tempest-ServersNegativeTestJSON-2088330606</nova:project>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <nova:port uuid="00fa1373-e4cc-4245-9cfa-5a58c77aa4eb">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <entry name="serial">fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</entry>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <entry name="uuid">fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</entry>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:11:b9:2b"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <target dev="tap00fa1373-e4"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/console.log" append="off"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:38:01 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:38:01 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:38:01 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:38:01 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.317 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Preparing to wait for external event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.318 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.318 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.319 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.320 2 DEBUG nova.virt.libvirt.vif [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:37:55Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.320 2 DEBUG nova.network.os_vif_util [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.321 2 DEBUG nova.network.os_vif_util [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.322 2 DEBUG os_vif [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.326 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00fa1373-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00fa1373-e4, col_values=(('external_ids', {'iface-id': '00fa1373-e4cc-4245-9cfa-5a58c77aa4eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:b9:2b', 'vm-uuid': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:01 np0005465604 NetworkManager[45129]: <info>  [1759394281.3699] manager: (tap00fa1373-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.373 2 DEBUG nova.objects.instance [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'numa_topology' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.380 2 INFO os_vif [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4')#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.446 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.446 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.447 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No VIF found with MAC fa:16:3e:11:b9:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.447 2 INFO nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Using config drive#033[00m
Oct  2 04:38:01 np0005465604 kernel: tap961da5ba-b0: entered promiscuous mode
Oct  2 04:38:01 np0005465604 NetworkManager[45129]: <info>  [1759394281.4600] manager: (tap961da5ba-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/367)
Oct  2 04:38:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:01Z|00922|binding|INFO|Claiming lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for this chassis.
Oct  2 04:38:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:01Z|00923|binding|INFO|961da5ba-b0ac-4a87-a74c-26d0d2d2bf50: Claiming fa:16:3e:bb:af:04 10.100.0.8
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.472 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '13', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.475 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 bound to our chassis#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.477 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74f187c2-780c-418d-98eb-b25294872ab0#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.489 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.490 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1561d3e-0d55-455e-b461-36a448f8e2ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.491 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74f187c2-71 in ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:38:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:01Z|00924|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 ovn-installed in OVS
Oct  2 04:38:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:01Z|00925|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 up in Southbound
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.493 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74f187c2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.494 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f54f2f81-cdc2-45a3-a222-3761b4fa87b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.495 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[865d4c6a-240f-4e50-ac35-32d2c3924620]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.506 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5b3fc7eb-cdd0-44b7-b68b-37add089ca35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 systemd-machined[214636]: New machine qemu-115-instance-00000055.
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.520 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aac1e3f2-b467-4495-af55-c257692c7e18]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 systemd[1]: Started Virtual Machine qemu-115-instance-00000055.
Oct  2 04:38:01 np0005465604 systemd-udevd[351741]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.550 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[444e36d3-1589-401d-aa80-945404217969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 NetworkManager[45129]: <info>  [1759394281.5542] device (tap961da5ba-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:38:01 np0005465604 NetworkManager[45129]: <info>  [1759394281.5564] manager: (tap74f187c2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/368)
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.555 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[252cfba1-76f4-4d6b-b45c-0f196b76ac3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 NetworkManager[45129]: <info>  [1759394281.5571] device (tap961da5ba-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:38:01 np0005465604 systemd-udevd[351748]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.588 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0e677e58-d838-48d5-9898-e1266c5ebbd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.592 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[03aa6978-0eba-41c2-b0b1-146e1797d0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 NetworkManager[45129]: <info>  [1759394281.6178] device (tap74f187c2-70): carrier: link connected
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.624 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4bbb03-a4e5-422d-97ac-4944a0d53e84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.644 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48b52a0b-ac97-4f53-916d-c24fa3897a08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516022, 'reachable_time': 21099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351772, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.661 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5a4be05c-ea5d-4bda-bb5a-6bcf4b47647f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:7f62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516022, 'tstamp': 516022}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351773, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.679 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6889d2-e175-47c8-a824-4d6658baad36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74f187c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:7f:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516022, 'reachable_time': 21099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351774, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.717 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd285ab-70d7-41e0-8955-e6b2c6f9f4f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.797 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[de709c52-2632-40f3-b6a4-b7ae9b2e8c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.799 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.799 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.800 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f187c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 kernel: tap74f187c2-70: entered promiscuous mode
Oct  2 04:38:01 np0005465604 NetworkManager[45129]: <info>  [1759394281.8030] manager: (tap74f187c2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.809 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74f187c2-70, col_values=(('external_ids', {'iface-id': '8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:01Z|00926|binding|INFO|Releasing lport 8c37c84a-d96b-4eac-a8aa-b5e3ac0a30e9 from this chassis (sb_readonly=0)
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 nova_compute[260603]: 2025-10-02 08:38:01.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.852 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.854 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5402d4-beea-46f0-ad08-76bb72d98718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.857 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-74f187c2-780c-418d-98eb-b25294872ab0
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/74f187c2-780c-418d-98eb-b25294872ab0.pid.haproxy
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 74f187c2-780c-418d-98eb-b25294872ab0
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:38:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:01.859 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'env', 'PROCESS_TAG=haproxy-74f187c2-780c-418d-98eb-b25294872ab0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74f187c2-780c-418d-98eb-b25294872ab0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.172 2 INFO nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating config drive at /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.178 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa1__r0w3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.212 2 DEBUG nova.network.neutron [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updated VIF entry in instance network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.214 2 DEBUG nova.network.neutron [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.231 2 DEBUG oslo_concurrency.lockutils [req-145adb0f-d37f-433d-bf07-2d94f75bb771 req-d0893be9-2148-4848-9c08-2eb909081650 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:38:02 np0005465604 podman[351852]: 2025-10-02 08:38:02.237443642 +0000 UTC m=+0.059575531 container create 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.252817) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282252846, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1751, "num_deletes": 260, "total_data_size": 2505247, "memory_usage": 2550584, "flush_reason": "Manual Compaction"}
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282260817, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1610002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35331, "largest_seqno": 37081, "table_properties": {"data_size": 1603537, "index_size": 3411, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16599, "raw_average_key_size": 21, "raw_value_size": 1589466, "raw_average_value_size": 2050, "num_data_blocks": 152, "num_entries": 775, "num_filter_entries": 775, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394140, "oldest_key_time": 1759394140, "file_creation_time": 1759394282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 8041 microseconds, and 4094 cpu microseconds.
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.260854) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1610002 bytes OK
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.260876) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.263124) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.263143) EVENT_LOG_v1 {"time_micros": 1759394282263137, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.263163) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2497578, prev total WAL file size 2497578, number of live WAL files 2.
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.263918) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1572KB)], [77(9524KB)]
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282263961, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11363023, "oldest_snapshot_seqno": -1}
Oct  2 04:38:02 np0005465604 systemd[1]: Started libpod-conmon-37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f.scope.
Oct  2 04:38:02 np0005465604 podman[351852]: 2025-10-02 08:38:02.20079195 +0000 UTC m=+0.022923889 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6330 keys, 8909469 bytes, temperature: kUnknown
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282299640, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 8909469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8867180, "index_size": 25347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 158961, "raw_average_key_size": 25, "raw_value_size": 8753857, "raw_average_value_size": 1382, "num_data_blocks": 1030, "num_entries": 6330, "num_filter_entries": 6330, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.299870) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 8909469 bytes
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.301149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 318.0 rd, 249.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(12.6) write-amplify(5.5) OK, records in: 6792, records dropped: 462 output_compression: NoCompression
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.301164) EVENT_LOG_v1 {"time_micros": 1759394282301157, "job": 44, "event": "compaction_finished", "compaction_time_micros": 35732, "compaction_time_cpu_micros": 19237, "output_level": 6, "num_output_files": 1, "total_output_size": 8909469, "num_input_records": 6792, "num_output_records": 6330, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282301496, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394282303030, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.263868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.303057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.303061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.303063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.303064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:02 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:02.303065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:38:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fb83696c42c7f8209aee60e425fd95b280fe9157eca6c3109314b6ebc0a1f8b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.321 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa1__r0w3" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:02 np0005465604 podman[351852]: 2025-10-02 08:38:02.338145988 +0000 UTC m=+0.160277967 container init 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 04:38:02 np0005465604 podman[351852]: 2025-10-02 08:38:02.347695365 +0000 UTC m=+0.169827264 container start 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.360 2 DEBUG nova.storage.rbd_utils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:02 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [NOTICE]   (351898) : New worker (351909) forked
Oct  2 04:38:02 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [NOTICE]   (351898) : Loading success.
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.366 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.404 2 DEBUG nova.compute.manager [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.405 2 DEBUG nova.objects.instance [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'pci_devices' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.406 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for ba2cf934-ce76-4de7-a495-285f144bdab7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.406 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394282.3539577, ba2cf934-ce76-4de7-a495-285f144bdab7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.407 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Started (Lifecycle Event)#033[00m
Oct  2 04:38:02 np0005465604 podman[351866]: 2025-10-02 08:38:02.410971207 +0000 UTC m=+0.122979817 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.427 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.431 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance running successfully.#033[00m
Oct  2 04:38:02 np0005465604 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.434 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.436 2 DEBUG nova.virt.libvirt.guest [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.436 2 DEBUG nova.compute.manager [None req-764e3fc5-0082-48e3-ad4b-763a068b1b3c bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.456 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.456 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394282.3588698, ba2cf934-ce76-4de7-a495-285f144bdab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.456 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.473 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.477 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.497 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.527 2 DEBUG oslo_concurrency.processutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.528 2 INFO nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deleting local config drive /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config because it was imported into RBD.#033[00m
Oct  2 04:38:02 np0005465604 kernel: tap00fa1373-e4: entered promiscuous mode
Oct  2 04:38:02 np0005465604 systemd-udevd[351755]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:38:02 np0005465604 NetworkManager[45129]: <info>  [1759394282.5845] manager: (tap00fa1373-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Oct  2 04:38:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:02Z|00927|binding|INFO|Claiming lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for this chassis.
Oct  2 04:38:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:02Z|00928|binding|INFO|00fa1373-e4cc-4245-9cfa-5a58c77aa4eb: Claiming fa:16:3e:11:b9:2b 10.100.0.10
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.591 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.593 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 bound to our chassis#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.594 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:02 np0005465604 NetworkManager[45129]: <info>  [1759394282.5969] device (tap00fa1373-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:38:02 np0005465604 NetworkManager[45129]: <info>  [1759394282.5999] device (tap00fa1373-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:38:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:02Z|00929|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb ovn-installed in OVS
Oct  2 04:38:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:02Z|00930|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb up in Southbound
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.613 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd81306f-0d61-45a9-b139-86f57abc3076]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.614 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap19d584c3-e1 in ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.616 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap19d584c3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.616 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[16199697-f999-402b-9ee4-afe963654272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.617 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6c884a56-c810-49d5-a804-f38d885f7c06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.630 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[cdbfc6b7-cc1c-466e-a068-a9f16e993b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 systemd-machined[214636]: New machine qemu-116-instance-0000005d.
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.646 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6891bf13-da14-46f4-abdb-e17429e4f7c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 systemd[1]: Started Virtual Machine qemu-116-instance-0000005d.
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.677 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8840e8d3-87cb-48d4-b514-0281439bb0dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.688 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e287b466-1782-417c-a1ef-1111a9ff47de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 NetworkManager[45129]: <info>  [1759394282.6902] manager: (tap19d584c3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.721 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bcfd5d24-3bf3-4fdc-8325-1d074b0e4c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.724 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1411e3f5-18a2-48c8-b1a1-0db83899fb3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.742 2 DEBUG nova.compute.manager [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.742 2 DEBUG oslo_concurrency.lockutils [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.742 2 DEBUG oslo_concurrency.lockutils [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.743 2 DEBUG oslo_concurrency.lockutils [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.743 2 DEBUG nova.compute.manager [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] No waiting events found dispatching network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.743 2 WARNING nova.compute.manager [req-f281f8ea-febb-439c-b786-477ee7e3d577 req-d27adf7d-9bd8-4231-b9d5-66a9ff72b1a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received unexpected event network-vif-plugged-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:38:02 np0005465604 NetworkManager[45129]: <info>  [1759394282.7468] device (tap19d584c3-e0): carrier: link connected
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.755 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6b86ad76-fe10-4412-bfe9-3d9fac0e97cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.779 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a97054e-6e88-48c0-87bc-4a5e556f266b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351968, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.797 2 DEBUG nova.compute.manager [req-40725915-9961-4b72-983d-9758b6a82d7f req-7de78a51-2ab2-46d7-9e50-246be40205d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.797 2 DEBUG oslo_concurrency.lockutils [req-40725915-9961-4b72-983d-9758b6a82d7f req-7de78a51-2ab2-46d7-9e50-246be40205d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.798 2 DEBUG oslo_concurrency.lockutils [req-40725915-9961-4b72-983d-9758b6a82d7f req-7de78a51-2ab2-46d7-9e50-246be40205d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.798 2 DEBUG oslo_concurrency.lockutils [req-40725915-9961-4b72-983d-9758b6a82d7f req-7de78a51-2ab2-46d7-9e50-246be40205d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.798 2 DEBUG nova.compute.manager [req-40725915-9961-4b72-983d-9758b6a82d7f req-7de78a51-2ab2-46d7-9e50-246be40205d7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Processing event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.799 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dd04d01b-d4a0-4ff1-a4fe-7530588c115c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:8c4f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516134, 'tstamp': 516134}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351969, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.814 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[abc12dd1-022d-4299-af52-19177ed8b0dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351970, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.844 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[46ef09e6-e45e-4f23-b957-eb2dffdb4e2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.898 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[55854c80-d8ec-46ea-9600-384113b5131a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.899 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.900 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.900 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:02 np0005465604 NetworkManager[45129]: <info>  [1759394282.9027] manager: (tap19d584c3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:02 np0005465604 kernel: tap19d584c3-e0: entered promiscuous mode
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.905 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:02Z|00931|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.908 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.909 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0af1a5a8-514a-4f0e-81a9-635f17f4c263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.910 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:38:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:02.911 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'env', 'PROCESS_TAG=haproxy-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/19d584c3-e754-47d1-9cdf-c6badbd670d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:38:02 np0005465604 nova_compute[260603]: 2025-10-02 08:38:02.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 76 op/s
Oct  2 04:38:03 np0005465604 podman[352044]: 2025-10-02 08:38:03.304779502 +0000 UTC m=+0.076593084 container create db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:38:03 np0005465604 podman[352044]: 2025-10-02 08:38:03.254028345 +0000 UTC m=+0.025841977 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:38:03 np0005465604 systemd[1]: Started libpod-conmon-db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1.scope.
Oct  2 04:38:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:38:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a74f8d6643ae70d1fa74bb44614c3aa06e7ac08d3f73b3369e5ce67a1513ec1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:03 np0005465604 podman[352044]: 2025-10-02 08:38:03.418668164 +0000 UTC m=+0.190481716 container init db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:38:03 np0005465604 podman[352044]: 2025-10-02 08:38:03.424946103 +0000 UTC m=+0.196759635 container start db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:38:03 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [NOTICE]   (352063) : New worker (352065) forked
Oct  2 04:38:03 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [NOTICE]   (352063) : Loading success.
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.569 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394283.5686479, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.569 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Started (Lifecycle Event)#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.571 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.583 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.586 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.589 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance spawned successfully.#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.589 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.592 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.609 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.610 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394283.5695937, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.610 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.614 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.615 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.615 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.616 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.617 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.617 2 DEBUG nova.virt.libvirt.driver [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.638 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.641 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394283.5750816, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.641 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.665 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.668 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.675 2 INFO nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Took 8.52 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.675 2 DEBUG nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.688 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.737 2 INFO nova.compute.manager [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Took 9.46 seconds to build instance.#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.752 2 DEBUG oslo_concurrency.lockutils [None req-320550f4-7744-494b-bf00-04570c0d900a 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:03 np0005465604 nova_compute[260603]: 2025-10-02 08:38:03.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.310 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.311 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.311 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.312 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.313 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.314 2 INFO nova.compute.manager [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Terminating instance#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.315 2 DEBUG nova.compute.manager [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:38:04 np0005465604 kernel: tap961da5ba-b0 (unregistering): left promiscuous mode
Oct  2 04:38:04 np0005465604 NetworkManager[45129]: <info>  [1759394284.3545] device (tap961da5ba-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:04Z|00932|binding|INFO|Releasing lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 from this chassis (sb_readonly=0)
Oct  2 04:38:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:04Z|00933|binding|INFO|Setting lport 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 down in Southbound
Oct  2 04:38:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:04Z|00934|binding|INFO|Removing iface tap961da5ba-b0 ovn-installed in OVS
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.371 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:af:04 10.100.0.8'], port_security=['fa:16:3e:bb:af:04 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'ba2cf934-ce76-4de7-a495-285f144bdab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74f187c2-780c-418d-98eb-b25294872ab0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b43ebc87104041aba179e47c5e6ecc5f', 'neutron:revision_number': '13', 'neutron:security_group_ids': '65cf0e08-b09a-4109-b18c-bb1f841d4f76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b85b176a-c8ed-4e58-876b-615aaeeb197c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.373 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 in datapath 74f187c2-780c-418d-98eb-b25294872ab0 unbound from our chassis#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.374 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74f187c2-780c-418d-98eb-b25294872ab0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.374 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb6b0fb-31e1-4e0c-adbd-490ae2811d6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.375 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 namespace which is not needed anymore#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:04 np0005465604 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000055.scope: Deactivated successfully.
Oct  2 04:38:04 np0005465604 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000055.scope: Consumed 2.644s CPU time.
Oct  2 04:38:04 np0005465604 systemd-machined[214636]: Machine qemu-115-instance-00000055 terminated.
Oct  2 04:38:04 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [NOTICE]   (351898) : haproxy version is 2.8.14-c23fe91
Oct  2 04:38:04 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [NOTICE]   (351898) : path to executable is /usr/sbin/haproxy
Oct  2 04:38:04 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [WARNING]  (351898) : Exiting Master process...
Oct  2 04:38:04 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [WARNING]  (351898) : Exiting Master process...
Oct  2 04:38:04 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [ALERT]    (351898) : Current worker (351909) exited with code 143 (Terminated)
Oct  2 04:38:04 np0005465604 neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0[351869]: [WARNING]  (351898) : All workers exited. Exiting... (0)
Oct  2 04:38:04 np0005465604 systemd[1]: libpod-37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f.scope: Deactivated successfully.
Oct  2 04:38:04 np0005465604 podman[352087]: 2025-10-02 08:38:04.505618983 +0000 UTC m=+0.058681274 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 04:38:04 np0005465604 podman[352101]: 2025-10-02 08:38:04.508185981 +0000 UTC m=+0.042918221 container died 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:38:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f-userdata-shm.mount: Deactivated successfully.
Oct  2 04:38:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6fb83696c42c7f8209aee60e425fd95b280fe9157eca6c3109314b6ebc0a1f8b-merged.mount: Deactivated successfully.
Oct  2 04:38:04 np0005465604 podman[352101]: 2025-10-02 08:38:04.545032518 +0000 UTC m=+0.079764758 container cleanup 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.557 2 INFO nova.virt.libvirt.driver [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Instance destroyed successfully.#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.558 2 DEBUG nova.objects.instance [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lazy-loading 'resources' on Instance uuid ba2cf934-ce76-4de7-a495-285f144bdab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:38:04 np0005465604 systemd[1]: libpod-conmon-37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f.scope: Deactivated successfully.
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.582 2 DEBUG nova.virt.libvirt.vif [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:34:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-276449458',display_name='tempest-ServerActionsTestJSON-server-276449458',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-276449458',id=85,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJeN7oy4ZcFMQvu730CbiP6r7+lOo+d6SVIRq3vvhyWjIuhw0JtThqP2GX2Ak2aYeQCl16bWpEKGw+ykWDhQmDtngNld87fgFp9adpwXksTAfCn9sQFpXk6sVWVpVNZL1A==',key_name='tempest-keypair-1333294092',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:34:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b43ebc87104041aba179e47c5e6ecc5f',ramdisk_id='',reservation_id='r-z6eekkp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1407264397',owner_user_name='tempest-ServerActionsTestJSON-1407264397-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:38:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bb1b3a5ae9514259b27a0b7a28f23cda',uuid=ba2cf934-ce76-4de7-a495-285f144bdab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.584 2 DEBUG nova.network.os_vif_util [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converting VIF {"id": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "address": "fa:16:3e:bb:af:04", "network": {"id": "74f187c2-780c-418d-98eb-b25294872ab0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1311039074-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b43ebc87104041aba179e47c5e6ecc5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap961da5ba-b0", "ovs_interfaceid": "961da5ba-b0ac-4a87-a74c-26d0d2d2bf50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.585 2 DEBUG nova.network.os_vif_util [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.585 2 DEBUG os_vif [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap961da5ba-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.592 2 INFO os_vif [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:af:04,bridge_name='br-int',has_traffic_filtering=True,id=961da5ba-b0ac-4a87-a74c-26d0d2d2bf50,network=Network(74f187c2-780c-418d-98eb-b25294872ab0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap961da5ba-b0')#033[00m
Oct  2 04:38:04 np0005465604 podman[352146]: 2025-10-02 08:38:04.617129844 +0000 UTC m=+0.044413205 container remove 37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.623 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5f76ce4c-ce30-4b26-b63c-cbd7fa9f0d60]: (4, ('Thu Oct  2 08:38:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f)\n37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f\nThu Oct  2 08:38:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 (37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f)\n37698d941aac0b74bffa6d96c27c3bf5f8535c914469e4e54debdff7360a8d9f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.625 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[51688357-adaa-45fd-aee8-2e390df6b45e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.626 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f187c2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:04 np0005465604 kernel: tap74f187c2-70: left promiscuous mode
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.645 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b3de07e6-aa66-430e-887d-c6f3f3ffa788]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.664 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[480e9aa9-f6ef-4244-bc58-af30fd17fddc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.665 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cab7a08c-0260-4910-b3f6-1ee882db81ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.683 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6491e1-3dd5-4b2b-a677-628b1554dde3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516014, 'reachable_time': 30252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352189, 'error': None, 'target': 'ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.685 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74f187c2-780c-418d-98eb-b25294872ab0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:38:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:04.685 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e58636da-8863-4d74-8a1e-f1569640cde7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:04 np0005465604 systemd[1]: run-netns-ovnmeta\x2d74f187c2\x2d780c\x2d418d\x2d98eb\x2db25294872ab0.mount: Deactivated successfully.
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.905 2 INFO nova.virt.libvirt.driver [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Deleting instance files /var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7_del#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.906 2 INFO nova.virt.libvirt.driver [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Deletion of /var/lib/nova/instances/ba2cf934-ce76-4de7-a495-285f144bdab7_del complete#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.963 2 INFO nova.compute.manager [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.964 2 DEBUG oslo.service.loopingcall [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.964 2 DEBUG nova.compute.manager [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.965 2 DEBUG nova.network.neutron [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.978 2 DEBUG nova.compute.manager [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.978 2 DEBUG oslo_concurrency.lockutils [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.979 2 DEBUG oslo_concurrency.lockutils [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.979 2 DEBUG oslo_concurrency.lockutils [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.979 2 DEBUG nova.compute.manager [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:38:04 np0005465604 nova_compute[260603]: 2025-10-02 08:38:04.979 2 WARNING nova.compute.manager [req-77c17d3a-e7be-475b-b81f-b63f009b0784 req-da9d6aae-3f18-4e8a-9919-477fc7252f87 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state None.#033[00m
Oct  2 04:38:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 169 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct  2 04:38:05 np0005465604 nova_compute[260603]: 2025-10-02 08:38:05.794 2 DEBUG nova.network.neutron [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:38:05 np0005465604 nova_compute[260603]: 2025-10-02 08:38:05.817 2 INFO nova.compute.manager [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Took 0.85 seconds to deallocate network for instance.#033[00m
Oct  2 04:38:05 np0005465604 nova_compute[260603]: 2025-10-02 08:38:05.867 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:05 np0005465604 nova_compute[260603]: 2025-10-02 08:38:05.868 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:05 np0005465604 nova_compute[260603]: 2025-10-02 08:38:05.910 2 DEBUG nova.compute.manager [req-79aca5e3-26b7-47b6-8e97-d177f69daecc req-d4f85ea3-5a0b-42c4-af1d-d89f3c03494e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Received event network-vif-deleted-961da5ba-b0ac-4a87-a74c-26d0d2d2bf50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:05 np0005465604 nova_compute[260603]: 2025-10-02 08:38:05.972 2 DEBUG oslo_concurrency.processutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:38:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/84441315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:38:06 np0005465604 nova_compute[260603]: 2025-10-02 08:38:06.498 2 DEBUG oslo_concurrency.processutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:06 np0005465604 nova_compute[260603]: 2025-10-02 08:38:06.506 2 DEBUG nova.compute.provider_tree [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:38:06 np0005465604 nova_compute[260603]: 2025-10-02 08:38:06.526 2 DEBUG nova.scheduler.client.report [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:38:06 np0005465604 nova_compute[260603]: 2025-10-02 08:38:06.560 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:06 np0005465604 nova_compute[260603]: 2025-10-02 08:38:06.600 2 INFO nova.scheduler.client.report [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Deleted allocations for instance ba2cf934-ce76-4de7-a495-285f144bdab7#033[00m
Oct  2 04:38:06 np0005465604 nova_compute[260603]: 2025-10-02 08:38:06.674 2 DEBUG oslo_concurrency.lockutils [None req-b1dab46e-2c84-41b4-a8cf-24e652b56fff bb1b3a5ae9514259b27a0b7a28f23cda b43ebc87104041aba179e47c5e6ecc5f - - default default] Lock "ba2cf934-ce76-4de7-a495-285f144bdab7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 121 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 528 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct  2 04:38:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:07 np0005465604 nova_compute[260603]: 2025-10-02 08:38:07.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:07.836 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:38:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:07.837 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:38:08 np0005465604 nova_compute[260603]: 2025-10-02 08:38:08.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 88 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 121 op/s
Oct  2 04:38:09 np0005465604 nova_compute[260603]: 2025-10-02 08:38:09.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:10 np0005465604 nova_compute[260603]: 2025-10-02 08:38:10.299 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:10 np0005465604 nova_compute[260603]: 2025-10-02 08:38:10.300 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:10 np0005465604 nova_compute[260603]: 2025-10-02 08:38:10.334 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:38:10 np0005465604 nova_compute[260603]: 2025-10-02 08:38:10.452 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:10 np0005465604 nova_compute[260603]: 2025-10-02 08:38:10.453 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:10 np0005465604 nova_compute[260603]: 2025-10-02 08:38:10.465 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:38:10 np0005465604 nova_compute[260603]: 2025-10-02 08:38:10.466 2 INFO nova.compute.claims [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:38:10 np0005465604 nova_compute[260603]: 2025-10-02 08:38:10.645 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:38:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3406974725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.171 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.179 2 DEBUG nova.compute.provider_tree [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:38:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 88 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 106 op/s
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.206 2 DEBUG nova.scheduler.client.report [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.235 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.239 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.292 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.292 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.309 2 INFO nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.329 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.447 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.449 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.450 2 INFO nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Creating image(s)#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.484 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.523 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.559 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.565 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.660 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.661 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.662 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.662 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.687 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.690 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 b6a5d839-2362-461b-a536-078f2c86d9b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.733 2 DEBUG nova.policy [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '15da3bbf2c9f49b68e7a7e0ccd557067', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b85786f28a064d75924559acd4f6137e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.943 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 b6a5d839-2362-461b-a536-078f2c86d9b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:11 np0005465604 nova_compute[260603]: 2025-10-02 08:38:11.996 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] resizing rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:38:12 np0005465604 nova_compute[260603]: 2025-10-02 08:38:12.085 2 DEBUG nova.objects.instance [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'migration_context' on Instance uuid b6a5d839-2362-461b-a536-078f2c86d9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:38:12 np0005465604 nova_compute[260603]: 2025-10-02 08:38:12.116 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:38:12 np0005465604 nova_compute[260603]: 2025-10-02 08:38:12.117 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Ensure instance console log exists: /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:38:12 np0005465604 nova_compute[260603]: 2025-10-02 08:38:12.117 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:12 np0005465604 nova_compute[260603]: 2025-10-02 08:38:12.118 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:12 np0005465604 nova_compute[260603]: 2025-10-02 08:38:12.118 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:12 np0005465604 nova_compute[260603]: 2025-10-02 08:38:12.825 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Successfully created port: 31d7db70-01e7-4886-b589-dc252f8fc7b5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:38:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 134 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Oct  2 04:38:13 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:13Z|00935|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct  2 04:38:13 np0005465604 nova_compute[260603]: 2025-10-02 08:38:13.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:13 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:13Z|00936|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct  2 04:38:13 np0005465604 nova_compute[260603]: 2025-10-02 08:38:13.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:13 np0005465604 nova_compute[260603]: 2025-10-02 08:38:13.786 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Successfully updated port: 31d7db70-01e7-4886-b589-dc252f8fc7b5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:38:13 np0005465604 nova_compute[260603]: 2025-10-02 08:38:13.811 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:38:13 np0005465604 nova_compute[260603]: 2025-10-02 08:38:13.812 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquired lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:38:13 np0005465604 nova_compute[260603]: 2025-10-02 08:38:13.812 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:38:13 np0005465604 nova_compute[260603]: 2025-10-02 08:38:13.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:14 np0005465604 nova_compute[260603]: 2025-10-02 08:38:14.034 2 DEBUG nova.compute.manager [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-changed-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:14 np0005465604 nova_compute[260603]: 2025-10-02 08:38:14.035 2 DEBUG nova.compute.manager [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Refreshing instance network info cache due to event network-changed-31d7db70-01e7-4886-b589-dc252f8fc7b5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:38:14 np0005465604 nova_compute[260603]: 2025-10-02 08:38:14.035 2 DEBUG oslo_concurrency.lockutils [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:38:14 np0005465604 nova_compute[260603]: 2025-10-02 08:38:14.163 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:38:14 np0005465604 nova_compute[260603]: 2025-10-02 08:38:14.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:14Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:b9:2b 10.100.0.10
Oct  2 04:38:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:14Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:b9:2b 10.100.0.10
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.151 2 DEBUG nova.network.neutron [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Updating instance_info_cache with network_info: [{"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.175 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Releasing lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.175 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Instance network_info: |[{"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.175 2 DEBUG oslo_concurrency.lockutils [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.176 2 DEBUG nova.network.neutron [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Refreshing network info cache for port 31d7db70-01e7-4886-b589-dc252f8fc7b5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.178 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Start _get_guest_xml network_info=[{"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.183 2 WARNING nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:38:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 134 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.187 2 DEBUG nova.virt.libvirt.host [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.188 2 DEBUG nova.virt.libvirt.host [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.191 2 DEBUG nova.virt.libvirt.host [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.191 2 DEBUG nova.virt.libvirt.host [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.192 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.192 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.193 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.193 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.193 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.193 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.193 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.194 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.194 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.194 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.194 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.195 2 DEBUG nova.virt.hardware [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.197 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:38:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183046515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.633 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.677 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:15 np0005465604 nova_compute[260603]: 2025-10-02 08:38:15.683 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:38:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880683933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.157 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.160 2 DEBUG nova.virt.libvirt.vif [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:38:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-187016526',display_name='tempest-ServersNegativeTestJSON-server-187016526',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-187016526',id=94,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-ex30m3ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:38:11Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=b6a5d839-2362-461b-a536-078f2c86d9b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.161 2 DEBUG nova.network.os_vif_util [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.163 2 DEBUG nova.network.os_vif_util [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.165 2 DEBUG nova.objects.instance [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_devices' on Instance uuid b6a5d839-2362-461b-a536-078f2c86d9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.182 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  <uuid>b6a5d839-2362-461b-a536-078f2c86d9b9</uuid>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  <name>instance-0000005e</name>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersNegativeTestJSON-server-187016526</nova:name>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:38:15</nova:creationTime>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <nova:user uuid="15da3bbf2c9f49b68e7a7e0ccd557067">tempest-ServersNegativeTestJSON-2088330606-project-member</nova:user>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <nova:project uuid="b85786f28a064d75924559acd4f6137e">tempest-ServersNegativeTestJSON-2088330606</nova:project>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <nova:port uuid="31d7db70-01e7-4886-b589-dc252f8fc7b5">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <entry name="serial">b6a5d839-2362-461b-a536-078f2c86d9b9</entry>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <entry name="uuid">b6a5d839-2362-461b-a536-078f2c86d9b9</entry>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/b6a5d839-2362-461b-a536-078f2c86d9b9_disk">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:5d:35:da"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <target dev="tap31d7db70-01"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/console.log" append="off"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:38:16 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:38:16 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:38:16 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:38:16 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.184 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Preparing to wait for external event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.184 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.185 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.185 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.186 2 DEBUG nova.virt.libvirt.vif [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:38:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-187016526',display_name='tempest-ServersNegativeTestJSON-server-187016526',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-187016526',id=94,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-ex30m3ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:38:11Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=b6a5d839-2362-461b-a536-078f2c86d9b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.187 2 DEBUG nova.network.os_vif_util [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.188 2 DEBUG nova.network.os_vif_util [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.188 2 DEBUG os_vif [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31d7db70-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.201 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31d7db70-01, col_values=(('external_ids', {'iface-id': '31d7db70-01e7-4886-b589-dc252f8fc7b5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:35:da', 'vm-uuid': 'b6a5d839-2362-461b-a536-078f2c86d9b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:16 np0005465604 NetworkManager[45129]: <info>  [1759394296.2040] manager: (tap31d7db70-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.210 2 INFO os_vif [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01')#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.437 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.437 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.438 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No VIF found with MAC fa:16:3e:5d:35:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.439 2 INFO nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Using config drive#033[00m
Oct  2 04:38:16 np0005465604 nova_compute[260603]: 2025-10-02 08:38:16.469 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:16.839 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.025 2 INFO nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Creating config drive at /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.036 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzpupaatt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 145 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 138 op/s
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.211 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzpupaatt" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.253 2 DEBUG nova.storage.rbd_utils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.258 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.338 2 DEBUG nova.network.neutron [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Updated VIF entry in instance network info cache for port 31d7db70-01e7-4886-b589-dc252f8fc7b5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.339 2 DEBUG nova.network.neutron [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Updating instance_info_cache with network_info: [{"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.362 2 DEBUG oslo_concurrency.lockutils [req-329af5e0-0da3-4ba3-8f69-04301a3fca8c req-a8491522-bde5-4d21-aad2-38568df88498 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b6a5d839-2362-461b-a536-078f2c86d9b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.455 2 DEBUG oslo_concurrency.processutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config b6a5d839-2362-461b-a536-078f2c86d9b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.456 2 INFO nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Deleting local config drive /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9/disk.config because it was imported into RBD.#033[00m
Oct  2 04:38:17 np0005465604 kernel: tap31d7db70-01: entered promiscuous mode
Oct  2 04:38:17 np0005465604 NetworkManager[45129]: <info>  [1759394297.5128] manager: (tap31d7db70-01): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:17Z|00937|binding|INFO|Claiming lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 for this chassis.
Oct  2 04:38:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:17Z|00938|binding|INFO|31d7db70-01e7-4886-b589-dc252f8fc7b5: Claiming fa:16:3e:5d:35:da 10.100.0.9
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.524 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:35:da 10.100.0.9'], port_security=['fa:16:3e:5d:35:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6a5d839-2362-461b-a536-078f2c86d9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=31d7db70-01e7-4886-b589-dc252f8fc7b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.525 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 31d7db70-01e7-4886-b589-dc252f8fc7b5 in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 bound to our chassis#033[00m
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.527 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7#033[00m
Oct  2 04:38:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:17Z|00939|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 ovn-installed in OVS
Oct  2 04:38:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:17Z|00940|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 up in Southbound
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:17 np0005465604 systemd-udevd[352539]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.546 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b5afd0-5ec8-4915-904a-98e57f917c73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:17 np0005465604 NetworkManager[45129]: <info>  [1759394297.5565] device (tap31d7db70-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:38:17 np0005465604 systemd-machined[214636]: New machine qemu-117-instance-0000005e.
Oct  2 04:38:17 np0005465604 NetworkManager[45129]: <info>  [1759394297.5581] device (tap31d7db70-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.580 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd33cde-50b6-4da2-ad6f-565cb2232c4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:17 np0005465604 systemd[1]: Started Virtual Machine qemu-117-instance-0000005e.
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.583 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f79a6f-3e83-47b2-a61d-5a54c6baea62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.614 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c452e51f-bfb1-43e6-a3ba-3f0923a42751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.641 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4c6423-4e2c-45ea-bf6b-b6117cf7428b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352549, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.658 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e591ffd-fe35-46c3-943d-2cbade8354f5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516147, 'tstamp': 516147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352553, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516149, 'tstamp': 516149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352553, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.660 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.704 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.705 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:17 np0005465604 nova_compute[260603]: 2025-10-02 08:38:17.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.705 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:17.706 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.004 2 DEBUG nova.compute.manager [req-c85d4bf6-7c3c-4164-8ace-09026358ddd9 req-6eae71a8-cd48-4641-8c9e-aae15932f668 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.005 2 DEBUG oslo_concurrency.lockutils [req-c85d4bf6-7c3c-4164-8ace-09026358ddd9 req-6eae71a8-cd48-4641-8c9e-aae15932f668 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.005 2 DEBUG oslo_concurrency.lockutils [req-c85d4bf6-7c3c-4164-8ace-09026358ddd9 req-6eae71a8-cd48-4641-8c9e-aae15932f668 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.005 2 DEBUG oslo_concurrency.lockutils [req-c85d4bf6-7c3c-4164-8ace-09026358ddd9 req-6eae71a8-cd48-4641-8c9e-aae15932f668 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.005 2 DEBUG nova.compute.manager [req-c85d4bf6-7c3c-4164-8ace-09026358ddd9 req-6eae71a8-cd48-4641-8c9e-aae15932f668 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Processing event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.369 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394298.3693032, b6a5d839-2362-461b-a536-078f2c86d9b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.370 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] VM Started (Lifecycle Event)#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.373 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.378 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.386 2 INFO nova.virt.libvirt.driver [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Instance spawned successfully.#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.386 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.394 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.398 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.405 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.405 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.406 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.406 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.406 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.407 2 DEBUG nova.virt.libvirt.driver [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.433 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.434 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394298.3694646, b6a5d839-2362-461b-a536-078f2c86d9b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.434 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.460 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.463 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394298.3785062, b6a5d839-2362-461b-a536-078f2c86d9b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.464 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.469 2 INFO nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Took 7.02 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.470 2 DEBUG nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.497 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.499 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.533 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.554 2 INFO nova.compute.manager [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Took 8.14 seconds to build instance.#033[00m
Oct  2 04:38:18 np0005465604 nova_compute[260603]: 2025-10-02 08:38:18.582 2 DEBUG oslo_concurrency.lockutils [None req-bba52f2b-4980-4228-8b8d-6ba9cd1a408f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.422 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.425 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.425 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.425 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.426 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.427 2 INFO nova.compute.manager [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Terminating instance#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.428 2 DEBUG nova.compute.manager [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:38:19 np0005465604 kernel: tap31d7db70-01 (unregistering): left promiscuous mode
Oct  2 04:38:19 np0005465604 NetworkManager[45129]: <info>  [1759394299.4678] device (tap31d7db70-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00941|binding|INFO|Releasing lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 from this chassis (sb_readonly=0)
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00942|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 down in Southbound
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00943|binding|INFO|Removing iface tap31d7db70-01 ovn-installed in OVS
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.487 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:35:da 10.100.0.9'], port_security=['fa:16:3e:5d:35:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6a5d839-2362-461b-a536-078f2c86d9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=31d7db70-01e7-4886-b589-dc252f8fc7b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.488 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 31d7db70-01e7-4886-b589-dc252f8fc7b5 in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.490 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.516 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1d00a924-05a3-40d8-ac94-f6ca5fe65d0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Oct  2 04:38:19 np0005465604 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005e.scope: Consumed 1.817s CPU time.
Oct  2 04:38:19 np0005465604 systemd-machined[214636]: Machine qemu-117-instance-0000005e terminated.
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.551 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394284.5497692, ba2cf934-ce76-4de7-a495-285f144bdab7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.551 2 INFO nova.compute.manager [-] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.551 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[db3dfd10-69f6-496e-9e0c-cda08787b14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.555 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[182a25e1-eaa6-4949-b8ca-a89580ec1428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.579 2 DEBUG nova.compute.manager [None req-611e1687-5dd5-4b58-9911-25fbd6564efa - - - - - -] [instance: ba2cf934-ce76-4de7-a495-285f144bdab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.592 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[180bc79f-3a4a-4559-b038-739aa516c638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.620 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6b6e92-5cce-44f0-acec-69ca9c63d536]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352606, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.647 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[658a0856-0b9a-4fb4-a682-c827ddfea792]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516147, 'tstamp': 516147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352607, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516149, 'tstamp': 516149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352607, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.649 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:19 np0005465604 kernel: tap31d7db70-01: entered promiscuous mode
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:19 np0005465604 NetworkManager[45129]: <info>  [1759394299.6549] manager: (tap31d7db70-01): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Oct  2 04:38:19 np0005465604 kernel: tap31d7db70-01 (unregistering): left promiscuous mode
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00944|binding|INFO|Claiming lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 for this chassis.
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00945|binding|INFO|31d7db70-01e7-4886-b589-dc252f8fc7b5: Claiming fa:16:3e:5d:35:da 10.100.0.9
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.677 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:35:da 10.100.0.9'], port_security=['fa:16:3e:5d:35:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6a5d839-2362-461b-a536-078f2c86d9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=31d7db70-01e7-4886-b589-dc252f8fc7b5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.687 2 INFO nova.virt.libvirt.driver [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Instance destroyed successfully.#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.687 2 DEBUG nova.objects.instance [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'resources' on Instance uuid b6a5d839-2362-461b-a536-078f2c86d9b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00946|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 ovn-installed in OVS
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00947|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 up in Southbound
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00948|binding|INFO|Releasing lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 from this chassis (sb_readonly=1)
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00949|if_status|INFO|Dropped 3 log messages in last 55 seconds (most recently, 55 seconds ago) due to excessive rate
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00950|if_status|INFO|Not setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 down as sb is readonly
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00951|binding|INFO|Removing iface tap31d7db70-01 ovn-installed in OVS
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00952|binding|INFO|Releasing lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 from this chassis (sb_readonly=0)
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.702 2 DEBUG nova.virt.libvirt.vif [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:38:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-187016526',display_name='tempest-ServersNegativeTestJSON-server-187016526',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-187016526',id=94,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:38:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-ex30m3ht',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:38:18Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=b6a5d839-2362-461b-a536-078f2c86d9b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.703 2 DEBUG nova.network.os_vif_util [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "address": "fa:16:3e:5d:35:da", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31d7db70-01", "ovs_interfaceid": "31d7db70-01e7-4886-b589-dc252f8fc7b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.704 2 DEBUG nova.network.os_vif_util [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.705 2 DEBUG os_vif [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.707 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.708 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:19Z|00953|binding|INFO|Setting lport 31d7db70-01e7-4886-b589-dc252f8fc7b5 down in Southbound
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.710 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.711 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31d7db70-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.715 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 31d7db70-01e7-4886-b589-dc252f8fc7b5 in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.719 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:35:da 10.100.0.9'], port_security=['fa:16:3e:5d:35:da 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6a5d839-2362-461b-a536-078f2c86d9b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=31d7db70-01e7-4886-b589-dc252f8fc7b5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.722 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.740 2 INFO os_vif [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:35:da,bridge_name='br-int',has_traffic_filtering=True,id=31d7db70-01e7-4886-b589-dc252f8fc7b5,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31d7db70-01')#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.749 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5178e37d-ebca-4b95-81e9-6a20f2515657]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.801 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[333c5c38-f87e-4ad5-9894-793af3d710a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.806 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[384f2951-34cd-4fc8-8b82-ea1a4cf83c3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.849 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c8ecbe-d096-427a-84a0-d2425868f243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.877 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1970f3a1-2121-41d2-ba24-25960abe6472]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 832, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352636, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.903 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5482bfe2-e672-4547-9d88-4a720e893235]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516147, 'tstamp': 516147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352637, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516149, 'tstamp': 516149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352637, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.906 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:19 np0005465604 nova_compute[260603]: 2025-10-02 08:38:19.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.911 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.912 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.913 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.914 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.916 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 31d7db70-01e7-4886-b589-dc252f8fc7b5 in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.921 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.950 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4c52a9-e478-41e7-ad27-8961da51db95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:19.996 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2e69a8f1-8e5c-4ef9-9bee-103972fcdefc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.001 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3d299eff-fc6e-4c8c-9970-84768c8bc8be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.045 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6d80d7b7-bd53-4198-97e8-328b0172d321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.077 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa0b073-bcb4-4778-90e5-9d0dff941e0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 832, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 832, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516134, 'reachable_time': 18062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352644, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.102 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d43c0e-5af4-4e9a-979d-6466d03e7067]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516147, 'tstamp': 516147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352645, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap19d584c3-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516149, 'tstamp': 516149}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352645, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.104 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.130 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.131 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.132 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:20.132 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.147 2 INFO nova.virt.libvirt.driver [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Deleting instance files /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9_del#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.148 2 INFO nova.virt.libvirt.driver [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Deletion of /var/lib/nova/instances/b6a5d839-2362-461b-a536-078f2c86d9b9_del complete#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.158 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.158 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.159 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.159 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.160 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.160 2 WARNING nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.161 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.161 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.161 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.162 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.162 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.163 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.163 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.163 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.164 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.164 2 DEBUG oslo_concurrency.lockutils [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.165 2 DEBUG nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.165 2 WARNING nova.compute.manager [req-016fe069-34f5-4baf-a98f-9e472bfb5689 req-085c9892-2dc1-4232-a81a-60d722ca01a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.212 2 INFO nova.compute.manager [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.213 2 DEBUG oslo.service.loopingcall [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.214 2 DEBUG nova.compute.manager [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:38:20 np0005465604 nova_compute[260603]: 2025-10-02 08:38:20.214 2 DEBUG nova.network.neutron [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:38:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct  2 04:38:21 np0005465604 nova_compute[260603]: 2025-10-02 08:38:21.712 2 DEBUG nova.network.neutron [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:38:21 np0005465604 nova_compute[260603]: 2025-10-02 08:38:21.734 2 INFO nova.compute.manager [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Took 1.52 seconds to deallocate network for instance.#033[00m
Oct  2 04:38:21 np0005465604 nova_compute[260603]: 2025-10-02 08:38:21.779 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:21 np0005465604 nova_compute[260603]: 2025-10-02 08:38:21.779 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:38:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987305785' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:38:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:38:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987305785' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:38:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:38:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 29K writes, 117K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s#012Cumulative WAL: 29K writes, 10K syncs, 2.80 writes per sync, written: 0.11 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 47.48 MB, 0.08 MB/s#012Interval WAL: 12K writes, 4782 syncs, 2.51 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:38:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.262 2 DEBUG oslo_concurrency.processutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.347 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.348 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.348 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.349 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.349 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.350 2 WARNING nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.350 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.351 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.351 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.352 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.352 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.352 2 WARNING nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.353 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.353 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.354 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.354 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.355 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.355 2 WARNING nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-unplugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.356 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-deleted-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.356 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.356 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.357 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.357 2 DEBUG oslo_concurrency.lockutils [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.358 2 DEBUG nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] No waiting events found dispatching network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.358 2 WARNING nova.compute.manager [req-1bca06f0-be6c-4c54-9a6a-2f6a776c56d7 req-6f4b9a3e-225f-4a23-b151-37346c055af9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Received unexpected event network-vif-plugged-31d7db70-01e7-4886-b589-dc252f8fc7b5 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:38:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:38:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1036439364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.692 2 DEBUG oslo_concurrency.processutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.700 2 DEBUG nova.compute.provider_tree [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.725 2 DEBUG nova.scheduler.client.report [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.749 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.780 2 INFO nova.scheduler.client.report [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Deleted allocations for instance b6a5d839-2362-461b-a536-078f2c86d9b9#033[00m
Oct  2 04:38:22 np0005465604 nova_compute[260603]: 2025-10-02 08:38:22.853 2 DEBUG oslo_concurrency.lockutils [None req-a417289b-352a-426a-91be-1ce2850a0340 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "b6a5d839-2362-461b-a536-078f2c86d9b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Oct  2 04:38:23 np0005465604 nova_compute[260603]: 2025-10-02 08:38:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:38:23 np0005465604 nova_compute[260603]: 2025-10-02 08:38:23.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:38:24 np0005465604 nova_compute[260603]: 2025-10-02 08:38:24.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:24 np0005465604 nova_compute[260603]: 2025-10-02 08:38:24.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:38:24 np0005465604 nova_compute[260603]: 2025-10-02 08:38:24.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:38:24 np0005465604 nova_compute[260603]: 2025-10-02 08:38:24.574 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:38:24 np0005465604 nova_compute[260603]: 2025-10-02 08:38:24.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Oct  2 04:38:25 np0005465604 nova_compute[260603]: 2025-10-02 08:38:25.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:38:26 np0005465604 podman[352669]: 2025-10-02 08:38:26.032443218 +0000 UTC m=+0.088154170 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct  2 04:38:26 np0005465604 podman[352668]: 2025-10-02 08:38:26.091626087 +0000 UTC m=+0.148510885 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:38:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:38:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 31K writes, 120K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s#012Cumulative WAL: 31K writes, 11K syncs, 2.81 writes per sync, written: 0.11 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 46.85 MB, 0.08 MB/s#012Interval WAL: 12K writes, 5142 syncs, 2.43 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Oct  2 04:38:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:38:27
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['volumes', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta']
Oct  2 04:38:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:38:28 np0005465604 nova_compute[260603]: 2025-10-02 08:38:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:38:28 np0005465604 nova_compute[260603]: 2025-10-02 08:38:28.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:28 np0005465604 nova_compute[260603]: 2025-10-02 08:38:28.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:28 np0005465604 nova_compute[260603]: 2025-10-02 08:38:28.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:28 np0005465604 nova_compute[260603]: 2025-10-02 08:38:28.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:38:28 np0005465604 nova_compute[260603]: 2025-10-02 08:38:28.545 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:38:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/270479168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.011 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.113 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.114 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:38:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 137 op/s
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.322 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.323 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3704MB free_disk=59.94279479980469GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.323 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.324 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.403 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.404 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.405 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.441 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:29 np0005465604 nova_compute[260603]: 2025-10-02 08:38:29.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:38:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143985717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:38:30 np0005465604 nova_compute[260603]: 2025-10-02 08:38:30.015 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:30 np0005465604 nova_compute[260603]: 2025-10-02 08:38:30.024 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:38:30 np0005465604 nova_compute[260603]: 2025-10-02 08:38:30.047 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:38:30 np0005465604 nova_compute[260603]: 2025-10-02 08:38:30.068 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:38:30 np0005465604 nova_compute[260603]: 2025-10-02 08:38:30.069 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:30 np0005465604 nova_compute[260603]: 2025-10-02 08:38:30.069 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:38:30 np0005465604 nova_compute[260603]: 2025-10-02 08:38:30.069 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 04:38:30 np0005465604 nova_compute[260603]: 2025-10-02 08:38:30.090 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 04:38:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 25 KiB/s wr, 79 op/s
Oct  2 04:38:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:38:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.1 total, 600.0 interval
Cumulative writes: 25K writes, 96K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
Cumulative WAL: 25K writes, 9018 syncs, 2.81 writes per sync, written: 0.09 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 9319 writes, 34K keys, 9319 commit groups, 1.0 writes per commit group, ingest: 33.26 MB, 0.06 MB/s
Interval WAL: 9319 writes, 3866 syncs, 2.41 writes per sync, written: 0.03 GB, 0.06 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:38:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 04:38:32 np0005465604 nova_compute[260603]: 2025-10-02 08:38:32.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:38:32 np0005465604 nova_compute[260603]: 2025-10-02 08:38:32.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:38:32 np0005465604 nova_compute[260603]: 2025-10-02 08:38:32.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:38:32 np0005465604 nova_compute[260603]: 2025-10-02 08:38:32.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 04:38:33 np0005465604 podman[352757]: 2025-10-02 08:38:33.030313584 +0000 UTC m=+0.091830121 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 04:38:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 25 KiB/s wr, 79 op/s
Oct  2 04:38:33 np0005465604 nova_compute[260603]: 2025-10-02 08:38:33.546 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.106 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.106 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.133 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.218 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.219 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.229 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.229 2 INFO nova.compute.claims [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.384 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.684 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394299.6828716, b6a5d839-2362-461b-a536-078f2c86d9b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.685 2 INFO nova.compute.manager [-] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.712 2 DEBUG nova.compute.manager [None req-468de743-2d12-4b20-8c1d-f81f3d057055 - - - - - -] [instance: b6a5d839-2362-461b-a536-078f2c86d9b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:34 np0005465604 podman[352821]: 2025-10-02 08:38:34.800635604 +0000 UTC m=+0.079544703 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:38:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:34.822 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:34.823 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:34.824 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:38:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676199611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.869 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.877 2 DEBUG nova.compute.provider_tree [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.902 2 DEBUG nova.scheduler.client.report [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.927 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.928 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.982 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:38:34 np0005465604 nova_compute[260603]: 2025-10-02 08:38:34.982 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.017 2 INFO nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.034 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.124 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.127 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.128 2 INFO nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Creating image(s)#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.165 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 121 MiB data, 694 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.218 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.249 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.257 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.354 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.356 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.356 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.357 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.381 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.385 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:38:35 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 325fd12a-6b8a-422a-b19c-48e9c2c44400 does not exist
Oct  2 04:38:35 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9a63e180-113a-425f-9c8c-618b951ccb53 does not exist
Oct  2 04:38:35 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9e3c2a59-079a-4305-af44-0aecab3ea8bb does not exist
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:38:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.636 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.700 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] resizing rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.727 2 DEBUG nova.policy [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd3802fedfb914c27b9b09ad6ea6f4c27', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c75535fe577642038c638a0b01f74d09', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.782 2 DEBUG nova.objects.instance [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'migration_context' on Instance uuid 4145f1a3-c327-49ee-9af1-1ace3afb70a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.795 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.796 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Ensure instance console log exists: /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.796 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.796 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:35 np0005465604 nova_compute[260603]: 2025-10-02 08:38:35.796 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:36 np0005465604 podman[353258]: 2025-10-02 08:38:36.120677918 +0000 UTC m=+0.038587210 container create a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 04:38:36 np0005465604 systemd[1]: Started libpod-conmon-a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b.scope.
Oct  2 04:38:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:38:36 np0005465604 podman[353258]: 2025-10-02 08:38:36.195717563 +0000 UTC m=+0.113626875 container init a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:38:36 np0005465604 podman[353258]: 2025-10-02 08:38:36.101499992 +0000 UTC m=+0.019409304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:38:36 np0005465604 podman[353258]: 2025-10-02 08:38:36.202705383 +0000 UTC m=+0.120614665 container start a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 04:38:36 np0005465604 podman[353258]: 2025-10-02 08:38:36.205937261 +0000 UTC m=+0.123846583 container attach a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:38:36 np0005465604 adoring_gauss[353276]: 167 167
Oct  2 04:38:36 np0005465604 systemd[1]: libpod-a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b.scope: Deactivated successfully.
Oct  2 04:38:36 np0005465604 conmon[353276]: conmon a3cf4f13d7b48bf25958 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b.scope/container/memory.events
Oct  2 04:38:36 np0005465604 podman[353258]: 2025-10-02 08:38:36.210258751 +0000 UTC m=+0.128168043 container died a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:38:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e6e48f8e18e6392cb921e5441d2e676a5cd5c737165dc33fc2eedece7d15d3e8-merged.mount: Deactivated successfully.
Oct  2 04:38:36 np0005465604 podman[353258]: 2025-10-02 08:38:36.249729757 +0000 UTC m=+0.167639069 container remove a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:38:36 np0005465604 systemd[1]: libpod-conmon-a3cf4f13d7b48bf25958136f652262bf3cf454c037c5fda25cf1cce0a575133b.scope: Deactivated successfully.
Oct  2 04:38:36 np0005465604 podman[353300]: 2025-10-02 08:38:36.432047127 +0000 UTC m=+0.048944742 container create 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 04:38:36 np0005465604 systemd[1]: Started libpod-conmon-922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719.scope.
Oct  2 04:38:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:38:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:36 np0005465604 podman[353300]: 2025-10-02 08:38:36.416386726 +0000 UTC m=+0.033284331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:38:36 np0005465604 podman[353300]: 2025-10-02 08:38:36.515382151 +0000 UTC m=+0.132279776 container init 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 04:38:36 np0005465604 nova_compute[260603]: 2025-10-02 08:38:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:38:36 np0005465604 podman[353300]: 2025-10-02 08:38:36.527564668 +0000 UTC m=+0.144462263 container start 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:38:36 np0005465604 podman[353300]: 2025-10-02 08:38:36.53198088 +0000 UTC m=+0.148878515 container attach 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:38:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:38:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:38:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:38:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 139 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 390 KiB/s wr, 12 op/s
Oct  2 04:38:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:37 np0005465604 nova_compute[260603]: 2025-10-02 08:38:37.271 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Successfully created port: 4340f7c5-2f3a-4608-b77c-40798457ce79 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:38:37 np0005465604 nova_compute[260603]: 2025-10-02 08:38:37.526 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:38:37 np0005465604 eager_heyrovsky[353316]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:38:37 np0005465604 eager_heyrovsky[353316]: --> relative data size: 1.0
Oct  2 04:38:37 np0005465604 eager_heyrovsky[353316]: --> All data devices are unavailable
Oct  2 04:38:37 np0005465604 systemd[1]: libpod-922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719.scope: Deactivated successfully.
Oct  2 04:38:37 np0005465604 systemd[1]: libpod-922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719.scope: Consumed 1.160s CPU time.
Oct  2 04:38:37 np0005465604 conmon[353316]: conmon 922472ebf58c49a079b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719.scope/container/memory.events
Oct  2 04:38:37 np0005465604 podman[353300]: 2025-10-02 08:38:37.751738261 +0000 UTC m=+1.368635896 container died 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:38:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay-12f60c3a3c6d3a6e207f6e07ceb89974ae3a1cffc07413e2fffb6aa8974b6bce-merged.mount: Deactivated successfully.
Oct  2 04:38:37 np0005465604 podman[353300]: 2025-10-02 08:38:37.822427836 +0000 UTC m=+1.439325481 container remove 922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_heyrovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:38:37 np0005465604 systemd[1]: libpod-conmon-922472ebf58c49a079b9ace860752b0ef8c2c3e5711b28cbbdb40c8c84479719.scope: Deactivated successfully.
Oct  2 04:38:38 np0005465604 nova_compute[260603]: 2025-10-02 08:38:38.219 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Successfully updated port: 4340f7c5-2f3a-4608-b77c-40798457ce79 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:38:38 np0005465604 nova_compute[260603]: 2025-10-02 08:38:38.234 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:38:38 np0005465604 nova_compute[260603]: 2025-10-02 08:38:38.235 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquired lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:38:38 np0005465604 nova_compute[260603]: 2025-10-02 08:38:38.235 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:38:38 np0005465604 nova_compute[260603]: 2025-10-02 08:38:38.441 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:38:38 np0005465604 podman[353500]: 2025-10-02 08:38:38.627545064 +0000 UTC m=+0.063182870 container create 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008327203189099434 of space, bias 1.0, pg target 0.24981609567298302 quantized to 32 (current 32)
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:38:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:38:38 np0005465604 nova_compute[260603]: 2025-10-02 08:38:38.665 2 DEBUG nova.compute.manager [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-changed-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:38:38 np0005465604 nova_compute[260603]: 2025-10-02 08:38:38.665 2 DEBUG nova.compute.manager [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Refreshing instance network info cache due to event network-changed-4340f7c5-2f3a-4608-b77c-40798457ce79. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:38:38 np0005465604 nova_compute[260603]: 2025-10-02 08:38:38.666 2 DEBUG oslo_concurrency.lockutils [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:38:38 np0005465604 systemd[1]: Started libpod-conmon-46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0.scope.
Oct  2 04:38:38 np0005465604 podman[353500]: 2025-10-02 08:38:38.59283597 +0000 UTC m=+0.028473816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:38:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:38:38 np0005465604 podman[353500]: 2025-10-02 08:38:38.74687997 +0000 UTC m=+0.182517786 container init 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:38:38 np0005465604 podman[353500]: 2025-10-02 08:38:38.759306584 +0000 UTC m=+0.194944350 container start 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 04:38:38 np0005465604 podman[353500]: 2025-10-02 08:38:38.763332125 +0000 UTC m=+0.198969921 container attach 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:38:38 np0005465604 serene_swanson[353516]: 167 167
Oct  2 04:38:38 np0005465604 systemd[1]: libpod-46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0.scope: Deactivated successfully.
Oct  2 04:38:38 np0005465604 podman[353500]: 2025-10-02 08:38:38.769400307 +0000 UTC m=+0.205038073 container died 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:38:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6c787e74c16c7a58d22c73e0cbf88ee697df9318df4ecbc24aebd29e632f2d34-merged.mount: Deactivated successfully.
Oct  2 04:38:38 np0005465604 podman[353500]: 2025-10-02 08:38:38.813975937 +0000 UTC m=+0.249613713 container remove 46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 04:38:38 np0005465604 systemd[1]: libpod-conmon-46fc24a5df8cea9b1625486d0fbf97f5c94549edef7abd40147c1fcd65e397c0.scope: Deactivated successfully.
Oct  2 04:38:39 np0005465604 podman[353539]: 2025-10-02 08:38:39.03030798 +0000 UTC m=+0.058606554 container create e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 04:38:39 np0005465604 nova_compute[260603]: 2025-10-02 08:38:39.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:38:39 np0005465604 systemd[1]: Started libpod-conmon-e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3.scope.
Oct  2 04:38:39 np0005465604 podman[353539]: 2025-10-02 08:38:39.012453643 +0000 UTC m=+0.040752227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:38:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:38:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5031850c6e4b1d7f4bad7d63f44914f388f07bf835ceccd9f4c99d2c5a5daa2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5031850c6e4b1d7f4bad7d63f44914f388f07bf835ceccd9f4c99d2c5a5daa2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5031850c6e4b1d7f4bad7d63f44914f388f07bf835ceccd9f4c99d2c5a5daa2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5031850c6e4b1d7f4bad7d63f44914f388f07bf835ceccd9f4c99d2c5a5daa2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:39 np0005465604 podman[353539]: 2025-10-02 08:38:39.125860701 +0000 UTC m=+0.154159295 container init e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 04:38:39 np0005465604 podman[353539]: 2025-10-02 08:38:39.136665866 +0000 UTC m=+0.164964440 container start e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:38:39 np0005465604 podman[353539]: 2025-10-02 08:38:39.139911894 +0000 UTC m=+0.168210558 container attach e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 04:38:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:38:39 np0005465604 nova_compute[260603]: 2025-10-02 08:38:39.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]: {
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:    "0": [
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:        {
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "devices": [
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "/dev/loop3"
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            ],
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_name": "ceph_lv0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_size": "21470642176",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "name": "ceph_lv0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "tags": {
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.cluster_name": "ceph",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.crush_device_class": "",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.encrypted": "0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.osd_id": "0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.type": "block",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.vdo": "0"
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            },
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "type": "block",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "vg_name": "ceph_vg0"
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:        }
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:    ],
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:    "1": [
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:        {
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "devices": [
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "/dev/loop4"
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            ],
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_name": "ceph_lv1",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_size": "21470642176",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "name": "ceph_lv1",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "tags": {
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.cluster_name": "ceph",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.crush_device_class": "",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.encrypted": "0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.osd_id": "1",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.type": "block",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.vdo": "0"
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            },
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "type": "block",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "vg_name": "ceph_vg1"
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:        }
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:    ],
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:    "2": [
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:        {
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "devices": [
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "/dev/loop5"
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            ],
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_name": "ceph_lv2",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_size": "21470642176",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "name": "ceph_lv2",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "tags": {
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.cluster_name": "ceph",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.crush_device_class": "",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.encrypted": "0",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.osd_id": "2",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.type": "block",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:                "ceph.vdo": "0"
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            },
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "type": "block",
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:            "vg_name": "ceph_vg2"
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:        }
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]:    ]
Oct  2 04:38:39 np0005465604 nostalgic_jepsen[353556]: }
Oct  2 04:38:39 np0005465604 systemd[1]: libpod-e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3.scope: Deactivated successfully.
Oct  2 04:38:39 np0005465604 podman[353539]: 2025-10-02 08:38:39.896295878 +0000 UTC m=+0.924594472 container died e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:38:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5031850c6e4b1d7f4bad7d63f44914f388f07bf835ceccd9f4c99d2c5a5daa2f-merged.mount: Deactivated successfully.
Oct  2 04:38:39 np0005465604 podman[353539]: 2025-10-02 08:38:39.968993022 +0000 UTC m=+0.997291626 container remove e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:38:39 np0005465604 nova_compute[260603]: 2025-10-02 08:38:39.977 2 DEBUG nova.network.neutron [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updating instance_info_cache with network_info: [{"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:38:39 np0005465604 systemd[1]: libpod-conmon-e6b6d124e7fd26382c20cb6d741182287b651c136af30213c5c6fa26b02288d3.scope: Deactivated successfully.
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.015 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Releasing lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.016 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Instance network_info: |[{"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.017 2 DEBUG oslo_concurrency.lockutils [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.018 2 DEBUG nova.network.neutron [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Refreshing network info cache for port 4340f7c5-2f3a-4608-b77c-40798457ce79 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.022 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Start _get_guest_xml network_info=[{"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.028 2 WARNING nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.035 2 DEBUG nova.virt.libvirt.host [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.036 2 DEBUG nova.virt.libvirt.host [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.044 2 DEBUG nova.virt.libvirt.host [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.045 2 DEBUG nova.virt.libvirt.host [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.045 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.046 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.047 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.047 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.047 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.048 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.048 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.048 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.049 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.049 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.050 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.050 2 DEBUG nova.virt.hardware [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.053 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:38:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2762823883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.547 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.586 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:40 np0005465604 nova_compute[260603]: 2025-10-02 08:38:40.592 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:40 np0005465604 podman[353760]: 2025-10-02 08:38:40.745417118 +0000 UTC m=+0.068426828 container create 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:38:40 np0005465604 systemd[1]: Started libpod-conmon-94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5.scope.
Oct  2 04:38:40 np0005465604 podman[353760]: 2025-10-02 08:38:40.715999344 +0000 UTC m=+0.039009144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:38:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:38:40 np0005465604 podman[353760]: 2025-10-02 08:38:40.846168547 +0000 UTC m=+0.169178287 container init 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 04:38:40 np0005465604 podman[353760]: 2025-10-02 08:38:40.857801966 +0000 UTC m=+0.180811686 container start 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:38:40 np0005465604 podman[353760]: 2025-10-02 08:38:40.861235289 +0000 UTC m=+0.184245019 container attach 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:38:40 np0005465604 naughty_hofstadter[353795]: 167 167
Oct  2 04:38:40 np0005465604 systemd[1]: libpod-94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5.scope: Deactivated successfully.
Oct  2 04:38:40 np0005465604 conmon[353795]: conmon 94303197026bb03ffc76 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5.scope/container/memory.events
Oct  2 04:38:40 np0005465604 podman[353760]: 2025-10-02 08:38:40.866134957 +0000 UTC m=+0.189144677 container died 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 04:38:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5b52fd148f2844ec0fe8cc68b818875d60738f5b6cb0a3e1019b92a590ed273d-merged.mount: Deactivated successfully.
Oct  2 04:38:40 np0005465604 podman[353760]: 2025-10-02 08:38:40.905455649 +0000 UTC m=+0.228465369 container remove 94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:38:40 np0005465604 systemd[1]: libpod-conmon-94303197026bb03ffc76b9493229c5c7cbccd6b12c536e95da2afd31fa2eadb5.scope: Deactivated successfully.
Oct  2 04:38:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:38:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470657921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.043 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.047 2 DEBUG nova.virt.libvirt.vif [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:38:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2131100839',display_name='tempest-ServerActionsTestOtherA-server-2131100839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2131100839',id=95,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0HjsYe2bQh07GPDTkE/Hkn9NHAzOfs+WPsOxgVRJ14fyGEBr+vw6dokOlyhdtA2fxAJhqFEPbCShkjGLLEAdJQr2B1DlLsi6qyPK3AKei/w52/HIPGV/pd20ma4wEBfQ==',key_name='tempest-keypair-312406508',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-ccglcikc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:38:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=4145f1a3-c327-49ee-9af1-1ace3afb70a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.048 2 DEBUG nova.network.os_vif_util [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.049 2 DEBUG nova.network.os_vif_util [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.052 2 DEBUG nova.objects.instance [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4145f1a3-c327-49ee-9af1-1ace3afb70a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:38:41 np0005465604 podman[353821]: 2025-10-02 08:38:41.088416997 +0000 UTC m=+0.040411546 container create cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.124 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  <uuid>4145f1a3-c327-49ee-9af1-1ace3afb70a5</uuid>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  <name>instance-0000005f</name>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerActionsTestOtherA-server-2131100839</nova:name>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:38:40</nova:creationTime>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <nova:user uuid="d3802fedfb914c27b9b09ad6ea6f4c27">tempest-ServerActionsTestOtherA-249618595-project-member</nova:user>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <nova:project uuid="c75535fe577642038c638a0b01f74d09">tempest-ServerActionsTestOtherA-249618595</nova:project>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <nova:port uuid="4340f7c5-2f3a-4608-b77c-40798457ce79">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <entry name="serial">4145f1a3-c327-49ee-9af1-1ace3afb70a5</entry>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <entry name="uuid">4145f1a3-c327-49ee-9af1-1ace3afb70a5</entry>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:37:12:17"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <target dev="tap4340f7c5-2f"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/console.log" append="off"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:38:41 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:38:41 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:38:41 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:38:41 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.126 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Preparing to wait for external event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.126 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.127 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.127 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.128 2 DEBUG nova.virt.libvirt.vif [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:38:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2131100839',display_name='tempest-ServerActionsTestOtherA-server-2131100839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2131100839',id=95,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0HjsYe2bQh07GPDTkE/Hkn9NHAzOfs+WPsOxgVRJ14fyGEBr+vw6dokOlyhdtA2fxAJhqFEPbCShkjGLLEAdJQr2B1DlLsi6qyPK3AKei/w52/HIPGV/pd20ma4wEBfQ==',key_name='tempest-keypair-312406508',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-ccglcikc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:38:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=4145f1a3-c327-49ee-9af1-1ace3afb70a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.128 2 DEBUG nova.network.os_vif_util [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.129 2 DEBUG nova.network.os_vif_util [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.129 2 DEBUG os_vif [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:41 np0005465604 systemd[1]: Started libpod-conmon-cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930.scope.
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.137 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4340f7c5-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.138 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4340f7c5-2f, col_values=(('external_ids', {'iface-id': '4340f7c5-2f3a-4608-b77c-40798457ce79', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:12:17', 'vm-uuid': '4145f1a3-c327-49ee-9af1-1ace3afb70a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:41 np0005465604 NetworkManager[45129]: <info>  [1759394321.1408] manager: (tap4340f7c5-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.148 2 INFO os_vif [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f')#033[00m
Oct  2 04:38:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:38:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee11d73360d280e50d105667a8d0b8867d24412da271144dc227f1570c11ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee11d73360d280e50d105667a8d0b8867d24412da271144dc227f1570c11ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee11d73360d280e50d105667a8d0b8867d24412da271144dc227f1570c11ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee11d73360d280e50d105667a8d0b8867d24412da271144dc227f1570c11ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:41 np0005465604 podman[353821]: 2025-10-02 08:38:41.0731997 +0000 UTC m=+0.025194259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:38:41 np0005465604 podman[353821]: 2025-10-02 08:38:41.179578897 +0000 UTC m=+0.131573486 container init cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 04:38:41 np0005465604 podman[353821]: 2025-10-02 08:38:41.189931708 +0000 UTC m=+0.141926277 container start cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:38:41 np0005465604 podman[353821]: 2025-10-02 08:38:41.19498967 +0000 UTC m=+0.146984219 container attach cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:38:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.216 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.217 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.217 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No VIF found with MAC fa:16:3e:37:12:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.218 2 INFO nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Using config drive#033[00m
Oct  2 04:38:41 np0005465604 nova_compute[260603]: 2025-10-02 08:38:41.238 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.148 2 INFO nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Creating config drive at /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config#033[00m
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.157 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqzwllo4y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]: {
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "osd_id": 2,
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "type": "bluestore"
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:    },
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "osd_id": 1,
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "type": "bluestore"
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:    },
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "osd_id": 0,
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:        "type": "bluestore"
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]:    }
Oct  2 04:38:42 np0005465604 eloquent_sammet[353838]: }
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.303 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqzwllo4y" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:42 np0005465604 systemd[1]: libpod-cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930.scope: Deactivated successfully.
Oct  2 04:38:42 np0005465604 systemd[1]: libpod-cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930.scope: Consumed 1.122s CPU time.
Oct  2 04:38:42 np0005465604 podman[353821]: 2025-10-02 08:38:42.312422956 +0000 UTC m=+1.264417535 container died cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.337 2 DEBUG nova.storage.rbd_utils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.343 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:38:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-78ee11d73360d280e50d105667a8d0b8867d24412da271144dc227f1570c11ed-merged.mount: Deactivated successfully.
Oct  2 04:38:42 np0005465604 podman[353821]: 2025-10-02 08:38:42.381457351 +0000 UTC m=+1.333451900 container remove cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 04:38:42 np0005465604 systemd[1]: libpod-conmon-cc910bc7eb5c4b7a99059121e837fe32c82b28ec34baaee36781d0848bc54930.scope: Deactivated successfully.
Oct  2 04:38:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:38:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:38:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:38:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:38:42 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0ade339c-beae-4a00-94d3-896940daaea5 does not exist
Oct  2 04:38:42 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a66f40be-364a-4ac9-8657-9d528b4f7cf3 does not exist
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.544 2 DEBUG oslo_concurrency.processutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config 4145f1a3-c327-49ee-9af1-1ace3afb70a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.546 2 INFO nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Deleting local config drive /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5/disk.config because it was imported into RBD.#033[00m
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.549 2 DEBUG nova.network.neutron [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updated VIF entry in instance network info cache for port 4340f7c5-2f3a-4608-b77c-40798457ce79. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.549 2 DEBUG nova.network.neutron [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updating instance_info_cache with network_info: [{"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.575 2 DEBUG oslo_concurrency.lockutils [req-2b1d592f-ad2b-4df4-a7fe-b2479346f5fc req-c1b64455-5d0d-4272-a8b8-16b9ec16d867 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:38:42 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:38:42 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:38:42 np0005465604 kernel: tap4340f7c5-2f: entered promiscuous mode
Oct  2 04:38:42 np0005465604 NetworkManager[45129]: <info>  [1759394322.6243] manager: (tap4340f7c5-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Oct  2 04:38:42 np0005465604 systemd-udevd[354001]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:38:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:42Z|00954|binding|INFO|Claiming lport 4340f7c5-2f3a-4608-b77c-40798457ce79 for this chassis.
Oct  2 04:38:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:42Z|00955|binding|INFO|4340f7c5-2f3a-4608-b77c-40798457ce79: Claiming fa:16:3e:37:12:17 10.100.0.14
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:42 np0005465604 NetworkManager[45129]: <info>  [1759394322.6796] device (tap4340f7c5-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:38:42 np0005465604 NetworkManager[45129]: <info>  [1759394322.6821] device (tap4340f7c5-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.681 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:12:17 10.100.0.14'], port_security=['fa:16:3e:37:12:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4145f1a3-c327-49ee-9af1-1ace3afb70a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7fef51c8-51f5-4bd3-92a8-fdacaab334b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4340f7c5-2f3a-4608-b77c-40798457ce79) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.682 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4340f7c5-2f3a-4608-b77c-40798457ce79 in datapath 28f843b2-396a-4167-9840-21c273bdc044 bound to our chassis#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.683 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.697 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb8b2a0-f60a-4fb6-ab4d-0e26d97a4281]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.699 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28f843b2-31 in ovnmeta-28f843b2-396a-4167-9840-21c273bdc044 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.701 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28f843b2-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.701 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09b2b099-8a64-49fd-8ccd-3ec409050a06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.702 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1f15f370-9439-4a37-96f5-5f142cba0e75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.717 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d77daa-ec2f-46c6-99a6-235fa0b05d7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 systemd-machined[214636]: New machine qemu-118-instance-0000005f.
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.745 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8045ef4-bf21-4fa7-95ee-cf3410a1b856]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 systemd[1]: Started Virtual Machine qemu-118-instance-0000005f.
Oct  2 04:38:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:42Z|00956|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 ovn-installed in OVS
Oct  2 04:38:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:42Z|00957|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 up in Southbound
Oct  2 04:38:42 np0005465604 nova_compute[260603]: 2025-10-02 08:38:42.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.784 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ab86afe8-20c8-4b95-8a18-3169140f79bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.790 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b643427b-ac30-41f2-b8cc-b72afe5d667f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 systemd-udevd[354003]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:38:42 np0005465604 NetworkManager[45129]: <info>  [1759394322.7925] manager: (tap28f843b2-30): new Veth device (/org/freedesktop/NetworkManager/Devices/378)
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.833 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d318107a-6201-498e-b48b-cb3d818e0c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.838 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e286db65-9e7e-4fe0-89d7-c98d551a12bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 NetworkManager[45129]: <info>  [1759394322.8677] device (tap28f843b2-30): carrier: link connected
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.875 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[967dc41b-8b13-4873-99cf-fe08b42d33bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.895 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a609f0-78d8-42f8-bc74-129539b5525e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354037, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.919 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[493521ae-f53e-4f16-800f-1196e2234ba1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:e0ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520146, 'tstamp': 520146}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354038, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.939 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[db8cd795-e25d-4b61-a94c-b3fcd9d190af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354039, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:42.980 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc11c85-f131-4dd6-8105-dc622ff6f870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.055 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[63917c07-2cff-427c-a680-4a17f1c8347b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.056 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.057 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.057 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:43 np0005465604 kernel: tap28f843b2-30: entered promiscuous mode
Oct  2 04:38:43 np0005465604 NetworkManager[45129]: <info>  [1759394323.0615] manager: (tap28f843b2-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.066 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:43Z|00958|binding|INFO|Releasing lport 73da1479-3a84-4798-9da3-841fe88c5e3a from this chassis (sb_readonly=0)
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.102 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28f843b2-396a-4167-9840-21c273bdc044.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28f843b2-396a-4167-9840-21c273bdc044.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.104 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29b1521f-9fcf-47e5-9407-76896b954ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.105 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-28f843b2-396a-4167-9840-21c273bdc044
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/28f843b2-396a-4167-9840-21c273bdc044.pid.haproxy
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 28f843b2-396a-4167-9840-21c273bdc044
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:38:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:43.107 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'env', 'PROCESS_TAG=haproxy-28f843b2-396a-4167-9840-21c273bdc044', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28f843b2-396a-4167-9840-21c273bdc044.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:38:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.258 2 DEBUG nova.compute.manager [req-bfb4b1df-f184-4f50-ace2-91dcff364e4f req-9b90115d-e32d-44ed-bffc-c004157aa383 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.259 2 DEBUG oslo_concurrency.lockutils [req-bfb4b1df-f184-4f50-ace2-91dcff364e4f req-9b90115d-e32d-44ed-bffc-c004157aa383 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.260 2 DEBUG oslo_concurrency.lockutils [req-bfb4b1df-f184-4f50-ace2-91dcff364e4f req-9b90115d-e32d-44ed-bffc-c004157aa383 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.260 2 DEBUG oslo_concurrency.lockutils [req-bfb4b1df-f184-4f50-ace2-91dcff364e4f req-9b90115d-e32d-44ed-bffc-c004157aa383 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.261 2 DEBUG nova.compute.manager [req-bfb4b1df-f184-4f50-ace2-91dcff364e4f req-9b90115d-e32d-44ed-bffc-c004157aa383 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Processing event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:38:43 np0005465604 podman[354113]: 2025-10-02 08:38:43.554740734 +0000 UTC m=+0.077143749 container create 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 04:38:43 np0005465604 systemd[1]: Started libpod-conmon-3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052.scope.
Oct  2 04:38:43 np0005465604 podman[354113]: 2025-10-02 08:38:43.527474265 +0000 UTC m=+0.049877290 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:38:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:38:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb0969f191443a386916053342273928abb5dcb10048e58d342dcf2424fd2bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:38:43 np0005465604 podman[354113]: 2025-10-02 08:38:43.651667828 +0000 UTC m=+0.174070853 container init 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:38:43 np0005465604 podman[354113]: 2025-10-02 08:38:43.661657598 +0000 UTC m=+0.184060603 container start 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 04:38:43 np0005465604 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [NOTICE]   (354132) : New worker (354134) forked
Oct  2 04:38:43 np0005465604 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [NOTICE]   (354132) : Loading success.
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.863 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.864 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394323.8622792, 4145f1a3-c327-49ee-9af1-1ace3afb70a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.864 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] VM Started (Lifecycle Event)#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.871 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.874 2 INFO nova.virt.libvirt.driver [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Instance spawned successfully.#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.875 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.887 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.894 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.900 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.900 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.901 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.901 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.902 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.902 2 DEBUG nova.virt.libvirt.driver [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.911 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.912 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394323.8638384, 4145f1a3-c327-49ee-9af1-1ace3afb70a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.912 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.931 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.936 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394323.8699152, 4145f1a3-c327-49ee-9af1-1ace3afb70a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.937 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.964 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.969 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.977 2 INFO nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Took 8.85 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.978 2 DEBUG nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:38:43 np0005465604 nova_compute[260603]: 2025-10-02 08:38:43.993 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:38:44 np0005465604 nova_compute[260603]: 2025-10-02 08:38:44.059 2 INFO nova.compute.manager [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Took 9.87 seconds to build instance.#033[00m
Oct  2 04:38:44 np0005465604 nova_compute[260603]: 2025-10-02 08:38:44.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:38:44 np0005465604 nova_compute[260603]: 2025-10-02 08:38:44.087 2 DEBUG oslo_concurrency.lockutils [None req-4f962b05-be63-4b6d-90c2-cb112012790d d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:38:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct  2 04:38:45 np0005465604 nova_compute[260603]: 2025-10-02 08:38:45.389 2 DEBUG nova.compute.manager [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:38:45 np0005465604 nova_compute[260603]: 2025-10-02 08:38:45.389 2 DEBUG oslo_concurrency.lockutils [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:38:45 np0005465604 nova_compute[260603]: 2025-10-02 08:38:45.389 2 DEBUG oslo_concurrency.lockutils [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:38:45 np0005465604 nova_compute[260603]: 2025-10-02 08:38:45.389 2 DEBUG oslo_concurrency.lockutils [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:38:45 np0005465604 nova_compute[260603]: 2025-10-02 08:38:45.389 2 DEBUG nova.compute.manager [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:38:45 np0005465604 nova_compute[260603]: 2025-10-02 08:38:45.389 2 WARNING nova.compute.manager [req-2f161c8f-b8d2-41df-a15c-a6eec48fbf7d req-6738fbdb-f104-4c1a-867f-6149f4161741 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received unexpected event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with vm_state active and task_state None.
Oct  2 04:38:46 np0005465604 nova_compute[260603]: 2025-10-02 08:38:46.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:38:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct  2 04:38:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:48Z|00959|binding|INFO|Releasing lport 73da1479-3a84-4798-9da3-841fe88c5e3a from this chassis (sb_readonly=0)
Oct  2 04:38:48 np0005465604 nova_compute[260603]: 2025-10-02 08:38:48.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:38:48 np0005465604 NetworkManager[45129]: <info>  [1759394328.6411] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Oct  2 04:38:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:48Z|00960|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct  2 04:38:48 np0005465604 NetworkManager[45129]: <info>  [1759394328.6429] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Oct  2 04:38:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:48Z|00961|binding|INFO|Releasing lport 73da1479-3a84-4798-9da3-841fe88c5e3a from this chassis (sb_readonly=0)
Oct  2 04:38:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:48Z|00962|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct  2 04:38:48 np0005465604 nova_compute[260603]: 2025-10-02 08:38:48.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:38:48 np0005465604 nova_compute[260603]: 2025-10-02 08:38:48.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:38:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:49.110 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:38:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:49.112 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 04:38:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:38:49.115 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:38:49 np0005465604 nova_compute[260603]: 2025-10-02 08:38:49.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:38:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 89 op/s
Oct  2 04:38:49 np0005465604 nova_compute[260603]: 2025-10-02 08:38:49.309 2 DEBUG nova.compute.manager [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-changed-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:38:49 np0005465604 nova_compute[260603]: 2025-10-02 08:38:49.309 2 DEBUG nova.compute.manager [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Refreshing instance network info cache due to event network-changed-4340f7c5-2f3a-4608-b77c-40798457ce79. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:38:49 np0005465604 nova_compute[260603]: 2025-10-02 08:38:49.310 2 DEBUG oslo_concurrency.lockutils [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:38:49 np0005465604 nova_compute[260603]: 2025-10-02 08:38:49.310 2 DEBUG oslo_concurrency.lockutils [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:38:49 np0005465604 nova_compute[260603]: 2025-10-02 08:38:49.310 2 DEBUG nova.network.neutron [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Refreshing network info cache for port 4340f7c5-2f3a-4608-b77c-40798457ce79 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:38:51 np0005465604 nova_compute[260603]: 2025-10-02 08:38:51.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:38:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Oct  2 04:38:51 np0005465604 nova_compute[260603]: 2025-10-02 08:38:51.402 2 DEBUG nova.network.neutron [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updated VIF entry in instance network info cache for port 4340f7c5-2f3a-4608-b77c-40798457ce79. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 04:38:51 np0005465604 nova_compute[260603]: 2025-10-02 08:38:51.403 2 DEBUG nova.network.neutron [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updating instance_info_cache with network_info: [{"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:38:51 np0005465604 nova_compute[260603]: 2025-10-02 08:38:51.421 2 DEBUG oslo_concurrency.lockutils [req-432ef985-1734-4ece-b395-4ee11c9a7ffc req-3060643c-d06f-453b-882b-63152ff0e4b7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4145f1a3-c327-49ee-9af1-1ace3afb70a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.623770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331623801, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 683, "num_deletes": 251, "total_data_size": 778606, "memory_usage": 791352, "flush_reason": "Manual Compaction"}
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331629674, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 770876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37082, "largest_seqno": 37764, "table_properties": {"data_size": 767315, "index_size": 1405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8299, "raw_average_key_size": 19, "raw_value_size": 760138, "raw_average_value_size": 1784, "num_data_blocks": 62, "num_entries": 426, "num_filter_entries": 426, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394282, "oldest_key_time": 1759394282, "file_creation_time": 1759394331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 5940 microseconds, and 2752 cpu microseconds.
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.629710) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 770876 bytes OK
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.629725) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.631243) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.631273) EVENT_LOG_v1 {"time_micros": 1759394331631263, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.631298) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 775014, prev total WAL file size 775014, number of live WAL files 2.
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.632007) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(752KB)], [80(8700KB)]
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331632035, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9680345, "oldest_snapshot_seqno": -1}
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6243 keys, 8045459 bytes, temperature: kUnknown
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331667424, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8045459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8004616, "index_size": 24157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157857, "raw_average_key_size": 25, "raw_value_size": 7893594, "raw_average_value_size": 1264, "num_data_blocks": 972, "num_entries": 6243, "num_filter_entries": 6243, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394331, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.667589) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8045459 bytes
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.668690) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 273.1 rd, 227.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.5 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(23.0) write-amplify(10.4) OK, records in: 6756, records dropped: 513 output_compression: NoCompression
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.668706) EVENT_LOG_v1 {"time_micros": 1759394331668698, "job": 46, "event": "compaction_finished", "compaction_time_micros": 35442, "compaction_time_cpu_micros": 18014, "output_level": 6, "num_output_files": 1, "total_output_size": 8045459, "num_input_records": 6756, "num_output_records": 6243, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331668903, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394331670396, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.631956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.670425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.670429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.670431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.670433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:51 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:38:51.670435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:38:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Oct  2 04:38:54 np0005465604 nova_compute[260603]: 2025-10-02 08:38:54.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:38:54 np0005465604 nova_compute[260603]: 2025-10-02 08:38:54.971 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:38:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 167 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 70 op/s
Oct  2 04:38:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:56Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:12:17 10.100.0.14
Oct  2 04:38:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:38:56Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:12:17 10.100.0.14
Oct  2 04:38:56 np0005465604 nova_compute[260603]: 2025-10-02 08:38:56.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:38:57 np0005465604 podman[354146]: 2025-10-02 08:38:57.030011324 +0000 UTC m=+0.078364296 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:38:57 np0005465604 podman[354145]: 2025-10-02 08:38:57.065606594 +0000 UTC m=+0.124512054 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 04:38:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 178 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 779 KiB/s wr, 95 op/s
Oct  2 04:38:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:38:59 np0005465604 nova_compute[260603]: 2025-10-02 08:38:59.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:38:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct  2 04:38:59 np0005465604 nova_compute[260603]: 2025-10-02 08:38:59.387 2 INFO nova.compute.manager [None req-6d4e50e0-bb53-4a84-ba7f-4f0f1d2a48f4 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Pausing
Oct  2 04:38:59 np0005465604 nova_compute[260603]: 2025-10-02 08:38:59.388 2 DEBUG nova.objects.instance [None req-6d4e50e0-bb53-4a84-ba7f-4f0f1d2a48f4 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'flavor' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:38:59 np0005465604 nova_compute[260603]: 2025-10-02 08:38:59.418 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394339.4183438, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:38:59 np0005465604 nova_compute[260603]: 2025-10-02 08:38:59.419 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Paused (Lifecycle Event)
Oct  2 04:38:59 np0005465604 nova_compute[260603]: 2025-10-02 08:38:59.421 2 DEBUG nova.compute.manager [None req-6d4e50e0-bb53-4a84-ba7f-4f0f1d2a48f4 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:38:59 np0005465604 nova_compute[260603]: 2025-10-02 08:38:59.461 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:38:59 np0005465604 nova_compute[260603]: 2025-10-02 08:38:59.465 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:39:01 np0005465604 nova_compute[260603]: 2025-10-02 08:39:01.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:39:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 04:39:01 np0005465604 nova_compute[260603]: 2025-10-02 08:39:01.908 2 INFO nova.compute.manager [None req-3a5d1c68-1330-44c8-a6b5-9a52ec2ff462 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Unpausing#033[00m
Oct  2 04:39:01 np0005465604 nova_compute[260603]: 2025-10-02 08:39:01.909 2 DEBUG nova.objects.instance [None req-3a5d1c68-1330-44c8-a6b5-9a52ec2ff462 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'flavor' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:01 np0005465604 nova_compute[260603]: 2025-10-02 08:39:01.948 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394341.9481072, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:01 np0005465604 nova_compute[260603]: 2025-10-02 08:39:01.949 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:39:01 np0005465604 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct  2 04:39:01 np0005465604 nova_compute[260603]: 2025-10-02 08:39:01.954 2 DEBUG nova.virt.libvirt.guest [None req-3a5d1c68-1330-44c8-a6b5-9a52ec2ff462 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 04:39:01 np0005465604 nova_compute[260603]: 2025-10-02 08:39:01.955 2 DEBUG nova.compute.manager [None req-3a5d1c68-1330-44c8-a6b5-9a52ec2ff462 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:01 np0005465604 nova_compute[260603]: 2025-10-02 08:39:01.986 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:01 np0005465604 nova_compute[260603]: 2025-10-02 08:39:01.990 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:39:02 np0005465604 nova_compute[260603]: 2025-10-02 08:39:02.026 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct  2 04:39:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 04:39:04 np0005465604 podman[354189]: 2025-10-02 08:39:04.043546101 +0000 UTC m=+0.098500021 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 04:39:04 np0005465604 nova_compute[260603]: 2025-10-02 08:39:04.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:05 np0005465604 podman[354209]: 2025-10-02 08:39:05.04795735 +0000 UTC m=+0.104576464 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:39:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 04:39:06 np0005465604 nova_compute[260603]: 2025-10-02 08:39:06.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  2 04:39:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:09 np0005465604 nova_compute[260603]: 2025-10-02 08:39:09.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Oct  2 04:39:09 np0005465604 nova_compute[260603]: 2025-10-02 08:39:09.496 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:09 np0005465604 nova_compute[260603]: 2025-10-02 08:39:09.496 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:09 np0005465604 nova_compute[260603]: 2025-10-02 08:39:09.516 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:39:09 np0005465604 nova_compute[260603]: 2025-10-02 08:39:09.594 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:09 np0005465604 nova_compute[260603]: 2025-10-02 08:39:09.595 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:09 np0005465604 nova_compute[260603]: 2025-10-02 08:39:09.604 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:39:09 np0005465604 nova_compute[260603]: 2025-10-02 08:39:09.604 2 INFO nova.compute.claims [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:39:09 np0005465604 nova_compute[260603]: 2025-10-02 08:39:09.733 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:39:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652893759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.153 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.161 2 DEBUG nova.compute.provider_tree [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.182 2 DEBUG nova.scheduler.client.report [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.207 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.213 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.258 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.258 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.275 2 INFO nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.299 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.403 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.404 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.404 2 INFO nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating image(s)#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.433 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.459 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.488 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.492 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.535 2 DEBUG nova.policy [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd3802fedfb914c27b9b09ad6ea6f4c27', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c75535fe577642038c638a0b01f74d09', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.575 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.576 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.577 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.577 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.608 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.613 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:10 np0005465604 nova_compute[260603]: 2025-10-02 08:39:10.980 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:11 np0005465604 nova_compute[260603]: 2025-10-02 08:39:11.065 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] resizing rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:39:11 np0005465604 nova_compute[260603]: 2025-10-02 08:39:11.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:11 np0005465604 nova_compute[260603]: 2025-10-02 08:39:11.179 2 DEBUG nova.objects.instance [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'migration_context' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:11 np0005465604 nova_compute[260603]: 2025-10-02 08:39:11.208 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:39:11 np0005465604 nova_compute[260603]: 2025-10-02 08:39:11.208 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Ensure instance console log exists: /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:39:11 np0005465604 nova_compute[260603]: 2025-10-02 08:39:11.209 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:11 np0005465604 nova_compute[260603]: 2025-10-02 08:39:11.209 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:11 np0005465604 nova_compute[260603]: 2025-10-02 08:39:11.210 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 200 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Oct  2 04:39:11 np0005465604 nova_compute[260603]: 2025-10-02 08:39:11.287 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Successfully created port: 664c458b-abee-4d23-a60f-a0032a8f6058 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:39:12 np0005465604 nova_compute[260603]: 2025-10-02 08:39:12.164 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Successfully updated port: 664c458b-abee-4d23-a60f-a0032a8f6058 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:39:12 np0005465604 nova_compute[260603]: 2025-10-02 08:39:12.189 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:39:12 np0005465604 nova_compute[260603]: 2025-10-02 08:39:12.189 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquired lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:39:12 np0005465604 nova_compute[260603]: 2025-10-02 08:39:12.189 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:39:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:12 np0005465604 nova_compute[260603]: 2025-10-02 08:39:12.278 2 DEBUG nova.compute.manager [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-changed-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:12 np0005465604 nova_compute[260603]: 2025-10-02 08:39:12.279 2 DEBUG nova.compute.manager [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Refreshing instance network info cache due to event network-changed-664c458b-abee-4d23-a60f-a0032a8f6058. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:39:12 np0005465604 nova_compute[260603]: 2025-10-02 08:39:12.279 2 DEBUG oslo_concurrency.lockutils [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:39:12 np0005465604 nova_compute[260603]: 2025-10-02 08:39:12.349 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:39:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.273 2 DEBUG nova.network.neutron [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Updating instance_info_cache with network_info: [{"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.297 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Releasing lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.298 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance network_info: |[{"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.298 2 DEBUG oslo_concurrency.lockutils [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.298 2 DEBUG nova.network.neutron [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Refreshing network info cache for port 664c458b-abee-4d23-a60f-a0032a8f6058 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.301 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start _get_guest_xml network_info=[{"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.306 2 WARNING nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.312 2 DEBUG nova.virt.libvirt.host [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.312 2 DEBUG nova.virt.libvirt.host [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.316 2 DEBUG nova.virt.libvirt.host [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.317 2 DEBUG nova.virt.libvirt.host [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.317 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.318 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.318 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.318 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.318 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.319 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.319 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.319 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.319 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.319 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.320 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.320 2 DEBUG nova.virt.hardware [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.323 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:39:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4062935910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.805 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.841 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:13 np0005465604 nova_compute[260603]: 2025-10-02 08:39:13.847 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:39:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446225819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.309 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.312 2 DEBUG nova.virt.libvirt.vif [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTest
OtherA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:10Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.313 2 DEBUG nova.network.os_vif_util [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.315 2 DEBUG nova.network.os_vif_util [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.317 2 DEBUG nova.objects.instance [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.342 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  <uuid>49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</uuid>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  <name>instance-00000060</name>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <nova:name>tempest-tempest.common.compute-instance-487849428</nova:name>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:39:13</nova:creationTime>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <nova:user uuid="d3802fedfb914c27b9b09ad6ea6f4c27">tempest-ServerActionsTestOtherA-249618595-project-member</nova:user>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <nova:project uuid="c75535fe577642038c638a0b01f74d09">tempest-ServerActionsTestOtherA-249618595</nova:project>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <nova:port uuid="664c458b-abee-4d23-a60f-a0032a8f6058">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <entry name="serial">49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</entry>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <entry name="uuid">49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</entry>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:f7:c6:d9"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <target dev="tap664c458b-ab"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/console.log" append="off"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:39:14 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:39:14 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:39:14 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:39:14 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.345 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Preparing to wait for external event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.346 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.346 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.347 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.349 2 DEBUG nova.virt.libvirt.vif [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerA
ctionsTestOtherA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:10Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.349 2 DEBUG nova.network.os_vif_util [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.350 2 DEBUG nova.network.os_vif_util [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.351 2 DEBUG os_vif [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.354 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.361 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap664c458b-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap664c458b-ab, col_values=(('external_ids', {'iface-id': '664c458b-abee-4d23-a60f-a0032a8f6058', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:c6:d9', 'vm-uuid': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:14 np0005465604 NetworkManager[45129]: <info>  [1759394354.3679] manager: (tap664c458b-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.376 2 INFO os_vif [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab')#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.452 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.453 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.454 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No VIF found with MAC fa:16:3e:f7:c6:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.454 2 INFO nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Using config drive#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.486 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.950 2 INFO nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating config drive at /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config#033[00m
Oct  2 04:39:14 np0005465604 nova_compute[260603]: 2025-10-02 08:39:14.956 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2gzr6lj1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.119 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2gzr6lj1" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.161 2 DEBUG nova.storage.rbd_utils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.165 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.248 2 DEBUG nova.network.neutron [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Updated VIF entry in instance network info cache for port 664c458b-abee-4d23-a60f-a0032a8f6058. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.249 2 DEBUG nova.network.neutron [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Updating instance_info_cache with network_info: [{"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.270 2 DEBUG oslo_concurrency.lockutils [req-3a5dfb48-1f91-4b30-8de3-7f995a8c1d7a req-b73b6019-452e-4bdd-9b19-e830bfa8d20c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.363 2 DEBUG oslo_concurrency.processutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.364 2 INFO nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deleting local config drive /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config because it was imported into RBD.#033[00m
Oct  2 04:39:15 np0005465604 kernel: tap664c458b-ab: entered promiscuous mode
Oct  2 04:39:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:15Z|00963|binding|INFO|Claiming lport 664c458b-abee-4d23-a60f-a0032a8f6058 for this chassis.
Oct  2 04:39:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:15Z|00964|binding|INFO|664c458b-abee-4d23-a60f-a0032a8f6058: Claiming fa:16:3e:f7:c6:d9 10.100.0.6
Oct  2 04:39:15 np0005465604 NetworkManager[45129]: <info>  [1759394355.4253] manager: (tap664c458b-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.434 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:c6:d9 10.100.0.6'], port_security=['fa:16:3e:f7:c6:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd37b83fc-7239-49dc-8ab1-fc95753c436a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=664c458b-abee-4d23-a60f-a0032a8f6058) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.436 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 664c458b-abee-4d23-a60f-a0032a8f6058 in datapath 28f843b2-396a-4167-9840-21c273bdc044 bound to our chassis#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.438 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044#033[00m
Oct  2 04:39:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:15Z|00965|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 ovn-installed in OVS
Oct  2 04:39:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:15Z|00966|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 up in Southbound
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.463 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6aff00-15f8-4672-a47c-1b6aab358091]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:15 np0005465604 systemd-machined[214636]: New machine qemu-119-instance-00000060.
Oct  2 04:39:15 np0005465604 systemd[1]: Started Virtual Machine qemu-119-instance-00000060.
Oct  2 04:39:15 np0005465604 systemd-udevd[354556]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.496 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f0f67d31-a4be-43a4-9553-e80f95bd11d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.501 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[44b7d020-c6e3-4acd-a89d-fd902241c5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:15 np0005465604 NetworkManager[45129]: <info>  [1759394355.5067] device (tap664c458b-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:39:15 np0005465604 NetworkManager[45129]: <info>  [1759394355.5074] device (tap664c458b-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.546 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[19b87f7b-d698-4e3d-a334-5eb9836d2005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.562 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4a53fd-88eb-499f-937f-15e3ed3e130c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354566, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.579 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f20d59-9e5a-4b24-b691-85dde08ec94b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354568, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354568, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.581 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:15 np0005465604 nova_compute[260603]: 2025-10-02 08:39:15.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.584 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.585 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.585 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:15.586 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:16 np0005465604 nova_compute[260603]: 2025-10-02 08:39:16.782 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394356.7820427, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:16 np0005465604 nova_compute[260603]: 2025-10-02 08:39:16.782 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Started (Lifecycle Event)#033[00m
Oct  2 04:39:16 np0005465604 nova_compute[260603]: 2025-10-02 08:39:16.806 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:16 np0005465604 nova_compute[260603]: 2025-10-02 08:39:16.812 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394356.784772, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:16 np0005465604 nova_compute[260603]: 2025-10-02 08:39:16.813 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:39:16 np0005465604 nova_compute[260603]: 2025-10-02 08:39:16.835 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:16 np0005465604 nova_compute[260603]: 2025-10-02 08:39:16.839 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:39:16 np0005465604 nova_compute[260603]: 2025-10-02 08:39:16.862 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:39:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  2 04:39:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:17 np0005465604 nova_compute[260603]: 2025-10-02 08:39:17.561 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:39:17 np0005465604 nova_compute[260603]: 2025-10-02 08:39:17.561 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:39:19 np0005465604 nova_compute[260603]: 2025-10-02 08:39:19.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct  2 04:39:19 np0005465604 nova_compute[260603]: 2025-10-02 08:39:19.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.644 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.645 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.645 2 INFO nova.compute.manager [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Shelving#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.668 2 DEBUG nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.885 2 DEBUG nova.compute.manager [req-da29672c-63e0-4208-989f-ac3e5f12e6d2 req-e46d1a8f-4921-4fb0-826b-bf6075f94e0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.886 2 DEBUG oslo_concurrency.lockutils [req-da29672c-63e0-4208-989f-ac3e5f12e6d2 req-e46d1a8f-4921-4fb0-826b-bf6075f94e0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.886 2 DEBUG oslo_concurrency.lockutils [req-da29672c-63e0-4208-989f-ac3e5f12e6d2 req-e46d1a8f-4921-4fb0-826b-bf6075f94e0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.887 2 DEBUG oslo_concurrency.lockutils [req-da29672c-63e0-4208-989f-ac3e5f12e6d2 req-e46d1a8f-4921-4fb0-826b-bf6075f94e0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.887 2 DEBUG nova.compute.manager [req-da29672c-63e0-4208-989f-ac3e5f12e6d2 req-e46d1a8f-4921-4fb0-826b-bf6075f94e0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Processing event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.888 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.894 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394360.8936667, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.894 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.897 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.902 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance spawned successfully.#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.903 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.915 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.926 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.935 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.936 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.936 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.937 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.938 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.939 2 DEBUG nova.virt.libvirt.driver [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:20 np0005465604 nova_compute[260603]: 2025-10-02 08:39:20.947 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:39:21 np0005465604 nova_compute[260603]: 2025-10-02 08:39:21.001 2 INFO nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Took 10.60 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:39:21 np0005465604 nova_compute[260603]: 2025-10-02 08:39:21.001 2 DEBUG nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:21 np0005465604 nova_compute[260603]: 2025-10-02 08:39:21.129 2 INFO nova.compute.manager [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Took 11.57 seconds to build instance.#033[00m
Oct  2 04:39:21 np0005465604 nova_compute[260603]: 2025-10-02 08:39:21.148 2 DEBUG oslo_concurrency.lockutils [None req-0a823d5b-80c2-42a2-bd93-279d086d96c3 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct  2 04:39:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:39:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/688866439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:39:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:39:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/688866439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:39:22 np0005465604 nova_compute[260603]: 2025-10-02 08:39:22.101 2 DEBUG oslo_concurrency.lockutils [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:22 np0005465604 nova_compute[260603]: 2025-10-02 08:39:22.102 2 DEBUG oslo_concurrency.lockutils [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:22 np0005465604 nova_compute[260603]: 2025-10-02 08:39:22.102 2 DEBUG nova.compute.manager [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:22 np0005465604 nova_compute[260603]: 2025-10-02 08:39:22.107 2 DEBUG nova.compute.manager [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 04:39:22 np0005465604 nova_compute[260603]: 2025-10-02 08:39:22.108 2 DEBUG nova.objects.instance [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'flavor' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:22 np0005465604 nova_compute[260603]: 2025-10-02 08:39:22.137 2 DEBUG nova.virt.libvirt.driver [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:39:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct  2 04:39:23 np0005465604 nova_compute[260603]: 2025-10-02 08:39:23.343 2 DEBUG nova.compute.manager [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:23 np0005465604 nova_compute[260603]: 2025-10-02 08:39:23.344 2 DEBUG oslo_concurrency.lockutils [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:23 np0005465604 nova_compute[260603]: 2025-10-02 08:39:23.344 2 DEBUG oslo_concurrency.lockutils [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:23 np0005465604 nova_compute[260603]: 2025-10-02 08:39:23.344 2 DEBUG oslo_concurrency.lockutils [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:23 np0005465604 nova_compute[260603]: 2025-10-02 08:39:23.344 2 DEBUG nova.compute.manager [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:23 np0005465604 nova_compute[260603]: 2025-10-02 08:39:23.345 2 WARNING nova.compute.manager [req-49bbc126-9fa9-4dfd-8415-0bc9dcc4e491 req-fd3e0879-a024-4f47-ba34-4801e78b7d98 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state active and task_state powering-off.#033[00m
Oct  2 04:39:23 np0005465604 kernel: tap00fa1373-e4 (unregistering): left promiscuous mode
Oct  2 04:39:23 np0005465604 nova_compute[260603]: 2025-10-02 08:39:23.694 2 INFO nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 04:39:23 np0005465604 NetworkManager[45129]: <info>  [1759394363.7002] device (tap00fa1373-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:39:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:23Z|00967|binding|INFO|Releasing lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb from this chassis (sb_readonly=0)
Oct  2 04:39:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:23Z|00968|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb down in Southbound
Oct  2 04:39:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:23Z|00969|binding|INFO|Removing iface tap00fa1373-e4 ovn-installed in OVS
Oct  2 04:39:23 np0005465604 nova_compute[260603]: 2025-10-02 08:39:23.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:23.723 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:23.724 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis#033[00m
Oct  2 04:39:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:23.726 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19d584c3-e754-47d1-9cdf-c6badbd670d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:39:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:23.727 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3e0017-388a-485a-9aef-0f80202a7dac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:23.727 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace which is not needed anymore#033[00m
Oct  2 04:39:23 np0005465604 nova_compute[260603]: 2025-10-02 08:39:23.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:23 np0005465604 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct  2 04:39:23 np0005465604 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005d.scope: Consumed 15.766s CPU time.
Oct  2 04:39:23 np0005465604 systemd-machined[214636]: Machine qemu-116-instance-0000005d terminated.
Oct  2 04:39:23 np0005465604 nova_compute[260603]: 2025-10-02 08:39:23.949 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance destroyed successfully.#033[00m
Oct  2 04:39:23 np0005465604 nova_compute[260603]: 2025-10-02 08:39:23.950 2 DEBUG nova.objects.instance [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'numa_topology' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:23 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [NOTICE]   (352063) : haproxy version is 2.8.14-c23fe91
Oct  2 04:39:23 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [NOTICE]   (352063) : path to executable is /usr/sbin/haproxy
Oct  2 04:39:23 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [ALERT]    (352063) : Current worker (352065) exited with code 143 (Terminated)
Oct  2 04:39:23 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[352059]: [WARNING]  (352063) : All workers exited. Exiting... (0)
Oct  2 04:39:23 np0005465604 systemd[1]: libpod-db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1.scope: Deactivated successfully.
Oct  2 04:39:23 np0005465604 podman[354633]: 2025-10-02 08:39:23.965163729 +0000 UTC m=+0.094625445 container died db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 04:39:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1-userdata-shm.mount: Deactivated successfully.
Oct  2 04:39:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3a74f8d6643ae70d1fa74bb44614c3aa06e7ac08d3f73b3369e5ce67a1513ec1-merged.mount: Deactivated successfully.
Oct  2 04:39:24 np0005465604 podman[354633]: 2025-10-02 08:39:24.068404022 +0000 UTC m=+0.197865738 container cleanup db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 04:39:24 np0005465604 systemd[1]: libpod-conmon-db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1.scope: Deactivated successfully.
Oct  2 04:39:24 np0005465604 podman[354673]: 2025-10-02 08:39:24.1558356 +0000 UTC m=+0.050933842 container remove db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:39:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.163 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6b630e23-b37c-407e-9622-08374c92bbcf]: (4, ('Thu Oct  2 08:39:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1)\ndb9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1\nThu Oct  2 08:39:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (db9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1)\ndb9633532e8eca37b8edd31ccc0c9475c397daa3585d89dc06c0976218c657c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7d27c7-7297-4c29-bf05-0b5391c696bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.166 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:24 np0005465604 kernel: tap19d584c3-e0: left promiscuous mode
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.192 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d1dbcf4f-59be-4ffa-8e2e-43cee198fc90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.231 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[685da682-26f9-4b5b-8544-cc5832eb6093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.232 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b57b3e2f-d766-42a7-86ec-bccbacf50efe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.254 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[635786a6-59da-4ee4-92b6-1643bd67f16c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516127, 'reachable_time': 17502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354691, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:24 np0005465604 systemd[1]: run-netns-ovnmeta\x2d19d584c3\x2de754\x2d47d1\x2d9cdf\x2dc6badbd670d7.mount: Deactivated successfully.
Oct  2 04:39:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.259 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:39:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:24.259 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9b0437-164a-42c0-9cfb-3afd6729fc5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.269 2 INFO nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Beginning cold snapshot process#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.442 2 DEBUG nova.virt.libvirt.imagebackend [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.546 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.547 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.547 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:24 np0005465604 nova_compute[260603]: 2025-10-02 08:39:24.681 2 DEBUG nova.storage.rbd_utils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] creating snapshot(72de10cf6cef4bf1bbe60a5c520f8744) on rbd image(fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:39:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct  2 04:39:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct  2 04:39:25 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.195 2 DEBUG nova.storage.rbd_utils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] cloning vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk@72de10cf6cef4bf1bbe60a5c520f8744 to images/1ca55329-185d-4f33-a25c-d8eaed50208d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:39:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 246 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 92 op/s
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.298 2 DEBUG nova.storage.rbd_utils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] flattening images/1ca55329-185d-4f33-a25c-d8eaed50208d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.478 2 DEBUG nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.478 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.478 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.479 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.479 2 DEBUG nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.479 2 WARNING nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.479 2 DEBUG nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.480 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.480 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.481 2 DEBUG oslo_concurrency.lockutils [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.483 2 DEBUG nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.483 2 WARNING nova.compute.manager [req-36dd2766-710d-4576-bac9-04f9e6b49cb7 req-0a2b2115-f629-47e0-8535-5e53c0fa557c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 04:39:25 np0005465604 nova_compute[260603]: 2025-10-02 08:39:25.713 2 DEBUG nova.storage.rbd_utils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] removing snapshot(72de10cf6cef4bf1bbe60a5c520f8744) on rbd image(fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:39:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct  2 04:39:26 np0005465604 nova_compute[260603]: 2025-10-02 08:39:26.122 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:39:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct  2 04:39:26 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct  2 04:39:26 np0005465604 nova_compute[260603]: 2025-10-02 08:39:26.143 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:39:26 np0005465604 nova_compute[260603]: 2025-10-02 08:39:26.144 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:39:26 np0005465604 nova_compute[260603]: 2025-10-02 08:39:26.144 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:39:26 np0005465604 nova_compute[260603]: 2025-10-02 08:39:26.145 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:39:26 np0005465604 nova_compute[260603]: 2025-10-02 08:39:26.165 2 DEBUG nova.storage.rbd_utils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] creating snapshot(snap) on rbd image(1ca55329-185d-4f33-a25c-d8eaed50208d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 279 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.0 MiB/s wr, 229 op/s
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.275521) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367275559, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 548, "num_deletes": 256, "total_data_size": 543014, "memory_usage": 553480, "flush_reason": "Manual Compaction"}
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367279926, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 538054, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37765, "largest_seqno": 38312, "table_properties": {"data_size": 534994, "index_size": 1032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6963, "raw_average_key_size": 18, "raw_value_size": 528848, "raw_average_value_size": 1406, "num_data_blocks": 46, "num_entries": 376, "num_filter_entries": 376, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394332, "oldest_key_time": 1759394332, "file_creation_time": 1759394367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 4432 microseconds, and 2288 cpu microseconds.
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.279956) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 538054 bytes OK
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.279969) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.283308) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.283318) EVENT_LOG_v1 {"time_micros": 1759394367283315, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.283331) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 539894, prev total WAL file size 539894, number of live WAL files 2.
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.283692) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323532' seq:72057594037927935, type:22 .. '6C6F676D0031353034' seq:0, type:0; will stop at (end)
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(525KB)], [83(7856KB)]
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367283719, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8583513, "oldest_snapshot_seqno": -1}
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6095 keys, 8465720 bytes, temperature: kUnknown
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367322270, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8465720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8424740, "index_size": 24614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 155752, "raw_average_key_size": 25, "raw_value_size": 8315204, "raw_average_value_size": 1364, "num_data_blocks": 989, "num_entries": 6095, "num_filter_entries": 6095, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.322441) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8465720 bytes
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.323546) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.4 rd, 219.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 7.7 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(31.7) write-amplify(15.7) OK, records in: 6619, records dropped: 524 output_compression: NoCompression
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.323565) EVENT_LOG_v1 {"time_micros": 1759394367323556, "job": 48, "event": "compaction_finished", "compaction_time_micros": 38597, "compaction_time_cpu_micros": 19437, "output_level": 6, "num_output_files": 1, "total_output_size": 8465720, "num_input_records": 6619, "num_output_records": 6095, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367323741, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394367324946, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.283617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.324975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.324980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.324982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.324984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:39:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:39:27.324987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:39:27 np0005465604 nova_compute[260603]: 2025-10-02 08:39:27.552 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:39:27
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.control', 'images', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Oct  2 04:39:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:39:28 np0005465604 podman[354834]: 2025-10-02 08:39:28.026458914 +0000 UTC m=+0.079988555 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:39:28 np0005465604 podman[354833]: 2025-10-02 08:39:28.071721325 +0000 UTC m=+0.124780882 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
Oct  2 04:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.347 2 INFO nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Snapshot image upload complete#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.348 2 DEBUG nova.compute.manager [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.403 2 INFO nova.compute.manager [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Shelve offloading#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.410 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance destroyed successfully.#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.411 2 DEBUG nova.compute.manager [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.413 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.414 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.414 2 DEBUG nova.network.neutron [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.543 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:39:28 np0005465604 nova_compute[260603]: 2025-10-02 08:39:28.544 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:28.806 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:28.808 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated#033[00m
Oct  2 04:39:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:28.810 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:39:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:28.811 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[14e2795c-8edb-4fe1-b267-5257ab0f6e7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:39:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2753247979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.002 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.094 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.094 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.099 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.100 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.104 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.104 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 326 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 283 op/s
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.392 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.393 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3460MB free_disk=59.87623596191406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.394 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.394 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.481 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.481 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 4145f1a3-c327-49ee-9af1-1ace3afb70a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.482 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.482 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.482 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.484 2 DEBUG nova.network.neutron [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.499 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.508 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.528 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.528 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.598 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.621 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 04:39:29 np0005465604 nova_compute[260603]: 2025-10-02 08:39:29.693 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:39:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554622571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.180 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.187 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.207 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.235 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.236 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.336 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance destroyed successfully.#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.337 2 DEBUG nova.objects.instance [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'resources' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.353 2 DEBUG nova.virt.libvirt.vif [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:38:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member',shelved_at='2025-10-02T08:39:28.348390',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='1ca55329-185d-4f33-a25c-d8eaed50208d'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:39:24Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.354 2 DEBUG nova.network.os_vif_util [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.356 2 DEBUG nova.network.os_vif_util [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.357 2 DEBUG os_vif [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00fa1373-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.392 2 INFO os_vif [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4')#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.438 2 DEBUG nova.compute.manager [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.439 2 DEBUG nova.compute.manager [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing instance network info cache due to event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.439 2 DEBUG oslo_concurrency.lockutils [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.440 2 DEBUG oslo_concurrency.lockutils [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.440 2 DEBUG nova.network.neutron [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.856 2 INFO nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deleting instance files /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_del#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.857 2 INFO nova.virt.libvirt.driver [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deletion of /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_del complete#033[00m
Oct  2 04:39:30 np0005465604 nova_compute[260603]: 2025-10-02 08:39:30.957 2 INFO nova.scheduler.client.report [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Deleted allocations for instance fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b#033[00m
Oct  2 04:39:31 np0005465604 nova_compute[260603]: 2025-10-02 08:39:31.009 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:31 np0005465604 nova_compute[260603]: 2025-10-02 08:39:31.010 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:31 np0005465604 nova_compute[260603]: 2025-10-02 08:39:31.091 2 DEBUG oslo_concurrency.processutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 326 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.6 MiB/s wr, 278 op/s
Oct  2 04:39:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:39:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3039240848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:39:31 np0005465604 nova_compute[260603]: 2025-10-02 08:39:31.580 2 DEBUG oslo_concurrency.processutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:31 np0005465604 nova_compute[260603]: 2025-10-02 08:39:31.591 2 DEBUG nova.compute.provider_tree [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:39:31 np0005465604 nova_compute[260603]: 2025-10-02 08:39:31.615 2 DEBUG nova.scheduler.client.report [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:39:31 np0005465604 nova_compute[260603]: 2025-10-02 08:39:31.640 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:31 np0005465604 nova_compute[260603]: 2025-10-02 08:39:31.697 2 DEBUG oslo_concurrency.lockutils [None req-9951a342-e6f6-4c5d-a3e2-dcb14009913f 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 11.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:32 np0005465604 nova_compute[260603]: 2025-10-02 08:39:32.228 2 DEBUG nova.virt.libvirt.driver [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 04:39:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct  2 04:39:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct  2 04:39:32 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct  2 04:39:32 np0005465604 nova_compute[260603]: 2025-10-02 08:39:32.317 2 DEBUG nova.network.neutron [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updated VIF entry in instance network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:39:32 np0005465604 nova_compute[260603]: 2025-10-02 08:39:32.318 2 DEBUG nova.network.neutron [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": null, "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap00fa1373-e4", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:39:32 np0005465604 nova_compute[260603]: 2025-10-02 08:39:32.335 2 DEBUG oslo_concurrency.lockutils [req-6287da2d-0979-436d-852c-08d90cfdd550 req-41770f42-bcea-4cd6-9432-265155842699 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:39:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:32.832 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:32.836 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated#033[00m
Oct  2 04:39:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:32.841 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:39:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:32.842 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae8ab4e-347a-49a4-ab2a-79223d453bcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:33 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:33Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:c6:d9 10.100.0.6
Oct  2 04:39:33 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:33Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:c6:d9 10.100.0.6
Oct  2 04:39:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 270 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 10 MiB/s wr, 379 op/s
Oct  2 04:39:33 np0005465604 nova_compute[260603]: 2025-10-02 08:39:33.237 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:39:33 np0005465604 nova_compute[260603]: 2025-10-02 08:39:33.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:39:34 np0005465604 nova_compute[260603]: 2025-10-02 08:39:34.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.454 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.456 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated#033[00m
Oct  2 04:39:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.458 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:39:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.459 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2f1e06-3502-436e-a06d-a860f90b2bd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:34 np0005465604 nova_compute[260603]: 2025-10-02 08:39:34.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:39:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.823 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.823 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:34.824 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:35 np0005465604 podman[354962]: 2025-10-02 08:39:35.004695329 +0000 UTC m=+0.066996305 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2)
Oct  2 04:39:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 270 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 8.8 MiB/s wr, 332 op/s
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.382 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.382 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.382 2 INFO nova.compute.manager [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Unshelving#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.457 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.457 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.465 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_requests' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.478 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'numa_topology' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:35 np0005465604 kernel: tap664c458b-ab (unregistering): left promiscuous mode
Oct  2 04:39:35 np0005465604 NetworkManager[45129]: <info>  [1759394375.4867] device (tap664c458b-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.488 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.488 2 INFO nova.compute.claims [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:39:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:35Z|00970|binding|INFO|Releasing lport 664c458b-abee-4d23-a60f-a0032a8f6058 from this chassis (sb_readonly=0)
Oct  2 04:39:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:35Z|00971|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 down in Southbound
Oct  2 04:39:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:35Z|00972|binding|INFO|Removing iface tap664c458b-ab ovn-installed in OVS
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.506 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:c6:d9 10.100.0.6'], port_security=['fa:16:3e:f7:c6:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd37b83fc-7239-49dc-8ab1-fc95753c436a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=664c458b-abee-4d23-a60f-a0032a8f6058) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.508 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 664c458b-abee-4d23-a60f-a0032a8f6058 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.510 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.535 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[462116f5-5e3d-4d04-b728-5ac068f6dae7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:35 np0005465604 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct  2 04:39:35 np0005465604 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Consumed 13.047s CPU time.
Oct  2 04:39:35 np0005465604 systemd-machined[214636]: Machine qemu-119-instance-00000060 terminated.
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.568 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[711343ba-c1c5-4ced-900a-e0479f7f50c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.571 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3166e6-f5d1-45b0-879d-0405572bfbd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:35 np0005465604 podman[354982]: 2025-10-02 08:39:35.60430865 +0000 UTC m=+0.081909732 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.607 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6c535a3a-2cfe-4f87-999a-37d025315f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.628 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.631 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62ecf13e-ecbf-4469-a425-05af9d411333]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355013, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.653 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[47d8ca9d-0721-4ee1-a47e-ccd2d2035f1e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355014, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355014, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.655 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.662 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.663 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.663 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:35.664 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.943 2 DEBUG nova.compute.manager [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.943 2 DEBUG oslo_concurrency.lockutils [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.944 2 DEBUG oslo_concurrency.lockutils [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.944 2 DEBUG oslo_concurrency.lockutils [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.945 2 DEBUG nova.compute.manager [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:35 np0005465604 nova_compute[260603]: 2025-10-02 08:39:35.945 2 WARNING nova.compute.manager [req-5d241ff9-a2ce-42ca-aca9-f3ebf5b3956b req-97785de9-c0f1-4d75-990d-f0fd444167f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state active and task_state powering-off.#033[00m
Oct  2 04:39:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:39:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2141039080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.053 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.061 2 DEBUG nova.compute.provider_tree [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.082 2 DEBUG nova.scheduler.client.report [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.110 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.251 2 INFO nova.virt.libvirt.driver [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance shutdown successfully after 14 seconds.#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.259 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance destroyed successfully.#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.260 2 DEBUG nova.objects.instance [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'numa_topology' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.280 2 DEBUG nova.compute.manager [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.307 2 INFO nova.network.neutron [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.335 2 DEBUG oslo_concurrency.lockutils [None req-9261a66b-639b-4fe3-b6d1-29e3af89701e d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.982 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.983 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:39:36 np0005465604 nova_compute[260603]: 2025-10-02 08:39:36.983 2 DEBUG nova.network.neutron [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:39:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 272 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.4 MiB/s wr, 221 op/s
Oct  2 04:39:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:38.007 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:38.008 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated#033[00m
Oct  2 04:39:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:38.009 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:39:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:38.010 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[33722e1a-7b7e-41d5-a386-f66523559bac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.068 2 DEBUG nova.compute.manager [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.068 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.068 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.069 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.069 2 DEBUG nova.compute.manager [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.069 2 WARNING nova.compute.manager [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state stopped and task_state rebuilding.#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.069 2 DEBUG nova.compute.manager [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.070 2 DEBUG nova.compute.manager [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing instance network info cache due to event network-changed-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.070 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.426 2 INFO nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Rebuilding instance#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.601 2 DEBUG nova.network.neutron [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.637 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.639 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.639 2 INFO nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating image(s)#033[00m
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001509897766490851 of space, bias 1.0, pg target 0.4529693299472553 quantized to 32 (current 32)
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001423096450315817 of space, bias 1.0, pg target 0.4269289350947451 quantized to 32 (current 32)
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.672 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:39:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.676 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'trusted_certs' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.681 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.682 2 DEBUG nova.network.neutron [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Refreshing network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.739 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.770 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.775 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "c90862ab429e888086fa2d502838e30b06b6fcfb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.776 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "c90862ab429e888086fa2d502838e30b06b6fcfb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.804 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.834 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.887 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'pci_requests' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.906 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.930 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'resources' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.945 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394363.9444623, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.946 2 INFO nova.compute.manager [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.948 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'migration_context' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.964 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.967 2 DEBUG nova.compute.manager [None req-9971f47f-dff3-455d-959d-75fd20201b1d - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.969 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance already shutdown.#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.974 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance destroyed successfully.#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.978 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance destroyed successfully.#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.979 2 DEBUG nova.virt.libvirt.vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:37Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.979 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.980 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.980 2 DEBUG os_vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:38 np0005465604 nova_compute[260603]: 2025-10-02 08:39:38.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap664c458b-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.018 2 INFO os_vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab')#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.141 2 DEBUG nova.virt.libvirt.imagebackend [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/1ca55329-185d-4f33-a25c-d8eaed50208d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/1ca55329-185d-4f33-a25c-d8eaed50208d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 04:39:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:39.189 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:39.192 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated#033[00m
Oct  2 04:39:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:39.194 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:39:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:39.195 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b686c730-605c-4cb4-9026-f4bc97b45b87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.206 2 DEBUG nova.virt.libvirt.imagebackend [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Selected location: {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/1ca55329-185d-4f33-a25c-d8eaed50208d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.206 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] cloning images/1ca55329-185d-4f33-a25c-d8eaed50208d@snap to None/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:39:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 279 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 2.6 MiB/s wr, 118 op/s
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.333 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "c90862ab429e888086fa2d502838e30b06b6fcfb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.421 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deleting instance files /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_del#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.422 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deletion of /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_del complete#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.470 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'migration_context' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.540 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] flattening vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.597 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.597 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating image(s)#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.646 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.700 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.733 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.742 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.875 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.876 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.877 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.877 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.898 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.901 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.991 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Image rbd:vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.992 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.993 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Ensure instance console log exists: /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.994 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.995 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.996 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:39 np0005465604 nova_compute[260603]: 2025-10-02 08:39:39.999 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start _get_guest_xml network_info=[{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:39:20Z,direct_url=<?>,disk_format='raw',id=1ca55329-185d-4f33-a25c-d8eaed50208d,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1260783938-shelved',owner='b85786f28a064d75924559acd4f6137e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:39:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.004 2 WARNING nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.012 2 DEBUG nova.virt.libvirt.host [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.013 2 DEBUG nova.virt.libvirt.host [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.016 2 DEBUG nova.virt.libvirt.host [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.017 2 DEBUG nova.virt.libvirt.host [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.018 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.018 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:39:20Z,direct_url=<?>,disk_format='raw',id=1ca55329-185d-4f33-a25c-d8eaed50208d,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1260783938-shelved',owner='b85786f28a064d75924559acd4f6137e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:39:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.019 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.019 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.020 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.020 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.021 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.021 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.021 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.022 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.022 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.023 2 DEBUG nova.virt.hardware [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.023 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'vcpu_model' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.051 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.220 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.302 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] resizing rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.414 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.414 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Ensure instance console log exists: /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.415 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.415 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.416 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.418 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start _get_guest_xml network_info=[{"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.421 2 WARNING nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.426 2 DEBUG nova.virt.libvirt.host [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.426 2 DEBUG nova.virt.libvirt.host [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.429 2 DEBUG nova.virt.libvirt.host [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.429 2 DEBUG nova.virt.libvirt.host [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.430 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.430 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.431 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.431 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.431 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.432 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.432 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.432 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.433 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.433 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.433 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.433 2 DEBUG nova.virt.hardware [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.434 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.449 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:39:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367426997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.499 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.520 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.524 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:39:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553504633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.888 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.904 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.908 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:39:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620838583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.946 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.948 2 DEBUG nova.virt.libvirt.vif [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='1ca55329-185d-4f33-a25c-d8eaed50208d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:38:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member',shelved_at='2025-10-02T08:39:28.348390',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='1ca55329-185d-4f33-a25c-d8eaed50208d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:35Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.949 2 DEBUG nova.network.os_vif_util [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.950 2 DEBUG nova.network.os_vif_util [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.951 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_devices' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.972 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  <uuid>fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</uuid>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  <name>instance-0000005d</name>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersNegativeTestJSON-server-1260783938</nova:name>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:39:40</nova:creationTime>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <nova:user uuid="15da3bbf2c9f49b68e7a7e0ccd557067">tempest-ServersNegativeTestJSON-2088330606-project-member</nova:user>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <nova:project uuid="b85786f28a064d75924559acd4f6137e">tempest-ServersNegativeTestJSON-2088330606</nova:project>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="1ca55329-185d-4f33-a25c-d8eaed50208d"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <nova:port uuid="00fa1373-e4cc-4245-9cfa-5a58c77aa4eb">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <entry name="serial">fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</entry>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <entry name="uuid">fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b</entry>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:11:b9:2b"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <target dev="tap00fa1373-e4"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/console.log" append="off"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <input type="keyboard" bus="usb"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:39:40 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:39:40 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:39:40 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:39:40 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.973 2 DEBUG nova.compute.manager [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Preparing to wait for external event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.974 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.974 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.974 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.975 2 DEBUG nova.virt.libvirt.vif [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='1ca55329-185d-4f33-a25c-d8eaed50208d',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:38:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_mod
el='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member',shelved_at='2025-10-02T08:39:28.348390',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='1ca55329-185d-4f33-a25c-d8eaed50208d'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:35Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.975 2 DEBUG nova.network.os_vif_util [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.976 2 DEBUG nova.network.os_vif_util [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.977 2 DEBUG os_vif [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.981 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00fa1373-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00fa1373-e4, col_values=(('external_ids', {'iface-id': '00fa1373-e4cc-4245-9cfa-5a58c77aa4eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:b9:2b', 'vm-uuid': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:40 np0005465604 NetworkManager[45129]: <info>  [1759394380.9843] manager: (tap00fa1373-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:40 np0005465604 nova_compute[260603]: 2025-10-02 08:39:40.988 2 INFO os_vif [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4')#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.058 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.060 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.060 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] No VIF found with MAC fa:16:3e:11:b9:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.061 2 INFO nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Using config drive#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.080 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.101 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'ec2_ids' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.159 2 DEBUG nova.objects.instance [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'keypairs' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 279 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 2.6 MiB/s wr, 118 op/s
Oct  2 04:39:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:39:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/263512643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.326 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.328 2 DEBUG nova.virt.libvirt.vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-Server
ActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:39Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.329 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.331 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.334 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  <uuid>49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</uuid>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  <name>instance-00000060</name>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <nova:name>tempest-tempest.common.compute-instance-487849428</nova:name>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:39:40</nova:creationTime>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <nova:user uuid="d3802fedfb914c27b9b09ad6ea6f4c27">tempest-ServerActionsTestOtherA-249618595-project-member</nova:user>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <nova:project uuid="c75535fe577642038c638a0b01f74d09">tempest-ServerActionsTestOtherA-249618595</nova:project>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <nova:port uuid="664c458b-abee-4d23-a60f-a0032a8f6058">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <entry name="serial">49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</entry>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <entry name="uuid">49cc6a50-fcc0-4336-a786-4fe32e5d5c5a</entry>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:f7:c6:d9"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <target dev="tap664c458b-ab"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/console.log" append="off"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:39:41 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:39:41 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:39:41 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:39:41 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.336 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Preparing to wait for external event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.336 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.337 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.337 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.338 2 DEBUG nova.virt.libvirt.vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-Server
ActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:39:39Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.338 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.339 2 DEBUG nova.network.os_vif_util [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.339 2 DEBUG os_vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap664c458b-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap664c458b-ab, col_values=(('external_ids', {'iface-id': '664c458b-abee-4d23-a60f-a0032a8f6058', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:c6:d9', 'vm-uuid': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:41 np0005465604 NetworkManager[45129]: <info>  [1759394381.3458] manager: (tap664c458b-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.352 2 INFO os_vif [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab')#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.413 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.414 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.414 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No VIF found with MAC fa:16:3e:f7:c6:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.415 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Using config drive#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.454 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.482 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:41 np0005465604 nova_compute[260603]: 2025-10-02 08:39:41.525 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'keypairs' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.149 2 DEBUG nova.network.neutron [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updated VIF entry in instance network info cache for port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.150 2 DEBUG nova.network.neutron [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.156 2 INFO nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Creating config drive at /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.165 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfkwiojiv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.230 2 DEBUG oslo_concurrency.lockutils [req-d9e29229-834c-493d-a57e-539306a37fac req-5bf5b381-5739-4b44-b960-b7dba0a548f8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:39:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.339 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfkwiojiv" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.369 2 DEBUG nova.storage.rbd_utils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] rbd image fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.373 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.425 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Creating config drive at /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.430 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0aeiqcu0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.563 2 DEBUG oslo_concurrency.processutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.564 2 INFO nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deleting local config drive /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b/disk.config because it was imported into RBD.#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.581 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0aeiqcu0" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.622 2 DEBUG nova.storage.rbd_utils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.632 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:42 np0005465604 NetworkManager[45129]: <info>  [1759394382.6405] manager: (tap00fa1373-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Oct  2 04:39:42 np0005465604 kernel: tap00fa1373-e4: entered promiscuous mode
Oct  2 04:39:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:42Z|00973|binding|INFO|Claiming lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for this chassis.
Oct  2 04:39:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:42Z|00974|binding|INFO|00fa1373-e4cc-4245-9cfa-5a58c77aa4eb: Claiming fa:16:3e:11:b9:2b 10.100.0.10
Oct  2 04:39:42 np0005465604 systemd-udevd[355710]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.711 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.712 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 bound to our chassis#033[00m
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.713 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7#033[00m
Oct  2 04:39:42 np0005465604 NetworkManager[45129]: <info>  [1759394382.7202] device (tap00fa1373-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:39:42 np0005465604 NetworkManager[45129]: <info>  [1759394382.7216] device (tap00fa1373-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.726 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6ab89b-1e11-4e6c-9ccc-248efc7b4f75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.727 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap19d584c3-e1 in ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:42Z|00975|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb ovn-installed in OVS
Oct  2 04:39:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:42Z|00976|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb up in Southbound
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.734 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap19d584c3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.734 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25f70c18-cc3b-442f-b5fc-b8ab4940fe82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.735 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1374e98b-c989-4c2d-b5f6-8671f817028b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 systemd-machined[214636]: New machine qemu-120-instance-0000005d.
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.746 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f30f55b6-bdda-4298-b1cc-b1be896a42c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 systemd[1]: Started Virtual Machine qemu-120-instance-0000005d.
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.770 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[169d468e-b7d3-451c-98ae-c6a9c1560275]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.819 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[07615201-5402-4218-b064-4dba8f41c64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.827 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91575618-a9ed-4492-a06f-65c68c94c7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 NetworkManager[45129]: <info>  [1759394382.8284] manager: (tap19d584c3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/387)
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.869 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b88c52bb-1982-4b1b-8fa0-d68eb9099216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.872 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9488a6a8-cd0b-4cb2-b9a0-55bf8fa3926c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 NetworkManager[45129]: <info>  [1759394382.9066] device (tap19d584c3-e0): carrier: link connected
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.913 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c4070ef0-d8e7-46e9-91c9-7c7b4fb58eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.916 2 DEBUG oslo_concurrency.processutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.917 2 INFO nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deleting local config drive /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a/disk.config because it was imported into RBD.#033[00m
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.943 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb7c227-576e-4078-9317-9ab5876921e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526150, 'reachable_time': 30671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355833, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.971 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[55443a99-6cc5-417e-99ee-c85cca41b917]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:8c4f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526150, 'tstamp': 526150}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355848, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:42 np0005465604 kernel: tap664c458b-ab: entered promiscuous mode
Oct  2 04:39:42 np0005465604 NetworkManager[45129]: <info>  [1759394382.9832] manager: (tap664c458b-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Oct  2 04:39:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:42Z|00977|binding|INFO|Claiming lport 664c458b-abee-4d23-a60f-a0032a8f6058 for this chassis.
Oct  2 04:39:42 np0005465604 nova_compute[260603]: 2025-10-02 08:39:42.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:42Z|00978|binding|INFO|664c458b-abee-4d23-a60f-a0032a8f6058: Claiming fa:16:3e:f7:c6:d9 10.100.0.6
Oct  2 04:39:42 np0005465604 systemd-udevd[355806]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:39:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:42.996 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:c6:d9 10.100.0.6'], port_security=['fa:16:3e:f7:c6:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd37b83fc-7239-49dc-8ab1-fc95753c436a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=664c458b-abee-4d23-a60f-a0032a8f6058) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:43 np0005465604 NetworkManager[45129]: <info>  [1759394383.0070] device (tap664c458b-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:39:43 np0005465604 NetworkManager[45129]: <info>  [1759394383.0082] device (tap664c458b-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:43Z|00979|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 ovn-installed in OVS
Oct  2 04:39:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:43Z|00980|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 up in Southbound
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.003 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab297b3-f997-4809-8c80-f6adec002c5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 280], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526150, 'reachable_time': 30671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355855, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:43 np0005465604 systemd-machined[214636]: New machine qemu-121-instance-00000060.
Oct  2 04:39:43 np0005465604 systemd[1]: Started Virtual Machine qemu-121-instance-00000060.
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.053 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b43acb59-4969-4761-aac2-8afe3a9a48e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.134 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9617da-f678-4088-8298-b65677f41784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.136 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.138 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.138 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:43 np0005465604 NetworkManager[45129]: <info>  [1759394383.1410] manager: (tap19d584c3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Oct  2 04:39:43 np0005465604 kernel: tap19d584c3-e0: entered promiscuous mode
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.146 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:43Z|00981|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.163 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[856b57be-396d-4aa7-a9a9-f96bb1ae69d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.168 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.168 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'env', 'PROCESS_TAG=haproxy-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/19d584c3-e754-47d1-9cdf-c6badbd670d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:39:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 325 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.4 MiB/s wr, 211 op/s
Oct  2 04:39:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:39:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:39:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:39:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:39:43 np0005465604 podman[356083]: 2025-10-02 08:39:43.545704983 +0000 UTC m=+0.069342335 container create f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 04:39:43 np0005465604 podman[356083]: 2025-10-02 08:39:43.504094273 +0000 UTC m=+0.027731674 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:39:43 np0005465604 systemd[1]: Started libpod-conmon-f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7.scope.
Oct  2 04:39:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:39:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8cbea9285c5da7b9818a5a69331acc3f529d43f16cdc486f4c0e52f4c5285b9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:43 np0005465604 podman[356083]: 2025-10-02 08:39:43.663434482 +0000 UTC m=+0.187071833 container init f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:39:43 np0005465604 podman[356083]: 2025-10-02 08:39:43.669543535 +0000 UTC m=+0.193180866 container start f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:39:43 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [NOTICE]   (356133) : New worker (356137) forked
Oct  2 04:39:43 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [NOTICE]   (356133) : Loading success.
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.757 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 664c458b-abee-4d23-a60f-a0032a8f6058 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.759 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.766 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394383.7656057, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.767 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Started (Lifecycle Event)#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.778 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aaeb3c74-02a3-42bf-b55e-3cc21448a491]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.795 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.800 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394383.7658455, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.800 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.820 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.822 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc65976-cc69-4fae-8625-ae32097cdd0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.826 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e8164567-f417-43d1-822a-a78a22346605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.829 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.851 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.868 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1bafdfb3-d172-41be-a5fa-f2defa940e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.891 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aef7d45d-027a-4e99-baf3-1a4ef36bf606]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356156, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.910 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.911 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394383.910339, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.911 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Started (Lifecycle Event)#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.915 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[03be8aa1-39ae-4c8a-9649-fa7a86805e7c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356157, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356157, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.917 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.969 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.973 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394383.9105396, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.973 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.975 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.975 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.975 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:43.976 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:43 np0005465604 nova_compute[260603]: 2025-10-02 08:39:43.999 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.002 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.025 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:39:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b6305d50-ca82-4a77-b39a-d97a6bf285df does not exist
Oct  2 04:39:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 39ca432a-3363-4378-b52d-9fcf682cf633 does not exist
Oct  2 04:39:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 7dd3bedc-c61e-48c9-a797-74a2746ca479 does not exist
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.227 2 DEBUG nova.compute.manager [req-cef50f62-a850-4b23-aade-b624ab9a1876 req-a22b6f56-d799-4048-9d1e-556ad3662b61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.227 2 DEBUG oslo_concurrency.lockutils [req-cef50f62-a850-4b23-aade-b624ab9a1876 req-a22b6f56-d799-4048-9d1e-556ad3662b61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.228 2 DEBUG oslo_concurrency.lockutils [req-cef50f62-a850-4b23-aade-b624ab9a1876 req-a22b6f56-d799-4048-9d1e-556ad3662b61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.228 2 DEBUG oslo_concurrency.lockutils [req-cef50f62-a850-4b23-aade-b624ab9a1876 req-a22b6f56-d799-4048-9d1e-556ad3662b61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.229 2 DEBUG nova.compute.manager [req-cef50f62-a850-4b23-aade-b624ab9a1876 req-a22b6f56-d799-4048-9d1e-556ad3662b61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Processing event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.230 2 DEBUG nova.compute.manager [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.235 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394384.2349272, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.235 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.238 2 DEBUG nova.virt.libvirt.driver [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.242 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance spawned successfully.#033[00m
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:39:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.275 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.281 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:39:44 np0005465604 nova_compute[260603]: 2025-10-02 08:39:44.318 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:39:44 np0005465604 podman[356312]: 2025-10-02 08:39:44.976979062 +0000 UTC m=+0.070076407 container create c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:39:45 np0005465604 podman[356312]: 2025-10-02 08:39:44.937122024 +0000 UTC m=+0.030219409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:39:45 np0005465604 systemd[1]: Started libpod-conmon-c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756.scope.
Oct  2 04:39:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:39:45 np0005465604 podman[356312]: 2025-10-02 08:39:45.139308981 +0000 UTC m=+0.232406336 container init c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:39:45 np0005465604 podman[356312]: 2025-10-02 08:39:45.152453625 +0000 UTC m=+0.245551001 container start c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 04:39:45 np0005465604 podman[356312]: 2025-10-02 08:39:45.160272791 +0000 UTC m=+0.253370176 container attach c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 04:39:45 np0005465604 sleepy_swartz[356329]: 167 167
Oct  2 04:39:45 np0005465604 systemd[1]: libpod-c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756.scope: Deactivated successfully.
Oct  2 04:39:45 np0005465604 podman[356312]: 2025-10-02 08:39:45.163807207 +0000 UTC m=+0.256904602 container died c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 04:39:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 325 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 151 op/s
Oct  2 04:39:45 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ec14c7f6bf0dfd806de8556f14500632eeb640bf40487535b5342c449addc428-merged.mount: Deactivated successfully.
Oct  2 04:39:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct  2 04:39:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct  2 04:39:45 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct  2 04:39:45 np0005465604 podman[356312]: 2025-10-02 08:39:45.345798927 +0000 UTC m=+0.438896262 container remove c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_swartz, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 04:39:45 np0005465604 systemd[1]: libpod-conmon-c0c0f5766b0857dcb57ac1ba0c3010d89266db08df95129911642f7ed8bba756.scope: Deactivated successfully.
Oct  2 04:39:45 np0005465604 podman[356355]: 2025-10-02 08:39:45.592927675 +0000 UTC m=+0.071198822 container create be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 04:39:45 np0005465604 podman[356355]: 2025-10-02 08:39:45.558857021 +0000 UTC m=+0.037128168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:39:45 np0005465604 systemd[1]: Started libpod-conmon-be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d.scope.
Oct  2 04:39:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:39:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:45 np0005465604 podman[356355]: 2025-10-02 08:39:45.733045976 +0000 UTC m=+0.211317103 container init be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:39:45 np0005465604 podman[356355]: 2025-10-02 08:39:45.743611163 +0000 UTC m=+0.221882300 container start be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:39:45 np0005465604 podman[356355]: 2025-10-02 08:39:45.746977265 +0000 UTC m=+0.225248402 container attach be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 04:39:45 np0005465604 nova_compute[260603]: 2025-10-02 08:39:45.883 2 DEBUG nova.compute.manager [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:45.893 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:45.894 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated#033[00m
Oct  2 04:39:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:45.895 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:39:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:45.897 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[645b0e3a-49ed-4691-9bb9-c00617c62c18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:45 np0005465604 nova_compute[260603]: 2025-10-02 08:39:45.972 2 DEBUG oslo_concurrency.lockutils [None req-fc68eb5a-f049-4873-9133-3f51de815bca 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.384 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.384 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.384 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.385 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.385 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.385 2 WARNING nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state None.#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.385 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.385 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Processing event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG oslo_concurrency.lockutils [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.386 2 DEBUG nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.387 2 WARNING nova.compute.manager [req-b525533b-cce7-4ea9-b771-382bc4598c9a req-b8572860-93ea-44c3-b5e3-fd694b2f99be 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state stopped and task_state rebuild_spawning.#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.391 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.397 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394386.3969915, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.397 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.401 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.405 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance spawned successfully.#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.406 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.419 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.425 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.428 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.429 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.430 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.431 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.432 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.433 2 DEBUG nova.virt.libvirt.driver [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.457 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.488 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.538 2 INFO nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] bringing vm to original state: 'stopped'#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.598 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.599 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.599 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.604 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 04:39:46 np0005465604 kernel: tap664c458b-ab (unregistering): left promiscuous mode
Oct  2 04:39:46 np0005465604 NetworkManager[45129]: <info>  [1759394386.6530] device (tap664c458b-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:39:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:46Z|00982|binding|INFO|Releasing lport 664c458b-abee-4d23-a60f-a0032a8f6058 from this chassis (sb_readonly=0)
Oct  2 04:39:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:46Z|00983|binding|INFO|Setting lport 664c458b-abee-4d23-a60f-a0032a8f6058 down in Southbound
Oct  2 04:39:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:46Z|00984|binding|INFO|Removing iface tap664c458b-ab ovn-installed in OVS
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.676 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:c6:d9 10.100.0.6'], port_security=['fa:16:3e:f7:c6:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '49cc6a50-fcc0-4336-a786-4fe32e5d5c5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd37b83fc-7239-49dc-8ab1-fc95753c436a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=664c458b-abee-4d23-a60f-a0032a8f6058) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.677 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 664c458b-abee-4d23-a60f-a0032a8f6058 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.679 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.713 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4af63540-1fdc-4122-a17f-0f77df5bcc68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:46 np0005465604 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct  2 04:39:46 np0005465604 systemd-machined[214636]: Machine qemu-121-instance-00000060 terminated.
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.759 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b792c1a-2cd4-4bff-a15c-a9601d423d1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.763 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1416449b-b373-4ac2-a4fd-9825f44a0250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.795 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdc3327-bb48-4430-b069-395ccc5607b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.821 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a720d757-bbef-4402-b5ed-3ecb3f711484]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356407, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.842 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b14a84f-4525-4f5d-a180-7a85320a8d99]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356414, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356414, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.844 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.849 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.850 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.850 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.850 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance destroyed successfully.#033[00m
Oct  2 04:39:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:46.850 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.850 2 DEBUG nova.compute.manager [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:46 np0005465604 pensive_margulis[356372]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:39:46 np0005465604 pensive_margulis[356372]: --> relative data size: 1.0
Oct  2 04:39:46 np0005465604 pensive_margulis[356372]: --> All data devices are unavailable
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.915 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:46 np0005465604 systemd[1]: libpod-be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d.scope: Deactivated successfully.
Oct  2 04:39:46 np0005465604 systemd[1]: libpod-be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d.scope: Consumed 1.065s CPU time.
Oct  2 04:39:46 np0005465604 conmon[356372]: conmon be8922e4fb75512a8756 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d.scope/container/memory.events
Oct  2 04:39:46 np0005465604 podman[356355]: 2025-10-02 08:39:46.938594559 +0000 UTC m=+1.416865666 container died be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.953 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.955 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:46 np0005465604 nova_compute[260603]: 2025-10-02 08:39:46.955 2 DEBUG nova.objects.instance [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:39:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1d62cd0140d139718d23009ffc81961dbf29cacfa6a050ef22f74c756f5a2dd8-merged.mount: Deactivated successfully.
Oct  2 04:39:46 np0005465604 podman[356355]: 2025-10-02 08:39:46.994986814 +0000 UTC m=+1.473257911 container remove be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:39:47 np0005465604 systemd[1]: libpod-conmon-be8922e4fb75512a87566b046b68031e9e71e6e21c34ac642d961577b745023d.scope: Deactivated successfully.
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.026 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.027 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 4145f1a3-c327-49ee-9af1-1ace3afb70a5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.027 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.027 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.028 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.028 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.028 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.028 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.029 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.075 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.078 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.079 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:47 np0005465604 nova_compute[260603]: 2025-10-02 08:39:47.100 2 DEBUG oslo_concurrency.lockutils [None req-5c081d5a-5bf2-4138-af09-54041193a0c1 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 293 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 6.9 MiB/s wr, 212 op/s
Oct  2 04:39:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:47.315 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:47.318 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated#033[00m
Oct  2 04:39:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:47.320 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:39:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:47.321 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ffce6e0a-be40-44de-b4bb-34a26fa8089a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:47 np0005465604 podman[356577]: 2025-10-02 08:39:47.824394893 +0000 UTC m=+0.064298933 container create 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:39:47 np0005465604 systemd[1]: Started libpod-conmon-0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c.scope.
Oct  2 04:39:47 np0005465604 podman[356577]: 2025-10-02 08:39:47.796559616 +0000 UTC m=+0.036463696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:39:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:39:47 np0005465604 podman[356577]: 2025-10-02 08:39:47.94009898 +0000 UTC m=+0.180003020 container init 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:39:47 np0005465604 podman[356577]: 2025-10-02 08:39:47.949572516 +0000 UTC m=+0.189476546 container start 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:39:47 np0005465604 podman[356577]: 2025-10-02 08:39:47.953135262 +0000 UTC m=+0.193039302 container attach 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 04:39:47 np0005465604 musing_hugle[356593]: 167 167
Oct  2 04:39:47 np0005465604 systemd[1]: libpod-0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c.scope: Deactivated successfully.
Oct  2 04:39:47 np0005465604 conmon[356593]: conmon 0dd6dd60b910598e32f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c.scope/container/memory.events
Oct  2 04:39:48 np0005465604 podman[356598]: 2025-10-02 08:39:48.005538847 +0000 UTC m=+0.030466486 container died 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:39:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b44447731674113029ce105f2505cd0c38ca7d43b728d1ba43d3f98d39e8048e-merged.mount: Deactivated successfully.
Oct  2 04:39:48 np0005465604 podman[356598]: 2025-10-02 08:39:48.050018484 +0000 UTC m=+0.074946143 container remove 0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:39:48 np0005465604 systemd[1]: libpod-conmon-0dd6dd60b910598e32f9acf0ba1161ad6a6ce3bd24deb851c10742f94304b63c.scope: Deactivated successfully.
Oct  2 04:39:48 np0005465604 podman[356620]: 2025-10-02 08:39:48.288291105 +0000 UTC m=+0.052803978 container create 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:39:48 np0005465604 systemd[1]: Started libpod-conmon-04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec.scope.
Oct  2 04:39:48 np0005465604 podman[356620]: 2025-10-02 08:39:48.26848173 +0000 UTC m=+0.032994643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:39:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:39:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920798367a516114dc06e99e736ea6976d14f46c6640ccb4a58c8cbe696b2a7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920798367a516114dc06e99e736ea6976d14f46c6640ccb4a58c8cbe696b2a7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920798367a516114dc06e99e736ea6976d14f46c6640ccb4a58c8cbe696b2a7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920798367a516114dc06e99e736ea6976d14f46c6640ccb4a58c8cbe696b2a7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:48 np0005465604 podman[356620]: 2025-10-02 08:39:48.4055711 +0000 UTC m=+0.170084013 container init 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 04:39:48 np0005465604 podman[356620]: 2025-10-02 08:39:48.413348554 +0000 UTC m=+0.177861467 container start 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:39:48 np0005465604 podman[356620]: 2025-10-02 08:39:48.416801198 +0000 UTC m=+0.181314081 container attach 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.653 2 DEBUG nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.654 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.655 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.655 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.656 2 DEBUG nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.656 2 WARNING nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-unplugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.657 2 DEBUG nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.657 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.658 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.658 2 DEBUG oslo_concurrency.lockutils [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.659 2 DEBUG nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] No waiting events found dispatching network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:48 np0005465604 nova_compute[260603]: 2025-10-02 08:39:48.659 2 WARNING nova.compute.manager [req-d74f3c30-9501-4b68-955b-a91011ab3836 req-fd1e4651-e7da-40a6-bd02-b71f7eb0a56a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received unexpected event network-vif-plugged-664c458b-abee-4d23-a60f-a0032a8f6058 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]: {
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:    "0": [
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:        {
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "devices": [
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "/dev/loop3"
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            ],
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_name": "ceph_lv0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_size": "21470642176",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "name": "ceph_lv0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "tags": {
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.cluster_name": "ceph",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.crush_device_class": "",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.encrypted": "0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.osd_id": "0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.type": "block",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.vdo": "0"
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            },
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "type": "block",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "vg_name": "ceph_vg0"
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:        }
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:    ],
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:    "1": [
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:        {
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "devices": [
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "/dev/loop4"
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            ],
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_name": "ceph_lv1",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_size": "21470642176",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "name": "ceph_lv1",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "tags": {
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.cluster_name": "ceph",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.crush_device_class": "",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.encrypted": "0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.osd_id": "1",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.type": "block",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.vdo": "0"
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            },
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "type": "block",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "vg_name": "ceph_vg1"
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:        }
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:    ],
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:    "2": [
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:        {
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "devices": [
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "/dev/loop5"
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            ],
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_name": "ceph_lv2",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_size": "21470642176",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "name": "ceph_lv2",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "tags": {
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.cluster_name": "ceph",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.crush_device_class": "",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.encrypted": "0",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.osd_id": "2",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.type": "block",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:                "ceph.vdo": "0"
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            },
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "type": "block",
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:            "vg_name": "ceph_vg2"
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:        }
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]:    ]
Oct  2 04:39:49 np0005465604 reverent_keldysh[356637]: }
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:49 np0005465604 systemd[1]: libpod-04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec.scope: Deactivated successfully.
Oct  2 04:39:49 np0005465604 podman[356620]: 2025-10-02 08:39:49.226271898 +0000 UTC m=+0.990784791 container died 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:39:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 285 op/s
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.324 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.325 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.325 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.325 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.325 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.327 2 INFO nova.compute.manager [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Terminating instance#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.328 2 DEBUG nova.compute.manager [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.333 2 INFO nova.virt.libvirt.driver [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Instance destroyed successfully.#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.333 2 DEBUG nova.objects.instance [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'resources' on Instance uuid 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-920798367a516114dc06e99e736ea6976d14f46c6640ccb4a58c8cbe696b2a7f-merged.mount: Deactivated successfully.
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.346 2 DEBUG nova.virt.libvirt.vif [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-487849428',display_name='tempest-tempest.common.compute-instance-487849428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-487849428',id=96,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-il5cp84q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:39:47Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=49cc6a50-fcc0-4336-a786-4fe32e5d5c5a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.346 2 DEBUG nova.network.os_vif_util [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "664c458b-abee-4d23-a60f-a0032a8f6058", "address": "fa:16:3e:f7:c6:d9", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap664c458b-ab", "ovs_interfaceid": "664c458b-abee-4d23-a60f-a0032a8f6058", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.347 2 DEBUG nova.network.os_vif_util [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.347 2 DEBUG os_vif [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.349 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap664c458b-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.353 2 INFO os_vif [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c6:d9,bridge_name='br-int',has_traffic_filtering=True,id=664c458b-abee-4d23-a60f-a0032a8f6058,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap664c458b-ab')#033[00m
Oct  2 04:39:49 np0005465604 podman[356620]: 2025-10-02 08:39:49.415534966 +0000 UTC m=+1.180047849 container remove 04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:39:49 np0005465604 systemd[1]: libpod-conmon-04e79042f862c77b079c930e4ecbf8b2411537b6d8b36c12973fe33e41c2a8ec.scope: Deactivated successfully.
Oct  2 04:39:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:49.475 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:49.477 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.746 2 INFO nova.virt.libvirt.driver [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deleting instance files /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_del#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.746 2 INFO nova.virt.libvirt.driver [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deletion of /var/lib/nova/instances/49cc6a50-fcc0-4336-a786-4fe32e5d5c5a_del complete#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.804 2 INFO nova.compute.manager [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Took 0.48 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.804 2 DEBUG oslo.service.loopingcall [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.804 2 DEBUG nova.compute.manager [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:39:49 np0005465604 nova_compute[260603]: 2025-10-02 08:39:49.805 2 DEBUG nova.network.neutron [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:39:50 np0005465604 podman[356820]: 2025-10-02 08:39:50.086447391 +0000 UTC m=+0.052399317 container create 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 04:39:50 np0005465604 systemd[1]: Started libpod-conmon-1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742.scope.
Oct  2 04:39:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:39:50 np0005465604 podman[356820]: 2025-10-02 08:39:50.068337796 +0000 UTC m=+0.034289752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:39:50 np0005465604 podman[356820]: 2025-10-02 08:39:50.183387264 +0000 UTC m=+0.149339220 container init 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 04:39:50 np0005465604 podman[356820]: 2025-10-02 08:39:50.18958532 +0000 UTC m=+0.155537246 container start 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct  2 04:39:50 np0005465604 podman[356820]: 2025-10-02 08:39:50.192338404 +0000 UTC m=+0.158290320 container attach 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 04:39:50 np0005465604 magical_shirley[356837]: 167 167
Oct  2 04:39:50 np0005465604 podman[356820]: 2025-10-02 08:39:50.196059895 +0000 UTC m=+0.162011841 container died 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:39:50 np0005465604 systemd[1]: libpod-1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742.scope: Deactivated successfully.
Oct  2 04:39:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c073dc2159335a6b2caa368ddf3b72c66347fd274e90c52c2308928bd5c75064-merged.mount: Deactivated successfully.
Oct  2 04:39:50 np0005465604 podman[356820]: 2025-10-02 08:39:50.238735818 +0000 UTC m=+0.204687764 container remove 1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 04:39:50 np0005465604 systemd[1]: libpod-conmon-1bff99c9dd21cca5794e3374ded26fa3698b5bcca3c978bfd25a3bb4ab6f0742.scope: Deactivated successfully.
Oct  2 04:39:50 np0005465604 podman[356860]: 2025-10-02 08:39:50.476803943 +0000 UTC m=+0.063666625 container create 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 04:39:50 np0005465604 nova_compute[260603]: 2025-10-02 08:39:50.487 2 DEBUG nova.network.neutron [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:39:50 np0005465604 nova_compute[260603]: 2025-10-02 08:39:50.509 2 INFO nova.compute.manager [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Took 0.70 seconds to deallocate network for instance.#033[00m
Oct  2 04:39:50 np0005465604 systemd[1]: Started libpod-conmon-0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918.scope.
Oct  2 04:39:50 np0005465604 podman[356860]: 2025-10-02 08:39:50.456537344 +0000 UTC m=+0.043400046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:39:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:39:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10455068dc46890e7fc7e6f1532780a369fff175b19e264e4679e3e23347884b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10455068dc46890e7fc7e6f1532780a369fff175b19e264e4679e3e23347884b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10455068dc46890e7fc7e6f1532780a369fff175b19e264e4679e3e23347884b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10455068dc46890e7fc7e6f1532780a369fff175b19e264e4679e3e23347884b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:50 np0005465604 nova_compute[260603]: 2025-10-02 08:39:50.580 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:50 np0005465604 nova_compute[260603]: 2025-10-02 08:39:50.580 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:50 np0005465604 podman[356860]: 2025-10-02 08:39:50.599190182 +0000 UTC m=+0.186052884 container init 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct  2 04:39:50 np0005465604 podman[356860]: 2025-10-02 08:39:50.609309285 +0000 UTC m=+0.196171977 container start 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:39:50 np0005465604 podman[356860]: 2025-10-02 08:39:50.619647557 +0000 UTC m=+0.206510269 container attach 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:39:50 np0005465604 nova_compute[260603]: 2025-10-02 08:39:50.683 2 DEBUG oslo_concurrency.processutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:39:50 np0005465604 nova_compute[260603]: 2025-10-02 08:39:50.888 2 DEBUG nova.compute.manager [req-584dca01-89c8-4f1b-a8d6-bb293a2eaefb req-f4de8eeb-c8e4-4e92-b9aa-ae2d1fb754bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Received event network-vif-deleted-664c458b-abee-4d23-a60f-a0032a8f6058 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:39:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421145403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:39:51 np0005465604 nova_compute[260603]: 2025-10-02 08:39:51.174 2 DEBUG oslo_concurrency.processutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:39:51 np0005465604 nova_compute[260603]: 2025-10-02 08:39:51.180 2 DEBUG nova.compute.provider_tree [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:39:51 np0005465604 nova_compute[260603]: 2025-10-02 08:39:51.204 2 DEBUG nova.scheduler.client.report [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:39:51 np0005465604 nova_compute[260603]: 2025-10-02 08:39:51.238 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 285 op/s
Oct  2 04:39:51 np0005465604 nova_compute[260603]: 2025-10-02 08:39:51.268 2 INFO nova.scheduler.client.report [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Deleted allocations for instance 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a#033[00m
Oct  2 04:39:51 np0005465604 nova_compute[260603]: 2025-10-02 08:39:51.335 2 DEBUG oslo_concurrency.lockutils [None req-4b24ea8b-bec9-41d1-b9e0-a68de2eab6ee d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "49cc6a50-fcc0-4336-a786-4fe32e5d5c5a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]: {
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "osd_id": 2,
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "type": "bluestore"
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:    },
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "osd_id": 1,
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "type": "bluestore"
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:    },
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "osd_id": 0,
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:        "type": "bluestore"
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]:    }
Oct  2 04:39:51 np0005465604 jolly_engelbart[356877]: }
Oct  2 04:39:51 np0005465604 systemd[1]: libpod-0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918.scope: Deactivated successfully.
Oct  2 04:39:51 np0005465604 podman[356860]: 2025-10-02 08:39:51.581658101 +0000 UTC m=+1.168520823 container died 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct  2 04:39:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-10455068dc46890e7fc7e6f1532780a369fff175b19e264e4679e3e23347884b-merged.mount: Deactivated successfully.
Oct  2 04:39:51 np0005465604 podman[356860]: 2025-10-02 08:39:51.717157943 +0000 UTC m=+1.304020625 container remove 0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:39:51 np0005465604 systemd[1]: libpod-conmon-0ac3734151cb2d0f794082196a980b8c3c8d908025f05aa8e4a9c23aeb09a918.scope: Deactivated successfully.
Oct  2 04:39:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:39:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:39:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:39:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:39:51 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4c3c63e7-493c-4207-9b34-370349a968fc does not exist
Oct  2 04:39:51 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9d500eb5-5f46-41cb-b243-e6b0812e75cc does not exist
Oct  2 04:39:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct  2 04:39:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct  2 04:39:52 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct  2 04:39:52 np0005465604 nova_compute[260603]: 2025-10-02 08:39:52.451 2 DEBUG nova.objects.instance [None req-e25dbad2-19c6-42f3-be50-942cb2bdfa20 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_devices' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:52 np0005465604 nova_compute[260603]: 2025-10-02 08:39:52.473 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394392.4730625, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:52 np0005465604 nova_compute[260603]: 2025-10-02 08:39:52.473 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:39:52 np0005465604 nova_compute[260603]: 2025-10-02 08:39:52.494 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:52 np0005465604 nova_compute[260603]: 2025-10-02 08:39:52.498 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:39:52 np0005465604 nova_compute[260603]: 2025-10-02 08:39:52.518 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 04:39:53 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:39:53 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:39:53 np0005465604 kernel: tap00fa1373-e4 (unregistering): left promiscuous mode
Oct  2 04:39:53 np0005465604 NetworkManager[45129]: <info>  [1759394393.0508] device (tap00fa1373-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:39:53 np0005465604 nova_compute[260603]: 2025-10-02 08:39:53.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:53Z|00985|binding|INFO|Releasing lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb from this chassis (sb_readonly=0)
Oct  2 04:39:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:53Z|00986|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb down in Southbound
Oct  2 04:39:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:53Z|00987|binding|INFO|Removing iface tap00fa1373-e4 ovn-installed in OVS
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.074 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '9', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.075 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.076 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19d584c3-e754-47d1-9cdf-c6badbd670d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.076 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b0fea3-d9ad-451f-adf0-3c5c5a308523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.077 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace which is not needed anymore#033[00m
Oct  2 04:39:53 np0005465604 nova_compute[260603]: 2025-10-02 08:39:53.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:53 np0005465604 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct  2 04:39:53 np0005465604 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d0000005d.scope: Consumed 9.330s CPU time.
Oct  2 04:39:53 np0005465604 systemd-machined[214636]: Machine qemu-120-instance-0000005d terminated.
Oct  2 04:39:53 np0005465604 nova_compute[260603]: 2025-10-02 08:39:53.233 2 DEBUG nova.compute.manager [None req-e25dbad2-19c6-42f3-be50-942cb2bdfa20 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 193 op/s
Oct  2 04:39:53 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [NOTICE]   (356133) : haproxy version is 2.8.14-c23fe91
Oct  2 04:39:53 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [NOTICE]   (356133) : path to executable is /usr/sbin/haproxy
Oct  2 04:39:53 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [WARNING]  (356133) : Exiting Master process...
Oct  2 04:39:53 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [ALERT]    (356133) : Current worker (356137) exited with code 143 (Terminated)
Oct  2 04:39:53 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[356118]: [WARNING]  (356133) : All workers exited. Exiting... (0)
Oct  2 04:39:53 np0005465604 systemd[1]: libpod-f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7.scope: Deactivated successfully.
Oct  2 04:39:53 np0005465604 podman[357021]: 2025-10-02 08:39:53.263609073 +0000 UTC m=+0.085137010 container died f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:39:53 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7-userdata-shm.mount: Deactivated successfully.
Oct  2 04:39:53 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e8cbea9285c5da7b9818a5a69331acc3f529d43f16cdc486f4c0e52f4c5285b9-merged.mount: Deactivated successfully.
Oct  2 04:39:53 np0005465604 podman[357021]: 2025-10-02 08:39:53.427274172 +0000 UTC m=+0.248802109 container cleanup f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:39:53 np0005465604 systemd[1]: libpod-conmon-f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7.scope: Deactivated successfully.
Oct  2 04:39:53 np0005465604 nova_compute[260603]: 2025-10-02 08:39:53.446 2 DEBUG nova.compute.manager [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:53 np0005465604 nova_compute[260603]: 2025-10-02 08:39:53.446 2 DEBUG oslo_concurrency.lockutils [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:53 np0005465604 nova_compute[260603]: 2025-10-02 08:39:53.447 2 DEBUG oslo_concurrency.lockutils [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.447 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8::f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 10.100.0.2 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:53 np0005465604 nova_compute[260603]: 2025-10-02 08:39:53.447 2 DEBUG oslo_concurrency.lockutils [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:53 np0005465604 nova_compute[260603]: 2025-10-02 08:39:53.447 2 DEBUG nova.compute.manager [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:53 np0005465604 nova_compute[260603]: 2025-10-02 08:39:53.448 2 WARNING nova.compute.manager [req-8c00db52-8197-4a65-b37a-2e892b2caefc req-042fb122-d6c3-48b1-b94c-49f6c1d3aca0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state suspended and task_state None.#033[00m
Oct  2 04:39:53 np0005465604 podman[357061]: 2025-10-02 08:39:53.549486515 +0000 UTC m=+0.082817960 container remove f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.561 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[75b1fa30-a3e1-4ec1-98b1-5e17cdfa0661]: (4, ('Thu Oct  2 08:39:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7)\nf5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7\nThu Oct  2 08:39:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (f5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7)\nf5196ace9c750661f69ec7459f9ab78f7492f1dbd9966e660615c536fb59b3f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.563 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9ae7dd-c36b-4e93-9e13-d66d38b16a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.564 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:53 np0005465604 kernel: tap19d584c3-e0: left promiscuous mode
Oct  2 04:39:53 np0005465604 nova_compute[260603]: 2025-10-02 08:39:53.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:53 np0005465604 nova_compute[260603]: 2025-10-02 08:39:53.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.608 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ee9e07-f375-4d74-bbc2-bfcff0bb2af6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.643 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[415f887a-2efe-44e6-bf25-161442280bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.645 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f34fb2a8-6d53-4330-9c50-68e3aa2b0893]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.672 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b81a661-49ef-468f-a793-06d1b91db365]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526141, 'reachable_time': 32525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357080, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.676 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.676 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8c63e4-bcbb-469f-9560-fd39de1a119e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:53 np0005465604 systemd[1]: run-netns-ovnmeta\x2d19d584c3\x2de754\x2d47d1\x2d9cdf\x2dc6badbd670d7.mount: Deactivated successfully.
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.677 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 unbound from our chassis#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.680 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:39:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:53.681 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[14668628-7e7d-4b52-aa02-0290c16c8bdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:54 np0005465604 nova_compute[260603]: 2025-10-02 08:39:54.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:54 np0005465604 nova_compute[260603]: 2025-10-02 08:39:54.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:54 np0005465604 nova_compute[260603]: 2025-10-02 08:39:54.890 2 INFO nova.compute.manager [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Resuming#033[00m
Oct  2 04:39:54 np0005465604 nova_compute[260603]: 2025-10-02 08:39:54.893 2 DEBUG nova.objects.instance [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'flavor' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:54 np0005465604 nova_compute[260603]: 2025-10-02 08:39:54.927 2 DEBUG oslo_concurrency.lockutils [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:39:54 np0005465604 nova_compute[260603]: 2025-10-02 08:39:54.928 2 DEBUG oslo_concurrency.lockutils [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquired lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:39:54 np0005465604 nova_compute[260603]: 2025-10-02 08:39:54.928 2 DEBUG nova.network.neutron [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:39:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 156 op/s
Oct  2 04:39:55 np0005465604 nova_compute[260603]: 2025-10-02 08:39:55.809 2 DEBUG nova.compute.manager [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:55 np0005465604 nova_compute[260603]: 2025-10-02 08:39:55.809 2 DEBUG oslo_concurrency.lockutils [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:55 np0005465604 nova_compute[260603]: 2025-10-02 08:39:55.810 2 DEBUG oslo_concurrency.lockutils [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:55 np0005465604 nova_compute[260603]: 2025-10-02 08:39:55.810 2 DEBUG oslo_concurrency.lockutils [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:55 np0005465604 nova_compute[260603]: 2025-10-02 08:39:55.810 2 DEBUG nova.compute.manager [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:55 np0005465604 nova_compute[260603]: 2025-10-02 08:39:55.811 2 WARNING nova.compute.manager [req-10e69edc-f52b-4943-941c-7b38332e7134 req-8132e681-02e1-4592-94d3-1d08e35a2ce1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state suspended and task_state resuming.#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.091 2 DEBUG nova.network.neutron [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [{"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.110 2 DEBUG oslo_concurrency.lockutils [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Releasing lock "refresh_cache-fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.117 2 DEBUG nova.virt.libvirt.vif [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mode
l='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:39:53Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.118 2 DEBUG nova.network.os_vif_util [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.119 2 DEBUG nova.network.os_vif_util [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.120 2 DEBUG os_vif [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.127 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00fa1373-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00fa1373-e4, col_values=(('external_ids', {'iface-id': '00fa1373-e4cc-4245-9cfa-5a58c77aa4eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:b9:2b', 'vm-uuid': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.129 2 INFO os_vif [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4')#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.160 2 DEBUG nova.objects.instance [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'numa_topology' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:56 np0005465604 kernel: tap00fa1373-e4: entered promiscuous mode
Oct  2 04:39:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:56Z|00988|binding|INFO|Claiming lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for this chassis.
Oct  2 04:39:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:56Z|00989|binding|INFO|00fa1373-e4cc-4245-9cfa-5a58c77aa4eb: Claiming fa:16:3e:11:b9:2b 10.100.0.10
Oct  2 04:39:56 np0005465604 NetworkManager[45129]: <info>  [1759394396.2580] manager: (tap00fa1373-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/390)
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.265 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '10', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.266 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 bound to our chassis#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.267 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19d584c3-e754-47d1-9cdf-c6badbd670d7#033[00m
Oct  2 04:39:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:56Z|00990|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb ovn-installed in OVS
Oct  2 04:39:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:56Z|00991|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb up in Southbound
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.284 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fddb123d-d2b4-48fd-af8b-92d49a138ba1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.285 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap19d584c3-e1 in ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.289 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap19d584c3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.289 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[65406aba-4354-42aa-9e0f-a9c32470eb4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.289 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[689c8214-fef1-4ff4-9541-5b8a3d249bca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 systemd-udevd[357094]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.300 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9f801e-43b8-42e8-b31b-d5417f945a84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 NetworkManager[45129]: <info>  [1759394396.3121] device (tap00fa1373-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:39:56 np0005465604 NetworkManager[45129]: <info>  [1759394396.3156] device (tap00fa1373-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.329 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d34053-0483-4c14-8847-b189db450fb6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 systemd-machined[214636]: New machine qemu-122-instance-0000005d.
Oct  2 04:39:56 np0005465604 systemd[1]: Started Virtual Machine qemu-122-instance-0000005d.
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.359 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8ce5ca-69ac-4159-8709-5f14c2a26f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.364 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e673d1f8-4a46-4768-8700-4c9b36d2024e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 NetworkManager[45129]: <info>  [1759394396.3654] manager: (tap19d584c3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/391)
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.392 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[39e352f9-4f62-4398-a608-6ff7326c376a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.395 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f3eb5e36-215d-45e8-9c52-045b07d5cf8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 NetworkManager[45129]: <info>  [1759394396.4132] device (tap19d584c3-e0): carrier: link connected
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.418 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a5f2af-c0ff-48a6-bdb3-34d8615540e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86879179-3468-44b8-8ff3-45734eb1c583]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527501, 'reachable_time': 28627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357128, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.448 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9863db77-febd-48a7-bd59-00cc66f108a2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:8c4f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527501, 'tstamp': 527501}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357129, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.462 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b21c6a97-8732-48bc-87bb-9755ec39099e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19d584c3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:8c:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527501, 'reachable_time': 28627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357130, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.478 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.489 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[db82c5bd-8fa5-467f-b22a-2a01e7b318d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.570 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7ca13b-76dc-4b12-9f04-206fabf34a39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.571 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.572 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.572 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d584c3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:56 np0005465604 kernel: tap19d584c3-e0: entered promiscuous mode
Oct  2 04:39:56 np0005465604 NetworkManager[45129]: <info>  [1759394396.5745] manager: (tap19d584c3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.584 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19d584c3-e0, col_values=(('external_ids', {'iface-id': '0167600f-b732-46e3-804b-b8a6d765a5aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:39:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:39:56Z|00992|binding|INFO|Releasing lport 0167600f-b732-46e3-804b-b8a6d765a5aa from this chassis (sb_readonly=0)
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.604 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:39:56 np0005465604 nova_compute[260603]: 2025-10-02 08:39:56.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.605 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[777fa21e-5178-474f-8f47-1c3624a6462d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.606 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/19d584c3-e754-47d1-9cdf-c6badbd670d7.pid.haproxy
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 19d584c3-e754-47d1-9cdf-c6badbd670d7
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:39:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:39:56.607 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'env', 'PROCESS_TAG=haproxy-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/19d584c3-e754-47d1-9cdf-c6badbd670d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:39:57 np0005465604 podman[357201]: 2025-10-02 08:39:57.016558329 +0000 UTC m=+0.068670514 container create d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 04:39:57 np0005465604 podman[357201]: 2025-10-02 08:39:56.979163306 +0000 UTC m=+0.031275451 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:39:57 np0005465604 systemd[1]: Started libpod-conmon-d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee.scope.
Oct  2 04:39:57 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:39:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fbca77696254007de7e96072e33264e451582408270a23fe6bb5e724afe7cd3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:39:57 np0005465604 podman[357201]: 2025-10-02 08:39:57.141868367 +0000 UTC m=+0.193980532 container init d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 04:39:57 np0005465604 podman[357201]: 2025-10-02 08:39:57.151669681 +0000 UTC m=+0.203781816 container start d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 04:39:57 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [NOTICE]   (357223) : New worker (357225) forked
Oct  2 04:39:57 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [NOTICE]   (357223) : Loading success.
Oct  2 04:39:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.2 KiB/s wr, 117 op/s
Oct  2 04:39:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.492 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.493 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394397.492383, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.494 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Started (Lifecycle Event)#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.513 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.529 2 DEBUG nova.compute.manager [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.530 2 DEBUG nova.objects.instance [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'pci_devices' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.533 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.556 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance running successfully.#033[00m
Oct  2 04:39:57 np0005465604 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.560 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.561 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394397.4992716, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.561 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.564 2 DEBUG nova.virt.libvirt.guest [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.564 2 DEBUG nova.compute.manager [None req-d2ef953b-7311-4e2e-9c3c-83fc3783a26b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.591 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.595 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.631 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 04:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.977 2 DEBUG nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.978 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.978 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.978 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.978 2 DEBUG nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.979 2 WARNING nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state None.#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.979 2 DEBUG nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.979 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.980 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.980 2 DEBUG oslo_concurrency.lockutils [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.980 2 DEBUG nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:39:57 np0005465604 nova_compute[260603]: 2025-10-02 08:39:57.980 2 WARNING nova.compute.manager [req-1a21d7e3-1eb2-4cc6-9825-8da27d48bc21 req-3dc76bd8-9600-4d08-9e46-611225e8d8df 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state active and task_state None.#033[00m
Oct  2 04:39:59 np0005465604 podman[357235]: 2025-10-02 08:39:59.031391858 +0000 UTC m=+0.094943565 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:39:59 np0005465604 podman[357234]: 2025-10-02 08:39:59.125073123 +0000 UTC m=+0.180754583 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:39:59 np0005465604 nova_compute[260603]: 2025-10-02 08:39:59.128 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:39:59 np0005465604 nova_compute[260603]: 2025-10-02 08:39:59.130 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:39:59 np0005465604 nova_compute[260603]: 2025-10-02 08:39:59.159 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:39:59 np0005465604 nova_compute[260603]: 2025-10-02 08:39:59.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:39:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 12 KiB/s wr, 38 op/s
Oct  2 04:39:59 np0005465604 nova_compute[260603]: 2025-10-02 08:39:59.302 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:39:59 np0005465604 nova_compute[260603]: 2025-10-02 08:39:59.303 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:39:59 np0005465604 nova_compute[260603]: 2025-10-02 08:39:59.314 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:39:59 np0005465604 nova_compute[260603]: 2025-10-02 08:39:59.315 2 INFO nova.compute.claims [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:39:59 np0005465604 nova_compute[260603]: 2025-10-02 08:39:59.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:39:59 np0005465604 nova_compute[260603]: 2025-10-02 08:39:59.516 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:40:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:40:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465837021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.032 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.041 2 DEBUG nova.compute.provider_tree [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.073 2 DEBUG nova.scheduler.client.report [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.104 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.104 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.172 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.172 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.189 2 INFO nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.206 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.330 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.331 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.332 2 INFO nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Creating image(s)
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.356 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.382 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.411 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.417 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.528 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.529 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.531 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.531 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.563 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.570 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.925 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:40:00 np0005465604 nova_compute[260603]: 2025-10-02 08:40:00.986 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] resizing rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:40:01 np0005465604 nova_compute[260603]: 2025-10-02 08:40:01.098 2 DEBUG nova.objects.instance [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'migration_context' on Instance uuid c33f349a-5d06-4a81-90fd-fe32eebc4cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:40:01 np0005465604 nova_compute[260603]: 2025-10-02 08:40:01.126 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:40:01 np0005465604 nova_compute[260603]: 2025-10-02 08:40:01.126 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Ensure instance console log exists: /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:40:01 np0005465604 nova_compute[260603]: 2025-10-02 08:40:01.127 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:40:01 np0005465604 nova_compute[260603]: 2025-10-02 08:40:01.127 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:40:01 np0005465604 nova_compute[260603]: 2025-10-02 08:40:01.127 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:40:01 np0005465604 nova_compute[260603]: 2025-10-02 08:40:01.143 2 DEBUG nova.policy [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd3802fedfb914c27b9b09ad6ea6f4c27', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c75535fe577642038c638a0b01f74d09', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:40:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 12 KiB/s wr, 38 op/s
Oct  2 04:40:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:01Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:b9:2b 10.100.0.10
Oct  2 04:40:01 np0005465604 nova_compute[260603]: 2025-10-02 08:40:01.843 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394386.841953, 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:40:01 np0005465604 nova_compute[260603]: 2025-10-02 08:40:01.844 2 INFO nova.compute.manager [-] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] VM Stopped (Lifecycle Event)
Oct  2 04:40:01 np0005465604 nova_compute[260603]: 2025-10-02 08:40:01.874 2 DEBUG nova.compute.manager [None req-3e374481-55fc-4a7f-9af0-508b3a597f3d - - - - - -] [instance: 49cc6a50-fcc0-4336-a786-4fe32e5d5c5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:40:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:02 np0005465604 nova_compute[260603]: 2025-10-02 08:40:02.906 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Successfully created port: de7564c4-d262-44cd-9232-19988049a763 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:40:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 623 KiB/s rd, 2.0 MiB/s wr, 111 op/s
Oct  2 04:40:04 np0005465604 nova_compute[260603]: 2025-10-02 08:40:04.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:40:04 np0005465604 nova_compute[260603]: 2025-10-02 08:40:04.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:40:05 np0005465604 nova_compute[260603]: 2025-10-02 08:40:05.191 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Successfully updated port: de7564c4-d262-44cd-9232-19988049a763 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:40:05 np0005465604 nova_compute[260603]: 2025-10-02 08:40:05.214 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:40:05 np0005465604 nova_compute[260603]: 2025-10-02 08:40:05.215 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquired lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:40:05 np0005465604 nova_compute[260603]: 2025-10-02 08:40:05.216 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:40:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct  2 04:40:05 np0005465604 nova_compute[260603]: 2025-10-02 08:40:05.302 2 DEBUG nova.compute.manager [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-changed-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:40:05 np0005465604 nova_compute[260603]: 2025-10-02 08:40:05.303 2 DEBUG nova.compute.manager [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Refreshing instance network info cache due to event network-changed-de7564c4-d262-44cd-9232-19988049a763. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:40:05 np0005465604 nova_compute[260603]: 2025-10-02 08:40:05.304 2 DEBUG oslo_concurrency.lockutils [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:40:05 np0005465604 nova_compute[260603]: 2025-10-02 08:40:05.472 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:40:06 np0005465604 podman[357466]: 2025-10-02 08:40:06.02453336 +0000 UTC m=+0.086016717 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 04:40:06 np0005465604 podman[357467]: 2025-10-02 08:40:06.048969674 +0000 UTC m=+0.107855133 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:40:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 246 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct  2 04:40:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.126 2 DEBUG nova.network.neutron [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updating instance_info_cache with network_info: [{"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.156 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Releasing lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.156 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Instance network_info: |[{"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.157 2 DEBUG oslo_concurrency.lockutils [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.157 2 DEBUG nova.network.neutron [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Refreshing network info cache for port de7564c4-d262-44cd-9232-19988049a763 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.161 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Start _get_guest_xml network_info=[{"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.167 2 WARNING nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.174 2 DEBUG nova.virt.libvirt.host [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.175 2 DEBUG nova.virt.libvirt.host [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.179 2 DEBUG nova.virt.libvirt.host [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.180 2 DEBUG nova.virt.libvirt.host [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.180 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.180 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.181 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.181 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.182 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.182 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.182 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.182 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.183 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.183 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.184 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.184 2 DEBUG nova.virt.hardware [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.187 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:40:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2181872288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.645 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.682 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:40:08 np0005465604 nova_compute[260603]: 2025-10-02 08:40:08.687 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:40:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/113051477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.190 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.191 2 DEBUG nova.virt.libvirt.vif [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:39:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2038300024',display_name='tempest-ServerActionsTestOtherA-server-2038300024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2038300024',id=97,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-9f4e7pit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTest
OtherA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:40:00Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=c33f349a-5d06-4a81-90fd-fe32eebc4cb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.192 2 DEBUG nova.network.os_vif_util [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.192 2 DEBUG nova.network.os_vif_util [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.193 2 DEBUG nova.objects.instance [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'pci_devices' on Instance uuid c33f349a-5d06-4a81-90fd-fe32eebc4cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.213 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  <uuid>c33f349a-5d06-4a81-90fd-fe32eebc4cb9</uuid>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  <name>instance-00000061</name>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerActionsTestOtherA-server-2038300024</nova:name>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:40:08</nova:creationTime>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <nova:user uuid="d3802fedfb914c27b9b09ad6ea6f4c27">tempest-ServerActionsTestOtherA-249618595-project-member</nova:user>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <nova:project uuid="c75535fe577642038c638a0b01f74d09">tempest-ServerActionsTestOtherA-249618595</nova:project>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <nova:port uuid="de7564c4-d262-44cd-9232-19988049a763">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <entry name="serial">c33f349a-5d06-4a81-90fd-fe32eebc4cb9</entry>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <entry name="uuid">c33f349a-5d06-4a81-90fd-fe32eebc4cb9</entry>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:32:fb:55"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <target dev="tapde7564c4-d2"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/console.log" append="off"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:40:09 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:40:09 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:40:09 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:40:09 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.214 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Preparing to wait for external event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.214 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.214 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.215 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.215 2 DEBUG nova.virt.libvirt.vif [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:39:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2038300024',display_name='tempest-ServerActionsTestOtherA-server-2038300024',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2038300024',id=97,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-9f4e7pit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerA
ctionsTestOtherA-249618595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:40:00Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=c33f349a-5d06-4a81-90fd-fe32eebc4cb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.215 2 DEBUG nova.network.os_vif_util [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.216 2 DEBUG nova.network.os_vif_util [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.216 2 DEBUG os_vif [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde7564c4-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapde7564c4-d2, col_values=(('external_ids', {'iface-id': 'de7564c4-d262-44cd-9232-19988049a763', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:fb:55', 'vm-uuid': 'c33f349a-5d06-4a81-90fd-fe32eebc4cb9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:09 np0005465604 NetworkManager[45129]: <info>  [1759394409.2254] manager: (tapde7564c4-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.237 2 INFO os_vif [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2')#033[00m
Oct  2 04:40:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.313 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.313 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.313 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] No VIF found with MAC fa:16:3e:32:fb:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.314 2 INFO nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Using config drive#033[00m
Oct  2 04:40:09 np0005465604 nova_compute[260603]: 2025-10-02 08:40:09.335 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:40:10 np0005465604 nova_compute[260603]: 2025-10-02 08:40:10.414 2 INFO nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Creating config drive at /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config#033[00m
Oct  2 04:40:10 np0005465604 nova_compute[260603]: 2025-10-02 08:40:10.424 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1h0na6bl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:10 np0005465604 nova_compute[260603]: 2025-10-02 08:40:10.603 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1h0na6bl" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:10 np0005465604 nova_compute[260603]: 2025-10-02 08:40:10.625 2 DEBUG nova.storage.rbd_utils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] rbd image c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:40:10 np0005465604 nova_compute[260603]: 2025-10-02 08:40:10.628 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:10 np0005465604 nova_compute[260603]: 2025-10-02 08:40:10.821 2 DEBUG oslo_concurrency.processutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config c33f349a-5d06-4a81-90fd-fe32eebc4cb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:10 np0005465604 nova_compute[260603]: 2025-10-02 08:40:10.823 2 INFO nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Deleting local config drive /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9/disk.config because it was imported into RBD.#033[00m
Oct  2 04:40:10 np0005465604 kernel: tapde7564c4-d2: entered promiscuous mode
Oct  2 04:40:10 np0005465604 NetworkManager[45129]: <info>  [1759394410.8958] manager: (tapde7564c4-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Oct  2 04:40:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:10Z|00993|binding|INFO|Claiming lport de7564c4-d262-44cd-9232-19988049a763 for this chassis.
Oct  2 04:40:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:10Z|00994|binding|INFO|de7564c4-d262-44cd-9232-19988049a763: Claiming fa:16:3e:32:fb:55 10.100.0.10
Oct  2 04:40:10 np0005465604 nova_compute[260603]: 2025-10-02 08:40:10.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.907 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:fb:55 10.100.0.10'], port_security=['fa:16:3e:32:fb:55 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c33f349a-5d06-4a81-90fd-fe32eebc4cb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd37b83fc-7239-49dc-8ab1-fc95753c436a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=de7564c4-d262-44cd-9232-19988049a763) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:40:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.910 162357 INFO neutron.agent.ovn.metadata.agent [-] Port de7564c4-d262-44cd-9232-19988049a763 in datapath 28f843b2-396a-4167-9840-21c273bdc044 bound to our chassis#033[00m
Oct  2 04:40:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.913 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044#033[00m
Oct  2 04:40:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:10Z|00995|binding|INFO|Setting lport de7564c4-d262-44cd-9232-19988049a763 up in Southbound
Oct  2 04:40:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:10Z|00996|binding|INFO|Setting lport de7564c4-d262-44cd-9232-19988049a763 ovn-installed in OVS
Oct  2 04:40:10 np0005465604 nova_compute[260603]: 2025-10-02 08:40:10.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:10 np0005465604 nova_compute[260603]: 2025-10-02 08:40:10.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:10 np0005465604 systemd-machined[214636]: New machine qemu-123-instance-00000061.
Oct  2 04:40:10 np0005465604 nova_compute[260603]: 2025-10-02 08:40:10.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.940 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2212b274-ac53-4b78-8f99-393c5c1c2626]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:10 np0005465604 systemd[1]: Started Virtual Machine qemu-123-instance-00000061.
Oct  2 04:40:10 np0005465604 systemd-udevd[357645]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:40:10 np0005465604 NetworkManager[45129]: <info>  [1759394410.9761] device (tapde7564c4-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:40:10 np0005465604 NetworkManager[45129]: <info>  [1759394410.9772] device (tapde7564c4-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:40:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.979 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[30ff254d-f18c-4f16-a226-1f559f19b32e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:10.983 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[787af4f5-c061-49ed-958e-87f62c2414f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.007 2 DEBUG nova.network.neutron [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updated VIF entry in instance network info cache for port de7564c4-d262-44cd-9232-19988049a763. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.007 2 DEBUG nova.network.neutron [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updating instance_info_cache with network_info: [{"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:40:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.016 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a21e51-8bd1-4fc8-b739-2d6d92d35599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.041 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9552e2f-7b8c-4698-983c-073a53760a88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 916, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 916, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357656, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.058 2 DEBUG oslo_concurrency.lockutils [req-fc4df95e-3501-4363-95a1-22e59c42de7a req-9fd7e009-8962-4de5-91b4-2a7c3f2a2fb6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:40:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.063 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9f78a1-9869-4c13-92b0-c59ba1167859]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357657, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357657, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.065 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.070 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.070 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:40:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.071 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:11.072 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:40:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.289 2 DEBUG nova.compute.manager [req-16020913-d930-4851-ab13-13de035c2a53 req-ba62c74e-f76e-423e-8c93-e08e5ea88e69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.289 2 DEBUG oslo_concurrency.lockutils [req-16020913-d930-4851-ab13-13de035c2a53 req-ba62c74e-f76e-423e-8c93-e08e5ea88e69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.290 2 DEBUG oslo_concurrency.lockutils [req-16020913-d930-4851-ab13-13de035c2a53 req-ba62c74e-f76e-423e-8c93-e08e5ea88e69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.290 2 DEBUG oslo_concurrency.lockutils [req-16020913-d930-4851-ab13-13de035c2a53 req-ba62c74e-f76e-423e-8c93-e08e5ea88e69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.290 2 DEBUG nova.compute.manager [req-16020913-d930-4851-ab13-13de035c2a53 req-ba62c74e-f76e-423e-8c93-e08e5ea88e69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Processing event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.885 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394411.8853679, c33f349a-5d06-4a81-90fd-fe32eebc4cb9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.886 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] VM Started (Lifecycle Event)#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.888 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.892 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.898 2 INFO nova.virt.libvirt.driver [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Instance spawned successfully.#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.898 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.919 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.924 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.928 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.928 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.929 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.929 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.929 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.930 2 DEBUG nova.virt.libvirt.driver [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.967 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.967 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394411.8865168, c33f349a-5d06-4a81-90fd-fe32eebc4cb9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:40:11 np0005465604 nova_compute[260603]: 2025-10-02 08:40:11.967 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.017 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.020 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394411.891461, c33f349a-5d06-4a81-90fd-fe32eebc4cb9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.020 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.054 2 INFO nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Took 11.72 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.054 2 DEBUG nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.055 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.061 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.108 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.195 2 INFO nova.compute.manager [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Took 12.94 seconds to build instance.#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.218 2 DEBUG oslo_concurrency.lockutils [None req-2f8415f3-0566-4978-ac0f-f3d4f9a552ec d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.669 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.670 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.670 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.671 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.671 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.672 2 INFO nova.compute.manager [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Terminating instance#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.673 2 DEBUG nova.compute.manager [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:40:12 np0005465604 kernel: tap00fa1373-e4 (unregistering): left promiscuous mode
Oct  2 04:40:12 np0005465604 NetworkManager[45129]: <info>  [1759394412.7267] device (tap00fa1373-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:40:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:12Z|00997|binding|INFO|Releasing lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb from this chassis (sb_readonly=0)
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:12Z|00998|binding|INFO|Setting lport 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb down in Southbound
Oct  2 04:40:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:12Z|00999|binding|INFO|Removing iface tap00fa1373-e4 ovn-installed in OVS
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:12.751 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:b9:2b 10.100.0.10'], port_security=['fa:16:3e:11:b9:2b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b85786f28a064d75924559acd4f6137e', 'neutron:revision_number': '11', 'neutron:security_group_ids': '05cfcec7-0c6a-4e83-8fe9-4afbd4cec940', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43e6dc89-4a27-4ab5-bebe-04c62140c10d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:40:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:12.753 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 00fa1373-e4cc-4245-9cfa-5a58c77aa4eb in datapath 19d584c3-e754-47d1-9cdf-c6badbd670d7 unbound from our chassis#033[00m
Oct  2 04:40:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:12.755 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19d584c3-e754-47d1-9cdf-c6badbd670d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:40:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:12.756 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bac09676-2e19-48ad-9b0f-58d5849f7af0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:12.761 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 namespace which is not needed anymore#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:12 np0005465604 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct  2 04:40:12 np0005465604 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005d.scope: Consumed 5.333s CPU time.
Oct  2 04:40:12 np0005465604 systemd-machined[214636]: Machine qemu-122-instance-0000005d terminated.
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.903 2 INFO nova.virt.libvirt.driver [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Instance destroyed successfully.#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.904 2 DEBUG nova.objects.instance [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lazy-loading 'resources' on Instance uuid fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.926 2 DEBUG nova.virt.libvirt.vif [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:37:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1260783938',display_name='tempest-ServersNegativeTestJSON-server-1260783938',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1260783938',id=93,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:39:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b85786f28a064d75924559acd4f6137e',ramdisk_id='',reservation_id='r-h40zdbp5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-2088330606',owner_user_name='tempest-ServersNegativeTestJSON-2088330606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:39:57Z,user_data=None,user_id='15da3bbf2c9f49b68e7a7e0ccd557067',uuid=fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.928 2 DEBUG nova.network.os_vif_util [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converting VIF {"id": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "address": "fa:16:3e:11:b9:2b", "network": {"id": "19d584c3-e754-47d1-9cdf-c6badbd670d7", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-384640570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b85786f28a064d75924559acd4f6137e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00fa1373-e4", "ovs_interfaceid": "00fa1373-e4cc-4245-9cfa-5a58c77aa4eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.929 2 DEBUG nova.network.os_vif_util [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.929 2 DEBUG os_vif [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.932 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00fa1373-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.936 2 DEBUG nova.compute.manager [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.937 2 DEBUG oslo_concurrency.lockutils [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.937 2 DEBUG oslo_concurrency.lockutils [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.938 2 DEBUG oslo_concurrency.lockutils [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.938 2 DEBUG nova.compute.manager [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.938 2 DEBUG nova.compute.manager [req-84615330-8280-4a83-abbf-e81e640c10af req-129d01b0-2890-4aa2-9533-92ca0162852f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-unplugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:12 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [NOTICE]   (357223) : haproxy version is 2.8.14-c23fe91
Oct  2 04:40:12 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [NOTICE]   (357223) : path to executable is /usr/sbin/haproxy
Oct  2 04:40:12 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [WARNING]  (357223) : Exiting Master process...
Oct  2 04:40:12 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [WARNING]  (357223) : Exiting Master process...
Oct  2 04:40:12 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [ALERT]    (357223) : Current worker (357225) exited with code 143 (Terminated)
Oct  2 04:40:12 np0005465604 neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7[357219]: [WARNING]  (357223) : All workers exited. Exiting... (0)
Oct  2 04:40:12 np0005465604 nova_compute[260603]: 2025-10-02 08:40:12.942 2 INFO os_vif [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:b9:2b,bridge_name='br-int',has_traffic_filtering=True,id=00fa1373-e4cc-4245-9cfa-5a58c77aa4eb,network=Network(19d584c3-e754-47d1-9cdf-c6badbd670d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00fa1373-e4')#033[00m
Oct  2 04:40:12 np0005465604 systemd[1]: libpod-d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee.scope: Deactivated successfully.
Oct  2 04:40:12 np0005465604 podman[357723]: 2025-10-02 08:40:12.952090122 +0000 UTC m=+0.062696285 container died d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:40:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee-userdata-shm.mount: Deactivated successfully.
Oct  2 04:40:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3fbca77696254007de7e96072e33264e451582408270a23fe6bb5e724afe7cd3-merged.mount: Deactivated successfully.
Oct  2 04:40:13 np0005465604 podman[357723]: 2025-10-02 08:40:13.000852077 +0000 UTC m=+0.111458240 container cleanup d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:40:13 np0005465604 systemd[1]: libpod-conmon-d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee.scope: Deactivated successfully.
Oct  2 04:40:13 np0005465604 podman[357781]: 2025-10-02 08:40:13.090434359 +0000 UTC m=+0.057916321 container remove d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 04:40:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.102 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52947846-eb18-4c6b-85c1-db9d30947abc]: (4, ('Thu Oct  2 08:40:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee)\nd40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee\nThu Oct  2 08:40:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 (d40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee)\nd40bc9ddc218c7b2b009c232f2b7be2391d97b8cdc64293175bbfba7794a5fee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.104 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[95bd5460-98a1-4f01-b17f-202dcaf05d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.105 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d584c3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:13 np0005465604 kernel: tap19d584c3-e0: left promiscuous mode
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.124 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f0cf9c-b553-473c-8877-fcd142421d82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.159 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c19b3ce8-28e9-4e31-a954-3003442a74c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.160 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f29e3fbd-f90a-45d5-9851-664c63bf37e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.180 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[474e064e-ec24-4cb9-9a0d-890f56a1059a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527495, 'reachable_time': 40838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357794, 'error': None, 'target': 'ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:13 np0005465604 systemd[1]: run-netns-ovnmeta\x2d19d584c3\x2de754\x2d47d1\x2d9cdf\x2dc6badbd670d7.mount: Deactivated successfully.
Oct  2 04:40:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.182 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-19d584c3-e754-47d1-9cdf-c6badbd670d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:40:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:13.182 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d819033f-8fbe-4d96-ab57-67f37ab8efa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.308 2 INFO nova.virt.libvirt.driver [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deleting instance files /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_del#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.309 2 INFO nova.virt.libvirt.driver [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deletion of /var/lib/nova/instances/fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b_del complete#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.354 2 INFO nova.compute.manager [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.354 2 DEBUG oslo.service.loopingcall [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.355 2 DEBUG nova.compute.manager [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.355 2 DEBUG nova.network.neutron [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.503 2 DEBUG nova.compute.manager [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.504 2 DEBUG oslo_concurrency.lockutils [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.505 2 DEBUG oslo_concurrency.lockutils [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.506 2 DEBUG oslo_concurrency.lockutils [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.506 2 DEBUG nova.compute.manager [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] No waiting events found dispatching network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:40:13 np0005465604 nova_compute[260603]: 2025-10-02 08:40:13.507 2 WARNING nova.compute.manager [req-cd7fe29f-d87b-45c5-bbeb-c5b874a897cd req-52053740-44c7-4e99-872e-96018ab6788d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received unexpected event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:40:14 np0005465604 nova_compute[260603]: 2025-10-02 08:40:14.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:14 np0005465604 nova_compute[260603]: 2025-10-02 08:40:14.318 2 DEBUG nova.network.neutron [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:40:14 np0005465604 nova_compute[260603]: 2025-10-02 08:40:14.338 2 INFO nova.compute.manager [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Took 0.98 seconds to deallocate network for instance.#033[00m
Oct  2 04:40:14 np0005465604 nova_compute[260603]: 2025-10-02 08:40:14.399 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:14 np0005465604 nova_compute[260603]: 2025-10-02 08:40:14.400 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:14 np0005465604 nova_compute[260603]: 2025-10-02 08:40:14.507 2 DEBUG oslo_concurrency.processutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:40:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945092216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:40:14 np0005465604 nova_compute[260603]: 2025-10-02 08:40:14.983 2 DEBUG oslo_concurrency.processutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:14 np0005465604 nova_compute[260603]: 2025-10-02 08:40:14.990 2 DEBUG nova.compute.provider_tree [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.020 2 DEBUG nova.scheduler.client.report [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.054 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.064 2 DEBUG nova.compute.manager [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.065 2 DEBUG oslo_concurrency.lockutils [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.066 2 DEBUG oslo_concurrency.lockutils [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.067 2 DEBUG oslo_concurrency.lockutils [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.067 2 DEBUG nova.compute.manager [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] No waiting events found dispatching network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.068 2 WARNING nova.compute.manager [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received unexpected event network-vif-plugged-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.069 2 DEBUG nova.compute.manager [req-321ee815-8549-4192-95be-0d81fcfdee85 req-45c1f0bf-d5c8-4e9c-9235-6a8ad3f19932 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Received event network-vif-deleted-00fa1373-e4cc-4245-9cfa-5a58c77aa4eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.105 2 INFO nova.scheduler.client.report [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Deleted allocations for instance fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.214 2 DEBUG oslo_concurrency.lockutils [None req-9c2b6ee4-f28e-4b51-adf6-2bbdb655134b 15da3bbf2c9f49b68e7a7e0ccd557067 b85786f28a064d75924559acd4f6137e - - default default] Lock "fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 248 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 719 KiB/s rd, 25 KiB/s wr, 35 op/s
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.647 2 DEBUG nova.compute.manager [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-changed-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.648 2 DEBUG nova.compute.manager [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Refreshing instance network info cache due to event network-changed-de7564c4-d262-44cd-9232-19988049a763. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.649 2 DEBUG oslo_concurrency.lockutils [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.650 2 DEBUG oslo_concurrency.lockutils [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:40:15 np0005465604 nova_compute[260603]: 2025-10-02 08:40:15.651 2 DEBUG nova.network.neutron [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Refreshing network info cache for port de7564c4-d262-44cd-9232-19988049a763 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.414 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.415 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.416 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.416 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.417 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.418 2 INFO nova.compute.manager [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Terminating instance#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.420 2 DEBUG nova.compute.manager [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:40:16 np0005465604 kernel: tapde7564c4-d2 (unregistering): left promiscuous mode
Oct  2 04:40:16 np0005465604 NetworkManager[45129]: <info>  [1759394416.4807] device (tapde7564c4-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:40:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:16Z|01000|binding|INFO|Releasing lport de7564c4-d262-44cd-9232-19988049a763 from this chassis (sb_readonly=0)
Oct  2 04:40:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:16Z|01001|binding|INFO|Setting lport de7564c4-d262-44cd-9232-19988049a763 down in Southbound
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:16Z|01002|binding|INFO|Removing iface tapde7564c4-d2 ovn-installed in OVS
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.494 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:fb:55 10.100.0.10'], port_security=['fa:16:3e:32:fb:55 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c33f349a-5d06-4a81-90fd-fe32eebc4cb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=de7564c4-d262-44cd-9232-19988049a763) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.495 162357 INFO neutron.agent.ovn.metadata.agent [-] Port de7564c4-d262-44cd-9232-19988049a763 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.496 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28f843b2-396a-4167-9840-21c273bdc044#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c77fd2-1f24-4d46-b249-68b9238ea032]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.552 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b96120f3-7543-4922-a5fa-18501bf004db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.554 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fbf23d-ece6-4b9d-93c3-a3b42039ab0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:16 np0005465604 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct  2 04:40:16 np0005465604 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000061.scope: Consumed 5.394s CPU time.
Oct  2 04:40:16 np0005465604 systemd-machined[214636]: Machine qemu-123-instance-00000061 terminated.
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.595 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c255f5cf-52a7-4083-9af7-d9eace366bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.620 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a16497e-4ca4-4a63-82e5-fafed6f3f4e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28f843b2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:e0:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 916, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 916, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520146, 'reachable_time': 29911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357828, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.640 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d599811-d578-4c24-b8ca-6d17f04f9990]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520161, 'tstamp': 520161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357829, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap28f843b2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520164, 'tstamp': 520164}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357829, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.642 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.655 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28f843b2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.655 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.655 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28f843b2-30, col_values=(('external_ids', {'iface-id': '73da1479-3a84-4798-9da3-841fe88c5e3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:16.656 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.670 2 INFO nova.virt.libvirt.driver [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Instance destroyed successfully.#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.671 2 DEBUG nova.objects.instance [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'resources' on Instance uuid c33f349a-5d06-4a81-90fd-fe32eebc4cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.703 2 DEBUG nova.virt.libvirt.vif [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:39:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2038300024',display_name='tempest-ServerActionsTestOtherA-server-2038300024',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2038300024',id=97,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:40:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-9f4e7pit',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:40:12Z,user_data=None,user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=c33f349a-5d06-4a81-90fd-fe32eebc4cb9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.704 2 DEBUG nova.network.os_vif_util [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.705 2 DEBUG nova.network.os_vif_util [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.706 2 DEBUG os_vif [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.708 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde7564c4-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:16 np0005465604 nova_compute[260603]: 2025-10-02 08:40:16.714 2 INFO os_vif [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:fb:55,bridge_name='br-int',has_traffic_filtering=True,id=de7564c4-d262-44cd-9232-19988049a763,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapde7564c4-d2')#033[00m
Oct  2 04:40:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 221 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 26 KiB/s wr, 57 op/s
Oct  2 04:40:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.400 2 DEBUG nova.network.neutron [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updated VIF entry in instance network info cache for port de7564c4-d262-44cd-9232-19988049a763. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.401 2 DEBUG nova.network.neutron [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updating instance_info_cache with network_info: [{"id": "de7564c4-d262-44cd-9232-19988049a763", "address": "fa:16:3e:32:fb:55", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde7564c4-d2", "ovs_interfaceid": "de7564c4-d262-44cd-9232-19988049a763", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.437 2 DEBUG oslo_concurrency.lockutils [req-93faa5c2-a829-40b6-b4b3-2527e5f24f0d req-45eacea5-36c6-48a8-af8d-378458f51dde 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-c33f349a-5d06-4a81-90fd-fe32eebc4cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.739 2 DEBUG nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-unplugged-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.739 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.739 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.740 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.740 2 DEBUG nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] No waiting events found dispatching network-vif-unplugged-de7564c4-d262-44cd-9232-19988049a763 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.740 2 DEBUG nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-unplugged-de7564c4-d262-44cd-9232-19988049a763 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.740 2 DEBUG nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.741 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.741 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.742 2 DEBUG oslo_concurrency.lockutils [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.742 2 DEBUG nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] No waiting events found dispatching network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:40:17 np0005465604 nova_compute[260603]: 2025-10-02 08:40:17.742 2 WARNING nova.compute.manager [req-736030e1-6b0d-4fe5-bb5e-2e6e480d9358 req-f4d7d5b0-52c3-47f2-a838-61bda1529603 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received unexpected event network-vif-plugged-de7564c4-d262-44cd-9232-19988049a763 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:40:18 np0005465604 nova_compute[260603]: 2025-10-02 08:40:18.792 2 INFO nova.virt.libvirt.driver [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Deleting instance files /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9_del#033[00m
Oct  2 04:40:18 np0005465604 nova_compute[260603]: 2025-10-02 08:40:18.793 2 INFO nova.virt.libvirt.driver [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Deletion of /var/lib/nova/instances/c33f349a-5d06-4a81-90fd-fe32eebc4cb9_del complete#033[00m
Oct  2 04:40:18 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:18Z|01003|binding|INFO|Releasing lport 73da1479-3a84-4798-9da3-841fe88c5e3a from this chassis (sb_readonly=0)
Oct  2 04:40:18 np0005465604 nova_compute[260603]: 2025-10-02 08:40:18.870 2 INFO nova.compute.manager [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Took 2.45 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:40:18 np0005465604 nova_compute[260603]: 2025-10-02 08:40:18.871 2 DEBUG oslo.service.loopingcall [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:40:18 np0005465604 nova_compute[260603]: 2025-10-02 08:40:18.871 2 DEBUG nova.compute.manager [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:40:18 np0005465604 nova_compute[260603]: 2025-10-02 08:40:18.872 2 DEBUG nova.network.neutron [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:40:18 np0005465604 nova_compute[260603]: 2025-10-02 08:40:18.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:19 np0005465604 nova_compute[260603]: 2025-10-02 08:40:19.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 147 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 115 op/s
Oct  2 04:40:20 np0005465604 nova_compute[260603]: 2025-10-02 08:40:20.144 2 DEBUG nova.network.neutron [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:40:20 np0005465604 nova_compute[260603]: 2025-10-02 08:40:20.178 2 INFO nova.compute.manager [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Took 1.31 seconds to deallocate network for instance.#033[00m
Oct  2 04:40:20 np0005465604 nova_compute[260603]: 2025-10-02 08:40:20.235 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:20 np0005465604 nova_compute[260603]: 2025-10-02 08:40:20.235 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:20 np0005465604 nova_compute[260603]: 2025-10-02 08:40:20.316 2 DEBUG nova.compute.manager [req-1d8d08cb-c00a-4325-92ae-68b6dcace0b8 req-ec300ade-9ce7-46e7-8a7f-9159b9577106 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Received event network-vif-deleted-de7564c4-d262-44cd-9232-19988049a763 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:20 np0005465604 nova_compute[260603]: 2025-10-02 08:40:20.347 2 DEBUG oslo_concurrency.processutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:40:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183223316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:40:20 np0005465604 nova_compute[260603]: 2025-10-02 08:40:20.840 2 DEBUG oslo_concurrency.processutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:20 np0005465604 nova_compute[260603]: 2025-10-02 08:40:20.848 2 DEBUG nova.compute.provider_tree [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:40:20 np0005465604 nova_compute[260603]: 2025-10-02 08:40:20.867 2 DEBUG nova.scheduler.client.report [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:40:20 np0005465604 nova_compute[260603]: 2025-10-02 08:40:20.894 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:20 np0005465604 nova_compute[260603]: 2025-10-02 08:40:20.936 2 INFO nova.scheduler.client.report [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Deleted allocations for instance c33f349a-5d06-4a81-90fd-fe32eebc4cb9#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.046 2 DEBUG oslo_concurrency.lockutils [None req-9c674506-b007-4780-ba61-64c5d4dba8c4 d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "c33f349a-5d06-4a81-90fd-fe32eebc4cb9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 147 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 114 op/s
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.467 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.468 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.468 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.469 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.469 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.471 2 INFO nova.compute.manager [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Terminating instance#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.474 2 DEBUG nova.compute.manager [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:40:21 np0005465604 kernel: tap4340f7c5-2f (unregistering): left promiscuous mode
Oct  2 04:40:21 np0005465604 NetworkManager[45129]: <info>  [1759394421.5382] device (tap4340f7c5-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01004|binding|INFO|Releasing lport 4340f7c5-2f3a-4608-b77c-40798457ce79 from this chassis (sb_readonly=0)
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01005|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 down in Southbound
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01006|binding|INFO|Removing iface tap4340f7c5-2f ovn-installed in OVS
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.560 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:12:17 10.100.0.14'], port_security=['fa:16:3e:37:12:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4145f1a3-c327-49ee-9af1-1ace3afb70a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fef51c8-51f5-4bd3-92a8-fdacaab334b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4340f7c5-2f3a-4608-b77c-40798457ce79) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.561 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4340f7c5-2f3a-4608-b77c-40798457ce79 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.562 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28f843b2-396a-4167-9840-21c273bdc044, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.563 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb605fbe-db33-4f0f-b0cd-6e5c4c7d0147]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.564 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28f843b2-396a-4167-9840-21c273bdc044 namespace which is not needed anymore#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Oct  2 04:40:21 np0005465604 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Consumed 16.692s CPU time.
Oct  2 04:40:21 np0005465604 systemd-machined[214636]: Machine qemu-118-instance-0000005f terminated.
Oct  2 04:40:21 np0005465604 kernel: tap4340f7c5-2f: entered promiscuous mode
Oct  2 04:40:21 np0005465604 systemd-udevd[357888]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:40:21 np0005465604 kernel: tap4340f7c5-2f (unregistering): left promiscuous mode
Oct  2 04:40:21 np0005465604 NetworkManager[45129]: <info>  [1759394421.6995] manager: (tap4340f7c5-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01007|binding|INFO|Claiming lport 4340f7c5-2f3a-4608-b77c-40798457ce79 for this chassis.
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01008|binding|INFO|4340f7c5-2f3a-4608-b77c-40798457ce79: Claiming fa:16:3e:37:12:17 10.100.0.14
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.712 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:12:17 10.100.0.14'], port_security=['fa:16:3e:37:12:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4145f1a3-c327-49ee-9af1-1ace3afb70a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fef51c8-51f5-4bd3-92a8-fdacaab334b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4340f7c5-2f3a-4608-b77c-40798457ce79) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.726 2 INFO nova.virt.libvirt.driver [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Instance destroyed successfully.#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.726 2 DEBUG nova.objects.instance [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lazy-loading 'resources' on Instance uuid 4145f1a3-c327-49ee-9af1-1ace3afb70a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01009|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 ovn-installed in OVS
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01010|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 up in Southbound
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01011|binding|INFO|Releasing lport 4340f7c5-2f3a-4608-b77c-40798457ce79 from this chassis (sb_readonly=1)
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01012|if_status|INFO|Dropped 2 log messages in last 122 seconds (most recently, 122 seconds ago) due to excessive rate
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01013|if_status|INFO|Not setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 down as sb is readonly
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01014|binding|INFO|Removing iface tap4340f7c5-2f ovn-installed in OVS
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01015|binding|INFO|Releasing lport 4340f7c5-2f3a-4608-b77c-40798457ce79 from this chassis (sb_readonly=0)
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.746 2 DEBUG nova.virt.libvirt.vif [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:38:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2131100839',display_name='tempest-ServerActionsTestOtherA-server-2131100839',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2131100839',id=95,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC0HjsYe2bQh07GPDTkE/Hkn9NHAzOfs+WPsOxgVRJ14fyGEBr+vw6dokOlyhdtA2fxAJhqFEPbCShkjGLLEAdJQr2B1DlLsi6qyPK3AKei/w52/HIPGV/pd20ma4wEBfQ==',key_name='tempest-keypair-312406508',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:38:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c75535fe577642038c638a0b01f74d09',ramdisk_id='',reservation_id='r-ccglcikc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-249618595',owner_user_name='tempest-ServerActionsTestOtherA-249618595-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:38:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3802fedfb914c27b9b09ad6ea6f4c27',uuid=4145f1a3-c327-49ee-9af1-1ace3afb70a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.747 2 DEBUG nova.network.os_vif_util [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converting VIF {"id": "4340f7c5-2f3a-4608-b77c-40798457ce79", "address": "fa:16:3e:37:12:17", "network": {"id": "28f843b2-396a-4167-9840-21c273bdc044", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1891142783-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c75535fe577642038c638a0b01f74d09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4340f7c5-2f", "ovs_interfaceid": "4340f7c5-2f3a-4608-b77c-40798457ce79", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:40:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:40:21Z|01016|binding|INFO|Setting lport 4340f7c5-2f3a-4608-b77c-40798457ce79 down in Southbound
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.748 2 DEBUG nova.network.os_vif_util [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.748 2 DEBUG os_vif [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:40:21 np0005465604 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [NOTICE]   (354132) : haproxy version is 2.8.14-c23fe91
Oct  2 04:40:21 np0005465604 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [NOTICE]   (354132) : path to executable is /usr/sbin/haproxy
Oct  2 04:40:21 np0005465604 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [WARNING]  (354132) : Exiting Master process...
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.751 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4340f7c5-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [ALERT]    (354132) : Current worker (354134) exited with code 143 (Terminated)
Oct  2 04:40:21 np0005465604 neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044[354128]: [WARNING]  (354132) : All workers exited. Exiting... (0)
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.753 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:12:17 10.100.0.14'], port_security=['fa:16:3e:37:12:17 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4145f1a3-c327-49ee-9af1-1ace3afb70a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28f843b2-396a-4167-9840-21c273bdc044', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c75535fe577642038c638a0b01f74d09', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fef51c8-51f5-4bd3-92a8-fdacaab334b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9025a22-b533-4e0f-aea9-93fa00c3dbe4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4340f7c5-2f3a-4608-b77c-40798457ce79) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:40:21 np0005465604 systemd[1]: libpod-3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052.scope: Deactivated successfully.
Oct  2 04:40:21 np0005465604 podman[357909]: 2025-10-02 08:40:21.762209589 +0000 UTC m=+0.056603643 container died 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.767 2 INFO os_vif [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:12:17,bridge_name='br-int',has_traffic_filtering=True,id=4340f7c5-2f3a-4608-b77c-40798457ce79,network=Network(28f843b2-396a-4167-9840-21c273bdc044),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4340f7c5-2f')#033[00m
Oct  2 04:40:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052-userdata-shm.mount: Deactivated successfully.
Oct  2 04:40:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-afb0969f191443a386916053342273928abb5dcb10048e58d342dcf2424fd2bb-merged.mount: Deactivated successfully.
Oct  2 04:40:21 np0005465604 podman[357909]: 2025-10-02 08:40:21.811341704 +0000 UTC m=+0.105735748 container cleanup 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:40:21 np0005465604 systemd[1]: libpod-conmon-3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052.scope: Deactivated successfully.
Oct  2 04:40:21 np0005465604 podman[357958]: 2025-10-02 08:40:21.885115202 +0000 UTC m=+0.048845779 container remove 3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.898 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5654ab-402e-40c5-88c3-c6711752157e]: (4, ('Thu Oct  2 08:40:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044 (3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052)\n3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052\nThu Oct  2 08:40:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28f843b2-396a-4167-9840-21c273bdc044 (3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052)\n3914dcb6f581ef9ce176336c97b8af8ee736032a617c16ad46df1b01ce5c6052\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.900 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbceb0f-5622-4aac-982c-ec7ed3a84805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.903 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28f843b2-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 kernel: tap28f843b2-30: left promiscuous mode
Oct  2 04:40:21 np0005465604 nova_compute[260603]: 2025-10-02 08:40:21.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.923 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a004c1-b0eb-42ea-a6b7-b60209e377da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.948 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b11579d8-f021-41d7-9fc2-84b1db4b169c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.949 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9453214f-173a-4b24-9749-476833d1ba2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.971 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[02f35529-f150-4392-81a9-4eac5296975f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520137, 'reachable_time': 25083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357973, 'error': None, 'target': 'ovnmeta-28f843b2-396a-4167-9840-21c273bdc044', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:21 np0005465604 systemd[1]: run-netns-ovnmeta\x2d28f843b2\x2d396a\x2d4167\x2d9840\x2d21c273bdc044.mount: Deactivated successfully.
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.976 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28f843b2-396a-4167-9840-21c273bdc044 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.976 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[db6a495c-2d52-48e0-9485-5b7c50c07a11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.977 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4340f7c5-2f3a-4608-b77c-40798457ce79 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.978 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28f843b2-396a-4167-9840-21c273bdc044, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.979 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a0eb47f1-ba7d-4c34-94b3-01043f8d3231]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.979 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4340f7c5-2f3a-4608-b77c-40798457ce79 in datapath 28f843b2-396a-4167-9840-21c273bdc044 unbound from our chassis#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.980 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28f843b2-396a-4167-9840-21c273bdc044, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:40:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:21.981 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[abbea1fe-9d6d-41bd-b5fb-59186a35f92e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:40:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671994283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:40:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:40:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/671994283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.179 2 INFO nova.virt.libvirt.driver [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Deleting instance files /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5_del#033[00m
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.180 2 INFO nova.virt.libvirt.driver [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Deletion of /var/lib/nova/instances/4145f1a3-c327-49ee-9af1-1ace3afb70a5_del complete#033[00m
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.237 2 INFO nova.compute.manager [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.238 2 DEBUG oslo.service.loopingcall [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.238 2 DEBUG nova.compute.manager [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.238 2 DEBUG nova.network.neutron [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.280 2 DEBUG nova.compute.manager [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-unplugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.280 2 DEBUG oslo_concurrency.lockutils [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.280 2 DEBUG oslo_concurrency.lockutils [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.280 2 DEBUG oslo_concurrency.lockutils [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.281 2 DEBUG nova.compute.manager [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-unplugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:40:22 np0005465604 nova_compute[260603]: 2025-10-02 08:40:22.281 2 DEBUG nova.compute.manager [req-eb23ebcd-c898-4edb-8a99-e147fb75660a req-ad43a273-7ce8-403f-ab53-abbd0ceb080d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-unplugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:40:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 41 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 155 op/s
Oct  2 04:40:23 np0005465604 nova_compute[260603]: 2025-10-02 08:40:23.780 2 DEBUG nova.network.neutron [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:40:23 np0005465604 nova_compute[260603]: 2025-10-02 08:40:23.804 2 INFO nova.compute.manager [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Took 1.57 seconds to deallocate network for instance.#033[00m
Oct  2 04:40:23 np0005465604 nova_compute[260603]: 2025-10-02 08:40:23.865 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:23 np0005465604 nova_compute[260603]: 2025-10-02 08:40:23.866 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:23 np0005465604 nova_compute[260603]: 2025-10-02 08:40:23.932 2 DEBUG oslo_concurrency.processutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:40:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/552991823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.345 2 DEBUG oslo_concurrency.processutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.352 2 DEBUG nova.compute.provider_tree [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.376 2 DEBUG nova.scheduler.client.report [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.403 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.432 2 INFO nova.scheduler.client.report [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Deleted allocations for instance 4145f1a3-c327-49ee-9af1-1ace3afb70a5#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.484 2 DEBUG oslo_concurrency.lockutils [None req-47bbcd45-3b58-4452-bf5e-cabd5ae95bfa d3802fedfb914c27b9b09ad6ea6f4c27 c75535fe577642038c638a0b01f74d09 - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.508 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.508 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.508 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.509 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.509 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.509 2 WARNING nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received unexpected event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.509 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.509 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.510 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.510 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.510 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.510 2 WARNING nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received unexpected event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.510 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.511 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.511 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.511 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.511 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.511 2 WARNING nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received unexpected event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.512 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.512 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.512 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.512 2 DEBUG oslo_concurrency.lockutils [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4145f1a3-c327-49ee-9af1-1ace3afb70a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.512 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] No waiting events found dispatching network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.513 2 WARNING nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received unexpected event network-vif-plugged-4340f7c5-2f3a-4608-b77c-40798457ce79 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:40:24 np0005465604 nova_compute[260603]: 2025-10-02 08:40:24.513 2 DEBUG nova.compute.manager [req-4a1a6f87-161e-4f8c-b862-432e7e303b6e req-ec3443bb-1e75-4350-8599-bf1ca62c0a72 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Received event network-vif-deleted-4340f7c5-2f3a-4608-b77c-40798457ce79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:40:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 41 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.5 KiB/s wr, 120 op/s
Oct  2 04:40:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:25.720 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8:0:1:f816:3eff:feab:240e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=109fe25f-02fc-4377-b6fa-14579cac74eb, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=130c7afc-9b27-4823-aeed-223452e18c05) old=Port_Binding(mac=['fa:16:3e:ab:24:0e 2001:db8::f816:3eff:feab:240e'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feab:240e/64', 'neutron:device_id': 'ovnmeta-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08388482-b0ce-472a-bb1e-16d4250fa7a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '809562557a9d41ea96728fbb81f8a3b8', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:40:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:25.722 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 130c7afc-9b27-4823-aeed-223452e18c05 in datapath 08388482-b0ce-472a-bb1e-16d4250fa7a0 updated#033[00m
Oct  2 04:40:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:25.724 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08388482-b0ce-472a-bb1e-16d4250fa7a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:40:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:25.725 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22566f21-147e-4c81-bb1d-9964db4f632f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:40:26 np0005465604 nova_compute[260603]: 2025-10-02 08:40:26.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:40:26 np0005465604 nova_compute[260603]: 2025-10-02 08:40:26.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:40:26 np0005465604 nova_compute[260603]: 2025-10-02 08:40:26.576 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:40:26 np0005465604 nova_compute[260603]: 2025-10-02 08:40:26.577 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:40:26 np0005465604 nova_compute[260603]: 2025-10-02 08:40:26.578 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:40:26 np0005465604 nova_compute[260603]: 2025-10-02 08:40:26.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.5 KiB/s wr, 120 op/s
Oct  2 04:40:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:27 np0005465604 nova_compute[260603]: 2025-10-02 08:40:27.901 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394412.90098, fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:40:27 np0005465604 nova_compute[260603]: 2025-10-02 08:40:27.902 2 INFO nova.compute.manager [-] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:40:27 np0005465604 nova_compute[260603]: 2025-10-02 08:40:27.931 2 DEBUG nova.compute.manager [None req-c28a6ac7-78b0-4b7f-a835-7491f5c63962 - - - - - -] [instance: fbbbbd58-51b7-4fd7-8784-37e2a58d1f9b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:40:27
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.log']
Oct  2 04:40:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:40:28 np0005465604 nova_compute[260603]: 2025-10-02 08:40:28.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:28 np0005465604 nova_compute[260603]: 2025-10-02 08:40:28.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:28 np0005465604 nova_compute[260603]: 2025-10-02 08:40:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:40:28 np0005465604 nova_compute[260603]: 2025-10-02 08:40:28.538 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:28 np0005465604 nova_compute[260603]: 2025-10-02 08:40:28.539 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:28 np0005465604 nova_compute[260603]: 2025-10-02 08:40:28.539 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:28 np0005465604 nova_compute[260603]: 2025-10-02 08:40:28.539 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:40:28 np0005465604 nova_compute[260603]: 2025-10-02 08:40:28.540 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:40:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947673933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.073 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:29 np0005465604 podman[358021]: 2025-10-02 08:40:29.215672127 +0000 UTC m=+0.095297175 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.6 KiB/s wr, 99 op/s
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.258 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.258 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3835MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.259 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.259 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:40:29 np0005465604 podman[358040]: 2025-10-02 08:40:29.307705983 +0000 UTC m=+0.084570693 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.340 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.340 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.366 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:40:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313699046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.785 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.794 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.816 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.849 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:40:29 np0005465604 nova_compute[260603]: 2025-10-02 08:40:29.849 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:40:30 np0005465604 nova_compute[260603]: 2025-10-02 08:40:30.851 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:40:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Oct  2 04:40:31 np0005465604 nova_compute[260603]: 2025-10-02 08:40:31.668 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394416.6672437, c33f349a-5d06-4a81-90fd-fe32eebc4cb9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:40:31 np0005465604 nova_compute[260603]: 2025-10-02 08:40:31.669 2 INFO nova.compute.manager [-] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:40:31 np0005465604 nova_compute[260603]: 2025-10-02 08:40:31.700 2 DEBUG nova.compute.manager [None req-df935a1f-77a6-4be8-8cd3-3fde0fb1bdc8 - - - - - -] [instance: c33f349a-5d06-4a81-90fd-fe32eebc4cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:40:31 np0005465604 nova_compute[260603]: 2025-10-02 08:40:31.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:32 np0005465604 nova_compute[260603]: 2025-10-02 08:40:32.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:40:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Oct  2 04:40:34 np0005465604 nova_compute[260603]: 2025-10-02 08:40:34.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:40:34 np0005465604 nova_compute[260603]: 2025-10-02 08:40:34.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:40:34 np0005465604 nova_compute[260603]: 2025-10-02 08:40:34.756 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:40:34 np0005465604 nova_compute[260603]: 2025-10-02 08:40:34.757 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:40:34 np0005465604 nova_compute[260603]: 2025-10-02 08:40:34.779 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:40:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:34.824 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:40:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:34.824 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:40:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:34.825 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:40:34 np0005465604 nova_compute[260603]: 2025-10-02 08:40:34.912 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:40:34 np0005465604 nova_compute[260603]: 2025-10-02 08:40:34.913 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:40:34 np0005465604 nova_compute[260603]: 2025-10-02 08:40:34.921 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:40:34 np0005465604 nova_compute[260603]: 2025-10-02 08:40:34.922 2 INFO nova.compute.claims [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.134 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:40:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct  2 04:40:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:40:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3793537995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.621 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.626 2 DEBUG nova.compute.provider_tree [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.656 2 DEBUG nova.scheduler.client.report [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.684 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.684 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.748 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.767 2 INFO nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.782 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.914 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.916 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.917 2 INFO nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Creating image(s)
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.948 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:40:35 np0005465604 nova_compute[260603]: 2025-10-02 08:40:35.980 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.011 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.015 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.085 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.085 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.086 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.086 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.108 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.111 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.720 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394421.7187707, 4145f1a3-c327-49ee-9af1-1ace3afb70a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.721 2 INFO nova.compute.manager [-] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] VM Stopped (Lifecycle Event)
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.748 2 DEBUG nova.compute.manager [None req-49adead2-f86f-4611-9766-688f7f07cbc1 - - - - - -] [instance: 4145f1a3-c327-49ee-9af1-1ace3afb70a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.839 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.728s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:40:36 np0005465604 nova_compute[260603]: 2025-10-02 08:40:36.951 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] resizing rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:40:37 np0005465604 podman[358238]: 2025-10-02 08:40:37.038765264 +0000 UTC m=+0.097071869 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:40:37 np0005465604 podman[358243]: 2025-10-02 08:40:37.047775354 +0000 UTC m=+0.094280284 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.152 2 DEBUG nova.objects.instance [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lazy-loading 'migration_context' on Instance uuid 15ffbe97-d811-45ba-af92-c1b624b8c8e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.173 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.174 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Ensure instance console log exists: /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.175 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.175 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.176 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.179 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.185 2 WARNING nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.191 2 DEBUG nova.virt.libvirt.host [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.192 2 DEBUG nova.virt.libvirt.host [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.197 2 DEBUG nova.virt.libvirt.host [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.198 2 DEBUG nova.virt.libvirt.host [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.198 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.199 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.200 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.200 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.201 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.201 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.202 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.202 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.203 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.203 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.204 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.205 2 DEBUG nova.virt.hardware [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.210 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:40:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 57 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 393 KiB/s wr, 12 op/s
Oct  2 04:40:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:40:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355665263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.742 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.779 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:40:37 np0005465604 nova_compute[260603]: 2025-10-02 08:40:37.786 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:40:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021264258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:40:38 np0005465604 nova_compute[260603]: 2025-10-02 08:40:38.289 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:38 np0005465604 nova_compute[260603]: 2025-10-02 08:40:38.293 2 DEBUG nova.objects.instance [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 15ffbe97-d811-45ba-af92-c1b624b8c8e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:40:38 np0005465604 nova_compute[260603]: 2025-10-02 08:40:38.319 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  <uuid>15ffbe97-d811-45ba-af92-c1b624b8c8e6</uuid>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  <name>instance-00000062</name>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersAaction247Test-server-56882368</nova:name>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:40:37</nova:creationTime>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:        <nova:user uuid="c96d3d2fc2b7400ebc99a6eea45149bb">tempest-ServersAaction247Test-1292878896-project-member</nova:user>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:        <nova:project uuid="2b4277ce019f42ae884ccde2057ab4a0">tempest-ServersAaction247Test-1292878896</nova:project>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <entry name="serial">15ffbe97-d811-45ba-af92-c1b624b8c8e6</entry>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <entry name="uuid">15ffbe97-d811-45ba-af92-c1b624b8c8e6</entry>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/console.log" append="off"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:40:38 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:40:38 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:40:38 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:40:38 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:40:38 np0005465604 nova_compute[260603]: 2025-10-02 08:40:38.400 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:40:38 np0005465604 nova_compute[260603]: 2025-10-02 08:40:38.401 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:40:38 np0005465604 nova_compute[260603]: 2025-10-02 08:40:38.401 2 INFO nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Using config drive#033[00m
Oct  2 04:40:38 np0005465604 nova_compute[260603]: 2025-10-02 08:40:38.423 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.497344452041415e-05 of space, bias 1.0, pg target 0.022492033356124243 quantized to 32 (current 32)
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:40:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:40:39 np0005465604 nova_compute[260603]: 2025-10-02 08:40:39.073 2 INFO nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Creating config drive at /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config#033[00m
Oct  2 04:40:39 np0005465604 nova_compute[260603]: 2025-10-02 08:40:39.078 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4vovsovh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:39 np0005465604 nova_compute[260603]: 2025-10-02 08:40:39.226 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4vovsovh" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:39 np0005465604 nova_compute[260603]: 2025-10-02 08:40:39.250 2 DEBUG nova.storage.rbd_utils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] rbd image 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:40:39 np0005465604 nova_compute[260603]: 2025-10-02 08:40:39.253 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:40:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:40:39 np0005465604 nova_compute[260603]: 2025-10-02 08:40:39.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:39 np0005465604 nova_compute[260603]: 2025-10-02 08:40:39.484 2 DEBUG oslo_concurrency.processutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config 15ffbe97-d811-45ba-af92-c1b624b8c8e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:40:39 np0005465604 nova_compute[260603]: 2025-10-02 08:40:39.485 2 INFO nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Deleting local config drive /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6/disk.config because it was imported into RBD.#033[00m
Oct  2 04:40:39 np0005465604 systemd-machined[214636]: New machine qemu-124-instance-00000062.
Oct  2 04:40:39 np0005465604 systemd[1]: Started Virtual Machine qemu-124-instance-00000062.
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.540 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394440.5400782, 15ffbe97-d811-45ba-af92-c1b624b8c8e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.541 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.544 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.544 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.548 2 INFO nova.virt.libvirt.driver [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance spawned successfully.#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.548 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.571 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.578 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.581 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.582 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.582 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.582 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.583 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.583 2 DEBUG nova.virt.libvirt.driver [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.603 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.603 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394440.5413153, 15ffbe97-d811-45ba-af92-c1b624b8c8e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.604 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] VM Started (Lifecycle Event)#033[00m
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.629 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.632 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.637 2 INFO nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Took 4.72 seconds to spawn the instance on the hypervisor.
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.637 2 DEBUG nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.657 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.709 2 INFO nova.compute.manager [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Took 5.83 seconds to build instance.
Oct  2 04:40:40 np0005465604 nova_compute[260603]: 2025-10-02 08:40:40.729 2 DEBUG oslo_concurrency.lockutils [None req-8ffb8d4d-aaca-4be1-a119-cf92d27f7c17 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:40:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:40:41 np0005465604 nova_compute[260603]: 2025-10-02 08:40:41.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:40:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:42 np0005465604 nova_compute[260603]: 2025-10-02 08:40:42.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:40:42 np0005465604 nova_compute[260603]: 2025-10-02 08:40:42.874 2 DEBUG nova.compute.manager [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:40:42 np0005465604 nova_compute[260603]: 2025-10-02 08:40:42.909 2 INFO nova.compute.manager [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] instance snapshotting
Oct  2 04:40:42 np0005465604 nova_compute[260603]: 2025-10-02 08:40:42.910 2 DEBUG nova.objects.instance [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lazy-loading 'flavor' on Instance uuid 15ffbe97-d811-45ba-af92-c1b624b8c8e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.081 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.081 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.082 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.082 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.082 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.083 2 INFO nova.compute.manager [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Terminating instance
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.085 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "refresh_cache-15ffbe97-d811-45ba-af92-c1b624b8c8e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.085 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquired lock "refresh_cache-15ffbe97-d811-45ba-af92-c1b624b8c8e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.085 2 DEBUG nova.network.neutron [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:40:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.349 2 INFO nova.virt.libvirt.driver [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Beginning live snapshot process
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.354 2 DEBUG nova.network.neutron [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:40:43 np0005465604 nova_compute[260603]: 2025-10-02 08:40:43.408 2 DEBUG nova.compute.manager [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.097 2 DEBUG nova.network.neutron [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.123 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Releasing lock "refresh_cache-15ffbe97-d811-45ba-af92-c1b624b8c8e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.124 2 DEBUG nova.compute.manager [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:40:44 np0005465604 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct  2 04:40:44 np0005465604 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000062.scope: Consumed 4.682s CPU time.
Oct  2 04:40:44 np0005465604 systemd-machined[214636]: Machine qemu-124-instance-00000062 terminated.
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.351 2 INFO nova.virt.libvirt.driver [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance destroyed successfully.
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.351 2 DEBUG nova.objects.instance [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lazy-loading 'resources' on Instance uuid 15ffbe97-d811-45ba-af92-c1b624b8c8e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.841 2 DEBUG nova.compute.manager [None req-d7247085-b832-42d7-b771-72e45ae47254 c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.919 2 INFO nova.virt.libvirt.driver [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Deleting instance files /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6_del
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.920 2 INFO nova.virt.libvirt.driver [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Deletion of /var/lib/nova/instances/15ffbe97-d811-45ba-af92-c1b624b8c8e6_del complete
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.967 2 INFO nova.compute.manager [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.967 2 DEBUG oslo.service.loopingcall [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.968 2 DEBUG nova.compute.manager [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:40:44 np0005465604 nova_compute[260603]: 2025-10-02 08:40:44.968 2 DEBUG nova.network.neutron [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:40:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 88 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 04:40:45 np0005465604 nova_compute[260603]: 2025-10-02 08:40:45.296 2 DEBUG nova.network.neutron [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:40:45 np0005465604 nova_compute[260603]: 2025-10-02 08:40:45.313 2 DEBUG nova.network.neutron [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:40:45 np0005465604 nova_compute[260603]: 2025-10-02 08:40:45.329 2 INFO nova.compute.manager [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Took 0.36 seconds to deallocate network for instance.
Oct  2 04:40:45 np0005465604 nova_compute[260603]: 2025-10-02 08:40:45.405 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:40:45 np0005465604 nova_compute[260603]: 2025-10-02 08:40:45.405 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:40:45 np0005465604 nova_compute[260603]: 2025-10-02 08:40:45.460 2 DEBUG oslo_concurrency.processutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:40:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:40:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3884168485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:40:45 np0005465604 nova_compute[260603]: 2025-10-02 08:40:45.910 2 DEBUG oslo_concurrency.processutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:40:45 np0005465604 nova_compute[260603]: 2025-10-02 08:40:45.918 2 DEBUG nova.compute.provider_tree [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:40:45 np0005465604 nova_compute[260603]: 2025-10-02 08:40:45.943 2 DEBUG nova.scheduler.client.report [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:40:45 np0005465604 nova_compute[260603]: 2025-10-02 08:40:45.977 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:40:46 np0005465604 nova_compute[260603]: 2025-10-02 08:40:46.013 2 INFO nova.scheduler.client.report [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Deleted allocations for instance 15ffbe97-d811-45ba-af92-c1b624b8c8e6
Oct  2 04:40:46 np0005465604 nova_compute[260603]: 2025-10-02 08:40:46.120 2 DEBUG oslo_concurrency.lockutils [None req-58c48dfe-c093-428b-92ac-b7f6e826537c c96d3d2fc2b7400ebc99a6eea45149bb 2b4277ce019f42ae884ccde2057ab4a0 - - default default] Lock "15ffbe97-d811-45ba-af92-c1b624b8c8e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:40:46 np0005465604 nova_compute[260603]: 2025-10-02 08:40:46.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:40:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 72 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Oct  2 04:40:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:49 np0005465604 nova_compute[260603]: 2025-10-02 08:40:49.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:40:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 114 op/s
Oct  2 04:40:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:50.142 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:40:50 np0005465604 nova_compute[260603]: 2025-10-02 08:40:50.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:40:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:50.145 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 04:40:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct  2 04:40:51 np0005465604 nova_compute[260603]: 2025-10-02 08:40:51.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:40:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:52 np0005465604 podman[358713]: 2025-10-02 08:40:52.648383519 +0000 UTC m=+0.064320793 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 04:40:52 np0005465604 podman[358713]: 2025-10-02 08:40:52.759601353 +0000 UTC m=+0.175538637 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 04:40:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct  2 04:40:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:40:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:40:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:40:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:40:54 np0005465604 nova_compute[260603]: 2025-10-02 08:40:54.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:40:54 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ee90fa6a-0be5-4389-a477-789a3fcb8660 does not exist
Oct  2 04:40:54 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 906d3843-7a0c-438f-a24d-557767790f35 does not exist
Oct  2 04:40:54 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 938da2f1-00d2-4e39-944b-0e82f773ac4e does not exist
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:40:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:40:55 np0005465604 podman[359141]: 2025-10-02 08:40:55.144131551 +0000 UTC m=+0.060908478 container create 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  2 04:40:55 np0005465604 systemd[1]: Started libpod-conmon-58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6.scope.
Oct  2 04:40:55 np0005465604 podman[359141]: 2025-10-02 08:40:55.113034069 +0000 UTC m=+0.029811066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:40:55 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:40:55 np0005465604 podman[359141]: 2025-10-02 08:40:55.266265343 +0000 UTC m=+0.183042340 container init 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:40:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct  2 04:40:55 np0005465604 podman[359141]: 2025-10-02 08:40:55.277979305 +0000 UTC m=+0.194756232 container start 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct  2 04:40:55 np0005465604 podman[359141]: 2025-10-02 08:40:55.282462454 +0000 UTC m=+0.199239381 container attach 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct  2 04:40:55 np0005465604 peaceful_kirch[359157]: 167 167
Oct  2 04:40:55 np0005465604 systemd[1]: libpod-58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6.scope: Deactivated successfully.
Oct  2 04:40:55 np0005465604 podman[359141]: 2025-10-02 08:40:55.286231951 +0000 UTC m=+0.203008878 container died 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:40:55 np0005465604 systemd[1]: var-lib-containers-storage-overlay-aa1f4d226d0de6b6debc025e8f3a0193381b68a268e108cb5f1b64e8e8751c76-merged.mount: Deactivated successfully.
Oct  2 04:40:55 np0005465604 podman[359141]: 2025-10-02 08:40:55.338888461 +0000 UTC m=+0.255665388 container remove 58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:40:55 np0005465604 systemd[1]: libpod-conmon-58bb8f7e229a60de287f13a36d29b378ff7cfd499ab171fac667e1c944e293b6.scope: Deactivated successfully.
Oct  2 04:40:55 np0005465604 podman[359180]: 2025-10-02 08:40:55.597218251 +0000 UTC m=+0.069584185 container create fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 04:40:55 np0005465604 systemd[1]: Started libpod-conmon-fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a.scope.
Oct  2 04:40:55 np0005465604 podman[359180]: 2025-10-02 08:40:55.569148212 +0000 UTC m=+0.041514176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:40:55 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:40:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:40:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:40:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:40:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:40:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:40:55 np0005465604 podman[359180]: 2025-10-02 08:40:55.707344691 +0000 UTC m=+0.179710625 container init fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 04:40:55 np0005465604 podman[359180]: 2025-10-02 08:40:55.724206373 +0000 UTC m=+0.196572277 container start fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:40:55 np0005465604 podman[359180]: 2025-10-02 08:40:55.727637379 +0000 UTC m=+0.200003313 container attach fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 04:40:56 np0005465604 nova_compute[260603]: 2025-10-02 08:40:56.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:56 np0005465604 gracious_cannon[359196]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:40:56 np0005465604 gracious_cannon[359196]: --> relative data size: 1.0
Oct  2 04:40:56 np0005465604 gracious_cannon[359196]: --> All data devices are unavailable
Oct  2 04:40:56 np0005465604 systemd[1]: libpod-fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a.scope: Deactivated successfully.
Oct  2 04:40:56 np0005465604 systemd[1]: libpod-fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a.scope: Consumed 1.042s CPU time.
Oct  2 04:40:56 np0005465604 conmon[359196]: conmon fe00326d0360809addeb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a.scope/container/memory.events
Oct  2 04:40:56 np0005465604 podman[359180]: 2025-10-02 08:40:56.812253465 +0000 UTC m=+1.284619359 container died fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 04:40:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5bf5c093e05da884bf01dda82e2729a366d9c465c09a88f310b4b9805aa965ad-merged.mount: Deactivated successfully.
Oct  2 04:40:56 np0005465604 podman[359180]: 2025-10-02 08:40:56.89412277 +0000 UTC m=+1.366488694 container remove fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 04:40:56 np0005465604 systemd[1]: libpod-conmon-fe00326d0360809addebd4cb866eda6afbe22f1e16acdd525f29418623110e4a.scope: Deactivated successfully.
Oct  2 04:40:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:40:57.147 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:40:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct  2 04:40:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:40:57 np0005465604 podman[359380]: 2025-10-02 08:40:57.777558036 +0000 UTC m=+0.085100576 container create e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 04:40:57 np0005465604 systemd[1]: Started libpod-conmon-e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9.scope.
Oct  2 04:40:57 np0005465604 podman[359380]: 2025-10-02 08:40:57.743978406 +0000 UTC m=+0.051520996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:40:57 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:40:57 np0005465604 podman[359380]: 2025-10-02 08:40:57.889315967 +0000 UTC m=+0.196858547 container init e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 04:40:57 np0005465604 podman[359380]: 2025-10-02 08:40:57.902545767 +0000 UTC m=+0.210088307 container start e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:40:57 np0005465604 podman[359380]: 2025-10-02 08:40:57.906853279 +0000 UTC m=+0.214395819 container attach e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 04:40:57 np0005465604 pedantic_colden[359396]: 167 167
Oct  2 04:40:57 np0005465604 systemd[1]: libpod-e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9.scope: Deactivated successfully.
Oct  2 04:40:57 np0005465604 podman[359380]: 2025-10-02 08:40:57.913088963 +0000 UTC m=+0.220631503 container died e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 04:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:40:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a50bf239757309a85d5a686e490d417cda79c4b018e27d5d379482852cbb70fc-merged.mount: Deactivated successfully.
Oct  2 04:40:57 np0005465604 podman[359380]: 2025-10-02 08:40:57.970615294 +0000 UTC m=+0.278157834 container remove e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 04:40:57 np0005465604 systemd[1]: libpod-conmon-e3cc050544ef4d56a6c7b839c44576657d89305bb003c15b29a4c7207d7c05a9.scope: Deactivated successfully.
Oct  2 04:40:58 np0005465604 podman[359420]: 2025-10-02 08:40:58.228423977 +0000 UTC m=+0.075268662 container create 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:40:58 np0005465604 systemd[1]: Started libpod-conmon-8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067.scope.
Oct  2 04:40:58 np0005465604 podman[359420]: 2025-10-02 08:40:58.195098965 +0000 UTC m=+0.041943690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:40:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:40:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cfa5accb7feba2ab0f315daeab3ad9836f792eb22616e81b749b051a966e75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:40:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cfa5accb7feba2ab0f315daeab3ad9836f792eb22616e81b749b051a966e75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:40:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cfa5accb7feba2ab0f315daeab3ad9836f792eb22616e81b749b051a966e75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:40:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cfa5accb7feba2ab0f315daeab3ad9836f792eb22616e81b749b051a966e75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:40:58 np0005465604 podman[359420]: 2025-10-02 08:40:58.355225533 +0000 UTC m=+0.202070228 container init 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 04:40:58 np0005465604 podman[359420]: 2025-10-02 08:40:58.366996428 +0000 UTC m=+0.213841083 container start 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:40:58 np0005465604 podman[359420]: 2025-10-02 08:40:58.370817557 +0000 UTC m=+0.217662302 container attach 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]: {
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:    "0": [
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:        {
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "devices": [
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "/dev/loop3"
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            ],
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_name": "ceph_lv0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_size": "21470642176",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "name": "ceph_lv0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "tags": {
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.cluster_name": "ceph",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.crush_device_class": "",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.encrypted": "0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.osd_id": "0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.type": "block",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.vdo": "0"
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            },
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "type": "block",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "vg_name": "ceph_vg0"
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:        }
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:    ],
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:    "1": [
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:        {
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "devices": [
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "/dev/loop4"
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            ],
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_name": "ceph_lv1",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_size": "21470642176",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "name": "ceph_lv1",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "tags": {
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.cluster_name": "ceph",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.crush_device_class": "",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.encrypted": "0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.osd_id": "1",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.type": "block",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.vdo": "0"
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            },
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "type": "block",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "vg_name": "ceph_vg1"
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:        }
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:    ],
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:    "2": [
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:        {
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "devices": [
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "/dev/loop5"
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            ],
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_name": "ceph_lv2",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_size": "21470642176",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "name": "ceph_lv2",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "tags": {
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.cluster_name": "ceph",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.crush_device_class": "",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.encrypted": "0",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.osd_id": "2",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.type": "block",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:                "ceph.vdo": "0"
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            },
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "type": "block",
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:            "vg_name": "ceph_vg2"
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:        }
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]:    ]
Oct  2 04:40:59 np0005465604 flamboyant_faraday[359436]: }
Oct  2 04:40:59 np0005465604 systemd[1]: libpod-8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067.scope: Deactivated successfully.
Oct  2 04:40:59 np0005465604 podman[359420]: 2025-10-02 08:40:59.166815535 +0000 UTC m=+1.013660210 container died 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:40:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay-25cfa5accb7feba2ab0f315daeab3ad9836f792eb22616e81b749b051a966e75-merged.mount: Deactivated successfully.
Oct  2 04:40:59 np0005465604 podman[359420]: 2025-10-02 08:40:59.251953751 +0000 UTC m=+1.098798436 container remove 8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 04:40:59 np0005465604 nova_compute[260603]: 2025-10-02 08:40:59.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:40:59 np0005465604 systemd[1]: libpod-conmon-8a0cbef38a0c5cc8b1e54226d2eaa09939c3ff054f6e081aa6686f3306284067.scope: Deactivated successfully.
Oct  2 04:40:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 13 op/s
Oct  2 04:40:59 np0005465604 nova_compute[260603]: 2025-10-02 08:40:59.348 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394444.34739, 15ffbe97-d811-45ba-af92-c1b624b8c8e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:40:59 np0005465604 nova_compute[260603]: 2025-10-02 08:40:59.349 2 INFO nova.compute.manager [-] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:40:59 np0005465604 nova_compute[260603]: 2025-10-02 08:40:59.374 2 DEBUG nova.compute.manager [None req-023ebbee-0528-448c-a9ab-c68cae939e08 - - - - - -] [instance: 15ffbe97-d811-45ba-af92-c1b624b8c8e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:40:59 np0005465604 podman[359455]: 2025-10-02 08:40:59.415793834 +0000 UTC m=+0.111220665 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:40:59 np0005465604 podman[359494]: 2025-10-02 08:40:59.603670612 +0000 UTC m=+0.169009925 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Oct  2 04:41:00 np0005465604 podman[359637]: 2025-10-02 08:41:00.182788215 +0000 UTC m=+0.059539945 container create 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 04:41:00 np0005465604 systemd[1]: Started libpod-conmon-29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e.scope.
Oct  2 04:41:00 np0005465604 podman[359637]: 2025-10-02 08:41:00.161903648 +0000 UTC m=+0.038655388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:41:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:41:00 np0005465604 podman[359637]: 2025-10-02 08:41:00.28789797 +0000 UTC m=+0.164649660 container init 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 04:41:00 np0005465604 podman[359637]: 2025-10-02 08:41:00.298765766 +0000 UTC m=+0.175517456 container start 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 04:41:00 np0005465604 podman[359637]: 2025-10-02 08:41:00.302173732 +0000 UTC m=+0.178925412 container attach 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 04:41:00 np0005465604 reverent_heisenberg[359653]: 167 167
Oct  2 04:41:00 np0005465604 systemd[1]: libpod-29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e.scope: Deactivated successfully.
Oct  2 04:41:00 np0005465604 conmon[359653]: conmon 29708dd7f4f6bed4d0be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e.scope/container/memory.events
Oct  2 04:41:00 np0005465604 podman[359637]: 2025-10-02 08:41:00.309575821 +0000 UTC m=+0.186327521 container died 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 04:41:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f9e5222714f508afb08053d8257e06e8031a3d8ddab447be872b544d6adf5065-merged.mount: Deactivated successfully.
Oct  2 04:41:00 np0005465604 podman[359637]: 2025-10-02 08:41:00.360664622 +0000 UTC m=+0.237416352 container remove 29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 04:41:00 np0005465604 nova_compute[260603]: 2025-10-02 08:41:00.369 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:00 np0005465604 nova_compute[260603]: 2025-10-02 08:41:00.372 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:00 np0005465604 systemd[1]: libpod-conmon-29708dd7f4f6bed4d0be1991effa7d726fcf894efc7a7b9c41d97e28bc49364e.scope: Deactivated successfully.
Oct  2 04:41:00 np0005465604 nova_compute[260603]: 2025-10-02 08:41:00.408 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:41:00 np0005465604 nova_compute[260603]: 2025-10-02 08:41:00.488 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:00 np0005465604 nova_compute[260603]: 2025-10-02 08:41:00.488 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:00 np0005465604 nova_compute[260603]: 2025-10-02 08:41:00.497 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:41:00 np0005465604 nova_compute[260603]: 2025-10-02 08:41:00.498 2 INFO nova.compute.claims [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:41:00 np0005465604 nova_compute[260603]: 2025-10-02 08:41:00.603 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:00 np0005465604 podman[359676]: 2025-10-02 08:41:00.633910834 +0000 UTC m=+0.081226566 container create e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  2 04:41:00 np0005465604 systemd[1]: Started libpod-conmon-e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b.scope.
Oct  2 04:41:00 np0005465604 podman[359676]: 2025-10-02 08:41:00.605962048 +0000 UTC m=+0.053277870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:41:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:41:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16cdeb3bb32c817f30e00b04e8f6a0ef576a548eedd93b0c06b95f7729021b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:41:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16cdeb3bb32c817f30e00b04e8f6a0ef576a548eedd93b0c06b95f7729021b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:41:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16cdeb3bb32c817f30e00b04e8f6a0ef576a548eedd93b0c06b95f7729021b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:41:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b16cdeb3bb32c817f30e00b04e8f6a0ef576a548eedd93b0c06b95f7729021b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:41:00 np0005465604 podman[359676]: 2025-10-02 08:41:00.750314689 +0000 UTC m=+0.197630441 container init e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 04:41:00 np0005465604 podman[359676]: 2025-10-02 08:41:00.757147739 +0000 UTC m=+0.204463461 container start e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:41:00 np0005465604 podman[359676]: 2025-10-02 08:41:00.76167835 +0000 UTC m=+0.208994112 container attach e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 04:41:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:41:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111868253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.128 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.134 2 DEBUG nova.compute.provider_tree [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.155 2 DEBUG nova.scheduler.client.report [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.177 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.177 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.238 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.239 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.260 2 INFO nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:41:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 669 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.277 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.387 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.389 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.389 2 INFO nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Creating image(s)#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.411 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.434 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.455 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.458 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.534 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.535 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.535 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.535 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.555 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.558 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.677 2 DEBUG nova.policy [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:41:01 np0005465604 stoic_wing[359692]: {
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "osd_id": 2,
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "type": "bluestore"
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:    },
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "osd_id": 1,
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "type": "bluestore"
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:    },
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "osd_id": 0,
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:        "type": "bluestore"
Oct  2 04:41:01 np0005465604 stoic_wing[359692]:    }
Oct  2 04:41:01 np0005465604 stoic_wing[359692]: }
Oct  2 04:41:01 np0005465604 systemd[1]: libpod-e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b.scope: Deactivated successfully.
Oct  2 04:41:01 np0005465604 systemd[1]: libpod-e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b.scope: Consumed 1.024s CPU time.
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:01 np0005465604 podman[359840]: 2025-10-02 08:41:01.837118221 +0000 UTC m=+0.029064311 container died e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 04:41:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8b16cdeb3bb32c817f30e00b04e8f6a0ef576a548eedd93b0c06b95f7729021b-merged.mount: Deactivated successfully.
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.905 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:01 np0005465604 podman[359840]: 2025-10-02 08:41:01.923633471 +0000 UTC m=+0.115579491 container remove e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wing, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 04:41:01 np0005465604 systemd[1]: libpod-conmon-e52fcb44b2776a4c91ac66420bcc18c257fef154a695fdc1df778283450eff4b.scope: Deactivated successfully.
Oct  2 04:41:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:41:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:41:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:41:01 np0005465604 nova_compute[260603]: 2025-10-02 08:41:01.992 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:41:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:41:01 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5ecbf165-9a7b-4810-990f-0ddd13bcc634 does not exist
Oct  2 04:41:01 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 7c751c96-4cb4-470a-9f9a-cd8b9f56ecfb does not exist
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.032 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.032 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.058 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.090 2 DEBUG nova.objects.instance [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid ddf9efd0-0ac6-4857-96ea-3f1d0e18590d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.165 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.166 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Ensure instance console log exists: /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.166 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.166 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.167 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.179 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.180 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.185 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.185 2 INFO nova.compute.claims [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.297 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:41:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19851933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.782 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.793 2 DEBUG nova.compute.provider_tree [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.804 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Successfully created port: f333ef16-60e3-449a-a6e7-4e7435c4ee30 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.816 2 DEBUG nova.scheduler.client.report [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.848 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.849 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.928 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.929 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.950 2 INFO nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:41:02 np0005465604 nova_compute[260603]: 2025-10-02 08:41:02.978 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:41:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:41:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.095 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.098 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.098 2 INFO nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Creating image(s)#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.134 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.163 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.191 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.195 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.270 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 88 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.271 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.272 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.272 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.298 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.301 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.541 2 DEBUG nova.policy [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '923a2daca06b4bf98c21b2604971789f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a0738f9f0c2244bb8bc7a350d5ee5932', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.602 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.654 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] resizing rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.738 2 DEBUG nova.objects.instance [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lazy-loading 'migration_context' on Instance uuid fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.754 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.755 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Ensure instance console log exists: /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.755 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.755 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:03 np0005465604 nova_compute[260603]: 2025-10-02 08:41:03.756 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:04 np0005465604 nova_compute[260603]: 2025-10-02 08:41:04.011 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Successfully updated port: f333ef16-60e3-449a-a6e7-4e7435c4ee30 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:41:04 np0005465604 nova_compute[260603]: 2025-10-02 08:41:04.040 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:41:04 np0005465604 nova_compute[260603]: 2025-10-02 08:41:04.040 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:41:04 np0005465604 nova_compute[260603]: 2025-10-02 08:41:04.040 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:41:04 np0005465604 nova_compute[260603]: 2025-10-02 08:41:04.175 2 DEBUG nova.compute.manager [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:04 np0005465604 nova_compute[260603]: 2025-10-02 08:41:04.176 2 DEBUG nova.compute.manager [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing instance network info cache due to event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:41:04 np0005465604 nova_compute[260603]: 2025-10-02 08:41:04.177 2 DEBUG oslo_concurrency.lockutils [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:41:04 np0005465604 nova_compute[260603]: 2025-10-02 08:41:04.243 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:41:04 np0005465604 nova_compute[260603]: 2025-10-02 08:41:04.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:04 np0005465604 nova_compute[260603]: 2025-10-02 08:41:04.451 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Successfully created port: c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:41:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 88 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.572 2 DEBUG nova.network.neutron [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.596 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.596 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Instance network_info: |[{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.597 2 DEBUG oslo_concurrency.lockutils [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.597 2 DEBUG nova.network.neutron [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.601 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Start _get_guest_xml network_info=[{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.605 2 WARNING nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.611 2 DEBUG nova.virt.libvirt.host [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.612 2 DEBUG nova.virt.libvirt.host [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.616 2 DEBUG nova.virt.libvirt.host [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.617 2 DEBUG nova.virt.libvirt.host [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.617 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.618 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.618 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.619 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.619 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.619 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.619 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.620 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.620 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.620 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.621 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.621 2 DEBUG nova.virt.hardware [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.624 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.753 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Successfully updated port: c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.772 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.772 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquired lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.772 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.937 2 DEBUG nova.compute.manager [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-changed-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.938 2 DEBUG nova.compute.manager [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Refreshing instance network info cache due to event network-changed-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:41:05 np0005465604 nova_compute[260603]: 2025-10-02 08:41:05.938 2 DEBUG oslo_concurrency.lockutils [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:41:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:41:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4006417553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.070 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.091 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.094 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.134 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:41:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:41:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258135181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.538 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.539 2 DEBUG nova.virt.libvirt.vif [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:40:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=99,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEVRDNLCh7cGxmY8USPZ+Uw2QoectKaA792o7mYLTgKSdgVOdamESEYTMfoSKYLlGXQvF8R+2N9sWb5Rc56fBJj8i+BKngnG3c4da6Xe6b+B/+hyHhFsw+SA5clIFi8Ulg==',key_name='tempest-TestSecurityGroupsBasicOps-2045998300',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-r7zcjfhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:01Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=ddf9efd0-0ac6-4857-96ea-3f1d0e18590d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.540 2 DEBUG nova.network.os_vif_util [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.540 2 DEBUG nova.network.os_vif_util [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.541 2 DEBUG nova.objects.instance [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid ddf9efd0-0ac6-4857-96ea-3f1d0e18590d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.557 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  <uuid>ddf9efd0-0ac6-4857-96ea-3f1d0e18590d</uuid>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  <name>instance-00000063</name>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006</nova:name>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:41:05</nova:creationTime>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <nova:port uuid="f333ef16-60e3-449a-a6e7-4e7435c4ee30">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <entry name="serial">ddf9efd0-0ac6-4857-96ea-3f1d0e18590d</entry>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <entry name="uuid">ddf9efd0-0ac6-4857-96ea-3f1d0e18590d</entry>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:2c:fb:ed"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <target dev="tapf333ef16-60"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/console.log" append="off"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:41:06 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:41:06 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:41:06 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:41:06 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.558 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Preparing to wait for external event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.558 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.558 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.559 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.559 2 DEBUG nova.virt.libvirt.vif [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:40:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=99,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEVRDNLCh7cGxmY8USPZ+Uw2QoectKaA792o7mYLTgKSdgVOdamESEYTMfoSKYLlGXQvF8R+2N9sWb5Rc56fBJj8i+BKngnG3c4da6Xe6b+B/+hyHhFsw+SA5clIFi8Ulg==',key_name='tempest-TestSecurityGroupsBasicOps-2045998300',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-r7zcjfhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:01Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=ddf9efd0-0ac6-4857-96ea-3f1d0e18590d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.560 2 DEBUG nova.network.os_vif_util [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.560 2 DEBUG nova.network.os_vif_util [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.561 2 DEBUG os_vif [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf333ef16-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf333ef16-60, col_values=(('external_ids', {'iface-id': 'f333ef16-60e3-449a-a6e7-4e7435c4ee30', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:fb:ed', 'vm-uuid': 'ddf9efd0-0ac6-4857-96ea-3f1d0e18590d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:06 np0005465604 NetworkManager[45129]: <info>  [1759394466.5679] manager: (tapf333ef16-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.575 2 INFO os_vif [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60')#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.630 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.631 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.631 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:2c:fb:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.631 2 INFO nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Using config drive#033[00m
Oct  2 04:41:06 np0005465604 nova_compute[260603]: 2025-10-02 08:41:06.652 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 103 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.4 MiB/s wr, 29 op/s
Oct  2 04:41:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:08 np0005465604 podman[360249]: 2025-10-02 08:41:08.052141922 +0000 UTC m=+0.097512180 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:41:08 np0005465604 podman[360248]: 2025-10-02 08:41:08.060987746 +0000 UTC m=+0.106443567 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.149 2 INFO nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Creating config drive at /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.157 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7isdnid execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.312 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7isdnid" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.343 2 DEBUG nova.storage.rbd_utils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.348 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.455 2 DEBUG nova.network.neutron [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updated VIF entry in instance network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.456 2 DEBUG nova.network.neutron [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.485 2 DEBUG oslo_concurrency.lockutils [req-ae7a4d58-2e95-46f3-b689-75c0cbf3ae41 req-75811b25-cf91-4376-b689-f649649ff281 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.654 2 DEBUG oslo_concurrency.processutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.656 2 INFO nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Deleting local config drive /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d/disk.config because it was imported into RBD.#033[00m
Oct  2 04:41:08 np0005465604 kernel: tapf333ef16-60: entered promiscuous mode
Oct  2 04:41:08 np0005465604 NetworkManager[45129]: <info>  [1759394468.7281] manager: (tapf333ef16-60): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Oct  2 04:41:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:08Z|01017|binding|INFO|Claiming lport f333ef16-60e3-449a-a6e7-4e7435c4ee30 for this chassis.
Oct  2 04:41:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:08Z|01018|binding|INFO|f333ef16-60e3-449a-a6e7-4e7435c4ee30: Claiming fa:16:3e:2c:fb:ed 10.100.0.5
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.748 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:fb:ed 10.100.0.5'], port_security=['fa:16:3e:2c:fb:ed 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ddf9efd0-0ac6-4857-96ea-3f1d0e18590d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e1b840f-6fce-4ed8-898b-67d0829d6af4 8ef1bc1b-1bb2-413b-9f26-26e2f642b996', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2a6eb5-cf00-4e48-b8dd-b9d998ac802a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f333ef16-60e3-449a-a6e7-4e7435c4ee30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.750 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f333ef16-60e3-449a-a6e7-4e7435c4ee30 in datapath 64b91b42-84b6-4429-b137-6399bf4f6ccd bound to our chassis#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.751 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64b91b42-84b6-4429-b137-6399bf4f6ccd#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.763 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3706b93a-dbdd-483a-8c15-75a270d06149]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.764 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64b91b42-81 in ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:41:08 np0005465604 systemd-udevd[360339]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.766 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64b91b42-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.766 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e632d1d9-f903-4268-96e0-f1d501b93daf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.767 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6167f98d-85f8-4feb-b0bb-277259cf3272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 NetworkManager[45129]: <info>  [1759394468.7834] device (tapf333ef16-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:41:08 np0005465604 NetworkManager[45129]: <info>  [1759394468.7844] device (tapf333ef16-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:41:08 np0005465604 systemd-machined[214636]: New machine qemu-125-instance-00000063.
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.786 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5e628c30-a39d-4fe3-a703-fd272980a566]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.811 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18d2ebdb-846f-43be-a154-4de3556e3612]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 systemd[1]: Started Virtual Machine qemu-125-instance-00000063.
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:08Z|01019|binding|INFO|Setting lport f333ef16-60e3-449a-a6e7-4e7435c4ee30 ovn-installed in OVS
Oct  2 04:41:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:08Z|01020|binding|INFO|Setting lport f333ef16-60e3-449a-a6e7-4e7435c4ee30 up in Southbound
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.844 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a95923c-b245-41d0-a9f6-08453ba9019a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 NetworkManager[45129]: <info>  [1759394468.8515] manager: (tap64b91b42-80): new Veth device (/org/freedesktop/NetworkManager/Devices/398)
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.850 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[85855abf-bd5f-496b-8c2f-0463e4f2293d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.884 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8d235a-3db2-44a2-8093-f2d0fd7fad88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.887 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea50344-a627-4d1a-9c0d-9aca8a17d5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 NetworkManager[45129]: <info>  [1759394468.9114] device (tap64b91b42-80): carrier: link connected
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.911 2 DEBUG nova.network.neutron [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Updating instance_info_cache with network_info: [{"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.916 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[15765eaf-4965-4a2e-800f-af74451bc245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.930 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7252700b-9ffc-40db-a47f-819f0910d0af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64b91b42-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:92:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534751, 'reachable_time': 24391, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360372, 'error': None, 'target': 'ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.940 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Releasing lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.941 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Instance network_info: |[{"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.941 2 DEBUG oslo_concurrency.lockutils [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.941 2 DEBUG nova.network.neutron [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Refreshing network info cache for port c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.945 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Start _get_guest_xml network_info=[{"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.947 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[562064a6-9d41-4823-a849-d3e8e8734cd3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:92ef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534751, 'tstamp': 534751}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360373, 'error': None, 'target': 'ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.949 2 WARNING nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.963 2 DEBUG nova.virt.libvirt.host [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.964 2 DEBUG nova.virt.libvirt.host [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.965 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e24c45fe-99d8-4548-a45e-91b5aa6d3aef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64b91b42-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:92:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534751, 'reachable_time': 24391, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360374, 'error': None, 'target': 'ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.969 2 DEBUG nova.virt.libvirt.host [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.969 2 DEBUG nova.virt.libvirt.host [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.970 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.970 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.970 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.971 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.971 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.971 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.971 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.971 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.972 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.972 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.972 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.972 2 DEBUG nova.virt.hardware [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:41:08 np0005465604 nova_compute[260603]: 2025-10-02 08:41:08.975 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:08.994 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc6efb3-8796-49ac-ac33-3a03a1a13f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.046 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44d7a13b-3f50-4f04-8154-a588e8668ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.047 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64b91b42-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.048 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.048 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64b91b42-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:09 np0005465604 kernel: tap64b91b42-80: entered promiscuous mode
Oct  2 04:41:09 np0005465604 NetworkManager[45129]: <info>  [1759394469.0506] manager: (tap64b91b42-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Oct  2 04:41:09 np0005465604 nova_compute[260603]: 2025-10-02 08:41:09.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.052 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64b91b42-80, col_values=(('external_ids', {'iface-id': '6a4c46b3-20fe-4d13-90df-77828898d571'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:09Z|01021|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.070 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64b91b42-84b6-4429-b137-6399bf4f6ccd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64b91b42-84b6-4429-b137-6399bf4f6ccd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:41:09 np0005465604 nova_compute[260603]: 2025-10-02 08:41:09.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.071 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e718e6b6-8a70-472a-85f7-413e52559f8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.072 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-64b91b42-84b6-4429-b137-6399bf4f6ccd
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/64b91b42-84b6-4429-b137-6399bf4f6ccd.pid.haproxy
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 64b91b42-84b6-4429-b137-6399bf4f6ccd
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:41:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:09.076 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'env', 'PROCESS_TAG=haproxy-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64b91b42-84b6-4429-b137-6399bf4f6ccd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:41:09 np0005465604 nova_compute[260603]: 2025-10-02 08:41:09.105 2 DEBUG nova.compute.manager [req-8861142b-ae2d-4dfb-b267-3d7b4f605152 req-db617a5d-1ee0-430b-ab0b-63a28863c1ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:09 np0005465604 nova_compute[260603]: 2025-10-02 08:41:09.106 2 DEBUG oslo_concurrency.lockutils [req-8861142b-ae2d-4dfb-b267-3d7b4f605152 req-db617a5d-1ee0-430b-ab0b-63a28863c1ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:09 np0005465604 nova_compute[260603]: 2025-10-02 08:41:09.106 2 DEBUG oslo_concurrency.lockutils [req-8861142b-ae2d-4dfb-b267-3d7b4f605152 req-db617a5d-1ee0-430b-ab0b-63a28863c1ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:09 np0005465604 nova_compute[260603]: 2025-10-02 08:41:09.106 2 DEBUG oslo_concurrency.lockutils [req-8861142b-ae2d-4dfb-b267-3d7b4f605152 req-db617a5d-1ee0-430b-ab0b-63a28863c1ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:09 np0005465604 nova_compute[260603]: 2025-10-02 08:41:09.107 2 DEBUG nova.compute.manager [req-8861142b-ae2d-4dfb-b267-3d7b4f605152 req-db617a5d-1ee0-430b-ab0b-63a28863c1ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Processing event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:41:09 np0005465604 nova_compute[260603]: 2025-10-02 08:41:09.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct  2 04:41:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:41:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4097377714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:41:09 np0005465604 nova_compute[260603]: 2025-10-02 08:41:09.449 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:09 np0005465604 nova_compute[260603]: 2025-10-02 08:41:09.480 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:09 np0005465604 podman[360426]: 2025-10-02 08:41:09.48206726 +0000 UTC m=+0.055075477 container create 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 04:41:09 np0005465604 nova_compute[260603]: 2025-10-02 08:41:09.496 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:09 np0005465604 systemd[1]: Started libpod-conmon-439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a.scope.
Oct  2 04:41:09 np0005465604 podman[360426]: 2025-10-02 08:41:09.452941989 +0000 UTC m=+0.025950206 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:41:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:41:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528c61d1dc02de92e13fd9335f527a4374ee7687c385393aa257801e40808ba5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:41:09 np0005465604 podman[360426]: 2025-10-02 08:41:09.616884465 +0000 UTC m=+0.189892722 container init 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:41:09 np0005465604 podman[360426]: 2025-10-02 08:41:09.627417671 +0000 UTC m=+0.200425888 container start 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:41:09 np0005465604 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [NOTICE]   (360467) : New worker (360470) forked
Oct  2 04:41:09 np0005465604 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [NOTICE]   (360467) : Loading success.
Oct  2 04:41:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:41:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2712152692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.035 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.037 2 DEBUG nova.virt.libvirt.vif [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-285132923',display_name='tempest-ServerAddressesNegativeTestJSON-server-285132923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-285132923',id=100,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0738f9f0c2244bb8bc7a350d5ee5932',ramdisk_id='',reservation_id='r-pzphyj7i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1449456229',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1449456229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:03Z,user_data=None,user_id='923a2daca06b4bf98c21b2604971789f',uuid=fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.037 2 DEBUG nova.network.os_vif_util [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converting VIF {"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.038 2 DEBUG nova.network.os_vif_util [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.039 2 DEBUG nova.objects.instance [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lazy-loading 'pci_devices' on Instance uuid fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.056 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  <uuid>fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf</uuid>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  <name>instance-00000064</name>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerAddressesNegativeTestJSON-server-285132923</nova:name>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:41:08</nova:creationTime>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <nova:user uuid="923a2daca06b4bf98c21b2604971789f">tempest-ServerAddressesNegativeTestJSON-1449456229-project-member</nova:user>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <nova:project uuid="a0738f9f0c2244bb8bc7a350d5ee5932">tempest-ServerAddressesNegativeTestJSON-1449456229</nova:project>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <nova:port uuid="c1ea98b4-5e4c-459b-b4ec-2da19438e4e9">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <entry name="serial">fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf</entry>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <entry name="uuid">fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf</entry>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:c5:45:0b"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <target dev="tapc1ea98b4-5e"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/console.log" append="off"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:41:10 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:41:10 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:41:10 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:41:10 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.058 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Preparing to wait for external event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.059 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.059 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.060 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.061 2 DEBUG nova.virt.libvirt.vif [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-285132923',display_name='tempest-ServerAddressesNegativeTestJSON-server-285132923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-285132923',id=100,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a0738f9f0c2244bb8bc7a350d5ee5932',ramdisk_id='',reservation_id='r-pzphyj7i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1449456229',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1449456229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:03Z,user_data=None,user_id='923a2daca06b4bf98c21b2604971789f',uuid=fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.062 2 DEBUG nova.network.os_vif_util [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converting VIF {"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.065 2 DEBUG nova.network.os_vif_util [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.066 2 DEBUG os_vif [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.069 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1ea98b4-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.075 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1ea98b4-5e, col_values=(('external_ids', {'iface-id': 'c1ea98b4-5e4c-459b-b4ec-2da19438e4e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:45:0b', 'vm-uuid': 'fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:10 np0005465604 NetworkManager[45129]: <info>  [1759394470.0782] manager: (tapc1ea98b4-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.085 2 INFO os_vif [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e')#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.149 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.150 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.150 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] No VIF found with MAC fa:16:3e:c5:45:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.151 2 INFO nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Using config drive#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.173 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.310 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394470.3103132, ddf9efd0-0ac6-4857-96ea-3f1d0e18590d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.311 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] VM Started (Lifecycle Event)#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.313 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.317 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.321 2 INFO nova.virt.libvirt.driver [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Instance spawned successfully.#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.322 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.336 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.341 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.344 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.344 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.345 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.345 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.346 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.346 2 DEBUG nova.virt.libvirt.driver [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.370 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.370 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394470.3105607, ddf9efd0-0ac6-4857-96ea-3f1d0e18590d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.370 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.395 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.398 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394470.3165314, ddf9efd0-0ac6-4857-96ea-3f1d0e18590d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.398 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.412 2 INFO nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Took 9.02 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.412 2 DEBUG nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.420 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.422 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.447 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.473 2 INFO nova.compute.manager [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Took 10.02 seconds to build instance.#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.491 2 DEBUG oslo_concurrency.lockutils [None req-fde2a9f8-23d3-4c8d-b6ad-056db9b9f472 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.795 2 INFO nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Creating config drive at /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.803 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1rnpli6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:10 np0005465604 nova_compute[260603]: 2025-10-02 08:41:10.966 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1rnpli6" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.003 2 DEBUG nova.storage.rbd_utils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] rbd image fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.008 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.194 2 DEBUG oslo_concurrency.processutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.195 2 INFO nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Deleting local config drive /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf/disk.config because it was imported into RBD.#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.240 2 DEBUG nova.network.neutron [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Updated VIF entry in instance network info cache for port c1ea98b4-5e4c-459b-b4ec-2da19438e4e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.240 2 DEBUG nova.network.neutron [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Updating instance_info_cache with network_info: [{"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:11 np0005465604 kernel: tapc1ea98b4-5e: entered promiscuous mode
Oct  2 04:41:11 np0005465604 NetworkManager[45129]: <info>  [1759394471.2528] manager: (tapc1ea98b4-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Oct  2 04:41:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:11Z|01022|binding|INFO|Claiming lport c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 for this chassis.
Oct  2 04:41:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:11Z|01023|binding|INFO|c1ea98b4-5e4c-459b-b4ec-2da19438e4e9: Claiming fa:16:3e:c5:45:0b 10.100.0.8
Oct  2 04:41:11 np0005465604 systemd-udevd[360355]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.264 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:45:0b 10.100.0.8'], port_security=['fa:16:3e:c5:45:0b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0738f9f0c2244bb8bc7a350d5ee5932', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64c07fa8-bcda-4bfb-b1e2-06de2925c622', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84a99230-30ce-44c4-9392-f1b665c403c0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.267 2 DEBUG oslo_concurrency.lockutils [req-8ff4cdf3-1276-4ee7-bc5f-f847c3bd6912 req-d6cf53bc-d170-4b8d-b221-bcdc4f6e6c62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.266 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 in datapath 28754b87-0f3d-4084-a6ca-7b48ab8fade9 bound to our chassis#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.267 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28754b87-0f3d-4084-a6ca-7b48ab8fade9#033[00m
Oct  2 04:41:11 np0005465604 NetworkManager[45129]: <info>  [1759394471.2713] device (tapc1ea98b4-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:41:11 np0005465604 NetworkManager[45129]: <info>  [1759394471.2726] device (tapc1ea98b4-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:41:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.290 2 DEBUG nova.compute.manager [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.291 2 DEBUG oslo_concurrency.lockutils [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.291 2 DEBUG oslo_concurrency.lockutils [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.291 2 DEBUG oslo_concurrency.lockutils [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.291 2 DEBUG nova.compute.manager [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] No waiting events found dispatching network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.291 2 WARNING nova.compute.manager [req-53b80dd9-7765-4806-8ae2-48b458131873 req-4160b509-d48d-4747-9879-d5257305d49a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received unexpected event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:41:11 np0005465604 systemd-machined[214636]: New machine qemu-126-instance-00000064.
Oct  2 04:41:11 np0005465604 systemd[1]: Started Virtual Machine qemu-126-instance-00000064.
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:11Z|01024|binding|INFO|Setting lport c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 ovn-installed in OVS
Oct  2 04:41:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:11Z|01025|binding|INFO|Setting lport c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 up in Southbound
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.364 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[80d88df4-5574-4508-83f2-7b2c3385089d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.365 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28754b87-01 in ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.367 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28754b87-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.367 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[382f1623-7545-4a1e-a8e3-2409953e8fad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.368 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[24cced58-7dc4-440e-b66c-2e844b584891]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.386 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[6c47b3bd-2e36-47f3-956f-b110f4703311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.400 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a003fdb4-6522-4d1e-9e6e-325824020c2e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.436 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[86d12928-f1d4-4766-8f19-bfd6c8b94577]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 NetworkManager[45129]: <info>  [1759394471.4443] manager: (tap28754b87-00): new Veth device (/org/freedesktop/NetworkManager/Devices/402)
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.444 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9c5882-cfe8-4a74-9cc6-3d09295d4397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.495 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[61d6287d-c132-4ddf-b5ad-c34ba36bb40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.499 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6f55c39b-7013-4180-8e70-fbe1d9640550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 NetworkManager[45129]: <info>  [1759394471.5421] device (tap28754b87-00): carrier: link connected
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.553 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c033d1-c329-4711-b1df-104f875e3f23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.577 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c35f78d6-3979-4eaf-842e-4492793d0624]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28754b87-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:7d:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535014, 'reachable_time': 20278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360666, 'error': None, 'target': 'ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.605 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[877d73e5-bc2f-44b0-89bb-3a3faccdfdd3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:7d63'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535014, 'tstamp': 535014}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360671, 'error': None, 'target': 'ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.627 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[188894bd-4cf4-4de4-8960-aec1f83b237b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28754b87-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:7d:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 293], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535014, 'reachable_time': 20278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360676, 'error': None, 'target': 'ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.666 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[732f1cd9-0a44-4bd8-a31e-b7c8893f26e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.736 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[632f5316-d126-4280-8a3d-33c1a958b9f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.737 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28754b87-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.738 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.738 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28754b87-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:11 np0005465604 NetworkManager[45129]: <info>  [1759394471.7413] manager: (tap28754b87-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Oct  2 04:41:11 np0005465604 kernel: tap28754b87-00: entered promiscuous mode
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.744 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28754b87-00, col_values=(('external_ids', {'iface-id': '7619cc0b-6bdd-4351-b97b-66bbd3d3dcca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:11Z|01026|binding|INFO|Releasing lport 7619cc0b-6bdd-4351-b97b-66bbd3d3dcca from this chassis (sb_readonly=0)
Oct  2 04:41:11 np0005465604 nova_compute[260603]: 2025-10-02 08:41:11.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.782 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28754b87-0f3d-4084-a6ca-7b48ab8fade9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28754b87-0f3d-4084-a6ca-7b48ab8fade9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.783 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[98f86140-ad38-4b97-a840-e56220ec26c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.783 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-28754b87-0f3d-4084-a6ca-7b48ab8fade9
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/28754b87-0f3d-4084-a6ca-7b48ab8fade9.pid.haproxy
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 28754b87-0f3d-4084-a6ca-7b48ab8fade9
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:41:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:11.784 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'env', 'PROCESS_TAG=haproxy-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28754b87-0f3d-4084-a6ca-7b48ab8fade9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:41:12 np0005465604 nova_compute[260603]: 2025-10-02 08:41:12.101 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394472.1009295, fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:12 np0005465604 nova_compute[260603]: 2025-10-02 08:41:12.101 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] VM Started (Lifecycle Event)#033[00m
Oct  2 04:41:12 np0005465604 nova_compute[260603]: 2025-10-02 08:41:12.123 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:12 np0005465604 nova_compute[260603]: 2025-10-02 08:41:12.126 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394472.1011906, fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:12 np0005465604 nova_compute[260603]: 2025-10-02 08:41:12.127 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:41:12 np0005465604 nova_compute[260603]: 2025-10-02 08:41:12.144 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:12 np0005465604 nova_compute[260603]: 2025-10-02 08:41:12.148 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:41:12 np0005465604 nova_compute[260603]: 2025-10-02 08:41:12.167 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:41:12 np0005465604 podman[360709]: 2025-10-02 08:41:12.189852049 +0000 UTC m=+0.072650531 container create b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 04:41:12 np0005465604 systemd[1]: Started libpod-conmon-b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271.scope.
Oct  2 04:41:12 np0005465604 podman[360709]: 2025-10-02 08:41:12.151309005 +0000 UTC m=+0.034107537 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:41:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:41:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1079717454c50e69509e5f2e549434fd41ecc501c00f63e7e50c58f205d34cf1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:41:12 np0005465604 podman[360709]: 2025-10-02 08:41:12.288783902 +0000 UTC m=+0.171582394 container init b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:41:12 np0005465604 podman[360709]: 2025-10-02 08:41:12.295028775 +0000 UTC m=+0.177827247 container start b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 04:41:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:12 np0005465604 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [NOTICE]   (360728) : New worker (360730) forked
Oct  2 04:41:12 np0005465604 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [NOTICE]   (360728) : Loading success.
Oct  2 04:41:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.792 2 DEBUG nova.compute.manager [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.793 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.794 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.794 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.795 2 DEBUG nova.compute.manager [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Processing event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.795 2 DEBUG nova.compute.manager [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.796 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.796 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.796 2 DEBUG oslo_concurrency.lockutils [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.797 2 DEBUG nova.compute.manager [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] No waiting events found dispatching network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.797 2 WARNING nova.compute.manager [req-efd27f94-d969-4a91-9e09-2809b66b8448 req-dccf0e97-0e36-4f6c-a7a5-d503e0271dbe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received unexpected event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.798 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.815 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394473.8031614, fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.815 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.817 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.823 2 INFO nova.virt.libvirt.driver [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Instance spawned successfully.#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.824 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.837 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.857 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.870 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.871 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.873 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.873 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.874 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.874 2 DEBUG nova.virt.libvirt.driver [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.881 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.937 2 INFO nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Took 10.84 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:41:13 np0005465604 nova_compute[260603]: 2025-10-02 08:41:13.937 2 DEBUG nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:14 np0005465604 nova_compute[260603]: 2025-10-02 08:41:14.003 2 INFO nova.compute.manager [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Took 11.84 seconds to build instance.#033[00m
Oct  2 04:41:14 np0005465604 nova_compute[260603]: 2025-10-02 08:41:14.023 2 DEBUG oslo_concurrency.lockutils [None req-191bb6fc-7c59-4ee4-bb71-cde9703a0ee3 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:14 np0005465604 nova_compute[260603]: 2025-10-02 08:41:14.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:15 np0005465604 nova_compute[260603]: 2025-10-02 08:41:15.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 134 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct  2 04:41:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:15Z|01027|binding|INFO|Releasing lport 7619cc0b-6bdd-4351-b97b-66bbd3d3dcca from this chassis (sb_readonly=0)
Oct  2 04:41:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:15Z|01028|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct  2 04:41:15 np0005465604 NetworkManager[45129]: <info>  [1759394475.8934] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Oct  2 04:41:15 np0005465604 NetworkManager[45129]: <info>  [1759394475.8946] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Oct  2 04:41:15 np0005465604 nova_compute[260603]: 2025-10-02 08:41:15.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:15Z|01029|binding|INFO|Releasing lport 7619cc0b-6bdd-4351-b97b-66bbd3d3dcca from this chassis (sb_readonly=0)
Oct  2 04:41:15 np0005465604 nova_compute[260603]: 2025-10-02 08:41:15.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:15Z|01030|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct  2 04:41:15 np0005465604 nova_compute[260603]: 2025-10-02 08:41:15.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.134 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.134 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.134 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.135 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.135 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.136 2 INFO nova.compute.manager [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Terminating instance#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.140 2 DEBUG nova.compute.manager [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:41:16 np0005465604 kernel: tapc1ea98b4-5e (unregistering): left promiscuous mode
Oct  2 04:41:16 np0005465604 NetworkManager[45129]: <info>  [1759394476.2042] device (tapc1ea98b4-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:16Z|01031|binding|INFO|Releasing lport c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 from this chassis (sb_readonly=0)
Oct  2 04:41:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:16Z|01032|binding|INFO|Setting lport c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 down in Southbound
Oct  2 04:41:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:16Z|01033|binding|INFO|Removing iface tapc1ea98b4-5e ovn-installed in OVS
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.224 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:45:0b 10.100.0.8'], port_security=['fa:16:3e:c5:45:0b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0738f9f0c2244bb8bc7a350d5ee5932', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64c07fa8-bcda-4bfb-b1e2-06de2925c622', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84a99230-30ce-44c4-9392-f1b665c403c0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.225 162357 INFO neutron.agent.ovn.metadata.agent [-] Port c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 in datapath 28754b87-0f3d-4084-a6ca-7b48ab8fade9 unbound from our chassis#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.226 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28754b87-0f3d-4084-a6ca-7b48ab8fade9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.227 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d779b30b-07dc-4373-b07e-bd22712ad518]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.228 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9 namespace which is not needed anymore#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:16 np0005465604 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct  2 04:41:16 np0005465604 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000064.scope: Consumed 3.065s CPU time.
Oct  2 04:41:16 np0005465604 systemd-machined[214636]: Machine qemu-126-instance-00000064 terminated.
Oct  2 04:41:16 np0005465604 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [NOTICE]   (360728) : haproxy version is 2.8.14-c23fe91
Oct  2 04:41:16 np0005465604 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [NOTICE]   (360728) : path to executable is /usr/sbin/haproxy
Oct  2 04:41:16 np0005465604 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [WARNING]  (360728) : Exiting Master process...
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.384 2 INFO nova.virt.libvirt.driver [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Instance destroyed successfully.#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.385 2 DEBUG nova.objects.instance [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lazy-loading 'resources' on Instance uuid fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:41:16 np0005465604 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [ALERT]    (360728) : Current worker (360730) exited with code 143 (Terminated)
Oct  2 04:41:16 np0005465604 neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9[360724]: [WARNING]  (360728) : All workers exited. Exiting... (0)
Oct  2 04:41:16 np0005465604 systemd[1]: libpod-b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271.scope: Deactivated successfully.
Oct  2 04:41:16 np0005465604 podman[360760]: 2025-10-02 08:41:16.393963582 +0000 UTC m=+0.065047435 container died b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.405 2 DEBUG nova.virt.libvirt.vif [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-285132923',display_name='tempest-ServerAddressesNegativeTestJSON-server-285132923',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-285132923',id=100,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:41:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a0738f9f0c2244bb8bc7a350d5ee5932',ramdisk_id='',reservation_id='r-pzphyj7i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1449456229',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1449456229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:41:13Z,user_data=None,user_id='923a2daca06b4bf98c21b2604971789f',uuid=fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.406 2 DEBUG nova.network.os_vif_util [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converting VIF {"id": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "address": "fa:16:3e:c5:45:0b", "network": {"id": "28754b87-0f3d-4084-a6ca-7b48ab8fade9", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1433618105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a0738f9f0c2244bb8bc7a350d5ee5932", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ea98b4-5e", "ovs_interfaceid": "c1ea98b4-5e4c-459b-b4ec-2da19438e4e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.406 2 DEBUG nova.network.os_vif_util [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.407 2 DEBUG os_vif [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.409 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ea98b4-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.414 2 INFO os_vif [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:45:0b,bridge_name='br-int',has_traffic_filtering=True,id=c1ea98b4-5e4c-459b-b4ec-2da19438e4e9,network=Network(28754b87-0f3d-4084-a6ca-7b48ab8fade9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ea98b4-5e')#033[00m
Oct  2 04:41:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271-userdata-shm.mount: Deactivated successfully.
Oct  2 04:41:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1079717454c50e69509e5f2e549434fd41ecc501c00f63e7e50c58f205d34cf1-merged.mount: Deactivated successfully.
Oct  2 04:41:16 np0005465604 podman[360760]: 2025-10-02 08:41:16.44947533 +0000 UTC m=+0.120559183 container cleanup b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:41:16 np0005465604 systemd[1]: libpod-conmon-b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271.scope: Deactivated successfully.
Oct  2 04:41:16 np0005465604 podman[360817]: 2025-10-02 08:41:16.517435585 +0000 UTC m=+0.043087695 container remove b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.524 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8687ae-6cba-474d-99f3-ce194931b6e1]: (4, ('Thu Oct  2 08:41:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9 (b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271)\nb96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271\nThu Oct  2 08:41:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9 (b96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271)\nb96433347f52ef72e4ccbba8844f939bbedd412c3e2b7ec1b4eb4081a5e4f271\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.526 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[956b8187-4559-4bda-a278-13979590f45f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.527 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28754b87-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.527 2 DEBUG nova.compute.manager [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-unplugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.527 2 DEBUG oslo_concurrency.lockutils [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.527 2 DEBUG oslo_concurrency.lockutils [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.528 2 DEBUG oslo_concurrency.lockutils [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.528 2 DEBUG nova.compute.manager [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] No waiting events found dispatching network-vif-unplugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.528 2 DEBUG nova.compute.manager [req-46b6a06c-2ae4-43f9-bf2a-fc794dd7eb7c req-76182dd1-0217-4728-83b1-6a7a6aaa2bc3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-unplugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:16 np0005465604 kernel: tap28754b87-00: left promiscuous mode
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.548 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcb0eb1-baa8-4f3b-80c4-3c491bd41a1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.574 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8b34fe9c-5469-4e56-b7b1-32de97701f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.575 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c1e070-1629-42d1-b866-c6f513e55039]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.590 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91da8211-95fb-405b-9c3f-799af9f9e29a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535003, 'reachable_time': 19959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360832, 'error': None, 'target': 'ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:16 np0005465604 systemd[1]: run-netns-ovnmeta\x2d28754b87\x2d0f3d\x2d4084\x2da6ca\x2d7b48ab8fade9.mount: Deactivated successfully.
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.592 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28754b87-0f3d-4084-a6ca-7b48ab8fade9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:41:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:16.592 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a834383f-3d1b-4109-9f91-72d177073693]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.749 2 INFO nova.virt.libvirt.driver [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Deleting instance files /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_del#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.750 2 INFO nova.virt.libvirt.driver [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Deletion of /var/lib/nova/instances/fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf_del complete#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.810 2 INFO nova.compute.manager [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.811 2 DEBUG oslo.service.loopingcall [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.811 2 DEBUG nova.compute.manager [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.811 2 DEBUG nova.network.neutron [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.820 2 DEBUG nova.compute.manager [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.820 2 DEBUG nova.compute.manager [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing instance network info cache due to event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.821 2 DEBUG oslo_concurrency.lockutils [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.821 2 DEBUG oslo_concurrency.lockutils [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:41:16 np0005465604 nova_compute[260603]: 2025-10-02 08:41:16.821 2 DEBUG nova.network.neutron [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:41:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 119 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Oct  2 04:41:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:17 np0005465604 nova_compute[260603]: 2025-10-02 08:41:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:41:17 np0005465604 nova_compute[260603]: 2025-10-02 08:41:17.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:41:17 np0005465604 nova_compute[260603]: 2025-10-02 08:41:17.690 2 DEBUG nova.network.neutron [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:17 np0005465604 nova_compute[260603]: 2025-10-02 08:41:17.709 2 INFO nova.compute.manager [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Took 0.90 seconds to deallocate network for instance.#033[00m
Oct  2 04:41:17 np0005465604 nova_compute[260603]: 2025-10-02 08:41:17.758 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:17 np0005465604 nova_compute[260603]: 2025-10-02 08:41:17.758 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:17 np0005465604 nova_compute[260603]: 2025-10-02 08:41:17.837 2 DEBUG oslo_concurrency.processutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:41:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/904911522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.257 2 DEBUG oslo_concurrency.processutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.268 2 DEBUG nova.compute.provider_tree [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.299 2 DEBUG nova.scheduler.client.report [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.343 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.369 2 INFO nova.scheduler.client.report [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Deleted allocations for instance fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.449 2 DEBUG oslo_concurrency.lockutils [None req-1a0e4d3d-27c1-4d66-8f6f-eac9860cbfb4 923a2daca06b4bf98c21b2604971789f a0738f9f0c2244bb8bc7a350d5ee5932 - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.646 2 DEBUG nova.compute.manager [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.647 2 DEBUG oslo_concurrency.lockutils [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.647 2 DEBUG oslo_concurrency.lockutils [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.647 2 DEBUG oslo_concurrency.lockutils [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.647 2 DEBUG nova.compute.manager [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] No waiting events found dispatching network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.648 2 WARNING nova.compute.manager [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received unexpected event network-vif-plugged-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.648 2 DEBUG nova.compute.manager [req-830b0638-e405-4c6b-8c24-4eae675dbc89 req-94bcc23e-2e29-4729-965d-0f648d7be799 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Received event network-vif-deleted-c1ea98b4-5e4c-459b-b4ec-2da19438e4e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.965 2 DEBUG nova.network.neutron [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updated VIF entry in instance network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.966 2 DEBUG nova.network.neutron [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:18 np0005465604 nova_compute[260603]: 2025-10-02 08:41:18.986 2 DEBUG oslo_concurrency.lockutils [req-a35ec562-59fc-4561-a771-5e9566b5f8c1 req-d13e017f-9bf5-4d1a-8397-e7ec3fdecee3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:41:19 np0005465604 nova_compute[260603]: 2025-10-02 08:41:19.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 198 op/s
Oct  2 04:41:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 88 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 173 op/s
Oct  2 04:41:21 np0005465604 nova_compute[260603]: 2025-10-02 08:41:21.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:41:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1714812566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:41:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:41:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1714812566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:41:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:22Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:fb:ed 10.100.0.5
Oct  2 04:41:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:22Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:fb:ed 10.100.0.5
Oct  2 04:41:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 118 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Oct  2 04:41:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:23Z|01034|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct  2 04:41:23 np0005465604 nova_compute[260603]: 2025-10-02 08:41:23.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:24 np0005465604 nova_compute[260603]: 2025-10-02 08:41:24.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 118 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Oct  2 04:41:26 np0005465604 nova_compute[260603]: 2025-10-02 08:41:26.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:26 np0005465604 nova_compute[260603]: 2025-10-02 08:41:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 121 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct  2 04:41:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:41:27
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'backups', 'vms']
Oct  2 04:41:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:41:28 np0005465604 nova_compute[260603]: 2025-10-02 08:41:28.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:41:28 np0005465604 nova_compute[260603]: 2025-10-02 08:41:28.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:41:28 np0005465604 nova_compute[260603]: 2025-10-02 08:41:28.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:41:29 np0005465604 nova_compute[260603]: 2025-10-02 08:41:29.119 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:41:29 np0005465604 nova_compute[260603]: 2025-10-02 08:41:29.119 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:41:29 np0005465604 nova_compute[260603]: 2025-10-02 08:41:29.119 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:41:29 np0005465604 nova_compute[260603]: 2025-10-02 08:41:29.119 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ddf9efd0-0ac6-4857-96ea-3f1d0e18590d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:41:29 np0005465604 nova_compute[260603]: 2025-10-02 08:41:29.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Oct  2 04:41:29 np0005465604 podman[360857]: 2025-10-02 08:41:29.994070813 +0000 UTC m=+0.060823884 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 04:41:30 np0005465604 podman[360856]: 2025-10-02 08:41:30.026691053 +0000 UTC m=+0.094699132 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.148 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.165 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.165 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.165 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.165 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.166 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.184 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.185 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.185 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.185 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.185 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.383 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394476.3808556, fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.384 2 INFO nova.compute.manager [-] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.421 2 DEBUG nova.compute.manager [None req-4f73ff55-b76c-408f-adc0-512d8dc34bc7 - - - - - -] [instance: fbd65d2f-f857-4d88-b7a9-eec10bdf2ddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:41:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/974712362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.601 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.688 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.688 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.862 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.864 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3632MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.864 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.864 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.960 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ddf9efd0-0ac6-4857-96ea-3f1d0e18590d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.960 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:41:31 np0005465604 nova_compute[260603]: 2025-10-02 08:41:31.960 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:41:32 np0005465604 nova_compute[260603]: 2025-10-02 08:41:32.007 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:41:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1512994064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:41:32 np0005465604 nova_compute[260603]: 2025-10-02 08:41:32.534 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:32 np0005465604 nova_compute[260603]: 2025-10-02 08:41:32.540 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:41:32 np0005465604 nova_compute[260603]: 2025-10-02 08:41:32.565 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:41:32 np0005465604 nova_compute[260603]: 2025-10-02 08:41:32.587 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:41:32 np0005465604 nova_compute[260603]: 2025-10-02 08:41:32.587 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:41:33 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:33Z|01035|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct  2 04:41:34 np0005465604 nova_compute[260603]: 2025-10-02 08:41:34.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:41:34 np0005465604 nova_compute[260603]: 2025-10-02 08:41:34.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:41:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:34.825 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:41:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:34.825 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:41:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:34.826 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:41:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 31 KiB/s wr, 10 op/s
Oct  2 04:41:35 np0005465604 nova_compute[260603]: 2025-10-02 08:41:35.941 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:41:36 np0005465604 nova_compute[260603]: 2025-10-02 08:41:36.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:41:36 np0005465604 nova_compute[260603]: 2025-10-02 08:41:36.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:41:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 31 KiB/s wr, 10 op/s
Oct  2 04:41:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:37 np0005465604 nova_compute[260603]: 2025-10-02 08:41:37.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:41:38 np0005465604 nova_compute[260603]: 2025-10-02 08:41:38.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:41:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:41:38 np0005465604 podman[360946]: 2025-10-02 08:41:38.994848476 +0000 UTC m=+0.057972957 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 04:41:39 np0005465604 podman[360945]: 2025-10-02 08:41:39.000680017 +0000 UTC m=+0.066010855 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 04:41:39 np0005465604 nova_compute[260603]: 2025-10-02 08:41:39.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:41:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 12 KiB/s wr, 2 op/s
Oct  2 04:41:40 np0005465604 nova_compute[260603]: 2025-10-02 08:41:40.439 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:41:40 np0005465604 nova_compute[260603]: 2025-10-02 08:41:40.440 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:41:40 np0005465604 nova_compute[260603]: 2025-10-02 08:41:40.462 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:41:40 np0005465604 nova_compute[260603]: 2025-10-02 08:41:40.541 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:41:40 np0005465604 nova_compute[260603]: 2025-10-02 08:41:40.542 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:41:40 np0005465604 nova_compute[260603]: 2025-10-02 08:41:40.547 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:41:40 np0005465604 nova_compute[260603]: 2025-10-02 08:41:40.547 2 INFO nova.compute.claims [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:41:40 np0005465604 nova_compute[260603]: 2025-10-02 08:41:40.678 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:41:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:41:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3070704888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.120 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.128 2 DEBUG nova.compute.provider_tree [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.149 2 DEBUG nova.scheduler.client.report [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.188 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.252 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "35470f0c-88c0-45dc-81af-53af166bcc4e" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.252 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "35470f0c-88c0-45dc-81af-53af166bcc4e" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.264 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "35470f0c-88c0-45dc-81af-53af166bcc4e" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.266 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:41:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 121 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.323 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.324 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.345 2 INFO nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.363 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.493 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.495 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.496 2 INFO nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Creating image(s)
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.538 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.590 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.627 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.632 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.682 2 DEBUG nova.policy [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4487d5c9ca094c4183c8500d1f6df983', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b230fcb7df44b5f9a434a12364fcaf2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.725 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.726 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.727 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.728 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.753 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:41:41 np0005465604 nova_compute[260603]: 2025-10-02 08:41:41.756 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 47497cd9-93be-482f-b4a8-4529910a9055_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:41:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:42 np0005465604 nova_compute[260603]: 2025-10-02 08:41:42.515 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 47497cd9-93be-482f-b4a8-4529910a9055_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.759s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:41:42 np0005465604 nova_compute[260603]: 2025-10-02 08:41:42.603 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] resizing rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:41:42 np0005465604 nova_compute[260603]: 2025-10-02 08:41:42.745 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Successfully created port: 08ea1ec3-2021-4942-8b2a-c699ee4dc052 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:41:42 np0005465604 nova_compute[260603]: 2025-10-02 08:41:42.802 2 DEBUG nova.objects.instance [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 47497cd9-93be-482f-b4a8-4529910a9055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:41:42 np0005465604 nova_compute[260603]: 2025-10-02 08:41:42.872 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:41:42 np0005465604 nova_compute[260603]: 2025-10-02 08:41:42.873 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Ensure instance console log exists: /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:41:42 np0005465604 nova_compute[260603]: 2025-10-02 08:41:42.873 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:41:42 np0005465604 nova_compute[260603]: 2025-10-02 08:41:42.874 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:41:42 np0005465604 nova_compute[260603]: 2025-10-02 08:41:42.874 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:41:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 165 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Oct  2 04:41:43 np0005465604 nova_compute[260603]: 2025-10-02 08:41:43.694 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Successfully updated port: 08ea1ec3-2021-4942-8b2a-c699ee4dc052 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:41:43 np0005465604 nova_compute[260603]: 2025-10-02 08:41:43.712 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:41:43 np0005465604 nova_compute[260603]: 2025-10-02 08:41:43.713 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquired lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:41:43 np0005465604 nova_compute[260603]: 2025-10-02 08:41:43.713 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:41:43 np0005465604 nova_compute[260603]: 2025-10-02 08:41:43.870 2 DEBUG nova.compute.manager [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-changed-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:43 np0005465604 nova_compute[260603]: 2025-10-02 08:41:43.871 2 DEBUG nova.compute.manager [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Refreshing instance network info cache due to event network-changed-08ea1ec3-2021-4942-8b2a-c699ee4dc052. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:41:43 np0005465604 nova_compute[260603]: 2025-10-02 08:41:43.871 2 DEBUG oslo_concurrency.lockutils [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:41:44 np0005465604 nova_compute[260603]: 2025-10-02 08:41:44.053 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:41:44 np0005465604 nova_compute[260603]: 2025-10-02 08:41:44.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 165 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.6 MiB/s wr, 16 op/s
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.335 2 DEBUG nova.network.neutron [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Updating instance_info_cache with network_info: [{"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.362 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Releasing lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.362 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Instance network_info: |[{"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.362 2 DEBUG oslo_concurrency.lockutils [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.363 2 DEBUG nova.network.neutron [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Refreshing network info cache for port 08ea1ec3-2021-4942-8b2a-c699ee4dc052 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.365 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Start _get_guest_xml network_info=[{"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.370 2 WARNING nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.374 2 DEBUG nova.virt.libvirt.host [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.374 2 DEBUG nova.virt.libvirt.host [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.377 2 DEBUG nova.virt.libvirt.host [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.378 2 DEBUG nova.virt.libvirt.host [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.378 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.378 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.379 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.379 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.379 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.379 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.379 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.380 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.380 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.380 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.380 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.381 2 DEBUG nova.virt.hardware [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.383 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:41:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1690358141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.866 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.867 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.868 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.898 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.902 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:45 np0005465604 nova_compute[260603]: 2025-10-02 08:41:45.986 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.255 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.255 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.265 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.266 2 INFO nova.compute.claims [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:41:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:41:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586273505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.396 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.398 2 DEBUG nova.virt.libvirt.vif [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-793142749',display_name='tempest-ServerGroupTestJSON-server-793142749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-793142749',id=101,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b230fcb7df44b5f9a434a12364fcaf2',ramdisk_id='',reservation_id='r-wpjgykv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1408982320',owner_user_name='tempest-ServerGroupTestJSON-1408982320-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:41Z,user_data=None,user_id='4487d5c9ca094c4183c8500d1f6df983',uuid=47497cd9-93be-482f-b4a8-4529910a9055,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.399 2 DEBUG nova.network.os_vif_util [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converting VIF {"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.400 2 DEBUG nova.network.os_vif_util [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.402 2 DEBUG nova.objects.instance [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 47497cd9-93be-482f-b4a8-4529910a9055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.425 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  <uuid>47497cd9-93be-482f-b4a8-4529910a9055</uuid>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  <name>instance-00000065</name>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerGroupTestJSON-server-793142749</nova:name>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:41:45</nova:creationTime>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <nova:user uuid="4487d5c9ca094c4183c8500d1f6df983">tempest-ServerGroupTestJSON-1408982320-project-member</nova:user>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <nova:project uuid="2b230fcb7df44b5f9a434a12364fcaf2">tempest-ServerGroupTestJSON-1408982320</nova:project>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <nova:port uuid="08ea1ec3-2021-4942-8b2a-c699ee4dc052">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <entry name="serial">47497cd9-93be-482f-b4a8-4529910a9055</entry>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <entry name="uuid">47497cd9-93be-482f-b4a8-4529910a9055</entry>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/47497cd9-93be-482f-b4a8-4529910a9055_disk">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/47497cd9-93be-482f-b4a8-4529910a9055_disk.config">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:38:4a:a1"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <target dev="tap08ea1ec3-20"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/console.log" append="off"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:41:46 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:41:46 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:41:46 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:41:46 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.427 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Preparing to wait for external event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.428 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.429 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.429 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.430 2 DEBUG nova.virt.libvirt.vif [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-793142749',display_name='tempest-ServerGroupTestJSON-server-793142749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-793142749',id=101,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b230fcb7df44b5f9a434a12364fcaf2',ramdisk_id='',reservation_id='r-wpjgykv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1408982320',owner_user_name='tempest-ServerGroupTestJSON-1408982320-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:41Z,user_data=None,user_id='4487d5c9ca094c4183c8500d1f6df983',uuid=47497cd9-93be-482f-b4a8-4529910a9055,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.431 2 DEBUG nova.network.os_vif_util [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converting VIF {"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.432 2 DEBUG nova.network.os_vif_util [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.433 2 DEBUG os_vif [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.435 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.438 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08ea1ec3-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.512 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap08ea1ec3-20, col_values=(('external_ids', {'iface-id': '08ea1ec3-2021-4942-8b2a-c699ee4dc052', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:4a:a1', 'vm-uuid': '47497cd9-93be-482f-b4a8-4529910a9055'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:46 np0005465604 NetworkManager[45129]: <info>  [1759394506.5160] manager: (tap08ea1ec3-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.525 2 INFO os_vif [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20')#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.588 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.589 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.590 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] No VIF found with MAC fa:16:3e:38:4a:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.591 2 INFO nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Using config drive#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.625 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:41:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274527031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.956 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.964 2 DEBUG nova.compute.provider_tree [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:41:46 np0005465604 nova_compute[260603]: 2025-10-02 08:41:46.981 2 DEBUG nova.scheduler.client.report [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.016 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.017 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.073 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.074 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.102 2 INFO nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.119 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.194 2 DEBUG nova.network.neutron [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Updated VIF entry in instance network info cache for port 08ea1ec3-2021-4942-8b2a-c699ee4dc052. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.196 2 DEBUG nova.network.neutron [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Updating instance_info_cache with network_info: [{"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.208 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.210 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.211 2 INFO nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Creating image(s)#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.243 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.277 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 167 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct  2 04:41:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.309 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.314 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.361 2 DEBUG nova.policy [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ffb3b2bafd1c40058c2669d61c40f3f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28277d49c8814c0691f5e1dce22bf215', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.364 2 DEBUG oslo_concurrency.lockutils [req-49143cfb-da9e-48ad-9894-7877747a86e0 req-a74944fa-03ab-4c00-bc75-58a9feee6c20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-47497cd9-93be-482f-b4a8-4529910a9055" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.388 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.389 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.389 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.389 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.414 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.418 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5748b14f-5bbb-46f3-b563-062d530e5abd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.537 2 INFO nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Creating config drive at /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.544 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp526qgzwt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.698 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp526qgzwt" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.720 2 DEBUG nova.storage.rbd_utils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] rbd image 47497cd9-93be-482f-b4a8-4529910a9055_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.726 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config 47497cd9-93be-482f-b4a8-4529910a9055_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.763 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5748b14f-5bbb-46f3-b563-062d530e5abd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.814 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] resizing rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.897 2 DEBUG oslo_concurrency.processutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config 47497cd9-93be-482f-b4a8-4529910a9055_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.897 2 INFO nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Deleting local config drive /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055/disk.config because it was imported into RBD.
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.904 2 DEBUG nova.objects.instance [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lazy-loading 'migration_context' on Instance uuid 5748b14f-5bbb-46f3-b563-062d530e5abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.917 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.917 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Ensure instance console log exists: /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.918 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.918 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.918 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:41:47 np0005465604 kernel: tap08ea1ec3-20: entered promiscuous mode
Oct  2 04:41:47 np0005465604 NetworkManager[45129]: <info>  [1759394507.9419] manager: (tap08ea1ec3-20): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:41:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:47Z|01036|binding|INFO|Claiming lport 08ea1ec3-2021-4942-8b2a-c699ee4dc052 for this chassis.
Oct  2 04:41:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:47Z|01037|binding|INFO|08ea1ec3-2021-4942-8b2a-c699ee4dc052: Claiming fa:16:3e:38:4a:a1 10.100.0.9
Oct  2 04:41:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.951 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:4a:a1 10.100.0.9'], port_security=['fa:16:3e:38:4a:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '47497cd9-93be-482f-b4a8-4529910a9055', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b230fcb7df44b5f9a434a12364fcaf2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ba541694-b209-40e0-95e1-14515fff1dc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=027ad20f-cef6-496f-a5dc-0e38827317a8, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=08ea1ec3-2021-4942-8b2a-c699ee4dc052) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:41:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.952 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 08ea1ec3-2021-4942-8b2a-c699ee4dc052 in datapath fbf4a9fc-7958-460e-b38c-ae01b006309e bound to our chassis
Oct  2 04:41:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.953 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbf4a9fc-7958-460e-b38c-ae01b006309e
Oct  2 04:41:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:47Z|01038|binding|INFO|Setting lport 08ea1ec3-2021-4942-8b2a-c699ee4dc052 ovn-installed in OVS
Oct  2 04:41:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:47Z|01039|binding|INFO|Setting lport 08ea1ec3-2021-4942-8b2a-c699ee4dc052 up in Southbound
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:41:47 np0005465604 nova_compute[260603]: 2025-10-02 08:41:47.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:41:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.968 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ef272bf1-ab6d-42c3-803c-f1cd02543cd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.969 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbf4a9fc-71 in ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 04:41:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.971 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbf4a9fc-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 04:41:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.971 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7d55f24a-fc20-442e-9bfa-5c5a1575ac5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.972 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0f99f0-a973-45e0-9c29-7e717ab60e40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:47 np0005465604 systemd-machined[214636]: New machine qemu-127-instance-00000065.
Oct  2 04:41:47 np0005465604 systemd-udevd[361498]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:41:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:47.982 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[df975f91-b364-4a50-9dcc-8140e5f8f856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:47 np0005465604 systemd[1]: Started Virtual Machine qemu-127-instance-00000065.
Oct  2 04:41:47 np0005465604 NetworkManager[45129]: <info>  [1759394507.9915] device (tap08ea1ec3-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:41:47 np0005465604 NetworkManager[45129]: <info>  [1759394507.9926] device (tap08ea1ec3-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.007 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1be2a4bd-4a87-4da6-96cf-4a779acaeb56]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.035 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[daa64a67-2484-4f1e-a60c-40a1529b4078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:48 np0005465604 NetworkManager[45129]: <info>  [1759394508.0409] manager: (tapfbf4a9fc-70): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Oct  2 04:41:48 np0005465604 systemd-udevd[361502]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.041 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[24adfeba-e7cf-43f9-ab14-e96564136578]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.070 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[94e9d5b0-4e21-4615-bd6b-8d4307728e94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.074 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4a35c6-7c73-410e-82a6-f0712bf869c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:48 np0005465604 NetworkManager[45129]: <info>  [1759394508.0933] device (tapfbf4a9fc-70): carrier: link connected
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.098 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5af5cc59-f11a-4c17-bbb5-56896ccb59c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.119 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d4869b24-0080-47ba-b071-6a0744cb3613]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf4a9fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:95:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538669, 'reachable_time': 22084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361530, 'error': None, 'target': 'ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.140 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d398658-a356-4258-85d0-2704c979f1f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:95f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538669, 'tstamp': 538669}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361531, 'error': None, 'target': 'ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.158 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1029fdd6-c2aa-41a8-98b6-6446f925dee5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbf4a9fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:95:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538669, 'reachable_time': 22084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361532, 'error': None, 'target': 'ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:48 np0005465604 nova_compute[260603]: 2025-10-02 08:41:48.193 2 DEBUG nova.compute.manager [req-f816583b-5f8a-4605-8130-a4054c38ed6b req-0cde91fc-007b-4a9a-acfd-907361d7635e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:41:48 np0005465604 nova_compute[260603]: 2025-10-02 08:41:48.194 2 DEBUG oslo_concurrency.lockutils [req-f816583b-5f8a-4605-8130-a4054c38ed6b req-0cde91fc-007b-4a9a-acfd-907361d7635e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:41:48 np0005465604 nova_compute[260603]: 2025-10-02 08:41:48.194 2 DEBUG oslo_concurrency.lockutils [req-f816583b-5f8a-4605-8130-a4054c38ed6b req-0cde91fc-007b-4a9a-acfd-907361d7635e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:41:48 np0005465604 nova_compute[260603]: 2025-10-02 08:41:48.195 2 DEBUG oslo_concurrency.lockutils [req-f816583b-5f8a-4605-8130-a4054c38ed6b req-0cde91fc-007b-4a9a-acfd-907361d7635e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:41:48 np0005465604 nova_compute[260603]: 2025-10-02 08:41:48.195 2 DEBUG nova.compute.manager [req-f816583b-5f8a-4605-8130-a4054c38ed6b req-0cde91fc-007b-4a9a-acfd-907361d7635e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Processing event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.200 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3175cf98-8e89-4b09-9539-143d48585547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.297 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e591801b-7ced-4a86-8775-f1d951d102d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.299 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf4a9fc-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.300 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.302 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbf4a9fc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:48 np0005465604 NetworkManager[45129]: <info>  [1759394508.3061] manager: (tapfbf4a9fc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Oct  2 04:41:48 np0005465604 kernel: tapfbf4a9fc-70: entered promiscuous mode
Oct  2 04:41:48 np0005465604 nova_compute[260603]: 2025-10-02 08:41:48.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbf4a9fc-70, col_values=(('external_ids', {'iface-id': 'fe6c3964-69a6-4349-bab5-7c6c849c9643'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:48Z|01040|binding|INFO|Releasing lport fe6c3964-69a6-4349-bab5-7c6c849c9643 from this chassis (sb_readonly=0)
Oct  2 04:41:48 np0005465604 nova_compute[260603]: 2025-10-02 08:41:48.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:48 np0005465604 nova_compute[260603]: 2025-10-02 08:41:48.316 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Successfully created port: 52a482d6-fbe9-4583-af85-5407a6796976 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:41:48 np0005465604 nova_compute[260603]: 2025-10-02 08:41:48.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.346 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbf4a9fc-7958-460e-b38c-ae01b006309e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbf4a9fc-7958-460e-b38c-ae01b006309e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.347 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50e03d7e-1daf-434c-82ca-b543d8378cd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.349 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-fbf4a9fc-7958-460e-b38c-ae01b006309e
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/fbf4a9fc-7958-460e-b38c-ae01b006309e.pid.haproxy
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID fbf4a9fc-7958-460e-b38c-ae01b006309e
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:41:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:48.354 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'env', 'PROCESS_TAG=haproxy-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbf4a9fc-7958-460e-b38c-ae01b006309e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:41:48 np0005465604 podman[361606]: 2025-10-02 08:41:48.819470248 +0000 UTC m=+0.123447574 container create f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 04:41:48 np0005465604 podman[361606]: 2025-10-02 08:41:48.735069634 +0000 UTC m=+0.039046990 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:41:48 np0005465604 systemd[1]: Started libpod-conmon-f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145.scope.
Oct  2 04:41:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:41:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e103983e2dea04abc89b05140816442de9ff485d4b8298c9a16b43060795d0d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:41:48 np0005465604 podman[361606]: 2025-10-02 08:41:48.996353165 +0000 UTC m=+0.300330511 container init f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.002 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394509.001552, 47497cd9-93be-482f-b4a8-4529910a9055 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.003 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] VM Started (Lifecycle Event)#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.006 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:41:49 np0005465604 podman[361606]: 2025-10-02 08:41:49.009597865 +0000 UTC m=+0.313575201 container start f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.011 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.019 2 INFO nova.virt.libvirt.driver [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Instance spawned successfully.#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.020 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.024 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.030 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:41:49 np0005465604 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [NOTICE]   (361625) : New worker (361627) forked
Oct  2 04:41:49 np0005465604 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [NOTICE]   (361625) : Loading success.
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.048 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.049 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.050 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.051 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.052 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.054 2 DEBUG nova.virt.libvirt.driver [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.059 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.060 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394509.0017219, 47497cd9-93be-482f-b4a8-4529910a9055 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.061 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.088 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.092 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394509.0110786, 47497cd9-93be-482f-b4a8-4529910a9055 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.093 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.112 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.115 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.119 2 INFO nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Took 7.63 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.119 2 DEBUG nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.147 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.185 2 INFO nova.compute.manager [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Took 8.68 seconds to build instance.#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.214 2 DEBUG oslo_concurrency.lockutils [None req-3826a4e8-d05d-457f-85ea-176d2790d487 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 185 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.826 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Successfully updated port: 52a482d6-fbe9-4583-af85-5407a6796976 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.845 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.846 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquired lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:41:49 np0005465604 nova_compute[260603]: 2025-10-02 08:41:49.847 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.045 2 DEBUG nova.compute.manager [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-changed-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.049 2 DEBUG nova.compute.manager [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing instance network info cache due to event network-changed-52a482d6-fbe9-4583-af85-5407a6796976. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.050 2 DEBUG oslo_concurrency.lockutils [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.055 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.056 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.057 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.059 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.061 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.063 2 INFO nova.compute.manager [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Terminating instance#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.067 2 DEBUG nova.compute.manager [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.084 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:41:50 np0005465604 kernel: tap08ea1ec3-20 (unregistering): left promiscuous mode
Oct  2 04:41:50 np0005465604 NetworkManager[45129]: <info>  [1759394510.2359] device (tap08ea1ec3-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:50Z|01041|binding|INFO|Releasing lport 08ea1ec3-2021-4942-8b2a-c699ee4dc052 from this chassis (sb_readonly=0)
Oct  2 04:41:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:50Z|01042|binding|INFO|Setting lport 08ea1ec3-2021-4942-8b2a-c699ee4dc052 down in Southbound
Oct  2 04:41:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:50Z|01043|binding|INFO|Removing iface tap08ea1ec3-20 ovn-installed in OVS
Oct  2 04:41:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.256 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:4a:a1 10.100.0.9'], port_security=['fa:16:3e:38:4a:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '47497cd9-93be-482f-b4a8-4529910a9055', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b230fcb7df44b5f9a434a12364fcaf2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ba541694-b209-40e0-95e1-14515fff1dc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=027ad20f-cef6-496f-a5dc-0e38827317a8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=08ea1ec3-2021-4942-8b2a-c699ee4dc052) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:41:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.259 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 08ea1ec3-2021-4942-8b2a-c699ee4dc052 in datapath fbf4a9fc-7958-460e-b38c-ae01b006309e unbound from our chassis#033[00m
Oct  2 04:41:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.262 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbf4a9fc-7958-460e-b38c-ae01b006309e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:41:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.266 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[99e03701-eec1-409c-9273-c269d6607776]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.267 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e namespace which is not needed anymore#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:50 np0005465604 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct  2 04:41:50 np0005465604 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000065.scope: Consumed 1.972s CPU time.
Oct  2 04:41:50 np0005465604 systemd-machined[214636]: Machine qemu-127-instance-00000065 terminated.
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.296 2 DEBUG nova.compute.manager [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.297 2 DEBUG oslo_concurrency.lockutils [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.297 2 DEBUG oslo_concurrency.lockutils [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.298 2 DEBUG oslo_concurrency.lockutils [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.298 2 DEBUG nova.compute.manager [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] No waiting events found dispatching network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.299 2 WARNING nova.compute.manager [req-48b34df0-610f-4ec5-bbe0-c3f774050513 req-5d8de077-2567-4456-97d1-d65bdb09da7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received unexpected event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:41:50 np0005465604 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [NOTICE]   (361625) : haproxy version is 2.8.14-c23fe91
Oct  2 04:41:50 np0005465604 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [NOTICE]   (361625) : path to executable is /usr/sbin/haproxy
Oct  2 04:41:50 np0005465604 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [WARNING]  (361625) : Exiting Master process...
Oct  2 04:41:50 np0005465604 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [WARNING]  (361625) : Exiting Master process...
Oct  2 04:41:50 np0005465604 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [ALERT]    (361625) : Current worker (361627) exited with code 143 (Terminated)
Oct  2 04:41:50 np0005465604 neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e[361621]: [WARNING]  (361625) : All workers exited. Exiting... (0)
Oct  2 04:41:50 np0005465604 systemd[1]: libpod-f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145.scope: Deactivated successfully.
Oct  2 04:41:50 np0005465604 podman[361657]: 2025-10-02 08:41:50.503561436 +0000 UTC m=+0.080706800 container died f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.523 2 INFO nova.virt.libvirt.driver [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Instance destroyed successfully.#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.523 2 DEBUG nova.objects.instance [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lazy-loading 'resources' on Instance uuid 47497cd9-93be-482f-b4a8-4529910a9055 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.548 2 DEBUG nova.virt.libvirt.vif [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:41:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-793142749',display_name='tempest-ServerGroupTestJSON-server-793142749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-793142749',id=101,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:41:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b230fcb7df44b5f9a434a12364fcaf2',ramdisk_id='',reservation_id='r-wpjgykv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1408982320',owner_user_name='tempest-ServerGroupTestJSON-1408982320-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:41:49Z,user_data=None,user_id='4487d5c9ca094c4183c8500d1f6df983',uuid=47497cd9-93be-482f-b4a8-4529910a9055,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.549 2 DEBUG nova.network.os_vif_util [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converting VIF {"id": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "address": "fa:16:3e:38:4a:a1", "network": {"id": "fbf4a9fc-7958-460e-b38c-ae01b006309e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1878294879-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b230fcb7df44b5f9a434a12364fcaf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08ea1ec3-20", "ovs_interfaceid": "08ea1ec3-2021-4942-8b2a-c699ee4dc052", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.551 2 DEBUG nova.network.os_vif_util [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.551 2 DEBUG os_vif [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08ea1ec3-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.599 2 INFO os_vif [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:4a:a1,bridge_name='br-int',has_traffic_filtering=True,id=08ea1ec3-2021-4942-8b2a-c699ee4dc052,network=Network(fbf4a9fc-7958-460e-b38c-ae01b006309e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08ea1ec3-20')#033[00m
Oct  2 04:41:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145-userdata-shm.mount: Deactivated successfully.
Oct  2 04:41:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6e103983e2dea04abc89b05140816442de9ff485d4b8298c9a16b43060795d0d-merged.mount: Deactivated successfully.
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.869 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:41:50 np0005465604 podman[361657]: 2025-10-02 08:41:50.880659503 +0000 UTC m=+0.457804897 container cleanup f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 04:41:50 np0005465604 systemd[1]: libpod-conmon-f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145.scope: Deactivated successfully.
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.920 2 DEBUG nova.network.neutron [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updating instance_info_cache with network_info: [{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.949 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Releasing lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.950 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Instance network_info: |[{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.951 2 DEBUG oslo_concurrency.lockutils [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.952 2 DEBUG nova.network.neutron [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.958 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Start _get_guest_xml network_info=[{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.963 2 WARNING nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.970 2 DEBUG nova.virt.libvirt.host [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.971 2 DEBUG nova.virt.libvirt.host [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.980 2 DEBUG nova.virt.libvirt.host [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.982 2 DEBUG nova.virt.libvirt.host [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.983 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.983 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.984 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.985 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.985 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.986 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.987 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:41:50 np0005465604 podman[361717]: 2025-10-02 08:41:50.987534143 +0000 UTC m=+0.072719894 container remove f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.987 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.988 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.988 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.989 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.989 2 DEBUG nova.virt.hardware [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:41:50 np0005465604 nova_compute[260603]: 2025-10-02 08:41:50.994 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.997 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e3d860-8f18-40da-8043-8fa2aee77fb2]: (4, ('Thu Oct  2 08:41:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e (f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145)\nf30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145\nThu Oct  2 08:41:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e (f30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145)\nf30bc26158eab0ee0f2ff173b395c3c4eac06f0dbc5295399e1758b960202145\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.998 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25e48feb-3263-44a2-a5d6-b21bae8f8227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:50.999 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbf4a9fc-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:51 np0005465604 kernel: tapfbf4a9fc-70: left promiscuous mode
Oct  2 04:41:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.019 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[552d94d2-d834-4416-bac4-8d0b775dcf4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.051 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[51dac703-3c1c-48e6-bf28-94c489ecaa38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.052 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f66a94-c920-4e6b-a1b3-f54dac0cd469]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.072 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0bd7b0-387b-403a-b25d-e315418282bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538663, 'reachable_time': 34742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361731, 'error': None, 'target': 'ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.074 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbf4a9fc-7958-460e-b38c-ae01b006309e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:41:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.075 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f20cccb6-f5ee-423d-8889-fb20f25d0588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:51 np0005465604 systemd[1]: run-netns-ovnmeta\x2dfbf4a9fc\x2d7958\x2d460e\x2db38c\x2dae01b006309e.mount: Deactivated successfully.
Oct  2 04:41:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:51.075 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:41:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 185 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.344 2 INFO nova.virt.libvirt.driver [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Deleting instance files /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055_del#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.345 2 INFO nova.virt.libvirt.driver [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Deletion of /var/lib/nova/instances/47497cd9-93be-482f-b4a8-4529910a9055_del complete#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.401 2 INFO nova.compute.manager [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Took 1.33 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.402 2 DEBUG oslo.service.loopingcall [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.402 2 DEBUG nova.compute.manager [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.402 2 DEBUG nova.network.neutron [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:41:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:41:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591855804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.447 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.471 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.476 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:41:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/356067763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.946 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.948 2 DEBUG nova.virt.libvirt.vif [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-263070957-acc',id=102,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBHUVZCqw3VWkKw3WswXpGtJf1pEavua2ZnmSgdDq3pc4sGnLceByNqUw0H30LytwdI+cAke8ed31ZxradSBqQ5vuq98lJn2fkhaaMuY6USKAqJ3+Ezi42AVRSerZijcMA==',key_name='tempest-TestSecurityGroupsBasicOps-1336991398',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28277d49c8814c0691f5e1dce22bf215',ramdisk_id='',reservation_id='r-740jcdvb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-263070957',owner_user_name='tempest-TestSecurityGroupsBasicOps-263070957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:47Z,user_data=None,user_id='ffb3b2bafd1c40058c2669d61c40f3f9',uuid=5748b14f-5bbb-46f3-b563-062d530e5abd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.948 2 DEBUG nova.network.os_vif_util [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converting VIF {"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.949 2 DEBUG nova.network.os_vif_util [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.951 2 DEBUG nova.objects.instance [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5748b14f-5bbb-46f3-b563-062d530e5abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:41:51 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.995 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  <uuid>5748b14f-5bbb-46f3-b563-062d530e5abd</uuid>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  <name>instance-00000066</name>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405</nova:name>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:41:50</nova:creationTime>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <nova:user uuid="ffb3b2bafd1c40058c2669d61c40f3f9">tempest-TestSecurityGroupsBasicOps-263070957-project-member</nova:user>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <nova:project uuid="28277d49c8814c0691f5e1dce22bf215">tempest-TestSecurityGroupsBasicOps-263070957</nova:project>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <nova:port uuid="52a482d6-fbe9-4583-af85-5407a6796976">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <entry name="serial">5748b14f-5bbb-46f3-b563-062d530e5abd</entry>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <entry name="uuid">5748b14f-5bbb-46f3-b563-062d530e5abd</entry>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:41:52 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/5748b14f-5bbb-46f3-b563-062d530e5abd_disk">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:2c:a7:87"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <target dev="tap52a482d6-fb"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/console.log" append="off"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:41:52 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:41:52 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:41:52 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:41:52 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.996 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Preparing to wait for external event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.996 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.996 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.996 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.997 2 DEBUG nova.virt.libvirt.vif [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:41:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-263070957-acc',id=102,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBHUVZCqw3VWkKw3WswXpGtJf1pEavua2ZnmSgdDq3pc4sGnLceByNqUw0H30LytwdI+cAke8ed31ZxradSBqQ5vuq98lJn2fkhaaMuY6USKAqJ3+Ezi42AVRSerZijcMA==',key_name='tempest-TestSecurityGroupsBasicOps-1336991398',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28277d49c8814c0691f5e1dce22bf215',ramdisk_id='',reservation_id='r-740jcdvb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-263070957',owner_user_name='tempest-TestSecurityGroupsBasicOps-263070957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:41:47Z,user_data=None,user_id='ffb3b2bafd1c40058c2669d61c40f3f9',uuid=5748b14f-5bbb-46f3-b563-062d530e5abd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.997 2 DEBUG nova.network.os_vif_util [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converting VIF {"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.998 2 DEBUG nova.network.os_vif_util [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.998 2 DEBUG os_vif [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:51.999 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.003 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52a482d6-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.004 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52a482d6-fb, col_values=(('external_ids', {'iface-id': '52a482d6-fbe9-4583-af85-5407a6796976', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:a7:87', 'vm-uuid': '5748b14f-5bbb-46f3-b563-062d530e5abd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:52 np0005465604 NetworkManager[45129]: <info>  [1759394512.0064] manager: (tap52a482d6-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.017 2 INFO os_vif [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb')#033[00m
Oct  2 04:41:52 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.079 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.079 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.079 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] No VIF found with MAC fa:16:3e:2c:a7:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.080 2 INFO nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Using config drive#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.105 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.380 2 DEBUG nova.network.neutron [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.401 2 INFO nova.compute.manager [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Took 1.00 seconds to deallocate network for instance.#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.431 2 DEBUG nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-unplugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.431 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.432 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.432 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.433 2 DEBUG nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] No waiting events found dispatching network-vif-unplugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.433 2 DEBUG nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-unplugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.434 2 DEBUG nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.434 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "47497cd9-93be-482f-b4a8-4529910a9055-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.434 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.435 2 DEBUG oslo_concurrency.lockutils [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.435 2 DEBUG nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] No waiting events found dispatching network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.435 2 WARNING nova.compute.manager [req-b5bade32-385b-4a88-9e58-cfd9c844c7e8 req-bad58cbf-d1a9-4120-bdbd-9533726f52c1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received unexpected event network-vif-plugged-08ea1ec3-2021-4942-8b2a-c699ee4dc052 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.475 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.476 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.569 2 DEBUG nova.compute.manager [req-2d6d04ef-74e9-4c21-9aaf-d1434d935f8b req-be932978-dcc3-4551-8666-558cd9129205 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Received event network-vif-deleted-08ea1ec3-2021-4942-8b2a-c699ee4dc052 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.579 2 DEBUG oslo_concurrency.processutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.967 2 INFO nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Creating config drive at /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config#033[00m
Oct  2 04:41:52 np0005465604 nova_compute[260603]: 2025-10-02 08:41:52.973 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvyg6zi53 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:41:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11347784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.072 2 DEBUG oslo_concurrency.processutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.081 2 DEBUG nova.compute.provider_tree [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.085 2 DEBUG nova.network.neutron [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updated VIF entry in instance network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.085 2 DEBUG nova.network.neutron [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updating instance_info_cache with network_info: [{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.104 2 DEBUG nova.scheduler.client.report [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.107 2 DEBUG oslo_concurrency.lockutils [req-f5394c52-bc89-424e-bdf5-81ff1a182e11 req-d7a0e1b7-fd84-45ca-b229-62c8f187fb83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.129 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.141 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvyg6zi53" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.162 2 DEBUG nova.storage.rbd_utils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] rbd image 5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.165 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config 5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.206 2 INFO nova.scheduler.client.report [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Deleted allocations for instance 47497cd9-93be-482f-b4a8-4529910a9055#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.277 2 DEBUG oslo_concurrency.lockutils [None req-ca328abd-e4bf-4bd9-a96f-c9629bdc0505 4487d5c9ca094c4183c8500d1f6df983 2b230fcb7df44b5f9a434a12364fcaf2 - - default default] Lock "47497cd9-93be-482f-b4a8-4529910a9055" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.331 2 DEBUG oslo_concurrency.processutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config 5748b14f-5bbb-46f3-b563-062d530e5abd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.331 2 INFO nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Deleting local config drive /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd/disk.config because it was imported into RBD.#033[00m
Oct  2 04:41:53 np0005465604 kernel: tap52a482d6-fb: entered promiscuous mode
Oct  2 04:41:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:53Z|01044|binding|INFO|Claiming lport 52a482d6-fbe9-4583-af85-5407a6796976 for this chassis.
Oct  2 04:41:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:53Z|01045|binding|INFO|52a482d6-fbe9-4583-af85-5407a6796976: Claiming fa:16:3e:2c:a7:87 10.100.0.3
Oct  2 04:41:53 np0005465604 NetworkManager[45129]: <info>  [1759394513.3787] manager: (tap52a482d6-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Oct  2 04:41:53 np0005465604 systemd-udevd[361733]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.387 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:a7:87 10.100.0.3'], port_security=['fa:16:3e:2c:a7:87 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5748b14f-5bbb-46f3-b563-062d530e5abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-075eee4e-656c-44de-9ee6-7589c7382251', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28277d49c8814c0691f5e1dce22bf215', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13b54cc2-9090-4249-b147-09fa8e935774 94805a92-fc8b-4089-aba3-84e7867eb02d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e7b4dd6-37a8-4a1b-88c6-3100272d2b7a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=52a482d6-fbe9-4583-af85-5407a6796976) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.389 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 52a482d6-fbe9-4583-af85-5407a6796976 in datapath 075eee4e-656c-44de-9ee6-7589c7382251 bound to our chassis#033[00m
Oct  2 04:41:53 np0005465604 NetworkManager[45129]: <info>  [1759394513.3898] device (tap52a482d6-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:41:53 np0005465604 NetworkManager[45129]: <info>  [1759394513.3909] device (tap52a482d6-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.391 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 075eee4e-656c-44de-9ee6-7589c7382251#033[00m
Oct  2 04:41:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:53Z|01046|binding|INFO|Setting lport 52a482d6-fbe9-4583-af85-5407a6796976 ovn-installed in OVS
Oct  2 04:41:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:53Z|01047|binding|INFO|Setting lport 52a482d6-fbe9-4583-af85-5407a6796976 up in Southbound
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.411 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a36d1949-f9d6-4309-95ff-7d2cb75bd23b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.412 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap075eee4e-61 in ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.415 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap075eee4e-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.415 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3edecf61-8e01-4af9-be56-3ace33ed7d1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.416 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8673aea8-31e1-43ea-84e2-84b030805c9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 systemd-machined[214636]: New machine qemu-128-instance-00000066.
Oct  2 04:41:53 np0005465604 systemd[1]: Started Virtual Machine qemu-128-instance-00000066.
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.435 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[be925358-868e-4515-877c-8cad76e1f3d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.464 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9172b3-7301-457b-a604-ae7799bbddc3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.497 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[36a88f5d-4a1f-4ad5-84c6-dd626893d17e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.501 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[965a81dc-8a93-4cab-887a-b6625fc02a65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 NetworkManager[45129]: <info>  [1759394513.5028] manager: (tap075eee4e-60): new Veth device (/org/freedesktop/NetworkManager/Devices/412)
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.534 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[41a1c5f2-dc7b-4b81-b388-8fd74abb44b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.537 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc865a2-afea-4e21-94d6-5609a6ad68b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 NetworkManager[45129]: <info>  [1759394513.5614] device (tap075eee4e-60): carrier: link connected
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.568 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a5a0b7-791b-4fb3-8782-26e1ce1be234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.585 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1d72b586-4275-47a1-ba72-4554d396999c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap075eee4e-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:66:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539216, 'reachable_time': 43381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361926, 'error': None, 'target': 'ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.602 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[84436875-2038-407c-b80c-bec8b70d0169]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:6651'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539216, 'tstamp': 539216}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361927, 'error': None, 'target': 'ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.620 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[016c64af-843c-4eee-9ed7-9911d4e2453e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap075eee4e-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:66:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539216, 'reachable_time': 43381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361928, 'error': None, 'target': 'ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.653 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb080e60-b11a-4eaa-b176-b3a0d0da9b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.714 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c193030-73d1-448a-821b-761cf22b42f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.716 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap075eee4e-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.716 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.717 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap075eee4e-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:53 np0005465604 NetworkManager[45129]: <info>  [1759394513.7194] manager: (tap075eee4e-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Oct  2 04:41:53 np0005465604 kernel: tap075eee4e-60: entered promiscuous mode
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.722 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap075eee4e-60, col_values=(('external_ids', {'iface-id': '52c223e8-68eb-4ed0-921b-f01154a7d913'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:53Z|01048|binding|INFO|Releasing lport 52c223e8-68eb-4ed0-921b-f01154a7d913 from this chassis (sb_readonly=0)
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:53 np0005465604 nova_compute[260603]: 2025-10-02 08:41:53.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.740 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/075eee4e-656c-44de-9ee6-7589c7382251.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/075eee4e-656c-44de-9ee6-7589c7382251.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.740 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf5c9f8-1b16-4f20-ad24-18b3b7f80e4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.741 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-075eee4e-656c-44de-9ee6-7589c7382251
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/075eee4e-656c-44de-9ee6-7589c7382251.pid.haproxy
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 075eee4e-656c-44de-9ee6-7589c7382251
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:41:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:53.742 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251', 'env', 'PROCESS_TAG=haproxy-075eee4e-656c-44de-9ee6-7589c7382251', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/075eee4e-656c-44de-9ee6-7589c7382251.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:41:54 np0005465604 podman[361961]: 2025-10-02 08:41:54.083353746 +0000 UTC m=+0.051517346 container create c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:41:54 np0005465604 systemd[1]: Started libpod-conmon-c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd.scope.
Oct  2 04:41:54 np0005465604 podman[361961]: 2025-10-02 08:41:54.056725991 +0000 UTC m=+0.024889641 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:41:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:41:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f2cd3807a4bd9a2a9995ba1fc8cf28a130437de2342b17d3b71902d85482d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:41:54 np0005465604 podman[361961]: 2025-10-02 08:41:54.179999148 +0000 UTC m=+0.148162758 container init c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:41:54 np0005465604 podman[361961]: 2025-10-02 08:41:54.190158403 +0000 UTC m=+0.158321983 container start c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 04:41:54 np0005465604 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [NOTICE]   (361980) : New worker (361982) forked
Oct  2 04:41:54 np0005465604 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [NOTICE]   (361980) : Loading success.
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.709 2 DEBUG nova.compute.manager [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.710 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.711 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.711 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.712 2 DEBUG nova.compute.manager [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Processing event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.712 2 DEBUG nova.compute.manager [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.713 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.714 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.714 2 DEBUG oslo_concurrency.lockutils [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.715 2 DEBUG nova.compute.manager [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] No waiting events found dispatching network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:41:54 np0005465604 nova_compute[260603]: 2025-10-02 08:41:54.715 2 WARNING nova.compute.manager [req-69cd24f4-b916-4a92-9d7e-19c8f5b22676 req-38e29dd8-dd9f-462c-996b-397dcde0bedf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received unexpected event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:41:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.321 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.322 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394515.32125, 5748b14f-5bbb-46f3-b563-062d530e5abd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.323 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] VM Started (Lifecycle Event)#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.325 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.330 2 INFO nova.virt.libvirt.driver [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Instance spawned successfully.#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.330 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.346 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.352 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.355 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.355 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.356 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.356 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.357 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.357 2 DEBUG nova.virt.libvirt.driver [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.394 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.394 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394515.3215666, 5748b14f-5bbb-46f3-b563-062d530e5abd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.395 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.427 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.431 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394515.3245556, 5748b14f-5bbb-46f3-b563-062d530e5abd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.431 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.434 2 INFO nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Took 8.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.435 2 DEBUG nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.459 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.461 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.483 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.495 2 INFO nova.compute.manager [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Took 9.29 seconds to build instance.#033[00m
Oct  2 04:41:55 np0005465604 nova_compute[260603]: 2025-10-02 08:41:55.509 2 DEBUG oslo_concurrency.lockutils [None req-b51c4ed7-3240-456c-abbc-9c4b711d92fc ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:41:57 np0005465604 nova_compute[260603]: 2025-10-02 08:41:57.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:41:57.078 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:41:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Oct  2 04:41:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:41:57 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:57Z|01049|binding|INFO|Releasing lport 52c223e8-68eb-4ed0-921b-f01154a7d913 from this chassis (sb_readonly=0)
Oct  2 04:41:57 np0005465604 ovn_controller[152344]: 2025-10-02T08:41:57Z|01050|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct  2 04:41:57 np0005465604 nova_compute[260603]: 2025-10-02 08:41:57.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:41:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:41:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:41:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:41:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:41:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:41:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Oct  2 04:41:59 np0005465604 nova_compute[260603]: 2025-10-02 08:41:59.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:41:59 np0005465604 nova_compute[260603]: 2025-10-02 08:41:59.950 2 DEBUG nova.compute.manager [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-changed-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:41:59 np0005465604 nova_compute[260603]: 2025-10-02 08:41:59.951 2 DEBUG nova.compute.manager [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing instance network info cache due to event network-changed-52a482d6-fbe9-4583-af85-5407a6796976. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:41:59 np0005465604 nova_compute[260603]: 2025-10-02 08:41:59.951 2 DEBUG oslo_concurrency.lockutils [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:41:59 np0005465604 nova_compute[260603]: 2025-10-02 08:41:59.952 2 DEBUG oslo_concurrency.lockutils [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:41:59 np0005465604 nova_compute[260603]: 2025-10-02 08:41:59.952 2 DEBUG nova.network.neutron [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:42:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct  2 04:42:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct  2 04:42:00 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct  2 04:42:01 np0005465604 podman[362035]: 2025-10-02 08:42:01.060020741 +0000 UTC m=+0.092945370 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 04:42:01 np0005465604 podman[362034]: 2025-10-02 08:42:01.116963034 +0000 UTC m=+0.164283718 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 04:42:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 200 op/s
Oct  2 04:42:01 np0005465604 nova_compute[260603]: 2025-10-02 08:42:01.927 2 DEBUG nova.network.neutron [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updated VIF entry in instance network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:42:01 np0005465604 nova_compute[260603]: 2025-10-02 08:42:01.928 2 DEBUG nova.network.neutron [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updating instance_info_cache with network_info: [{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:42:01 np0005465604 nova_compute[260603]: 2025-10-02 08:42:01.949 2 DEBUG oslo_concurrency.lockutils [req-3efea428-9c85-4613-988a-d5b4ae5e5932 req-d3a16d3c-be9c-4105-9649-286a1f792e3b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:42:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct  2 04:42:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct  2 04:42:01 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct  2 04:42:02 np0005465604 nova_compute[260603]: 2025-10-02 08:42:02.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:42:03 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c373f736-a2aa-45d5-a211-3c162e63eec9 does not exist
Oct  2 04:42:03 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e1f499f3-2625-43c1-95e0-1370887c5f35 does not exist
Oct  2 04:42:03 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f0214a56-9d60-422f-b506-471684895b2b does not exist
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:42:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 161 op/s
Oct  2 04:42:03 np0005465604 podman[362353]: 2025-10-02 08:42:03.940194146 +0000 UTC m=+0.037083200 container create 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:42:03 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:42:03 np0005465604 systemd[1]: Started libpod-conmon-4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70.scope.
Oct  2 04:42:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:42:04 np0005465604 podman[362353]: 2025-10-02 08:42:03.923444887 +0000 UTC m=+0.020333971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:42:04 np0005465604 podman[362353]: 2025-10-02 08:42:04.0297791 +0000 UTC m=+0.126668164 container init 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:42:04 np0005465604 podman[362353]: 2025-10-02 08:42:04.035843168 +0000 UTC m=+0.132732212 container start 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 04:42:04 np0005465604 podman[362353]: 2025-10-02 08:42:04.038635814 +0000 UTC m=+0.135524868 container attach 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 04:42:04 np0005465604 romantic_rhodes[362370]: 167 167
Oct  2 04:42:04 np0005465604 systemd[1]: libpod-4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70.scope: Deactivated successfully.
Oct  2 04:42:04 np0005465604 podman[362353]: 2025-10-02 08:42:04.042868655 +0000 UTC m=+0.139757709 container died 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:42:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay-86bb1e76d9f1580399b0a3856f14a43015d3838399bd448d70993069a77ef382-merged.mount: Deactivated successfully.
Oct  2 04:42:04 np0005465604 podman[362353]: 2025-10-02 08:42:04.074607528 +0000 UTC m=+0.171496582 container remove 4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rhodes, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:42:04 np0005465604 systemd[1]: libpod-conmon-4a85ad20c42e11ce3c090738b18ce82627325a0f593a0efc10ef2587a58fab70.scope: Deactivated successfully.
Oct  2 04:42:04 np0005465604 podman[362395]: 2025-10-02 08:42:04.282063612 +0000 UTC m=+0.037165872 container create d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 04:42:04 np0005465604 nova_compute[260603]: 2025-10-02 08:42:04.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:42:04 np0005465604 systemd[1]: Started libpod-conmon-d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e.scope.
Oct  2 04:42:04 np0005465604 podman[362395]: 2025-10-02 08:42:04.26647515 +0000 UTC m=+0.021577430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:42:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:42:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:04 np0005465604 podman[362395]: 2025-10-02 08:42:04.438315461 +0000 UTC m=+0.193417831 container init d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct  2 04:42:04 np0005465604 podman[362395]: 2025-10-02 08:42:04.45055269 +0000 UTC m=+0.205654980 container start d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 04:42:04 np0005465604 podman[362395]: 2025-10-02 08:42:04.45444358 +0000 UTC m=+0.209545880 container attach d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:42:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 167 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 24 KiB/s wr, 147 op/s
Oct  2 04:42:05 np0005465604 frosty_raman[362413]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:42:05 np0005465604 frosty_raman[362413]: --> relative data size: 1.0
Oct  2 04:42:05 np0005465604 frosty_raman[362413]: --> All data devices are unavailable
Oct  2 04:42:05 np0005465604 nova_compute[260603]: 2025-10-02 08:42:05.522 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394510.520441, 47497cd9-93be-482f-b4a8-4529910a9055 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:42:05 np0005465604 nova_compute[260603]: 2025-10-02 08:42:05.524 2 INFO nova.compute.manager [-] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] VM Stopped (Lifecycle Event)
Oct  2 04:42:05 np0005465604 systemd[1]: libpod-d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e.scope: Deactivated successfully.
Oct  2 04:42:05 np0005465604 podman[362395]: 2025-10-02 08:42:05.540851561 +0000 UTC m=+1.295953821 container died d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:42:05 np0005465604 systemd[1]: libpod-d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e.scope: Consumed 1.016s CPU time.
Oct  2 04:42:05 np0005465604 nova_compute[260603]: 2025-10-02 08:42:05.547 2 DEBUG nova.compute.manager [None req-25780ba9-6419-4e75-87a5-6f0138ded05e - - - - - -] [instance: 47497cd9-93be-482f-b4a8-4529910a9055] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:42:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-be432532ccd70e6e78f32d1810e357f6e46c8af5f9cb599eff286a35a8f987b7-merged.mount: Deactivated successfully.
Oct  2 04:42:05 np0005465604 podman[362395]: 2025-10-02 08:42:05.735311922 +0000 UTC m=+1.490414182 container remove d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:42:05 np0005465604 systemd[1]: libpod-conmon-d481603ae913a170b81ed26a15b6b50bb90abd70de5e84215ca802f8318f129e.scope: Deactivated successfully.
Oct  2 04:42:06 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct  2 04:42:06 np0005465604 nova_compute[260603]: 2025-10-02 08:42:06.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:42:06 np0005465604 podman[362595]: 2025-10-02 08:42:06.482015624 +0000 UTC m=+0.050024340 container create c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:42:06 np0005465604 systemd[1]: Started libpod-conmon-c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5.scope.
Oct  2 04:42:06 np0005465604 podman[362595]: 2025-10-02 08:42:06.461667835 +0000 UTC m=+0.029676611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:42:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:42:06 np0005465604 podman[362595]: 2025-10-02 08:42:06.576131258 +0000 UTC m=+0.144140014 container init c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:42:06 np0005465604 podman[362595]: 2025-10-02 08:42:06.588052218 +0000 UTC m=+0.156060984 container start c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:42:06 np0005465604 podman[362595]: 2025-10-02 08:42:06.592173065 +0000 UTC m=+0.160181841 container attach c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 04:42:06 np0005465604 confident_swartz[362612]: 167 167
Oct  2 04:42:06 np0005465604 systemd[1]: libpod-c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5.scope: Deactivated successfully.
Oct  2 04:42:06 np0005465604 podman[362595]: 2025-10-02 08:42:06.59748268 +0000 UTC m=+0.165491416 container died c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:42:06 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4a501cd40a1e1122f1bfcf0af03fcc18db94c039ed7af769717b51a31615d828-merged.mount: Deactivated successfully.
Oct  2 04:42:06 np0005465604 podman[362595]: 2025-10-02 08:42:06.689305043 +0000 UTC m=+0.257313809 container remove c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swartz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:42:06 np0005465604 systemd[1]: libpod-conmon-c228e65e9d14a746ce767f7ff0c992f9c542b27e8fe46f411eba575d7651fcd5.scope: Deactivated successfully.
Oct  2 04:42:06 np0005465604 podman[362637]: 2025-10-02 08:42:06.921673988 +0000 UTC m=+0.053331002 container create 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct  2 04:42:06 np0005465604 systemd[1]: Started libpod-conmon-543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2.scope.
Oct  2 04:42:06 np0005465604 podman[362637]: 2025-10-02 08:42:06.899915155 +0000 UTC m=+0.031572169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:42:07 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:42:07 np0005465604 nova_compute[260603]: 2025-10-02 08:42:07.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:42:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f3a90ba197c24891ca926a740434ad044c152aedfe656782ce7b4fba1a4d08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f3a90ba197c24891ca926a740434ad044c152aedfe656782ce7b4fba1a4d08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f3a90ba197c24891ca926a740434ad044c152aedfe656782ce7b4fba1a4d08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f3a90ba197c24891ca926a740434ad044c152aedfe656782ce7b4fba1a4d08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:07 np0005465604 podman[362637]: 2025-10-02 08:42:07.059358592 +0000 UTC m=+0.191015666 container init 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:42:07 np0005465604 podman[362637]: 2025-10-02 08:42:07.073305454 +0000 UTC m=+0.204962468 container start 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:42:07 np0005465604 podman[362637]: 2025-10-02 08:42:07.077120152 +0000 UTC m=+0.208777166 container attach 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:42:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:07Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:a7:87 10.100.0.3
Oct  2 04:42:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:07Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:a7:87 10.100.0.3
Oct  2 04:42:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 173 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 592 KiB/s wr, 71 op/s
Oct  2 04:42:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct  2 04:42:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct  2 04:42:07 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]: {
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:    "0": [
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:        {
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "devices": [
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "/dev/loop3"
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            ],
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_name": "ceph_lv0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_size": "21470642176",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "name": "ceph_lv0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "tags": {
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.cluster_name": "ceph",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.crush_device_class": "",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.encrypted": "0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.osd_id": "0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.type": "block",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.vdo": "0"
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            },
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "type": "block",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "vg_name": "ceph_vg0"
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:        }
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:    ],
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:    "1": [
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:        {
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "devices": [
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "/dev/loop4"
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            ],
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_name": "ceph_lv1",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_size": "21470642176",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "name": "ceph_lv1",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "tags": {
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.cluster_name": "ceph",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.crush_device_class": "",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.encrypted": "0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.osd_id": "1",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.type": "block",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.vdo": "0"
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            },
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "type": "block",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "vg_name": "ceph_vg1"
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:        }
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:    ],
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:    "2": [
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:        {
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "devices": [
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "/dev/loop5"
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            ],
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_name": "ceph_lv2",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_size": "21470642176",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "name": "ceph_lv2",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "tags": {
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.cluster_name": "ceph",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.crush_device_class": "",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.encrypted": "0",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.osd_id": "2",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.type": "block",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:                "ceph.vdo": "0"
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            },
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "type": "block",
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:            "vg_name": "ceph_vg2"
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:        }
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]:    ]
Oct  2 04:42:07 np0005465604 inspiring_newton[362654]: }
Oct  2 04:42:07 np0005465604 systemd[1]: libpod-543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2.scope: Deactivated successfully.
Oct  2 04:42:07 np0005465604 podman[362663]: 2025-10-02 08:42:07.925835412 +0000 UTC m=+0.022929990 container died 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:42:07 np0005465604 systemd[1]: var-lib-containers-storage-overlay-90f3a90ba197c24891ca926a740434ad044c152aedfe656782ce7b4fba1a4d08-merged.mount: Deactivated successfully.
Oct  2 04:42:07 np0005465604 podman[362663]: 2025-10-02 08:42:07.982163197 +0000 UTC m=+0.079257725 container remove 543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_newton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 04:42:07 np0005465604 systemd[1]: libpod-conmon-543809a1edfc31c444a5aa5e5f751a8933026152feb586ac6f58338abe7a4bc2.scope: Deactivated successfully.
Oct  2 04:42:08 np0005465604 podman[362816]: 2025-10-02 08:42:08.664027511 +0000 UTC m=+0.045284684 container create 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:42:08 np0005465604 systemd[1]: Started libpod-conmon-0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92.scope.
Oct  2 04:42:08 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:42:08 np0005465604 podman[362816]: 2025-10-02 08:42:08.644973971 +0000 UTC m=+0.026231124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:42:08 np0005465604 podman[362816]: 2025-10-02 08:42:08.756899687 +0000 UTC m=+0.138156870 container init 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:42:08 np0005465604 podman[362816]: 2025-10-02 08:42:08.762434348 +0000 UTC m=+0.143691481 container start 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:42:08 np0005465604 podman[362816]: 2025-10-02 08:42:08.765802312 +0000 UTC m=+0.147059535 container attach 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:42:08 np0005465604 relaxed_jennings[362832]: 167 167
Oct  2 04:42:08 np0005465604 systemd[1]: libpod-0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92.scope: Deactivated successfully.
Oct  2 04:42:08 np0005465604 podman[362816]: 2025-10-02 08:42:08.770722325 +0000 UTC m=+0.151979488 container died 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 04:42:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-72d148e6df89e9d06706a33903b7c95f5cf1e7c6f4c8a8f0632e27a4c6c6c4ab-merged.mount: Deactivated successfully.
Oct  2 04:42:08 np0005465604 podman[362816]: 2025-10-02 08:42:08.82258081 +0000 UTC m=+0.203837953 container remove 0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 04:42:08 np0005465604 systemd[1]: libpod-conmon-0860e6f170350473d91e639c78f38cca3efdb9a30bae81bf05ef52f45d056b92.scope: Deactivated successfully.
Oct  2 04:42:09 np0005465604 podman[362858]: 2025-10-02 08:42:09.016176655 +0000 UTC m=+0.064223539 container create a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  2 04:42:09 np0005465604 systemd[1]: Started libpod-conmon-a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f.scope.
Oct  2 04:42:09 np0005465604 podman[362858]: 2025-10-02 08:42:08.982567024 +0000 UTC m=+0.030613958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:42:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:42:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526887cbf4753ccea0ac0e8ebdc16741481e605c2caf7ec916b3a4ed25d6f583/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526887cbf4753ccea0ac0e8ebdc16741481e605c2caf7ec916b3a4ed25d6f583/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526887cbf4753ccea0ac0e8ebdc16741481e605c2caf7ec916b3a4ed25d6f583/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526887cbf4753ccea0ac0e8ebdc16741481e605c2caf7ec916b3a4ed25d6f583/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:09 np0005465604 podman[362858]: 2025-10-02 08:42:09.127933866 +0000 UTC m=+0.175980800 container init a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:42:09 np0005465604 podman[362858]: 2025-10-02 08:42:09.136855302 +0000 UTC m=+0.184902186 container start a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:42:09 np0005465604 podman[362858]: 2025-10-02 08:42:09.141272389 +0000 UTC m=+0.189319333 container attach a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 04:42:09 np0005465604 podman[362872]: 2025-10-02 08:42:09.164481378 +0000 UTC m=+0.099472202 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:42:09 np0005465604 podman[362875]: 2025-10-02 08:42:09.175609302 +0000 UTC m=+0.095592191 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:42:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 192 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 3.1 MiB/s wr, 114 op/s
Oct  2 04:42:09 np0005465604 nova_compute[260603]: 2025-10-02 08:42:09.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:10 np0005465604 nova_compute[260603]: 2025-10-02 08:42:10.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:10 np0005465604 festive_albattani[362881]: {
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "osd_id": 2,
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "type": "bluestore"
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:    },
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "osd_id": 1,
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "type": "bluestore"
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:    },
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "osd_id": 0,
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:        "type": "bluestore"
Oct  2 04:42:10 np0005465604 festive_albattani[362881]:    }
Oct  2 04:42:10 np0005465604 festive_albattani[362881]: }
Oct  2 04:42:10 np0005465604 systemd[1]: libpod-a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f.scope: Deactivated successfully.
Oct  2 04:42:10 np0005465604 podman[362858]: 2025-10-02 08:42:10.285635534 +0000 UTC m=+1.333682388 container died a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 04:42:10 np0005465604 systemd[1]: libpod-a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f.scope: Consumed 1.145s CPU time.
Oct  2 04:42:10 np0005465604 systemd[1]: var-lib-containers-storage-overlay-526887cbf4753ccea0ac0e8ebdc16741481e605c2caf7ec916b3a4ed25d6f583-merged.mount: Deactivated successfully.
Oct  2 04:42:10 np0005465604 podman[362858]: 2025-10-02 08:42:10.355732405 +0000 UTC m=+1.403779279 container remove a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_albattani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 04:42:10 np0005465604 systemd[1]: libpod-conmon-a2a4595aa779ac4fd140b9dab2293e879841e7b8504b5330a6015b69f71c3e2f.scope: Deactivated successfully.
Oct  2 04:42:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:42:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:42:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:42:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:42:10 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9758bd2a-b082-4996-a014-86e00a20cede does not exist
Oct  2 04:42:10 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev fb65e4a9-0ab5-4a15-a50b-d5e85004f46c does not exist
Oct  2 04:42:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 192 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 2.6 MiB/s wr, 98 op/s
Oct  2 04:42:11 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:42:11 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:42:12 np0005465604 nova_compute[260603]: 2025-10-02 08:42:12.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Oct  2 04:42:14 np0005465604 nova_compute[260603]: 2025-10-02 08:42:14.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:14 np0005465604 nova_compute[260603]: 2025-10-02 08:42:14.484 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:14 np0005465604 nova_compute[260603]: 2025-10-02 08:42:14.484 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:14 np0005465604 nova_compute[260603]: 2025-10-02 08:42:14.500 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:42:14 np0005465604 nova_compute[260603]: 2025-10-02 08:42:14.572 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:14 np0005465604 nova_compute[260603]: 2025-10-02 08:42:14.572 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:14 np0005465604 nova_compute[260603]: 2025-10-02 08:42:14.582 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:42:14 np0005465604 nova_compute[260603]: 2025-10-02 08:42:14.582 2 INFO nova.compute.claims [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:42:14 np0005465604 nova_compute[260603]: 2025-10-02 08:42:14.745 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:42:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3995590944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.233 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.244 2 DEBUG nova.compute.provider_tree [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.265 2 DEBUG nova.scheduler.client.report [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.296 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.297 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:42:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.358 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.359 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.387 2 INFO nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.409 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.495 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.497 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.498 2 INFO nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Creating image(s)#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.675 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.701 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.724 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.727 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.771 2 DEBUG nova.policy [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7fa065d842a644b0891a59bea27e82dd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4f0bc0e400b74a9788079e4f67262fae', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.809 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
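The `qemu-img info` call above is deliberately wrapped in `oslo_concurrency.prlimit`, capping address space at 1 GiB and CPU time at 30s before qemu-img parses an image header (a defense against malformed or hostile images). A sketch that reproduces the logged argv; `prlimit_cmd` is a hypothetical helper:

```python
def prlimit_cmd(path, mem_bytes=1073741824, cpu_secs=30):
    # Reproduce the argv logged above: bound address space (--as) and CPU
    # seconds (--cpu) around qemu-img, force C locale for parseable output,
    # and request JSON so the caller can read virtual-size/format fields.
    return ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            f"--as={mem_bytes}", f"--cpu={cpu_secs}", "--",
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json"]
```

`--force-share` lets the probe read an image that another process may have open, which matters here because the base image in `_base/` can be in use by concurrent builds.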
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.811 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.813 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:15 np0005465604 nova_compute[260603]: 2025-10-02 08:42:15.813 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:16 np0005465604 nova_compute[260603]: 2025-10-02 08:42:16.078 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:42:16 np0005465604 nova_compute[260603]: 2025-10-02 08:42:16.082 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e5370318-fc99-4c4a-9149-54deca5d783e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:16 np0005465604 nova_compute[260603]: 2025-10-02 08:42:16.659 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Successfully created port: e1f81d45-cb4b-4514-b606-3b044d09fc3f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:42:17 np0005465604 nova_compute[260603]: 2025-10-02 08:42:17.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:17 np0005465604 nova_compute[260603]: 2025-10-02 08:42:17.248 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Successfully updated port: e1f81d45-cb4b-4514-b606-3b044d09fc3f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:42:17 np0005465604 nova_compute[260603]: 2025-10-02 08:42:17.266 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:42:17 np0005465604 nova_compute[260603]: 2025-10-02 08:42:17.267 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquired lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:42:17 np0005465604 nova_compute[260603]: 2025-10-02 08:42:17.267 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:42:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct  2 04:42:17 np0005465604 nova_compute[260603]: 2025-10-02 08:42:17.350 2 DEBUG nova.compute.manager [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-changed-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:17 np0005465604 nova_compute[260603]: 2025-10-02 08:42:17.351 2 DEBUG nova.compute.manager [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Refreshing instance network info cache due to event network-changed-e1f81d45-cb4b-4514-b606-3b044d09fc3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:42:17 np0005465604 nova_compute[260603]: 2025-10-02 08:42:17.351 2 DEBUG oslo_concurrency.lockutils [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:42:17 np0005465604 nova_compute[260603]: 2025-10-02 08:42:17.409 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:42:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:18 np0005465604 nova_compute[260603]: 2025-10-02 08:42:18.163 2 DEBUG nova.network.neutron [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Updating instance_info_cache with network_info: [{"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:42:18 np0005465604 nova_compute[260603]: 2025-10-02 08:42:18.182 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Releasing lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:42:18 np0005465604 nova_compute[260603]: 2025-10-02 08:42:18.183 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Instance network_info: |[{"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
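The `network_info` structure logged above is a list of VIF dicts, each carrying its MAC, OVS plumbing details, and a nested network/subnets/ips tree. A sketch of walking that tree, using an abbreviated copy of the single VIF from the log (fields trimmed for brevity; the traversal logic is illustrative, not Nova's `NetworkInfo` model):

```python
import json

# Abbreviated copy of the network_info entry logged above (one OVS VIF).
network_info = json.loads('''[{"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f",
  "address": "fa:16:3e:21:bd:92",
  "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0",
    "subnets": [{"cidr": "10.100.0.0/28",
      "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4}]}]},
  "type": "ovs", "devname": "tape1f81d45-cb", "active": false}]''')

def fixed_ips(nw_info):
    # Collect every fixed IP across all VIFs and all subnets.
    return [ip["address"]
            for vif in nw_info
            for subnet in vif["network"]["subnets"]
            for ip in subnet["ips"]
            if ip["type"] == "fixed"]
```

Note `"active": false` in the log: the port is bound but OVN has not yet reported it up, which is why Nova will later wait for a `network-vif-plugged` event before declaring the instance ACTIVE.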
Oct  2 04:42:18 np0005465604 nova_compute[260603]: 2025-10-02 08:42:18.184 2 DEBUG oslo_concurrency.lockutils [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:42:18 np0005465604 nova_compute[260603]: 2025-10-02 08:42:18.184 2 DEBUG nova.network.neutron [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Refreshing network info cache for port e1f81d45-cb4b-4514-b606-3b044d09fc3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:42:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Oct  2 04:42:19 np0005465604 nova_compute[260603]: 2025-10-02 08:42:19.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:19 np0005465604 nova_compute[260603]: 2025-10-02 08:42:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:42:19 np0005465604 nova_compute[260603]: 2025-10-02 08:42:19.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:42:19 np0005465604 nova_compute[260603]: 2025-10-02 08:42:19.635 2 DEBUG nova.network.neutron [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Updated VIF entry in instance network info cache for port e1f81d45-cb4b-4514-b606-3b044d09fc3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:42:19 np0005465604 nova_compute[260603]: 2025-10-02 08:42:19.636 2 DEBUG nova.network.neutron [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Updating instance_info_cache with network_info: [{"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:42:19 np0005465604 nova_compute[260603]: 2025-10-02 08:42:19.664 2 DEBUG oslo_concurrency.lockutils [req-7f034fe6-a415-4bcf-9edf-5d21e0b47aeb req-4d9536ad-2bb0-4a80-893e-3a6ef2294d66 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-e5370318-fc99-4c4a-9149-54deca5d783e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:42:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 200 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 90 KiB/s wr, 18 op/s
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.570005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394541570091, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1828, "num_deletes": 254, "total_data_size": 2845007, "memory_usage": 2879784, "flush_reason": "Manual Compaction"}
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394541707706, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2771815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38313, "largest_seqno": 40140, "table_properties": {"data_size": 2763413, "index_size": 5088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17839, "raw_average_key_size": 20, "raw_value_size": 2746464, "raw_average_value_size": 3149, "num_data_blocks": 226, "num_entries": 872, "num_filter_entries": 872, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394368, "oldest_key_time": 1759394368, "file_creation_time": 1759394541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 137815 microseconds, and 11452 cpu microseconds.
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:42:21 np0005465604 nova_compute[260603]: 2025-10-02 08:42:21.793 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 e5370318-fc99-4c4a-9149-54deca5d783e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.707829) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2771815 bytes OK
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.707857) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.798941) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.798993) EVENT_LOG_v1 {"time_micros": 1759394541798982, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.799020) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2837117, prev total WAL file size 2837117, number of live WAL files 2.
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.800422) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2706KB)], [86(8267KB)]
Oct  2 04:42:21 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394541800513, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11237535, "oldest_snapshot_seqno": -1}
Oct  2 04:42:21 np0005465604 nova_compute[260603]: 2025-10-02 08:42:21.909 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] resizing rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
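The sequence above is the RBD build path: import the cached base image into the `vms` pool (5.711s for this image), then grow it to the flavor's root disk size, here 1073741824 bytes, i.e. 1 GiB. The resize decision can be sketched as follows; `needs_resize` is a hypothetical helper capturing the rule that Nova grows but never shrinks the imported image:

```python
def needs_resize(base_size_bytes, flavor_root_gb):
    # Return the target size in bytes if the flavor's root disk is larger
    # than the imported base image, else None (images are never shrunk).
    target = flavor_root_gb * 1024 ** 3
    return target if target > base_size_bytes else None
```

Because RBD images are thin-provisioned, the resize only updates metadata; no data is copied, which is why the resize line is not followed by another multi-second subprocess.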
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3861741082' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3861741082' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6443 keys, 9609636 bytes, temperature: kUnknown
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394542103654, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9609636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9565185, "index_size": 27253, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 163774, "raw_average_key_size": 25, "raw_value_size": 9448436, "raw_average_value_size": 1466, "num_data_blocks": 1097, "num_entries": 6443, "num_filter_entries": 6443, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.327 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.328 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.329 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.329 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.330 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.332 2 INFO nova.compute.manager [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Terminating instance#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.335 2 DEBUG nova.compute.manager [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.397 2 DEBUG nova.compute.manager [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-changed-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.398 2 DEBUG nova.compute.manager [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing instance network info cache due to event network-changed-52a482d6-fbe9-4583-af85-5407a6796976. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.399 2 DEBUG oslo_concurrency.lockutils [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.399 2 DEBUG oslo_concurrency.lockutils [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:42:22 np0005465604 nova_compute[260603]: 2025-10-02 08:42:22.400 2 DEBUG nova.network.neutron [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Refreshing network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.104181) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9609636 bytes
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.498812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 37.0 rd, 31.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.1 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 6967, records dropped: 524 output_compression: NoCompression
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.498839) EVENT_LOG_v1 {"time_micros": 1759394542498829, "job": 50, "event": "compaction_finished", "compaction_time_micros": 303442, "compaction_time_cpu_micros": 50314, "output_level": 6, "num_output_files": 1, "total_output_size": 9609636, "num_input_records": 6967, "num_output_records": 6443, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394542499343, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394542500653, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:21.800254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.500707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.500712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.500714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.500715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:42:22.500718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:42:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Oct  2 04:42:24 np0005465604 kernel: tap52a482d6-fb (unregistering): left promiscuous mode
Oct  2 04:42:24 np0005465604 NetworkManager[45129]: <info>  [1759394544.2045] device (tap52a482d6-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:42:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:24Z|01051|binding|INFO|Releasing lport 52a482d6-fbe9-4583-af85-5407a6796976 from this chassis (sb_readonly=0)
Oct  2 04:42:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:24Z|01052|binding|INFO|Setting lport 52a482d6-fbe9-4583-af85-5407a6796976 down in Southbound
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:24Z|01053|binding|INFO|Removing iface tap52a482d6-fb ovn-installed in OVS
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:24.228 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:a7:87 10.100.0.3'], port_security=['fa:16:3e:2c:a7:87 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5748b14f-5bbb-46f3-b563-062d530e5abd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-075eee4e-656c-44de-9ee6-7589c7382251', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28277d49c8814c0691f5e1dce22bf215', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13b54cc2-9090-4249-b147-09fa8e935774 94805a92-fc8b-4089-aba3-84e7867eb02d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e7b4dd6-37a8-4a1b-88c6-3100272d2b7a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=52a482d6-fbe9-4583-af85-5407a6796976) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:42:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:24.230 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 52a482d6-fbe9-4583-af85-5407a6796976 in datapath 075eee4e-656c-44de-9ee6-7589c7382251 unbound from our chassis#033[00m
Oct  2 04:42:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:24.233 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 075eee4e-656c-44de-9ee6-7589c7382251, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:42:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:24.234 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df0aec4a-d0d4-4bb9-8c2d-035b1b1c7103]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:24.235 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251 namespace which is not needed anymore#033[00m
Oct  2 04:42:24 np0005465604 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct  2 04:42:24 np0005465604 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000066.scope: Consumed 14.648s CPU time.
Oct  2 04:42:24 np0005465604 systemd-machined[214636]: Machine qemu-128-instance-00000066 terminated.
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.377 2 INFO nova.virt.libvirt.driver [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Instance destroyed successfully.#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.378 2 DEBUG nova.objects.instance [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lazy-loading 'resources' on Instance uuid 5748b14f-5bbb-46f3-b563-062d530e5abd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.416 2 DEBUG nova.virt.libvirt.vif [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:41:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-263070957-access_point-1630801405',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-263070957-acc',id=102,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBHUVZCqw3VWkKw3WswXpGtJf1pEavua2ZnmSgdDq3pc4sGnLceByNqUw0H30LytwdI+cAke8ed31ZxradSBqQ5vuq98lJn2fkhaaMuY6USKAqJ3+Ezi42AVRSerZijcMA==',key_name='tempest-TestSecurityGroupsBasicOps-1336991398',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:41:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28277d49c8814c0691f5e1dce22bf215',ramdisk_id='',reservation_id='r-740jcdvb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-263070957',owner_user_name='tempest-TestSecurityGroupsBasicOps-263070957-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:41:55Z,user_data=None,user_id='ffb3b2bafd1c40058c2669d61c40f3f9',uuid=5748b14f-5bbb-46f3-b563-062d530e5abd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.416 2 DEBUG nova.network.os_vif_util [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converting VIF {"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.417 2 DEBUG nova.network.os_vif_util [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.417 2 DEBUG os_vif [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52a482d6-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.423 2 INFO os_vif [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:a7:87,bridge_name='br-int',has_traffic_filtering=True,id=52a482d6-fbe9-4583-af85-5407a6796976,network=Network(075eee4e-656c-44de-9ee6-7589c7382251),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52a482d6-fb')#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.541 2 DEBUG nova.compute.manager [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-unplugged-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.541 2 DEBUG oslo_concurrency.lockutils [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.541 2 DEBUG oslo_concurrency.lockutils [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.542 2 DEBUG oslo_concurrency.lockutils [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.542 2 DEBUG nova.compute.manager [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] No waiting events found dispatching network-vif-unplugged-52a482d6-fbe9-4583-af85-5407a6796976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.542 2 DEBUG nova.compute.manager [req-310167db-9579-4bdf-a062-ddd2c2382a72 req-b97849d2-a737-4637-9ab3-8949c7f61832 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-unplugged-52a482d6-fbe9-4583-af85-5407a6796976 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.612 2 DEBUG nova.objects.instance [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lazy-loading 'migration_context' on Instance uuid e5370318-fc99-4c4a-9149-54deca5d783e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:42:24 np0005465604 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [NOTICE]   (361980) : haproxy version is 2.8.14-c23fe91
Oct  2 04:42:24 np0005465604 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [NOTICE]   (361980) : path to executable is /usr/sbin/haproxy
Oct  2 04:42:24 np0005465604 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [WARNING]  (361980) : Exiting Master process...
Oct  2 04:42:24 np0005465604 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [WARNING]  (361980) : Exiting Master process...
Oct  2 04:42:24 np0005465604 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [ALERT]    (361980) : Current worker (361982) exited with code 143 (Terminated)
Oct  2 04:42:24 np0005465604 neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251[361976]: [WARNING]  (361980) : All workers exited. Exiting... (0)
Oct  2 04:42:24 np0005465604 systemd[1]: libpod-c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd.scope: Deactivated successfully.
Oct  2 04:42:24 np0005465604 podman[363204]: 2025-10-02 08:42:24.626304219 +0000 UTC m=+0.269791955 container died c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.628 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.629 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Ensure instance console log exists: /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.629 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.630 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.630 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.634 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Start _get_guest_xml network_info=[{"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.647 2 WARNING nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.658 2 DEBUG nova.virt.libvirt.host [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.659 2 DEBUG nova.virt.libvirt.host [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.663 2 DEBUG nova.virt.libvirt.host [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.664 2 DEBUG nova.virt.libvirt.host [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.664 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.665 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.665 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.666 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.666 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.667 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.667 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.667 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.668 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.668 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.669 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.669 2 DEBUG nova.virt.hardware [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:42:24 np0005465604 nova_compute[260603]: 2025-10-02 08:42:24.673 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:42:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440855643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.168 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.185 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.189 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.310 2 DEBUG nova.network.neutron [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updated VIF entry in instance network info cache for port 52a482d6-fbe9-4583-af85-5407a6796976. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.311 2 DEBUG nova.network.neutron [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updating instance_info_cache with network_info: [{"id": "52a482d6-fbe9-4583-af85-5407a6796976", "address": "fa:16:3e:2c:a7:87", "network": {"id": "075eee4e-656c-44de-9ee6-7589c7382251", "bridge": "br-int", "label": "tempest-network-smoke--1919248067", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28277d49c8814c0691f5e1dce22bf215", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52a482d6-fb", "ovs_interfaceid": "52a482d6-fbe9-4583-af85-5407a6796976", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.338 2 DEBUG oslo_concurrency.lockutils [req-946e1a26-2d2a-4274-a8c9-f683cc399057 req-23a366b9-055c-41f4-8bc3-29382fb2f42b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5748b14f-5bbb-46f3-b563-062d530e5abd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:42:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:42:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/897564851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.630 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.633 2 DEBUG nova.virt.libvirt.vif [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:42:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1919713365',display_name='tempest-ServerMetadataNegativeTestJSON-server-1919713365',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1919713365',id=103,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4f0bc0e400b74a9788079e4f67262fae',ramdisk_id='',reservation_id='r-xqmiwuso',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1644163991',owner_user_n
ame='tempest-ServerMetadataNegativeTestJSON-1644163991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:42:15Z,user_data=None,user_id='7fa065d842a644b0891a59bea27e82dd',uuid=e5370318-fc99-4c4a-9149-54deca5d783e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.633 2 DEBUG nova.network.os_vif_util [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converting VIF {"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.635 2 DEBUG nova.network.os_vif_util [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.637 2 DEBUG nova.objects.instance [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lazy-loading 'pci_devices' on Instance uuid e5370318-fc99-4c4a-9149-54deca5d783e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.657 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  <uuid>e5370318-fc99-4c4a-9149-54deca5d783e</uuid>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  <name>instance-00000067</name>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1919713365</nova:name>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:42:24</nova:creationTime>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <nova:user uuid="7fa065d842a644b0891a59bea27e82dd">tempest-ServerMetadataNegativeTestJSON-1644163991-project-member</nova:user>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <nova:project uuid="4f0bc0e400b74a9788079e4f67262fae">tempest-ServerMetadataNegativeTestJSON-1644163991</nova:project>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <nova:port uuid="e1f81d45-cb4b-4514-b606-3b044d09fc3f">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <entry name="serial">e5370318-fc99-4c4a-9149-54deca5d783e</entry>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <entry name="uuid">e5370318-fc99-4c4a-9149-54deca5d783e</entry>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/e5370318-fc99-4c4a-9149-54deca5d783e_disk">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/e5370318-fc99-4c4a-9149-54deca5d783e_disk.config">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:21:bd:92"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <target dev="tape1f81d45-cb"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/console.log" append="off"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:42:25 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:42:25 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:42:25 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:42:25 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.658 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Preparing to wait for external event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.658 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.659 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.659 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.661 2 DEBUG nova.virt.libvirt.vif [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:42:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1919713365',display_name='tempest-ServerMetadataNegativeTestJSON-server-1919713365',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1919713365',id=103,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4f0bc0e400b74a9788079e4f67262fae',ramdisk_id='',reservation_id='r-xqmiwuso',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1644163991',ow
ner_user_name='tempest-ServerMetadataNegativeTestJSON-1644163991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:42:15Z,user_data=None,user_id='7fa065d842a644b0891a59bea27e82dd',uuid=e5370318-fc99-4c4a-9149-54deca5d783e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.661 2 DEBUG nova.network.os_vif_util [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converting VIF {"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.662 2 DEBUG nova.network.os_vif_util [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.663 2 DEBUG os_vif [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.665 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.668 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1f81d45-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.669 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape1f81d45-cb, col_values=(('external_ids', {'iface-id': 'e1f81d45-cb4b-4514-b606-3b044d09fc3f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:bd:92', 'vm-uuid': 'e5370318-fc99-4c4a-9149-54deca5d783e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:25 np0005465604 NetworkManager[45129]: <info>  [1759394545.6725] manager: (tape1f81d45-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:25 np0005465604 nova_compute[260603]: 2025-10-02 08:42:25.681 2 INFO os_vif [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb')#033[00m
Oct  2 04:42:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd-userdata-shm.mount: Deactivated successfully.
Oct  2 04:42:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-70f2cd3807a4bd9a2a9995ba1fc8cf28a130437de2342b17d3b71902d85482d4-merged.mount: Deactivated successfully.
Oct  2 04:42:26 np0005465604 nova_compute[260603]: 2025-10-02 08:42:26.262 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:42:26 np0005465604 nova_compute[260603]: 2025-10-02 08:42:26.263 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:42:26 np0005465604 nova_compute[260603]: 2025-10-02 08:42:26.263 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] No VIF found with MAC fa:16:3e:21:bd:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:42:26 np0005465604 nova_compute[260603]: 2025-10-02 08:42:26.264 2 INFO nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Using config drive#033[00m
Oct  2 04:42:26 np0005465604 podman[363204]: 2025-10-02 08:42:26.428015489 +0000 UTC m=+2.071503265 container cleanup c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:42:26 np0005465604 systemd[1]: libpod-conmon-c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd.scope: Deactivated successfully.
Oct  2 04:42:26 np0005465604 nova_compute[260603]: 2025-10-02 08:42:26.978 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:42:26 np0005465604 nova_compute[260603]: 2025-10-02 08:42:26.994 2 DEBUG nova.compute.manager [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:26 np0005465604 nova_compute[260603]: 2025-10-02 08:42:26.995 2 DEBUG oslo_concurrency.lockutils [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:26 np0005465604 nova_compute[260603]: 2025-10-02 08:42:26.995 2 DEBUG oslo_concurrency.lockutils [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:26 np0005465604 nova_compute[260603]: 2025-10-02 08:42:26.996 2 DEBUG oslo_concurrency.lockutils [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:26 np0005465604 nova_compute[260603]: 2025-10-02 08:42:26.997 2 DEBUG nova.compute.manager [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] No waiting events found dispatching network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:42:26 np0005465604 nova_compute[260603]: 2025-10-02 08:42:26.997 2 WARNING nova.compute.manager [req-6c40b42f-75a3-4556-9d9e-cd450a527e9e req-8fddd788-5bf1-45c0-9612-2485c58aaa0a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received unexpected event network-vif-plugged-52a482d6-fbe9-4583-af85-5407a6796976 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:42:27 np0005465604 nova_compute[260603]: 2025-10-02 08:42:27.361 2 INFO nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Creating config drive at /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config#033[00m
Oct  2 04:42:27 np0005465604 nova_compute[260603]: 2025-10-02 08:42:27.371 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwo3u019o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:27 np0005465604 podman[363353]: 2025-10-02 08:42:27.439950915 +0000 UTC m=+0.975019033 container remove c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:42:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.449 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[103d6c8c-ce68-4ee6-bc2f-10202e2ff9e3]: (4, ('Thu Oct  2 08:42:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251 (c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd)\nc625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd\nThu Oct  2 08:42:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251 (c625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd)\nc625a43fd46bdc20414ac84f9f0beb3cf3ab4bbbc7c933541166660b340792cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.451 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09789626-8a6d-485f-b1b7-53698699721f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.453 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap075eee4e-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:27 np0005465604 kernel: tap075eee4e-60: left promiscuous mode
Oct  2 04:42:27 np0005465604 nova_compute[260603]: 2025-10-02 08:42:27.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:27 np0005465604 nova_compute[260603]: 2025-10-02 08:42:27.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.530 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e58397bd-f353-4da1-a2fe-f397eb7c12f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:27 np0005465604 nova_compute[260603]: 2025-10-02 08:42:27.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:27 np0005465604 nova_compute[260603]: 2025-10-02 08:42:27.535 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwo3u019o" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:42:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.559 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[04ff2f70-4e71-4ef5-83f7-2704d87a13ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.560 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2b690ea3-d28b-4bb5-b7e5-d99a0991ee61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:27 np0005465604 nova_compute[260603]: 2025-10-02 08:42:27.585 2 DEBUG nova.storage.rbd_utils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] rbd image e5370318-fc99-4c4a-9149-54deca5d783e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:42:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.590 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd4aea2-1a0a-4605-9bf3-96abe1ae238a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539209, 'reachable_time': 17407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363396, 'error': None, 'target': 'ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:27 np0005465604 systemd[1]: run-netns-ovnmeta\x2d075eee4e\x2d656c\x2d44de\x2d9ee6\x2d7589c7382251.mount: Deactivated successfully.
Oct  2 04:42:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.599 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-075eee4e-656c-44de-9ee6-7589c7382251 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:42:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:27.599 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[861d7bfd-9821-4694-940a-91b38a972b3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:27 np0005465604 nova_compute[260603]: 2025-10-02 08:42:27.602 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config e5370318-fc99-4c4a-9149-54deca5d783e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:42:27
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'vms', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct  2 04:42:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:42:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:42:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:42:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:42:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:42:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:42:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:42:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:42:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:42:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:42:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:42:28 np0005465604 nova_compute[260603]: 2025-10-02 08:42:28.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:42:28 np0005465604 nova_compute[260603]: 2025-10-02 08:42:28.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:42:28 np0005465604 nova_compute[260603]: 2025-10-02 08:42:28.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:42:28 np0005465604 nova_compute[260603]: 2025-10-02 08:42:28.547 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 04:42:28 np0005465604 nova_compute[260603]: 2025-10-02 08:42:28.548 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 04:42:28 np0005465604 nova_compute[260603]: 2025-10-02 08:42:28.750 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:42:28 np0005465604 nova_compute[260603]: 2025-10-02 08:42:28.750 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:42:28 np0005465604 nova_compute[260603]: 2025-10-02 08:42:28.750 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:42:28 np0005465604 nova_compute[260603]: 2025-10-02 08:42:28.751 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ddf9efd0-0ac6-4857-96ea-3f1d0e18590d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:42:29 np0005465604 nova_compute[260603]: 2025-10-02 08:42:29.243 2 DEBUG oslo_concurrency.processutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config e5370318-fc99-4c4a-9149-54deca5d783e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:42:29 np0005465604 nova_compute[260603]: 2025-10-02 08:42:29.244 2 INFO nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Deleting local config drive /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e/disk.config because it was imported into RBD.#033[00m
Oct  2 04:42:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct  2 04:42:29 np0005465604 nova_compute[260603]: 2025-10-02 08:42:29.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:29 np0005465604 kernel: tape1f81d45-cb: entered promiscuous mode
Oct  2 04:42:29 np0005465604 systemd-udevd[363402]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:42:29 np0005465604 nova_compute[260603]: 2025-10-02 08:42:29.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:29 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:29Z|01054|binding|INFO|Claiming lport e1f81d45-cb4b-4514-b606-3b044d09fc3f for this chassis.
Oct  2 04:42:29 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:29Z|01055|binding|INFO|e1f81d45-cb4b-4514-b606-3b044d09fc3f: Claiming fa:16:3e:21:bd:92 10.100.0.11
Oct  2 04:42:29 np0005465604 NetworkManager[45129]: <info>  [1759394549.3304] manager: (tape1f81d45-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.340 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:bd:92 10.100.0.11'], port_security=['fa:16:3e:21:bd:92 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e5370318-fc99-4c4a-9149-54deca5d783e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cc74904-ee74-4715-87cc-18060dd682a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f0bc0e400b74a9788079e4f67262fae', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03ef8875-7662-4d1c-ab06-cdfdd9edf304', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf9aea74-4830-4e32-899a-3a36573d95df, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e1f81d45-cb4b-4514-b606-3b044d09fc3f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.342 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e1f81d45-cb4b-4514-b606-3b044d09fc3f in datapath 0cc74904-ee74-4715-87cc-18060dd682a0 bound to our chassis#033[00m
Oct  2 04:42:29 np0005465604 NetworkManager[45129]: <info>  [1759394549.3491] device (tape1f81d45-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.348 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cc74904-ee74-4715-87cc-18060dd682a0#033[00m
Oct  2 04:42:29 np0005465604 NetworkManager[45129]: <info>  [1759394549.3508] device (tape1f81d45-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:42:29 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:29Z|01056|binding|INFO|Setting lport e1f81d45-cb4b-4514-b606-3b044d09fc3f ovn-installed in OVS
Oct  2 04:42:29 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:29Z|01057|binding|INFO|Setting lport e1f81d45-cb4b-4514-b606-3b044d09fc3f up in Southbound
Oct  2 04:42:29 np0005465604 nova_compute[260603]: 2025-10-02 08:42:29.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:29 np0005465604 nova_compute[260603]: 2025-10-02 08:42:29.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.371 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6f60c5-aadd-40eb-a77c-ad33ef26f0e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.373 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0cc74904-e1 in ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.376 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0cc74904-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.377 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[37221368-8c74-494a-a322-c6f0acd4de6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.378 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6be56dba-c54c-49f0-bdb6-061b13100d5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 systemd-machined[214636]: New machine qemu-129-instance-00000067.
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.400 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[48146ac8-5444-433d-a4b3-426af262e823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 systemd[1]: Started Virtual Machine qemu-129-instance-00000067.
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.445 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[28372283-9fa5-43c1-be55-9c5f6dad0ea4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.500 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e7047e6e-4844-4531-8f72-8fdfaae63255]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 NetworkManager[45129]: <info>  [1759394549.5109] manager: (tap0cc74904-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/416)
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6c90d0-68e0-4201-ab25-63e847aeafc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.567 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0c245bde-d15a-44d1-8ba9-7a753077f2c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.573 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[92d9e63b-67c0-4a7a-8156-75a19408a84e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 NetworkManager[45129]: <info>  [1759394549.6170] device (tap0cc74904-e0): carrier: link connected
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.628 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9acd323f-a637-466e-81b3-cc7fc15de01d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.656 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d6d26d-10d1-41f6-9b56-dfcc9e56c605]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cc74904-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:48:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542821, 'reachable_time': 18724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363468, 'error': None, 'target': 'ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.686 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0e853d2a-3a5e-4dfc-abe5-aa4c3ad83067]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:483b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542821, 'tstamp': 542821}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363469, 'error': None, 'target': 'ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.715 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c642241c-e806-410d-af91-f87a6e489a3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cc74904-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:48:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542821, 'reachable_time': 18724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363470, 'error': None, 'target': 'ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.768 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9739a18a-9261-4742-a804-ce6bfaabc987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.837 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d1b904-56cd-486f-bc71-7fb2d97cbe65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.839 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cc74904-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.840 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.840 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cc74904-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:29 np0005465604 nova_compute[260603]: 2025-10-02 08:42:29.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:29 np0005465604 NetworkManager[45129]: <info>  [1759394549.8893] manager: (tap0cc74904-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Oct  2 04:42:29 np0005465604 kernel: tap0cc74904-e0: entered promiscuous mode
Oct  2 04:42:29 np0005465604 nova_compute[260603]: 2025-10-02 08:42:29.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.891 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cc74904-e0, col_values=(('external_ids', {'iface-id': 'e2050fa1-dc06-4988-9771-f46682dc29c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:29 np0005465604 nova_compute[260603]: 2025-10-02 08:42:29.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:29 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:29Z|01058|binding|INFO|Releasing lport e2050fa1-dc06-4988-9771-f46682dc29c0 from this chassis (sb_readonly=0)
Oct  2 04:42:29 np0005465604 nova_compute[260603]: 2025-10-02 08:42:29.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:29 np0005465604 nova_compute[260603]: 2025-10-02 08:42:29.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.918 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0cc74904-ee74-4715-87cc-18060dd682a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0cc74904-ee74-4715-87cc-18060dd682a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.919 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5193d7-827b-4637-9d11-7942dbcf2bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.920 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-0cc74904-ee74-4715-87cc-18060dd682a0
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/0cc74904-ee74-4715-87cc-18060dd682a0.pid.haproxy
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 0cc74904-ee74-4715-87cc-18060dd682a0
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:42:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:29.924 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0', 'env', 'PROCESS_TAG=haproxy-0cc74904-ee74-4715-87cc-18060dd682a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0cc74904-ee74-4715-87cc-18060dd682a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:42:30 np0005465604 podman[363540]: 2025-10-02 08:42:30.286282773 +0000 UTC m=+0.031668132 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.414 2 DEBUG nova.compute.manager [req-f5cceb82-d2d6-416c-b9e7-d0d5c491a9ba req-818fd54a-287f-4a27-88bf-d3c65f54b7c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.415 2 DEBUG oslo_concurrency.lockutils [req-f5cceb82-d2d6-416c-b9e7-d0d5c491a9ba req-818fd54a-287f-4a27-88bf-d3c65f54b7c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.415 2 DEBUG oslo_concurrency.lockutils [req-f5cceb82-d2d6-416c-b9e7-d0d5c491a9ba req-818fd54a-287f-4a27-88bf-d3c65f54b7c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.416 2 DEBUG oslo_concurrency.lockutils [req-f5cceb82-d2d6-416c-b9e7-d0d5c491a9ba req-818fd54a-287f-4a27-88bf-d3c65f54b7c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.416 2 DEBUG nova.compute.manager [req-f5cceb82-d2d6-416c-b9e7-d0d5c491a9ba req-818fd54a-287f-4a27-88bf-d3c65f54b7c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Processing event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:42:30 np0005465604 podman[363540]: 2025-10-02 08:42:30.607736807 +0000 UTC m=+0.353122176 container create 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.726 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.727 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394550.7270787, e5370318-fc99-4c4a-9149-54deca5d783e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.728 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] VM Started (Lifecycle Event)#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.734 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.738 2 INFO nova.virt.libvirt.driver [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Instance spawned successfully.#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.738 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:42:30 np0005465604 systemd[1]: Started libpod-conmon-257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4.scope.
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.760 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.768 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.768 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.769 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.770 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.770 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.771 2 DEBUG nova.virt.libvirt.driver [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.776 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:42:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.789 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:42:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12d23758f8dd1aba6e6960bdd498e43c608dac9f058bfca986ccca7ddd0c5f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.819 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.819 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.820 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.820 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.820 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394550.728044, e5370318-fc99-4c4a-9149-54deca5d783e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.821 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.822 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.846 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.849 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394550.7310724, e5370318-fc99-4c4a-9149-54deca5d783e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.849 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.854 2 INFO nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Took 15.36 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.855 2 DEBUG nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:42:30 np0005465604 podman[363540]: 2025-10-02 08:42:30.870813173 +0000 UTC m=+0.616198492 container init 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.880 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:42:30 np0005465604 podman[363540]: 2025-10-02 08:42:30.880978618 +0000 UTC m=+0.626363937 container start 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.883 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.911 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:42:30 np0005465604 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [NOTICE]   (363564) : New worker (363566) forked
Oct  2 04:42:30 np0005465604 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [NOTICE]   (363564) : Loading success.
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.921 2 INFO nova.compute.manager [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Took 16.37 seconds to build instance.#033[00m
Oct  2 04:42:30 np0005465604 nova_compute[260603]: 2025-10-02 08:42:30.937 2 DEBUG oslo_concurrency.lockutils [None req-46cf9f8c-9028-433c-ba7a-376df7c0b334 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 246 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  2 04:42:31 np0005465604 nova_compute[260603]: 2025-10-02 08:42:31.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:42:31 np0005465604 nova_compute[260603]: 2025-10-02 08:42:31.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:42:31 np0005465604 nova_compute[260603]: 2025-10-02 08:42:31.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:31 np0005465604 nova_compute[260603]: 2025-10-02 08:42:31.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:31 np0005465604 nova_compute[260603]: 2025-10-02 08:42:31.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:31 np0005465604 nova_compute[260603]: 2025-10-02 08:42:31.544 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:42:31 np0005465604 nova_compute[260603]: 2025-10-02 08:42:31.544 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:31 np0005465604 podman[363596]: 2025-10-02 08:42:31.998630406 +0000 UTC m=+0.057079229 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:42:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:42:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/296919486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:42:32 np0005465604 podman[363595]: 2025-10-02 08:42:32.124004089 +0000 UTC m=+0.183639638 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.125 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.230 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.231 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.236 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.237 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.241 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.242 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.429 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.430 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3490MB free_disk=59.87638473510742GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.431 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.431 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.510 2 DEBUG nova.compute.manager [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.510 2 DEBUG oslo_concurrency.lockutils [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.511 2 DEBUG oslo_concurrency.lockutils [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.511 2 DEBUG oslo_concurrency.lockutils [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.511 2 DEBUG nova.compute.manager [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] No waiting events found dispatching network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.511 2 WARNING nova.compute.manager [req-9f2f960e-0e6a-4d91-b04f-6bb0610c9362 req-f8d65e9f-c3fc-4019-9aed-815f61129057 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received unexpected event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f for instance with vm_state active and task_state None.#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.540 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ddf9efd0-0ac6-4857-96ea-3f1d0e18590d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.540 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5748b14f-5bbb-46f3-b563-062d530e5abd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.540 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance e5370318-fc99-4c4a-9149-54deca5d783e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.541 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.541 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.611 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.920 2 INFO nova.virt.libvirt.driver [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Deleting instance files /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd_del#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.922 2 INFO nova.virt.libvirt.driver [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Deletion of /var/lib/nova/instances/5748b14f-5bbb-46f3-b563-062d530e5abd_del complete#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.972 2 INFO nova.compute.manager [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Took 10.64 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.973 2 DEBUG oslo.service.loopingcall [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.974 2 DEBUG nova.compute.manager [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:42:32 np0005465604 nova_compute[260603]: 2025-10-02 08:42:32.974 2 DEBUG nova.network.neutron [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:42:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:42:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4025465018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:42:33 np0005465604 nova_compute[260603]: 2025-10-02 08:42:33.137 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:42:33 np0005465604 nova_compute[260603]: 2025-10-02 08:42:33.148 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:42:33 np0005465604 nova_compute[260603]: 2025-10-02 08:42:33.164 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:42:33 np0005465604 nova_compute[260603]: 2025-10-02 08:42:33.188 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:42:33 np0005465604 nova_compute[260603]: 2025-10-02 08:42:33.189 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct  2 04:42:33 np0005465604 nova_compute[260603]: 2025-10-02 08:42:33.581 2 DEBUG nova.network.neutron [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:42:33 np0005465604 nova_compute[260603]: 2025-10-02 08:42:33.601 2 INFO nova.compute.manager [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Took 0.63 seconds to deallocate network for instance.#033[00m
Oct  2 04:42:33 np0005465604 nova_compute[260603]: 2025-10-02 08:42:33.645 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:33 np0005465604 nova_compute[260603]: 2025-10-02 08:42:33.646 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:33 np0005465604 nova_compute[260603]: 2025-10-02 08:42:33.688 2 DEBUG nova.compute.manager [req-4feea895-6b7a-4043-9b54-2f851938460f req-dfef1879-ab65-4c0e-861b-84bd05672a61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Received event network-vif-deleted-52a482d6-fbe9-4583-af85-5407a6796976 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:33 np0005465604 nova_compute[260603]: 2025-10-02 08:42:33.729 2 DEBUG oslo_concurrency.processutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:42:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440097038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:42:34 np0005465604 nova_compute[260603]: 2025-10-02 08:42:34.169 2 DEBUG oslo_concurrency.processutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:42:34 np0005465604 nova_compute[260603]: 2025-10-02 08:42:34.176 2 DEBUG nova.compute.provider_tree [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:42:34 np0005465604 nova_compute[260603]: 2025-10-02 08:42:34.200 2 DEBUG nova.scheduler.client.report [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:42:34 np0005465604 nova_compute[260603]: 2025-10-02 08:42:34.222 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:34 np0005465604 nova_compute[260603]: 2025-10-02 08:42:34.244 2 INFO nova.scheduler.client.report [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Deleted allocations for instance 5748b14f-5bbb-46f3-b563-062d530e5abd#033[00m
Oct  2 04:42:34 np0005465604 nova_compute[260603]: 2025-10-02 08:42:34.349 2 DEBUG oslo_concurrency.lockutils [None req-debcb3b2-7bef-4f4a-b9b0-eb79edee9712 ffb3b2bafd1c40058c2669d61c40f3f9 28277d49c8814c0691f5e1dce22bf215 - - default default] Lock "5748b14f-5bbb-46f3-b563-062d530e5abd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:34 np0005465604 nova_compute[260603]: 2025-10-02 08:42:34.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:34.826 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:34.827 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 106 op/s
Oct  2 04:42:35 np0005465604 nova_compute[260603]: 2025-10-02 08:42:35.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:35 np0005465604 nova_compute[260603]: 2025-10-02 08:42:35.891 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:35 np0005465604 nova_compute[260603]: 2025-10-02 08:42:35.892 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:35 np0005465604 nova_compute[260603]: 2025-10-02 08:42:35.892 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:35 np0005465604 nova_compute[260603]: 2025-10-02 08:42:35.893 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:35 np0005465604 nova_compute[260603]: 2025-10-02 08:42:35.893 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:35 np0005465604 nova_compute[260603]: 2025-10-02 08:42:35.895 2 INFO nova.compute.manager [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Terminating instance#033[00m
Oct  2 04:42:35 np0005465604 nova_compute[260603]: 2025-10-02 08:42:35.897 2 DEBUG nova.compute.manager [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:42:36 np0005465604 kernel: tape1f81d45-cb (unregistering): left promiscuous mode
Oct  2 04:42:36 np0005465604 NetworkManager[45129]: <info>  [1759394556.0911] device (tape1f81d45-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:36Z|01059|binding|INFO|Releasing lport e1f81d45-cb4b-4514-b606-3b044d09fc3f from this chassis (sb_readonly=0)
Oct  2 04:42:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:36Z|01060|binding|INFO|Setting lport e1f81d45-cb4b-4514-b606-3b044d09fc3f down in Southbound
Oct  2 04:42:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:36Z|01061|binding|INFO|Removing iface tape1f81d45-cb ovn-installed in OVS
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:36.107 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:bd:92 10.100.0.11'], port_security=['fa:16:3e:21:bd:92 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e5370318-fc99-4c4a-9149-54deca5d783e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cc74904-ee74-4715-87cc-18060dd682a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f0bc0e400b74a9788079e4f67262fae', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03ef8875-7662-4d1c-ab06-cdfdd9edf304', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf9aea74-4830-4e32-899a-3a36573d95df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e1f81d45-cb4b-4514-b606-3b044d09fc3f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:42:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:36.109 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e1f81d45-cb4b-4514-b606-3b044d09fc3f in datapath 0cc74904-ee74-4715-87cc-18060dd682a0 unbound from our chassis#033[00m
Oct  2 04:42:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:36.110 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cc74904-ee74-4715-87cc-18060dd682a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:42:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:36.112 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[be37bef7-fbb7-4d9b-8c91-f5affef24c00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:36.112 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0 namespace which is not needed anymore#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:36 np0005465604 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct  2 04:42:36 np0005465604 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Consumed 6.268s CPU time.
Oct  2 04:42:36 np0005465604 systemd-machined[214636]: Machine qemu-129-instance-00000067 terminated.
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.348 2 INFO nova.virt.libvirt.driver [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Instance destroyed successfully.#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.349 2 DEBUG nova.objects.instance [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lazy-loading 'resources' on Instance uuid e5370318-fc99-4c4a-9149-54deca5d783e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.372 2 DEBUG nova.virt.libvirt.vif [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:42:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1919713365',display_name='tempest-ServerMetadataNegativeTestJSON-server-1919713365',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1919713365',id=103,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:42:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4f0bc0e400b74a9788079e4f67262fae',ramdisk_id='',reservation_id='r-xqmiwuso',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1644163991',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1644163991-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:42:30Z,user_data=None,user_id='7fa065d842a644b0891a59bea27e82dd',uuid=e5370318-fc99-4c4a-9149-54deca5d783e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.373 2 DEBUG nova.network.os_vif_util [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converting VIF {"id": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "address": "fa:16:3e:21:bd:92", "network": {"id": "0cc74904-ee74-4715-87cc-18060dd682a0", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1920796543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f0bc0e400b74a9788079e4f67262fae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1f81d45-cb", "ovs_interfaceid": "e1f81d45-cb4b-4514-b606-3b044d09fc3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.374 2 DEBUG nova.network.os_vif_util [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.374 2 DEBUG os_vif [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1f81d45-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.386 2 INFO os_vif [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:bd:92,bridge_name='br-int',has_traffic_filtering=True,id=e1f81d45-cb4b-4514-b606-3b044d09fc3f,network=Network(0cc74904-ee74-4715-87cc-18060dd682a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1f81d45-cb')#033[00m
Oct  2 04:42:36 np0005465604 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [NOTICE]   (363564) : haproxy version is 2.8.14-c23fe91
Oct  2 04:42:36 np0005465604 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [NOTICE]   (363564) : path to executable is /usr/sbin/haproxy
Oct  2 04:42:36 np0005465604 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [WARNING]  (363564) : Exiting Master process...
Oct  2 04:42:36 np0005465604 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [ALERT]    (363564) : Current worker (363566) exited with code 143 (Terminated)
Oct  2 04:42:36 np0005465604 neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0[363560]: [WARNING]  (363564) : All workers exited. Exiting... (0)
Oct  2 04:42:36 np0005465604 systemd[1]: libpod-257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4.scope: Deactivated successfully.
Oct  2 04:42:36 np0005465604 podman[363707]: 2025-10-02 08:42:36.412287937 +0000 UTC m=+0.181391158 container died 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.500 2 DEBUG nova.compute.manager [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-unplugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.501 2 DEBUG oslo_concurrency.lockutils [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.501 2 DEBUG oslo_concurrency.lockutils [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.501 2 DEBUG oslo_concurrency.lockutils [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.501 2 DEBUG nova.compute.manager [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] No waiting events found dispatching network-vif-unplugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:42:36 np0005465604 nova_compute[260603]: 2025-10-02 08:42:36.502 2 DEBUG nova.compute.manager [req-6525ed1e-745c-475b-a6d5-235ef68f2d96 req-8e063545-9c4f-44b7-a412-cb6c373db162 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-unplugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:42:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4-userdata-shm.mount: Deactivated successfully.
Oct  2 04:42:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f12d23758f8dd1aba6e6960bdd498e43c608dac9f058bfca986ccca7ddd0c5f3-merged.mount: Deactivated successfully.
Oct  2 04:42:37 np0005465604 podman[363707]: 2025-10-02 08:42:37.193114665 +0000 UTC m=+0.962217906 container cleanup 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 04:42:37 np0005465604 systemd[1]: libpod-conmon-257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4.scope: Deactivated successfully.
Oct  2 04:42:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 109 op/s
Oct  2 04:42:37 np0005465604 podman[363768]: 2025-10-02 08:42:37.71705164 +0000 UTC m=+0.497235018 container remove 257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:42:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.729 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8d70b628-6641-4d99-b4de-a6afbc691020]: (4, ('Thu Oct  2 08:42:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0 (257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4)\n257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4\nThu Oct  2 08:42:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0 (257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4)\n257b1b21199e80828c1cec770078a363bf96e84d0bf2451da8cdb8469e62bce4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.732 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91aaee63-ae1d-41be-b02e-e5dd6efe22eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.734 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cc74904-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:37 np0005465604 nova_compute[260603]: 2025-10-02 08:42:37.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:37 np0005465604 kernel: tap0cc74904-e0: left promiscuous mode
Oct  2 04:42:37 np0005465604 nova_compute[260603]: 2025-10-02 08:42:37.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.773 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[553d59ab-4803-4503-bf8e-007a3d677777]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.801 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[61bdf374-3dc2-44e9-9f93-c9e7fb3132d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.803 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f782d375-115f-4e8c-8e21-a213f3e6f94b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.830 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[02ad5fa6-e9ce-44cb-99f9-340ca42512ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542809, 'reachable_time': 24326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363784, 'error': None, 'target': 'ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.834 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0cc74904-ee74-4715-87cc-18060dd682a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:42:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:37.835 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e82a8ed5-b6fe-485b-bffb-334109f804d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:37 np0005465604 systemd[1]: run-netns-ovnmeta\x2d0cc74904\x2dee74\x2d4715\x2d87cc\x2d18060dd682a0.mount: Deactivated successfully.
Oct  2 04:42:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:38 np0005465604 nova_compute[260603]: 2025-10-02 08:42:38.190 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:42:38 np0005465604 nova_compute[260603]: 2025-10-02 08:42:38.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:42:38 np0005465604 nova_compute[260603]: 2025-10-02 08:42:38.587 2 DEBUG nova.compute.manager [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:38 np0005465604 nova_compute[260603]: 2025-10-02 08:42:38.588 2 DEBUG oslo_concurrency.lockutils [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:38 np0005465604 nova_compute[260603]: 2025-10-02 08:42:38.588 2 DEBUG oslo_concurrency.lockutils [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:38 np0005465604 nova_compute[260603]: 2025-10-02 08:42:38.588 2 DEBUG oslo_concurrency.lockutils [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:38 np0005465604 nova_compute[260603]: 2025-10-02 08:42:38.589 2 DEBUG nova.compute.manager [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] No waiting events found dispatching network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:42:38 np0005465604 nova_compute[260603]: 2025-10-02 08:42:38.589 2 WARNING nova.compute.manager [req-46415d6d-cc11-4bed-82ae-480bf04b0e3f req-f8c38206-1878-418a-9785-ff44326488b8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received unexpected event network-vif-plugged-e1f81d45-cb4b-4514-b606-3b044d09fc3f for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011082588558963338 of space, bias 1.0, pg target 0.3324776567689002 quantized to 32 (current 32)
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:42:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:42:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 107 op/s
Oct  2 04:42:39 np0005465604 nova_compute[260603]: 2025-10-02 08:42:39.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:39 np0005465604 nova_compute[260603]: 2025-10-02 08:42:39.376 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394544.37433, 5748b14f-5bbb-46f3-b563-062d530e5abd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:42:39 np0005465604 nova_compute[260603]: 2025-10-02 08:42:39.376 2 INFO nova.compute.manager [-] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:42:39 np0005465604 nova_compute[260603]: 2025-10-02 08:42:39.411 2 DEBUG nova.compute.manager [None req-30cac69d-3163-4769-bf4d-53a6db5d04da - - - - - -] [instance: 5748b14f-5bbb-46f3-b563-062d530e5abd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:42:40 np0005465604 podman[363786]: 2025-10-02 08:42:40.005190654 +0000 UTC m=+0.069192674 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:42:40 np0005465604 podman[363787]: 2025-10-02 08:42:40.034042337 +0000 UTC m=+0.085656033 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  2 04:42:40 np0005465604 nova_compute[260603]: 2025-10-02 08:42:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:42:40 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:40Z|01062|binding|INFO|Releasing lport 6a4c46b3-20fe-4d13-90df-77828898d571 from this chassis (sb_readonly=0)
Oct  2 04:42:40 np0005465604 nova_compute[260603]: 2025-10-02 08:42:40.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:41 np0005465604 nova_compute[260603]: 2025-10-02 08:42:41.147 2 INFO nova.virt.libvirt.driver [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Deleting instance files /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e_del#033[00m
Oct  2 04:42:41 np0005465604 nova_compute[260603]: 2025-10-02 08:42:41.149 2 INFO nova.virt.libvirt.driver [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Deletion of /var/lib/nova/instances/e5370318-fc99-4c4a-9149-54deca5d783e_del complete#033[00m
Oct  2 04:42:41 np0005465604 nova_compute[260603]: 2025-10-02 08:42:41.253 2 INFO nova.compute.manager [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Took 5.36 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:42:41 np0005465604 nova_compute[260603]: 2025-10-02 08:42:41.254 2 DEBUG oslo.service.loopingcall [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:42:41 np0005465604 nova_compute[260603]: 2025-10-02 08:42:41.254 2 DEBUG nova.compute.manager [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:42:41 np0005465604 nova_compute[260603]: 2025-10-02 08:42:41.255 2 DEBUG nova.network.neutron [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:42:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Oct  2 04:42:41 np0005465604 nova_compute[260603]: 2025-10-02 08:42:41.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:42 np0005465604 nova_compute[260603]: 2025-10-02 08:42:42.820 2 DEBUG nova.network.neutron [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:42:42 np0005465604 nova_compute[260603]: 2025-10-02 08:42:42.836 2 INFO nova.compute.manager [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Took 1.58 seconds to deallocate network for instance.#033[00m
Oct  2 04:42:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:42 np0005465604 nova_compute[260603]: 2025-10-02 08:42:42.890 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:42 np0005465604 nova_compute[260603]: 2025-10-02 08:42:42.890 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:42 np0005465604 nova_compute[260603]: 2025-10-02 08:42:42.953 2 DEBUG nova.compute.manager [req-f5297559-e26a-44fc-b296-1186f89ef3ce req-1253ee54-fc42-4d25-aef5-dc115935e362 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Received event network-vif-deleted-e1f81d45-cb4b-4514-b606-3b044d09fc3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:42 np0005465604 nova_compute[260603]: 2025-10-02 08:42:42.970 2 DEBUG oslo_concurrency.processutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 120 op/s
Oct  2 04:42:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:42:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/533730750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:42:43 np0005465604 nova_compute[260603]: 2025-10-02 08:42:43.405 2 DEBUG oslo_concurrency.processutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:42:43 np0005465604 nova_compute[260603]: 2025-10-02 08:42:43.413 2 DEBUG nova.compute.provider_tree [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:42:43 np0005465604 nova_compute[260603]: 2025-10-02 08:42:43.440 2 DEBUG nova.scheduler.client.report [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:42:43 np0005465604 nova_compute[260603]: 2025-10-02 08:42:43.472 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:43 np0005465604 nova_compute[260603]: 2025-10-02 08:42:43.528 2 INFO nova.scheduler.client.report [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Deleted allocations for instance e5370318-fc99-4c4a-9149-54deca5d783e#033[00m
Oct  2 04:42:43 np0005465604 nova_compute[260603]: 2025-10-02 08:42:43.620 2 DEBUG oslo_concurrency.lockutils [None req-bb8ca3e3-b71a-4548-91ff-e11fff2a32e8 7fa065d842a644b0891a59bea27e82dd 4f0bc0e400b74a9788079e4f67262fae - - default default] Lock "e5370318-fc99-4c4a-9149-54deca5d783e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.145 2 DEBUG nova.compute.manager [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.146 2 DEBUG nova.compute.manager [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing instance network info cache due to event network-changed-f333ef16-60e3-449a-a6e7-4e7435c4ee30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.146 2 DEBUG oslo_concurrency.lockutils [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.146 2 DEBUG oslo_concurrency.lockutils [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.146 2 DEBUG nova.network.neutron [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Refreshing network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.238 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.238 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.239 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.239 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.239 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.240 2 INFO nova.compute.manager [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Terminating instance#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.241 2 DEBUG nova.compute.manager [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:44 np0005465604 kernel: tapf333ef16-60 (unregistering): left promiscuous mode
Oct  2 04:42:44 np0005465604 NetworkManager[45129]: <info>  [1759394564.4420] device (tapf333ef16-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:44 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:44Z|01063|binding|INFO|Releasing lport f333ef16-60e3-449a-a6e7-4e7435c4ee30 from this chassis (sb_readonly=0)
Oct  2 04:42:44 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:44Z|01064|binding|INFO|Setting lport f333ef16-60e3-449a-a6e7-4e7435c4ee30 down in Southbound
Oct  2 04:42:44 np0005465604 ovn_controller[152344]: 2025-10-02T08:42:44Z|01065|binding|INFO|Removing iface tapf333ef16-60 ovn-installed in OVS
Oct  2 04:42:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:44.463 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:fb:ed 10.100.0.5'], port_security=['fa:16:3e:2c:fb:ed 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ddf9efd0-0ac6-4857-96ea-3f1d0e18590d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e1b840f-6fce-4ed8-898b-67d0829d6af4 8ef1bc1b-1bb2-413b-9f26-26e2f642b996', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2a6eb5-cf00-4e48-b8dd-b9d998ac802a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f333ef16-60e3-449a-a6e7-4e7435c4ee30) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:42:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:44.464 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f333ef16-60e3-449a-a6e7-4e7435c4ee30 in datapath 64b91b42-84b6-4429-b137-6399bf4f6ccd unbound from our chassis#033[00m
Oct  2 04:42:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:44.465 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64b91b42-84b6-4429-b137-6399bf4f6ccd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:42:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:44.466 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[66e9adeb-bd0e-4431-beb1-e5d801983874]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:44.467 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd namespace which is not needed anymore#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:44 np0005465604 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct  2 04:42:44 np0005465604 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000063.scope: Consumed 17.087s CPU time.
Oct  2 04:42:44 np0005465604 systemd-machined[214636]: Machine qemu-125-instance-00000063 terminated.
Oct  2 04:42:44 np0005465604 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [NOTICE]   (360467) : haproxy version is 2.8.14-c23fe91
Oct  2 04:42:44 np0005465604 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [NOTICE]   (360467) : path to executable is /usr/sbin/haproxy
Oct  2 04:42:44 np0005465604 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [WARNING]  (360467) : Exiting Master process...
Oct  2 04:42:44 np0005465604 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [WARNING]  (360467) : Exiting Master process...
Oct  2 04:42:44 np0005465604 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [ALERT]    (360467) : Current worker (360470) exited with code 143 (Terminated)
Oct  2 04:42:44 np0005465604 neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd[360463]: [WARNING]  (360467) : All workers exited. Exiting... (0)
Oct  2 04:42:44 np0005465604 systemd[1]: libpod-439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a.scope: Deactivated successfully.
Oct  2 04:42:44 np0005465604 conmon[360463]: conmon 439c91e33d9296b058b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a.scope/container/memory.events
Oct  2 04:42:44 np0005465604 podman[363886]: 2025-10-02 08:42:44.694440488 +0000 UTC m=+0.027637077 container died 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.719 2 INFO nova.virt.libvirt.driver [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Instance destroyed successfully.#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.719 2 DEBUG nova.objects.instance [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid ddf9efd0-0ac6-4857-96ea-3f1d0e18590d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:42:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a-userdata-shm.mount: Deactivated successfully.
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.735 2 DEBUG nova.virt.libvirt.vif [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:40:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-948419006',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=99,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEVRDNLCh7cGxmY8USPZ+Uw2QoectKaA792o7mYLTgKSdgVOdamESEYTMfoSKYLlGXQvF8R+2N9sWb5Rc56fBJj8i+BKngnG3c4da6Xe6b+B/+hyHhFsw+SA5clIFi8Ulg==',key_name='tempest-TestSecurityGroupsBasicOps-2045998300',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:41:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-r7zcjfhf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:41:10Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=ddf9efd0-0ac6-4857-96ea-3f1d0e18590d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.736 2 DEBUG nova.network.os_vif_util [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.737 2 DEBUG nova.network.os_vif_util [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.737 2 DEBUG os_vif [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:42:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-528c61d1dc02de92e13fd9335f527a4374ee7687c385393aa257801e40808ba5-merged.mount: Deactivated successfully.
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf333ef16-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:42:44 np0005465604 nova_compute[260603]: 2025-10-02 08:42:44.756 2 INFO os_vif [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:fb:ed,bridge_name='br-int',has_traffic_filtering=True,id=f333ef16-60e3-449a-a6e7-4e7435c4ee30,network=Network(64b91b42-84b6-4429-b137-6399bf4f6ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf333ef16-60')#033[00m
Oct  2 04:42:44 np0005465604 podman[363886]: 2025-10-02 08:42:44.809284405 +0000 UTC m=+0.142480984 container cleanup 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:42:44 np0005465604 systemd[1]: libpod-conmon-439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a.scope: Deactivated successfully.
Oct  2 04:42:45 np0005465604 podman[363929]: 2025-10-02 08:42:45.013118666 +0000 UTC m=+0.167741435 container remove 439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 04:42:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.021 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb271987-4ff8-4274-917a-44df7a546084]: (4, ('Thu Oct  2 08:42:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd (439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a)\n439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a\nThu Oct  2 08:42:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd (439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a)\n439c91e33d9296b058b95f5515f37232053ae79eb3500c687329f7c5a975d72a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.022 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[51a0941f-1148-405e-b1f8-c81f3600bfcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.023 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64b91b42-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:45 np0005465604 nova_compute[260603]: 2025-10-02 08:42:45.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:45 np0005465604 kernel: tap64b91b42-80: left promiscuous mode
Oct  2 04:42:45 np0005465604 nova_compute[260603]: 2025-10-02 08:42:45.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.045 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25a283cc-0444-4e4c-a1c7-49128d509141]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.072 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebcdd4f-3024-4e5d-926e-f01e370c0aa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.073 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94b9af97-cfb0-4eef-872c-69826a76af81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.087 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94b17f70-2823-45f0-a581-780999371ec1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534744, 'reachable_time': 36039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363944, 'error': None, 'target': 'ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.089 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64b91b42-84b6-4429-b137-6399bf4f6ccd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:42:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:45.089 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[419d09d0-9f3b-4708-bb30-77cddd72807d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:42:45 np0005465604 systemd[1]: run-netns-ovnmeta\x2d64b91b42\x2d84b6\x2d4429\x2db137\x2d6399bf4f6ccd.mount: Deactivated successfully.
Oct  2 04:42:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.094 2 INFO nova.virt.libvirt.driver [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Deleting instance files /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_del#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.095 2 INFO nova.virt.libvirt.driver [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Deletion of /var/lib/nova/instances/ddf9efd0-0ac6-4857-96ea-3f1d0e18590d_del complete#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.156 2 INFO nova.compute.manager [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Took 1.91 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.157 2 DEBUG oslo.service.loopingcall [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.157 2 DEBUG nova.compute.manager [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.157 2 DEBUG nova.network.neutron [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.423 2 DEBUG nova.network.neutron [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updated VIF entry in instance network info cache for port f333ef16-60e3-449a-a6e7-4e7435c4ee30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.424 2 DEBUG nova.network.neutron [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [{"id": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "address": "fa:16:3e:2c:fb:ed", "network": {"id": "64b91b42-84b6-4429-b137-6399bf4f6ccd", "bridge": "br-int", "label": "tempest-network-smoke--1571217253", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf333ef16-60", "ovs_interfaceid": "f333ef16-60e3-449a-a6e7-4e7435c4ee30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.444 2 DEBUG oslo_concurrency.lockutils [req-d8129d03-f8a7-4cfd-ade2-1c13f8be57ab req-5f66999b-d3d0-41b9-a52c-114978d88160 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.803 2 DEBUG nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-unplugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.804 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.804 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.805 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.805 2 DEBUG nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] No waiting events found dispatching network-vif-unplugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.806 2 DEBUG nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-unplugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.806 2 DEBUG nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.807 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.807 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.808 2 DEBUG oslo_concurrency.lockutils [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.808 2 DEBUG nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] No waiting events found dispatching network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:42:46 np0005465604 nova_compute[260603]: 2025-10-02 08:42:46.808 2 WARNING nova.compute.manager [req-9994e219-4d88-4a5b-8d78-72112d7b4c4e req-098b99b3-c0a4-469e-a01c-9a2c6bd3bfb3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received unexpected event network-vif-plugged-f333ef16-60e3-449a-a6e7-4e7435c4ee30 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.282 2 DEBUG nova.network.neutron [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.304 2 INFO nova.compute.manager [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Took 1.15 seconds to deallocate network for instance.#033[00m
Oct  2 04:42:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 93 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 40 op/s
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.330 2 DEBUG nova.compute.manager [req-1ca08182-b79d-4387-a31b-05d8b7fde661 req-885fbaea-244b-4f46-af68-eadaf07363cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Received event network-vif-deleted-f333ef16-60e3-449a-a6e7-4e7435c4ee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.360 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.361 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.417 2 DEBUG oslo_concurrency.processutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:42:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:42:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668837629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.880 2 DEBUG oslo_concurrency.processutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.887 2 DEBUG nova.compute.provider_tree [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.910 2 DEBUG nova.scheduler.client.report [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.929 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:47 np0005465604 nova_compute[260603]: 2025-10-02 08:42:47.972 2 INFO nova.scheduler.client.report [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance ddf9efd0-0ac6-4857-96ea-3f1d0e18590d#033[00m
Oct  2 04:42:48 np0005465604 nova_compute[260603]: 2025-10-02 08:42:48.073 2 DEBUG oslo_concurrency.lockutils [None req-5b806aa6-a371-48f2-8b42-9882467401ed 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "ddf9efd0-0ac6-4857-96ea-3f1d0e18590d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:42:48 np0005465604 nova_compute[260603]: 2025-10-02 08:42:48.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:48 np0005465604 nova_compute[260603]: 2025-10-02 08:42:48.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 51 op/s
Oct  2 04:42:49 np0005465604 nova_compute[260603]: 2025-10-02 08:42:49.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:49 np0005465604 nova_compute[260603]: 2025-10-02 08:42:49.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:51.076 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:42:51 np0005465604 nova_compute[260603]: 2025-10-02 08:42:51.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:51.077 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:42:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 49 op/s
Oct  2 04:42:51 np0005465604 nova_compute[260603]: 2025-10-02 08:42:51.347 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394556.3452084, e5370318-fc99-4c4a-9149-54deca5d783e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:42:51 np0005465604 nova_compute[260603]: 2025-10-02 08:42:51.347 2 INFO nova.compute.manager [-] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:42:51 np0005465604 nova_compute[260603]: 2025-10-02 08:42:51.373 2 DEBUG nova.compute.manager [None req-0a3a5597-c268-4bc5-940e-23193869558b - - - - - -] [instance: e5370318-fc99-4c4a-9149-54deca5d783e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:42:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 49 op/s
Oct  2 04:42:54 np0005465604 nova_compute[260603]: 2025-10-02 08:42:54.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:54 np0005465604 nova_compute[260603]: 2025-10-02 08:42:54.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 04:42:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:42:57.079 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:42:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 04:42:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:42:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:42:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:42:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:42:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:42:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:42:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:42:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 14 op/s
Oct  2 04:42:59 np0005465604 nova_compute[260603]: 2025-10-02 08:42:59.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:42:59 np0005465604 nova_compute[260603]: 2025-10-02 08:42:59.716 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394564.7156699, ddf9efd0-0ac6-4857-96ea-3f1d0e18590d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:42:59 np0005465604 nova_compute[260603]: 2025-10-02 08:42:59.717 2 INFO nova.compute.manager [-] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:42:59 np0005465604 nova_compute[260603]: 2025-10-02 08:42:59.741 2 DEBUG nova.compute.manager [None req-ce6aaf8b-5182-4ef0-a195-48af610ab871 - - - - - -] [instance: ddf9efd0-0ac6-4857-96ea-3f1d0e18590d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:42:59 np0005465604 nova_compute[260603]: 2025-10-02 08:42:59.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:43:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:02 np0005465604 podman[363970]: 2025-10-02 08:43:02.98765263 +0000 UTC m=+0.052221948 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:43:03 np0005465604 podman[363969]: 2025-10-02 08:43:03.012263628 +0000 UTC m=+0.083730252 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:43:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:43:04 np0005465604 nova_compute[260603]: 2025-10-02 08:43:04.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:04 np0005465604 nova_compute[260603]: 2025-10-02 08:43:04.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.140 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.140 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.169 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.282 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.282 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.291 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.292 2 INFO nova.compute.claims [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:43:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 41 MiB data, 727 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.410 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:43:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2391638134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.864 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.872 2 DEBUG nova.compute.provider_tree [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.891 2 DEBUG nova.scheduler.client.report [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.932 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.933 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.989 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:43:05 np0005465604 nova_compute[260603]: 2025-10-02 08:43:05.990 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.012 2 INFO nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.041 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.178 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.180 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.181 2 INFO nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Creating image(s)#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.206 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.230 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.253 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.256 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.343 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.344 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.345 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.346 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.375 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.379 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:06 np0005465604 nova_compute[260603]: 2025-10-02 08:43:06.517 2 DEBUG nova.policy [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7e27caab7dd34e4a9cac5f4f1880fad8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ae7dae3968e448f1b3ace692d9d76cff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:43:07 np0005465604 nova_compute[260603]: 2025-10-02 08:43:07.027 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:07 np0005465604 nova_compute[260603]: 2025-10-02 08:43:07.091 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] resizing rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:43:07 np0005465604 nova_compute[260603]: 2025-10-02 08:43:07.203 2 DEBUG nova.objects.instance [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'migration_context' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:43:07 np0005465604 nova_compute[260603]: 2025-10-02 08:43:07.218 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:43:07 np0005465604 nova_compute[260603]: 2025-10-02 08:43:07.219 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Ensure instance console log exists: /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:43:07 np0005465604 nova_compute[260603]: 2025-10-02 08:43:07.220 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:07 np0005465604 nova_compute[260603]: 2025-10-02 08:43:07.220 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:07 np0005465604 nova_compute[260603]: 2025-10-02 08:43:07.221 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 58 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 755 KiB/s wr, 24 op/s
Oct  2 04:43:07 np0005465604 nova_compute[260603]: 2025-10-02 08:43:07.608 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Successfully created port: d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:43:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.455 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Successfully updated port: d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.478 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.479 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.479 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.538 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.538 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.554 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.585 2 DEBUG nova.compute.manager [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.586 2 DEBUG nova.compute.manager [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing instance network info cache due to event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.586 2 DEBUG oslo_concurrency.lockutils [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.620 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.621 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.628 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.629 2 INFO nova.compute.claims [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.677 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:43:08 np0005465604 nova_compute[260603]: 2025-10-02 08:43:08.769 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:43:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/343716132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.249 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.259 2 DEBUG nova.compute.provider_tree [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.284 2 DEBUG nova.scheduler.client.report [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.319 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.320 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:43:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.394 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.394 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.417 2 INFO nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.436 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.533 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.535 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.536 2 INFO nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Creating image(s)#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.567 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.603 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.629 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.634 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.702 2 DEBUG nova.policy [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.729 2 DEBUG nova.network.neutron [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.740 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.741 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.741 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.742 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.773 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.777 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 bccc9587-6f96-4032-ae07-56ab00988869_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.823 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.824 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance network_info: |[{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.825 2 DEBUG oslo_concurrency.lockutils [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.826 2 DEBUG nova.network.neutron [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.832 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start _get_guest_xml network_info=[{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.841 2 WARNING nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.849 2 DEBUG nova.virt.libvirt.host [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.851 2 DEBUG nova.virt.libvirt.host [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.865 2 DEBUG nova.virt.libvirt.host [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.867 2 DEBUG nova.virt.libvirt.host [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.868 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.868 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.869 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.869 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.870 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.870 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.871 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.871 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.872 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.872 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.872 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.873 2 DEBUG nova.virt.hardware [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:43:09 np0005465604 nova_compute[260603]: 2025-10-02 08:43:09.879 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:10 np0005465604 nova_compute[260603]: 2025-10-02 08:43:10.136 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 bccc9587-6f96-4032-ae07-56ab00988869_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:10 np0005465604 nova_compute[260603]: 2025-10-02 08:43:10.198 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:43:10 np0005465604 nova_compute[260603]: 2025-10-02 08:43:10.293 2 DEBUG nova.objects.instance [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid bccc9587-6f96-4032-ae07-56ab00988869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:43:10 np0005465604 nova_compute[260603]: 2025-10-02 08:43:10.311 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:43:10 np0005465604 nova_compute[260603]: 2025-10-02 08:43:10.312 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Ensure instance console log exists: /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:43:10 np0005465604 nova_compute[260603]: 2025-10-02 08:43:10.312 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:10 np0005465604 nova_compute[260603]: 2025-10-02 08:43:10.313 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:10 np0005465604 nova_compute[260603]: 2025-10-02 08:43:10.313 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:43:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2881609785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:43:10 np0005465604 nova_compute[260603]: 2025-10-02 08:43:10.455 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:10 np0005465604 nova_compute[260603]: 2025-10-02 08:43:10.485 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:10 np0005465604 nova_compute[260603]: 2025-10-02 08:43:10.490 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:10 np0005465604 podman[364475]: 2025-10-02 08:43:10.856072798 +0000 UTC m=+0.091644157 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 04:43:10 np0005465604 podman[364476]: 2025-10-02 08:43:10.875733892 +0000 UTC m=+0.110507976 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:43:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:43:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100491301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.004 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.008 2 DEBUG nova.virt.libvirt.vif [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-618462512',display_name='tempest-ServerRescueTestJSONUnderV235-server-618462512',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-618462512',id=104,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae7dae3968e448f1b3ace692d9d76cff',ramdisk_id='',reservation_id='r-g69g07mg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-299264470',owner_user_name='tem
pest-ServerRescueTestJSONUnderV235-299264470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:06Z,user_data=None,user_id='7e27caab7dd34e4a9cac5f4f1880fad8',uuid=1c24cd5c-a165-4fcf-b24d-245a60f7ea11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.009 2 DEBUG nova.network.os_vif_util [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converting VIF {"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.010 2 DEBUG nova.network.os_vif_util [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.014 2 DEBUG nova.objects.instance [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.034 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  <uuid>1c24cd5c-a165-4fcf-b24d-245a60f7ea11</uuid>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  <name>instance-00000068</name>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-618462512</nova:name>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:43:09</nova:creationTime>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <nova:user uuid="7e27caab7dd34e4a9cac5f4f1880fad8">tempest-ServerRescueTestJSONUnderV235-299264470-project-member</nova:user>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <nova:project uuid="ae7dae3968e448f1b3ace692d9d76cff">tempest-ServerRescueTestJSONUnderV235-299264470</nova:project>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <nova:port uuid="d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <entry name="serial">1c24cd5c-a165-4fcf-b24d-245a60f7ea11</entry>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <entry name="uuid">1c24cd5c-a165-4fcf-b24d-245a60f7ea11</entry>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:d2:24:8a"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <target dev="tapd9b60bcb-8d"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/console.log" append="off"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:43:11 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:43:11 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:43:11 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:43:11 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.035 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Preparing to wait for external event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.036 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.036 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.037 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.038 2 DEBUG nova.virt.libvirt.vif [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-618462512',display_name='tempest-ServerRescueTestJSONUnderV235-server-618462512',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-618462512',id=104,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae7dae3968e448f1b3ace692d9d76cff',ramdisk_id='',reservation_id='r-g69g07mg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-299264470',owner_user
_name='tempest-ServerRescueTestJSONUnderV235-299264470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:06Z,user_data=None,user_id='7e27caab7dd34e4a9cac5f4f1880fad8',uuid=1c24cd5c-a165-4fcf-b24d-245a60f7ea11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.038 2 DEBUG nova.network.os_vif_util [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converting VIF {"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.039 2 DEBUG nova.network.os_vif_util [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.040 2 DEBUG os_vif [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.042 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9b60bcb-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9b60bcb-8d, col_values=(('external_ids', {'iface-id': 'd9b60bcb-8d0e-4638-8ce0-3bb5568a6b59', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:24:8a', 'vm-uuid': '1c24cd5c-a165-4fcf-b24d-245a60f7ea11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:11 np0005465604 NetworkManager[45129]: <info>  [1759394591.0508] manager: (tapd9b60bcb-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.057 2 INFO os_vif [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d')#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.113 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.114 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.114 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No VIF found with MAC fa:16:3e:d2:24:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.115 2 INFO nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Using config drive#033[00m
Oct  2 04:43:11 np0005465604 nova_compute[260603]: 2025-10-02 08:43:11.145 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:43:11 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 1428ad62-3f6e-4d01-a7a8-15b1fbbe87df does not exist
Oct  2 04:43:11 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d184b9a3-bdd6-4e80-abbf-7173b996d2ea does not exist
Oct  2 04:43:11 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b0df20be-356d-406c-8963-c0ce83f9eb68 does not exist
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:43:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:43:12 np0005465604 podman[364786]: 2025-10-02 08:43:12.551210493 +0000 UTC m=+0.064820231 container create ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:43:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:43:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:43:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:43:12 np0005465604 systemd[1]: Started libpod-conmon-ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381.scope.
Oct  2 04:43:12 np0005465604 podman[364786]: 2025-10-02 08:43:12.524187261 +0000 UTC m=+0.037797039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:43:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:43:12 np0005465604 podman[364786]: 2025-10-02 08:43:12.663142292 +0000 UTC m=+0.176752070 container init ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct  2 04:43:12 np0005465604 podman[364786]: 2025-10-02 08:43:12.674443154 +0000 UTC m=+0.188052882 container start ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:43:12 np0005465604 podman[364786]: 2025-10-02 08:43:12.678854981 +0000 UTC m=+0.192464769 container attach ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:43:12 np0005465604 keen_elgamal[364802]: 167 167
Oct  2 04:43:12 np0005465604 systemd[1]: libpod-ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381.scope: Deactivated successfully.
Oct  2 04:43:12 np0005465604 podman[364786]: 2025-10-02 08:43:12.682309109 +0000 UTC m=+0.195918847 container died ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:43:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4bf6feb1b1d310c3790632709fbad980e361b44674c88cb777120f373a46bc0e-merged.mount: Deactivated successfully.
Oct  2 04:43:12 np0005465604 podman[364786]: 2025-10-02 08:43:12.738223822 +0000 UTC m=+0.251833550 container remove ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elgamal, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:43:12 np0005465604 systemd[1]: libpod-conmon-ec5ee02adb12e3fc6dff3fc963110c68a721eabdba20b16b7b53c6f6f4529381.scope: Deactivated successfully.
Oct  2 04:43:12 np0005465604 nova_compute[260603]: 2025-10-02 08:43:12.767 2 INFO nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Creating config drive at /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config#033[00m
Oct  2 04:43:12 np0005465604 nova_compute[260603]: 2025-10-02 08:43:12.779 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpersbka20 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:12 np0005465604 podman[364826]: 2025-10-02 08:43:12.934648425 +0000 UTC m=+0.039435181 container create 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 04:43:12 np0005465604 nova_compute[260603]: 2025-10-02 08:43:12.938 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpersbka20" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:12 np0005465604 nova_compute[260603]: 2025-10-02 08:43:12.966 2 DEBUG nova.storage.rbd_utils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:12 np0005465604 nova_compute[260603]: 2025-10-02 08:43:12.970 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:12 np0005465604 systemd[1]: Started libpod-conmon-02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e.scope.
Oct  2 04:43:13 np0005465604 podman[364826]: 2025-10-02 08:43:12.917741967 +0000 UTC m=+0.022528803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:43:13 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:43:13 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:13 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:13 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:13 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:13 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:13 np0005465604 podman[364826]: 2025-10-02 08:43:13.039600215 +0000 UTC m=+0.144386981 container init 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:43:13 np0005465604 podman[364826]: 2025-10-02 08:43:13.057153423 +0000 UTC m=+0.161940189 container start 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 04:43:13 np0005465604 podman[364826]: 2025-10-02 08:43:13.062317064 +0000 UTC m=+0.167103830 container attach 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 04:43:13 np0005465604 nova_compute[260603]: 2025-10-02 08:43:13.099 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Successfully created port: af8bcfc4-3690-4b5f-9893-5555fa376203 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:43:13 np0005465604 nova_compute[260603]: 2025-10-02 08:43:13.184 2 DEBUG oslo_concurrency.processutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:13 np0005465604 nova_compute[260603]: 2025-10-02 08:43:13.185 2 INFO nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Deleting local config drive /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config because it was imported into RBD.#033[00m
Oct  2 04:43:13 np0005465604 kernel: tapd9b60bcb-8d: entered promiscuous mode
Oct  2 04:43:13 np0005465604 NetworkManager[45129]: <info>  [1759394593.2778] manager: (tapd9b60bcb-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Oct  2 04:43:13 np0005465604 nova_compute[260603]: 2025-10-02 08:43:13.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:13 np0005465604 nova_compute[260603]: 2025-10-02 08:43:13.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:13 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:13Z|01066|binding|INFO|Claiming lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for this chassis.
Oct  2 04:43:13 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:13Z|01067|binding|INFO|d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59: Claiming fa:16:3e:d2:24:8a 10.100.0.8
Oct  2 04:43:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:13.301 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:24:8a 10.100.0.8'], port_security=['fa:16:3e:d2:24:8a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1c24cd5c-a165-4fcf-b24d-245a60f7ea11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34c4e106-9919-4d6d-a50a-81b3894f2e5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae7dae3968e448f1b3ace692d9d76cff', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6912f4e-082b-4b15-9608-2b9595c16211', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b81b03c-f256-4f5a-8b01-0b7991a52f3e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:43:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:13.303 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 in datapath 34c4e106-9919-4d6d-a50a-81b3894f2e5e bound to our chassis#033[00m
Oct  2 04:43:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:13.304 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 34c4e106-9919-4d6d-a50a-81b3894f2e5e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 04:43:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:13.307 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1916c2e8-2118-4a35-9a7a-aa1e0c7f1755]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:13 np0005465604 systemd-udevd[364896]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:43:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 134 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct  2 04:43:13 np0005465604 systemd-machined[214636]: New machine qemu-130-instance-00000068.
Oct  2 04:43:13 np0005465604 NetworkManager[45129]: <info>  [1759394593.3552] device (tapd9b60bcb-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:43:13 np0005465604 NetworkManager[45129]: <info>  [1759394593.3561] device (tapd9b60bcb-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:43:13 np0005465604 systemd[1]: Started Virtual Machine qemu-130-instance-00000068.
Oct  2 04:43:13 np0005465604 nova_compute[260603]: 2025-10-02 08:43:13.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:13 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:13Z|01068|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 ovn-installed in OVS
Oct  2 04:43:13 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:13Z|01069|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 up in Southbound
Oct  2 04:43:13 np0005465604 nova_compute[260603]: 2025-10-02 08:43:13.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.209 2 DEBUG nova.network.neutron [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated VIF entry in instance network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.209 2 DEBUG nova.network.neutron [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.228 2 DEBUG oslo_concurrency.lockutils [req-12a88d17-f754-49c2-ad8a-c2a6491a5fc1 req-ce9d617a-90c0-4f45-86b7-ca7eb82893f7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:14 np0005465604 blissful_visvesvaraya[364861]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:43:14 np0005465604 blissful_visvesvaraya[364861]: --> relative data size: 1.0
Oct  2 04:43:14 np0005465604 blissful_visvesvaraya[364861]: --> All data devices are unavailable
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:14 np0005465604 systemd[1]: libpod-02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e.scope: Deactivated successfully.
Oct  2 04:43:14 np0005465604 systemd[1]: libpod-02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e.scope: Consumed 1.235s CPU time.
Oct  2 04:43:14 np0005465604 podman[364826]: 2025-10-02 08:43:14.381058087 +0000 UTC m=+1.485844843 container died 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:43:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-96ac4cdf434ed8002ab1d95fe73fe2cbc5d7ea67ac1ec563aef847af4b806956-merged.mount: Deactivated successfully.
Oct  2 04:43:14 np0005465604 podman[364826]: 2025-10-02 08:43:14.438231519 +0000 UTC m=+1.543018265 container remove 02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 04:43:14 np0005465604 systemd[1]: libpod-conmon-02c8f45a7b0cc1d0a55b1e1db0a47e2b97da406ca5c133160440508b7a98ae9e.scope: Deactivated successfully.
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.549 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394594.5490034, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.550 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Started (Lifecycle Event)#033[00m
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.577 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.584 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394594.5492742, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.585 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.605 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.608 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:43:14 np0005465604 nova_compute[260603]: 2025-10-02 08:43:14.626 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:43:15 np0005465604 podman[365129]: 2025-10-02 08:43:15.078002939 +0000 UTC m=+0.041725980 container create af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 04:43:15 np0005465604 systemd[1]: Started libpod-conmon-af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b.scope.
Oct  2 04:43:15 np0005465604 podman[365129]: 2025-10-02 08:43:15.059434611 +0000 UTC m=+0.023157692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:43:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:43:15 np0005465604 podman[365129]: 2025-10-02 08:43:15.197217416 +0000 UTC m=+0.160940467 container init af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 04:43:15 np0005465604 podman[365129]: 2025-10-02 08:43:15.203659367 +0000 UTC m=+0.167382408 container start af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:43:15 np0005465604 reverent_snyder[365145]: 167 167
Oct  2 04:43:15 np0005465604 systemd[1]: libpod-af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b.scope: Deactivated successfully.
Oct  2 04:43:15 np0005465604 conmon[365145]: conmon af4b363b8bf825c33cbf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b.scope/container/memory.events
Oct  2 04:43:15 np0005465604 podman[365129]: 2025-10-02 08:43:15.2214246 +0000 UTC m=+0.185147641 container attach af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:43:15 np0005465604 podman[365129]: 2025-10-02 08:43:15.222020549 +0000 UTC m=+0.185743580 container died af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:43:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6eeaadb0577d972205f43b8797a81184bbcfaff1a11309411ed4ac54e97d453b-merged.mount: Deactivated successfully.
Oct  2 04:43:15 np0005465604 podman[365129]: 2025-10-02 08:43:15.26793964 +0000 UTC m=+0.231662691 container remove af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_snyder, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:43:15 np0005465604 systemd[1]: libpod-conmon-af4b363b8bf825c33cbf3c2fe7ab04f61df9921301037aca1e9061ed42c64d3b.scope: Deactivated successfully.
Oct  2 04:43:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 134 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct  2 04:43:15 np0005465604 podman[365169]: 2025-10-02 08:43:15.452519413 +0000 UTC m=+0.051843417 container create 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:43:15 np0005465604 systemd[1]: Started libpod-conmon-226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93.scope.
Oct  2 04:43:15 np0005465604 podman[365169]: 2025-10-02 08:43:15.424318944 +0000 UTC m=+0.023642988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:43:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:43:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9369be09207f84f06dd7289a41d4f1a2c607596ec1b417c76d73e2a7739349e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9369be09207f84f06dd7289a41d4f1a2c607596ec1b417c76d73e2a7739349e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9369be09207f84f06dd7289a41d4f1a2c607596ec1b417c76d73e2a7739349e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9369be09207f84f06dd7289a41d4f1a2c607596ec1b417c76d73e2a7739349e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:15 np0005465604 podman[365169]: 2025-10-02 08:43:15.546798831 +0000 UTC m=+0.146122835 container init 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:43:15 np0005465604 podman[365169]: 2025-10-02 08:43:15.556039699 +0000 UTC m=+0.155363693 container start 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 04:43:15 np0005465604 podman[365169]: 2025-10-02 08:43:15.560081705 +0000 UTC m=+0.159405739 container attach 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 04:43:15 np0005465604 nova_compute[260603]: 2025-10-02 08:43:15.616 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Successfully updated port: af8bcfc4-3690-4b5f-9893-5555fa376203 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:43:15 np0005465604 nova_compute[260603]: 2025-10-02 08:43:15.640 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:15 np0005465604 nova_compute[260603]: 2025-10-02 08:43:15.640 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:15 np0005465604 nova_compute[260603]: 2025-10-02 08:43:15.641 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:43:15 np0005465604 nova_compute[260603]: 2025-10-02 08:43:15.754 2 DEBUG nova.compute.manager [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:15 np0005465604 nova_compute[260603]: 2025-10-02 08:43:15.754 2 DEBUG nova.compute.manager [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing instance network info cache due to event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:43:15 np0005465604 nova_compute[260603]: 2025-10-02 08:43:15.755 2 DEBUG oslo_concurrency.lockutils [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:15 np0005465604 nova_compute[260603]: 2025-10-02 08:43:15.834 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:43:16 np0005465604 nova_compute[260603]: 2025-10-02 08:43:16.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]: {
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:    "0": [
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:        {
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "devices": [
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "/dev/loop3"
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            ],
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_name": "ceph_lv0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_size": "21470642176",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "name": "ceph_lv0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "tags": {
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.cluster_name": "ceph",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.crush_device_class": "",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.encrypted": "0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.osd_id": "0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.type": "block",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.vdo": "0"
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            },
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "type": "block",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "vg_name": "ceph_vg0"
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:        }
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:    ],
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:    "1": [
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:        {
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "devices": [
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "/dev/loop4"
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            ],
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_name": "ceph_lv1",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_size": "21470642176",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "name": "ceph_lv1",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "tags": {
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.cluster_name": "ceph",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.crush_device_class": "",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.encrypted": "0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.osd_id": "1",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.type": "block",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.vdo": "0"
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            },
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "type": "block",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "vg_name": "ceph_vg1"
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:        }
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:    ],
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:    "2": [
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:        {
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "devices": [
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "/dev/loop5"
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            ],
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_name": "ceph_lv2",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_size": "21470642176",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "name": "ceph_lv2",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "tags": {
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.cluster_name": "ceph",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.crush_device_class": "",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.encrypted": "0",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.osd_id": "2",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.type": "block",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:                "ceph.vdo": "0"
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            },
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "type": "block",
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:            "vg_name": "ceph_vg2"
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:        }
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]:    ]
Oct  2 04:43:16 np0005465604 suspicious_bhaskara[365186]: }
Oct  2 04:43:16 np0005465604 podman[365169]: 2025-10-02 08:43:16.280665325 +0000 UTC m=+0.879989289 container died 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 04:43:16 np0005465604 systemd[1]: libpod-226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93.scope: Deactivated successfully.
Oct  2 04:43:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f9369be09207f84f06dd7289a41d4f1a2c607596ec1b417c76d73e2a7739349e-merged.mount: Deactivated successfully.
Oct  2 04:43:16 np0005465604 podman[365169]: 2025-10-02 08:43:16.420630027 +0000 UTC m=+1.019954001 container remove 226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bhaskara, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 04:43:16 np0005465604 systemd[1]: libpod-conmon-226d3b669cd64b934f1ce8901f4557d9d0f4840b3b421ca7a570eac8c9b4fe93.scope: Deactivated successfully.
Oct  2 04:43:17 np0005465604 podman[365351]: 2025-10-02 08:43:17.204347835 +0000 UTC m=+0.053527519 container create a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:43:17 np0005465604 systemd[1]: Started libpod-conmon-a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874.scope.
Oct  2 04:43:17 np0005465604 podman[365351]: 2025-10-02 08:43:17.177733025 +0000 UTC m=+0.026912749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:43:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:43:17 np0005465604 podman[365351]: 2025-10-02 08:43:17.296092314 +0000 UTC m=+0.145272048 container init a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:43:17 np0005465604 podman[365351]: 2025-10-02 08:43:17.3068864 +0000 UTC m=+0.156066074 container start a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 04:43:17 np0005465604 podman[365351]: 2025-10-02 08:43:17.310542175 +0000 UTC m=+0.159721859 container attach a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:43:17 np0005465604 lucid_khorana[365367]: 167 167
Oct  2 04:43:17 np0005465604 systemd[1]: libpod-a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874.scope: Deactivated successfully.
Oct  2 04:43:17 np0005465604 conmon[365367]: conmon a86b600ac1baf3efe657 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874.scope/container/memory.events
Oct  2 04:43:17 np0005465604 podman[365351]: 2025-10-02 08:43:17.313554458 +0000 UTC m=+0.162734112 container died a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:43:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Oct  2 04:43:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0b295db08522a35e224d23494706fb44006419feef4fbecb1bc2abab6bc863c9-merged.mount: Deactivated successfully.
Oct  2 04:43:17 np0005465604 podman[365351]: 2025-10-02 08:43:17.35691058 +0000 UTC m=+0.206090224 container remove a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:43:17 np0005465604 systemd[1]: libpod-conmon-a86b600ac1baf3efe6575b0a889e44fbe815472363ef020e00ea434dec81d874.scope: Deactivated successfully.
Oct  2 04:43:17 np0005465604 podman[365391]: 2025-10-02 08:43:17.576124473 +0000 UTC m=+0.055685688 container create bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 04:43:17 np0005465604 systemd[1]: Started libpod-conmon-bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a.scope.
Oct  2 04:43:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:43:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49761515669b71a43edc02b912cb4cc5147c76eb0b4e2a80cda9298c3651ffd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49761515669b71a43edc02b912cb4cc5147c76eb0b4e2a80cda9298c3651ffd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49761515669b71a43edc02b912cb4cc5147c76eb0b4e2a80cda9298c3651ffd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49761515669b71a43edc02b912cb4cc5147c76eb0b4e2a80cda9298c3651ffd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:17 np0005465604 podman[365391]: 2025-10-02 08:43:17.559397151 +0000 UTC m=+0.038958396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:43:17 np0005465604 podman[365391]: 2025-10-02 08:43:17.661552825 +0000 UTC m=+0.141114060 container init bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 04:43:17 np0005465604 podman[365391]: 2025-10-02 08:43:17.673887099 +0000 UTC m=+0.153448344 container start bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 04:43:17 np0005465604 podman[365391]: 2025-10-02 08:43:17.677527033 +0000 UTC m=+0.157088308 container attach bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:43:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]: {
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "osd_id": 2,
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "type": "bluestore"
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:    },
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "osd_id": 1,
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "type": "bluestore"
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:    },
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "osd_id": 0,
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:        "type": "bluestore"
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]:    }
Oct  2 04:43:18 np0005465604 determined_brahmagupta[365409]: }
Oct  2 04:43:18 np0005465604 systemd[1]: libpod-bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a.scope: Deactivated successfully.
Oct  2 04:43:18 np0005465604 podman[365391]: 2025-10-02 08:43:18.717649282 +0000 UTC m=+1.197210537 container died bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 04:43:18 np0005465604 systemd[1]: libpod-bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a.scope: Consumed 1.036s CPU time.
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.715 2 DEBUG nova.network.neutron [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updating instance_info_cache with network_info: [{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.741 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.741 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Instance network_info: |[{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.742 2 DEBUG oslo_concurrency.lockutils [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.742 2 DEBUG nova.network.neutron [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.746 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Start _get_guest_xml network_info=[{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:43:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-49761515669b71a43edc02b912cb4cc5147c76eb0b4e2a80cda9298c3651ffd1-merged.mount: Deactivated successfully.
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.760 2 WARNING nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.773 2 DEBUG nova.virt.libvirt.host [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.773 2 DEBUG nova.virt.libvirt.host [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.776 2 DEBUG nova.virt.libvirt.host [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.777 2 DEBUG nova.virt.libvirt.host [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.777 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.777 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.778 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.778 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.778 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.778 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.779 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.779 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.779 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.779 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.779 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.780 2 DEBUG nova.virt.hardware [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:43:18 np0005465604 nova_compute[260603]: 2025-10-02 08:43:18.782 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:18 np0005465604 podman[365391]: 2025-10-02 08:43:18.800445223 +0000 UTC m=+1.280006458 container remove bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brahmagupta, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  2 04:43:18 np0005465604 systemd[1]: libpod-conmon-bbe672a372480f8332825d48d46bd835cc6c0ee97ef9ef9189f8c7b5b9e9636a.scope: Deactivated successfully.
Oct  2 04:43:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:43:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:43:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:43:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:43:18 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 1acea6ce-11b4-4b1c-a669-d5663e6b4795 does not exist
Oct  2 04:43:18 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 64006efe-7a8f-4bc4-8270-d11f511e27d5 does not exist
Oct  2 04:43:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:43:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378220706' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.302 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.337 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 MiB/s wr, 39 op/s
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.343 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:43:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:43:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2198398843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.810 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.812 2 DEBUG nova.virt.libvirt.vif [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=105,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-lm9mu0gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:09Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=bccc9587-6f96-4032-ae07-56ab00988869,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.813 2 DEBUG nova.network.os_vif_util [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.814 2 DEBUG nova.network.os_vif_util [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.817 2 DEBUG nova.objects.instance [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid bccc9587-6f96-4032-ae07-56ab00988869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.852 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  <uuid>bccc9587-6f96-4032-ae07-56ab00988869</uuid>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  <name>instance-00000069</name>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488</nova:name>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:43:18</nova:creationTime>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <nova:port uuid="af8bcfc4-3690-4b5f-9893-5555fa376203">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <entry name="serial">bccc9587-6f96-4032-ae07-56ab00988869</entry>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <entry name="uuid">bccc9587-6f96-4032-ae07-56ab00988869</entry>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/bccc9587-6f96-4032-ae07-56ab00988869_disk">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/bccc9587-6f96-4032-ae07-56ab00988869_disk.config">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:c6:6d:19"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <target dev="tapaf8bcfc4-36"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/console.log" append="off"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:43:19 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:43:19 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:43:19 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:43:19 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.854 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Preparing to wait for external event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.855 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.855 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.855 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.857 2 DEBUG nova.virt.libvirt.vif [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=105,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-lm9mu0gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:09Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=bccc9587-6f96-4032-ae07-56ab00988869,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.857 2 DEBUG nova.network.os_vif_util [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.858 2 DEBUG nova.network.os_vif_util [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.859 2 DEBUG os_vif [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:19 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:43:19 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.860 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.860 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.865 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaf8bcfc4-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.865 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaf8bcfc4-36, col_values=(('external_ids', {'iface-id': 'af8bcfc4-3690-4b5f-9893-5555fa376203', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:6d:19', 'vm-uuid': 'bccc9587-6f96-4032-ae07-56ab00988869'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:19 np0005465604 NetworkManager[45129]: <info>  [1759394599.8680] manager: (tapaf8bcfc4-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.875 2 INFO os_vif [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36')#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.946 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.947 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.947 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:c6:6d:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.948 2 INFO nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Using config drive#033[00m
Oct  2 04:43:19 np0005465604 nova_compute[260603]: 2025-10-02 08:43:19.971 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.670 2 INFO nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Creating config drive at /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.679 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcs34ae8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.729 2 DEBUG nova.network.neutron [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updated VIF entry in instance network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.731 2 DEBUG nova.network.neutron [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updating instance_info_cache with network_info: [{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.736 2 DEBUG nova.compute.manager [req-0108e192-9c9f-449f-bb05-b3fa63ecf0e5 req-f783b68e-bf15-4395-a0f4-3122cb155090 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.738 2 DEBUG oslo_concurrency.lockutils [req-0108e192-9c9f-449f-bb05-b3fa63ecf0e5 req-f783b68e-bf15-4395-a0f4-3122cb155090 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.739 2 DEBUG oslo_concurrency.lockutils [req-0108e192-9c9f-449f-bb05-b3fa63ecf0e5 req-f783b68e-bf15-4395-a0f4-3122cb155090 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.739 2 DEBUG oslo_concurrency.lockutils [req-0108e192-9c9f-449f-bb05-b3fa63ecf0e5 req-f783b68e-bf15-4395-a0f4-3122cb155090 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.740 2 DEBUG nova.compute.manager [req-0108e192-9c9f-449f-bb05-b3fa63ecf0e5 req-f783b68e-bf15-4395-a0f4-3122cb155090 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Processing event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.742 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.749 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394600.7483912, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.749 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.754 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.761 2 INFO nova.virt.libvirt.driver [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance spawned successfully.#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.763 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.766 2 DEBUG oslo_concurrency.lockutils [req-d895a1ba-689e-41a3-8a9c-cc53c462ab40 req-58f63581-f345-4a41-825e-39cfa5fbd953 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.787 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.797 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.805 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.806 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.806 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.807 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.808 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.809 2 DEBUG nova.virt.libvirt.driver [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.819 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.832 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqcs34ae8" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.872 2 DEBUG nova.storage.rbd_utils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image bccc9587-6f96-4032-ae07-56ab00988869_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.879 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config bccc9587-6f96-4032-ae07-56ab00988869_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.939 2 INFO nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Took 14.76 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:43:20 np0005465604 nova_compute[260603]: 2025-10-02 08:43:20.940 2 DEBUG nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:21 np0005465604 nova_compute[260603]: 2025-10-02 08:43:21.026 2 INFO nova.compute.manager [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Took 15.78 seconds to build instance.#033[00m
Oct  2 04:43:21 np0005465604 nova_compute[260603]: 2025-10-02 08:43:21.051 2 DEBUG oslo_concurrency.lockutils [None req-ce6ca60b-0bbe-4775-8475-68ea05d1d3db 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:21 np0005465604 nova_compute[260603]: 2025-10-02 08:43:21.071 2 DEBUG oslo_concurrency.processutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config bccc9587-6f96-4032-ae07-56ab00988869_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:21 np0005465604 nova_compute[260603]: 2025-10-02 08:43:21.072 2 INFO nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Deleting local config drive /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869/disk.config because it was imported into RBD.#033[00m
Oct  2 04:43:21 np0005465604 kernel: tapaf8bcfc4-36: entered promiscuous mode
Oct  2 04:43:21 np0005465604 NetworkManager[45129]: <info>  [1759394601.1480] manager: (tapaf8bcfc4-36): new Tun device (/org/freedesktop/NetworkManager/Devices/421)
Oct  2 04:43:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:21Z|01070|binding|INFO|Claiming lport af8bcfc4-3690-4b5f-9893-5555fa376203 for this chassis.
Oct  2 04:43:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:21Z|01071|binding|INFO|af8bcfc4-3690-4b5f-9893-5555fa376203: Claiming fa:16:3e:c6:6d:19 10.100.0.5
Oct  2 04:43:21 np0005465604 nova_compute[260603]: 2025-10-02 08:43:21.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.167 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:6d:19 10.100.0.5'], port_security=['fa:16:3e:c6:6d:19 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bccc9587-6f96-4032-ae07-56ab00988869', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87949ce5-c546-4e93-ab3f-46b861ef5238 dd3198b8-8ad4-4f59-8253-4227db95b8da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e2b1c02-e727-474f-a9ba-53199387490d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=af8bcfc4-3690-4b5f-9893-5555fa376203) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.169 162357 INFO neutron.agent.ovn.metadata.agent [-] Port af8bcfc4-3690-4b5f-9893-5555fa376203 in datapath c7addd6c-480f-45ed-94c2-18d1d2248acb bound to our chassis#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.171 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c7addd6c-480f-45ed-94c2-18d1d2248acb#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.187 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c79239df-56a2-434c-99bd-da6a822e65cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.189 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc7addd6c-41 in ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.192 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc7addd6c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.192 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[07a5f01f-d560-4a5d-8750-22de15a53850]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.197 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe33414-204d-4186-a4a8-356e90c8bf2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 systemd-udevd[365641]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:43:21 np0005465604 systemd-machined[214636]: New machine qemu-131-instance-00000069.
Oct  2 04:43:21 np0005465604 NetworkManager[45129]: <info>  [1759394601.2142] device (tapaf8bcfc4-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:43:21 np0005465604 NetworkManager[45129]: <info>  [1759394601.2154] device (tapaf8bcfc4-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.217 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a65dc172-09da-4a87-977a-46e63f381b08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 systemd[1]: Started Virtual Machine qemu-131-instance-00000069.
Oct  2 04:43:21 np0005465604 nova_compute[260603]: 2025-10-02 08:43:21.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:21Z|01072|binding|INFO|Setting lport af8bcfc4-3690-4b5f-9893-5555fa376203 ovn-installed in OVS
Oct  2 04:43:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:21Z|01073|binding|INFO|Setting lport af8bcfc4-3690-4b5f-9893-5555fa376203 up in Southbound
Oct  2 04:43:21 np0005465604 nova_compute[260603]: 2025-10-02 08:43:21.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.248 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9121e7c-41a9-45a3-960c-334f3c40d8be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.292 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8885183c-e582-442d-b904-deea3fd29f27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 NetworkManager[45129]: <info>  [1759394601.2989] manager: (tapc7addd6c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/422)
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.298 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e67123e9-c497-4487-8b7a-e36e83a1f8bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 systemd-udevd[365645]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:43:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.346 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4f800863-f091-4c39-b8aa-dd53672db4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.350 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[44b88693-a466-41f2-b295-94a50ecfa73f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 NetworkManager[45129]: <info>  [1759394601.3736] device (tapc7addd6c-40): carrier: link connected
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.387 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f40998a1-d2c5-4991-a854-f646ef95598a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.405 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[61cefd43-b83f-4526-aa11-72aadefec9c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7addd6c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:47:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547997, 'reachable_time': 26922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365674, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.424 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3bd022-3809-4419-81c2-4f2db578e2e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:47c4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547997, 'tstamp': 547997}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365675, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.441 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5a43d388-6631-40d6-a241-3865052a47be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7addd6c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:47:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547997, 'reachable_time': 26922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365676, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.481 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7473c21e-636e-45da-96c7-201b6d7e3ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.545 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[da2e6af9-6af7-44d2-8f93-c0874adcec70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.546 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7addd6c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.547 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.547 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7addd6c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:21 np0005465604 NetworkManager[45129]: <info>  [1759394601.5493] manager: (tapc7addd6c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Oct  2 04:43:21 np0005465604 kernel: tapc7addd6c-40: entered promiscuous mode
Oct  2 04:43:21 np0005465604 nova_compute[260603]: 2025-10-02 08:43:21.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.560 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc7addd6c-40, col_values=(('external_ids', {'iface-id': '8f66d020-a258-4e14-aa3b-234835306a91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:21Z|01074|binding|INFO|Releasing lport 8f66d020-a258-4e14-aa3b-234835306a91 from this chassis (sb_readonly=0)
Oct  2 04:43:21 np0005465604 nova_compute[260603]: 2025-10-02 08:43:21.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.573 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c7addd6c-480f-45ed-94c2-18d1d2248acb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c7addd6c-480f-45ed-94c2-18d1d2248acb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:43:21 np0005465604 nova_compute[260603]: 2025-10-02 08:43:21.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.575 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8927aa0f-0a7c-4931-b2d2-de4f5a97cd4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.577 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-c7addd6c-480f-45ed-94c2-18d1d2248acb
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/c7addd6c-480f-45ed-94c2-18d1d2248acb.pid.haproxy
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID c7addd6c-480f-45ed-94c2-18d1d2248acb
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:43:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:21.578 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'env', 'PROCESS_TAG=haproxy-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c7addd6c-480f-45ed-94c2-18d1d2248acb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:43:21 np0005465604 podman[365708]: 2025-10-02 08:43:21.946276453 +0000 UTC m=+0.069409214 container create ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 04:43:21 np0005465604 systemd[1]: Started libpod-conmon-ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2.scope.
Oct  2 04:43:22 np0005465604 podman[365708]: 2025-10-02 08:43:21.90544472 +0000 UTC m=+0.028577491 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:43:22 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:43:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/644ad21fe1b1e836c44e9821eabc2e68b4056b923e74837b9185693307a8e425/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:43:22 np0005465604 podman[365708]: 2025-10-02 08:43:22.031290153 +0000 UTC m=+0.154422924 container init ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:43:22 np0005465604 podman[365708]: 2025-10-02 08:43:22.03825452 +0000 UTC m=+0.161387291 container start ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:43:22 np0005465604 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [NOTICE]   (365728) : New worker (365730) forked
Oct  2 04:43:22 np0005465604 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [NOTICE]   (365728) : Loading success.
Oct  2 04:43:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:43:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1904757560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:43:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:43:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1904757560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:43:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.877 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.878 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.878 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.878 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.878 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] No waiting events found dispatching network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.878 2 WARNING nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received unexpected event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.878 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Processing event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.879 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.880 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.880 2 DEBUG oslo_concurrency.lockutils [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.880 2 DEBUG nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] No waiting events found dispatching network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.880 2 WARNING nova.compute.manager [req-cd1a5e69-8683-4b49-a222-31a9ed1222d3 req-5b7dea27-2499-4f77-aa6a-049060f49f9e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received unexpected event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.982 2 INFO nova.compute.manager [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Rescuing#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.982 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.982 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:22 np0005465604 nova_compute[260603]: 2025-10-02 08:43:22.983 2 DEBUG nova.network.neutron [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.185 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.186 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394603.1847186, bccc9587-6f96-4032-ae07-56ab00988869 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.186 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] VM Started (Lifecycle Event)#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.190 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.193 2 INFO nova.virt.libvirt.driver [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Instance spawned successfully.#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.194 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.207 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.212 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.223 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.223 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.225 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.226 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.227 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.227 2 DEBUG nova.virt.libvirt.driver [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.254 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.254 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394603.1849718, bccc9587-6f96-4032-ae07-56ab00988869 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.254 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.287 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.292 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394603.1892343, bccc9587-6f96-4032-ae07-56ab00988869 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.292 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.309 2 INFO nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Took 13.78 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.310 2 DEBUG nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.319 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.322 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:43:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.355 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.381 2 INFO nova.compute.manager [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Took 14.78 seconds to build instance.#033[00m
Oct  2 04:43:23 np0005465604 nova_compute[260603]: 2025-10-02 08:43:23.399 2 DEBUG oslo_concurrency.lockutils [None req-56705259-1812-4d45-9967-7754e6b177bd 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:24 np0005465604 nova_compute[260603]: 2025-10-02 08:43:24.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:24 np0005465604 nova_compute[260603]: 2025-10-02 08:43:24.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 78 op/s
Oct  2 04:43:25 np0005465604 nova_compute[260603]: 2025-10-02 08:43:25.659 2 DEBUG nova.network.neutron [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:25 np0005465604 nova_compute[260603]: 2025-10-02 08:43:25.690 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:26 np0005465604 nova_compute[260603]: 2025-10-02 08:43:26.199 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 25 KiB/s wr, 107 op/s
Oct  2 04:43:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:43:27
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'images', 'backups', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta']
Oct  2 04:43:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:43:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:43:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:43:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:43:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:43:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:43:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:43:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:43:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:43:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:43:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:43:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:28Z|01075|binding|INFO|Releasing lport 8f66d020-a258-4e14-aa3b-234835306a91 from this chassis (sb_readonly=0)
Oct  2 04:43:28 np0005465604 NetworkManager[45129]: <info>  [1759394608.4272] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Oct  2 04:43:28 np0005465604 NetworkManager[45129]: <info>  [1759394608.4280] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Oct  2 04:43:28 np0005465604 nova_compute[260603]: 2025-10-02 08:43:28.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:28Z|01076|binding|INFO|Releasing lport 8f66d020-a258-4e14-aa3b-234835306a91 from this chassis (sb_readonly=0)
Oct  2 04:43:28 np0005465604 nova_compute[260603]: 2025-10-02 08:43:28.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:28 np0005465604 nova_compute[260603]: 2025-10-02 08:43:28.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:43:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 138 op/s
Oct  2 04:43:29 np0005465604 nova_compute[260603]: 2025-10-02 08:43:29.369 2 DEBUG nova.compute.manager [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:29 np0005465604 nova_compute[260603]: 2025-10-02 08:43:29.369 2 DEBUG nova.compute.manager [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing instance network info cache due to event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:43:29 np0005465604 nova_compute[260603]: 2025-10-02 08:43:29.369 2 DEBUG oslo_concurrency.lockutils [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:29 np0005465604 nova_compute[260603]: 2025-10-02 08:43:29.369 2 DEBUG oslo_concurrency.lockutils [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:29 np0005465604 nova_compute[260603]: 2025-10-02 08:43:29.370 2 DEBUG nova.network.neutron [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:43:29 np0005465604 nova_compute[260603]: 2025-10-02 08:43:29.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:29 np0005465604 nova_compute[260603]: 2025-10-02 08:43:29.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:30 np0005465604 nova_compute[260603]: 2025-10-02 08:43:30.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:43:30 np0005465604 nova_compute[260603]: 2025-10-02 08:43:30.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:43:30 np0005465604 nova_compute[260603]: 2025-10-02 08:43:30.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:43:30 np0005465604 nova_compute[260603]: 2025-10-02 08:43:30.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:30 np0005465604 nova_compute[260603]: 2025-10-02 08:43:30.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:30 np0005465604 nova_compute[260603]: 2025-10-02 08:43:30.557 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:43:30 np0005465604 nova_compute[260603]: 2025-10-02 08:43:30.557 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:43:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 134 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 137 op/s
Oct  2 04:43:31 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct  2 04:43:32 np0005465604 nova_compute[260603]: 2025-10-02 08:43:32.756 2 DEBUG nova.network.neutron [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updated VIF entry in instance network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:43:32 np0005465604 nova_compute[260603]: 2025-10-02 08:43:32.757 2 DEBUG nova.network.neutron [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updating instance_info_cache with network_info: [{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:32 np0005465604 nova_compute[260603]: 2025-10-02 08:43:32.787 2 DEBUG oslo_concurrency.lockutils [req-729b6307-3c6b-4e8d-8fad-03293eb39de0 req-948688f9-3113-40b6-a8cc-a0e05527c15f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:33 np0005465604 nova_compute[260603]: 2025-10-02 08:43:33.205 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 160 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 187 op/s
Oct  2 04:43:33 np0005465604 nova_compute[260603]: 2025-10-02 08:43:33.457 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:33 np0005465604 nova_compute[260603]: 2025-10-02 08:43:33.457 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:43:33 np0005465604 nova_compute[260603]: 2025-10-02 08:43:33.458 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:43:33 np0005465604 nova_compute[260603]: 2025-10-02 08:43:33.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:43:33 np0005465604 nova_compute[260603]: 2025-10-02 08:43:33.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:43:33 np0005465604 nova_compute[260603]: 2025-10-02 08:43:33.560 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:33 np0005465604 nova_compute[260603]: 2025-10-02 08:43:33.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:33 np0005465604 nova_compute[260603]: 2025-10-02 08:43:33.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:33 np0005465604 nova_compute[260603]: 2025-10-02 08:43:33.561 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:43:33 np0005465604 nova_compute[260603]: 2025-10-02 08:43:33.561 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:34 np0005465604 podman[365804]: 2025-10-02 08:43:34.0074139 +0000 UTC m=+0.074040858 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 04:43:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:43:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1518484444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.031 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:34 np0005465604 podman[365803]: 2025-10-02 08:43:34.06288112 +0000 UTC m=+0.123782260 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.152 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.153 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.157 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.157 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.370 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.371 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3368MB free_disk=59.922119140625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.371 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.372 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance bccc9587-6f96-4032-ae07-56ab00988869 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.546 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.720 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:34.826 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:34.827 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:34.827 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:34 np0005465604 nova_compute[260603]: 2025-10-02 08:43:34.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:43:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434508286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:43:35 np0005465604 nova_compute[260603]: 2025-10-02 08:43:35.166 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:35 np0005465604 nova_compute[260603]: 2025-10-02 08:43:35.174 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:43:35 np0005465604 nova_compute[260603]: 2025-10-02 08:43:35.192 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:43:35 np0005465604 nova_compute[260603]: 2025-10-02 08:43:35.217 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:43:35 np0005465604 nova_compute[260603]: 2025-10-02 08:43:35.218 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 160 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct  2 04:43:35 np0005465604 nova_compute[260603]: 2025-10-02 08:43:35.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:43:35 np0005465604 nova_compute[260603]: 2025-10-02 08:43:35.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  2 04:43:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:35Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:6d:19 10.100.0.5
Oct  2 04:43:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:35Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:6d:19 10.100.0.5
Oct  2 04:43:36 np0005465604 nova_compute[260603]: 2025-10-02 08:43:36.250 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct  2 04:43:36 np0005465604 nova_compute[260603]: 2025-10-02 08:43:36.531 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:43:36 np0005465604 nova_compute[260603]: 2025-10-02 08:43:36.531 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  2 04:43:36 np0005465604 nova_compute[260603]: 2025-10-02 08:43:36.546 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  2 04:43:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 186 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 150 op/s
Oct  2 04:43:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:38 np0005465604 kernel: tapd9b60bcb-8d (unregistering): left promiscuous mode
Oct  2 04:43:38 np0005465604 nova_compute[260603]: 2025-10-02 08:43:38.534 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:43:38 np0005465604 NetworkManager[45129]: <info>  [1759394618.5403] device (tapd9b60bcb-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:43:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:38Z|01077|binding|INFO|Releasing lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 from this chassis (sb_readonly=0)
Oct  2 04:43:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:38Z|01078|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 down in Southbound
Oct  2 04:43:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:38Z|01079|binding|INFO|Removing iface tapd9b60bcb-8d ovn-installed in OVS
Oct  2 04:43:38 np0005465604 nova_compute[260603]: 2025-10-02 08:43:38.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:43:38 np0005465604 nova_compute[260603]: 2025-10-02 08:43:38.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:43:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:38.594 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:24:8a 10.100.0.8'], port_security=['fa:16:3e:d2:24:8a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1c24cd5c-a165-4fcf-b24d-245a60f7ea11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34c4e106-9919-4d6d-a50a-81b3894f2e5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae7dae3968e448f1b3ace692d9d76cff', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6912f4e-082b-4b15-9608-2b9595c16211', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b81b03c-f256-4f5a-8b01-0b7991a52f3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:43:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:38.596 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 in datapath 34c4e106-9919-4d6d-a50a-81b3894f2e5e unbound from our chassis
Oct  2 04:43:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:38.597 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 34c4e106-9919-4d6d-a50a-81b3894f2e5e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct  2 04:43:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:38.598 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4e5f30-3a77-4f77-b01b-e45baf37cba4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:43:38 np0005465604 nova_compute[260603]: 2025-10-02 08:43:38.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:43:38 np0005465604 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct  2 04:43:38 np0005465604 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000068.scope: Consumed 13.084s CPU time.
Oct  2 04:43:38 np0005465604 systemd-machined[214636]: Machine qemu-130-instance-00000068 terminated.
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013177266474571857 of space, bias 1.0, pg target 0.3953179942371557 quantized to 32 (current 32)
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:43:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.268 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance shutdown successfully after 13 seconds.
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.277 2 INFO nova.virt.libvirt.driver [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance destroyed successfully.
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.277 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'numa_topology' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.299 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Attempting rescue
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.300 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.306 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.307 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Creating image(s)
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.332 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.336 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:43:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 200 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 168 op/s
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.380 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.405 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.409 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.514 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.515 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.516 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.516 2 DEBUG oslo_concurrency.lockutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.537 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.544 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.853 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.855 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'migration_context' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.872 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.873 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start _get_guest_xml network_info=[{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "vif_mac": "fa:16:3e:d2:24:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.874 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'resources' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.899 2 WARNING nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.906 2 DEBUG nova.virt.libvirt.host [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.907 2 DEBUG nova.virt.libvirt.host [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.912 2 DEBUG nova.virt.libvirt.host [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.913 2 DEBUG nova.virt.libvirt.host [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.913 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.914 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.915 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.915 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.915 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.916 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.916 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.916 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.917 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.917 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.917 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.918 2 DEBUG nova.virt.hardware [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.918 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:43:39 np0005465604 nova_compute[260603]: 2025-10-02 08:43:39.952 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:43:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:43:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362286071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:43:40 np0005465604 nova_compute[260603]: 2025-10-02 08:43:40.434 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:40 np0005465604 nova_compute[260603]: 2025-10-02 08:43:40.436 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:43:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168792568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:43:40 np0005465604 nova_compute[260603]: 2025-10-02 08:43:40.922 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:40 np0005465604 nova_compute[260603]: 2025-10-02 08:43:40.924 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:41 np0005465604 podman[366031]: 2025-10-02 08:43:41.030375835 +0000 UTC m=+0.086219178 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 04:43:41 np0005465604 podman[366030]: 2025-10-02 08:43:41.052637679 +0000 UTC m=+0.109231005 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 04:43:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 200 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Oct  2 04:43:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:43:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4283090342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.425 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.429 2 DEBUG nova.virt.libvirt.vif [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:43:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-618462512',display_name='tempest-ServerRescueTestJSONUnderV235-server-618462512',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-618462512',id=104,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:43:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae7dae3968e448f1b3ace692d9d76cff',ramdisk_id='',reservation_id='r-g69g07mg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-299264470',owner_user_name='tempest-ServerRescueTestJSONUnderV235-299264470-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:21Z,user_data=None,user_id='7e27caab7dd34e4a9cac5f4f1880fad8',uuid=1c24cd5c-a165-4fcf-b24d-245a60f7ea11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "vif_mac": "fa:16:3e:d2:24:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.429 2 DEBUG nova.network.os_vif_util [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converting VIF {"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "vif_mac": "fa:16:3e:d2:24:8a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.431 2 DEBUG nova.network.os_vif_util [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.434 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.457 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  <uuid>1c24cd5c-a165-4fcf-b24d-245a60f7ea11</uuid>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  <name>instance-00000068</name>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-618462512</nova:name>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:43:39</nova:creationTime>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <nova:user uuid="7e27caab7dd34e4a9cac5f4f1880fad8">tempest-ServerRescueTestJSONUnderV235-299264470-project-member</nova:user>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <nova:project uuid="ae7dae3968e448f1b3ace692d9d76cff">tempest-ServerRescueTestJSONUnderV235-299264470</nova:project>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <nova:port uuid="d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <entry name="serial">1c24cd5c-a165-4fcf-b24d-245a60f7ea11</entry>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <entry name="uuid">1c24cd5c-a165-4fcf-b24d-245a60f7ea11</entry>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.rescue">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <target dev="vdb" bus="virtio"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config.rescue">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:d2:24:8a"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <target dev="tapd9b60bcb-8d"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/console.log" append="off"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:43:41 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:43:41 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:43:41 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:43:41 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.470 2 INFO nova.virt.libvirt.driver [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance destroyed successfully.#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.529 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.530 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.530 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.530 2 DEBUG nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] No VIF found with MAC fa:16:3e:d2:24:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.531 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Using config drive#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.566 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.595 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:43:41 np0005465604 nova_compute[260603]: 2025-10-02 08:43:41.633 2 DEBUG nova.objects.instance [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'keypairs' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.246 2 DEBUG nova.compute.manager [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-unplugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.246 2 DEBUG oslo_concurrency.lockutils [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.247 2 DEBUG oslo_concurrency.lockutils [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.247 2 DEBUG oslo_concurrency.lockutils [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.248 2 DEBUG nova.compute.manager [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] No waiting events found dispatching network-vif-unplugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.248 2 WARNING nova.compute.manager [req-67425ac7-9411-497b-b28d-c4425362e9bb req-f69ea8b9-b195-4c2d-8f83-1b9e722cf2a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received unexpected event network-vif-unplugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.337 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Creating config drive at /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.347 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppx7b47nn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.517 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppx7b47nn" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.554 2 DEBUG nova.storage.rbd_utils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] rbd image 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.560 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.618 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.768 2 DEBUG oslo_concurrency.processutils [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue 1c24cd5c-a165-4fcf-b24d-245a60f7ea11_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.769 2 INFO nova.virt.libvirt.driver [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Deleting local config drive /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 04:43:42 np0005465604 kernel: tapd9b60bcb-8d: entered promiscuous mode
Oct  2 04:43:42 np0005465604 NetworkManager[45129]: <info>  [1759394622.8405] manager: (tapd9b60bcb-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Oct  2 04:43:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:42Z|01080|binding|INFO|Claiming lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for this chassis.
Oct  2 04:43:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:42Z|01081|binding|INFO|d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59: Claiming fa:16:3e:d2:24:8a 10.100.0.8
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:42Z|01082|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 ovn-installed in OVS
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:42 np0005465604 nova_compute[260603]: 2025-10-02 08:43:42.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:42 np0005465604 systemd-udevd[366158]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:43:42 np0005465604 NetworkManager[45129]: <info>  [1759394622.8921] device (tapd9b60bcb-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:43:42 np0005465604 NetworkManager[45129]: <info>  [1759394622.8931] device (tapd9b60bcb-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:43:42 np0005465604 systemd-machined[214636]: New machine qemu-132-instance-00000068.
Oct  2 04:43:42 np0005465604 systemd[1]: Started Virtual Machine qemu-132-instance-00000068.
Oct  2 04:43:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:42Z|01083|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 up in Southbound
Oct  2 04:43:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:42.934 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:24:8a 10.100.0.8'], port_security=['fa:16:3e:d2:24:8a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1c24cd5c-a165-4fcf-b24d-245a60f7ea11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34c4e106-9919-4d6d-a50a-81b3894f2e5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae7dae3968e448f1b3ace692d9d76cff', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e6912f4e-082b-4b15-9608-2b9595c16211', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b81b03c-f256-4f5a-8b01-0b7991a52f3e, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:43:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:42.935 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 in datapath 34c4e106-9919-4d6d-a50a-81b3894f2e5e bound to our chassis#033[00m
Oct  2 04:43:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:42.936 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 34c4e106-9919-4d6d-a50a-81b3894f2e5e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 04:43:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:42.937 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b60795d9-0ed3-46df-ac87-c8a166529696]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 6.1 MiB/s wr, 149 op/s
Oct  2 04:43:43 np0005465604 nova_compute[260603]: 2025-10-02 08:43:43.820 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:43:43 np0005465604 nova_compute[260603]: 2025-10-02 08:43:43.822 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394623.8203838, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:43:43 np0005465604 nova_compute[260603]: 2025-10-02 08:43:43.823 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:43:43 np0005465604 nova_compute[260603]: 2025-10-02 08:43:43.831 2 DEBUG nova.compute.manager [None req-3aa63771-3d3e-4ba6-8301-efdadaa7037d 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:43 np0005465604 nova_compute[260603]: 2025-10-02 08:43:43.886 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:43 np0005465604 nova_compute[260603]: 2025-10-02 08:43:43.892 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:43:43 np0005465604 nova_compute[260603]: 2025-10-02 08:43:43.934 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394623.82243, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:43:43 np0005465604 nova_compute[260603]: 2025-10-02 08:43:43.934 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Started (Lifecycle Event)#033[00m
Oct  2 04:43:43 np0005465604 nova_compute[260603]: 2025-10-02 08:43:43.953 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:43 np0005465604 nova_compute[260603]: 2025-10-02 08:43:43.959 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.456 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.457 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.457 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.458 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.458 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] No waiting events found dispatching network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.459 2 WARNING nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received unexpected event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for instance with vm_state rescued and task_state None.#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.459 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.460 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.460 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.461 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.461 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] No waiting events found dispatching network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.461 2 WARNING nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received unexpected event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for instance with vm_state rescued and task_state None.#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.462 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.462 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.463 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.463 2 DEBUG oslo_concurrency.lockutils [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.463 2 DEBUG nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] No waiting events found dispatching network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.463 2 WARNING nova.compute.manager [req-79331213-6f48-499a-82e0-ecf8ab006c18 req-e5780b4c-2db1-40bc-a3e0-16d33fddb7d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received unexpected event network-vif-plugged-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 for instance with vm_state rescued and task_state None.#033[00m
Oct  2 04:43:44 np0005465604 nova_compute[260603]: 2025-10-02 08:43:44.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 391 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Oct  2 04:43:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.0 MiB/s wr, 133 op/s
Oct  2 04:43:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 140 op/s
Oct  2 04:43:49 np0005465604 nova_compute[260603]: 2025-10-02 08:43:49.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:49 np0005465604 nova_compute[260603]: 2025-10-02 08:43:49.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:50 np0005465604 nova_compute[260603]: 2025-10-02 08:43:50.229 2 DEBUG nova.compute.manager [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:50 np0005465604 nova_compute[260603]: 2025-10-02 08:43:50.230 2 DEBUG nova.compute.manager [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing instance network info cache due to event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:43:50 np0005465604 nova_compute[260603]: 2025-10-02 08:43:50.230 2 DEBUG oslo_concurrency.lockutils [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:50 np0005465604 nova_compute[260603]: 2025-10-02 08:43:50.231 2 DEBUG oslo_concurrency.lockutils [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:50 np0005465604 nova_compute[260603]: 2025-10-02 08:43:50.231 2 DEBUG nova.network.neutron [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:43:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 246 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct  2 04:43:51 np0005465604 nova_compute[260603]: 2025-10-02 08:43:51.408 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:51 np0005465604 nova_compute[260603]: 2025-10-02 08:43:51.409 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:51 np0005465604 nova_compute[260603]: 2025-10-02 08:43:51.425 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:43:51 np0005465604 nova_compute[260603]: 2025-10-02 08:43:51.507 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:51 np0005465604 nova_compute[260603]: 2025-10-02 08:43:51.508 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:51 np0005465604 nova_compute[260603]: 2025-10-02 08:43:51.515 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:43:51 np0005465604 nova_compute[260603]: 2025-10-02 08:43:51.515 2 INFO nova.compute.claims [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:43:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:51.645 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:43:51 np0005465604 nova_compute[260603]: 2025-10-02 08:43:51.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:51.647 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:43:51 np0005465604 nova_compute[260603]: 2025-10-02 08:43:51.683 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:43:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/567126948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.172 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.178 2 DEBUG nova.compute.provider_tree [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.195 2 DEBUG nova.scheduler.client.report [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.221 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.222 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.283 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.283 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.305 2 INFO nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.328 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.438 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.439 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.440 2 INFO nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Creating image(s)#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.464 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.491 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.517 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.521 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.559 2 DEBUG nova.compute.manager [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.560 2 DEBUG nova.compute.manager [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing instance network info cache due to event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.560 2 DEBUG oslo_concurrency.lockutils [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.588 2 DEBUG nova.policy [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.592 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.592 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.593 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.593 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.617 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.621 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.660 2 DEBUG nova.network.neutron [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated VIF entry in instance network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.661 2 DEBUG nova.network.neutron [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.710 2 DEBUG oslo_concurrency.lockutils [req-8d34806e-6d5e-4763-83dd-7506d5388a77 req-9db7c966-c533-4d67-8e52-4b6148d62550 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.710 2 DEBUG oslo_concurrency.lockutils [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.711 2 DEBUG nova.network.neutron [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:43:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:52 np0005465604 nova_compute[260603]: 2025-10-02 08:43:52.949 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:53 np0005465604 nova_compute[260603]: 2025-10-02 08:43:53.027 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:43:53 np0005465604 nova_compute[260603]: 2025-10-02 08:43:53.162 2 DEBUG nova.objects.instance [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:43:53 np0005465604 nova_compute[260603]: 2025-10-02 08:43:53.179 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:43:53 np0005465604 nova_compute[260603]: 2025-10-02 08:43:53.180 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Ensure instance console log exists: /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:43:53 np0005465604 nova_compute[260603]: 2025-10-02 08:43:53.181 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:53 np0005465604 nova_compute[260603]: 2025-10-02 08:43:53.181 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:53 np0005465604 nova_compute[260603]: 2025-10-02 08:43:53.182 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 267 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 95 op/s
Oct  2 04:43:53 np0005465604 nova_compute[260603]: 2025-10-02 08:43:53.547 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Successfully created port: 41b1bf69-0f3d-4e03-9c65-112b4a4b731d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:43:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:53.648 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.466 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Successfully updated port: 41b1bf69-0f3d-4e03-9c65-112b4a4b731d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.487 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.487 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.488 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.575 2 DEBUG nova.compute.manager [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-changed-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.575 2 DEBUG nova.compute.manager [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Refreshing instance network info cache due to event network-changed-41b1bf69-0f3d-4e03-9c65-112b4a4b731d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.576 2 DEBUG oslo_concurrency.lockutils [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.700 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.962 2 DEBUG nova.network.neutron [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated VIF entry in instance network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.963 2 DEBUG nova.network.neutron [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.979 2 DEBUG oslo_concurrency.lockutils [req-04bc5414-0ad5-4148-ab2c-38c950d3241a req-4ab1ec7d-2699-4048-b0d0-0f5dd22c747a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:54 np0005465604 nova_compute[260603]: 2025-10-02 08:43:54.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 267 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 718 KiB/s wr, 75 op/s
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.512 2 DEBUG nova.network.neutron [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Updating instance_info_cache with network_info: [{"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.531 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.531 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Instance network_info: |[{"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.532 2 DEBUG oslo_concurrency.lockutils [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.532 2 DEBUG nova.network.neutron [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Refreshing network info cache for port 41b1bf69-0f3d-4e03-9c65-112b4a4b731d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.535 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Start _get_guest_xml network_info=[{"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.539 2 WARNING nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.543 2 DEBUG nova.virt.libvirt.host [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.544 2 DEBUG nova.virt.libvirt.host [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.549 2 DEBUG nova.virt.libvirt.host [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.549 2 DEBUG nova.virt.libvirt.host [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.550 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.550 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.551 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.551 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.551 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.551 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.552 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.552 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.552 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.552 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.553 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.553 2 DEBUG nova.virt.hardware [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.555 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.946 2 DEBUG nova.compute.manager [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.947 2 DEBUG nova.compute.manager [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing instance network info cache due to event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.949 2 DEBUG oslo_concurrency.lockutils [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.949 2 DEBUG oslo_concurrency.lockutils [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:43:55 np0005465604 nova_compute[260603]: 2025-10-02 08:43:55.950 2 DEBUG nova.network.neutron [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:43:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:43:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008580821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.040 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.070 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.074 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:43:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2812076317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.526 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.529 2 DEBUG nova.virt.libvirt.vif [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=106,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-rpve6319',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:52Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=7f00dee7-5a33-4ae8-a230-2ed05afd17c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.529 2 DEBUG nova.network.os_vif_util [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.532 2 DEBUG nova.network.os_vif_util [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.533 2 DEBUG nova.objects.instance [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.550 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  <uuid>7f00dee7-5a33-4ae8-a230-2ed05afd17c3</uuid>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  <name>instance-0000006a</name>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159</nova:name>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:43:55</nova:creationTime>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <nova:port uuid="41b1bf69-0f3d-4e03-9c65-112b4a4b731d">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <entry name="serial">7f00dee7-5a33-4ae8-a230-2ed05afd17c3</entry>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <entry name="uuid">7f00dee7-5a33-4ae8-a230-2ed05afd17c3</entry>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:80:a1:dd"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <target dev="tap41b1bf69-0f"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/console.log" append="off"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:43:56 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:43:56 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:43:56 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:43:56 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.552 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Preparing to wait for external event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.553 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.553 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.554 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.555 2 DEBUG nova.virt.libvirt.vif [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=106,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-rpve6319',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:43:52Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=7f00dee7-5a33-4ae8-a230-2ed05afd17c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.555 2 DEBUG nova.network.os_vif_util [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.557 2 DEBUG nova.network.os_vif_util [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.557 2 DEBUG os_vif [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.560 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b1bf69-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.567 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41b1bf69-0f, col_values=(('external_ids', {'iface-id': '41b1bf69-0f3d-4e03-9c65-112b4a4b731d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:a1:dd', 'vm-uuid': '7f00dee7-5a33-4ae8-a230-2ed05afd17c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:56 np0005465604 NetworkManager[45129]: <info>  [1759394636.5696] manager: (tap41b1bf69-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.579 2 INFO os_vif [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f')#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.633 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.633 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.634 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:80:a1:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.634 2 INFO nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Using config drive#033[00m
Oct  2 04:43:56 np0005465604 nova_compute[260603]: 2025-10-02 08:43:56.664 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 288 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 132 op/s
Oct  2 04:43:57 np0005465604 nova_compute[260603]: 2025-10-02 08:43:57.623 2 INFO nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Creating config drive at /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config#033[00m
Oct  2 04:43:57 np0005465604 nova_compute[260603]: 2025-10-02 08:43:57.633 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpocab6tuf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:57 np0005465604 nova_compute[260603]: 2025-10-02 08:43:57.789 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpocab6tuf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:57 np0005465604 nova_compute[260603]: 2025-10-02 08:43:57.831 2 DEBUG nova.storage.rbd_utils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:43:57 np0005465604 nova_compute[260603]: 2025-10-02 08:43:57.836 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:43:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:43:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:43:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:43:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:43:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:43:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:43:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.077 2 DEBUG oslo_concurrency.processutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config 7f00dee7-5a33-4ae8-a230-2ed05afd17c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.078 2 INFO nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Deleting local config drive /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3/disk.config because it was imported into RBD.#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.120 2 DEBUG nova.network.neutron [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated VIF entry in instance network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.121 2 DEBUG nova.network.neutron [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:58 np0005465604 NetworkManager[45129]: <info>  [1759394638.1455] manager: (tap41b1bf69-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Oct  2 04:43:58 np0005465604 kernel: tap41b1bf69-0f: entered promiscuous mode
Oct  2 04:43:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:58Z|01084|binding|INFO|Claiming lport 41b1bf69-0f3d-4e03-9c65-112b4a4b731d for this chassis.
Oct  2 04:43:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:58Z|01085|binding|INFO|41b1bf69-0f3d-4e03-9c65-112b4a4b731d: Claiming fa:16:3e:80:a1:dd 10.100.0.11
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.148 2 DEBUG oslo_concurrency.lockutils [req-ddd2ee03-6b7a-457a-8577-dce3ce1d88c3 req-e9531a5b-d34b-4d34-ab9e-d3f5af407489 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.156 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:a1:dd 10.100.0.11'], port_security=['fa:16:3e:80:a1:dd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '7f00dee7-5a33-4ae8-a230-2ed05afd17c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd3198b8-8ad4-4f59-8253-4227db95b8da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e2b1c02-e727-474f-a9ba-53199387490d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=41b1bf69-0f3d-4e03-9c65-112b4a4b731d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.158 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 41b1bf69-0f3d-4e03-9c65-112b4a4b731d in datapath c7addd6c-480f-45ed-94c2-18d1d2248acb bound to our chassis#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.161 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c7addd6c-480f-45ed-94c2-18d1d2248acb#033[00m
Oct  2 04:43:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:58Z|01086|binding|INFO|Setting lport 41b1bf69-0f3d-4e03-9c65-112b4a4b731d up in Southbound
Oct  2 04:43:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:43:58Z|01087|binding|INFO|Setting lport 41b1bf69-0f3d-4e03-9c65-112b4a4b731d ovn-installed in OVS
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.189 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8730b7-8cc8-4ed5-baa7-de06302b50a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:58 np0005465604 systemd-udevd[366555]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:43:58 np0005465604 systemd-machined[214636]: New machine qemu-133-instance-0000006a.
Oct  2 04:43:58 np0005465604 systemd[1]: Started Virtual Machine qemu-133-instance-0000006a.
Oct  2 04:43:58 np0005465604 NetworkManager[45129]: <info>  [1759394638.2184] device (tap41b1bf69-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:43:58 np0005465604 NetworkManager[45129]: <info>  [1759394638.2207] device (tap41b1bf69-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.231 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9c3dc5-09a6-411a-8e9b-b034a55e1ebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.236 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e825acd3-ecda-4ec6-a254-7df34312606f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.281 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[db78ca5f-73ad-408a-985f-06515ed01f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.306 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0035a7-a214-4fbe-9470-98eba81c7512]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7addd6c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:47:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547997, 'reachable_time': 26922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366567, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.329 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[942d2813-fee7-43b3-b9ed-e7db82a41231]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc7addd6c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548011, 'tstamp': 548011}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366569, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc7addd6c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548014, 'tstamp': 548014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366569, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.332 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7addd6c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.336 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7addd6c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.336 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.337 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc7addd6c-40, col_values=(('external_ids', {'iface-id': '8f66d020-a258-4e14-aa3b-234835306a91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:43:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:43:58.338 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.413 2 DEBUG nova.compute.manager [req-ae6cae4d-fc7f-4f66-ad07-3c05861e5220 req-4384b2d1-510a-45da-a765-c25ca00c929f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.414 2 DEBUG oslo_concurrency.lockutils [req-ae6cae4d-fc7f-4f66-ad07-3c05861e5220 req-4384b2d1-510a-45da-a765-c25ca00c929f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.414 2 DEBUG oslo_concurrency.lockutils [req-ae6cae4d-fc7f-4f66-ad07-3c05861e5220 req-4384b2d1-510a-45da-a765-c25ca00c929f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.415 2 DEBUG oslo_concurrency.lockutils [req-ae6cae4d-fc7f-4f66-ad07-3c05861e5220 req-4384b2d1-510a-45da-a765-c25ca00c929f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.415 2 DEBUG nova.compute.manager [req-ae6cae4d-fc7f-4f66-ad07-3c05861e5220 req-4384b2d1-510a-45da-a765-c25ca00c929f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Processing event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.606 2 DEBUG nova.network.neutron [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Updated VIF entry in instance network info cache for port 41b1bf69-0f3d-4e03-9c65-112b4a4b731d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.607 2 DEBUG nova.network.neutron [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Updating instance_info_cache with network_info: [{"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:43:58 np0005465604 nova_compute[260603]: 2025-10-02 08:43:58.625 2 DEBUG oslo_concurrency.lockutils [req-b5073b03-a350-4aec-aa25-1895e7acd68c req-c0b258d2-d012-4dcb-9e73-c8578539e8cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7f00dee7-5a33-4ae8-a230-2ed05afd17c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.104 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394639.1039274, 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.105 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] VM Started (Lifecycle Event)#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.108 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.116 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.122 2 INFO nova.virt.libvirt.driver [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Instance spawned successfully.#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.123 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.128 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.132 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.149 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.149 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.150 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.150 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.151 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.152 2 DEBUG nova.virt.libvirt.driver [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.156 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.156 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394639.1051219, 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.156 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.186 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.193 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394639.112804, 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.193 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.226 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.229 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.242 2 INFO nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Took 6.80 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.243 2 DEBUG nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.255 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.297 2 INFO nova.compute.manager [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Took 7.83 seconds to build instance.#033[00m
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.321 2 DEBUG oslo_concurrency.lockutils [None req-865b4bf1-7da1-4e6b-b698-f4c86d59d165 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:43:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 293 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct  2 04:43:59 np0005465604 nova_compute[260603]: 2025-10-02 08:43:59.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:00 np0005465604 nova_compute[260603]: 2025-10-02 08:44:00.720 2 DEBUG nova.compute.manager [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:44:00 np0005465604 nova_compute[260603]: 2025-10-02 08:44:00.720 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:00 np0005465604 nova_compute[260603]: 2025-10-02 08:44:00.720 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:00 np0005465604 nova_compute[260603]: 2025-10-02 08:44:00.721 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:00 np0005465604 nova_compute[260603]: 2025-10-02 08:44:00.721 2 DEBUG nova.compute.manager [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] No waiting events found dispatching network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:44:00 np0005465604 nova_compute[260603]: 2025-10-02 08:44:00.721 2 WARNING nova.compute.manager [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received unexpected event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d for instance with vm_state active and task_state None.#033[00m
Oct  2 04:44:00 np0005465604 nova_compute[260603]: 2025-10-02 08:44:00.721 2 DEBUG nova.compute.manager [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:44:00 np0005465604 nova_compute[260603]: 2025-10-02 08:44:00.721 2 DEBUG nova.compute.manager [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing instance network info cache due to event network-changed-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:44:00 np0005465604 nova_compute[260603]: 2025-10-02 08:44:00.722 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:44:00 np0005465604 nova_compute[260603]: 2025-10-02 08:44:00.722 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:44:00 np0005465604 nova_compute[260603]: 2025-10-02 08:44:00.722 2 DEBUG nova.network.neutron [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Refreshing network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:44:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 293 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct  2 04:44:01 np0005465604 nova_compute[260603]: 2025-10-02 08:44:01.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 295 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 162 op/s
Oct  2 04:44:03 np0005465604 nova_compute[260603]: 2025-10-02 08:44:03.651 2 DEBUG nova.network.neutron [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updated VIF entry in instance network info cache for port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:44:03 np0005465604 nova_compute[260603]: 2025-10-02 08:44:03.652 2 DEBUG nova.network.neutron [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [{"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:44:03 np0005465604 nova_compute[260603]: 2025-10-02 08:44:03.667 2 DEBUG oslo_concurrency.lockutils [req-48d83df4-b506-4063-bb1f-a222072cf371 req-c070ec64-0b6d-47ac-95a5-f9df22ca2261 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1c24cd5c-a165-4fcf-b24d-245a60f7ea11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:44:04 np0005465604 nova_compute[260603]: 2025-10-02 08:44:04.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:05 np0005465604 podman[366613]: 2025-10-02 08:44:05.055952996 +0000 UTC m=+0.104274990 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 04:44:05 np0005465604 podman[366612]: 2025-10-02 08:44:05.083705491 +0000 UTC m=+0.136224266 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:44:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 295 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 160 op/s
Oct  2 04:44:05 np0005465604 nova_compute[260603]: 2025-10-02 08:44:05.948 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:05 np0005465604 nova_compute[260603]: 2025-10-02 08:44:05.949 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:05 np0005465604 nova_compute[260603]: 2025-10-02 08:44:05.949 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:05 np0005465604 nova_compute[260603]: 2025-10-02 08:44:05.949 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:05 np0005465604 nova_compute[260603]: 2025-10-02 08:44:05.950 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:05 np0005465604 nova_compute[260603]: 2025-10-02 08:44:05.951 2 INFO nova.compute.manager [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Terminating instance#033[00m
Oct  2 04:44:05 np0005465604 nova_compute[260603]: 2025-10-02 08:44:05.952 2 DEBUG nova.compute.manager [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:44:06 np0005465604 kernel: tapd9b60bcb-8d (unregistering): left promiscuous mode
Oct  2 04:44:06 np0005465604 NetworkManager[45129]: <info>  [1759394646.5036] device (tapd9b60bcb-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:06Z|01088|binding|INFO|Releasing lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 from this chassis (sb_readonly=0)
Oct  2 04:44:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:06Z|01089|binding|INFO|Setting lport d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 down in Southbound
Oct  2 04:44:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:06Z|01090|binding|INFO|Removing iface tapd9b60bcb-8d ovn-installed in OVS
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:06.529 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:24:8a 10.100.0.8'], port_security=['fa:16:3e:d2:24:8a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1c24cd5c-a165-4fcf-b24d-245a60f7ea11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34c4e106-9919-4d6d-a50a-81b3894f2e5e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae7dae3968e448f1b3ace692d9d76cff', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e6912f4e-082b-4b15-9608-2b9595c16211', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b81b03c-f256-4f5a-8b01-0b7991a52f3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:44:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:06.532 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 in datapath 34c4e106-9919-4d6d-a50a-81b3894f2e5e unbound from our chassis#033[00m
Oct  2 04:44:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:06.533 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 34c4e106-9919-4d6d-a50a-81b3894f2e5e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 04:44:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:06.536 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e099a061-8a73-4be0-a493-b29595cc0280]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:06 np0005465604 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct  2 04:44:06 np0005465604 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Consumed 13.198s CPU time.
Oct  2 04:44:06 np0005465604 systemd-machined[214636]: Machine qemu-132-instance-00000068 terminated.
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.804 2 INFO nova.virt.libvirt.driver [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Instance destroyed successfully.#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.805 2 DEBUG nova.objects.instance [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lazy-loading 'resources' on Instance uuid 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.819 2 DEBUG nova.virt.libvirt.vif [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:43:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-618462512',display_name='tempest-ServerRescueTestJSONUnderV235-server-618462512',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-618462512',id=104,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:43:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae7dae3968e448f1b3ace692d9d76cff',ramdisk_id='',reservation_id='r-g69g07mg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-299264470',owner_user_name='tempest-ServerRescueTestJSONUnderV235-299264470-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:43:43Z,user_data=None,user_id='7e27caab7dd34e4a9cac5f4f1880fad8',uuid=1c24cd5c-a165-4fcf-b24d-245a60f7ea11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.820 2 DEBUG nova.network.os_vif_util [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converting VIF {"id": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "address": "fa:16:3e:d2:24:8a", "network": {"id": "34c4e106-9919-4d6d-a50a-81b3894f2e5e", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-592750548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ae7dae3968e448f1b3ace692d9d76cff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9b60bcb-8d", "ovs_interfaceid": "d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.821 2 DEBUG nova.network.os_vif_util [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.821 2 DEBUG os_vif [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9b60bcb-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:06 np0005465604 nova_compute[260603]: 2025-10-02 08:44:06.830 2 INFO os_vif [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:24:8a,bridge_name='br-int',has_traffic_filtering=True,id=d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59,network=Network(34c4e106-9919-4d6d-a50a-81b3894f2e5e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9b60bcb-8d')#033[00m
Oct  2 04:44:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 265 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 186 op/s
Oct  2 04:44:07 np0005465604 nova_compute[260603]: 2025-10-02 08:44:07.503 2 INFO nova.virt.libvirt.driver [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Deleting instance files /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_del#033[00m
Oct  2 04:44:07 np0005465604 nova_compute[260603]: 2025-10-02 08:44:07.504 2 INFO nova.virt.libvirt.driver [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Deletion of /var/lib/nova/instances/1c24cd5c-a165-4fcf-b24d-245a60f7ea11_del complete#033[00m
Oct  2 04:44:07 np0005465604 nova_compute[260603]: 2025-10-02 08:44:07.588 2 INFO nova.compute.manager [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Took 1.64 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:44:07 np0005465604 nova_compute[260603]: 2025-10-02 08:44:07.589 2 DEBUG oslo.service.loopingcall [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:44:07 np0005465604 nova_compute[260603]: 2025-10-02 08:44:07.589 2 DEBUG nova.compute.manager [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:44:07 np0005465604 nova_compute[260603]: 2025-10-02 08:44:07.589 2 DEBUG nova.network.neutron [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:44:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:08 np0005465604 nova_compute[260603]: 2025-10-02 08:44:08.523 2 DEBUG nova.network.neutron [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:44:08 np0005465604 nova_compute[260603]: 2025-10-02 08:44:08.542 2 INFO nova.compute.manager [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Took 0.95 seconds to deallocate network for instance.#033[00m
Oct  2 04:44:08 np0005465604 nova_compute[260603]: 2025-10-02 08:44:08.586 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:08 np0005465604 nova_compute[260603]: 2025-10-02 08:44:08.587 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:08 np0005465604 nova_compute[260603]: 2025-10-02 08:44:08.604 2 DEBUG nova.compute.manager [req-ffe8d87a-b661-44f3-ba31-26d69af7fe22 req-0f51a2e2-8415-4cef-87dc-dcb4db95893f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Received event network-vif-deleted-d9b60bcb-8d0e-4638-8ce0-3bb5568a6b59 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:44:08 np0005465604 nova_compute[260603]: 2025-10-02 08:44:08.679 2 DEBUG oslo_concurrency.processutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:44:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/127347217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:44:09 np0005465604 nova_compute[260603]: 2025-10-02 08:44:09.144 2 DEBUG oslo_concurrency.processutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:09 np0005465604 nova_compute[260603]: 2025-10-02 08:44:09.150 2 DEBUG nova.compute.provider_tree [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:44:09 np0005465604 nova_compute[260603]: 2025-10-02 08:44:09.166 2 DEBUG nova.scheduler.client.report [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:44:09 np0005465604 nova_compute[260603]: 2025-10-02 08:44:09.181 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:09 np0005465604 nova_compute[260603]: 2025-10-02 08:44:09.222 2 INFO nova.scheduler.client.report [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Deleted allocations for instance 1c24cd5c-a165-4fcf-b24d-245a60f7ea11#033[00m
Oct  2 04:44:09 np0005465604 nova_compute[260603]: 2025-10-02 08:44:09.281 2 DEBUG oslo_concurrency.lockutils [None req-5f1bd2b8-3132-4e5b-b81e-bf67e1f60af9 7e27caab7dd34e4a9cac5f4f1880fad8 ae7dae3968e448f1b3ace692d9d76cff - - default default] Lock "1c24cd5c-a165-4fcf-b24d-245a60f7ea11" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 187 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 296 KiB/s wr, 145 op/s
Oct  2 04:44:09 np0005465604 nova_compute[260603]: 2025-10-02 08:44:09.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:10Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:a1:dd 10.100.0.11
Oct  2 04:44:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:10Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:a1:dd 10.100.0.11
Oct  2 04:44:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 187 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 116 op/s
Oct  2 04:44:11 np0005465604 nova_compute[260603]: 2025-10-02 08:44:11.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:12 np0005465604 podman[366711]: 2025-10-02 08:44:12.016937509 +0000 UTC m=+0.076298889 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:44:12 np0005465604 podman[366712]: 2025-10-02 08:44:12.031148013 +0000 UTC m=+0.077115585 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 04:44:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:12Z|01091|binding|INFO|Releasing lport 8f66d020-a258-4e14-aa3b-234835306a91 from this chassis (sb_readonly=0)
Oct  2 04:44:12 np0005465604 nova_compute[260603]: 2025-10-02 08:44:12.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 200 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 193 op/s
Oct  2 04:44:14 np0005465604 nova_compute[260603]: 2025-10-02 08:44:14.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 200 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct  2 04:44:16 np0005465604 nova_compute[260603]: 2025-10-02 08:44:16.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:16 np0005465604 nova_compute[260603]: 2025-10-02 08:44:16.850 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:16 np0005465604 nova_compute[260603]: 2025-10-02 08:44:16.851 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:16 np0005465604 nova_compute[260603]: 2025-10-02 08:44:16.852 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:16 np0005465604 nova_compute[260603]: 2025-10-02 08:44:16.852 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:16 np0005465604 nova_compute[260603]: 2025-10-02 08:44:16.853 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:16 np0005465604 nova_compute[260603]: 2025-10-02 08:44:16.855 2 INFO nova.compute.manager [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Terminating instance#033[00m
Oct  2 04:44:16 np0005465604 nova_compute[260603]: 2025-10-02 08:44:16.857 2 DEBUG nova.compute.manager [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:44:16 np0005465604 kernel: tap41b1bf69-0f (unregistering): left promiscuous mode
Oct  2 04:44:16 np0005465604 NetworkManager[45129]: <info>  [1759394656.9270] device (tap41b1bf69-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:44:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:16Z|01092|binding|INFO|Releasing lport 41b1bf69-0f3d-4e03-9c65-112b4a4b731d from this chassis (sb_readonly=0)
Oct  2 04:44:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:16Z|01093|binding|INFO|Setting lport 41b1bf69-0f3d-4e03-9c65-112b4a4b731d down in Southbound
Oct  2 04:44:16 np0005465604 nova_compute[260603]: 2025-10-02 08:44:16.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:44:16 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:16Z|01094|binding|INFO|Removing iface tap41b1bf69-0f ovn-installed in OVS
Oct  2 04:44:16 np0005465604 nova_compute[260603]: 2025-10-02 08:44:16.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:44:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:16.946 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:a1:dd 10.100.0.11'], port_security=['fa:16:3e:80:a1:dd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '7f00dee7-5a33-4ae8-a230-2ed05afd17c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd3198b8-8ad4-4f59-8253-4227db95b8da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e2b1c02-e727-474f-a9ba-53199387490d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=41b1bf69-0f3d-4e03-9c65-112b4a4b731d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:44:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:16.947 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 41b1bf69-0f3d-4e03-9c65-112b4a4b731d in datapath c7addd6c-480f-45ed-94c2-18d1d2248acb unbound from our chassis
Oct  2 04:44:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:16.949 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c7addd6c-480f-45ed-94c2-18d1d2248acb
Oct  2 04:44:16 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:16.969 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc12519-05f9-4810-804e-1778efb3965b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:44:16 np0005465604 nova_compute[260603]: 2025-10-02 08:44:16.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:44:16 np0005465604 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct  2 04:44:17 np0005465604 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Consumed 12.243s CPU time.
Oct  2 04:44:17 np0005465604 systemd-machined[214636]: Machine qemu-133-instance-0000006a terminated.
Oct  2 04:44:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.016 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f18050-a429-4a06-8c9e-203ca581931c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:44:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.021 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f4ce71-48df-4285-8bd4-ea7e84582a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:44:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.062 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3f016b2c-1a8e-4cc1-af1d-38a123159584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:44:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.099 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d69429-bb52-4d0e-967f-17a49585f212]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc7addd6c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:47:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 8, 'rx_bytes': 958, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 8, 'rx_bytes': 958, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547997, 'reachable_time': 26922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366760, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.113 2 INFO nova.virt.libvirt.driver [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Instance destroyed successfully.
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.113 2 DEBUG nova.objects.instance [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:44:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.132 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3205ebe2-535c-482a-951a-e424691e6f23]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc7addd6c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548011, 'tstamp': 548011}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366771, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc7addd6c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548014, 'tstamp': 548014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366771, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:44:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.134 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7addd6c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.134 2 DEBUG nova.virt.libvirt.vif [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:43:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-0-688023159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=106,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:43:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-rpve6319',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:43:59Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=7f00dee7-5a33-4ae8-a230-2ed05afd17c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.134 2 DEBUG nova.network.os_vif_util [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "address": "fa:16:3e:80:a1:dd", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41b1bf69-0f", "ovs_interfaceid": "41b1bf69-0f3d-4e03-9c65-112b4a4b731d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.135 2 DEBUG nova.network.os_vif_util [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.135 2 DEBUG os_vif [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.138 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b1bf69-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:44:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.146 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7addd6c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:44:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.147 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:44:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.148 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc7addd6c-40, col_values=(('external_ids', {'iface-id': '8f66d020-a258-4e14-aa3b-234835306a91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:44:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:17.148 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.149 2 INFO os_vif [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:a1:dd,bridge_name='br-int',has_traffic_filtering=True,id=41b1bf69-0f3d-4e03-9c65-112b4a4b731d,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41b1bf69-0f')
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.327 2 DEBUG nova.compute.manager [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-unplugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.329 2 DEBUG oslo_concurrency.lockutils [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.329 2 DEBUG oslo_concurrency.lockutils [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.330 2 DEBUG oslo_concurrency.lockutils [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.331 2 DEBUG nova.compute.manager [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] No waiting events found dispatching network-vif-unplugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.332 2 DEBUG nova.compute.manager [req-190840ab-3072-4fa8-92f8-b1bf16e37d67 req-7acdf92f-65b9-4fac-983d-acdb797b7f8d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-unplugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 04:44:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 200 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.570 2 INFO nova.virt.libvirt.driver [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Deleting instance files /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3_del
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.571 2 INFO nova.virt.libvirt.driver [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Deletion of /var/lib/nova/instances/7f00dee7-5a33-4ae8-a230-2ed05afd17c3_del complete
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.618 2 INFO nova.compute.manager [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Took 0.76 seconds to destroy the instance on the hypervisor.
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.618 2 DEBUG oslo.service.loopingcall [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.618 2 DEBUG nova.compute.manager [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 04:44:17 np0005465604 nova_compute[260603]: 2025-10-02 08:44:17.619 2 DEBUG nova.network.neutron [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 04:44:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:18 np0005465604 nova_compute[260603]: 2025-10-02 08:44:18.973 2 DEBUG nova.network.neutron [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:44:18 np0005465604 nova_compute[260603]: 2025-10-02 08:44:18.996 2 INFO nova.compute.manager [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Took 1.38 seconds to deallocate network for instance.
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.035 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.036 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.114 2 DEBUG oslo_concurrency.processutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:44:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 159 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.519 2 DEBUG nova.compute.manager [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.520 2 DEBUG oslo_concurrency.lockutils [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.520 2 DEBUG oslo_concurrency.lockutils [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.520 2 DEBUG oslo_concurrency.lockutils [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.520 2 DEBUG nova.compute.manager [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] No waiting events found dispatching network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.521 2 WARNING nova.compute.manager [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received unexpected event network-vif-plugged-41b1bf69-0f3d-4e03-9c65-112b4a4b731d for instance with vm_state deleted and task_state None.
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.525 2 DEBUG nova.compute.manager [req-51df8448-1dd5-4c19-8062-9ca605c09f89 req-eedb9138-4f0b-44b8-987c-47492d77521c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Received event network-vif-deleted-41b1bf69-0f3d-4e03-9c65-112b4a4b731d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.526 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.526 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040599607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.567 2 DEBUG oslo_concurrency.processutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.573 2 DEBUG nova.compute.provider_tree [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.591 2 DEBUG nova.scheduler.client.report [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.608 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.639 2 INFO nova.scheduler.client.report [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance 7f00dee7-5a33-4ae8-a230-2ed05afd17c3#033[00m
Oct  2 04:44:19 np0005465604 nova_compute[260603]: 2025-10-02 08:44:19.704 2 DEBUG oslo_concurrency.lockutils [None req-e8a3a7ab-f648-4972-b0d6-96c8aa670ee6 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "7f00dee7-5a33-4ae8-a230-2ed05afd17c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:44:19 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6c4babb2-c41d-4446-b207-f3e46ec30239 does not exist
Oct  2 04:44:19 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 917733e1-6c46-4651-bd0f-300007534a22 does not exist
Oct  2 04:44:19 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ceea174c-c05b-4e35-a850-167dabbbe83e does not exist
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:44:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:44:20 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:44:20 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:44:20 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:44:20 np0005465604 podman[367088]: 2025-10-02 08:44:20.545137111 +0000 UTC m=+0.069547179 container create 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:44:20 np0005465604 systemd[1]: Started libpod-conmon-722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e.scope.
Oct  2 04:44:20 np0005465604 podman[367088]: 2025-10-02 08:44:20.515839477 +0000 UTC m=+0.040249606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:44:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:44:20 np0005465604 podman[367088]: 2025-10-02 08:44:20.658599287 +0000 UTC m=+0.183009405 container init 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:44:20 np0005465604 podman[367088]: 2025-10-02 08:44:20.670624362 +0000 UTC m=+0.195034430 container start 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:44:20 np0005465604 podman[367088]: 2025-10-02 08:44:20.674859144 +0000 UTC m=+0.199269222 container attach 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 04:44:20 np0005465604 wonderful_almeida[367105]: 167 167
Oct  2 04:44:20 np0005465604 systemd[1]: libpod-722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e.scope: Deactivated successfully.
Oct  2 04:44:20 np0005465604 podman[367088]: 2025-10-02 08:44:20.680639324 +0000 UTC m=+0.205049402 container died 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:44:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ce7dddef25a0a59b300450fd697cab93de6c2bb25a2b1003400e071a8ba87787-merged.mount: Deactivated successfully.
Oct  2 04:44:20 np0005465604 podman[367088]: 2025-10-02 08:44:20.743001708 +0000 UTC m=+0.267411776 container remove 722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:44:20 np0005465604 systemd[1]: libpod-conmon-722cdb76477a3686e07787040999da45d904db57dbef38dcb76bdc554363ab1e.scope: Deactivated successfully.
Oct  2 04:44:20 np0005465604 podman[367128]: 2025-10-02 08:44:20.98724096 +0000 UTC m=+0.057261665 container create 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:44:21 np0005465604 systemd[1]: Started libpod-conmon-9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a.scope.
Oct  2 04:44:21 np0005465604 podman[367128]: 2025-10-02 08:44:20.95482486 +0000 UTC m=+0.024845555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:44:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:44:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:21 np0005465604 podman[367128]: 2025-10-02 08:44:21.093002697 +0000 UTC m=+0.163023372 container init 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:44:21 np0005465604 podman[367128]: 2025-10-02 08:44:21.112010819 +0000 UTC m=+0.182031514 container start 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 04:44:21 np0005465604 podman[367128]: 2025-10-02 08:44:21.116975104 +0000 UTC m=+0.186995799 container attach 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.179 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.183 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.183 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.184 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.184 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.186 2 INFO nova.compute.manager [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Terminating instance#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.188 2 DEBUG nova.compute.manager [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:44:21 np0005465604 kernel: tapaf8bcfc4-36 (unregistering): left promiscuous mode
Oct  2 04:44:21 np0005465604 NetworkManager[45129]: <info>  [1759394661.2503] device (tapaf8bcfc4-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:21Z|01095|binding|INFO|Releasing lport af8bcfc4-3690-4b5f-9893-5555fa376203 from this chassis (sb_readonly=0)
Oct  2 04:44:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:21Z|01096|binding|INFO|Setting lport af8bcfc4-3690-4b5f-9893-5555fa376203 down in Southbound
Oct  2 04:44:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:21Z|01097|binding|INFO|Removing iface tapaf8bcfc4-36 ovn-installed in OVS
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.318 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:6d:19 10.100.0.5'], port_security=['fa:16:3e:c6:6d:19 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bccc9587-6f96-4032-ae07-56ab00988869', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87949ce5-c546-4e93-ab3f-46b861ef5238 dd3198b8-8ad4-4f59-8253-4227db95b8da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e2b1c02-e727-474f-a9ba-53199387490d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=af8bcfc4-3690-4b5f-9893-5555fa376203) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.319 162357 INFO neutron.agent.ovn.metadata.agent [-] Port af8bcfc4-3690-4b5f-9893-5555fa376203 in datapath c7addd6c-480f-45ed-94c2-18d1d2248acb unbound from our chassis#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.320 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c7addd6c-480f-45ed-94c2-18d1d2248acb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.323 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8afc47f-ad50-46d6-8e5c-f74ef01641aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.324 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb namespace which is not needed anymore#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:21 np0005465604 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct  2 04:44:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 159 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Oct  2 04:44:21 np0005465604 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000069.scope: Consumed 15.906s CPU time.
Oct  2 04:44:21 np0005465604 systemd-machined[214636]: Machine qemu-131-instance-00000069 terminated.
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.443 2 INFO nova.virt.libvirt.driver [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Instance destroyed successfully.#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.444 2 DEBUG nova.objects.instance [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid bccc9587-6f96-4032-ae07-56ab00988869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.466 2 DEBUG nova.virt.libvirt.vif [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:43:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-1334187488',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=105,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBp7O1+NifYmRTPjkz076R6OB0cQraodAPHkd+igs0jgMRa0ylNfh8a4FTcaAs5LMUjKZ6d3T6IfE8uMmH/Vv7/4iSPE6rs9EcqUzLfabYKjHL1D+G2YR0bhQI1PbtyQqg==',key_name='tempest-TestSecurityGroupsBasicOps-388523957',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:43:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-lm9mu0gw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:43:23Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=bccc9587-6f96-4032-ae07-56ab00988869,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.467 2 DEBUG nova.network.os_vif_util [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.470 2 DEBUG nova.network.os_vif_util [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.471 2 DEBUG os_vif [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.474 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf8bcfc4-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.487 2 INFO os_vif [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:6d:19,bridge_name='br-int',has_traffic_filtering=True,id=af8bcfc4-3690-4b5f-9893-5555fa376203,network=Network(c7addd6c-480f-45ed-94c2-18d1d2248acb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf8bcfc4-36')#033[00m
Oct  2 04:44:21 np0005465604 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [NOTICE]   (365728) : haproxy version is 2.8.14-c23fe91
Oct  2 04:44:21 np0005465604 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [NOTICE]   (365728) : path to executable is /usr/sbin/haproxy
Oct  2 04:44:21 np0005465604 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [WARNING]  (365728) : Exiting Master process...
Oct  2 04:44:21 np0005465604 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [WARNING]  (365728) : Exiting Master process...
Oct  2 04:44:21 np0005465604 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [ALERT]    (365728) : Current worker (365730) exited with code 143 (Terminated)
Oct  2 04:44:21 np0005465604 neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb[365724]: [WARNING]  (365728) : All workers exited. Exiting... (0)
Oct  2 04:44:21 np0005465604 systemd[1]: libpod-ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2.scope: Deactivated successfully.
Oct  2 04:44:21 np0005465604 podman[367182]: 2025-10-02 08:44:21.553401107 +0000 UTC m=+0.061524199 container died ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:44:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2-userdata-shm.mount: Deactivated successfully.
Oct  2 04:44:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-644ad21fe1b1e836c44e9821eabc2e68b4056b923e74837b9185693307a8e425-merged.mount: Deactivated successfully.
Oct  2 04:44:21 np0005465604 podman[367182]: 2025-10-02 08:44:21.630967474 +0000 UTC m=+0.139090536 container cleanup ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 04:44:21 np0005465604 systemd[1]: libpod-conmon-ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2.scope: Deactivated successfully.
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.675 2 DEBUG nova.compute.manager [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.678 2 DEBUG nova.compute.manager [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing instance network info cache due to event network-changed-af8bcfc4-3690-4b5f-9893-5555fa376203. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.679 2 DEBUG oslo_concurrency.lockutils [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.679 2 DEBUG oslo_concurrency.lockutils [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.679 2 DEBUG nova.network.neutron [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Refreshing network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:44:21 np0005465604 podman[367229]: 2025-10-02 08:44:21.730218637 +0000 UTC m=+0.066595406 container remove ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.737 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6693c2ef-be59-46c8-a037-928198bafefd]: (4, ('Thu Oct  2 08:44:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb (ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2)\nba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2\nThu Oct  2 08:44:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb (ba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2)\nba7fbc88afdd061d83e6ca3cb990ba7060cd73e857c006dfcfcadc25005ae6e2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.740 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4f87ac-2428-4255-a0c1-13089e436fe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.742 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7addd6c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:21 np0005465604 kernel: tapc7addd6c-40: left promiscuous mode
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.780 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[217ecd99-b017-444e-8155-56687a929e6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.798 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394646.7972877, 1c24cd5c-a165-4fcf-b24d-245a60f7ea11 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.799 2 INFO nova.compute.manager [-] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.801 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2e148327-7f9e-4b5f-8f98-09590d7718a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.807 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f783d9-3176-4c0a-bed6-b371bf6dc65c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.817 2 DEBUG nova.compute.manager [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-unplugged-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.818 2 DEBUG oslo_concurrency.lockutils [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.818 2 DEBUG oslo_concurrency.lockutils [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.818 2 DEBUG oslo_concurrency.lockutils [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.819 2 DEBUG nova.compute.manager [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] No waiting events found dispatching network-vif-unplugged-af8bcfc4-3690-4b5f-9893-5555fa376203 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.819 2 DEBUG nova.compute.manager [req-e5f1dec8-584c-4d98-8f74-964d713b2d38 req-7f934826-a140-4a03-8c0a-0be8025e55b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-unplugged-af8bcfc4-3690-4b5f-9893-5555fa376203 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.826 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[965d589f-c877-498c-92f4-b949d4ee07bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547988, 'reachable_time': 20763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367249, 'error': None, 'target': 'ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.829 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c7addd6c-480f-45ed-94c2-18d1d2248acb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:44:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:21.830 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1434ef32-3940-483e-b8f5-1e689146ae49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:21 np0005465604 systemd[1]: run-netns-ovnmeta\x2dc7addd6c\x2d480f\x2d45ed\x2d94c2\x2d18d1d2248acb.mount: Deactivated successfully.
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.834 2 DEBUG nova.compute.manager [None req-dcc39f44-02ac-482e-ac64-f477158307a6 - - - - - -] [instance: 1c24cd5c-a165-4fcf-b24d-245a60f7ea11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.952 2 INFO nova.virt.libvirt.driver [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Deleting instance files /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869_del#033[00m
Oct  2 04:44:21 np0005465604 nova_compute[260603]: 2025-10-02 08:44:21.953 2 INFO nova.virt.libvirt.driver [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Deletion of /var/lib/nova/instances/bccc9587-6f96-4032-ae07-56ab00988869_del complete#033[00m
Oct  2 04:44:22 np0005465604 nova_compute[260603]: 2025-10-02 08:44:22.018 2 INFO nova.compute.manager [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:44:22 np0005465604 nova_compute[260603]: 2025-10-02 08:44:22.019 2 DEBUG oslo.service.loopingcall [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:44:22 np0005465604 nova_compute[260603]: 2025-10-02 08:44:22.020 2 DEBUG nova.compute.manager [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:44:22 np0005465604 nova_compute[260603]: 2025-10-02 08:44:22.020 2 DEBUG nova.network.neutron [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:44:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:44:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2575932264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:44:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:44:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2575932264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:44:22 np0005465604 compassionate_gould[367145]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:44:22 np0005465604 compassionate_gould[367145]: --> relative data size: 1.0
Oct  2 04:44:22 np0005465604 compassionate_gould[367145]: --> All data devices are unavailable
Oct  2 04:44:22 np0005465604 systemd[1]: libpod-9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a.scope: Deactivated successfully.
Oct  2 04:44:22 np0005465604 systemd[1]: libpod-9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a.scope: Consumed 1.057s CPU time.
Oct  2 04:44:22 np0005465604 podman[367128]: 2025-10-02 08:44:22.236065044 +0000 UTC m=+1.306085739 container died 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct  2 04:44:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e4857f30f5eecfd101687d68127dad841c59f8f950c2a1e98423158c4e605327-merged.mount: Deactivated successfully.
Oct  2 04:44:22 np0005465604 podman[367128]: 2025-10-02 08:44:22.312881308 +0000 UTC m=+1.382901973 container remove 9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gould, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:44:22 np0005465604 systemd[1]: libpod-conmon-9ae0dcb0c4fc2f5744c6591ce42a665e8040949ac4728be59345dff17c8b196a.scope: Deactivated successfully.
Oct  2 04:44:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:23 np0005465604 podman[367424]: 2025-10-02 08:44:23.221844859 +0000 UTC m=+0.053057924 container create 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:44:23 np0005465604 systemd[1]: Started libpod-conmon-4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b.scope.
Oct  2 04:44:23 np0005465604 podman[367424]: 2025-10-02 08:44:23.195458708 +0000 UTC m=+0.026671833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:44:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:44:23 np0005465604 podman[367424]: 2025-10-02 08:44:23.321291299 +0000 UTC m=+0.152504424 container init 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 04:44:23 np0005465604 podman[367424]: 2025-10-02 08:44:23.333174849 +0000 UTC m=+0.164387924 container start 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:44:23 np0005465604 podman[367424]: 2025-10-02 08:44:23.337878046 +0000 UTC m=+0.169091181 container attach 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:44:23 np0005465604 affectionate_margulis[367440]: 167 167
Oct  2 04:44:23 np0005465604 systemd[1]: libpod-4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b.scope: Deactivated successfully.
Oct  2 04:44:23 np0005465604 podman[367424]: 2025-10-02 08:44:23.343314686 +0000 UTC m=+0.174527751 container died 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:44:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 41 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct  2 04:44:23 np0005465604 systemd[1]: var-lib-containers-storage-overlay-28a91eb7152b7677524d3d1ecff4cfe435fe16ca16f6691c6d4898125d405d75-merged.mount: Deactivated successfully.
Oct  2 04:44:23 np0005465604 podman[367424]: 2025-10-02 08:44:23.39319206 +0000 UTC m=+0.224405125 container remove 4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_margulis, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:44:23 np0005465604 systemd[1]: libpod-conmon-4c6ca23c9c9c6c5b9c6f2181a31da389f99ab2eadd63728e8911cf0471d1ce0b.scope: Deactivated successfully.
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.453 2 DEBUG nova.network.neutron [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.475 2 INFO nova.compute.manager [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Took 1.46 seconds to deallocate network for instance.#033[00m
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.557 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.557 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:23 np0005465604 podman[367463]: 2025-10-02 08:44:23.607940353 +0000 UTC m=+0.051654040 container create 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.614 2 DEBUG oslo_concurrency.processutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:23 np0005465604 systemd[1]: Started libpod-conmon-7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242.scope.
Oct  2 04:44:23 np0005465604 podman[367463]: 2025-10-02 08:44:23.585096311 +0000 UTC m=+0.028810018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:44:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:44:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cab69e81ef290eb7f6a772b7db2515b7a861d006fe1b6c31729f91abde5346e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cab69e81ef290eb7f6a772b7db2515b7a861d006fe1b6c31729f91abde5346e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cab69e81ef290eb7f6a772b7db2515b7a861d006fe1b6c31729f91abde5346e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cab69e81ef290eb7f6a772b7db2515b7a861d006fe1b6c31729f91abde5346e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:23 np0005465604 podman[367463]: 2025-10-02 08:44:23.721571335 +0000 UTC m=+0.165285022 container init 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:44:23 np0005465604 podman[367463]: 2025-10-02 08:44:23.739208995 +0000 UTC m=+0.182922682 container start 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:44:23 np0005465604 podman[367463]: 2025-10-02 08:44:23.745662796 +0000 UTC m=+0.189376533 container attach 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.841 2 DEBUG nova.compute.manager [req-5529cff5-d743-4530-88a1-3930e8f7bca5 req-b9e88d1f-2ad6-4039-9126-64abac55380e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-deleted-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.996 2 DEBUG nova.compute.manager [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.996 2 DEBUG oslo_concurrency.lockutils [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bccc9587-6f96-4032-ae07-56ab00988869-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.997 2 DEBUG oslo_concurrency.lockutils [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.997 2 DEBUG oslo_concurrency.lockutils [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.998 2 DEBUG nova.compute.manager [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] No waiting events found dispatching network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:44:23 np0005465604 nova_compute[260603]: 2025-10-02 08:44:23.998 2 WARNING nova.compute.manager [req-3af6e8b2-2a36-4dfb-a556-0af606880b38 req-12e446f4-9362-41a2-a207-b50804f0cfe2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Received unexpected event network-vif-plugged-af8bcfc4-3690-4b5f-9893-5555fa376203 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:44:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:44:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509340627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:44:24 np0005465604 nova_compute[260603]: 2025-10-02 08:44:24.066 2 DEBUG oslo_concurrency.processutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:24 np0005465604 nova_compute[260603]: 2025-10-02 08:44:24.075 2 DEBUG nova.compute.provider_tree [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:44:24 np0005465604 nova_compute[260603]: 2025-10-02 08:44:24.102 2 DEBUG nova.scheduler.client.report [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:44:24 np0005465604 nova_compute[260603]: 2025-10-02 08:44:24.146 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:24 np0005465604 nova_compute[260603]: 2025-10-02 08:44:24.182 2 INFO nova.scheduler.client.report [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance bccc9587-6f96-4032-ae07-56ab00988869#033[00m
Oct  2 04:44:24 np0005465604 nova_compute[260603]: 2025-10-02 08:44:24.264 2 DEBUG oslo_concurrency.lockutils [None req-f866185e-299f-4475-a7e3-1f358316504b 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "bccc9587-6f96-4032-ae07-56ab00988869" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:24 np0005465604 nova_compute[260603]: 2025-10-02 08:44:24.268 2 DEBUG nova.network.neutron [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updated VIF entry in instance network info cache for port af8bcfc4-3690-4b5f-9893-5555fa376203. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:44:24 np0005465604 nova_compute[260603]: 2025-10-02 08:44:24.269 2 DEBUG nova.network.neutron [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Updating instance_info_cache with network_info: [{"id": "af8bcfc4-3690-4b5f-9893-5555fa376203", "address": "fa:16:3e:c6:6d:19", "network": {"id": "c7addd6c-480f-45ed-94c2-18d1d2248acb", "bridge": "br-int", "label": "tempest-network-smoke--692521166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf8bcfc4-36", "ovs_interfaceid": "af8bcfc4-3690-4b5f-9893-5555fa376203", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:44:24 np0005465604 nova_compute[260603]: 2025-10-02 08:44:24.287 2 DEBUG oslo_concurrency.lockutils [req-f2fb1bc1-65bf-499f-84af-fd6c7335a446 req-ab128634-520a-4d77-988a-70cf5a1678b3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bccc9587-6f96-4032-ae07-56ab00988869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:44:24 np0005465604 nova_compute[260603]: 2025-10-02 08:44:24.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]: {
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:    "0": [
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:        {
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "devices": [
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "/dev/loop3"
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            ],
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_name": "ceph_lv0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_size": "21470642176",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "name": "ceph_lv0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "tags": {
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.cluster_name": "ceph",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.crush_device_class": "",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.encrypted": "0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.osd_id": "0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.type": "block",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.vdo": "0"
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            },
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "type": "block",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "vg_name": "ceph_vg0"
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:        }
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:    ],
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:    "1": [
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:        {
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "devices": [
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "/dev/loop4"
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            ],
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_name": "ceph_lv1",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_size": "21470642176",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "name": "ceph_lv1",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "tags": {
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.cluster_name": "ceph",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.crush_device_class": "",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.encrypted": "0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.osd_id": "1",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.type": "block",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.vdo": "0"
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            },
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "type": "block",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "vg_name": "ceph_vg1"
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:        }
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:    ],
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:    "2": [
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:        {
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "devices": [
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "/dev/loop5"
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            ],
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_name": "ceph_lv2",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_size": "21470642176",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "name": "ceph_lv2",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "tags": {
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.cluster_name": "ceph",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.crush_device_class": "",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.encrypted": "0",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.osd_id": "2",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.type": "block",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:                "ceph.vdo": "0"
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            },
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "type": "block",
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:            "vg_name": "ceph_vg2"
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:        }
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]:    ]
Oct  2 04:44:24 np0005465604 jovial_yalow[367480]: }
Oct  2 04:44:24 np0005465604 systemd[1]: libpod-7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242.scope: Deactivated successfully.
Oct  2 04:44:24 np0005465604 podman[367463]: 2025-10-02 08:44:24.564625732 +0000 UTC m=+1.008339409 container died 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 04:44:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2cab69e81ef290eb7f6a772b7db2515b7a861d006fe1b6c31729f91abde5346e-merged.mount: Deactivated successfully.
Oct  2 04:44:24 np0005465604 podman[367463]: 2025-10-02 08:44:24.640745184 +0000 UTC m=+1.084458871 container remove 7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct  2 04:44:24 np0005465604 systemd[1]: libpod-conmon-7e3ce503a54b01acf9980a6e66638de891ced91a0fda6900764ebac34fb77242.scope: Deactivated successfully.
Oct  2 04:44:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 41 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct  2 04:44:25 np0005465604 podman[367664]: 2025-10-02 08:44:25.550842551 +0000 UTC m=+0.058431242 container create cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:44:25 np0005465604 systemd[1]: Started libpod-conmon-cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f.scope.
Oct  2 04:44:25 np0005465604 podman[367664]: 2025-10-02 08:44:25.532035564 +0000 UTC m=+0.039624295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:44:25 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:44:25 np0005465604 podman[367664]: 2025-10-02 08:44:25.64805346 +0000 UTC m=+0.155642211 container init cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 04:44:25 np0005465604 podman[367664]: 2025-10-02 08:44:25.660107017 +0000 UTC m=+0.167695738 container start cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:44:25 np0005465604 podman[367664]: 2025-10-02 08:44:25.66439333 +0000 UTC m=+0.171982051 container attach cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 04:44:25 np0005465604 great_lovelace[367680]: 167 167
Oct  2 04:44:25 np0005465604 systemd[1]: libpod-cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f.scope: Deactivated successfully.
Oct  2 04:44:25 np0005465604 podman[367664]: 2025-10-02 08:44:25.669675295 +0000 UTC m=+0.177263996 container died cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 04:44:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f80e4e92436759268a1a32363e53f404da6ea67cdb9b514860f0639dd81c6af3-merged.mount: Deactivated successfully.
Oct  2 04:44:25 np0005465604 podman[367664]: 2025-10-02 08:44:25.71604133 +0000 UTC m=+0.223630041 container remove cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:44:25 np0005465604 systemd[1]: libpod-conmon-cf5eb30e39cbd666725aae38023d6c7c9f17959869dff40a6116c38a4821622f.scope: Deactivated successfully.
Oct  2 04:44:25 np0005465604 podman[367704]: 2025-10-02 08:44:25.968330003 +0000 UTC m=+0.078686754 container create ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:44:26 np0005465604 systemd[1]: Started libpod-conmon-ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5.scope.
Oct  2 04:44:26 np0005465604 podman[367704]: 2025-10-02 08:44:25.937494242 +0000 UTC m=+0.047851053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:44:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:44:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622cb652e8078d7da671491a8083773840148c2323c654a8f3a34c03f57224e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622cb652e8078d7da671491a8083773840148c2323c654a8f3a34c03f57224e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622cb652e8078d7da671491a8083773840148c2323c654a8f3a34c03f57224e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622cb652e8078d7da671491a8083773840148c2323c654a8f3a34c03f57224e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:26 np0005465604 podman[367704]: 2025-10-02 08:44:26.0975198 +0000 UTC m=+0.207876591 container init ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:44:26 np0005465604 nova_compute[260603]: 2025-10-02 08:44:26.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:26 np0005465604 podman[367704]: 2025-10-02 08:44:26.111736173 +0000 UTC m=+0.222092904 container start ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:44:26 np0005465604 podman[367704]: 2025-10-02 08:44:26.115927384 +0000 UTC m=+0.226284165 container attach ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 04:44:26 np0005465604 nova_compute[260603]: 2025-10-02 08:44:26.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]: {
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "osd_id": 2,
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "type": "bluestore"
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:    },
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "osd_id": 1,
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "type": "bluestore"
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:    },
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "osd_id": 0,
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:        "type": "bluestore"
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]:    }
Oct  2 04:44:27 np0005465604 nostalgic_hamilton[367720]: }
Oct  2 04:44:27 np0005465604 systemd[1]: libpod-ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5.scope: Deactivated successfully.
Oct  2 04:44:27 np0005465604 systemd[1]: libpod-ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5.scope: Consumed 1.208s CPU time.
Oct  2 04:44:27 np0005465604 podman[367753]: 2025-10-02 08:44:27.358883305 +0000 UTC m=+0.030023447 container died ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 41 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct  2 04:44:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-622cb652e8078d7da671491a8083773840148c2323c654a8f3a34c03f57224e6-merged.mount: Deactivated successfully.
Oct  2 04:44:27 np0005465604 podman[367753]: 2025-10-02 08:44:27.410808463 +0000 UTC m=+0.081948595 container remove ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hamilton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 04:44:27 np0005465604 systemd[1]: libpod-conmon-ac5b30da45d2500a463333299b58e76645b51cc20a87d35d942b97275aec39e5.scope: Deactivated successfully.
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ae49bea7-0394-472f-bb2d-f21922ec9726 does not exist
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ed7ec83d-7982-47aa-9fc7-40e7264f8a23 does not exist
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.870098) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667870153, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1298, "num_deletes": 250, "total_data_size": 1939184, "memory_usage": 1963776, "flush_reason": "Manual Compaction"}
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667885129, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 1910056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40141, "largest_seqno": 41438, "table_properties": {"data_size": 1903944, "index_size": 3379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12049, "raw_average_key_size": 18, "raw_value_size": 1891725, "raw_average_value_size": 2870, "num_data_blocks": 151, "num_entries": 659, "num_filter_entries": 659, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394542, "oldest_key_time": 1759394542, "file_creation_time": 1759394667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 15096 microseconds, and 10334 cpu microseconds.
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.885193) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 1910056 bytes OK
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.885219) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.886959) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.886984) EVENT_LOG_v1 {"time_micros": 1759394667886976, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.887008) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 1933328, prev total WAL file size 1933328, number of live WAL files 2.
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.888209) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1865KB)], [89(9384KB)]
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667888292, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11519692, "oldest_snapshot_seqno": -1}
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6590 keys, 10808407 bytes, temperature: kUnknown
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667982744, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10808407, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10761370, "index_size": 29453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 168651, "raw_average_key_size": 25, "raw_value_size": 10640464, "raw_average_value_size": 1614, "num_data_blocks": 1174, "num_entries": 6590, "num_filter_entries": 6590, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.983085) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10808407 bytes
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.984510) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.8 rd, 114.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(11.7) write-amplify(5.7) OK, records in: 7102, records dropped: 512 output_compression: NoCompression
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.984541) EVENT_LOG_v1 {"time_micros": 1759394667984527, "job": 52, "event": "compaction_finished", "compaction_time_micros": 94565, "compaction_time_cpu_micros": 47386, "output_level": 6, "num_output_files": 1, "total_output_size": 10808407, "num_input_records": 7102, "num_output_records": 6590, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667985426, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:44:27
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394667989217, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.888039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.989325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.989334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.989337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.989339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:44:27 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:44:27.989342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'vms', '.mgr', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups']
Oct  2 04:44:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:44:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:44:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:44:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:44:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:44:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:44:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:44:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:44:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:44:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:44:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:44:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:44:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:44:28 np0005465604 nova_compute[260603]: 2025-10-02 08:44:28.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:28 np0005465604 nova_compute[260603]: 2025-10-02 08:44:28.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 45 op/s
Oct  2 04:44:29 np0005465604 nova_compute[260603]: 2025-10-02 08:44:29.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:29 np0005465604 nova_compute[260603]: 2025-10-02 08:44:29.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:44:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.1 KiB/s wr, 37 op/s
Oct  2 04:44:31 np0005465604 nova_compute[260603]: 2025-10-02 08:44:31.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:32 np0005465604 nova_compute[260603]: 2025-10-02 08:44:32.110 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394657.1091447, 7f00dee7-5a33-4ae8-a230-2ed05afd17c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:44:32 np0005465604 nova_compute[260603]: 2025-10-02 08:44:32.111 2 INFO nova.compute.manager [-] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:44:32 np0005465604 nova_compute[260603]: 2025-10-02 08:44:32.175 2 DEBUG nova.compute.manager [None req-10bef5f5-bfc2-4046-a516-2ebb1c41d168 - - - - - -] [instance: 7f00dee7-5a33-4ae8-a230-2ed05afd17c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:44:32 np0005465604 nova_compute[260603]: 2025-10-02 08:44:32.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:44:32 np0005465604 nova_compute[260603]: 2025-10-02 08:44:32.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:44:32 np0005465604 nova_compute[260603]: 2025-10-02 08:44:32.542 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:44:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.1 KiB/s wr, 37 op/s
Oct  2 04:44:33 np0005465604 nova_compute[260603]: 2025-10-02 08:44:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:44:34 np0005465604 nova_compute[260603]: 2025-10-02 08:44:34.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:34 np0005465604 nova_compute[260603]: 2025-10-02 08:44:34.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:44:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:34.827 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:44:35 np0005465604 nova_compute[260603]: 2025-10-02 08:44:35.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:44:35 np0005465604 nova_compute[260603]: 2025-10-02 08:44:35.556 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:35 np0005465604 nova_compute[260603]: 2025-10-02 08:44:35.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:35 np0005465604 nova_compute[260603]: 2025-10-02 08:44:35.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:35 np0005465604 nova_compute[260603]: 2025-10-02 08:44:35.558 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:44:35 np0005465604 nova_compute[260603]: 2025-10-02 08:44:35.559 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:36 np0005465604 podman[367841]: 2025-10-02 08:44:36.039585177 +0000 UTC m=+0.095648162 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  2 04:44:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:44:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1312186581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:44:36 np0005465604 podman[367840]: 2025-10-02 08:44:36.116534395 +0000 UTC m=+0.172686923 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.124 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.324 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.326 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3816MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.326 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.327 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.413 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.414 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.437 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394661.436321, bccc9587-6f96-4032-ae07-56ab00988869 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.438 2 INFO nova.compute.manager [-] [instance: bccc9587-6f96-4032-ae07-56ab00988869] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.442 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.457 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.458 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.460 2 DEBUG nova.compute.manager [None req-bfb7a08e-5fb2-4f68-a197-3107b273efce - - - - - -] [instance: bccc9587-6f96-4032-ae07-56ab00988869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.482 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.526 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 04:44:36 np0005465604 nova_compute[260603]: 2025-10-02 08:44:36.542 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:44:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438127581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:44:37 np0005465604 nova_compute[260603]: 2025-10-02 08:44:37.018 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:37 np0005465604 nova_compute[260603]: 2025-10-02 08:44:37.026 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:44:37 np0005465604 nova_compute[260603]: 2025-10-02 08:44:37.043 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:44:37 np0005465604 nova_compute[260603]: 2025-10-02 08:44:37.068 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:44:37 np0005465604 nova_compute[260603]: 2025-10-02 08:44:37.069 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:44:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:44:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:44:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:44:39 np0005465604 nova_compute[260603]: 2025-10-02 08:44:39.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:40 np0005465604 nova_compute[260603]: 2025-10-02 08:44:40.070 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:44:40 np0005465604 nova_compute[260603]: 2025-10-02 08:44:40.071 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:44:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:44:41 np0005465604 nova_compute[260603]: 2025-10-02 08:44:41.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:42.350 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:70:42 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0d051606-6f26-4b92-af35-30ce92246a08', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d051606-6f26-4b92-af35-30ce92246a08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fd76562-7c34-4301-b34a-62f8c08775ba, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cfcee3ef-d64f-4220-a1c8-19d56cff5646) old=Port_Binding(mac=['fa:16:3e:65:70:42 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0d051606-6f26-4b92-af35-30ce92246a08', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d051606-6f26-4b92-af35-30ce92246a08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:44:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:42.352 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cfcee3ef-d64f-4220-a1c8-19d56cff5646 in datapath 0d051606-6f26-4b92-af35-30ce92246a08 updated#033[00m
Oct  2 04:44:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:42.354 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d051606-6f26-4b92-af35-30ce92246a08, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:44:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:42.356 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d4586650-98e4-43e9-a4a1-42c3fd0a9d73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:43 np0005465604 podman[367911]: 2025-10-02 08:44:43.033129894 +0000 UTC m=+0.090529812 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct  2 04:44:43 np0005465604 podman[367910]: 2025-10-02 08:44:43.03971701 +0000 UTC m=+0.091730640 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:44:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:44:44 np0005465604 nova_compute[260603]: 2025-10-02 08:44:44.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:44 np0005465604 nova_compute[260603]: 2025-10-02 08:44:44.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:44:44 np0005465604 nova_compute[260603]: 2025-10-02 08:44:44.751 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:44 np0005465604 nova_compute[260603]: 2025-10-02 08:44:44.752 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:44 np0005465604 nova_compute[260603]: 2025-10-02 08:44:44.773 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:44:44 np0005465604 nova_compute[260603]: 2025-10-02 08:44:44.850 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:44 np0005465604 nova_compute[260603]: 2025-10-02 08:44:44.851 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:44 np0005465604 nova_compute[260603]: 2025-10-02 08:44:44.857 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:44:44 np0005465604 nova_compute[260603]: 2025-10-02 08:44:44.858 2 INFO nova.compute.claims [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:44:44 np0005465604 nova_compute[260603]: 2025-10-02 08:44:44.972 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 734 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:44:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:44:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1731150562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.399 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.408 2 DEBUG nova.compute.provider_tree [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.429 2 DEBUG nova.scheduler.client.report [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.457 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.459 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.509 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.510 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.536 2 INFO nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.555 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.652 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.653 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.654 2 INFO nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Creating image(s)#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.672 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.691 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.710 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.713 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.816 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.817 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.818 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.818 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.839 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.842 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f8c66e48-8186-4434-8a82-a9be6fd98570_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:45 np0005465604 nova_compute[260603]: 2025-10-02 08:44:45.884 2 DEBUG nova.policy [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.183 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f8c66e48-8186-4434-8a82-a9be6fd98570_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.222 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.223 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.258 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.265 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.334 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.335 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.343 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.343 2 INFO nova.compute.claims [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.388 2 DEBUG nova.objects.instance [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid f8c66e48-8186-4434-8a82-a9be6fd98570 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.401 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.402 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Ensure instance console log exists: /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.402 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.403 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.403 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.515 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:44:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3291319667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:44:46 np0005465604 nova_compute[260603]: 2025-10-02 08:44:46.995 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.002 2 DEBUG nova.compute.provider_tree [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.006 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Successfully created port: 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.024 2 DEBUG nova.scheduler.client.report [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.051 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.052 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.099 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.100 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.117 2 INFO nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.131 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.219 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.222 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.223 2 INFO nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Creating image(s)#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.260 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.294 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.326 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.331 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 59 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 671 KiB/s wr, 12 op/s
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.390 2 DEBUG nova.policy [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2e201c8855514748b06d7da4f56ed1b5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75aaa01d4f144326a24dea8ff25b20a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.441 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.442 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.443 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.443 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.470 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.475 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2ea36b69-0b0d-4253-8207-a159c75280b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.525 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.835 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2ea36b69-0b0d-4253-8207-a159c75280b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:44:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.885 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] resizing rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.972 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Successfully created port: 46d394ee-978a-48af-81e0-3175e3fedd10 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.978 2 DEBUG nova.objects.instance [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lazy-loading 'migration_context' on Instance uuid 2ea36b69-0b0d-4253-8207-a159c75280b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.995 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.996 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Ensure instance console log exists: /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.996 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.997 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:44:47 np0005465604 nova_compute[260603]: 2025-10-02 08:44:47.997 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.042 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Successfully updated port: 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.060 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.061 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.061 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.161 2 DEBUG nova.compute.manager [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.162 2 DEBUG nova.compute.manager [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing instance network info cache due to event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.163 2 DEBUG oslo_concurrency.lockutils [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.236 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.263 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Successfully updated port: 46d394ee-978a-48af-81e0-3175e3fedd10 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.279 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.280 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquired lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.280 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:44:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 103 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 MiB/s wr, 39 op/s
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.386 2 DEBUG nova.compute.manager [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-changed-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.386 2 DEBUG nova.compute.manager [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Refreshing instance network info cache due to event network-changed-46d394ee-978a-48af-81e0-3175e3fedd10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.387 2 DEBUG oslo_concurrency.lockutils [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:44:49 np0005465604 nova_compute[260603]: 2025-10-02 08:44:49.495 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.537 2 DEBUG nova.network.neutron [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updating instance_info_cache with network_info: [{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.576 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.577 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Instance network_info: |[{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.578 2 DEBUG oslo_concurrency.lockutils [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.579 2 DEBUG nova.network.neutron [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.588 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Start _get_guest_xml network_info=[{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.594 2 WARNING nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.599 2 DEBUG nova.virt.libvirt.host [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.599 2 DEBUG nova.virt.libvirt.host [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.603 2 DEBUG nova.virt.libvirt.host [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.604 2 DEBUG nova.virt.libvirt.host [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.604 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.605 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.605 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.605 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.606 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.606 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.606 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.607 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.607 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.607 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.608 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.608 2 DEBUG nova.virt.hardware [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.612 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.818 2 DEBUG nova.network.neutron [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Updating instance_info_cache with network_info: [{"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.851 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Releasing lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.851 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Instance network_info: |[{"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.851 2 DEBUG oslo_concurrency.lockutils [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.852 2 DEBUG nova.network.neutron [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Refreshing network info cache for port 46d394ee-978a-48af-81e0-3175e3fedd10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.854 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Start _get_guest_xml network_info=[{"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.859 2 WARNING nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.864 2 DEBUG nova.virt.libvirt.host [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.864 2 DEBUG nova.virt.libvirt.host [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.867 2 DEBUG nova.virt.libvirt.host [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.867 2 DEBUG nova.virt.libvirt.host [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.868 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.868 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.868 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.868 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.869 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.869 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.869 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.869 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.869 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.870 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.870 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.870 2 DEBUG nova.virt.hardware [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:44:50 np0005465604 nova_compute[260603]: 2025-10-02 08:44:50.873 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:44:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1436933517' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.133 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.165 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.170 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:44:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635793004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.330 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.362 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.368 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 103 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 MiB/s wr, 39 op/s
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:44:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3457303530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.593 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.596 2 DEBUG nova.virt.libvirt.vif [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:44:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=107,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8lO59ZZnrYbP7wEOczQOvY36PKA+gVe/fckBMMuh4THRBaRJXFK1RXQfVOkub4S1cwnd2kmq3jyKyYaZVJ6dSkF3B7eITAKra64ROntkkxqN5HCVmiUlIAa9236hzUHg==',key_name='tempest-TestSecurityGroupsBasicOps-1941794773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-zb68sy0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:44:45Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=f8c66e48-8186-4434-8a82-a9be6fd98570,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.597 2 DEBUG nova.network.os_vif_util [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.598 2 DEBUG nova.network.os_vif_util [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.600 2 DEBUG nova.objects.instance [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid f8c66e48-8186-4434-8a82-a9be6fd98570 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.634 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <uuid>f8c66e48-8186-4434-8a82-a9be6fd98570</uuid>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <name>instance-0000006b</name>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145</nova:name>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:44:50</nova:creationTime>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:port uuid="2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="serial">f8c66e48-8186-4434-8a82-a9be6fd98570</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="uuid">f8c66e48-8186-4434-8a82-a9be6fd98570</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f8c66e48-8186-4434-8a82-a9be6fd98570_disk">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:4e:e4:9d"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <target dev="tap2fe5cc8e-26"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/console.log" append="off"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:44:51 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:44:51 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.636 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Preparing to wait for external event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.636 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.637 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.637 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.638 2 DEBUG nova.virt.libvirt.vif [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:44:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=107,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8lO59ZZnrYbP7wEOczQOvY36PKA+gVe/fckBMMuh4THRBaRJXFK1RXQfVOkub4S1cwnd2kmq3jyKyYaZVJ6dSkF3B7eITAKra64ROntkkxqN5HCVmiUlIAa9236hzUHg==',key_name='tempest-TestSecurityGroupsBasicOps-1941794773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-zb68sy0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:44:45Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=f8c66e48-8186-4434-8a82-a9be6fd98570,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.639 2 DEBUG nova.network.os_vif_util [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.640 2 DEBUG nova.network.os_vif_util [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.641 2 DEBUG os_vif [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.642 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fe5cc8e-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fe5cc8e-26, col_values=(('external_ids', {'iface-id': '2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:e4:9d', 'vm-uuid': 'f8c66e48-8186-4434-8a82-a9be6fd98570'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:51 np0005465604 NetworkManager[45129]: <info>  [1759394691.6511] manager: (tap2fe5cc8e-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.657 2 INFO os_vif [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26')#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.712 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.713 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.713 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:4e:e4:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.714 2 INFO nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Using config drive#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.741 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:44:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3288734153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.866 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.867 2 DEBUG nova.virt.libvirt.vif [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:44:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-459466961',id=108,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75aaa01d4f144326a24dea8ff25b20a7',ramdisk_id='',reservation_id='r-qjxua0d9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-200952902',
owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-200952902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:44:47Z,user_data=None,user_id='2e201c8855514748b06d7da4f56ed1b5',uuid=2ea36b69-0b0d-4253-8207-a159c75280b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.868 2 DEBUG nova.network.os_vif_util [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converting VIF {"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.868 2 DEBUG nova.network.os_vif_util [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.869 2 DEBUG nova.objects.instance [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ea36b69-0b0d-4253-8207-a159c75280b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.897 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <uuid>2ea36b69-0b0d-4253-8207-a159c75280b3</uuid>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <name>instance-0000006c</name>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-459466961</nova:name>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:44:50</nova:creationTime>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:user uuid="2e201c8855514748b06d7da4f56ed1b5">tempest-ServersNegativeTestMultiTenantJSON-200952902-project-member</nova:user>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:project uuid="75aaa01d4f144326a24dea8ff25b20a7">tempest-ServersNegativeTestMultiTenantJSON-200952902</nova:project>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <nova:port uuid="46d394ee-978a-48af-81e0-3175e3fedd10">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="serial">2ea36b69-0b0d-4253-8207-a159c75280b3</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="uuid">2ea36b69-0b0d-4253-8207-a159c75280b3</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/2ea36b69-0b0d-4253-8207-a159c75280b3_disk">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:1b:11:18"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <target dev="tap46d394ee-97"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/console.log" append="off"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:44:51 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:44:51 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:44:51 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:44:51 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.899 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Preparing to wait for external event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.899 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.899 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.899 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.900 2 DEBUG nova.virt.libvirt.vif [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:44:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-459466961',id=108,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75aaa01d4f144326a24dea8ff25b20a7',ramdisk_id='',reservation_id='r-qjxua0d9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-200952902',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-200952902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:44:47Z,user_data=None,user_id='2e201c8855514748b06d7da4f56ed1b5',uuid=2ea36b69-0b0d-4253-8207-a159c75280b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.901 2 DEBUG nova.network.os_vif_util [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converting VIF {"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.901 2 DEBUG nova.network.os_vif_util [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.902 2 DEBUG os_vif [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap46d394ee-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap46d394ee-97, col_values=(('external_ids', {'iface-id': '46d394ee-978a-48af-81e0-3175e3fedd10', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:11:18', 'vm-uuid': '2ea36b69-0b0d-4253-8207-a159c75280b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:51 np0005465604 NetworkManager[45129]: <info>  [1759394691.9087] manager: (tap46d394ee-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.915 2 INFO os_vif [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97')#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.980 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.981 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.981 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] No VIF found with MAC fa:16:3e:1b:11:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:44:51 np0005465604 nova_compute[260603]: 2025-10-02 08:44:51.982 2 INFO nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Using config drive#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.014 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.198 2 INFO nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Creating config drive at /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.208 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgsy6u3yq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.376 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgsy6u3yq" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.415 2 DEBUG nova.storage.rbd_utils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.418 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.472 2 INFO nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Creating config drive at /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.477 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3zus22g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.596 2 DEBUG oslo_concurrency.processutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config f8c66e48-8186-4434-8a82-a9be6fd98570_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.598 2 INFO nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Deleting local config drive /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570/disk.config because it was imported into RBD.#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.648 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3zus22g" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:52 np0005465604 kernel: tap2fe5cc8e-26: entered promiscuous mode
Oct  2 04:44:52 np0005465604 NetworkManager[45129]: <info>  [1759394692.6554] manager: (tap2fe5cc8e-26): new Tun device (/org/freedesktop/NetworkManager/Devices/431)
Oct  2 04:44:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:52Z|01098|binding|INFO|Claiming lport 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f for this chassis.
Oct  2 04:44:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:52Z|01099|binding|INFO|2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f: Claiming fa:16:3e:4e:e4:9d 10.100.0.6
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.726 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:e4:9d 10.100.0.6'], port_security=['fa:16:3e:4e:e4:9d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f8c66e48-8186-4434-8a82-a9be6fd98570', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97c204ff-8b60-40d3-b073-c7cc05dde092 b7a9cf86-0b36-4647-8672-befd80171150', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6baff0c8-9572-4afd-b1a8-e77a5daa40e0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.727 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f in datapath 7f709a68-4708-4cf6-ab7a-ec213a44899e bound to our chassis#033[00m
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.728 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7f709a68-4708-4cf6-ab7a-ec213a44899e#033[00m
Oct  2 04:44:52 np0005465604 systemd-udevd[368562]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.746 2 DEBUG nova.storage.rbd_utils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] rbd image 2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.746 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[65b2eb93-87db-4779-9240-55a79f3f9a01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.747 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7f709a68-41 in ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.750 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7f709a68-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.750 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[15a7b11e-8550-4918-917e-74334f180b97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.752 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a411b7dd-93d6-4b2b-bd5e-251ec8fa07fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:52 np0005465604 systemd-machined[214636]: New machine qemu-134-instance-0000006b.
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.756 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config 2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:44:52 np0005465604 NetworkManager[45129]: <info>  [1759394692.7631] device (tap2fe5cc8e-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:44:52 np0005465604 NetworkManager[45129]: <info>  [1759394692.7640] device (tap2fe5cc8e-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.772 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[93286f1d-b3de-4951-933d-3c787184fc6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:52 np0005465604 systemd[1]: Started Virtual Machine qemu-134-instance-0000006b.
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.800 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[341225d8-212f-439f-82c2-92e8b42ea04a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:52Z|01100|binding|INFO|Setting lport 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f ovn-installed in OVS
Oct  2 04:44:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:52Z|01101|binding|INFO|Setting lport 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f up in Southbound
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.816 2 DEBUG nova.network.neutron [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updated VIF entry in instance network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.818 2 DEBUG nova.network.neutron [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updating instance_info_cache with network_info: [{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.826 2 DEBUG nova.network.neutron [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Updated VIF entry in instance network info cache for port 46d394ee-978a-48af-81e0-3175e3fedd10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.827 2 DEBUG nova.network.neutron [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Updating instance_info_cache with network_info: [{"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.845 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bac249f1-5ac0-43ab-8130-f2c95b6357d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.846 2 DEBUG oslo_concurrency.lockutils [req-16a1362e-5c42-4bdc-9b79-073526c07c7e req-a4bf7f45-796a-42ef-a71f-1bfd80b79e59 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.847 2 DEBUG oslo_concurrency.lockutils [req-2e7c1427-6e0d-4619-9ad8-8d770b316e7a req-c3615c65-e0b1-4c9b-a2f6-18c101a8edf0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-2ea36b69-0b0d-4253-8207-a159c75280b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:44:52 np0005465604 systemd-udevd[368569]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:44:52 np0005465604 NetworkManager[45129]: <info>  [1759394692.8523] manager: (tap7f709a68-40): new Veth device (/org/freedesktop/NetworkManager/Devices/432)
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.851 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd80981-7fd2-436e-98ad-92be666761b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.887 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ee59424b-8878-4974-81d5-e24de15ac943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.892 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fe423b0f-52ca-417d-9e1e-9caf3d7b6163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:52 np0005465604 NetworkManager[45129]: <info>  [1759394692.9248] device (tap7f709a68-40): carrier: link connected
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.932 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9c80b634-741d-4dad-873c-4607b9180eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.959 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3cb0ea-8a4e-44fe-a630-e8b461527ec0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f709a68-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:84:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557152, 'reachable_time': 42578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368622, 'error': None, 'target': 'ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.960 2 DEBUG oslo_concurrency.processutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config 2ea36b69-0b0d-4253-8207-a159c75280b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:44:52 np0005465604 nova_compute[260603]: 2025-10-02 08:44:52.962 2 INFO nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Deleting local config drive /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3/disk.config because it was imported into RBD.#033[00m
Oct  2 04:44:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:52.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e38f8c67-b3e1-4621-a1a8-a71764d16c8a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:8476'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557152, 'tstamp': 557152}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368623, 'error': None, 'target': 'ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.010 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83b7ab20-9720-494a-bb12-45937ca938d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f709a68-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:84:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557152, 'reachable_time': 42578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368627, 'error': None, 'target': 'ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 kernel: tap46d394ee-97: entered promiscuous mode
Oct  2 04:44:53 np0005465604 NetworkManager[45129]: <info>  [1759394693.0232] manager: (tap46d394ee-97): new Tun device (/org/freedesktop/NetworkManager/Devices/433)
Oct  2 04:44:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:53Z|01102|binding|INFO|Claiming lport 46d394ee-978a-48af-81e0-3175e3fedd10 for this chassis.
Oct  2 04:44:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:53Z|01103|binding|INFO|46d394ee-978a-48af-81e0-3175e3fedd10: Claiming fa:16:3e:1b:11:18 10.100.0.5
Oct  2 04:44:53 np0005465604 nova_compute[260603]: 2025-10-02 08:44:53.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:53 np0005465604 systemd-udevd[368604]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:44:53 np0005465604 NetworkManager[45129]: <info>  [1759394693.0362] device (tap46d394ee-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:44:53 np0005465604 NetworkManager[45129]: <info>  [1759394693.0372] device (tap46d394ee-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.040 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:11:18 10.100.0.5'], port_security=['fa:16:3e:1b:11:18 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2ea36b69-0b0d-4253-8207-a159c75280b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a70dde6b-eda7-404f-be3f-fdfd21130765', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75aaa01d4f144326a24dea8ff25b20a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b642237-0371-4fa9-9985-5a0296cfa4d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b63137b-fac5-4aa7-bab2-c0c2acc8a529, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=46d394ee-978a-48af-81e0-3175e3fedd10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.048 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0b0118-f6e5-4f8f-b5da-ccdb20a278ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 systemd-machined[214636]: New machine qemu-135-instance-0000006c.
Oct  2 04:44:53 np0005465604 systemd[1]: Started Virtual Machine qemu-135-instance-0000006c.
Oct  2 04:44:53 np0005465604 nova_compute[260603]: 2025-10-02 08:44:53.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:53Z|01104|binding|INFO|Setting lport 46d394ee-978a-48af-81e0-3175e3fedd10 ovn-installed in OVS
Oct  2 04:44:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:53Z|01105|binding|INFO|Setting lport 46d394ee-978a-48af-81e0-3175e3fedd10 up in Southbound
Oct  2 04:44:53 np0005465604 nova_compute[260603]: 2025-10-02 08:44:53.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.116 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18871649-b129-4d65-a5ee-e401022066f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.117 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f709a68-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.117 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.118 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f709a68-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:53 np0005465604 nova_compute[260603]: 2025-10-02 08:44:53.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:53 np0005465604 NetworkManager[45129]: <info>  [1759394693.1202] manager: (tap7f709a68-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Oct  2 04:44:53 np0005465604 kernel: tap7f709a68-40: entered promiscuous mode
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.123 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7f709a68-40, col_values=(('external_ids', {'iface-id': 'd2cd5a12-3184-4d72-ac2f-758e53e7613e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:53Z|01106|binding|INFO|Releasing lport d2cd5a12-3184-4d72-ac2f-758e53e7613e from this chassis (sb_readonly=0)
Oct  2 04:44:53 np0005465604 nova_compute[260603]: 2025-10-02 08:44:53.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:53 np0005465604 nova_compute[260603]: 2025-10-02 08:44:53.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.140 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7f709a68-4708-4cf6-ab7a-ec213a44899e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7f709a68-4708-4cf6-ab7a-ec213a44899e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.141 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[621297bc-d0f1-4516-bc4f-355d5ded0696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.142 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-7f709a68-4708-4cf6-ab7a-ec213a44899e
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/7f709a68-4708-4cf6-ab7a-ec213a44899e.pid.haproxy
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 7f709a68-4708-4cf6-ab7a-ec213a44899e
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.142 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'env', 'PROCESS_TAG=haproxy-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7f709a68-4708-4cf6-ab7a-ec213a44899e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:44:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Oct  2 04:44:53 np0005465604 nova_compute[260603]: 2025-10-02 08:44:53.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.421 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:44:53 np0005465604 podman[368673]: 2025-10-02 08:44:53.503779039 +0000 UTC m=+0.069961241 container create e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 04:44:53 np0005465604 systemd[1]: Started libpod-conmon-e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2.scope.
Oct  2 04:44:53 np0005465604 podman[368673]: 2025-10-02 08:44:53.477921994 +0000 UTC m=+0.044104216 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:44:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:44:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5042a5e1a9fd8d273a882aed1c11acd286e1034728129aad25bceba67731ade/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:53 np0005465604 podman[368673]: 2025-10-02 08:44:53.575674311 +0000 UTC m=+0.141856503 container init e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 04:44:53 np0005465604 podman[368673]: 2025-10-02 08:44:53.580572414 +0000 UTC m=+0.146754606 container start e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:44:53 np0005465604 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [NOTICE]   (368692) : New worker (368694) forked
Oct  2 04:44:53 np0005465604 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [NOTICE]   (368692) : Loading success.
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.656 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 46d394ee-978a-48af-81e0-3175e3fedd10 in datapath a70dde6b-eda7-404f-be3f-fdfd21130765 unbound from our chassis#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.659 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a70dde6b-eda7-404f-be3f-fdfd21130765#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.670 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[899d265d-f4d1-4928-bf81-a0af14b56bc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.671 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa70dde6b-e1 in ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.673 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa70dde6b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.673 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[05af059d-5ba7-4b93-9de7-d02c0f31f4f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.674 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[85732702-c317-4c76-9446-b9db4ac54ed6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.685 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[53f97b12-9a0e-4577-82c0-afa388dba6a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.710 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[93b78820-507c-45ff-b0da-7fd28e7623fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.740 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c91275e5-4539-4c03-b20b-fdfd45123e3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.750 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[864cbf4e-8fa6-4489-ac34-763359edcd8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 NetworkManager[45129]: <info>  [1759394693.7509] manager: (tapa70dde6b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/435)
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.785 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0e31f57c-2c43-4c53-8a31-d674968e068f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.791 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9051dd6e-644c-428f-bb39-0bb4d6efb3b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 NetworkManager[45129]: <info>  [1759394693.8173] device (tapa70dde6b-e0): carrier: link connected
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.822 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf5c76d-c24c-4573-92c8-13c3828cd550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.842 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2f84c9-9bba-4a80-b2e3-302673d8e392]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa70dde6b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:9a:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557242, 'reachable_time': 43247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368715, 'error': None, 'target': 'ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.859 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[378bda73-4863-4ae4-a8d3-136710617a8d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:9a19'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557242, 'tstamp': 557242}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368716, 'error': None, 'target': 'ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.878 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ded4db-061b-491e-a067-bac65cf948ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa70dde6b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:9a:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557242, 'reachable_time': 43247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368717, 'error': None, 'target': 'ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.913 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[be664e51-4ada-46bd-87d7-a2e21cc00d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.989 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[59ef813e-0914-4a38-9e05-e7724c862e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.990 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa70dde6b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.991 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.991 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa70dde6b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:53 np0005465604 kernel: tapa70dde6b-e0: entered promiscuous mode
Oct  2 04:44:53 np0005465604 NetworkManager[45129]: <info>  [1759394693.9942] manager: (tapa70dde6b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Oct  2 04:44:53 np0005465604 nova_compute[260603]: 2025-10-02 08:44:53.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.996 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa70dde6b-e0, col_values=(('external_ids', {'iface-id': 'fdf0eff9-d3e0-4347-9246-553457a706bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:53Z|01107|binding|INFO|Releasing lport fdf0eff9-d3e0-4347-9246-553457a706bc from this chassis (sb_readonly=0)
Oct  2 04:44:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:53.999 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a70dde6b-eda7-404f-be3f-fdfd21130765.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a70dde6b-eda7-404f-be3f-fdfd21130765.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:54.001 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6129f4e6-8577-435d-99a7-9d4e3acffb9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:54.002 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-a70dde6b-eda7-404f-be3f-fdfd21130765
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/a70dde6b-eda7-404f-be3f-fdfd21130765.pid.haproxy
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID a70dde6b-eda7-404f-be3f-fdfd21130765
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:54.004 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765', 'env', 'PROCESS_TAG=haproxy-a70dde6b-eda7-404f-be3f-fdfd21130765', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a70dde6b-eda7-404f-be3f-fdfd21130765.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:54 np0005465604 podman[368833]: 2025-10-02 08:44:54.361273177 +0000 UTC m=+0.044704605 container create 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:44:54 np0005465604 systemd[1]: Started libpod-conmon-572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383.scope.
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:44:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8c8c7591665252ab2243dbe65cfdd1efc17d9bbb25418b141b6fa64fb6308b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:44:54 np0005465604 podman[368833]: 2025-10-02 08:44:54.336204835 +0000 UTC m=+0.019636273 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:44:54 np0005465604 podman[368833]: 2025-10-02 08:44:54.432262589 +0000 UTC m=+0.115694048 container init 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 04:44:54 np0005465604 podman[368833]: 2025-10-02 08:44:54.437323987 +0000 UTC m=+0.120755425 container start 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:44:54 np0005465604 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [NOTICE]   (368852) : New worker (368854) forked
Oct  2 04:44:54 np0005465604 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [NOTICE]   (368852) : Loading success.
Oct  2 04:44:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:54.484 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.511 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394694.5110915, 2ea36b69-0b0d-4253-8207-a159c75280b3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.512 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] VM Started (Lifecycle Event)#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.534 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.538 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394694.5118325, 2ea36b69-0b0d-4253-8207-a159c75280b3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.538 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.564 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.568 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.594 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.689 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394694.6891563, f8c66e48-8186-4434-8a82-a9be6fd98570 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.690 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] VM Started (Lifecycle Event)#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.709 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.712 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394694.6900394, f8c66e48-8186-4434-8a82-a9be6fd98570 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.712 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.732 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.735 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.853 2 DEBUG nova.compute.manager [req-43f3f298-edcf-43e2-81e1-539fe3c7519a req-e0a7b4c1-71a9-4aa0-8c93-3fc8d1ce02b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.853 2 DEBUG oslo_concurrency.lockutils [req-43f3f298-edcf-43e2-81e1-539fe3c7519a req-e0a7b4c1-71a9-4aa0-8c93-3fc8d1ce02b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.853 2 DEBUG oslo_concurrency.lockutils [req-43f3f298-edcf-43e2-81e1-539fe3c7519a req-e0a7b4c1-71a9-4aa0-8c93-3fc8d1ce02b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.853 2 DEBUG oslo_concurrency.lockutils [req-43f3f298-edcf-43e2-81e1-539fe3c7519a req-e0a7b4c1-71a9-4aa0-8c93-3fc8d1ce02b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.854 2 DEBUG nova.compute.manager [req-43f3f298-edcf-43e2-81e1-539fe3c7519a req-e0a7b4c1-71a9-4aa0-8c93-3fc8d1ce02b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Processing event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.854 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.857 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.860 2 INFO nova.virt.libvirt.driver [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Instance spawned successfully.#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.860 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.871 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.871 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394694.8571382, f8c66e48-8186-4434-8a82-a9be6fd98570 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.872 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.879 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.880 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.880 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.881 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.881 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.882 2 DEBUG nova.virt.libvirt.driver [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.887 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.890 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.910 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.962 2 INFO nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Took 9.31 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:44:54 np0005465604 nova_compute[260603]: 2025-10-02 08:44:54.963 2 DEBUG nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:44:55 np0005465604 nova_compute[260603]: 2025-10-02 08:44:55.037 2 INFO nova.compute.manager [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Took 10.21 seconds to build instance.#033[00m
Oct  2 04:44:55 np0005465604 nova_compute[260603]: 2025-10-02 08:44:55.058 2 DEBUG oslo_concurrency.lockutils [None req-7b589a79-5df4-4a78-b225-49e1441f3a12 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Oct  2 04:44:56 np0005465604 nova_compute[260603]: 2025-10-02 08:44:56.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 705 KiB/s rd, 3.6 MiB/s wr, 90 op/s
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.533 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.534 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.534 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.534 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.535 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] No waiting events found dispatching network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.535 2 WARNING nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received unexpected event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f for instance with vm_state active and task_state None.#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.536 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.536 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.536 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.537 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.537 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Processing event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.537 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.538 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.538 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.539 2 DEBUG oslo_concurrency.lockutils [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.539 2 DEBUG nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] No waiting events found dispatching network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.540 2 WARNING nova.compute.manager [req-d71a7ad2-0cc2-4131-ba3a-58f771389195 req-876e85b8-4950-4031-bf81-b3331a52ca96 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received unexpected event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.541 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.551 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.552 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394697.5510383, 2ea36b69-0b0d-4253-8207-a159c75280b3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.552 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.560 2 INFO nova.virt.libvirt.driver [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Instance spawned successfully.#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.561 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.598 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.604 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.604 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.604 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.605 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.605 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.605 2 DEBUG nova.virt.libvirt.driver [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.610 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.650 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.682 2 INFO nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Took 10.46 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.683 2 DEBUG nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.750 2 INFO nova.compute.manager [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Took 11.44 seconds to build instance.#033[00m
Oct  2 04:44:57 np0005465604 nova_compute[260603]: 2025-10-02 08:44:57.777 2 DEBUG oslo_concurrency.lockutils [None req-67a44e9d-4aaf-457d-8932-46cbe4308ad9 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:44:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:44:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:44:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:44:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:44:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:44:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:44:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:44:58 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:44:58.486 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:44:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 127 op/s
Oct  2 04:44:59 np0005465604 nova_compute[260603]: 2025-10-02 08:44:59.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:59 np0005465604 nova_compute[260603]: 2025-10-02 08:44:59.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:59 np0005465604 NetworkManager[45129]: <info>  [1759394699.6584] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Oct  2 04:44:59 np0005465604 NetworkManager[45129]: <info>  [1759394699.6593] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Oct  2 04:44:59 np0005465604 nova_compute[260603]: 2025-10-02 08:44:59.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:44:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:59Z|01108|binding|INFO|Releasing lport fdf0eff9-d3e0-4347-9246-553457a706bc from this chassis (sb_readonly=0)
Oct  2 04:44:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:44:59Z|01109|binding|INFO|Releasing lport d2cd5a12-3184-4d72-ac2f-758e53e7613e from this chassis (sb_readonly=0)
Oct  2 04:44:59 np0005465604 nova_compute[260603]: 2025-10-02 08:44:59.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:00 np0005465604 nova_compute[260603]: 2025-10-02 08:45:00.658 2 DEBUG nova.compute.manager [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:00 np0005465604 nova_compute[260603]: 2025-10-02 08:45:00.659 2 DEBUG nova.compute.manager [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing instance network info cache due to event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:45:00 np0005465604 nova_compute[260603]: 2025-10-02 08:45:00.659 2 DEBUG oslo_concurrency.lockutils [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:45:00 np0005465604 nova_compute[260603]: 2025-10-02 08:45:00.660 2 DEBUG oslo_concurrency.lockutils [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:45:00 np0005465604 nova_compute[260603]: 2025-10-02 08:45:00.660 2 DEBUG nova.network.neutron [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:45:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 134 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 99 op/s
Oct  2 04:45:01 np0005465604 nova_compute[260603]: 2025-10-02 08:45:01.776 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:01 np0005465604 nova_compute[260603]: 2025-10-02 08:45:01.777 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:01 np0005465604 nova_compute[260603]: 2025-10-02 08:45:01.778 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:01 np0005465604 nova_compute[260603]: 2025-10-02 08:45:01.778 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:01 np0005465604 nova_compute[260603]: 2025-10-02 08:45:01.778 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:01 np0005465604 nova_compute[260603]: 2025-10-02 08:45:01.779 2 INFO nova.compute.manager [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Terminating instance#033[00m
Oct  2 04:45:01 np0005465604 nova_compute[260603]: 2025-10-02 08:45:01.780 2 DEBUG nova.compute.manager [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:45:01 np0005465604 kernel: tap46d394ee-97 (unregistering): left promiscuous mode
Oct  2 04:45:01 np0005465604 NetworkManager[45129]: <info>  [1759394701.8238] device (tap46d394ee-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:45:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:01Z|01110|binding|INFO|Releasing lport 46d394ee-978a-48af-81e0-3175e3fedd10 from this chassis (sb_readonly=0)
Oct  2 04:45:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:01Z|01111|binding|INFO|Setting lport 46d394ee-978a-48af-81e0-3175e3fedd10 down in Southbound
Oct  2 04:45:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:01Z|01112|binding|INFO|Removing iface tap46d394ee-97 ovn-installed in OVS
Oct  2 04:45:01 np0005465604 nova_compute[260603]: 2025-10-02 08:45:01.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:01.836 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:11:18 10.100.0.5'], port_security=['fa:16:3e:1b:11:18 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '2ea36b69-0b0d-4253-8207-a159c75280b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a70dde6b-eda7-404f-be3f-fdfd21130765', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75aaa01d4f144326a24dea8ff25b20a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b642237-0371-4fa9-9985-5a0296cfa4d7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b63137b-fac5-4aa7-bab2-c0c2acc8a529, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=46d394ee-978a-48af-81e0-3175e3fedd10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:45:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:01.837 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 46d394ee-978a-48af-81e0-3175e3fedd10 in datapath a70dde6b-eda7-404f-be3f-fdfd21130765 unbound from our chassis#033[00m
Oct  2 04:45:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:01.839 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a70dde6b-eda7-404f-be3f-fdfd21130765, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:45:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:01.841 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c807c2e8-18ec-476f-af60-22a65c19f886]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:01.842 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765 namespace which is not needed anymore#033[00m
Oct  2 04:45:01 np0005465604 nova_compute[260603]: 2025-10-02 08:45:01.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:01 np0005465604 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct  2 04:45:01 np0005465604 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Consumed 5.668s CPU time.
Oct  2 04:45:01 np0005465604 systemd-machined[214636]: Machine qemu-135-instance-0000006c terminated.
Oct  2 04:45:01 np0005465604 nova_compute[260603]: 2025-10-02 08:45:01.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:01 np0005465604 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [NOTICE]   (368852) : haproxy version is 2.8.14-c23fe91
Oct  2 04:45:01 np0005465604 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [NOTICE]   (368852) : path to executable is /usr/sbin/haproxy
Oct  2 04:45:01 np0005465604 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [WARNING]  (368852) : Exiting Master process...
Oct  2 04:45:01 np0005465604 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [WARNING]  (368852) : Exiting Master process...
Oct  2 04:45:01 np0005465604 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [ALERT]    (368852) : Current worker (368854) exited with code 143 (Terminated)
Oct  2 04:45:01 np0005465604 neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765[368848]: [WARNING]  (368852) : All workers exited. Exiting... (0)
Oct  2 04:45:01 np0005465604 systemd[1]: libpod-572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383.scope: Deactivated successfully.
Oct  2 04:45:01 np0005465604 conmon[368848]: conmon 572bf55a670b000ba6c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383.scope/container/memory.events
Oct  2 04:45:01 np0005465604 podman[368887]: 2025-10-02 08:45:01.981522668 +0000 UTC m=+0.042628560 container died 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.015 2 INFO nova.virt.libvirt.driver [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Instance destroyed successfully.#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.015 2 DEBUG nova.objects.instance [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lazy-loading 'resources' on Instance uuid 2ea36b69-0b0d-4253-8207-a159c75280b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:45:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fa8c8c7591665252ab2243dbe65cfdd1efc17d9bbb25418b141b6fa64fb6308b-merged.mount: Deactivated successfully.
Oct  2 04:45:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383-userdata-shm.mount: Deactivated successfully.
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.033 2 DEBUG nova.virt.libvirt.vif [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:44:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-459466961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-459466961',id=108,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:44:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75aaa01d4f144326a24dea8ff25b20a7',ramdisk_id='',reservation_id='r-qjxua0d9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',im
age_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-200952902',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-200952902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:44:57Z,user_data=None,user_id='2e201c8855514748b06d7da4f56ed1b5',uuid=2ea36b69-0b0d-4253-8207-a159c75280b3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.034 2 DEBUG nova.network.os_vif_util [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converting VIF {"id": "46d394ee-978a-48af-81e0-3175e3fedd10", "address": "fa:16:3e:1b:11:18", "network": {"id": "a70dde6b-eda7-404f-be3f-fdfd21130765", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1793865892-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75aaa01d4f144326a24dea8ff25b20a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap46d394ee-97", "ovs_interfaceid": "46d394ee-978a-48af-81e0-3175e3fedd10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.035 2 DEBUG nova.network.os_vif_util [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.035 2 DEBUG os_vif [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:45:02 np0005465604 podman[368887]: 2025-10-02 08:45:02.038608018 +0000 UTC m=+0.099713900 container cleanup 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.039 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap46d394ee-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.049 2 INFO os_vif [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:11:18,bridge_name='br-int',has_traffic_filtering=True,id=46d394ee-978a-48af-81e0-3175e3fedd10,network=Network(a70dde6b-eda7-404f-be3f-fdfd21130765),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap46d394ee-97')#033[00m
Oct  2 04:45:02 np0005465604 systemd[1]: libpod-conmon-572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383.scope: Deactivated successfully.
Oct  2 04:45:02 np0005465604 podman[368926]: 2025-10-02 08:45:02.143709393 +0000 UTC m=+0.081190571 container remove 572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:45:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.152 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[547d9847-5a03-4b52-ab69-33eee4bcdaba]: (4, ('Thu Oct  2 08:45:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765 (572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383)\n572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383\nThu Oct  2 08:45:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765 (572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383)\n572bf55a670b000ba6c3d7268477f4a059d1a1d0fa9c9c1338b9a48081453383\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.154 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[af309ce0-2fee-49ae-8391-11ac8fdd3b16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.156 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa70dde6b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:02 np0005465604 kernel: tapa70dde6b-e0: left promiscuous mode
Oct  2 04:45:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.163 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72338187-3723-4545-9d03-d8aa421f5dae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.196 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44a07e20-ff41-4113-b55c-5569021be744]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.197 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d3cba5-491d-4987-a945-d66505141236]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.212 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e99ebc8f-8a8a-47a9-92eb-520732fe223f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557233, 'reachable_time': 39595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368958, 'error': None, 'target': 'ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:02 np0005465604 systemd[1]: run-netns-ovnmeta\x2da70dde6b\x2deda7\x2d404f\x2dbe3f\x2dfdfd21130765.mount: Deactivated successfully.
Oct  2 04:45:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.218 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a70dde6b-eda7-404f-be3f-fdfd21130765 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:45:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:02.218 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[323135b7-71bd-4f5c-ba40-8c3323159984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.384 2 DEBUG nova.network.neutron [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updated VIF entry in instance network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.384 2 DEBUG nova.network.neutron [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updating instance_info_cache with network_info: [{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.416 2 DEBUG oslo_concurrency.lockutils [req-e81ba5f9-5884-4280-bc74-531b9980b035 req-127fbe63-9318-4e7c-bf2e-0d8a74efa92f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.664 2 INFO nova.virt.libvirt.driver [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Deleting instance files /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3_del#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.665 2 INFO nova.virt.libvirt.driver [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Deletion of /var/lib/nova/instances/2ea36b69-0b0d-4253-8207-a159c75280b3_del complete#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.754 2 INFO nova.compute.manager [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.755 2 DEBUG oslo.service.loopingcall [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.756 2 DEBUG nova.compute.manager [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.756 2 DEBUG nova.network.neutron [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:45:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.914 2 DEBUG nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-unplugged-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.914 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.915 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.915 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.916 2 DEBUG nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] No waiting events found dispatching network-vif-unplugged-46d394ee-978a-48af-81e0-3175e3fedd10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.916 2 DEBUG nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-unplugged-46d394ee-978a-48af-81e0-3175e3fedd10 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.917 2 DEBUG nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.917 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.918 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.918 2 DEBUG oslo_concurrency.lockutils [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.918 2 DEBUG nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] No waiting events found dispatching network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:45:02 np0005465604 nova_compute[260603]: 2025-10-02 08:45:02.919 2 WARNING nova.compute.manager [req-538a7b66-68d5-4e8d-8f0f-e796b9adc269 req-1d91975d-f39e-47cf-89ef-dbb9ddc2adaa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received unexpected event network-vif-plugged-46d394ee-978a-48af-81e0-3175e3fedd10 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:45:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 109 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 175 op/s
Oct  2 04:45:03 np0005465604 nova_compute[260603]: 2025-10-02 08:45:03.533 2 DEBUG nova.network.neutron [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:45:03 np0005465604 nova_compute[260603]: 2025-10-02 08:45:03.557 2 INFO nova.compute.manager [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Took 0.80 seconds to deallocate network for instance.#033[00m
Oct  2 04:45:03 np0005465604 nova_compute[260603]: 2025-10-02 08:45:03.625 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:03 np0005465604 nova_compute[260603]: 2025-10-02 08:45:03.626 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:03 np0005465604 nova_compute[260603]: 2025-10-02 08:45:03.675 2 DEBUG nova.compute.manager [req-4b63b552-23ac-4d71-8ba6-6360fa2fb894 req-19878b0b-f0cd-4ef6-a0ca-7109e07a4379 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Received event network-vif-deleted-46d394ee-978a-48af-81e0-3175e3fedd10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:03 np0005465604 nova_compute[260603]: 2025-10-02 08:45:03.694 2 DEBUG oslo_concurrency.processutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:45:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3288667511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:45:04 np0005465604 nova_compute[260603]: 2025-10-02 08:45:04.190 2 DEBUG oslo_concurrency.processutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:04 np0005465604 nova_compute[260603]: 2025-10-02 08:45:04.199 2 DEBUG nova.compute.provider_tree [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:45:04 np0005465604 nova_compute[260603]: 2025-10-02 08:45:04.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:04 np0005465604 nova_compute[260603]: 2025-10-02 08:45:04.419 2 DEBUG nova.scheduler.client.report [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:45:04 np0005465604 nova_compute[260603]: 2025-10-02 08:45:04.447 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:04 np0005465604 nova_compute[260603]: 2025-10-02 08:45:04.470 2 INFO nova.scheduler.client.report [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Deleted allocations for instance 2ea36b69-0b0d-4253-8207-a159c75280b3#033[00m
Oct  2 04:45:04 np0005465604 nova_compute[260603]: 2025-10-02 08:45:04.552 2 DEBUG oslo_concurrency.lockutils [None req-ea1ea70e-2936-446e-9b30-10c68d627ea1 2e201c8855514748b06d7da4f56ed1b5 75aaa01d4f144326a24dea8ff25b20a7 - - default default] Lock "2ea36b69-0b0d-4253-8207-a159c75280b3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:05 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct  2 04:45:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 109 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 155 op/s
Oct  2 04:45:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:06Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:e4:9d 10.100.0.6
Oct  2 04:45:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:06Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:e4:9d 10.100.0.6
Oct  2 04:45:07 np0005465604 nova_compute[260603]: 2025-10-02 08:45:07.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:07 np0005465604 podman[368984]: 2025-10-02 08:45:07.071496575 +0000 UTC m=+0.116453291 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 04:45:07 np0005465604 podman[368983]: 2025-10-02 08:45:07.113649509 +0000 UTC m=+0.158336486 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 04:45:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 97 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 933 KiB/s wr, 182 op/s
Oct  2 04:45:07 np0005465604 nova_compute[260603]: 2025-10-02 08:45:07.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 200 op/s
Oct  2 04:45:09 np0005465604 nova_compute[260603]: 2025-10-02 08:45:09.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:09Z|01113|binding|INFO|Releasing lport d2cd5a12-3184-4d72-ac2f-758e53e7613e from this chassis (sb_readonly=0)
Oct  2 04:45:09 np0005465604 nova_compute[260603]: 2025-10-02 08:45:09.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct  2 04:45:12 np0005465604 nova_compute[260603]: 2025-10-02 08:45:12.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:12 np0005465604 nova_compute[260603]: 2025-10-02 08:45:12.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct  2 04:45:14 np0005465604 podman[369028]: 2025-10-02 08:45:14.051033567 +0000 UTC m=+0.095783067 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:45:14 np0005465604 podman[369027]: 2025-10-02 08:45:14.057822599 +0000 UTC m=+0.105825180 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 04:45:14 np0005465604 nova_compute[260603]: 2025-10-02 08:45:14.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:14.809 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:9f:d1 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71241621-328b-4cc6-9189-8a3f2523fa85, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5d7367a8-24c1-4575-a359-215362e30ab8) old=Port_Binding(mac=['fa:16:3e:f0:9f:d1 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:45:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:14.812 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5d7367a8-24c1-4575-a359-215362e30ab8 in datapath b4dae717-ce03-4a30-b5fe-ad727373e453 updated#033[00m
Oct  2 04:45:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:14.815 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4dae717-ce03-4a30-b5fe-ad727373e453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:45:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:14.820 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa9e47c6-d72f-4b8f-8038-99dec0a6d795]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct  2 04:45:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:15Z|01114|binding|INFO|Releasing lport d2cd5a12-3184-4d72-ac2f-758e53e7613e from this chassis (sb_readonly=0)
Oct  2 04:45:15 np0005465604 nova_compute[260603]: 2025-10-02 08:45:15.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:17 np0005465604 nova_compute[260603]: 2025-10-02 08:45:17.013 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394702.0108202, 2ea36b69-0b0d-4253-8207-a159c75280b3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:45:17 np0005465604 nova_compute[260603]: 2025-10-02 08:45:17.013 2 INFO nova.compute.manager [-] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:45:17 np0005465604 nova_compute[260603]: 2025-10-02 08:45:17.040 2 DEBUG nova.compute.manager [None req-4d1ff898-7376-4e9e-96aa-4e67cba91d91 - - - - - -] [instance: 2ea36b69-0b0d-4253-8207-a159c75280b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:45:17 np0005465604 nova_compute[260603]: 2025-10-02 08:45:17.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct  2 04:45:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:19 np0005465604 nova_compute[260603]: 2025-10-02 08:45:19.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Oct  2 04:45:19 np0005465604 nova_compute[260603]: 2025-10-02 08:45:19.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.971 2 DEBUG nova.compute.manager [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.971 2 DEBUG nova.compute.manager [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing instance network info cache due to event network-changed-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.972 2 DEBUG oslo_concurrency.lockutils [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.972 2 DEBUG oslo_concurrency.lockutils [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.973 2 DEBUG nova.network.neutron [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Refreshing network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.988 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.988 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.989 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.989 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.990 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.992 2 INFO nova.compute.manager [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Terminating instance#033[00m
Oct  2 04:45:20 np0005465604 nova_compute[260603]: 2025-10-02 08:45:20.994 2 DEBUG nova.compute.manager [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:45:21 np0005465604 kernel: tap2fe5cc8e-26 (unregistering): left promiscuous mode
Oct  2 04:45:21 np0005465604 NetworkManager[45129]: <info>  [1759394721.0749] device (tap2fe5cc8e-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:45:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:21Z|01115|binding|INFO|Releasing lport 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f from this chassis (sb_readonly=0)
Oct  2 04:45:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:21Z|01116|binding|INFO|Setting lport 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f down in Southbound
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:21 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:21Z|01117|binding|INFO|Removing iface tap2fe5cc8e-26 ovn-installed in OVS
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.097 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:e4:9d 10.100.0.6'], port_security=['fa:16:3e:4e:e4:9d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f8c66e48-8186-4434-8a82-a9be6fd98570', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97c204ff-8b60-40d3-b073-c7cc05dde092 b7a9cf86-0b36-4647-8672-befd80171150', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6baff0c8-9572-4afd-b1a8-e77a5daa40e0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.099 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f in datapath 7f709a68-4708-4cf6-ab7a-ec213a44899e unbound from our chassis#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.102 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f709a68-4708-4cf6-ab7a-ec213a44899e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.103 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbdfe33-31b8-487b-8e65-dd1678016e80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.105 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e namespace which is not needed anymore#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:21 np0005465604 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct  2 04:45:21 np0005465604 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Consumed 14.072s CPU time.
Oct  2 04:45:21 np0005465604 systemd-machined[214636]: Machine qemu-134-instance-0000006b terminated.
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.249 2 INFO nova.virt.libvirt.driver [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Instance destroyed successfully.#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.250 2 DEBUG nova.objects.instance [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid f8c66e48-8186-4434-8a82-a9be6fd98570 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.269 2 DEBUG nova.virt.libvirt.vif [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:44:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-470181145',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=107,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8lO59ZZnrYbP7wEOczQOvY36PKA+gVe/fckBMMuh4THRBaRJXFK1RXQfVOkub4S1cwnd2kmq3jyKyYaZVJ6dSkF3B7eITAKra64ROntkkxqN5HCVmiUlIAa9236hzUHg==',key_name='tempest-TestSecurityGroupsBasicOps-1941794773',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:44:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-zb68sy0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:44:55Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=f8c66e48-8186-4434-8a82-a9be6fd98570,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.269 2 DEBUG nova.network.os_vif_util [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:45:21 np0005465604 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [NOTICE]   (368692) : haproxy version is 2.8.14-c23fe91
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.270 2 DEBUG nova.network.os_vif_util [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:45:21 np0005465604 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [NOTICE]   (368692) : path to executable is /usr/sbin/haproxy
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.271 2 DEBUG os_vif [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:45:21 np0005465604 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [ALERT]    (368692) : Current worker (368694) exited with code 143 (Terminated)
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fe5cc8e-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:21 np0005465604 neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e[368688]: [WARNING]  (368692) : All workers exited. Exiting... (0)
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:21 np0005465604 systemd[1]: libpod-e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2.scope: Deactivated successfully.
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.280 2 INFO os_vif [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:e4:9d,bridge_name='br-int',has_traffic_filtering=True,id=2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f,network=Network(7f709a68-4708-4cf6-ab7a-ec213a44899e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fe5cc8e-26')#033[00m
Oct  2 04:45:21 np0005465604 podman[369092]: 2025-10-02 08:45:21.288463955 +0000 UTC m=+0.059397872 container died e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 04:45:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2-userdata-shm.mount: Deactivated successfully.
Oct  2 04:45:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f5042a5e1a9fd8d273a882aed1c11acd286e1034728129aad25bceba67731ade-merged.mount: Deactivated successfully.
Oct  2 04:45:21 np0005465604 podman[369092]: 2025-10-02 08:45:21.339855297 +0000 UTC m=+0.110789174 container cleanup e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:45:21 np0005465604 systemd[1]: libpod-conmon-e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2.scope: Deactivated successfully.
Oct  2 04:45:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 121 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct  2 04:45:21 np0005465604 podman[369146]: 2025-10-02 08:45:21.427355495 +0000 UTC m=+0.059338251 container remove e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.435 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6c97c432-fde8-48fd-addc-747368a4ef54]: (4, ('Thu Oct  2 08:45:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e (e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2)\ne32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2\nThu Oct  2 08:45:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e (e32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2)\ne32d56c2f12ae3236e3cc45b9cf7bd79f8a1391437f9993a77bd7ae589d610d2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a51c093a-4625-4979-b40c-1cc2b84f0d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.438 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f709a68-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:21 np0005465604 kernel: tap7f709a68-40: left promiscuous mode
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.450 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[537018ee-ce9a-4bc5-b33a-463d22b2cf69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.483 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ced586db-5e89-491d-ade7-850f534c6c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.484 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac59f53-9335-4c17-b708-735206188173]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9cbaeed-50c9-4f75-8c9d-b8d975fb1ed6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557144, 'reachable_time': 42358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369161, 'error': None, 'target': 'ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:21 np0005465604 systemd[1]: run-netns-ovnmeta\x2d7f709a68\x2d4708\x2d4cf6\x2dab7a\x2dec213a44899e.mount: Deactivated successfully.
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.517 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f709a68-4708-4cf6-ab7a-ec213a44899e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:45:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:21.517 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7d59673d-8dd7-459a-8e8f-101c00d98369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.693 2 INFO nova.virt.libvirt.driver [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Deleting instance files /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570_del#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.694 2 INFO nova.virt.libvirt.driver [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Deletion of /var/lib/nova/instances/f8c66e48-8186-4434-8a82-a9be6fd98570_del complete#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.758 2 INFO nova.compute.manager [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.759 2 DEBUG oslo.service.loopingcall [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.759 2 DEBUG nova.compute.manager [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:45:21 np0005465604 nova_compute[260603]: 2025-10-02 08:45:21.760 2 DEBUG nova.network.neutron [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:45:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:45:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1429547899' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:45:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:45:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1429547899' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:45:22 np0005465604 nova_compute[260603]: 2025-10-02 08:45:22.498 2 DEBUG nova.network.neutron [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:45:22 np0005465604 nova_compute[260603]: 2025-10-02 08:45:22.539 2 INFO nova.compute.manager [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Took 0.78 seconds to deallocate network for instance.#033[00m
Oct  2 04:45:22 np0005465604 nova_compute[260603]: 2025-10-02 08:45:22.620 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:22 np0005465604 nova_compute[260603]: 2025-10-02 08:45:22.620 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:22 np0005465604 nova_compute[260603]: 2025-10-02 08:45:22.683 2 DEBUG oslo_concurrency.processutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:22 np0005465604 nova_compute[260603]: 2025-10-02 08:45:22.740 2 DEBUG nova.compute.manager [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:22 np0005465604 nova_compute[260603]: 2025-10-02 08:45:22.740 2 DEBUG oslo_concurrency.lockutils [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:22 np0005465604 nova_compute[260603]: 2025-10-02 08:45:22.741 2 DEBUG oslo_concurrency.lockutils [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:22 np0005465604 nova_compute[260603]: 2025-10-02 08:45:22.741 2 DEBUG oslo_concurrency.lockutils [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:22 np0005465604 nova_compute[260603]: 2025-10-02 08:45:22.742 2 DEBUG nova.compute.manager [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] No waiting events found dispatching network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:45:22 np0005465604 nova_compute[260603]: 2025-10-02 08:45:22.742 2 WARNING nova.compute.manager [req-67a12357-33ff-4c40-a2ad-9f3104a59b33 req-2a0fdf83-66ea-4294-8777-fce210cf53c0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received unexpected event network-vif-plugged-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:45:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:23 np0005465604 nova_compute[260603]: 2025-10-02 08:45:23.162 2 DEBUG nova.compute.manager [req-2a7ddda0-5dbc-4ab4-9350-b2874c045dca req-55d9ba5e-c3f9-4910-aa56-f1f6a065caf4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Received event network-vif-deleted-2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:45:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2712699084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:45:23 np0005465604 nova_compute[260603]: 2025-10-02 08:45:23.197 2 DEBUG oslo_concurrency.processutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:23 np0005465604 nova_compute[260603]: 2025-10-02 08:45:23.203 2 DEBUG nova.compute.provider_tree [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:45:23 np0005465604 nova_compute[260603]: 2025-10-02 08:45:23.226 2 DEBUG nova.scheduler.client.report [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:45:23 np0005465604 nova_compute[260603]: 2025-10-02 08:45:23.253 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:23 np0005465604 nova_compute[260603]: 2025-10-02 08:45:23.278 2 INFO nova.scheduler.client.report [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance f8c66e48-8186-4434-8a82-a9be6fd98570#033[00m
Oct  2 04:45:23 np0005465604 nova_compute[260603]: 2025-10-02 08:45:23.350 2 DEBUG oslo_concurrency.lockutils [None req-f4905700-9600-498e-9e52-5125d32adbf4 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "f8c66e48-8186-4434-8a82-a9be6fd98570" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:23 np0005465604 nova_compute[260603]: 2025-10-02 08:45:23.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct  2 04:45:23 np0005465604 nova_compute[260603]: 2025-10-02 08:45:23.451 2 DEBUG nova.network.neutron [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updated VIF entry in instance network info cache for port 2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:45:23 np0005465604 nova_compute[260603]: 2025-10-02 08:45:23.452 2 DEBUG nova.network.neutron [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Updating instance_info_cache with network_info: [{"id": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "address": "fa:16:3e:4e:e4:9d", "network": {"id": "7f709a68-4708-4cf6-ab7a-ec213a44899e", "bridge": "br-int", "label": "tempest-network-smoke--1486576063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fe5cc8e-26", "ovs_interfaceid": "2fe5cc8e-2662-4cb3-b4d5-b76d2f4aa18f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:45:23 np0005465604 nova_compute[260603]: 2025-10-02 08:45:23.468 2 DEBUG oslo_concurrency.lockutils [req-faa40984-af18-47f8-a994-3decbbd085cc req-7c95a04b-a59d-4733-b73b-02ab20c9973a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f8c66e48-8186-4434-8a82-a9be6fd98570" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:45:24 np0005465604 nova_compute[260603]: 2025-10-02 08:45:24.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:24.527 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:9f:d1 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71241621-328b-4cc6-9189-8a3f2523fa85, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5d7367a8-24c1-4575-a359-215362e30ab8) old=Port_Binding(mac=['fa:16:3e:f0:9f:d1 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4dae717-ce03-4a30-b5fe-ad727373e453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc1228dc2b0140318899a7d8a6bc11d6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches _filter_match /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:45:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:24.528 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5d7367a8-24c1-4575-a359-215362e30ab8 in datapath b4dae717-ce03-4a30-b5fe-ad727373e453 updated#033[00m
Oct  2 04:45:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:24.529 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4dae717-ce03-4a30-b5fe-ad727373e453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:45:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:24.530 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ce66c5aa-a43c-47c9-8298-f09b9d9dbf79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 04:45:26 np0005465604 nova_compute[260603]: 2025-10-02 08:45:26.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:26 np0005465604 nova_compute[260603]: 2025-10-02 08:45:26.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:27 np0005465604 nova_compute[260603]: 2025-10-02 08:45:27.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 04:45:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:45:27
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'backups', 'vms', 'default.rgw.control', 'volumes', 'images']
Oct  2 04:45:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 7a24c125-2ecb-4a3e-8f06-fb802ac63f31 does not exist
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev bfabb4a4-4520-4031-8d5a-c583fafbc2a7 does not exist
Oct  2 04:45:28 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c502f0d3-ec1a-4f92-b78d-625a431dc724 does not exist
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:45:28 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:45:29 np0005465604 podman[369457]: 2025-10-02 08:45:29.371506282 +0000 UTC m=+0.053367434 container create 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:45:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 04:45:29 np0005465604 systemd[1]: Started libpod-conmon-6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e.scope.
Oct  2 04:45:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:45:29 np0005465604 podman[369457]: 2025-10-02 08:45:29.350418235 +0000 UTC m=+0.032279457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:45:29 np0005465604 podman[369457]: 2025-10-02 08:45:29.453720895 +0000 UTC m=+0.135582087 container init 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:45:29 np0005465604 nova_compute[260603]: 2025-10-02 08:45:29.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:29 np0005465604 podman[369457]: 2025-10-02 08:45:29.464738588 +0000 UTC m=+0.146599780 container start 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 04:45:29 np0005465604 podman[369457]: 2025-10-02 08:45:29.469550728 +0000 UTC m=+0.151411970 container attach 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 04:45:29 np0005465604 hopeful_lewin[369474]: 167 167
Oct  2 04:45:29 np0005465604 systemd[1]: libpod-6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e.scope: Deactivated successfully.
Oct  2 04:45:29 np0005465604 podman[369457]: 2025-10-02 08:45:29.475477023 +0000 UTC m=+0.157338185 container died 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct  2 04:45:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-57f329b43aedb0ab1c5ed8b7164e08311fbe0404549df65491e45f2a15b6befa-merged.mount: Deactivated successfully.
Oct  2 04:45:29 np0005465604 podman[369457]: 2025-10-02 08:45:29.521325622 +0000 UTC m=+0.203186814 container remove 6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:45:29 np0005465604 systemd[1]: libpod-conmon-6b69b38a05b86ae1165c038c55fd18cd54a6a0f89ea4f5665da4ab3304b47f7e.scope: Deactivated successfully.
Oct  2 04:45:29 np0005465604 podman[369498]: 2025-10-02 08:45:29.77344878 +0000 UTC m=+0.067862357 container create e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:45:29 np0005465604 systemd[1]: Started libpod-conmon-e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30.scope.
Oct  2 04:45:29 np0005465604 podman[369498]: 2025-10-02 08:45:29.749992888 +0000 UTC m=+0.044406515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:45:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:45:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:29 np0005465604 podman[369498]: 2025-10-02 08:45:29.876343107 +0000 UTC m=+0.170756714 container init e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:45:29 np0005465604 podman[369498]: 2025-10-02 08:45:29.891141408 +0000 UTC m=+0.185554985 container start e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:45:29 np0005465604 podman[369498]: 2025-10-02 08:45:29.895611287 +0000 UTC m=+0.190024904 container attach e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:45:30 np0005465604 sharp_kepler[369515]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:45:30 np0005465604 sharp_kepler[369515]: --> relative data size: 1.0
Oct  2 04:45:30 np0005465604 sharp_kepler[369515]: --> All data devices are unavailable
Oct  2 04:45:31 np0005465604 systemd[1]: libpod-e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30.scope: Deactivated successfully.
Oct  2 04:45:31 np0005465604 podman[369498]: 2025-10-02 08:45:31.020026503 +0000 UTC m=+1.314440040 container died e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:45:31 np0005465604 systemd[1]: libpod-e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30.scope: Consumed 1.088s CPU time.
Oct  2 04:45:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e570d7f3a2c10fc92d6b36fc2d503a6af7aa5eea60c4b4e576b826b5fbcc1ba1-merged.mount: Deactivated successfully.
Oct  2 04:45:31 np0005465604 podman[369498]: 2025-10-02 08:45:31.07540612 +0000 UTC m=+1.369819657 container remove e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:45:31 np0005465604 systemd[1]: libpod-conmon-e280427291723f32bf91252afb2fb099385a99975b30763fef52b5568dfabd30.scope: Deactivated successfully.
Oct  2 04:45:31 np0005465604 nova_compute[260603]: 2025-10-02 08:45:31.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:31 np0005465604 auditd[702]: Audit daemon rotating log files
Oct  2 04:45:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 04:45:31 np0005465604 nova_compute[260603]: 2025-10-02 08:45:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:45:31 np0005465604 podman[369696]: 2025-10-02 08:45:31.775423928 +0000 UTC m=+0.065975346 container create a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:45:31 np0005465604 systemd[1]: Started libpod-conmon-a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610.scope.
Oct  2 04:45:31 np0005465604 podman[369696]: 2025-10-02 08:45:31.752582986 +0000 UTC m=+0.043134494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:45:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:45:31 np0005465604 podman[369696]: 2025-10-02 08:45:31.884093566 +0000 UTC m=+0.174645064 container init a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:45:31 np0005465604 podman[369696]: 2025-10-02 08:45:31.895179861 +0000 UTC m=+0.185731309 container start a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 04:45:31 np0005465604 podman[369696]: 2025-10-02 08:45:31.899539397 +0000 UTC m=+0.190090845 container attach a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:45:31 np0005465604 happy_kilby[369713]: 167 167
Oct  2 04:45:31 np0005465604 systemd[1]: libpod-a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610.scope: Deactivated successfully.
Oct  2 04:45:31 np0005465604 podman[369696]: 2025-10-02 08:45:31.907034451 +0000 UTC m=+0.197585899 container died a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 04:45:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-aef05fd193a24fe1c1ae426f38d282ea8f55016a2f24818cb9c3d2ab48a5d892-merged.mount: Deactivated successfully.
Oct  2 04:45:31 np0005465604 podman[369696]: 2025-10-02 08:45:31.965515264 +0000 UTC m=+0.256066712 container remove a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kilby, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:45:31 np0005465604 systemd[1]: libpod-conmon-a7c3774717a995d54b09a38b3f6fe4de31eb6113859dd4974a64e22f7943e610.scope: Deactivated successfully.
Oct  2 04:45:32 np0005465604 podman[369736]: 2025-10-02 08:45:32.222498423 +0000 UTC m=+0.068159205 container create e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:45:32 np0005465604 systemd[1]: Started libpod-conmon-e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90.scope.
Oct  2 04:45:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:45:32 np0005465604 podman[369736]: 2025-10-02 08:45:32.203055977 +0000 UTC m=+0.048716769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:45:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25e945644f6bd8fae9b836b11839dd4ee60718e7c47eeb2c72b726d708cf4cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25e945644f6bd8fae9b836b11839dd4ee60718e7c47eeb2c72b726d708cf4cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25e945644f6bd8fae9b836b11839dd4ee60718e7c47eeb2c72b726d708cf4cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25e945644f6bd8fae9b836b11839dd4ee60718e7c47eeb2c72b726d708cf4cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:32 np0005465604 podman[369736]: 2025-10-02 08:45:32.313599092 +0000 UTC m=+0.159259874 container init e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 04:45:32 np0005465604 podman[369736]: 2025-10-02 08:45:32.325601787 +0000 UTC m=+0.171262539 container start e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:45:32 np0005465604 podman[369736]: 2025-10-02 08:45:32.329107236 +0000 UTC m=+0.174768018 container attach e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:45:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:33 np0005465604 epic_hawking[369752]: {
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:    "0": [
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:        {
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "devices": [
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "/dev/loop3"
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            ],
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_name": "ceph_lv0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_size": "21470642176",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "name": "ceph_lv0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "tags": {
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.cluster_name": "ceph",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.crush_device_class": "",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.encrypted": "0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.osd_id": "0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.type": "block",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.vdo": "0"
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            },
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "type": "block",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "vg_name": "ceph_vg0"
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:        }
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:    ],
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:    "1": [
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:        {
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "devices": [
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "/dev/loop4"
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            ],
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_name": "ceph_lv1",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_size": "21470642176",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "name": "ceph_lv1",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "tags": {
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.cluster_name": "ceph",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.crush_device_class": "",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.encrypted": "0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.osd_id": "1",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.type": "block",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.vdo": "0"
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            },
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "type": "block",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "vg_name": "ceph_vg1"
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:        }
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:    ],
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:    "2": [
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:        {
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "devices": [
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "/dev/loop5"
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            ],
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_name": "ceph_lv2",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_size": "21470642176",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "name": "ceph_lv2",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "tags": {
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.cluster_name": "ceph",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.crush_device_class": "",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.encrypted": "0",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.osd_id": "2",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.type": "block",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:                "ceph.vdo": "0"
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            },
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "type": "block",
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:            "vg_name": "ceph_vg2"
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:        }
Oct  2 04:45:33 np0005465604 epic_hawking[369752]:    ]
Oct  2 04:45:33 np0005465604 epic_hawking[369752]: }
Oct  2 04:45:33 np0005465604 systemd[1]: libpod-e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90.scope: Deactivated successfully.
Oct  2 04:45:33 np0005465604 podman[369761]: 2025-10-02 08:45:33.180643717 +0000 UTC m=+0.050344570 container died e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:45:33 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a25e945644f6bd8fae9b836b11839dd4ee60718e7c47eeb2c72b726d708cf4cf-merged.mount: Deactivated successfully.
Oct  2 04:45:33 np0005465604 podman[369761]: 2025-10-02 08:45:33.237065026 +0000 UTC m=+0.106765839 container remove e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 04:45:33 np0005465604 systemd[1]: libpod-conmon-e70e57cf14757ac28158b7d71018ceb8dcff1c72708ab557c8af289cf5a9af90.scope: Deactivated successfully.
Oct  2 04:45:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 04:45:33 np0005465604 nova_compute[260603]: 2025-10-02 08:45:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:45:33 np0005465604 podman[369918]: 2025-10-02 08:45:33.92510854 +0000 UTC m=+0.035165517 container create c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:45:33 np0005465604 systemd[1]: Started libpod-conmon-c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3.scope.
Oct  2 04:45:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:45:34 np0005465604 podman[369918]: 2025-10-02 08:45:34.002251116 +0000 UTC m=+0.112308113 container init c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 04:45:34 np0005465604 podman[369918]: 2025-10-02 08:45:33.910285099 +0000 UTC m=+0.020342076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:45:34 np0005465604 podman[369918]: 2025-10-02 08:45:34.008640805 +0000 UTC m=+0.118697782 container start c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 04:45:34 np0005465604 podman[369918]: 2025-10-02 08:45:34.011869715 +0000 UTC m=+0.121926702 container attach c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:45:34 np0005465604 elegant_swanson[369934]: 167 167
Oct  2 04:45:34 np0005465604 systemd[1]: libpod-c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3.scope: Deactivated successfully.
Oct  2 04:45:34 np0005465604 podman[369918]: 2025-10-02 08:45:34.013855647 +0000 UTC m=+0.123912654 container died c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 04:45:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5831877b59822680e43142374c88ad214ddf4855f7fc05aaedf7fd09a31263c5-merged.mount: Deactivated successfully.
Oct  2 04:45:34 np0005465604 podman[369918]: 2025-10-02 08:45:34.050154688 +0000 UTC m=+0.160211675 container remove c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  2 04:45:34 np0005465604 systemd[1]: libpod-conmon-c364120d59b128fa4524a9d0ba78e092bcf1dc68420aeefebfee38613eeface3.scope: Deactivated successfully.
Oct  2 04:45:34 np0005465604 podman[369958]: 2025-10-02 08:45:34.267230395 +0000 UTC m=+0.048755611 container create 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:45:34 np0005465604 systemd[1]: Started libpod-conmon-4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7.scope.
Oct  2 04:45:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:45:34 np0005465604 podman[369958]: 2025-10-02 08:45:34.246638003 +0000 UTC m=+0.028163229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:45:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70872ac383fa5dd4a5ae093f6582d015ffe61d760f26e95e6b8e9b4375211f93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70872ac383fa5dd4a5ae093f6582d015ffe61d760f26e95e6b8e9b4375211f93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70872ac383fa5dd4a5ae093f6582d015ffe61d760f26e95e6b8e9b4375211f93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70872ac383fa5dd4a5ae093f6582d015ffe61d760f26e95e6b8e9b4375211f93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:34 np0005465604 podman[369958]: 2025-10-02 08:45:34.361303526 +0000 UTC m=+0.142828712 container init 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:45:34 np0005465604 podman[369958]: 2025-10-02 08:45:34.37809746 +0000 UTC m=+0.159622666 container start 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 04:45:34 np0005465604 podman[369958]: 2025-10-02 08:45:34.382629551 +0000 UTC m=+0.164154767 container attach 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 04:45:34 np0005465604 nova_compute[260603]: 2025-10-02 08:45:34.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:34 np0005465604 nova_compute[260603]: 2025-10-02 08:45:34.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:45:34 np0005465604 nova_compute[260603]: 2025-10-02 08:45:34.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:45:34 np0005465604 nova_compute[260603]: 2025-10-02 08:45:34.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:45:34 np0005465604 nova_compute[260603]: 2025-10-02 08:45:34.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:45:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:34.829 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:35 np0005465604 festive_yalow[369974]: {
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "osd_id": 2,
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "type": "bluestore"
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:    },
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "osd_id": 1,
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "type": "bluestore"
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:    },
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "osd_id": 0,
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:        "type": "bluestore"
Oct  2 04:45:35 np0005465604 festive_yalow[369974]:    }
Oct  2 04:45:35 np0005465604 festive_yalow[369974]: }
Oct  2 04:45:35 np0005465604 systemd[1]: libpod-4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7.scope: Deactivated successfully.
Oct  2 04:45:35 np0005465604 systemd[1]: libpod-4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7.scope: Consumed 1.006s CPU time.
Oct  2 04:45:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:45:35 np0005465604 podman[370007]: 2025-10-02 08:45:35.424454013 +0000 UTC m=+0.033722452 container died 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:45:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-70872ac383fa5dd4a5ae093f6582d015ffe61d760f26e95e6b8e9b4375211f93-merged.mount: Deactivated successfully.
Oct  2 04:45:35 np0005465604 podman[370007]: 2025-10-02 08:45:35.475393081 +0000 UTC m=+0.084661470 container remove 4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yalow, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:45:35 np0005465604 systemd[1]: libpod-conmon-4f3bdb419d08b7d9b8024d9b63b3d20a10d8503a4cd0c074a11364dbeb0006d7.scope: Deactivated successfully.
Oct  2 04:45:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:45:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:45:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:45:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:45:35 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b2f43bc1-5754-4d6b-9843-e490559ca6c2 does not exist
Oct  2 04:45:35 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 59b70159-d11d-428e-b3be-4868f54871d3 does not exist
Oct  2 04:45:36 np0005465604 nova_compute[260603]: 2025-10-02 08:45:36.247 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394721.2456565, f8c66e48-8186-4434-8a82-a9be6fd98570 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:45:36 np0005465604 nova_compute[260603]: 2025-10-02 08:45:36.248 2 INFO nova.compute.manager [-] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:45:36 np0005465604 nova_compute[260603]: 2025-10-02 08:45:36.268 2 DEBUG nova.compute.manager [None req-ba906367-c0f4-45d0-a88b-af9f65205c7e - - - - - -] [instance: f8c66e48-8186-4434-8a82-a9be6fd98570] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:45:36 np0005465604 nova_compute[260603]: 2025-10-02 08:45:36.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:36 np0005465604 nova_compute[260603]: 2025-10-02 08:45:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:45:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:45:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:45:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:45:37 np0005465604 nova_compute[260603]: 2025-10-02 08:45:37.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:45:37 np0005465604 nova_compute[260603]: 2025-10-02 08:45:37.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:37 np0005465604 nova_compute[260603]: 2025-10-02 08:45:37.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:37 np0005465604 nova_compute[260603]: 2025-10-02 08:45:37.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:37 np0005465604 nova_compute[260603]: 2025-10-02 08:45:37.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:45:37 np0005465604 nova_compute[260603]: 2025-10-02 08:45:37.545 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:38 np0005465604 podman[370094]: 2025-10-02 08:45:38.02473615 +0000 UTC m=+0.085225297 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 04:45:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:45:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/809159189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.062 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:38 np0005465604 podman[370093]: 2025-10-02 08:45:38.10944919 +0000 UTC m=+0.172874639 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.250 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.251 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3813MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.251 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.252 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.334 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.335 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.356 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:45:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:45:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:45:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/812370792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.843 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.853 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.873 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.899 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:45:38 np0005465604 nova_compute[260603]: 2025-10-02 08:45:38.900 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:45:39 np0005465604 nova_compute[260603]: 2025-10-02 08:45:39.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:39 np0005465604 nova_compute[260603]: 2025-10-02 08:45:39.901 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:45:39 np0005465604 nova_compute[260603]: 2025-10-02 08:45:39.902 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:45:41 np0005465604 nova_compute[260603]: 2025-10-02 08:45:41.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:45:42 np0005465604 nova_compute[260603]: 2025-10-02 08:45:42.598 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:42 np0005465604 nova_compute[260603]: 2025-10-02 08:45:42.598 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:42 np0005465604 nova_compute[260603]: 2025-10-02 08:45:42.613 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:45:42 np0005465604 nova_compute[260603]: 2025-10-02 08:45:42.685 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:42 np0005465604 nova_compute[260603]: 2025-10-02 08:45:42.686 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:42 np0005465604 nova_compute[260603]: 2025-10-02 08:45:42.695 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:45:42 np0005465604 nova_compute[260603]: 2025-10-02 08:45:42.695 2 INFO nova.compute.claims [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:45:42 np0005465604 nova_compute[260603]: 2025-10-02 08:45:42.820 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:45:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2575695996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.275 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.281 2 DEBUG nova.compute.provider_tree [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.305 2 DEBUG nova.scheduler.client.report [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.336 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.337 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.404 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.404 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:45:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.434 2 INFO nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.457 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.578 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.580 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.581 2 INFO nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Creating image(s)#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.624 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.659 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.695 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.701 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.807 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.808 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.809 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.809 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.834 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:45:43 np0005465604 nova_compute[260603]: 2025-10-02 08:45:43.840 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.158 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.251 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.331 2 DEBUG nova.policy [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.393 2 DEBUG nova.objects.instance [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid fc71f095-bde6-43da-bec6-e0a30dc1b71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.411 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.412 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Ensure instance console log exists: /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.413 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.413 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.414 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.660 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.662 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.681 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.764 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.765 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.774 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.774 2 INFO nova.compute.claims [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:45:44 np0005465604 nova_compute[260603]: 2025-10-02 08:45:44.910 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:45 np0005465604 podman[370351]: 2025-10-02 08:45:45.049582642 +0000 UTC m=+0.087456266 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct  2 04:45:45 np0005465604 podman[370349]: 2025-10-02 08:45:45.05687738 +0000 UTC m=+0.105372796 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.390 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Successfully created port: 71e3efd4-125e-40e4-bdda-59df254d21f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:45:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 756 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:45:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:45:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314088705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.448 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.455 2 DEBUG nova.compute.provider_tree [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.477 2 DEBUG nova.scheduler.client.report [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.503 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.504 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.586 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.587 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.610 2 INFO nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.642 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.754 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.757 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.758 2 INFO nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Creating image(s)
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.803 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.844 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.878 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.883 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.997 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:45:45 np0005465604 nova_compute[260603]: 2025-10-02 08:45:45.999 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.000 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.001 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.046 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.054 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ece43baf-b502-44c4-9065-61d6c3271ae4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.108 2 DEBUG nova.policy [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a1aa1da2fbc74e148134df6efbe63791', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5461da915a3245579ae75d81001ad2c2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.400 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ece43baf-b502-44c4-9065-61d6c3271ae4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.482 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] resizing rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.615 2 DEBUG nova.objects.instance [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'migration_context' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.638 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.638 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Ensure instance console log exists: /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.639 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.639 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.640 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.685 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Successfully updated port: 71e3efd4-125e-40e4-bdda-59df254d21f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.709 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.709 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.709 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.849 2 DEBUG nova.compute.manager [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.850 2 DEBUG nova.compute.manager [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing instance network info cache due to event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:45:46 np0005465604 nova_compute[260603]: 2025-10-02 08:45:46.851 2 DEBUG oslo_concurrency.lockutils [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:45:47 np0005465604 nova_compute[260603]: 2025-10-02 08:45:47.014 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:45:47 np0005465604 nova_compute[260603]: 2025-10-02 08:45:47.040 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Successfully created port: bceec1b4-ab90-4791-a600-a62ec7870928 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:45:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 92 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.7 MiB/s wr, 13 op/s
Oct  2 04:45:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.028 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Successfully updated port: bceec1b4-ab90-4791-a600-a62ec7870928 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.056 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.056 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquired lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.056 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.175 2 DEBUG nova.network.neutron [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.206 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.207 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Instance network_info: |[{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.208 2 DEBUG oslo_concurrency.lockutils [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.209 2 DEBUG nova.network.neutron [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.214 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Start _get_guest_xml network_info=[{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.220 2 WARNING nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.233 2 DEBUG nova.virt.libvirt.host [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.234 2 DEBUG nova.virt.libvirt.host [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.239 2 DEBUG nova.virt.libvirt.host [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.239 2 DEBUG nova.virt.libvirt.host [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.240 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.241 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.242 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.242 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.243 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.243 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.244 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.244 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.244 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.245 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.246 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.246 2 DEBUG nova.virt.hardware [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.251 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.348 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:45:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:45:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1383214943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.694 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.730 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.737 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.980 2 DEBUG nova.compute.manager [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-changed-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.981 2 DEBUG nova.compute.manager [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Refreshing instance network info cache due to event network-changed-bceec1b4-ab90-4791-a600-a62ec7870928. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:45:48 np0005465604 nova_compute[260603]: 2025-10-02 08:45:48.981 2 DEBUG oslo_concurrency.lockutils [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:45:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:45:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3887349724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.226 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.228 2 DEBUG nova.virt.libvirt.vif [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:45:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=109,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-dp87un5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:45:43Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=fc71f095-bde6-43da-bec6-e0a30dc1b71a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.228 2 DEBUG nova.network.os_vif_util [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.229 2 DEBUG nova.network.os_vif_util [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.230 2 DEBUG nova.objects.instance [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid fc71f095-bde6-43da-bec6-e0a30dc1b71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.249 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  <uuid>fc71f095-bde6-43da-bec6-e0a30dc1b71a</uuid>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  <name>instance-0000006d</name>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978</nova:name>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:45:48</nova:creationTime>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <nova:port uuid="71e3efd4-125e-40e4-bdda-59df254d21f9">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <entry name="serial">fc71f095-bde6-43da-bec6-e0a30dc1b71a</entry>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <entry name="uuid">fc71f095-bde6-43da-bec6-e0a30dc1b71a</entry>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:e8:06:68"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <target dev="tap71e3efd4-12"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/console.log" append="off"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:45:49 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:45:49 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:45:49 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:45:49 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.251 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Preparing to wait for external event network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.252 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.252 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.253 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.254 2 DEBUG nova.virt.libvirt.vif [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:45:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=109,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-dp87un5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:45:43Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=fc71f095-bde6-43da-bec6-e0a30dc1b71a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.255 2 DEBUG nova.network.os_vif_util [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.256 2 DEBUG nova.network.os_vif_util [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.257 2 DEBUG os_vif [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.259 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.269 2 DEBUG nova.network.neutron [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updating instance_info_cache with network_info: [{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71e3efd4-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap71e3efd4-12, col_values=(('external_ids', {'iface-id': '71e3efd4-125e-40e4-bdda-59df254d21f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:06:68', 'vm-uuid': 'fc71f095-bde6-43da-bec6-e0a30dc1b71a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:49 np0005465604 NetworkManager[45129]: <info>  [1759394749.2776] manager: (tap71e3efd4-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.285 2 INFO os_vif [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12')#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.295 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Releasing lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.296 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance network_info: |[{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.297 2 DEBUG oslo_concurrency.lockutils [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.298 2 DEBUG nova.network.neutron [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Refreshing network info cache for port bceec1b4-ab90-4791-a600-a62ec7870928 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.303 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Start _get_guest_xml network_info=[{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.309 2 WARNING nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.314 2 DEBUG nova.virt.libvirt.host [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.315 2 DEBUG nova.virt.libvirt.host [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.320 2 DEBUG nova.virt.libvirt.host [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.320 2 DEBUG nova.virt.libvirt.host [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.321 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.321 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.322 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.323 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.323 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.324 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.324 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.324 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.325 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.325 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.326 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.326 2 DEBUG nova.virt.hardware [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.331 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.417 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.418 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.419 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:e8:06:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.419 2 INFO nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Using config drive#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.453 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:45:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701068349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.817 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.848 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.853 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.908 2 DEBUG nova.network.neutron [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updated VIF entry in instance network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.910 2 DEBUG nova.network.neutron [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.918 2 INFO nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Creating config drive at /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config#033[00m
Oct  2 04:45:49 np0005465604 nova_compute[260603]: 2025-10-02 08:45:49.928 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2l7evrii execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.004 2 DEBUG oslo_concurrency.lockutils [req-29a00f27-9fc6-4ea1-9cba-58842da33594 req-51c7fb0d-c738-417b-bb78-1db3a21b33a8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.099 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2l7evrii" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.127 2 DEBUG nova.storage.rbd_utils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.131 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:45:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948405515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.329 2 DEBUG oslo_concurrency.processutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config fc71f095-bde6-43da-bec6-e0a30dc1b71a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.331 2 INFO nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Deleting local config drive /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a/disk.config because it was imported into RBD.#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.336 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.339 2 DEBUG nova.virt.libvirt.vif [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:45:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1660606380',display_name='tempest-TestServerAdvancedOps-server-1660606380',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1660606380',id=110,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5461da915a3245579ae75d81001ad2c2',ramdisk_id='',reservation_id='r-g3wq9z05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-77325392',owner_user_name='tempest-TestServerAdvancedOps-773
25392-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:45:45Z,user_data=None,user_id='a1aa1da2fbc74e148134df6efbe63791',uuid=ece43baf-b502-44c4-9065-61d6c3271ae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.340 2 DEBUG nova.network.os_vif_util [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converting VIF {"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.341 2 DEBUG nova.network.os_vif_util [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.343 2 DEBUG nova.objects.instance [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.372 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  <uuid>ece43baf-b502-44c4-9065-61d6c3271ae4</uuid>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  <name>instance-0000006e</name>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestServerAdvancedOps-server-1660606380</nova:name>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:45:49</nova:creationTime>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <nova:user uuid="a1aa1da2fbc74e148134df6efbe63791">tempest-TestServerAdvancedOps-77325392-project-member</nova:user>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <nova:project uuid="5461da915a3245579ae75d81001ad2c2">tempest-TestServerAdvancedOps-77325392</nova:project>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <nova:port uuid="bceec1b4-ab90-4791-a600-a62ec7870928">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <entry name="serial">ece43baf-b502-44c4-9065-61d6c3271ae4</entry>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <entry name="uuid">ece43baf-b502-44c4-9065-61d6c3271ae4</entry>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ece43baf-b502-44c4-9065-61d6c3271ae4_disk">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:b0:e4:55"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <target dev="tapbceec1b4-ab"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/console.log" append="off"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:45:50 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:45:50 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:45:50 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:45:50 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.375 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Preparing to wait for external event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.375 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.376 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.376 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.377 2 DEBUG nova.virt.libvirt.vif [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:45:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1660606380',display_name='tempest-TestServerAdvancedOps-server-1660606380',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1660606380',id=110,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5461da915a3245579ae75d81001ad2c2',ramdisk_id='',reservation_id='r-g3wq9z05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-77325392',owner_user_name='tempest-TestServerAdvan
cedOps-77325392-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:45:45Z,user_data=None,user_id='a1aa1da2fbc74e148134df6efbe63791',uuid=ece43baf-b502-44c4-9065-61d6c3271ae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.378 2 DEBUG nova.network.os_vif_util [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converting VIF {"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.379 2 DEBUG nova.network.os_vif_util [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.379 2 DEBUG os_vif [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.386 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbceec1b4-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.387 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbceec1b4-ab, col_values=(('external_ids', {'iface-id': 'bceec1b4-ab90-4791-a600-a62ec7870928', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:e4:55', 'vm-uuid': 'ece43baf-b502-44c4-9065-61d6c3271ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:50 np0005465604 NetworkManager[45129]: <info>  [1759394750.3905] manager: (tapbceec1b4-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.400 2 INFO os_vif [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab')#033[00m
Oct  2 04:45:50 np0005465604 kernel: tap71e3efd4-12: entered promiscuous mode
Oct  2 04:45:50 np0005465604 NetworkManager[45129]: <info>  [1759394750.4071] manager: (tap71e3efd4-12): new Tun device (/org/freedesktop/NetworkManager/Devices/441)
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:50Z|01118|binding|INFO|Claiming lport 71e3efd4-125e-40e4-bdda-59df254d21f9 for this chassis.
Oct  2 04:45:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:50Z|01119|binding|INFO|71e3efd4-125e-40e4-bdda-59df254d21f9: Claiming fa:16:3e:e8:06:68 10.100.0.4
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.431 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:06:68 10.100.0.4'], port_security=['fa:16:3e:e8:06:68 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fc71f095-bde6-43da-bec6-e0a30dc1b71a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '026f2ff0-b634-4fa0-8159-21403eda59a0 d0f166da-6d6f-4ea3-ab29-33f59ca9931c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12ca4751-2801-43b7-bd66-26826481ad08, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=71e3efd4-125e-40e4-bdda-59df254d21f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.432 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 71e3efd4-125e-40e4-bdda-59df254d21f9 in datapath 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 bound to our chassis#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.433 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758#033[00m
Oct  2 04:45:50 np0005465604 systemd-machined[214636]: New machine qemu-136-instance-0000006d.
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.453 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe236f8-9e73-4f2a-92bb-3260b1dcc43d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.454 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ee1fadc-d1 in ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.456 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ee1fadc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.456 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb7f390-8759-4c7d-8e78-3b347e661493]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.458 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[03531e11-a79c-454e-8220-c1f9f369c8ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.465 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.466 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.466 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] No VIF found with MAC fa:16:3e:b0:e4:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.466 2 INFO nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Using config drive#033[00m
Oct  2 04:45:50 np0005465604 systemd[1]: Started Virtual Machine qemu-136-instance-0000006d.
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.471 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[585f96df-eb54-4842-a60a-3fa89859f5a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 systemd-udevd[370795]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.487 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b25505ad-c429-4d9b-9e71-79b70518e4e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:50Z|01120|binding|INFO|Setting lport 71e3efd4-125e-40e4-bdda-59df254d21f9 ovn-installed in OVS
Oct  2 04:45:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:50Z|01121|binding|INFO|Setting lport 71e3efd4-125e-40e4-bdda-59df254d21f9 up in Southbound
Oct  2 04:45:50 np0005465604 NetworkManager[45129]: <info>  [1759394750.4989] device (tap71e3efd4-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.499 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:45:50 np0005465604 NetworkManager[45129]: <info>  [1759394750.5005] device (tap71e3efd4-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.523 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[214fd372-8ba1-4bbd-aa1d-d730c4b3dbc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.527 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[600e0662-e352-44ba-a3ba-fcd4b3393d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 systemd-udevd[370803]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:45:50 np0005465604 NetworkManager[45129]: <info>  [1759394750.5285] manager: (tap0ee1fadc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/442)
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.559 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c8269d31-ac89-4186-a300-25139a232529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.561 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9834d6bc-9b14-483b-87d0-a3f318ffa2c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 NetworkManager[45129]: <info>  [1759394750.5870] device (tap0ee1fadc-d0): carrier: link connected
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.594 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ee3ba2-1da9-4afc-bbef-2a852a4a7190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09fe2fa5-7bad-4c05-abd7-2327ebd7b377]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ee1fadc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:f0:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562918, 'reachable_time': 43019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370835, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.631 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f719a1d-a36d-4ccc-8e5f-43c9821c2d4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe07:f032'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562918, 'tstamp': 562918}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370836, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.647 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9813cc85-1888-4ec5-9f93-aac3762c2182]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ee1fadc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:f0:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562918, 'reachable_time': 43019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370837, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.676 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e44439d6-c1b4-4224-a9a9-916ab66e2c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.728 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[be2dd7d4-0e5b-4cd7-9cfc-7cd72151d9b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ee1fadc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ee1fadc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:50 np0005465604 kernel: tap0ee1fadc-d0: entered promiscuous mode
Oct  2 04:45:50 np0005465604 NetworkManager[45129]: <info>  [1759394750.7325] manager: (tap0ee1fadc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.736 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ee1fadc-d0, col_values=(('external_ids', {'iface-id': 'f994afa7-373d-469a-a6b3-0a33b20c9e54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:50Z|01122|binding|INFO|Releasing lport f994afa7-373d-469a-a6b3-0a33b20c9e54 from this chassis (sb_readonly=0)
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.757 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.757 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb4cc5c-6155-41a1-aa44-3c1cefd258d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.758 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758.pid.haproxy
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:45:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:50.759 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'env', 'PROCESS_TAG=haproxy-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.763 2 DEBUG nova.compute.manager [req-0c50758b-ea6e-4abd-bc8a-3c1c29683921 req-8f30f015-5df1-4fd1-b617-2d8eb7dc51c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.764 2 DEBUG oslo_concurrency.lockutils [req-0c50758b-ea6e-4abd-bc8a-3c1c29683921 req-8f30f015-5df1-4fd1-b617-2d8eb7dc51c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.764 2 DEBUG oslo_concurrency.lockutils [req-0c50758b-ea6e-4abd-bc8a-3c1c29683921 req-8f30f015-5df1-4fd1-b617-2d8eb7dc51c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.764 2 DEBUG oslo_concurrency.lockutils [req-0c50758b-ea6e-4abd-bc8a-3c1c29683921 req-8f30f015-5df1-4fd1-b617-2d8eb7dc51c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:50 np0005465604 nova_compute[260603]: 2025-10-02 08:45:50.764 2 DEBUG nova.compute.manager [req-0c50758b-ea6e-4abd-bc8a-3c1c29683921 req-8f30f015-5df1-4fd1-b617-2d8eb7dc51c3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Processing event network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:45:51 np0005465604 podman[370916]: 2025-10-02 08:45:51.132977922 +0000 UTC m=+0.066320588 container create be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:45:51 np0005465604 systemd[1]: Started libpod-conmon-be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2.scope.
Oct  2 04:45:51 np0005465604 podman[370916]: 2025-10-02 08:45:51.091339025 +0000 UTC m=+0.024681770 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:45:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:45:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a8bfea6155d379b6c0d528c11335822e5f651d1b1147fcb20b7f259f8bc47f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:45:51 np0005465604 podman[370916]: 2025-10-02 08:45:51.215666091 +0000 UTC m=+0.149008756 container init be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:45:51 np0005465604 podman[370916]: 2025-10-02 08:45:51.220984986 +0000 UTC m=+0.154327651 container start be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:45:51 np0005465604 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [NOTICE]   (370935) : New worker (370937) forked
Oct  2 04:45:51 np0005465604 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [NOTICE]   (370935) : Loading success.
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.296 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394751.295631, fc71f095-bde6-43da-bec6-e0a30dc1b71a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.296 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] VM Started (Lifecycle Event)#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.298 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.301 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.305 2 INFO nova.virt.libvirt.driver [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Instance spawned successfully.#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.305 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.329 2 INFO nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Creating config drive at /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.338 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzrw3cmno execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.387 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.394 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.400 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.401 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.402 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.402 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.403 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.403 2 DEBUG nova.virt.libvirt.driver [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:45:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.437 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.438 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394751.296004, fc71f095-bde6-43da-bec6-e0a30dc1b71a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.438 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] VM Paused (Lifecycle Event)
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.479 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.484 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394751.3008535, fc71f095-bde6-43da-bec6-e0a30dc1b71a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.484 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] VM Resumed (Lifecycle Event)
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.491 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzrw3cmno" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.519 2 DEBUG nova.storage.rbd_utils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] rbd image ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.524 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.581 2 INFO nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Took 8.00 seconds to spawn the instance on the hypervisor.
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.582 2 DEBUG nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.583 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.599 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.632 2 DEBUG nova.network.neutron [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updated VIF entry in instance network info cache for port bceec1b4-ab90-4791-a600-a62ec7870928. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.633 2 DEBUG nova.network.neutron [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updating instance_info_cache with network_info: [{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.643 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.664 2 DEBUG oslo_concurrency.lockutils [req-9baa3f9d-1b1f-4396-972c-7e94e8ac8624 req-21201b6e-9ee7-4e56-8b10-a42d29140b90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.684 2 INFO nova.compute.manager [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Took 9.02 seconds to build instance.
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.703 2 DEBUG oslo_concurrency.lockutils [None req-2f793fd4-7dd4-432a-a003-cbfcfe2dd5b3 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.735 2 DEBUG oslo_concurrency.processutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config ece43baf-b502-44c4-9065-61d6c3271ae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.736 2 INFO nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Deleting local config drive /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4/disk.config because it was imported into RBD.
Oct  2 04:45:51 np0005465604 kernel: tapbceec1b4-ab: entered promiscuous mode
Oct  2 04:45:51 np0005465604 NetworkManager[45129]: <info>  [1759394751.8224] manager: (tapbceec1b4-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/444)
Oct  2 04:45:51 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:51Z|01123|binding|INFO|Claiming lport bceec1b4-ab90-4791-a600-a62ec7870928 for this chassis.
Oct  2 04:45:51 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:51Z|01124|binding|INFO|bceec1b4-ab90-4791-a600-a62ec7870928: Claiming fa:16:3e:b0:e4:55 10.100.0.13
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:45:51 np0005465604 systemd-udevd[370825]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:45:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:51.834 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:45:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:51.837 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 bound to our chassis
Oct  2 04:45:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:51.838 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct  2 04:45:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:51.839 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9a860ddc-bca9-41e0-9795-1333363c9876]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:45:51 np0005465604 NetworkManager[45129]: <info>  [1759394751.8535] device (tapbceec1b4-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:45:51 np0005465604 NetworkManager[45129]: <info>  [1759394751.8558] device (tapbceec1b4-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:45:51 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:51Z|01125|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 ovn-installed in OVS
Oct  2 04:45:51 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:51Z|01126|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 up in Southbound
Oct  2 04:45:51 np0005465604 nova_compute[260603]: 2025-10-02 08:45:51.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:45:51 np0005465604 systemd-machined[214636]: New machine qemu-137-instance-0000006e.
Oct  2 04:45:51 np0005465604 systemd[1]: Started Virtual Machine qemu-137-instance-0000006e.
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.438 2 DEBUG nova.compute.manager [req-8b3ecc4b-271c-49e8-b147-083a2cdbee0d req-7e1003e6-92b2-4659-85d7-b2b663a6c2f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.438 2 DEBUG oslo_concurrency.lockutils [req-8b3ecc4b-271c-49e8-b147-083a2cdbee0d req-7e1003e6-92b2-4659-85d7-b2b663a6c2f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.438 2 DEBUG oslo_concurrency.lockutils [req-8b3ecc4b-271c-49e8-b147-083a2cdbee0d req-7e1003e6-92b2-4659-85d7-b2b663a6c2f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.438 2 DEBUG oslo_concurrency.lockutils [req-8b3ecc4b-271c-49e8-b147-083a2cdbee0d req-7e1003e6-92b2-4659-85d7-b2b663a6c2f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.439 2 DEBUG nova.compute.manager [req-8b3ecc4b-271c-49e8-b147-083a2cdbee0d req-7e1003e6-92b2-4659-85d7-b2b663a6c2f9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Processing event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.861 2 DEBUG nova.compute.manager [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.861 2 DEBUG oslo_concurrency.lockutils [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.862 2 DEBUG oslo_concurrency.lockutils [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.862 2 DEBUG oslo_concurrency.lockutils [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.862 2 DEBUG nova.compute.manager [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] No waiting events found dispatching network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.863 2 WARNING nova.compute.manager [req-6561361e-7730-4c95-b0fa-5e41a7429728 req-aaea254f-1beb-4b3e-b0f1-d97d80c11a51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received unexpected event network-vif-plugged-71e3efd4-125e-40e4-bdda-59df254d21f9 for instance with vm_state active and task_state None.
Oct  2 04:45:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.946 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394752.9466424, ece43baf-b502-44c4-9065-61d6c3271ae4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.947 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Started (Lifecycle Event)
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.949 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.953 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.956 2 INFO nova.virt.libvirt.driver [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance spawned successfully.
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.957 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.969 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.974 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.979 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.979 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.979 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.980 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.980 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:45:52 np0005465604 nova_compute[260603]: 2025-10-02 08:45:52.981 2 DEBUG nova.virt.libvirt.driver [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.007 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.008 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394752.9473505, ece43baf-b502-44c4-9065-61d6c3271ae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.008 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Paused (Lifecycle Event)
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.044 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.047 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394752.954984, ece43baf-b502-44c4-9065-61d6c3271ae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.047 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Resumed (Lifecycle Event)
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.058 2 INFO nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Took 7.30 seconds to spawn the instance on the hypervisor.
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.058 2 DEBUG nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.066 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.069 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.104 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.135 2 INFO nova.compute.manager [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Took 8.39 seconds to build instance.
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.157 2 DEBUG oslo_concurrency.lockutils [None req-41ad738d-7042-473c-a5d5-f5d48642bbaf a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:45:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 117 op/s
Oct  2 04:45:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:53.593 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:45:53 np0005465604 nova_compute[260603]: 2025-10-02 08:45:53.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:45:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:53.595 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 04:45:54 np0005465604 nova_compute[260603]: 2025-10-02 08:45:54.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:45:54 np0005465604 nova_compute[260603]: 2025-10-02 08:45:54.573 2 DEBUG nova.compute.manager [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:45:54 np0005465604 nova_compute[260603]: 2025-10-02 08:45:54.574 2 DEBUG oslo_concurrency.lockutils [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:45:54 np0005465604 nova_compute[260603]: 2025-10-02 08:45:54.574 2 DEBUG oslo_concurrency.lockutils [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:45:54 np0005465604 nova_compute[260603]: 2025-10-02 08:45:54.574 2 DEBUG oslo_concurrency.lockutils [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:45:54 np0005465604 nova_compute[260603]: 2025-10-02 08:45:54.574 2 DEBUG nova.compute.manager [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:45:54 np0005465604 nova_compute[260603]: 2025-10-02 08:45:54.574 2 WARNING nova.compute.manager [req-6b4f234a-b8bf-420a-a5be-659814e8d846 req-5e31a21d-6bfc-4805-81ed-731b0eeed4ac 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state active and task_state None.
Oct  2 04:45:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:55Z|01127|binding|INFO|Releasing lport f994afa7-373d-469a-a6b3-0a33b20c9e54 from this chassis (sb_readonly=0)
Oct  2 04:45:55 np0005465604 NetworkManager[45129]: <info>  [1759394755.1891] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Oct  2 04:45:55 np0005465604 NetworkManager[45129]: <info>  [1759394755.1898] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.279 2 DEBUG nova.objects.instance [None req-1472c8d9-d03a-44bc-a16e-cc3a23084743 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:45:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:55Z|01128|binding|INFO|Releasing lport f994afa7-373d-469a-a6b3-0a33b20c9e54 from this chassis (sb_readonly=0)
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.312 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394755.312049, ece43baf-b502-44c4-9065-61d6c3271ae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.312 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.330 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.334 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.350 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 117 op/s
Oct  2 04:45:55 np0005465604 kernel: tapbceec1b4-ab (unregistering): left promiscuous mode
Oct  2 04:45:55 np0005465604 NetworkManager[45129]: <info>  [1759394755.7242] device (tapbceec1b4-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:45:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:55Z|01129|binding|INFO|Releasing lport bceec1b4-ab90-4791-a600-a62ec7870928 from this chassis (sb_readonly=0)
Oct  2 04:45:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:55Z|01130|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 down in Southbound
Oct  2 04:45:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:55Z|01131|binding|INFO|Removing iface tapbceec1b4-ab ovn-installed in OVS
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:55.742 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:45:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:55.743 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 unbound from our chassis#033[00m
Oct  2 04:45:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:55.743 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 04:45:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:55.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7524aa5e-5c68-4e8a-bce4-adc4e992015c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:55 np0005465604 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct  2 04:45:55 np0005465604 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006e.scope: Consumed 3.465s CPU time.
Oct  2 04:45:55 np0005465604 systemd-machined[214636]: Machine qemu-137-instance-0000006e terminated.
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.916 2 DEBUG nova.compute.manager [None req-1472c8d9-d03a-44bc-a16e-cc3a23084743 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.930 2 DEBUG nova.compute.manager [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.931 2 DEBUG nova.compute.manager [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing instance network info cache due to event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.935 2 DEBUG oslo_concurrency.lockutils [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.935 2 DEBUG oslo_concurrency.lockutils [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:45:55 np0005465604 nova_compute[260603]: 2025-10-02 08:45:55.937 2 DEBUG nova.network.neutron [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.719 2 DEBUG nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.719 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.719 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.719 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.719 2 DEBUG nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.720 2 WARNING nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.720 2 DEBUG nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.720 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.720 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.720 2 DEBUG oslo_concurrency.lockutils [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.720 2 DEBUG nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:45:56 np0005465604 nova_compute[260603]: 2025-10-02 08:45:56.720 2 WARNING nova.compute.manager [req-679efc26-3d62-453e-b479-fe57bcea406e req-41f85aa1-1b9d-4610-bb28-3525e7be8c6c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 04:45:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Oct  2 04:45:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:45:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:45:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:45:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:45:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:45:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:45:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:45:57 np0005465604 nova_compute[260603]: 2025-10-02 08:45:57.972 2 DEBUG nova.network.neutron [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updated VIF entry in instance network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:45:57 np0005465604 nova_compute[260603]: 2025-10-02 08:45:57.972 2 DEBUG nova.network.neutron [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:45:58 np0005465604 nova_compute[260603]: 2025-10-02 08:45:58.003 2 DEBUG oslo_concurrency.lockutils [req-72e75b2f-b144-49f4-a500-ffe79cc0eb48 req-8fb420e5-6fb4-4f25-a077-db95a201b099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:45:58 np0005465604 nova_compute[260603]: 2025-10-02 08:45:58.317 2 INFO nova.compute.manager [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Resuming#033[00m
Oct  2 04:45:58 np0005465604 nova_compute[260603]: 2025-10-02 08:45:58.318 2 DEBUG nova.objects.instance [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'flavor' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:45:58 np0005465604 nova_compute[260603]: 2025-10-02 08:45:58.356 2 DEBUG oslo_concurrency.lockutils [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:45:58 np0005465604 nova_compute[260603]: 2025-10-02 08:45:58.356 2 DEBUG oslo_concurrency.lockutils [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquired lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:45:58 np0005465604 nova_compute[260603]: 2025-10-02 08:45:58.357 2 DEBUG nova.network.neutron [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:45:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 188 op/s
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.742 2 DEBUG nova.network.neutron [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updating instance_info_cache with network_info: [{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.762 2 DEBUG oslo_concurrency.lockutils [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Releasing lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.767 2 DEBUG nova.virt.libvirt.vif [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:45:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1660606380',display_name='tempest-TestServerAdvancedOps-server-1660606380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1660606380',id=110,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:45:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='5461da915a3245579ae75d81001ad2c2',ramdisk_id='',reservation_id='r-g3wq9z05',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-77325392',owner_user_name='tempest-TestServerAdvancedOps-77325392-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:45:55Z,user_data=None,user_id='a1aa1da2fbc74e148134df6efbe63791',uuid=ece43baf-b502-44c4-9065-61d6c3271ae4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.767 2 DEBUG nova.network.os_vif_util [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converting VIF {"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.768 2 DEBUG nova.network.os_vif_util [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.768 2 DEBUG os_vif [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.769 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.772 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbceec1b4-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.772 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbceec1b4-ab, col_values=(('external_ids', {'iface-id': 'bceec1b4-ab90-4791-a600-a62ec7870928', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:e4:55', 'vm-uuid': 'ece43baf-b502-44c4-9065-61d6c3271ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.772 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.773 2 INFO os_vif [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab')#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.794 2 DEBUG nova.objects.instance [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'numa_topology' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:45:59 np0005465604 kernel: tapbceec1b4-ab: entered promiscuous mode
Oct  2 04:45:59 np0005465604 NetworkManager[45129]: <info>  [1759394759.8621] manager: (tapbceec1b4-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/447)
Oct  2 04:45:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:59Z|01132|binding|INFO|Claiming lport bceec1b4-ab90-4791-a600-a62ec7870928 for this chassis.
Oct  2 04:45:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:59Z|01133|binding|INFO|bceec1b4-ab90-4791-a600-a62ec7870928: Claiming fa:16:3e:b0:e4:55 10.100.0.13
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:59.871 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:45:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:59.873 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 bound to our chassis#033[00m
Oct  2 04:45:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:59.873 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 04:45:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:45:59.874 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[352965a3-bcc6-4c8d-8c83-51b772b8ab09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:59Z|01134|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 up in Southbound
Oct  2 04:45:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:45:59Z|01135|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 ovn-installed in OVS
Oct  2 04:45:59 np0005465604 nova_compute[260603]: 2025-10-02 08:45:59.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:45:59 np0005465604 systemd-machined[214636]: New machine qemu-138-instance-0000006e.
Oct  2 04:45:59 np0005465604 systemd[1]: Started Virtual Machine qemu-138-instance-0000006e.
Oct  2 04:45:59 np0005465604 systemd-udevd[371082]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:45:59 np0005465604 NetworkManager[45129]: <info>  [1759394759.9410] device (tapbceec1b4-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:45:59 np0005465604 NetworkManager[45129]: <info>  [1759394759.9422] device (tapbceec1b4-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.165 2 DEBUG nova.compute.manager [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.166 2 DEBUG oslo_concurrency.lockutils [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.166 2 DEBUG oslo_concurrency.lockutils [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.166 2 DEBUG oslo_concurrency.lockutils [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.166 2 DEBUG nova.compute.manager [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.166 2 WARNING nova.compute.manager [req-9d5ed20f-7fdc-4067-ad19-168a29fae4e1 req-8f70da69-da26-44c6-9727-2965e9396a35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state resuming.#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.711226) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760711258, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 1014, "num_deletes": 251, "total_data_size": 1377317, "memory_usage": 1401744, "flush_reason": "Manual Compaction"}
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760720447, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 1363402, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41439, "largest_seqno": 42452, "table_properties": {"data_size": 1358483, "index_size": 2443, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10926, "raw_average_key_size": 19, "raw_value_size": 1348536, "raw_average_value_size": 2443, "num_data_blocks": 109, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394668, "oldest_key_time": 1759394668, "file_creation_time": 1759394760, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 9258 microseconds, and 4527 cpu microseconds.
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.720486) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 1363402 bytes OK
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.720504) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.721867) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.721884) EVENT_LOG_v1 {"time_micros": 1759394760721879, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.721901) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 1372512, prev total WAL file size 1372512, number of live WAL files 2.
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.722491) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(1331KB)], [92(10MB)]
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760722541, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12171809, "oldest_snapshot_seqno": -1}
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6628 keys, 10508803 bytes, temperature: kUnknown
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760782727, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 10508803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10461984, "index_size": 29171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 170150, "raw_average_key_size": 25, "raw_value_size": 10340808, "raw_average_value_size": 1560, "num_data_blocks": 1155, "num_entries": 6628, "num_filter_entries": 6628, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394760, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.783079) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10508803 bytes
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.784276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.5 rd, 174.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(16.6) write-amplify(7.7) OK, records in: 7142, records dropped: 514 output_compression: NoCompression
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.784292) EVENT_LOG_v1 {"time_micros": 1759394760784284, "job": 54, "event": "compaction_finished", "compaction_time_micros": 60402, "compaction_time_cpu_micros": 21748, "output_level": 6, "num_output_files": 1, "total_output_size": 10508803, "num_input_records": 7142, "num_output_records": 6628, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760784891, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394760786667, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.722408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.786829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.786833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.786835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.786836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:00 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:00.786838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.827 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for ece43baf-b502-44c4-9065-61d6c3271ae4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.827 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394760.8269885, ece43baf-b502-44c4-9065-61d6c3271ae4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.828 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Started (Lifecycle Event)#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.849 2 DEBUG nova.compute.manager [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.850 2 DEBUG nova.objects.instance [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.856 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.860 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.876 2 INFO nova.virt.libvirt.driver [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance running successfully.#033[00m
Oct  2 04:46:00 np0005465604 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.879 2 DEBUG nova.virt.libvirt.guest [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.879 2 DEBUG nova.compute.manager [None req-8349aa5d-e1d5-4586-8045-cdb2dd6ab724 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.886 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.886 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394760.8328013, ece43baf-b502-44c4-9065-61d6c3271ae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.886 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.907 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.910 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:46:00 np0005465604 nova_compute[260603]: 2025-10-02 08:46:00.943 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 04:46:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.381 2 DEBUG nova.compute.manager [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.382 2 DEBUG oslo_concurrency.lockutils [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.382 2 DEBUG oslo_concurrency.lockutils [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.382 2 DEBUG oslo_concurrency.lockutils [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.382 2 DEBUG nova.compute.manager [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.382 2 WARNING nova.compute.manager [req-2fa214f1-5c2d-44ff-9f83-0455b9ed01cc req-b2ae2e3a-9f90-4d37-82e7-d616bf80977a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.831 2 DEBUG nova.objects.instance [None req-7ff53994-cb4c-4cc3-a546-c5e4aee8a164 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.856 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394762.8559055, ece43baf-b502-44c4-9065-61d6c3271ae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.856 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.877 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.884 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:46:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:02 np0005465604 nova_compute[260603]: 2025-10-02 08:46:02.908 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 04:46:03 np0005465604 kernel: tapbceec1b4-ab (unregistering): left promiscuous mode
Oct  2 04:46:03 np0005465604 NetworkManager[45129]: <info>  [1759394763.3509] device (tapbceec1b4-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:46:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:03Z|01136|binding|INFO|Releasing lport bceec1b4-ab90-4791-a600-a62ec7870928 from this chassis (sb_readonly=0)
Oct  2 04:46:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:03Z|01137|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 down in Southbound
Oct  2 04:46:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:03Z|01138|binding|INFO|Removing iface tapbceec1b4-ab ovn-installed in OVS
Oct  2 04:46:03 np0005465604 nova_compute[260603]: 2025-10-02 08:46:03.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:03.383 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:46:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:03.386 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 unbound from our chassis#033[00m
Oct  2 04:46:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:03.387 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 04:46:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:03.388 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9dd12f-36d2-4fb5-8ea8-c10ea8dd2eb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:03Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:06:68 10.100.0.4
Oct  2 04:46:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:03Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:06:68 10.100.0.4
Oct  2 04:46:03 np0005465604 nova_compute[260603]: 2025-10-02 08:46:03.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 165 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 196 op/s
Oct  2 04:46:03 np0005465604 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct  2 04:46:03 np0005465604 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Consumed 2.878s CPU time.
Oct  2 04:46:03 np0005465604 systemd-machined[214636]: Machine qemu-138-instance-0000006e terminated.
Oct  2 04:46:03 np0005465604 nova_compute[260603]: 2025-10-02 08:46:03.537 2 DEBUG nova.compute.manager [None req-7ff53994-cb4c-4cc3-a546-c5e4aee8a164 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:03.597 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.521 2 DEBUG nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.521 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.522 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.522 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.522 2 DEBUG nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.522 2 WARNING nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.522 2 DEBUG nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.522 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.523 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.523 2 DEBUG oslo_concurrency.lockutils [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.523 2 DEBUG nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:46:04 np0005465604 nova_compute[260603]: 2025-10-02 08:46:04.523 2 WARNING nova.compute.manager [req-cffbd507-2433-4968-a0b5-556c5c1d028d req-2f04ed33-da74-4b88-b993-e1ba8e3bd210 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state None.#033[00m
Oct  2 04:46:05 np0005465604 nova_compute[260603]: 2025-10-02 08:46:05.132 2 INFO nova.compute.manager [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Resuming#033[00m
Oct  2 04:46:05 np0005465604 nova_compute[260603]: 2025-10-02 08:46:05.133 2 DEBUG nova.objects.instance [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'flavor' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:05 np0005465604 nova_compute[260603]: 2025-10-02 08:46:05.171 2 DEBUG oslo_concurrency.lockutils [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:46:05 np0005465604 nova_compute[260603]: 2025-10-02 08:46:05.172 2 DEBUG oslo_concurrency.lockutils [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquired lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:46:05 np0005465604 nova_compute[260603]: 2025-10-02 08:46:05.172 2 DEBUG nova.network.neutron [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:46:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 165 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.0 MiB/s wr, 132 op/s
Oct  2 04:46:05 np0005465604 nova_compute[260603]: 2025-10-02 08:46:05.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:06 np0005465604 nova_compute[260603]: 2025-10-02 08:46:06.990 2 DEBUG nova.network.neutron [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updating instance_info_cache with network_info: [{"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.012 2 DEBUG oslo_concurrency.lockutils [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Releasing lock "refresh_cache-ece43baf-b502-44c4-9065-61d6c3271ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.019 2 DEBUG nova.virt.libvirt.vif [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:45:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1660606380',display_name='tempest-TestServerAdvancedOps-server-1660606380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1660606380',id=110,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:45:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='5461da915a3245579ae75d81001ad2c2',ramdisk_id='',reservation_id='r-g3wq9z05',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-77325392',owner_user_name='tempest-TestServerAdvancedOps-77325392-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:46:03Z,user_data=None,user_id='a1aa1da2fbc74e148134df6efbe63791',uuid=ece43baf-b502-44c4-9065-61d6c3271ae4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.019 2 DEBUG nova.network.os_vif_util [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converting VIF {"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.021 2 DEBUG nova.network.os_vif_util [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.021 2 DEBUG os_vif [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.028 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbceec1b4-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbceec1b4-ab, col_values=(('external_ids', {'iface-id': 'bceec1b4-ab90-4791-a600-a62ec7870928', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:e4:55', 'vm-uuid': 'ece43baf-b502-44c4-9065-61d6c3271ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.030 2 INFO os_vif [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab')#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.063 2 DEBUG nova.objects.instance [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'numa_topology' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:07 np0005465604 kernel: tapbceec1b4-ab: entered promiscuous mode
Oct  2 04:46:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:07Z|01139|binding|INFO|Claiming lport bceec1b4-ab90-4791-a600-a62ec7870928 for this chassis.
Oct  2 04:46:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:07Z|01140|binding|INFO|bceec1b4-ab90-4791-a600-a62ec7870928: Claiming fa:16:3e:b0:e4:55 10.100.0.13
Oct  2 04:46:07 np0005465604 NetworkManager[45129]: <info>  [1759394767.1689] manager: (tapbceec1b4-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:07.179 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:46:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:07.181 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 bound to our chassis#033[00m
Oct  2 04:46:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:07.183 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 04:46:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:07.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0962e167-7cd6-4679-9ab7-6b825e3460ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:07Z|01141|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 up in Southbound
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:07Z|01142|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 ovn-installed in OVS
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:07 np0005465604 systemd-udevd[371166]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:46:07 np0005465604 systemd-machined[214636]: New machine qemu-139-instance-0000006e.
Oct  2 04:46:07 np0005465604 systemd[1]: Started Virtual Machine qemu-139-instance-0000006e.
Oct  2 04:46:07 np0005465604 NetworkManager[45129]: <info>  [1759394767.2414] device (tapbceec1b4-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:46:07 np0005465604 NetworkManager[45129]: <info>  [1759394767.2428] device (tapbceec1b4-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.409 2 DEBUG nova.compute.manager [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.410 2 DEBUG oslo_concurrency.lockutils [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.410 2 DEBUG oslo_concurrency.lockutils [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.411 2 DEBUG oslo_concurrency.lockutils [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.411 2 DEBUG nova.compute.manager [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:46:07 np0005465604 nova_compute[260603]: 2025-10-02 08:46:07.411 2 WARNING nova.compute.manager [req-bc1560ea-a672-4c72-965c-4da812bbad28 req-434fad10-f356-4f1f-b939-e35929068f69 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state suspended and task_state resuming.#033[00m
Oct  2 04:46:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 167 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Oct  2 04:46:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.153 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for ece43baf-b502-44c4-9065-61d6c3271ae4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.154 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394768.1530848, ece43baf-b502-44c4-9065-61d6c3271ae4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.154 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Started (Lifecycle Event)#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.174 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.180 2 DEBUG nova.compute.manager [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.181 2 DEBUG nova.objects.instance [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.185 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.199 2 INFO nova.virt.libvirt.driver [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance running successfully.#033[00m
Oct  2 04:46:08 np0005465604 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.203 2 DEBUG nova.virt.libvirt.guest [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.204 2 DEBUG nova.compute.manager [None req-559d7697-2af3-4013-85a5-605ee19c3dee a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.208 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.208 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394768.1582346, ece43baf-b502-44c4-9065-61d6c3271ae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.209 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.236 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.242 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:46:08 np0005465604 nova_compute[260603]: 2025-10-02 08:46:08.262 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 04:46:09 np0005465604 podman[371219]: 2025-10-02 08:46:09.041245656 +0000 UTC m=+0.091857824 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.146 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.147 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.147 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.148 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.148 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:09 np0005465604 podman[371218]: 2025-10-02 08:46:09.150002706 +0000 UTC m=+0.201126500 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.150 2 INFO nova.compute.manager [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Terminating instance#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.151 2 DEBUG nova.compute.manager [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:46:09 np0005465604 kernel: tapbceec1b4-ab (unregistering): left promiscuous mode
Oct  2 04:46:09 np0005465604 NetworkManager[45129]: <info>  [1759394769.1955] device (tapbceec1b4-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:09Z|01143|binding|INFO|Releasing lport bceec1b4-ab90-4791-a600-a62ec7870928 from this chassis (sb_readonly=0)
Oct  2 04:46:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:09Z|01144|binding|INFO|Setting lport bceec1b4-ab90-4791-a600-a62ec7870928 down in Southbound
Oct  2 04:46:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:09Z|01145|binding|INFO|Removing iface tapbceec1b4-ab ovn-installed in OVS
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:09.211 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:e4:55 10.100.0.13'], port_security=['fa:16:3e:b0:e4:55 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ece43baf-b502-44c4-9065-61d6c3271ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56bd5300-f7cc-484e-a7dc-25ea062dfd97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5461da915a3245579ae75d81001ad2c2', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd611935a-c399-480f-8308-94b3315588fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d9fc10f-d634-42a9-a474-31b6181a157a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bceec1b4-ab90-4791-a600-a62ec7870928) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:46:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:09.211 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bceec1b4-ab90-4791-a600-a62ec7870928 in datapath 56bd5300-f7cc-484e-a7dc-25ea062dfd97 unbound from our chassis#033[00m
Oct  2 04:46:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:09.212 162357 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 56bd5300-f7cc-484e-a7dc-25ea062dfd97 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 04:46:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:09.213 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a895e600-2c06-423f-aa29-10967d105493]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:09 np0005465604 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct  2 04:46:09 np0005465604 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006e.scope: Consumed 1.784s CPU time.
Oct  2 04:46:09 np0005465604 systemd-machined[214636]: Machine qemu-139-instance-0000006e terminated.
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.390 2 INFO nova.virt.libvirt.driver [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Instance destroyed successfully.#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.392 2 DEBUG nova.objects.instance [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lazy-loading 'resources' on Instance uuid ece43baf-b502-44c4-9065-61d6c3271ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.405 2 DEBUG nova.virt.libvirt.vif [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:45:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1660606380',display_name='tempest-TestServerAdvancedOps-server-1660606380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1660606380',id=110,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:45:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5461da915a3245579ae75d81001ad2c2',ramdisk_id='',reservation_id='r-g3wq9z05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-77325392',owner_user_name='tempest-TestServerAdvancedOps-77325392-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:46:08Z,user_data=None,user_id='a1aa1da2fbc74e148134df6efbe63791',uuid=ece43baf-b502-44c4-9065-61d6c3271ae4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.406 2 DEBUG nova.network.os_vif_util [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converting VIF {"id": "bceec1b4-ab90-4791-a600-a62ec7870928", "address": "fa:16:3e:b0:e4:55", "network": {"id": "56bd5300-f7cc-484e-a7dc-25ea062dfd97", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1002826691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "5461da915a3245579ae75d81001ad2c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbceec1b4-ab", "ovs_interfaceid": "bceec1b4-ab90-4791-a600-a62ec7870928", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.407 2 DEBUG nova.network.os_vif_util [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.407 2 DEBUG os_vif [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbceec1b4-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.424 2 INFO os_vif [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:e4:55,bridge_name='br-int',has_traffic_filtering=True,id=bceec1b4-ab90-4791-a600-a62ec7870928,network=Network(56bd5300-f7cc-484e-a7dc-25ea062dfd97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbceec1b4-ab')#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.522 2 DEBUG nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.523 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.523 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.524 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.525 2 DEBUG nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.525 2 WARNING nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.525 2 DEBUG nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.526 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.526 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.526 2 DEBUG oslo_concurrency.lockutils [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.527 2 DEBUG nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.527 2 DEBUG nova.compute.manager [req-aa989a50-b8f4-462c-b5aa-4f8faa9ec6b7 req-112d5965-e09a-414f-bb74-85dd6685d099 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-unplugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.754 2 INFO nova.virt.libvirt.driver [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Deleting instance files /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4_del#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.754 2 INFO nova.virt.libvirt.driver [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Deletion of /var/lib/nova/instances/ece43baf-b502-44c4-9065-61d6c3271ae4_del complete#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.822 2 INFO nova.compute.manager [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.823 2 DEBUG oslo.service.loopingcall [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.823 2 DEBUG nova.compute.manager [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:46:09 np0005465604 nova_compute[260603]: 2025-10-02 08:46:09.823 2 DEBUG nova.network.neutron [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:46:10 np0005465604 nova_compute[260603]: 2025-10-02 08:46:10.666 2 DEBUG nova.network.neutron [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:10 np0005465604 nova_compute[260603]: 2025-10-02 08:46:10.690 2 INFO nova.compute.manager [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Took 0.87 seconds to deallocate network for instance.#033[00m
Oct  2 04:46:10 np0005465604 nova_compute[260603]: 2025-10-02 08:46:10.744 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:10 np0005465604 nova_compute[260603]: 2025-10-02 08:46:10.744 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:10 np0005465604 nova_compute[260603]: 2025-10-02 08:46:10.832 2 DEBUG oslo_concurrency.processutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:10 np0005465604 nova_compute[260603]: 2025-10-02 08:46:10.896 2 DEBUG nova.compute.manager [req-b77dec63-63e5-4e80-b2fc-a570a32b69b1 req-e84c443e-68c7-40ca-bc21-11321d713542 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-deleted-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:46:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008508401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.253 2 DEBUG oslo_concurrency.processutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.261 2 DEBUG nova.compute.provider_tree [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.279 2 DEBUG nova.scheduler.client.report [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.306 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.338 2 INFO nova.scheduler.client.report [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Deleted allocations for instance ece43baf-b502-44c4-9065-61d6c3271ae4#033[00m
Oct  2 04:46:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 167 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.425 2 DEBUG oslo_concurrency.lockutils [None req-729be8c1-7711-4a29-bc18-ab6bd7ecf014 a1aa1da2fbc74e148134df6efbe63791 5461da915a3245579ae75d81001ad2c2 - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.733 2 DEBUG nova.compute.manager [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.733 2 DEBUG oslo_concurrency.lockutils [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.734 2 DEBUG oslo_concurrency.lockutils [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.735 2 DEBUG oslo_concurrency.lockutils [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ece43baf-b502-44c4-9065-61d6c3271ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.735 2 DEBUG nova.compute.manager [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] No waiting events found dispatching network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:46:11 np0005465604 nova_compute[260603]: 2025-10-02 08:46:11.736 2 WARNING nova.compute.manager [req-ca75133c-403e-469f-8e72-c04d5b1b080e req-3c980c74-e1cf-4a65-bcdc-7299e2f23ee0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Received unexpected event network-vif-plugged-bceec1b4-ab90-4791-a600-a62ec7870928 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:46:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:13 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:13Z|01146|binding|INFO|Releasing lport f994afa7-373d-469a-a6b3-0a33b20c9e54 from this chassis (sb_readonly=0)
Oct  2 04:46:13 np0005465604 nova_compute[260603]: 2025-10-02 08:46:13.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 121 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct  2 04:46:14 np0005465604 nova_compute[260603]: 2025-10-02 08:46:14.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:14 np0005465604 nova_compute[260603]: 2025-10-02 08:46:14.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 121 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 102 KiB/s wr, 51 op/s
Oct  2 04:46:16 np0005465604 podman[371318]: 2025-10-02 08:46:16.020894811 +0000 UTC m=+0.082632267 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 04:46:16 np0005465604 podman[371319]: 2025-10-02 08:46:16.027675403 +0000 UTC m=+0.080186360 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct  2 04:46:16 np0005465604 nova_compute[260603]: 2025-10-02 08:46:16.402 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:16 np0005465604 nova_compute[260603]: 2025-10-02 08:46:16.403 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:16 np0005465604 nova_compute[260603]: 2025-10-02 08:46:16.419 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:46:16 np0005465604 nova_compute[260603]: 2025-10-02 08:46:16.483 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:16 np0005465604 nova_compute[260603]: 2025-10-02 08:46:16.483 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:16 np0005465604 nova_compute[260603]: 2025-10-02 08:46:16.489 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:46:16 np0005465604 nova_compute[260603]: 2025-10-02 08:46:16.490 2 INFO nova.compute.claims [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:46:16 np0005465604 nova_compute[260603]: 2025-10-02 08:46:16.596 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:46:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4223246840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.033 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.043 2 DEBUG nova.compute.provider_tree [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.064 2 DEBUG nova.scheduler.client.report [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.088 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.089 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.136 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.137 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.164 2 INFO nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.187 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.292 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.294 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.294 2 INFO nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Creating image(s)#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.330 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.367 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.403 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.408 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 121 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 102 KiB/s wr, 51 op/s
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.452 2 DEBUG nova.policy [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.487 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.488 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.489 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.489 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.514 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.518 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 a541632d-06cf-48a9-a44d-19bcce4df36f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.879 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 a541632d-06cf-48a9-a44d-19bcce4df36f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:17 np0005465604 nova_compute[260603]: 2025-10-02 08:46:17.943 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:46:18 np0005465604 nova_compute[260603]: 2025-10-02 08:46:18.059 2 DEBUG nova.objects.instance [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid a541632d-06cf-48a9-a44d-19bcce4df36f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:18 np0005465604 nova_compute[260603]: 2025-10-02 08:46:18.074 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:46:18 np0005465604 nova_compute[260603]: 2025-10-02 08:46:18.075 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Ensure instance console log exists: /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:46:18 np0005465604 nova_compute[260603]: 2025-10-02 08:46:18.075 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:18 np0005465604 nova_compute[260603]: 2025-10-02 08:46:18.076 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:18 np0005465604 nova_compute[260603]: 2025-10-02 08:46:18.076 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:18 np0005465604 nova_compute[260603]: 2025-10-02 08:46:18.325 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Successfully created port: 4475b0be-d2ac-4bf1-8ad7-e117311654a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:46:19 np0005465604 nova_compute[260603]: 2025-10-02 08:46:19.410 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Successfully updated port: 4475b0be-d2ac-4bf1-8ad7-e117311654a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:46:19 np0005465604 nova_compute[260603]: 2025-10-02 08:46:19.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 151 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 1.2 MiB/s wr, 56 op/s
Oct  2 04:46:19 np0005465604 nova_compute[260603]: 2025-10-02 08:46:19.438 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:46:19 np0005465604 nova_compute[260603]: 2025-10-02 08:46:19.438 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:46:19 np0005465604 nova_compute[260603]: 2025-10-02 08:46:19.439 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:46:19 np0005465604 nova_compute[260603]: 2025-10-02 08:46:19.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:19 np0005465604 nova_compute[260603]: 2025-10-02 08:46:19.595 2 DEBUG nova.compute.manager [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:19 np0005465604 nova_compute[260603]: 2025-10-02 08:46:19.596 2 DEBUG nova.compute.manager [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing instance network info cache due to event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:46:19 np0005465604 nova_compute[260603]: 2025-10-02 08:46:19.596 2 DEBUG oslo_concurrency.lockutils [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:46:19 np0005465604 nova_compute[260603]: 2025-10-02 08:46:19.660 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.598 2 DEBUG nova.network.neutron [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updating instance_info_cache with network_info: [{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.629 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.630 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Instance network_info: |[{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.630 2 DEBUG oslo_concurrency.lockutils [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.631 2 DEBUG nova.network.neutron [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.637 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Start _get_guest_xml network_info=[{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.644 2 WARNING nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.656 2 DEBUG nova.virt.libvirt.host [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.657 2 DEBUG nova.virt.libvirt.host [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.662 2 DEBUG nova.virt.libvirt.host [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.663 2 DEBUG nova.virt.libvirt.host [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.664 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.665 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.665 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.666 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.666 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.667 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.667 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.668 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.668 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.669 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.669 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.670 2 DEBUG nova.virt.hardware [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:46:20 np0005465604 nova_compute[260603]: 2025-10-02 08:46:20.675 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:46:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365795224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.190 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.223 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.227 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 151 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 MiB/s wr, 42 op/s
Oct  2 04:46:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:46:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995743722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.652 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.654 2 DEBUG nova.virt.libvirt.vif [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:46:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=111,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-xidjkow3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:46:17Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=a541632d-06cf-48a9-a44d-19bcce4df36f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.654 2 DEBUG nova.network.os_vif_util [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.655 2 DEBUG nova.network.os_vif_util [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.656 2 DEBUG nova.objects.instance [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid a541632d-06cf-48a9-a44d-19bcce4df36f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.682 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  <uuid>a541632d-06cf-48a9-a44d-19bcce4df36f</uuid>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  <name>instance-0000006f</name>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778</nova:name>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:46:20</nova:creationTime>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <nova:port uuid="4475b0be-d2ac-4bf1-8ad7-e117311654a3">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <entry name="serial">a541632d-06cf-48a9-a44d-19bcce4df36f</entry>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <entry name="uuid">a541632d-06cf-48a9-a44d-19bcce4df36f</entry>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/a541632d-06cf-48a9-a44d-19bcce4df36f_disk">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:56:7c:24"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <target dev="tap4475b0be-d2"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/console.log" append="off"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:46:21 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:46:21 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:46:21 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:46:21 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.684 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Preparing to wait for external event network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.684 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.685 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.685 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.686 2 DEBUG nova.virt.libvirt.vif [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:46:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=111,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-xidjkow3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:46:17Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=a541632d-06cf-48a9-a44d-19bcce4df36f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.687 2 DEBUG nova.network.os_vif_util [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.688 2 DEBUG nova.network.os_vif_util [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.689 2 DEBUG os_vif [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.696 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4475b0be-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.697 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4475b0be-d2, col_values=(('external_ids', {'iface-id': '4475b0be-d2ac-4bf1-8ad7-e117311654a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:7c:24', 'vm-uuid': 'a541632d-06cf-48a9-a44d-19bcce4df36f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:21 np0005465604 NetworkManager[45129]: <info>  [1759394781.7011] manager: (tap4475b0be-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.706 2 INFO os_vif [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2')#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.770 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.771 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.772 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:56:7c:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.773 2 INFO nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Using config drive#033[00m
Oct  2 04:46:21 np0005465604 nova_compute[260603]: 2025-10-02 08:46:21.800 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.066 2 DEBUG nova.network.neutron [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updated VIF entry in instance network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.067 2 DEBUG nova.network.neutron [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updating instance_info_cache with network_info: [{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.092 2 DEBUG oslo_concurrency.lockutils [req-6613ca9b-1f0e-494a-b27d-5231b931746d req-d2e64414-53de-4990-af7e-e3360b9e54b4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1313432951' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1313432951' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.263 2 INFO nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Creating config drive at /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config#033[00m
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.273 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_pudoozx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.441 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_pudoozx" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.482 2 DEBUG nova.storage.rbd_utils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.488 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.675 2 DEBUG oslo_concurrency.processutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config a541632d-06cf-48a9-a44d-19bcce4df36f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.677 2 INFO nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Deleting local config drive /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f/disk.config because it was imported into RBD.#033[00m
Oct  2 04:46:22 np0005465604 kernel: tap4475b0be-d2: entered promiscuous mode
Oct  2 04:46:22 np0005465604 NetworkManager[45129]: <info>  [1759394782.7652] manager: (tap4475b0be-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Oct  2 04:46:22 np0005465604 systemd-udevd[371679]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:46:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:22Z|01147|binding|INFO|Claiming lport 4475b0be-d2ac-4bf1-8ad7-e117311654a3 for this chassis.
Oct  2 04:46:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:22Z|01148|binding|INFO|4475b0be-d2ac-4bf1-8ad7-e117311654a3: Claiming fa:16:3e:56:7c:24 10.100.0.12
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.821 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:7c:24 10.100.0.12'], port_security=['fa:16:3e:56:7c:24 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a541632d-06cf-48a9-a44d-19bcce4df36f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0f166da-6d6f-4ea3-ab29-33f59ca9931c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12ca4751-2801-43b7-bd66-26826481ad08, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4475b0be-d2ac-4bf1-8ad7-e117311654a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.824 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4475b0be-d2ac-4bf1-8ad7-e117311654a3 in datapath 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 bound to our chassis#033[00m
Oct  2 04:46:22 np0005465604 NetworkManager[45129]: <info>  [1759394782.8299] device (tap4475b0be-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.827 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758#033[00m
Oct  2 04:46:22 np0005465604 NetworkManager[45129]: <info>  [1759394782.8331] device (tap4475b0be-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:46:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:22Z|01149|binding|INFO|Setting lport 4475b0be-d2ac-4bf1-8ad7-e117311654a3 ovn-installed in OVS
Oct  2 04:46:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:22Z|01150|binding|INFO|Setting lport 4475b0be-d2ac-4bf1-8ad7-e117311654a3 up in Southbound
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:22 np0005465604 systemd-machined[214636]: New machine qemu-140-instance-0000006f.
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.857 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[01d71aa7-3d3b-49ff-a093-bcdbcf86d428]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:22 np0005465604 systemd[1]: Started Virtual Machine qemu-140-instance-0000006f.
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.894441) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782894527, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 440, "num_deletes": 255, "total_data_size": 330339, "memory_usage": 340056, "flush_reason": "Manual Compaction"}
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.899 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[64211ba9-b0f8-4ae1-8cbc-31cc3aa0efc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782900555, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 327308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42453, "largest_seqno": 42892, "table_properties": {"data_size": 324802, "index_size": 606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5995, "raw_average_key_size": 18, "raw_value_size": 319765, "raw_average_value_size": 963, "num_data_blocks": 27, "num_entries": 332, "num_filter_entries": 332, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394761, "oldest_key_time": 1759394761, "file_creation_time": 1759394782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 6160 microseconds, and 2861 cpu microseconds.
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.900609) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 327308 bytes OK
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.900633) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.902124) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.902152) EVENT_LOG_v1 {"time_micros": 1759394782902142, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.902179) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 327628, prev total WAL file size 339949, number of live WAL files 2.
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.902949) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353033' seq:72057594037927935, type:22 .. '6C6F676D0031373534' seq:0, type:0; will stop at (end)
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(319KB)], [95(10MB)]
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782902983, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 10836111, "oldest_snapshot_seqno": -1}
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.907 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[100fbec6-01d2-4e66-a389-2133f5f36769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.941 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[49019a29-fc45-499f-ad80-b3d6d75750f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6442 keys, 10710675 bytes, temperature: kUnknown
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782950588, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10710675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10664317, "index_size": 29160, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 167219, "raw_average_key_size": 25, "raw_value_size": 10545569, "raw_average_value_size": 1637, "num_data_blocks": 1151, "num_entries": 6442, "num_filter_entries": 6442, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.950881) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10710675 bytes
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.952069) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 227.3 rd, 224.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.0 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(65.8) write-amplify(32.7) OK, records in: 6960, records dropped: 518 output_compression: NoCompression
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.952089) EVENT_LOG_v1 {"time_micros": 1759394782952080, "job": 56, "event": "compaction_finished", "compaction_time_micros": 47677, "compaction_time_cpu_micros": 22249, "output_level": 6, "num_output_files": 1, "total_output_size": 10710675, "num_input_records": 6960, "num_output_records": 6442, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782952283, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782954643, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.902843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.954719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.954725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.954728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.954730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.954732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.955178) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782955282, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 255, "num_deletes": 250, "total_data_size": 14332, "memory_usage": 20112, "flush_reason": "Manual Compaction"}
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782957576, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 13850, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42893, "largest_seqno": 43147, "table_properties": {"data_size": 12098, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394782, "oldest_key_time": 1759394782, "file_creation_time": 1759394782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 2435 microseconds, and 1214 cpu microseconds.
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.957629) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 13850 bytes OK
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.957650) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.958879) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.958901) EVENT_LOG_v1 {"time_micros": 1759394782958894, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.958923) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 12321, prev total WAL file size 12321, number of live WAL files 2.
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.959334) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373533' seq:0, type:0; will stop at (end)
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(13KB)], [98(10MB)]
Oct  2 04:46:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394782959370, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 10724525, "oldest_snapshot_seqno": -1}
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.963 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc399ce-fa43-418c-83c2-76f20f9c3b1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ee1fadc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:f0:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562918, 'reachable_time': 43019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371695, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.982 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e77b0b15-07ed-471f-9dae-5d5def5743b0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0ee1fadc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562930, 'tstamp': 562930}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371697, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0ee1fadc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562932, 'tstamp': 562932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371697, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.983 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ee1fadc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:22 np0005465604 nova_compute[260603]: 2025-10-02 08:46:22.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.985 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ee1fadc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.986 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.986 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ee1fadc-d0, col_values=(('external_ids', {'iface-id': 'f994afa7-373d-469a-a6b3-0a33b20c9e54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:22.986 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6193 keys, 7435108 bytes, temperature: kUnknown
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394783003585, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 7435108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7395347, "index_size": 23203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 162237, "raw_average_key_size": 26, "raw_value_size": 7285814, "raw_average_value_size": 1176, "num_data_blocks": 902, "num_entries": 6193, "num_filter_entries": 6193, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.003791) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 7435108 bytes
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.005092) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.3 rd, 167.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.2 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(1311.2) write-amplify(536.8) OK, records in: 6697, records dropped: 504 output_compression: NoCompression
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.005108) EVENT_LOG_v1 {"time_micros": 1759394783005100, "job": 58, "event": "compaction_finished", "compaction_time_micros": 44270, "compaction_time_cpu_micros": 20456, "output_level": 6, "num_output_files": 1, "total_output_size": 7435108, "num_input_records": 6697, "num_output_records": 6193, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394783005214, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394783006935, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:22.959268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.006989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.006995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.006997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.006999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:46:23.007001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.106 2 DEBUG nova.compute.manager [req-fc13baaf-d630-4b40-8ed2-d0a3e0ade417 req-b52e5947-ee26-4da3-b55e-014640c2a3e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.106 2 DEBUG oslo_concurrency.lockutils [req-fc13baaf-d630-4b40-8ed2-d0a3e0ade417 req-b52e5947-ee26-4da3-b55e-014640c2a3e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.107 2 DEBUG oslo_concurrency.lockutils [req-fc13baaf-d630-4b40-8ed2-d0a3e0ade417 req-b52e5947-ee26-4da3-b55e-014640c2a3e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.107 2 DEBUG oslo_concurrency.lockutils [req-fc13baaf-d630-4b40-8ed2-d0a3e0ade417 req-b52e5947-ee26-4da3-b55e-014640c2a3e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.107 2 DEBUG nova.compute.manager [req-fc13baaf-d630-4b40-8ed2-d0a3e0ade417 req-b52e5947-ee26-4da3-b55e-014640c2a3e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Processing event network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:46:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.737 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394783.73671, a541632d-06cf-48a9-a44d-19bcce4df36f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.738 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] VM Started (Lifecycle Event)#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.742 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.747 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.752 2 INFO nova.virt.libvirt.driver [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Instance spawned successfully.#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.752 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.761 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.767 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.784 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.785 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.786 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.787 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.787 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.788 2 DEBUG nova.virt.libvirt.driver [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.795 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.796 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394783.7369533, a541632d-06cf-48a9-a44d-19bcce4df36f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.796 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.823 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.826 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394783.7464197, a541632d-06cf-48a9-a44d-19bcce4df36f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.826 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.850 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.853 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.880 2 INFO nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Took 6.59 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.881 2 DEBUG nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:23 np0005465604 nova_compute[260603]: 2025-10-02 08:46:23.882 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:46:24 np0005465604 nova_compute[260603]: 2025-10-02 08:46:24.001 2 INFO nova.compute.manager [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Took 7.54 seconds to build instance.#033[00m
Oct  2 04:46:24 np0005465604 nova_compute[260603]: 2025-10-02 08:46:24.025 2 DEBUG oslo_concurrency.lockutils [None req-0d1510e9-d796-49c4-af98-86d75a0e4030 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:24 np0005465604 nova_compute[260603]: 2025-10-02 08:46:24.388 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394769.387032, ece43baf-b502-44c4-9065-61d6c3271ae4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:24 np0005465604 nova_compute[260603]: 2025-10-02 08:46:24.388 2 INFO nova.compute.manager [-] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:46:24 np0005465604 nova_compute[260603]: 2025-10-02 08:46:24.407 2 DEBUG nova.compute.manager [None req-d7d789b1-d8f5-4ad4-88ad-193419a98a6e - - - - - -] [instance: ece43baf-b502-44c4-9065-61d6c3271ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:24 np0005465604 nova_compute[260603]: 2025-10-02 08:46:24.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:25 np0005465604 nova_compute[260603]: 2025-10-02 08:46:25.227 2 DEBUG nova.compute.manager [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:25 np0005465604 nova_compute[260603]: 2025-10-02 08:46:25.228 2 DEBUG oslo_concurrency.lockutils [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:25 np0005465604 nova_compute[260603]: 2025-10-02 08:46:25.229 2 DEBUG oslo_concurrency.lockutils [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:25 np0005465604 nova_compute[260603]: 2025-10-02 08:46:25.230 2 DEBUG oslo_concurrency.lockutils [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:25 np0005465604 nova_compute[260603]: 2025-10-02 08:46:25.230 2 DEBUG nova.compute.manager [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] No waiting events found dispatching network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:46:25 np0005465604 nova_compute[260603]: 2025-10-02 08:46:25.231 2 WARNING nova.compute.manager [req-3cc34c20-16fa-4706-9955-d647cc35376b req-944679d4-0b12-48dc-b023-4f11ac1f5c4d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received unexpected event network-vif-plugged-4475b0be-d2ac-4bf1-8ad7-e117311654a3 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:46:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct  2 04:46:26 np0005465604 nova_compute[260603]: 2025-10-02 08:46:26.343 2 DEBUG nova.compute.manager [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:26 np0005465604 nova_compute[260603]: 2025-10-02 08:46:26.345 2 DEBUG nova.compute.manager [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing instance network info cache due to event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:46:26 np0005465604 nova_compute[260603]: 2025-10-02 08:46:26.346 2 DEBUG oslo_concurrency.lockutils [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:46:26 np0005465604 nova_compute[260603]: 2025-10-02 08:46:26.347 2 DEBUG oslo_concurrency.lockutils [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:46:26 np0005465604 nova_compute[260603]: 2025-10-02 08:46:26.347 2 DEBUG nova.network.neutron [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:46:26 np0005465604 nova_compute[260603]: 2025-10-02 08:46:26.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct  2 04:46:27 np0005465604 nova_compute[260603]: 2025-10-02 08:46:27.596 2 DEBUG nova.network.neutron [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updated VIF entry in instance network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:46:27 np0005465604 nova_compute[260603]: 2025-10-02 08:46:27.598 2 DEBUG nova.network.neutron [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updating instance_info_cache with network_info: [{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:27 np0005465604 nova_compute[260603]: 2025-10-02 08:46:27.617 2 DEBUG oslo_concurrency.lockutils [req-0b89c8f9-eef6-4dac-88da-b753d83ab625 req-138133c3-2ccc-436f-a59d-a30497d5906e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:46:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:46:27
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.rgw.root', '.mgr', 'backups', 'vms', 'volumes', 'default.rgw.meta']
Oct  2 04:46:27 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:46:28 np0005465604 nova_compute[260603]: 2025-10-02 08:46:28.081 2 DEBUG nova.compute.manager [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:28 np0005465604 nova_compute[260603]: 2025-10-02 08:46:28.082 2 DEBUG nova.compute.manager [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing instance network info cache due to event network-changed-4475b0be-d2ac-4bf1-8ad7-e117311654a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:46:28 np0005465604 nova_compute[260603]: 2025-10-02 08:46:28.083 2 DEBUG oslo_concurrency.lockutils [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:46:28 np0005465604 nova_compute[260603]: 2025-10-02 08:46:28.083 2 DEBUG oslo_concurrency.lockutils [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:46:28 np0005465604 nova_compute[260603]: 2025-10-02 08:46:28.084 2 DEBUG nova.network.neutron [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Refreshing network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:46:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:46:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:46:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:46:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:46:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:46:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:46:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:46:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:46:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:46:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:46:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  2 04:46:29 np0005465604 nova_compute[260603]: 2025-10-02 08:46:29.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:29 np0005465604 nova_compute[260603]: 2025-10-02 08:46:29.576 2 DEBUG nova.network.neutron [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updated VIF entry in instance network info cache for port 4475b0be-d2ac-4bf1-8ad7-e117311654a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:46:29 np0005465604 nova_compute[260603]: 2025-10-02 08:46:29.576 2 DEBUG nova.network.neutron [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updating instance_info_cache with network_info: [{"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:29 np0005465604 nova_compute[260603]: 2025-10-02 08:46:29.590 2 DEBUG oslo_concurrency.lockutils [req-83321b1e-4b6f-4ede-8d1c-5936fd2351d4 req-f06214c4-b565-46ae-ba7f-047ee9d8a5e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a541632d-06cf-48a9-a44d-19bcce4df36f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:46:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 651 KiB/s wr, 86 op/s
Oct  2 04:46:31 np0005465604 nova_compute[260603]: 2025-10-02 08:46:31.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 654 KiB/s wr, 86 op/s
Oct  2 04:46:33 np0005465604 nova_compute[260603]: 2025-10-02 08:46:33.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:46:34 np0005465604 nova_compute[260603]: 2025-10-02 08:46:34.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:34 np0005465604 nova_compute[260603]: 2025-10-02 08:46:34.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:46:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:34.828 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:34.829 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:34.830 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 167 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 72 op/s
Oct  2 04:46:35 np0005465604 nova_compute[260603]: 2025-10-02 08:46:35.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:46:35 np0005465604 nova_compute[260603]: 2025-10-02 08:46:35.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:46:35 np0005465604 nova_compute[260603]: 2025-10-02 08:46:35.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:46:35 np0005465604 nova_compute[260603]: 2025-10-02 08:46:35.911 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:46:35 np0005465604 nova_compute[260603]: 2025-10-02 08:46:35.911 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:46:35 np0005465604 nova_compute[260603]: 2025-10-02 08:46:35.912 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:46:35 np0005465604 nova_compute[260603]: 2025-10-02 08:46:35.912 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fc71f095-bde6-43da-bec6-e0a30dc1b71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:46:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:36Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:7c:24 10.100.0.12
Oct  2 04:46:36 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:36Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:7c:24 10.100.0.12
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:46:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e8ee8fce-37af-4fae-9249-f6f6c981c3de does not exist
Oct  2 04:46:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 1cb8fc4c-1299-4188-bbd2-17fab0b47493 does not exist
Oct  2 04:46:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 36f32b54-e701-4f9a-834b-a56b969a35b1 does not exist
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:46:36 np0005465604 nova_compute[260603]: 2025-10-02 08:46:36.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct  2 04:46:36 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct  2 04:46:37 np0005465604 nova_compute[260603]: 2025-10-02 08:46:37.415 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:37 np0005465604 podman[372015]: 2025-10-02 08:46:37.417800608 +0000 UTC m=+0.059860497 container create 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct  2 04:46:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 180 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 748 KiB/s wr, 99 op/s
Oct  2 04:46:37 np0005465604 nova_compute[260603]: 2025-10-02 08:46:37.440 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:46:37 np0005465604 nova_compute[260603]: 2025-10-02 08:46:37.440 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:46:37 np0005465604 systemd[1]: Started libpod-conmon-6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c.scope.
Oct  2 04:46:37 np0005465604 podman[372015]: 2025-10-02 08:46:37.394635386 +0000 UTC m=+0.036695275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:46:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:46:37 np0005465604 podman[372015]: 2025-10-02 08:46:37.546165909 +0000 UTC m=+0.188225848 container init 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:46:37 np0005465604 podman[372015]: 2025-10-02 08:46:37.558916336 +0000 UTC m=+0.200976225 container start 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 04:46:37 np0005465604 podman[372015]: 2025-10-02 08:46:37.562684294 +0000 UTC m=+0.204744173 container attach 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:46:37 np0005465604 suspicious_pare[372032]: 167 167
Oct  2 04:46:37 np0005465604 systemd[1]: libpod-6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c.scope: Deactivated successfully.
Oct  2 04:46:37 np0005465604 podman[372015]: 2025-10-02 08:46:37.570351422 +0000 UTC m=+0.212411311 container died 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 04:46:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ec71db7b19e3f351e9e910cd27dc24ce2c8f9e3e35b915bf2c1a0f588c33d332-merged.mount: Deactivated successfully.
Oct  2 04:46:37 np0005465604 podman[372015]: 2025-10-02 08:46:37.622570591 +0000 UTC m=+0.264630480 container remove 6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  2 04:46:37 np0005465604 systemd[1]: libpod-conmon-6b2db90775163f9ba96e277b7bae2b83c1a282c5e8686622c7d6e09774d3bd2c.scope: Deactivated successfully.
Oct  2 04:46:37 np0005465604 podman[372057]: 2025-10-02 08:46:37.869326621 +0000 UTC m=+0.077747533 container create a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 04:46:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:37 np0005465604 systemd[1]: Started libpod-conmon-a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f.scope.
Oct  2 04:46:37 np0005465604 podman[372057]: 2025-10-02 08:46:37.838953194 +0000 UTC m=+0.047374176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:46:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:46:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:38 np0005465604 podman[372057]: 2025-10-02 08:46:38.001024846 +0000 UTC m=+0.209445798 container init a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 04:46:38 np0005465604 podman[372057]: 2025-10-02 08:46:38.01365981 +0000 UTC m=+0.222080722 container start a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct  2 04:46:38 np0005465604 podman[372057]: 2025-10-02 08:46:38.017835611 +0000 UTC m=+0.226256523 container attach a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:46:38 np0005465604 nova_compute[260603]: 2025-10-02 08:46:38.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001223548809174969 of space, bias 1.0, pg target 0.36706464275249073 quantized to 32 (current 32)
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:46:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:46:39 np0005465604 epic_wright[372074]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:46:39 np0005465604 epic_wright[372074]: --> relative data size: 1.0
Oct  2 04:46:39 np0005465604 epic_wright[372074]: --> All data devices are unavailable
Oct  2 04:46:39 np0005465604 systemd[1]: libpod-a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f.scope: Deactivated successfully.
Oct  2 04:46:39 np0005465604 systemd[1]: libpod-a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f.scope: Consumed 1.005s CPU time.
Oct  2 04:46:39 np0005465604 podman[372057]: 2025-10-02 08:46:39.070401727 +0000 UTC m=+1.278822639 container died a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:46:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d295d5db41934d71067c3c97189ceda25ed87f235e790bf487239a65c617b438-merged.mount: Deactivated successfully.
Oct  2 04:46:39 np0005465604 podman[372057]: 2025-10-02 08:46:39.129956084 +0000 UTC m=+1.338376966 container remove a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:46:39 np0005465604 systemd[1]: libpod-conmon-a32ea53bf0aec544282b4706e5954900260bbffdb71d4b41e17b51589f30a50f.scope: Deactivated successfully.
Oct  2 04:46:39 np0005465604 podman[372104]: 2025-10-02 08:46:39.180702675 +0000 UTC m=+0.082398310 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  2 04:46:39 np0005465604 podman[372133]: 2025-10-02 08:46:39.370576784 +0000 UTC m=+0.159265526 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:46:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.550 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.550 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.884 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.885 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.909 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:46:39 np0005465604 podman[372321]: 2025-10-02 08:46:39.961176752 +0000 UTC m=+0.040496664 container create 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 04:46:39 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.998 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:39.999 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:40 np0005465604 systemd[1]: Started libpod-conmon-027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527.scope.
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.007 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.007 2 INFO nova.compute.claims [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:46:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:46:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565884205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:46:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:46:40 np0005465604 podman[372321]: 2025-10-02 08:46:39.944183412 +0000 UTC m=+0.023503344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.047 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:40 np0005465604 podman[372321]: 2025-10-02 08:46:40.05994495 +0000 UTC m=+0.139264922 container init 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 04:46:40 np0005465604 podman[372321]: 2025-10-02 08:46:40.074794473 +0000 UTC m=+0.154114425 container start 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 04:46:40 np0005465604 podman[372321]: 2025-10-02 08:46:40.079160009 +0000 UTC m=+0.158479951 container attach 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:46:40 np0005465604 dazzling_darwin[372337]: 167 167
Oct  2 04:46:40 np0005465604 systemd[1]: libpod-027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527.scope: Deactivated successfully.
Oct  2 04:46:40 np0005465604 conmon[372337]: conmon 027d9c7a929b4f20abe6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527.scope/container/memory.events
Oct  2 04:46:40 np0005465604 podman[372321]: 2025-10-02 08:46:40.084474284 +0000 UTC m=+0.163794236 container died 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:46:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-041dad26d87b6184cd87c2fc80071ff5c2598dd3b3fb14a2c39b061d4c86038c-merged.mount: Deactivated successfully.
Oct  2 04:46:40 np0005465604 podman[372321]: 2025-10-02 08:46:40.138146318 +0000 UTC m=+0.217466240 container remove 027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.142 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.142 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.148 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.149 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:46:40 np0005465604 systemd[1]: libpod-conmon-027d9c7a929b4f20abe679819f14c41dbb677a650ea8a0a38a9a7dc62a562527.scope: Deactivated successfully.
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.216 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:40 np0005465604 podman[372365]: 2025-10-02 08:46:40.389600805 +0000 UTC m=+0.052502178 container create 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:46:40 np0005465604 systemd[1]: Started libpod-conmon-84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4.scope.
Oct  2 04:46:40 np0005465604 podman[372365]: 2025-10-02 08:46:40.366539466 +0000 UTC m=+0.029440869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.467 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:46:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.470 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3325MB free_disk=59.897300720214844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.470 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1ca78facd261c72bfae5d0e82aaee30c72adf946d44251456ad3da9ba73940/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1ca78facd261c72bfae5d0e82aaee30c72adf946d44251456ad3da9ba73940/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1ca78facd261c72bfae5d0e82aaee30c72adf946d44251456ad3da9ba73940/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef1ca78facd261c72bfae5d0e82aaee30c72adf946d44251456ad3da9ba73940/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:40 np0005465604 podman[372365]: 2025-10-02 08:46:40.484295446 +0000 UTC m=+0.147196809 container init 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 04:46:40 np0005465604 podman[372365]: 2025-10-02 08:46:40.499425018 +0000 UTC m=+0.162326401 container start 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:46:40 np0005465604 podman[372365]: 2025-10-02 08:46:40.503451393 +0000 UTC m=+0.166352776 container attach 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 04:46:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:46:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886248276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.676 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.685 2 DEBUG nova.compute.provider_tree [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.709 2 DEBUG nova.scheduler.client.report [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.743 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.745 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.749 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.845 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.845 2 DEBUG nova.network.neutron [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.880 2 INFO nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.885 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance fc71f095-bde6-43da-bec6-e0a30dc1b71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.885 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance a541632d-06cf-48a9-a44d-19bcce4df36f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.886 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance bf1ef571-5f72-49ef-9c3f-f12f88c79a03 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.886 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.887 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:46:40 np0005465604 nova_compute[260603]: 2025-10-02 08:46:40.905 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.002 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.101 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.107 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.108 2 INFO nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Creating image(s)#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.146 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.179 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.226 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.231 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "6933fc953345c6934eae28c9b7b7a2121b435e49" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.232 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "6933fc953345c6934eae28c9b7b7a2121b435e49" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]: {
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:    "0": [
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:        {
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "devices": [
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "/dev/loop3"
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            ],
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_name": "ceph_lv0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_size": "21470642176",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "name": "ceph_lv0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "tags": {
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.cluster_name": "ceph",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.crush_device_class": "",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.encrypted": "0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.osd_id": "0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.type": "block",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.vdo": "0"
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            },
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "type": "block",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "vg_name": "ceph_vg0"
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:        }
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:    ],
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:    "1": [
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:        {
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "devices": [
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "/dev/loop4"
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            ],
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_name": "ceph_lv1",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_size": "21470642176",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "name": "ceph_lv1",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "tags": {
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.cluster_name": "ceph",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.crush_device_class": "",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.encrypted": "0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.osd_id": "1",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.type": "block",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.vdo": "0"
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            },
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "type": "block",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "vg_name": "ceph_vg1"
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:        }
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:    ],
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:    "2": [
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:        {
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "devices": [
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "/dev/loop5"
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            ],
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_name": "ceph_lv2",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_size": "21470642176",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "name": "ceph_lv2",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "tags": {
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.cluster_name": "ceph",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.crush_device_class": "",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.encrypted": "0",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.osd_id": "2",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.type": "block",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:                "ceph.vdo": "0"
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            },
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "type": "block",
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:            "vg_name": "ceph_vg2"
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:        }
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]:    ]
Oct  2 04:46:41 np0005465604 stupefied_sutherland[372400]: }
Oct  2 04:46:41 np0005465604 systemd[1]: libpod-84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4.scope: Deactivated successfully.
Oct  2 04:46:41 np0005465604 podman[372365]: 2025-10-02 08:46:41.269780888 +0000 UTC m=+0.932682251 container died 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 04:46:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ef1ca78facd261c72bfae5d0e82aaee30c72adf946d44251456ad3da9ba73940-merged.mount: Deactivated successfully.
Oct  2 04:46:41 np0005465604 podman[372365]: 2025-10-02 08:46:41.328273682 +0000 UTC m=+0.991175045 container remove 84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 04:46:41 np0005465604 systemd[1]: libpod-conmon-84500f460a66bc441b4ca5fd92b6030dc298cd38daccbfcbc83f7d45e0e7b8e4.scope: Deactivated successfully.
Oct  2 04:46:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.443 2 DEBUG nova.network.neutron [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.445 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:46:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213203243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 9385 writes, 43K keys, 9385 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 9385 writes, 9385 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1490 writes, 7453 keys, 1490 commit groups, 1.0 writes per commit group, ingest: 9.21 MB, 0.02 MB/s#012Interval WAL: 1490 writes, 1490 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    106.9      0.48              0.20        29    0.017       0      0       0.0       0.0#012  L6      1/0    7.09 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4    169.7    141.6      1.59              0.74        28    0.057    156K    15K       0.0       0.0#012 Sum      1/0    7.09 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4    130.1    133.5      2.07              0.94        57    0.036    156K    15K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.0     96.7     94.1      0.85              0.26        16    0.053     55K   4071       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    169.7    141.6      1.59              0.74        28    0.057    156K    15K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    108.0      0.48              0.20        28    0.017       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.050, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.27 GB write, 0.08 MB/s write, 0.26 GB read, 0.07 MB/s read, 2.1 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 29.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000321 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1905,28.08 MB,9.23808%) FilterBlock(58,439.61 KB,0.141219%) IndexBlock(58,770.64 KB,0.247559%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.514 2 DEBUG nova.virt.libvirt.imagebackend [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.588 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.593 2 DEBUG nova.virt.libvirt.imagebackend [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Selected location: {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.594 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] cloning images/bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4@snap to None/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.640 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.675 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.720 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "6933fc953345c6934eae28c9b7b7a2121b435e49" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.766 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.767 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.900 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] resizing rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:41 np0005465604 nova_compute[260603]: 2025-10-02 08:46:41.989 2 DEBUG nova.objects.instance [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lazy-loading 'migration_context' on Instance uuid bf1ef571-5f72-49ef-9c3f-f12f88c79a03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.011 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.012 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Ensure instance console log exists: /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.012 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.013 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.013 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.015 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='860376c145374e34a3becd9382e32f37',container_format='bare',created_at=2025-10-02T08:46:35Z,direct_url=<?>,disk_format='raw',id=bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4,min_disk=0,min_ram=0,name='tempest-image-dependency-test-425033708',owner='ced39caa9dc64a9c8f98ed8725b23025',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-02T08:46:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.019 2 WARNING nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.024 2 DEBUG nova.virt.libvirt.host [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.025 2 DEBUG nova.virt.libvirt.host [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.028 2 DEBUG nova.virt.libvirt.host [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.029 2 DEBUG nova.virt.libvirt.host [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.029 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.029 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='860376c145374e34a3becd9382e32f37',container_format='bare',created_at=2025-10-02T08:46:35Z,direct_url=<?>,disk_format='raw',id=bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4,min_disk=0,min_ram=0,name='tempest-image-dependency-test-425033708',owner='ced39caa9dc64a9c8f98ed8725b23025',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-02T08:46:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.030 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.030 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.031 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.031 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.031 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.031 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.031 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.032 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.032 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.032 2 DEBUG nova.virt.hardware [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.036 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:42 np0005465604 podman[372778]: 2025-10-02 08:46:42.142735537 +0000 UTC m=+0.050206076 container create 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:46:42 np0005465604 systemd[1]: Started libpod-conmon-28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5.scope.
Oct  2 04:46:42 np0005465604 podman[372778]: 2025-10-02 08:46:42.119955747 +0000 UTC m=+0.027426336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:46:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:46:42 np0005465604 podman[372778]: 2025-10-02 08:46:42.245592163 +0000 UTC m=+0.153062732 container init 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 04:46:42 np0005465604 podman[372778]: 2025-10-02 08:46:42.259108715 +0000 UTC m=+0.166579254 container start 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 04:46:42 np0005465604 podman[372778]: 2025-10-02 08:46:42.262761588 +0000 UTC m=+0.170232127 container attach 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:46:42 np0005465604 sleepy_einstein[372797]: 167 167
Oct  2 04:46:42 np0005465604 systemd[1]: libpod-28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5.scope: Deactivated successfully.
Oct  2 04:46:42 np0005465604 podman[372778]: 2025-10-02 08:46:42.266590848 +0000 UTC m=+0.174061397 container died 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 04:46:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-264bee3033c3083fe32fef8a022ca89767c3005ce0699029a4a9397903a00895-merged.mount: Deactivated successfully.
Oct  2 04:46:42 np0005465604 podman[372778]: 2025-10-02 08:46:42.311549859 +0000 UTC m=+0.219020388 container remove 28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 04:46:42 np0005465604 systemd[1]: libpod-conmon-28566a5761c7fde5bfcf1dbad682418b63d4371aa919e5e7ce7af47212bd40a5.scope: Deactivated successfully.
Oct  2 04:46:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:46:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1792084536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.508 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.535 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.545 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:42 np0005465604 podman[372838]: 2025-10-02 08:46:42.565292667 +0000 UTC m=+0.069893299 container create ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:46:42 np0005465604 systemd[1]: Started libpod-conmon-ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4.scope.
Oct  2 04:46:42 np0005465604 podman[372838]: 2025-10-02 08:46:42.542442525 +0000 UTC m=+0.047043177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:46:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:46:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b76dd7d1354c5d1325fa28412b5b676e76db8310503ebca2b8dce2ff68cda6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b76dd7d1354c5d1325fa28412b5b676e76db8310503ebca2b8dce2ff68cda6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b76dd7d1354c5d1325fa28412b5b676e76db8310503ebca2b8dce2ff68cda6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b76dd7d1354c5d1325fa28412b5b676e76db8310503ebca2b8dce2ff68cda6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:46:42 np0005465604 podman[372838]: 2025-10-02 08:46:42.659775152 +0000 UTC m=+0.164375774 container init ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:46:42 np0005465604 podman[372838]: 2025-10-02 08:46:42.672287482 +0000 UTC m=+0.176888114 container start ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 04:46:42 np0005465604 podman[372838]: 2025-10-02 08:46:42.675355828 +0000 UTC m=+0.179956470 container attach ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.769 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.772 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:46:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.949 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.950 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.950 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:46:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120015341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.951 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.951 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.952 2 INFO nova.compute.manager [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Terminating instance#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.953 2 DEBUG nova.compute.manager [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.968 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.970 2 DEBUG nova.objects.instance [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf1ef571-5f72-49ef-9c3f-f12f88c79a03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:42 np0005465604 nova_compute[260603]: 2025-10-02 08:46:42.990 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  <uuid>bf1ef571-5f72-49ef-9c3f-f12f88c79a03</uuid>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  <name>instance-00000070</name>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <nova:name>instance-depend-image</nova:name>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:46:42</nova:creationTime>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:        <nova:user uuid="84fa2fc10124476dac4b89b077888ca1">tempest-ImageDependencyTests-1169999881-project-member</nova:user>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:        <nova:project uuid="ced39caa9dc64a9c8f98ed8725b23025">tempest-ImageDependencyTests-1169999881</nova:project>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="bdb1b5ad-fd90-4b15-b6e6-05cebcc152f4"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <entry name="serial">bf1ef571-5f72-49ef-9c3f-f12f88c79a03</entry>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <entry name="uuid">bf1ef571-5f72-49ef-9c3f-f12f88c79a03</entry>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/console.log" append="off"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:46:42 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:46:42 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:46:42 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:46:42 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:46:43 np0005465604 kernel: tap4475b0be-d2 (unregistering): left promiscuous mode
Oct  2 04:46:43 np0005465604 NetworkManager[45129]: <info>  [1759394803.0358] device (tap4475b0be-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:46:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:43Z|01151|binding|INFO|Releasing lport 4475b0be-d2ac-4bf1-8ad7-e117311654a3 from this chassis (sb_readonly=0)
Oct  2 04:46:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:43Z|01152|binding|INFO|Setting lport 4475b0be-d2ac-4bf1-8ad7-e117311654a3 down in Southbound
Oct  2 04:46:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:43Z|01153|binding|INFO|Removing iface tap4475b0be-d2 ovn-installed in OVS
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.049 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:7c:24 10.100.0.12', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a541632d-06cf-48a9-a44d-19bcce4df36f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12ca4751-2801-43b7-bd66-26826481ad08, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4475b0be-d2ac-4bf1-8ad7-e117311654a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.051 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4475b0be-d2ac-4bf1-8ad7-e117311654a3 in datapath 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 unbound from our chassis#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.052 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.068 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.069 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.069 2 INFO nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Using config drive#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.075 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8275c703-a6c6-4119-90d3-34f6618645a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:43 np0005465604 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Oct  2 04:46:43 np0005465604 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Consumed 12.583s CPU time.
Oct  2 04:46:43 np0005465604 systemd-machined[214636]: Machine qemu-140-instance-0000006f terminated.
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.106 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[eabfa8fa-8081-41fc-a020-cd63ab4a06d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.106 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.109 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fe849cd5-99d7-4da0-8b77-dc5f902c02e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.137 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[71c4aa9d-2d0d-46af-9ea1-22498263d2fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.156 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e48e5a52-0a83-4fa4-b436-3177c4271079]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ee1fadc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:f0:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562918, 'reachable_time': 43019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 15, 'inoctets': 1208, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 15, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1208, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 15, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372931, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.173 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b77e1b-63d0-4b72-943f-a7f762f4fd5a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0ee1fadc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562930, 'tstamp': 562930}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372932, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0ee1fadc-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562932, 'tstamp': 562932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372932, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.175 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ee1fadc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.191 2 INFO nova.virt.libvirt.driver [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Instance destroyed successfully.#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.192 2 DEBUG nova.objects.instance [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid a541632d-06cf-48a9-a44d-19bcce4df36f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.195 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ee1fadc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.196 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.196 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ee1fadc-d0, col_values=(('external_ids', {'iface-id': 'f994afa7-373d-469a-a6b3-0a33b20c9e54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:43.197 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.215 2 DEBUG nova.virt.libvirt.vif [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:46:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-2026517778',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=111,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:46:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-xidjkow3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:46:23Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=a541632d-06cf-48a9-a44d-19bcce4df36f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.215 2 DEBUG nova.network.os_vif_util [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "address": "fa:16:3e:56:7c:24", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4475b0be-d2", "ovs_interfaceid": "4475b0be-d2ac-4bf1-8ad7-e117311654a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.216 2 DEBUG nova.network.os_vif_util [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.216 2 DEBUG os_vif [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4475b0be-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.229 2 INFO os_vif [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:7c:24,bridge_name='br-int',has_traffic_filtering=True,id=4475b0be-d2ac-4bf1-8ad7-e117311654a3,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4475b0be-d2')#033[00m
Oct  2 04:46:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 2.6 MiB/s wr, 135 op/s
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.443 2 INFO nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Creating config drive at /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.448 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0jx8jipb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.599 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0jx8jipb" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.621 2 DEBUG nova.storage.rbd_utils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] rbd image bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.624 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.670 2 INFO nova.virt.libvirt.driver [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Deleting instance files /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f_del#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.671 2 INFO nova.virt.libvirt.driver [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Deletion of /var/lib/nova/instances/a541632d-06cf-48a9-a44d-19bcce4df36f_del complete#033[00m
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]: {
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "osd_id": 2,
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "type": "bluestore"
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:    },
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "osd_id": 1,
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "type": "bluestore"
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:    },
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "osd_id": 0,
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:        "type": "bluestore"
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]:    }
Oct  2 04:46:43 np0005465604 vigorous_lamport[372875]: }
Oct  2 04:46:43 np0005465604 systemd[1]: libpod-ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4.scope: Deactivated successfully.
Oct  2 04:46:43 np0005465604 systemd[1]: libpod-ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4.scope: Consumed 1.012s CPU time.
Oct  2 04:46:43 np0005465604 conmon[372875]: conmon ac59cb75709636a54aa4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4.scope/container/memory.events
Oct  2 04:46:43 np0005465604 podman[372838]: 2025-10-02 08:46:43.716787348 +0000 UTC m=+1.221387980 container died ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.724 2 INFO nova.compute.manager [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.725 2 DEBUG oslo.service.loopingcall [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.725 2 DEBUG nova.compute.manager [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.725 2 DEBUG nova.network.neutron [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:46:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-12b76dd7d1354c5d1325fa28412b5b676e76db8310503ebca2b8dce2ff68cda6-merged.mount: Deactivated successfully.
Oct  2 04:46:43 np0005465604 podman[372838]: 2025-10-02 08:46:43.773212117 +0000 UTC m=+1.277812739 container remove ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 04:46:43 np0005465604 systemd[1]: libpod-conmon-ac59cb75709636a54aa410ae8873f229ec37f9c17985c95be561680edebcf8e4.scope: Deactivated successfully.
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.800 2 DEBUG oslo_concurrency.processutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:43 np0005465604 nova_compute[260603]: 2025-10-02 08:46:43.802 2 INFO nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Deleting local config drive /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03/disk.config because it was imported into RBD.#033[00m
Oct  2 04:46:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:46:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:46:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:46:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:46:43 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 751ed800-49b0-42b7-8348-7ab20bb2445b does not exist
Oct  2 04:46:43 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev fea4ff8a-d4a4-4bdc-9820-beaa3d20065f does not exist
Oct  2 04:46:43 np0005465604 systemd-machined[214636]: New machine qemu-141-instance-00000070.
Oct  2 04:46:43 np0005465604 systemd[1]: Started Virtual Machine qemu-141-instance-00000070.
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:46:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.834 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394804.8344653, bf1ef571-5f72-49ef-9c3f-f12f88c79a03 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.835 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.837 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.837 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.841 2 INFO nova.virt.libvirt.driver [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance spawned successfully.#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.841 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.868 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.868 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.869 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.869 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.870 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.870 2 DEBUG nova.virt.libvirt.driver [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.876 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.879 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.910 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.911 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394804.8367307, bf1ef571-5f72-49ef-9c3f-f12f88c79a03 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.911 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] VM Started (Lifecycle Event)#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.958 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.962 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.971 2 INFO nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Took 3.87 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.971 2 DEBUG nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:44 np0005465604 nova_compute[260603]: 2025-10-02 08:46:44.996 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:46:45 np0005465604 nova_compute[260603]: 2025-10-02 08:46:45.031 2 INFO nova.compute.manager [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Took 5.07 seconds to build instance.#033[00m
Oct  2 04:46:45 np0005465604 nova_compute[260603]: 2025-10-02 08:46:45.048 2 DEBUG oslo_concurrency.lockutils [None req-7756e6ad-e0c3-44f5-81e3-1fb9f52d63c4 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:45 np0005465604 nova_compute[260603]: 2025-10-02 08:46:45.374 2 DEBUG nova.network.neutron [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:45 np0005465604 nova_compute[260603]: 2025-10-02 08:46:45.391 2 INFO nova.compute.manager [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Took 1.67 seconds to deallocate network for instance.#033[00m
Oct  2 04:46:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 200 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 2.6 MiB/s wr, 135 op/s
Oct  2 04:46:45 np0005465604 nova_compute[260603]: 2025-10-02 08:46:45.434 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:45 np0005465604 nova_compute[260603]: 2025-10-02 08:46:45.434 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:45 np0005465604 nova_compute[260603]: 2025-10-02 08:46:45.500 2 DEBUG nova.compute.manager [req-19490c9a-4c11-42a5-993a-9124ad55bcbe req-4bd4e116-01d2-4a61-a717-dc313328976b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Received event network-vif-deleted-4475b0be-d2ac-4bf1-8ad7-e117311654a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:45 np0005465604 nova_compute[260603]: 2025-10-02 08:46:45.514 2 DEBUG oslo_concurrency.processutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:46:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4004492399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:46:45 np0005465604 nova_compute[260603]: 2025-10-02 08:46:45.977 2 DEBUG oslo_concurrency.processutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:45 np0005465604 nova_compute[260603]: 2025-10-02 08:46:45.983 2 DEBUG nova.compute.provider_tree [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:46:46 np0005465604 nova_compute[260603]: 2025-10-02 08:46:46.007 2 DEBUG nova.scheduler.client.report [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:46:46 np0005465604 nova_compute[260603]: 2025-10-02 08:46:46.031 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:46 np0005465604 nova_compute[260603]: 2025-10-02 08:46:46.057 2 INFO nova.scheduler.client.report [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance a541632d-06cf-48a9-a44d-19bcce4df36f#033[00m
Oct  2 04:46:46 np0005465604 nova_compute[260603]: 2025-10-02 08:46:46.180 2 DEBUG oslo_concurrency.lockutils [None req-c9bd44dd-6c4d-4674-978e-44903287f60a 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "a541632d-06cf-48a9-a44d-19bcce4df36f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:46 np0005465604 nova_compute[260603]: 2025-10-02 08:46:46.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:46:47 np0005465604 podman[373174]: 2025-10-02 08:46:47.016811905 +0000 UTC m=+0.066155903 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:46:47 np0005465604 podman[373173]: 2025-10-02 08:46:47.026238129 +0000 UTC m=+0.080994506 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.425 2 DEBUG nova.compute.manager [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 176 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 490 KiB/s rd, 2.5 MiB/s wr, 156 op/s
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.449 2 DEBUG nova.compute.manager [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.449 2 DEBUG nova.compute.manager [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing instance network info cache due to event network-changed-71e3efd4-125e-40e4-bdda-59df254d21f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.450 2 DEBUG oslo_concurrency.lockutils [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.451 2 DEBUG oslo_concurrency.lockutils [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.452 2 DEBUG nova.network.neutron [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Refreshing network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.499 2 INFO nova.compute.manager [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] instance snapshotting#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.509 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.510 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.511 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.511 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.512 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.513 2 INFO nova.compute.manager [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Terminating instance#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.515 2 DEBUG nova.compute.manager [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:46:47 np0005465604 kernel: tap71e3efd4-12 (unregistering): left promiscuous mode
Oct  2 04:46:47 np0005465604 NetworkManager[45129]: <info>  [1759394807.5953] device (tap71e3efd4-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:47Z|01154|binding|INFO|Releasing lport 71e3efd4-125e-40e4-bdda-59df254d21f9 from this chassis (sb_readonly=0)
Oct  2 04:46:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:47Z|01155|binding|INFO|Setting lport 71e3efd4-125e-40e4-bdda-59df254d21f9 down in Southbound
Oct  2 04:46:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:46:47Z|01156|binding|INFO|Removing iface tap71e3efd4-12 ovn-installed in OVS
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.697 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:06:68 10.100.0.4'], port_security=['fa:16:3e:e8:06:68 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fc71f095-bde6-43da-bec6-e0a30dc1b71a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '026f2ff0-b634-4fa0-8159-21403eda59a0 d0f166da-6d6f-4ea3-ab29-33f59ca9931c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12ca4751-2801-43b7-bd66-26826481ad08, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=71e3efd4-125e-40e4-bdda-59df254d21f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:46:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.698 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 71e3efd4-125e-40e4-bdda-59df254d21f9 in datapath 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 unbound from our chassis#033[00m
Oct  2 04:46:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.699 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:46:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.699 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4a63c090-7936-4e96-962e-e5d7a5a94327]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.700 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 namespace which is not needed anymore#033[00m
Oct  2 04:46:47 np0005465604 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct  2 04:46:47 np0005465604 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Consumed 14.806s CPU time.
Oct  2 04:46:47 np0005465604 systemd-machined[214636]: Machine qemu-136-instance-0000006d terminated.
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.770 2 INFO nova.virt.libvirt.driver [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Instance destroyed successfully.#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.771 2 DEBUG nova.objects.instance [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid fc71f095-bde6-43da-bec6-e0a30dc1b71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.806 2 DEBUG nova.virt.libvirt.vif [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:45:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-636868978',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=109,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVA79qbd3NtJv84RrbhGptrnsrVMgvbKHQMrDgaSLfUQo5hxQDp/dq5BHyqGmWd6dJ7yqexCddmRMhWAszT1sZolLZV1xXu34aiHfjKSbuLnXtvoVyFqHt1Oka+6ZlQ1g==',key_name='tempest-TestSecurityGroupsBasicOps-1359298851',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:45:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-dp87un5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:45:51Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=fc71f095-bde6-43da-bec6-e0a30dc1b71a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.807 2 DEBUG nova.network.os_vif_util [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.808 2 DEBUG nova.network.os_vif_util [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.808 2 DEBUG os_vif [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.810 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71e3efd4-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.817 2 INFO os_vif [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:06:68,bridge_name='br-int',has_traffic_filtering=True,id=71e3efd4-125e-40e4-bdda-59df254d21f9,network=Network(0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71e3efd4-12')#033[00m
Oct  2 04:46:47 np0005465604 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [NOTICE]   (370935) : haproxy version is 2.8.14-c23fe91
Oct  2 04:46:47 np0005465604 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [NOTICE]   (370935) : path to executable is /usr/sbin/haproxy
Oct  2 04:46:47 np0005465604 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [WARNING]  (370935) : Exiting Master process...
Oct  2 04:46:47 np0005465604 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [WARNING]  (370935) : Exiting Master process...
Oct  2 04:46:47 np0005465604 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [ALERT]    (370935) : Current worker (370937) exited with code 143 (Terminated)
Oct  2 04:46:47 np0005465604 neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758[370931]: [WARNING]  (370935) : All workers exited. Exiting... (0)
Oct  2 04:46:47 np0005465604 systemd[1]: libpod-be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2.scope: Deactivated successfully.
Oct  2 04:46:47 np0005465604 podman[373249]: 2025-10-02 08:46:47.835319366 +0000 UTC m=+0.043416254 container died be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:46:47 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2-userdata-shm.mount: Deactivated successfully.
Oct  2 04:46:47 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a2a8bfea6155d379b6c0d528c11335822e5f651d1b1147fcb20b7f259f8bc47f-merged.mount: Deactivated successfully.
Oct  2 04:46:47 np0005465604 podman[373249]: 2025-10-02 08:46:47.872226567 +0000 UTC m=+0.080323445 container cleanup be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 04:46:47 np0005465604 systemd[1]: libpod-conmon-be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2.scope: Deactivated successfully.
Oct  2 04:46:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:47 np0005465604 podman[373298]: 2025-10-02 08:46:47.940100183 +0000 UTC m=+0.045448628 container remove be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.947 2 INFO nova.virt.libvirt.driver [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Beginning live snapshot process#033[00m
Oct  2 04:46:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.948 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[669fc4ec-819d-496d-82db-463ecad03e9d]: (4, ('Thu Oct  2 08:46:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 (be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2)\nbe55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2\nThu Oct  2 08:46:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 (be55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2)\nbe55a914efbddb4a663a20f9506d0f90894adf0bb1db0147174937c86f50f2c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.949 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c91b23-868e-456b-b176-544ee4a04270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.950 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ee1fadc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:47 np0005465604 kernel: tap0ee1fadc-d0: left promiscuous mode
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.960 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ac16571f-e5c9-4dda-923a-09c8975724da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:47 np0005465604 nova_compute[260603]: 2025-10-02 08:46:47.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.985 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4e291726-a418-4138-8747-ffbb914077ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:47.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[acd8beaf-b5e5-47bc-ba3e-5a5356b92b7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:48.009 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df95fe2f-d59e-487e-a05a-b5e75b21dd32]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562912, 'reachable_time': 28143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373313, 'error': None, 'target': 'ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:48.011 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:46:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:48.012 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4e166d66-0c27-4bd2-8284-d2d4ae2b306e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:46:48 np0005465604 systemd[1]: run-netns-ovnmeta\x2d0ee1fadc\x2dd3e5\x2d4c06\x2db0fc\x2d8a72fa56e758.mount: Deactivated successfully.
Oct  2 04:46:48 np0005465604 nova_compute[260603]: 2025-10-02 08:46:48.148 2 INFO nova.virt.libvirt.driver [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Deleting instance files /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a_del#033[00m
Oct  2 04:46:48 np0005465604 nova_compute[260603]: 2025-10-02 08:46:48.149 2 INFO nova.virt.libvirt.driver [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Deletion of /var/lib/nova/instances/fc71f095-bde6-43da-bec6-e0a30dc1b71a_del complete#033[00m
Oct  2 04:46:48 np0005465604 nova_compute[260603]: 2025-10-02 08:46:48.156 2 DEBUG nova.storage.rbd_utils [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] creating snapshot(d9af03ccea004794b28fd44bb03c004a) on rbd image(bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:46:48 np0005465604 nova_compute[260603]: 2025-10-02 08:46:48.215 2 INFO nova.compute.manager [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:46:48 np0005465604 nova_compute[260603]: 2025-10-02 08:46:48.216 2 DEBUG oslo.service.loopingcall [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:46:48 np0005465604 nova_compute[260603]: 2025-10-02 08:46:48.217 2 DEBUG nova.compute.manager [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:46:48 np0005465604 nova_compute[260603]: 2025-10-02 08:46:48.217 2 DEBUG nova.network.neutron [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:46:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct  2 04:46:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct  2 04:46:48 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct  2 04:46:48 np0005465604 nova_compute[260603]: 2025-10-02 08:46:48.910 2 DEBUG nova.storage.rbd_utils [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] cloning vms/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk@d9af03ccea004794b28fd44bb03c004a to images/f9e3d91e-e186-463d-8f89-5f30afae86fb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.035 2 DEBUG nova.storage.rbd_utils [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] flattening images/f9e3d91e-e186-463d-8f89-5f30afae86fb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.101 2 DEBUG nova.network.neutron [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.163 2 INFO nova.compute.manager [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Took 0.95 seconds to deallocate network for instance.#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.177 2 DEBUG nova.storage.rbd_utils [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] removing snapshot(d9af03ccea004794b28fd44bb03c004a) on rbd image(bf1ef571-5f72-49ef-9c3f-f12f88c79a03_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.202 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.203 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.258 2 DEBUG nova.network.neutron [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updated VIF entry in instance network info cache for port 71e3efd4-125e-40e4-bdda-59df254d21f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.258 2 DEBUG nova.network.neutron [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [{"id": "71e3efd4-125e-40e4-bdda-59df254d21f9", "address": "fa:16:3e:e8:06:68", "network": {"id": "0ee1fadc-d3e5-4c06-b0fc-8a72fa56e758", "bridge": "br-int", "label": "tempest-network-smoke--409092237", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71e3efd4-12", "ovs_interfaceid": "71e3efd4-125e-40e4-bdda-59df254d21f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.276 2 DEBUG oslo_concurrency.processutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.317 2 DEBUG oslo_concurrency.lockutils [req-d8e00eb0-eace-4b12-b7ed-a72016eda29f req-8924bd8f-75e7-45b6-9593-22dc991da624 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-fc71f095-bde6-43da-bec6-e0a30dc1b71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:46:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 96 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 33 KiB/s wr, 115 op/s
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.558 2 DEBUG nova.compute.manager [req-98e3aa7f-38a7-4ca1-aa90-02ff364df75c req-b2acda4b-3aea-40ea-bdd1-f6c5eb23a91e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Received event network-vif-deleted-71e3efd4-125e-40e4-bdda-59df254d21f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.558 2 INFO nova.compute.manager [req-98e3aa7f-38a7-4ca1-aa90-02ff364df75c req-b2acda4b-3aea-40ea-bdd1-f6c5eb23a91e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Neutron deleted interface 71e3efd4-125e-40e4-bdda-59df254d21f9; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.559 2 DEBUG nova.network.neutron [req-98e3aa7f-38a7-4ca1-aa90-02ff364df75c req-b2acda4b-3aea-40ea-bdd1-f6c5eb23a91e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.594 2 DEBUG nova.compute.manager [req-98e3aa7f-38a7-4ca1-aa90-02ff364df75c req-b2acda4b-3aea-40ea-bdd1-f6c5eb23a91e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Detach interface failed, port_id=71e3efd4-125e-40e4-bdda-59df254d21f9, reason: Instance fc71f095-bde6-43da-bec6-e0a30dc1b71a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 04:46:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:46:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1890090234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.746 2 DEBUG oslo_concurrency.processutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.751 2 DEBUG nova.compute.provider_tree [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.773 2 DEBUG nova.scheduler.client.report [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.799 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.833 2 INFO nova.scheduler.client.report [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance fc71f095-bde6-43da-bec6-e0a30dc1b71a#033[00m
Oct  2 04:46:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct  2 04:46:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct  2 04:46:49 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.911 2 DEBUG oslo_concurrency.lockutils [None req-bbe40d5c-bf9b-47c1-b139-d6aecd6bcc95 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "fc71f095-bde6-43da-bec6-e0a30dc1b71a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:49 np0005465604 nova_compute[260603]: 2025-10-02 08:46:49.917 2 DEBUG nova.storage.rbd_utils [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] creating snapshot(snap) on rbd image(f9e3d91e-e186-463d-8f89-5f30afae86fb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:46:50 np0005465604 nova_compute[260603]: 2025-10-02 08:46:50.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:46:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct  2 04:46:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct  2 04:46:50 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct  2 04:46:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 96 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 28 KiB/s wr, 123 op/s
Oct  2 04:46:52 np0005465604 nova_compute[260603]: 2025-10-02 08:46:52.652 2 INFO nova.virt.libvirt.driver [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Snapshot image upload complete#033[00m
Oct  2 04:46:52 np0005465604 nova_compute[260603]: 2025-10-02 08:46:52.653 2 INFO nova.compute.manager [None req-e2a83d5c-e181-4f66-9574-a99c07538255 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Took 5.15 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 04:46:52 np0005465604 nova_compute[260603]: 2025-10-02 08:46:52.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:53 np0005465604 nova_compute[260603]: 2025-10-02 08:46:53.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 162 KiB/s rd, 33 KiB/s wr, 220 op/s
Oct  2 04:46:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:53.904 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:46:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:53.905 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:46:53 np0005465604 nova_compute[260603]: 2025-10-02 08:46:53.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:54 np0005465604 nova_compute[260603]: 2025-10-02 08:46:54.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct  2 04:46:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct  2 04:46:54 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct  2 04:46:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 6.7 KiB/s wr, 143 op/s
Oct  2 04:46:55 np0005465604 nova_compute[260603]: 2025-10-02 08:46:55.661 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:55 np0005465604 nova_compute[260603]: 2025-10-02 08:46:55.662 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:55 np0005465604 nova_compute[260603]: 2025-10-02 08:46:55.662 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:55 np0005465604 nova_compute[260603]: 2025-10-02 08:46:55.663 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:55 np0005465604 nova_compute[260603]: 2025-10-02 08:46:55.663 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:55 np0005465604 nova_compute[260603]: 2025-10-02 08:46:55.665 2 INFO nova.compute.manager [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Terminating instance#033[00m
Oct  2 04:46:55 np0005465604 nova_compute[260603]: 2025-10-02 08:46:55.666 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "refresh_cache-bf1ef571-5f72-49ef-9c3f-f12f88c79a03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:46:55 np0005465604 nova_compute[260603]: 2025-10-02 08:46:55.667 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquired lock "refresh_cache-bf1ef571-5f72-49ef-9c3f-f12f88c79a03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:46:55 np0005465604 nova_compute[260603]: 2025-10-02 08:46:55.667 2 DEBUG nova.network.neutron [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:46:56 np0005465604 nova_compute[260603]: 2025-10-02 08:46:56.276 2 DEBUG nova.network.neutron [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:46:56 np0005465604 nova_compute[260603]: 2025-10-02 08:46:56.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:56 np0005465604 nova_compute[260603]: 2025-10-02 08:46:56.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:56 np0005465604 nova_compute[260603]: 2025-10-02 08:46:56.704 2 DEBUG nova.network.neutron [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:56 np0005465604 nova_compute[260603]: 2025-10-02 08:46:56.729 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Releasing lock "refresh_cache-bf1ef571-5f72-49ef-9c3f-f12f88c79a03" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:46:56 np0005465604 nova_compute[260603]: 2025-10-02 08:46:56.730 2 DEBUG nova.compute.manager [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:46:56 np0005465604 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct  2 04:46:56 np0005465604 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Consumed 1.328s CPU time.
Oct  2 04:46:56 np0005465604 systemd-machined[214636]: Machine qemu-141-instance-00000070 terminated.
Oct  2 04:46:56 np0005465604 nova_compute[260603]: 2025-10-02 08:46:56.954 2 INFO nova.virt.libvirt.driver [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance destroyed successfully.#033[00m
Oct  2 04:46:56 np0005465604 nova_compute[260603]: 2025-10-02 08:46:56.955 2 DEBUG nova.objects.instance [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lazy-loading 'resources' on Instance uuid bf1ef571-5f72-49ef-9c3f-f12f88c79a03 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:46:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 7.3 KiB/s wr, 171 op/s
Oct  2 04:46:57 np0005465604 nova_compute[260603]: 2025-10-02 08:46:57.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:46:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct  2 04:46:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct  2 04:46:57 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct  2 04:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.155 2 INFO nova.virt.libvirt.driver [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Deleting instance files /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_del#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.156 2 INFO nova.virt.libvirt.driver [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Deletion of /var/lib/nova/instances/bf1ef571-5f72-49ef-9c3f-f12f88c79a03_del complete#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.190 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394803.1892493, a541632d-06cf-48a9-a44d-19bcce4df36f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.190 2 INFO nova.compute.manager [-] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.216 2 DEBUG nova.compute.manager [None req-e5721f86-68d1-43b6-88e8-71e3f3ed6b63 - - - - - -] [instance: a541632d-06cf-48a9-a44d-19bcce4df36f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.236 2 INFO nova.compute.manager [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Took 1.51 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.236 2 DEBUG oslo.service.loopingcall [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.237 2 DEBUG nova.compute.manager [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.237 2 DEBUG nova.network.neutron [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.394 2 DEBUG nova.network.neutron [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.407 2 DEBUG nova.network.neutron [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.430 2 INFO nova.compute.manager [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Took 0.19 seconds to deallocate network for instance.#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.483 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.484 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:46:58 np0005465604 nova_compute[260603]: 2025-10-02 08:46:58.543 2 DEBUG oslo_concurrency.processutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:46:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:46:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438301172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:46:59 np0005465604 nova_compute[260603]: 2025-10-02 08:46:59.015 2 DEBUG oslo_concurrency.processutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:46:59 np0005465604 nova_compute[260603]: 2025-10-02 08:46:59.025 2 DEBUG nova.compute.provider_tree [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:46:59 np0005465604 nova_compute[260603]: 2025-10-02 08:46:59.051 2 DEBUG nova.scheduler.client.report [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:46:59 np0005465604 nova_compute[260603]: 2025-10-02 08:46:59.077 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:59 np0005465604 nova_compute[260603]: 2025-10-02 08:46:59.122 2 INFO nova.scheduler.client.report [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Deleted allocations for instance bf1ef571-5f72-49ef-9c3f-f12f88c79a03#033[00m
Oct  2 04:46:59 np0005465604 nova_compute[260603]: 2025-10-02 08:46:59.203 2 DEBUG oslo_concurrency.lockutils [None req-8b2aeb8a-d5c5-4b1f-8c09-f2a51d5050bb 84fa2fc10124476dac4b89b077888ca1 ced39caa9dc64a9c8f98ed8725b23025 - - default default] Lock "bf1ef571-5f72-49ef-9c3f-f12f88c79a03" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:46:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 145 KiB/s rd, 8.2 KiB/s wr, 192 op/s
Oct  2 04:46:59 np0005465604 nova_compute[260603]: 2025-10-02 08:46:59.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:46:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:46:59.907 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 3.2 KiB/s wr, 84 op/s
Oct  2 04:47:02 np0005465604 nova_compute[260603]: 2025-10-02 08:47:02.769 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394807.7678638, fc71f095-bde6-43da-bec6-e0a30dc1b71a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:47:02 np0005465604 nova_compute[260603]: 2025-10-02 08:47:02.770 2 INFO nova.compute.manager [-] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:47:02 np0005465604 nova_compute[260603]: 2025-10-02 08:47:02.796 2 DEBUG nova.compute.manager [None req-72e98fc3-c52d-47d4-a34b-8c07326299c9 - - - - - -] [instance: fc71f095-bde6-43da-bec6-e0a30dc1b71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:02 np0005465604 nova_compute[260603]: 2025-10-02 08:47:02.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct  2 04:47:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct  2 04:47:02 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct  2 04:47:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 4.2 KiB/s wr, 104 op/s
Oct  2 04:47:04 np0005465604 nova_compute[260603]: 2025-10-02 08:47:04.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.4 KiB/s wr, 50 op/s
Oct  2 04:47:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Oct  2 04:47:07 np0005465604 nova_compute[260603]: 2025-10-02 08:47:07.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct  2 04:47:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct  2 04:47:07 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct  2 04:47:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1023 B/s wr, 20 op/s
Oct  2 04:47:09 np0005465604 nova_compute[260603]: 2025-10-02 08:47:09.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:09 np0005465604 nova_compute[260603]: 2025-10-02 08:47:09.667 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:09 np0005465604 nova_compute[260603]: 2025-10-02 08:47:09.668 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:09 np0005465604 nova_compute[260603]: 2025-10-02 08:47:09.692 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:47:09 np0005465604 nova_compute[260603]: 2025-10-02 08:47:09.982 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:09 np0005465604 nova_compute[260603]: 2025-10-02 08:47:09.982 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:09 np0005465604 nova_compute[260603]: 2025-10-02 08:47:09.990 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:47:09 np0005465604 nova_compute[260603]: 2025-10-02 08:47:09.990 2 INFO nova.compute.claims [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:47:10 np0005465604 podman[373524]: 2025-10-02 08:47:10.039599788 +0000 UTC m=+0.084886427 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:47:10 np0005465604 podman[373523]: 2025-10-02 08:47:10.043966024 +0000 UTC m=+0.101294618 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.090 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:47:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187534461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.506 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.513 2 DEBUG nova.compute.provider_tree [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.539 2 DEBUG nova.scheduler.client.report [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.579 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.580 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.646 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.647 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.671 2 INFO nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.689 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.807 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.810 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.811 2 INFO nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Creating image(s)#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.843 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.867 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.885 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.888 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.984 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.985 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.985 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:10 np0005465604 nova_compute[260603]: 2025-10-02 08:47:10.986 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.010 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.013 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 36a84233-3256-49b2-ae05-1569eb78b50f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.334 2 DEBUG nova.policy [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.341 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 36a84233-3256-49b2-ae05-1569eb78b50f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.421 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:47:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 41 MiB data, 741 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.522 2 DEBUG nova.objects.instance [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid 36a84233-3256-49b2-ae05-1569eb78b50f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.547 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.547 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Ensure instance console log exists: /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.548 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.548 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.548 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.953 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394816.9528518, bf1ef571-5f72-49ef-9c3f-f12f88c79a03 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.954 2 INFO nova.compute.manager [-] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:47:11 np0005465604 nova_compute[260603]: 2025-10-02 08:47:11.975 2 DEBUG nova.compute.manager [None req-c26d2266-072f-4fc5-8b63-e26d44f8cd43 - - - - - -] [instance: bf1ef571-5f72-49ef-9c3f-f12f88c79a03] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:12 np0005465604 nova_compute[260603]: 2025-10-02 08:47:12.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:13 np0005465604 nova_compute[260603]: 2025-10-02 08:47:13.215 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Successfully created port: d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:47:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct  2 04:47:14 np0005465604 nova_compute[260603]: 2025-10-02 08:47:14.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:14 np0005465604 nova_compute[260603]: 2025-10-02 08:47:14.746 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Successfully updated port: d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:47:14 np0005465604 nova_compute[260603]: 2025-10-02 08:47:14.768 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:47:14 np0005465604 nova_compute[260603]: 2025-10-02 08:47:14.768 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:47:14 np0005465604 nova_compute[260603]: 2025-10-02 08:47:14.769 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:47:14 np0005465604 nova_compute[260603]: 2025-10-02 08:47:14.961 2 DEBUG nova.compute.manager [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:47:14 np0005465604 nova_compute[260603]: 2025-10-02 08:47:14.962 2 DEBUG nova.compute.manager [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing instance network info cache due to event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:47:14 np0005465604 nova_compute[260603]: 2025-10-02 08:47:14.962 2 DEBUG oslo_concurrency.lockutils [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:47:15 np0005465604 nova_compute[260603]: 2025-10-02 08:47:15.106 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:47:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.824 2 DEBUG nova.network.neutron [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updating instance_info_cache with network_info: [{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.862 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.863 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Instance network_info: |[{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.864 2 DEBUG oslo_concurrency.lockutils [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.865 2 DEBUG nova.network.neutron [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.870 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Start _get_guest_xml network_info=[{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.878 2 WARNING nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.887 2 DEBUG nova.virt.libvirt.host [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.888 2 DEBUG nova.virt.libvirt.host [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.898 2 DEBUG nova.virt.libvirt.host [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.899 2 DEBUG nova.virt.libvirt.host [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.900 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.900 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.902 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.902 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.903 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.903 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.903 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.904 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.904 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.905 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.905 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.906 2 DEBUG nova.virt.hardware [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:47:16 np0005465604 nova_compute[260603]: 2025-10-02 08:47:16.911 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:47:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040014468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.427 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.468 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.473 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:47:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/66998200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:47:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.915 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.917 2 DEBUG nova.virt.libvirt.vif [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=113,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-cn883z4w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:10Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=36a84233-3256-49b2-ae05-1569eb78b50f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.918 2 DEBUG nova.network.os_vif_util [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.918 2 DEBUG nova.network.os_vif_util [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.920 2 DEBUG nova.objects.instance [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 36a84233-3256-49b2-ae05-1569eb78b50f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.942 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  <uuid>36a84233-3256-49b2-ae05-1569eb78b50f</uuid>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  <name>instance-00000071</name>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742</nova:name>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:47:16</nova:creationTime>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <nova:port uuid="d1e15184-f166-48f4-a04d-1c6fa2a9f2a2">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <entry name="serial">36a84233-3256-49b2-ae05-1569eb78b50f</entry>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <entry name="uuid">36a84233-3256-49b2-ae05-1569eb78b50f</entry>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/36a84233-3256-49b2-ae05-1569eb78b50f_disk">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/36a84233-3256-49b2-ae05-1569eb78b50f_disk.config">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:5e:d6:a0"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <target dev="tapd1e15184-f1"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/console.log" append="off"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:47:17 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:47:17 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:47:17 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:47:17 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.942 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Preparing to wait for external event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.943 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.943 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.943 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.944 2 DEBUG nova.virt.libvirt.vif [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=113,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-cn883z4w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:10Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=36a84233-3256-49b2-ae05-1569eb78b50f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.944 2 DEBUG nova.network.os_vif_util [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.944 2 DEBUG nova.network.os_vif_util [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.945 2 DEBUG os_vif [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.945 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.946 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1e15184-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.949 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1e15184-f1, col_values=(('external_ids', {'iface-id': 'd1e15184-f166-48f4-a04d-1c6fa2a9f2a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:d6:a0', 'vm-uuid': '36a84233-3256-49b2-ae05-1569eb78b50f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:17 np0005465604 NetworkManager[45129]: <info>  [1759394837.9529] manager: (tapd1e15184-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:17 np0005465604 nova_compute[260603]: 2025-10-02 08:47:17.958 2 INFO os_vif [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1')#033[00m
Oct  2 04:47:18 np0005465604 podman[373817]: 2025-10-02 08:47:18.001477828 +0000 UTC m=+0.063791040 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 04:47:18 np0005465604 podman[373816]: 2025-10-02 08:47:18.00731905 +0000 UTC m=+0.069826547 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.028 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.029 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.029 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:5e:d6:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.029 2 INFO nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Using config drive#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.047 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.606 2 INFO nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Creating config drive at /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.616 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwjiv218d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.680 2 DEBUG nova.network.neutron [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updated VIF entry in instance network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.681 2 DEBUG nova.network.neutron [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updating instance_info_cache with network_info: [{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.702 2 DEBUG oslo_concurrency.lockutils [req-99f54baa-2177-45a2-a9df-12400b21457c req-68968ce1-4293-4301-9927-5e9424584aa2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.788 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwjiv218d" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.825 2 DEBUG nova.storage.rbd_utils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 36a84233-3256-49b2-ae05-1569eb78b50f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:18 np0005465604 nova_compute[260603]: 2025-10-02 08:47:18.830 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config 36a84233-3256-49b2-ae05-1569eb78b50f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.031 2 DEBUG oslo_concurrency.processutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config 36a84233-3256-49b2-ae05-1569eb78b50f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.032 2 INFO nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Deleting local config drive /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f/disk.config because it was imported into RBD.#033[00m
Oct  2 04:47:19 np0005465604 kernel: tapd1e15184-f1: entered promiscuous mode
Oct  2 04:47:19 np0005465604 NetworkManager[45129]: <info>  [1759394839.1093] manager: (tapd1e15184-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/452)
Oct  2 04:47:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:19Z|01157|binding|INFO|Claiming lport d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 for this chassis.
Oct  2 04:47:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:19Z|01158|binding|INFO|d1e15184-f166-48f4-a04d-1c6fa2a9f2a2: Claiming fa:16:3e:5e:d6:a0 10.100.0.7
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.126 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:d6:a0 10.100.0.7'], port_security=['fa:16:3e:5e:d6:a0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '36a84233-3256-49b2-ae05-1569eb78b50f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '57638211-3760-47a5-8fdb-d4470031f4cf 580c7ec0-94be-443b-88b7-53672ca4c5d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2af26aac-073a-413a-88d9-fc16235c9487, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.127 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 in datapath 828d3558-d7f1-4d90-8e6f-bf6eff4d744e bound to our chassis#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.129 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 828d3558-d7f1-4d90-8e6f-bf6eff4d744e#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.146 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a34304-4556-423a-9aad-fec55c87cf95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.148 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap828d3558-d1 in ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.150 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap828d3558-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.150 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0159d78-5290-446a-9f14-eb08d75c83f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.151 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0131d353-aa9a-4c70-b876-99b8c3076fdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 systemd-machined[214636]: New machine qemu-142-instance-00000071.
Oct  2 04:47:19 np0005465604 systemd-udevd[373934]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.175 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[efe496c4-9649-4272-8a4d-e80d725c5450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 NetworkManager[45129]: <info>  [1759394839.1860] device (tapd1e15184-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:47:19 np0005465604 NetworkManager[45129]: <info>  [1759394839.1870] device (tapd1e15184-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:47:19 np0005465604 systemd[1]: Started Virtual Machine qemu-142-instance-00000071.
Oct  2 04:47:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:19Z|01159|binding|INFO|Setting lport d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 ovn-installed in OVS
Oct  2 04:47:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:19Z|01160|binding|INFO|Setting lport d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 up in Southbound
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.207 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[20049194-c7ef-4a83-b591-97f21727e40d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.248 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f36631eb-7eaf-469d-8500-6d76e4091729]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 NetworkManager[45129]: <info>  [1759394839.2560] manager: (tap828d3558-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/453)
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.255 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9f5c1bd5-3301-4bc6-bd64-b0e2aef28a81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.298 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9141b91a-8df6-49b3-9fc4-78382796b1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.301 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ad830d0c-e8c5-48f6-9caa-f8ccc4616bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 NetworkManager[45129]: <info>  [1759394839.3375] device (tap828d3558-d0): carrier: link connected
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.344 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e26207f9-43fa-4af5-af82-3cf918c1c50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.369 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c11ee92f-c0e3-46db-9df2-5a69225e4851]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap828d3558-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:2f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571794, 'reachable_time': 28448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373965, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.391 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb128c8-6dcd-4242-bde7-87db42be65d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:2f9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571794, 'tstamp': 571794}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373966, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.410 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e3aaff0b-76af-4e0c-a0dc-bec1400cab1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap828d3558-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:2f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571794, 'reachable_time': 28448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373967, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.449 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b69a6bf2-ea9e-4ebe-b09c-b13bd476241d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50057331-00bf-412d-b8c0-ec1730db58a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.524 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap828d3558-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.524 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.524 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap828d3558-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:19 np0005465604 kernel: tap828d3558-d0: entered promiscuous mode
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.528 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap828d3558-d0, col_values=(('external_ids', {'iface-id': '8765a176-985e-405d-870e-44dc7d3390b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:19 np0005465604 NetworkManager[45129]: <info>  [1759394839.5296] manager: (tap828d3558-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:19Z|01161|binding|INFO|Releasing lport 8765a176-985e-405d-870e-44dc7d3390b4 from this chassis (sb_readonly=0)
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.549 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/828d3558-d7f1-4d90-8e6f-bf6eff4d744e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/828d3558-d7f1-4d90-8e6f-bf6eff4d744e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.550 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0526f5b0-83bd-45a7-a250-7c843bc27a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.550 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-828d3558-d7f1-4d90-8e6f-bf6eff4d744e
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/828d3558-d7f1-4d90-8e6f-bf6eff4d744e.pid.haproxy
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 828d3558-d7f1-4d90-8e6f-bf6eff4d744e
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:47:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:19.551 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'env', 'PROCESS_TAG=haproxy-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/828d3558-d7f1-4d90-8e6f-bf6eff4d744e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.638 2 DEBUG nova.compute.manager [req-2dd67d04-5c05-4aee-b8b5-97db63ba606c req-14571408-57dd-4d3f-8fe9-e9b42d4f70dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.639 2 DEBUG oslo_concurrency.lockutils [req-2dd67d04-5c05-4aee-b8b5-97db63ba606c req-14571408-57dd-4d3f-8fe9-e9b42d4f70dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.639 2 DEBUG oslo_concurrency.lockutils [req-2dd67d04-5c05-4aee-b8b5-97db63ba606c req-14571408-57dd-4d3f-8fe9-e9b42d4f70dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.639 2 DEBUG oslo_concurrency.lockutils [req-2dd67d04-5c05-4aee-b8b5-97db63ba606c req-14571408-57dd-4d3f-8fe9-e9b42d4f70dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:19 np0005465604 nova_compute[260603]: 2025-10-02 08:47:19.639 2 DEBUG nova.compute.manager [req-2dd67d04-5c05-4aee-b8b5-97db63ba606c req-14571408-57dd-4d3f-8fe9-e9b42d4f70dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Processing event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:47:19 np0005465604 podman[374041]: 2025-10-02 08:47:19.955566523 +0000 UTC m=+0.061354203 container create ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:47:20 np0005465604 systemd[1]: Started libpod-conmon-ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985.scope.
Oct  2 04:47:20 np0005465604 podman[374041]: 2025-10-02 08:47:19.92820181 +0000 UTC m=+0.033989530 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:47:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:47:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d31a5b55753d611210a316cb1a0d76d716a9026841d73fcc07cd5dc9fe4ce7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:20 np0005465604 podman[374041]: 2025-10-02 08:47:20.070816746 +0000 UTC m=+0.176604436 container init ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:47:20 np0005465604 podman[374041]: 2025-10-02 08:47:20.081739946 +0000 UTC m=+0.187527636 container start ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 04:47:20 np0005465604 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [NOTICE]   (374061) : New worker (374063) forked
Oct  2 04:47:20 np0005465604 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [NOTICE]   (374061) : Loading success.
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.185 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394840.1844373, 36a84233-3256-49b2-ae05-1569eb78b50f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.185 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] VM Started (Lifecycle Event)#033[00m
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.189 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.193 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.198 2 INFO nova.virt.libvirt.driver [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Instance spawned successfully.#033[00m
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.199 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.211 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.216 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.227 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.227 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.228 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.228 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.229 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.229 2 DEBUG nova.virt.libvirt.driver [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.260 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.261 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394840.1846619, 36a84233-3256-49b2-ae05-1569eb78b50f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.261 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] VM Paused (Lifecycle Event)
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.287 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.291 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394840.1929514, 36a84233-3256-49b2-ae05-1569eb78b50f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.292 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] VM Resumed (Lifecycle Event)
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.301 2 INFO nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Took 9.49 seconds to spawn the instance on the hypervisor.
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.302 2 DEBUG nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.310 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.314 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.339 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.382 2 INFO nova.compute.manager [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Took 10.43 seconds to build instance.
Oct  2 04:47:20 np0005465604 nova_compute[260603]: 2025-10-02 08:47:20.401 2 DEBUG oslo_concurrency.lockutils [None req-8b76381d-cdf6-430e-a32a-19762539f99f 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:47:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:47:21 np0005465604 nova_compute[260603]: 2025-10-02 08:47:21.831 2 DEBUG nova.compute.manager [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:47:21 np0005465604 nova_compute[260603]: 2025-10-02 08:47:21.832 2 DEBUG oslo_concurrency.lockutils [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:47:21 np0005465604 nova_compute[260603]: 2025-10-02 08:47:21.832 2 DEBUG oslo_concurrency.lockutils [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:47:21 np0005465604 nova_compute[260603]: 2025-10-02 08:47:21.833 2 DEBUG oslo_concurrency.lockutils [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:47:21 np0005465604 nova_compute[260603]: 2025-10-02 08:47:21.833 2 DEBUG nova.compute.manager [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] No waiting events found dispatching network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:47:21 np0005465604 nova_compute[260603]: 2025-10-02 08:47:21.833 2 WARNING nova.compute.manager [req-63c7f4be-8f55-4d15-a17a-3d93939c0a0d req-6ad6982a-cad8-47e1-98ac-7e95d63d094c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received unexpected event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 for instance with vm_state active and task_state None.
Oct  2 04:47:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:47:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3657045289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:47:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:47:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3657045289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:47:22 np0005465604 nova_compute[260603]: 2025-10-02 08:47:22.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:47:22 np0005465604 nova_compute[260603]: 2025-10-02 08:47:22.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 04:47:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:22 np0005465604 nova_compute[260603]: 2025-10-02 08:47:22.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:47:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 04:47:24 np0005465604 NetworkManager[45129]: <info>  [1759394844.4262] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Oct  2 04:47:24 np0005465604 NetworkManager[45129]: <info>  [1759394844.4271] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Oct  2 04:47:24 np0005465604 nova_compute[260603]: 2025-10-02 08:47:24.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:47:24 np0005465604 nova_compute[260603]: 2025-10-02 08:47:24.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:47:24 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:24Z|01162|binding|INFO|Releasing lport 8765a176-985e-405d-870e-44dc7d3390b4 from this chassis (sb_readonly=0)
Oct  2 04:47:24 np0005465604 nova_compute[260603]: 2025-10-02 08:47:24.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:47:24 np0005465604 nova_compute[260603]: 2025-10-02 08:47:24.721 2 DEBUG nova.compute.manager [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:47:24 np0005465604 nova_compute[260603]: 2025-10-02 08:47:24.722 2 DEBUG nova.compute.manager [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing instance network info cache due to event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:47:24 np0005465604 nova_compute[260603]: 2025-10-02 08:47:24.722 2 DEBUG oslo_concurrency.lockutils [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:47:24 np0005465604 nova_compute[260603]: 2025-10-02 08:47:24.723 2 DEBUG oslo_concurrency.lockutils [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:47:24 np0005465604 nova_compute[260603]: 2025-10-02 08:47:24.724 2 DEBUG nova.network.neutron [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:47:24 np0005465604 nova_compute[260603]: 2025-10-02 08:47:24.878 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:47:24 np0005465604 nova_compute[260603]: 2025-10-02 08:47:24.878 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:47:24 np0005465604 nova_compute[260603]: 2025-10-02 08:47:24.901 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.023 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.024 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.030 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.031 2 INFO nova.compute.claims [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.169 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:47:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 04:47:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:47:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898768290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.672 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.682 2 DEBUG nova.compute.provider_tree [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.699 2 DEBUG nova.scheduler.client.report [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.722 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.723 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.768 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.769 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.787 2 INFO nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.807 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.894 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.896 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.896 2 INFO nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Creating image(s)
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.918 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.944 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.966 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:47:25 np0005465604 nova_compute[260603]: 2025-10-02 08:47:25.971 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.016 2 DEBUG nova.policy [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7767630a5b1049f48d7e0fed29e221ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c86b416fdb524f21b0228639a3a14116', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.048 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.049 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.050 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.050 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.076 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.079 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 42e467eb-b532-4383-91dd-4c8e9f68328c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.412 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 42e467eb-b532-4383-91dd-4c8e9f68328c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.490 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.609 2 DEBUG nova.objects.instance [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid 42e467eb-b532-4383-91dd-4c8e9f68328c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.642 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.643 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Ensure instance console log exists: /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.644 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.645 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.646 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.837 2 DEBUG nova.network.neutron [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updated VIF entry in instance network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.839 2 DEBUG nova.network.neutron [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updating instance_info_cache with network_info: [{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.861 2 DEBUG oslo_concurrency.lockutils [req-5dcf42d1-b7ae-43e7-b572-d2398be130ee req-fd68714a-fee5-4de6-a39b-0f67d4362f39 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:47:26 np0005465604 nova_compute[260603]: 2025-10-02 08:47:26.952 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Successfully created port: 98d1bfd0-f123-48c7-a32d-bca6b92ab19d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:47:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 107 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 85 op/s
Oct  2 04:47:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:47:27 np0005465604 nova_compute[260603]: 2025-10-02 08:47:27.989 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Successfully updated port: 98d1bfd0-f123-48c7-a32d-bca6b92ab19d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:47:27 np0005465604 nova_compute[260603]: 2025-10-02 08:47:27.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:47:27
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.log', 'backups']
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:47:28 np0005465604 nova_compute[260603]: 2025-10-02 08:47:28.011 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:47:28 np0005465604 nova_compute[260603]: 2025-10-02 08:47:28.011 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:47:28 np0005465604 nova_compute[260603]: 2025-10-02 08:47:28.012 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:47:28 np0005465604 nova_compute[260603]: 2025-10-02 08:47:28.181 2 DEBUG nova.compute.manager [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:47:28 np0005465604 nova_compute[260603]: 2025-10-02 08:47:28.182 2 DEBUG nova.compute.manager [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing instance network info cache due to event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:47:28 np0005465604 nova_compute[260603]: 2025-10-02 08:47:28.183 2 DEBUG oslo_concurrency.lockutils [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:47:28 np0005465604 ceph-mgr[74774]: client.0 ms_handle_reset on v2:192.168.122.100:6800/860957497
Oct  2 04:47:29 np0005465604 nova_compute[260603]: 2025-10-02 08:47:29.332 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:47:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 134 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 04:47:29 np0005465604 nova_compute[260603]: 2025-10-02 08:47:29.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:47:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 134 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 04:47:32 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:32Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:d6:a0 10.100.0.7
Oct  2 04:47:32 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:32Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:d6:a0 10.100.0.7
Oct  2 04:47:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.366 2 DEBUG nova.network.neutron [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updating instance_info_cache with network_info: [{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.393 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.394 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Instance network_info: |[{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.394 2 DEBUG oslo_concurrency.lockutils [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.395 2 DEBUG nova.network.neutron [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.400 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Start _get_guest_xml network_info=[{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.406 2 WARNING nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.412 2 DEBUG nova.virt.libvirt.host [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.413 2 DEBUG nova.virt.libvirt.host [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.421 2 DEBUG nova.virt.libvirt.host [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.422 2 DEBUG nova.virt.libvirt.host [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.424 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.424 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.425 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.426 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.426 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.427 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.427 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.428 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.428 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.429 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.429 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.430 2 DEBUG nova.virt.hardware [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.435 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:47:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 164 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:47:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:47:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944547598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.908 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.947 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:47:33 np0005465604 nova_compute[260603]: 2025-10-02 08:47:33.953 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:47:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:47:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921354071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.403 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.406 2 DEBUG nova.virt.libvirt.vif [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-440315055',display_name='tempest-TestNetworkAdvancedServerOps-server-440315055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-440315055',id=114,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAL7Jay6VGgIbZM/uTD3le9XaJZEa2h1Gi+bt0pESVUIc4MgOYJNkH9P9hwcT+CLKqZhGrKxvTkkVtq+Eg9Dj0D8XlduRBRW3I1LeGVNjAVQ1naigScKF/Jy3yqgcndKBQ==',key_name='tempest-TestNetworkAdvancedServerOps-1091937656',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-l13jj000',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:25Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=42e467eb-b532-4383-91dd-4c8e9f68328c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.407 2 DEBUG nova.network.os_vif_util [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.408 2 DEBUG nova.network.os_vif_util [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.411 2 DEBUG nova.objects.instance [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 42e467eb-b532-4383-91dd-4c8e9f68328c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.438 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  <uuid>42e467eb-b532-4383-91dd-4c8e9f68328c</uuid>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  <name>instance-00000072</name>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-440315055</nova:name>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:47:33</nova:creationTime>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <nova:port uuid="98d1bfd0-f123-48c7-a32d-bca6b92ab19d">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <entry name="serial">42e467eb-b532-4383-91dd-4c8e9f68328c</entry>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <entry name="uuid">42e467eb-b532-4383-91dd-4c8e9f68328c</entry>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/42e467eb-b532-4383-91dd-4c8e9f68328c_disk">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:cf:15:11"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <target dev="tap98d1bfd0-f1"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/console.log" append="off"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:47:34 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:47:34 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:47:34 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:47:34 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.440 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Preparing to wait for external event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.440 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.441 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.442 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.443 2 DEBUG nova.virt.libvirt.vif [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-440315055',display_name='tempest-TestNetworkAdvancedServerOps-server-440315055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-440315055',id=114,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAL7Jay6VGgIbZM/uTD3le9XaJZEa2h1Gi+bt0pESVUIc4MgOYJNkH9P9hwcT+CLKqZhGrKxvTkkVtq+Eg9Dj0D8XlduRBRW3I1LeGVNjAVQ1naigScKF/Jy3yqgcndKBQ==',key_name='tempest-TestNetworkAdvancedServerOps-1091937656',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-l13jj000',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:25Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=42e467eb-b532-4383-91dd-4c8e9f68328c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.444 2 DEBUG nova.network.os_vif_util [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.445 2 DEBUG nova.network.os_vif_util [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.446 2 DEBUG os_vif [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.448 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98d1bfd0-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98d1bfd0-f1, col_values=(('external_ids', {'iface-id': '98d1bfd0-f123-48c7-a32d-bca6b92ab19d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:15:11', 'vm-uuid': '42e467eb-b532-4383-91dd-4c8e9f68328c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:34 np0005465604 NetworkManager[45129]: <info>  [1759394854.4610] manager: (tap98d1bfd0-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.469 2 INFO os_vif [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1')#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.557 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.558 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.558 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:cf:15:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.559 2 INFO nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Using config drive#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.598 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:34.829 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:34.830 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:34.831 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.924 2 INFO nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Creating config drive at /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config#033[00m
Oct  2 04:47:34 np0005465604 nova_compute[260603]: 2025-10-02 08:47:34.935 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_450hxwk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.113 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_450hxwk" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.156 2 DEBUG nova.storage.rbd_utils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.160 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config 42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.349 2 DEBUG oslo_concurrency.processutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config 42e467eb-b532-4383-91dd-4c8e9f68328c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.350 2 INFO nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Deleting local config drive /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c/disk.config because it was imported into RBD.#033[00m
Oct  2 04:47:35 np0005465604 kernel: tap98d1bfd0-f1: entered promiscuous mode
Oct  2 04:47:35 np0005465604 NetworkManager[45129]: <info>  [1759394855.3994] manager: (tap98d1bfd0-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Oct  2 04:47:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:35Z|01163|binding|INFO|Claiming lport 98d1bfd0-f123-48c7-a32d-bca6b92ab19d for this chassis.
Oct  2 04:47:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:35Z|01164|binding|INFO|98d1bfd0-f123-48c7-a32d-bca6b92ab19d: Claiming fa:16:3e:cf:15:11 10.100.0.12
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.413 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:15:11 10.100.0.12'], port_security=['fa:16:3e:cf:15:11 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '42e467eb-b532-4383-91dd-4c8e9f68328c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8c633b73-9b65-49fc-b72d-a746e692a924', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6683c145-41ec-4e92-b711-415f2e27650f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=98d1bfd0-f123-48c7-a32d-bca6b92ab19d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.414 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d in datapath d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 bound to our chassis#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.415 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d4ef4078-5ea1-4ac7-a0f4-0c2647248d03#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.421 2 DEBUG nova.network.neutron [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updated VIF entry in instance network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.422 2 DEBUG nova.network.neutron [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updating instance_info_cache with network_info: [{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:47:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:35Z|01165|binding|INFO|Setting lport 98d1bfd0-f123-48c7-a32d-bca6b92ab19d ovn-installed in OVS
Oct  2 04:47:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:35Z|01166|binding|INFO|Setting lport 98d1bfd0-f123-48c7-a32d-bca6b92ab19d up in Southbound
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c850792e-4914-4cae-a5a8-6b6683c47d0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.434 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd4ef4078-51 in ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.435 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd4ef4078-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.435 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[52788b8f-e16b-4896-a362-90e620d92481]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 systemd-udevd[374399]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b960aa-61db-4817-8f2f-cc3decf9cb64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 systemd-machined[214636]: New machine qemu-143-instance-00000072.
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.445 2 DEBUG oslo_concurrency.lockutils [req-81c3a65f-a028-43a4-9e1e-617392a48d90 req-a00e4415-0354-4042-9462-1c8882e501e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.449 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b39a23-f7fa-4304-ba0b-3afb221006bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 systemd[1]: Started Virtual Machine qemu-143-instance-00000072.
Oct  2 04:47:35 np0005465604 NetworkManager[45129]: <info>  [1759394855.4536] device (tap98d1bfd0-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:47:35 np0005465604 NetworkManager[45129]: <info>  [1759394855.4545] device (tap98d1bfd0-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:47:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 164 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 3.9 MiB/s wr, 82 op/s
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.478 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4bb2b23-eca6-478a-bf5e-417b1a38fac4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.511 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e8d0ff-fb37-4ef2-9b0e-1f14a6bcd688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 NetworkManager[45129]: <info>  [1759394855.5183] manager: (tapd4ef4078-50): new Veth device (/org/freedesktop/NetworkManager/Devices/459)
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.517 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d73c9b12-5236-4b46-8ee0-cef1131c46e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.551 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[94a4b8ca-eb20-469a-b6b6-125d49680f57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.554 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb074a5-efe2-480d-92ee-d89c7a1b827d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 NetworkManager[45129]: <info>  [1759394855.5778] device (tapd4ef4078-50): carrier: link connected
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.582 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[870b5c66-04bd-42be-aaa3-e71fb2228b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.599 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf14064b-0350-4de7-84e2-b21832d3e99c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd4ef4078-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d2:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573417, 'reachable_time': 38911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374431, 'error': None, 'target': 'ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f6bdaaa4-be15-48cd-abe0-65d4e3213e58]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:d2ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573417, 'tstamp': 573417}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374439, 'error': None, 'target': 'ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.628 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0401dbca-8970-4a21-bf4d-afc2ef837afa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd4ef4078-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d2:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 334], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573417, 'reachable_time': 38911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374449, 'error': None, 'target': 'ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.657 2 DEBUG nova.compute.manager [req-e16629d0-72ff-4aaf-a169-4436f0031086 req-eef1f677-5f1e-40fc-8c34-8fc206c85cc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.658 2 DEBUG oslo_concurrency.lockutils [req-e16629d0-72ff-4aaf-a169-4436f0031086 req-eef1f677-5f1e-40fc-8c34-8fc206c85cc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.658 2 DEBUG oslo_concurrency.lockutils [req-e16629d0-72ff-4aaf-a169-4436f0031086 req-eef1f677-5f1e-40fc-8c34-8fc206c85cc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.659 2 DEBUG oslo_concurrency.lockutils [req-e16629d0-72ff-4aaf-a169-4436f0031086 req-eef1f677-5f1e-40fc-8c34-8fc206c85cc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.659 2 DEBUG nova.compute.manager [req-e16629d0-72ff-4aaf-a169-4436f0031086 req-eef1f677-5f1e-40fc-8c34-8fc206c85cc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Processing event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.660 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[89c41aaf-03b3-4f72-a6a0-9ec5e4b635b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.717 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0643020-35eb-4ca1-acc9-26a96a3cdf55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.721 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4ef4078-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.722 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.722 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4ef4078-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:35 np0005465604 kernel: tapd4ef4078-50: entered promiscuous mode
Oct  2 04:47:35 np0005465604 NetworkManager[45129]: <info>  [1759394855.7256] manager: (tapd4ef4078-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd4ef4078-50, col_values=(('external_ids', {'iface-id': 'd2b8281e-e8e6-435e-9d86-c9ccb80a7650'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:35Z|01167|binding|INFO|Releasing lport d2b8281e-e8e6-435e-9d86-c9ccb80a7650 from this chassis (sb_readonly=0)
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.734 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d4ef4078-5ea1-4ac7-a0f4-0c2647248d03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d4ef4078-5ea1-4ac7-a0f4-0c2647248d03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.735 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5351be-0682-4b87-96b7-9a3e1085f800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.736 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/d4ef4078-5ea1-4ac7-a0f4-0c2647248d03.pid.haproxy
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID d4ef4078-5ea1-4ac7-a0f4-0c2647248d03
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:47:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:35.737 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'env', 'PROCESS_TAG=haproxy-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d4ef4078-5ea1-4ac7-a0f4-0c2647248d03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:47:35 np0005465604 nova_compute[260603]: 2025-10-02 08:47:35.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:36 np0005465604 podman[374507]: 2025-10-02 08:47:36.140222644 +0000 UTC m=+0.066094370 container create 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:47:36 np0005465604 podman[374507]: 2025-10-02 08:47:36.102341885 +0000 UTC m=+0.028213631 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:47:36 np0005465604 systemd[1]: Started libpod-conmon-9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514.scope.
Oct  2 04:47:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:47:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02087fd71ae88db67d2279610b147edf86c03fa40d0a2813a2b3379bd99958ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:36 np0005465604 podman[374507]: 2025-10-02 08:47:36.2491621 +0000 UTC m=+0.175033856 container init 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 04:47:36 np0005465604 podman[374507]: 2025-10-02 08:47:36.254536758 +0000 UTC m=+0.180408494 container start 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:47:36 np0005465604 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [NOTICE]   (374526) : New worker (374528) forked
Oct  2 04:47:36 np0005465604 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [NOTICE]   (374526) : Loading success.
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.332 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394856.3316793, 42e467eb-b532-4383-91dd-4c8e9f68328c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.332 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Started (Lifecycle Event)#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.335 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.338 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.342 2 INFO nova.virt.libvirt.driver [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Instance spawned successfully.#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.342 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.392 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.392 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.393 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.394 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.394 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.395 2 DEBUG nova.virt.libvirt.driver [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.401 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.404 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.459 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.459 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394856.3318453, 42e467eb-b532-4383-91dd-4c8e9f68328c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.460 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.491 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.496 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394856.337718, 42e467eb-b532-4383-91dd-4c8e9f68328c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.496 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.555 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.560 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.591 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.598 2 INFO nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Took 10.70 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.598 2 DEBUG nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.723 2 INFO nova.compute.manager [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Took 11.75 seconds to build instance.#033[00m
Oct  2 04:47:36 np0005465604 nova_compute[260603]: 2025-10-02 08:47:36.792 2 DEBUG oslo_concurrency.lockutils [None req-8c240216-80ca-44be-a168-5dba6a414dd6 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Oct  2 04:47:37 np0005465604 nova_compute[260603]: 2025-10-02 08:47:37.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:47:37 np0005465604 nova_compute[260603]: 2025-10-02 08:47:37.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:47:37 np0005465604 nova_compute[260603]: 2025-10-02 08:47:37.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:47:37 np0005465604 nova_compute[260603]: 2025-10-02 08:47:37.859 2 DEBUG nova.compute.manager [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:47:37 np0005465604 nova_compute[260603]: 2025-10-02 08:47:37.860 2 DEBUG oslo_concurrency.lockutils [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:37 np0005465604 nova_compute[260603]: 2025-10-02 08:47:37.860 2 DEBUG oslo_concurrency.lockutils [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:37 np0005465604 nova_compute[260603]: 2025-10-02 08:47:37.860 2 DEBUG oslo_concurrency.lockutils [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:37 np0005465604 nova_compute[260603]: 2025-10-02 08:47:37.861 2 DEBUG nova.compute.manager [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] No waiting events found dispatching network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:47:37 np0005465604 nova_compute[260603]: 2025-10-02 08:47:37.861 2 WARNING nova.compute.manager [req-82812d47-c6f0-44ed-a87c-4c260c27d369 req-c870cc02-bad1-4090-9908-15436ce19b9c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received unexpected event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d for instance with vm_state active and task_state None.#033[00m
Oct  2 04:47:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011026628736081265 of space, bias 1.0, pg target 0.33079886208243797 quantized to 32 (current 32)
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:47:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:47:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 131 op/s
Oct  2 04:47:39 np0005465604 nova_compute[260603]: 2025-10-02 08:47:39.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:39 np0005465604 nova_compute[260603]: 2025-10-02 08:47:39.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:47:39 np0005465604 nova_compute[260603]: 2025-10-02 08:47:39.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:47:39 np0005465604 nova_compute[260603]: 2025-10-02 08:47:39.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:39 np0005465604 nova_compute[260603]: 2025-10-02 08:47:39.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:39 np0005465604 nova_compute[260603]: 2025-10-02 08:47:39.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:39 np0005465604 nova_compute[260603]: 2025-10-02 08:47:39.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:39 np0005465604 nova_compute[260603]: 2025-10-02 08:47:39.558 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:47:39 np0005465604 nova_compute[260603]: 2025-10-02 08:47:39.559 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:47:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1044921527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.069 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.192 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.193 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.201 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.201 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:47:40 np0005465604 podman[374562]: 2025-10-02 08:47:40.224989471 +0000 UTC m=+0.089018395 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:47:40 np0005465604 podman[374560]: 2025-10-02 08:47:40.245861532 +0000 UTC m=+0.119805195 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.435 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.437 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3388MB free_disk=59.92213439941406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.437 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.438 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.521 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 36a84233-3256-49b2-ae05-1569eb78b50f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.521 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 42e467eb-b532-4383-91dd-4c8e9f68328c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.522 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.522 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 04:47:40 np0005465604 nova_compute[260603]: 2025-10-02 08:47:40.579 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:47:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:47:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4130836935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:47:41 np0005465604 nova_compute[260603]: 2025-10-02 08:47:41.051 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:47:41 np0005465604 nova_compute[260603]: 2025-10-02 08:47:41.056 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:47:41 np0005465604 nova_compute[260603]: 2025-10-02 08:47:41.079 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:47:41 np0005465604 nova_compute[260603]: 2025-10-02 08:47:41.105 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 04:47:41 np0005465604 nova_compute[260603]: 2025-10-02 08:47:41.105 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:47:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct  2 04:47:42 np0005465604 nova_compute[260603]: 2025-10-02 08:47:42.770 2 DEBUG nova.compute.manager [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:47:42 np0005465604 nova_compute[260603]: 2025-10-02 08:47:42.771 2 DEBUG nova.compute.manager [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing instance network info cache due to event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:47:42 np0005465604 nova_compute[260603]: 2025-10-02 08:47:42.771 2 DEBUG oslo_concurrency.lockutils [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:47:42 np0005465604 nova_compute[260603]: 2025-10-02 08:47:42.772 2 DEBUG oslo_concurrency.lockutils [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:47:42 np0005465604 nova_compute[260603]: 2025-10-02 08:47:42.772 2 DEBUG nova.network.neutron [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:47:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:43 np0005465604 nova_compute[260603]: 2025-10-02 08:47:43.101 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:47:43 np0005465604 nova_compute[260603]: 2025-10-02 08:47:43.102 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:47:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct  2 04:47:44 np0005465604 nova_compute[260603]: 2025-10-02 08:47:44.339 2 DEBUG nova.network.neutron [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updated VIF entry in instance network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 04:47:44 np0005465604 nova_compute[260603]: 2025-10-02 08:47:44.341 2 DEBUG nova.network.neutron [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updating instance_info_cache with network_info: [{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:47:44 np0005465604 nova_compute[260603]: 2025-10-02 08:47:44.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:47:44 np0005465604 nova_compute[260603]: 2025-10-02 08:47:44.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:47:44 np0005465604 nova_compute[260603]: 2025-10-02 08:47:44.711 2 DEBUG oslo_concurrency.lockutils [req-ad19b2eb-1cd3-4078-b95c-509c4eedf885 req-dec022ed-0a06-4a5b-8610-568509a11d90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:47:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev dbb651d4-e3f8-4dfe-aae6-12360ca84a28 does not exist
Oct  2 04:47:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f343bc7e-398c-4be1-9c38-28da8e552ede does not exist
Oct  2 04:47:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 24b30a72-4b4a-4f9d-b893-0be446fa8280 does not exist
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:47:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:47:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:47:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:47:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:47:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 54 KiB/s wr, 79 op/s
Oct  2 04:47:45 np0005465604 podman[374901]: 2025-10-02 08:47:45.480537949 +0000 UTC m=+0.058904088 container create 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:47:45 np0005465604 podman[374901]: 2025-10-02 08:47:45.449532022 +0000 UTC m=+0.027898171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:47:45 np0005465604 systemd[1]: Started libpod-conmon-42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc.scope.
Oct  2 04:47:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:47:45 np0005465604 podman[374901]: 2025-10-02 08:47:45.621798472 +0000 UTC m=+0.200164631 container init 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 04:47:45 np0005465604 podman[374901]: 2025-10-02 08:47:45.631728991 +0000 UTC m=+0.210095160 container start 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct  2 04:47:45 np0005465604 eager_wescoff[374917]: 167 167
Oct  2 04:47:45 np0005465604 systemd[1]: libpod-42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc.scope: Deactivated successfully.
Oct  2 04:47:45 np0005465604 podman[374901]: 2025-10-02 08:47:45.645556301 +0000 UTC m=+0.223922460 container attach 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:47:45 np0005465604 podman[374901]: 2025-10-02 08:47:45.646191552 +0000 UTC m=+0.224557681 container died 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:47:45 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3485d28d97893cdef468bf895e64e0ee4dbe013e1e7f79b0088cee076c75613f-merged.mount: Deactivated successfully.
Oct  2 04:47:45 np0005465604 podman[374901]: 2025-10-02 08:47:45.762135725 +0000 UTC m=+0.340501864 container remove 42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 04:47:45 np0005465604 systemd[1]: libpod-conmon-42193dc75ff8b6e260e31c0709278a89947ff38f691af0cc0d70ac008ffd91fc.scope: Deactivated successfully.
Oct  2 04:47:45 np0005465604 podman[374943]: 2025-10-02 08:47:45.968775357 +0000 UTC m=+0.060330282 container create 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 04:47:46 np0005465604 systemd[1]: Started libpod-conmon-1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8.scope.
Oct  2 04:47:46 np0005465604 podman[374943]: 2025-10-02 08:47:45.944933943 +0000 UTC m=+0.036488898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:47:46 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:47:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:46 np0005465604 podman[374943]: 2025-10-02 08:47:46.103411642 +0000 UTC m=+0.194966637 container init 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 04:47:46 np0005465604 podman[374943]: 2025-10-02 08:47:46.113806607 +0000 UTC m=+0.205361542 container start 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:47:46 np0005465604 podman[374943]: 2025-10-02 08:47:46.117726979 +0000 UTC m=+0.209281944 container attach 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 04:47:46 np0005465604 nova_compute[260603]: 2025-10-02 08:47:46.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:47:46 np0005465604 nova_compute[260603]: 2025-10-02 08:47:46.592 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:47:46 np0005465604 nova_compute[260603]: 2025-10-02 08:47:46.593 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:47:46 np0005465604 nova_compute[260603]: 2025-10-02 08:47:46.614 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:47:46 np0005465604 nova_compute[260603]: 2025-10-02 08:47:46.684 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:47:46 np0005465604 nova_compute[260603]: 2025-10-02 08:47:46.685 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:47:46 np0005465604 nova_compute[260603]: 2025-10-02 08:47:46.693 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 04:47:46 np0005465604 nova_compute[260603]: 2025-10-02 08:47:46.693 2 INFO nova.compute.claims [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Claim successful on node compute-0.ctlplane.example.com
Oct  2 04:47:46 np0005465604 nova_compute[260603]: 2025-10-02 08:47:46.866 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:47:47 np0005465604 strange_grothendieck[374960]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:47:47 np0005465604 strange_grothendieck[374960]: --> relative data size: 1.0
Oct  2 04:47:47 np0005465604 strange_grothendieck[374960]: --> All data devices are unavailable
Oct  2 04:47:47 np0005465604 systemd[1]: libpod-1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8.scope: Deactivated successfully.
Oct  2 04:47:47 np0005465604 podman[374943]: 2025-10-02 08:47:47.219031544 +0000 UTC m=+1.310586479 container died 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 04:47:47 np0005465604 systemd[1]: libpod-1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8.scope: Consumed 1.014s CPU time.
Oct  2 04:47:47 np0005465604 systemd[1]: var-lib-containers-storage-overlay-050487dd21b7c9ebd153cdbf51bccc50f825b4ade22ef4fe322b57f5caea3506-merged.mount: Deactivated successfully.
Oct  2 04:47:47 np0005465604 podman[374943]: 2025-10-02 08:47:47.277128315 +0000 UTC m=+1.368683220 container remove 1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:47:47 np0005465604 systemd[1]: libpod-conmon-1b1831814b7823623a1bfdcd2735975bdf3e421998100d98eb49c3acee5a73e8.scope: Deactivated successfully.
Oct  2 04:47:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:47:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397444312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.343 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.348 2 DEBUG nova.compute.provider_tree [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.366 2 DEBUG nova.scheduler.client.report [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.387 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.387 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.433 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.433 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:47:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 175 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 89 op/s
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.477 2 INFO nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.506 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.595 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.596 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.596 2 INFO nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Creating image(s)#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.617 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.641 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.663 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.666 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.706 2 DEBUG nova.policy [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3dd1e04a123f47aa8a6b835785a1c569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.747 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.747 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.748 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.748 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.767 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:47 np0005465604 nova_compute[260603]: 2025-10-02 08:47:47.774 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 262d5706-35bc-45eb-809b-217369a86015_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:47 np0005465604 podman[375237]: 2025-10-02 08:47:47.886770118 +0000 UTC m=+0.036790778 container create 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:47:47 np0005465604 systemd[1]: Started libpod-conmon-964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1.scope.
Oct  2 04:47:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:47:47 np0005465604 podman[375237]: 2025-10-02 08:47:47.872448141 +0000 UTC m=+0.022468831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:47:47 np0005465604 podman[375237]: 2025-10-02 08:47:47.985102402 +0000 UTC m=+0.135123092 container init 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 04:47:47 np0005465604 podman[375237]: 2025-10-02 08:47:47.992492962 +0000 UTC m=+0.142513632 container start 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:47:47 np0005465604 affectionate_jones[375270]: 167 167
Oct  2 04:47:47 np0005465604 systemd[1]: libpod-964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1.scope: Deactivated successfully.
Oct  2 04:47:48 np0005465604 podman[375237]: 2025-10-02 08:47:48.007453419 +0000 UTC m=+0.157474089 container attach 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:47:48 np0005465604 podman[375237]: 2025-10-02 08:47:48.007962005 +0000 UTC m=+0.157982675 container died 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct  2 04:47:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fa3e9fcb635c190bf709619d3f3e995eee7d68e27fa16e364fc4f4c0614f2f9f-merged.mount: Deactivated successfully.
Oct  2 04:47:48 np0005465604 podman[375237]: 2025-10-02 08:47:48.048956532 +0000 UTC m=+0.198977202 container remove 964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jones, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:47:48 np0005465604 systemd[1]: libpod-conmon-964e5da14f4620a9fecb0764bb9c444bbff83d676de4e50c9c1aa44187ec56e1.scope: Deactivated successfully.
Oct  2 04:47:48 np0005465604 nova_compute[260603]: 2025-10-02 08:47:48.067 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 262d5706-35bc-45eb-809b-217369a86015_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:48 np0005465604 podman[375287]: 2025-10-02 08:47:48.112741 +0000 UTC m=+0.059253977 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:47:48 np0005465604 podman[375282]: 2025-10-02 08:47:48.138687219 +0000 UTC m=+0.087202259 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 04:47:48 np0005465604 nova_compute[260603]: 2025-10-02 08:47:48.168 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] resizing rbd image 262d5706-35bc-45eb-809b-217369a86015_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:47:48 np0005465604 podman[375370]: 2025-10-02 08:47:48.224445392 +0000 UTC m=+0.039944276 container create 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:47:48 np0005465604 systemd[1]: Started libpod-conmon-21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60.scope.
Oct  2 04:47:48 np0005465604 nova_compute[260603]: 2025-10-02 08:47:48.272 2 DEBUG nova.objects.instance [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'migration_context' on Instance uuid 262d5706-35bc-45eb-809b-217369a86015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:47:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:47:48 np0005465604 nova_compute[260603]: 2025-10-02 08:47:48.298 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:47:48 np0005465604 nova_compute[260603]: 2025-10-02 08:47:48.298 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Ensure instance console log exists: /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:47:48 np0005465604 nova_compute[260603]: 2025-10-02 08:47:48.298 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:48 np0005465604 nova_compute[260603]: 2025-10-02 08:47:48.299 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:48 np0005465604 nova_compute[260603]: 2025-10-02 08:47:48.299 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69124acb057edb9f9d9e827ed35c5597b38609e0370b7fb880dadf3aaa8a5eec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69124acb057edb9f9d9e827ed35c5597b38609e0370b7fb880dadf3aaa8a5eec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69124acb057edb9f9d9e827ed35c5597b38609e0370b7fb880dadf3aaa8a5eec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69124acb057edb9f9d9e827ed35c5597b38609e0370b7fb880dadf3aaa8a5eec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:48 np0005465604 podman[375370]: 2025-10-02 08:47:48.207715001 +0000 UTC m=+0.023213905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:47:48 np0005465604 podman[375370]: 2025-10-02 08:47:48.316566833 +0000 UTC m=+0.132065727 container init 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:47:48 np0005465604 podman[375370]: 2025-10-02 08:47:48.323926853 +0000 UTC m=+0.139425737 container start 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:47:48 np0005465604 podman[375370]: 2025-10-02 08:47:48.327321638 +0000 UTC m=+0.142820552 container attach 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 04:47:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:48Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:15:11 10.100.0.12
Oct  2 04:47:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:48Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:15:11 10.100.0.12
Oct  2 04:47:48 np0005465604 nova_compute[260603]: 2025-10-02 08:47:48.686 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Successfully created port: f66c7ec4-73d7-4000-99de-ce8f1674d0ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]: {
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:    "0": [
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:        {
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "devices": [
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "/dev/loop3"
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            ],
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_name": "ceph_lv0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_size": "21470642176",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "name": "ceph_lv0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "tags": {
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.cluster_name": "ceph",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.crush_device_class": "",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.encrypted": "0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.osd_id": "0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.type": "block",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.vdo": "0"
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            },
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "type": "block",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "vg_name": "ceph_vg0"
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:        }
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:    ],
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:    "1": [
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:        {
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "devices": [
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "/dev/loop4"
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            ],
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_name": "ceph_lv1",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_size": "21470642176",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "name": "ceph_lv1",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "tags": {
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.cluster_name": "ceph",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.crush_device_class": "",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.encrypted": "0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.osd_id": "1",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.type": "block",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.vdo": "0"
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            },
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "type": "block",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "vg_name": "ceph_vg1"
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:        }
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:    ],
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:    "2": [
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:        {
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "devices": [
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "/dev/loop5"
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            ],
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_name": "ceph_lv2",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_size": "21470642176",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "name": "ceph_lv2",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "tags": {
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.cluster_name": "ceph",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.crush_device_class": "",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.encrypted": "0",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.osd_id": "2",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.type": "block",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:                "ceph.vdo": "0"
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            },
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "type": "block",
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:            "vg_name": "ceph_vg2"
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:        }
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]:    ]
Oct  2 04:47:49 np0005465604 vigilant_mclaren[375422]: }
Oct  2 04:47:49 np0005465604 systemd[1]: libpod-21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60.scope: Deactivated successfully.
Oct  2 04:47:49 np0005465604 podman[375370]: 2025-10-02 08:47:49.070388889 +0000 UTC m=+0.885887773 container died 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 04:47:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-69124acb057edb9f9d9e827ed35c5597b38609e0370b7fb880dadf3aaa8a5eec-merged.mount: Deactivated successfully.
Oct  2 04:47:49 np0005465604 podman[375370]: 2025-10-02 08:47:49.127968874 +0000 UTC m=+0.943467768 container remove 21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 04:47:49 np0005465604 systemd[1]: libpod-conmon-21b56dbb6aa62e6815deb05bec28f91d518ec395f853219817c1c75b94137a60.scope: Deactivated successfully.
Oct  2 04:47:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 208 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 106 op/s
Oct  2 04:47:49 np0005465604 nova_compute[260603]: 2025-10-02 08:47:49.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:49 np0005465604 nova_compute[260603]: 2025-10-02 08:47:49.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:49 np0005465604 podman[375589]: 2025-10-02 08:47:49.906975353 +0000 UTC m=+0.086553508 container create 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 04:47:49 np0005465604 podman[375589]: 2025-10-02 08:47:49.86160977 +0000 UTC m=+0.041187985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:47:49 np0005465604 systemd[1]: Started libpod-conmon-67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee.scope.
Oct  2 04:47:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:47:50 np0005465604 podman[375589]: 2025-10-02 08:47:50.027458309 +0000 UTC m=+0.207036464 container init 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:47:50 np0005465604 podman[375589]: 2025-10-02 08:47:50.039969799 +0000 UTC m=+0.219547934 container start 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:47:50 np0005465604 elegant_herschel[375606]: 167 167
Oct  2 04:47:50 np0005465604 systemd[1]: libpod-67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee.scope: Deactivated successfully.
Oct  2 04:47:50 np0005465604 podman[375589]: 2025-10-02 08:47:50.062919844 +0000 UTC m=+0.242498009 container attach 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:47:50 np0005465604 podman[375589]: 2025-10-02 08:47:50.064038859 +0000 UTC m=+0.243616994 container died 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:47:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-99868da62706d5862c7ab285abaaef153d1f3d27fa6ff00fd21e3e473a4e6d4b-merged.mount: Deactivated successfully.
Oct  2 04:47:50 np0005465604 podman[375589]: 2025-10-02 08:47:50.121944784 +0000 UTC m=+0.301522919 container remove 67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:47:50 np0005465604 systemd[1]: libpod-conmon-67d8c080859155a62863b006ac50f4d02cf1b85ab000c26bf59c19db42ff2eee.scope: Deactivated successfully.
Oct  2 04:47:50 np0005465604 podman[375629]: 2025-10-02 08:47:50.340826386 +0000 UTC m=+0.059895798 container create ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 04:47:50 np0005465604 podman[375629]: 2025-10-02 08:47:50.313640999 +0000 UTC m=+0.032710381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:47:50 np0005465604 systemd[1]: Started libpod-conmon-ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f.scope.
Oct  2 04:47:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:47:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3788657d3a069f301942b93bc2ec4e1849e15b05a04c2ca1ea0f156f34b7e18d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3788657d3a069f301942b93bc2ec4e1849e15b05a04c2ca1ea0f156f34b7e18d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3788657d3a069f301942b93bc2ec4e1849e15b05a04c2ca1ea0f156f34b7e18d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3788657d3a069f301942b93bc2ec4e1849e15b05a04c2ca1ea0f156f34b7e18d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:47:50 np0005465604 podman[375629]: 2025-10-02 08:47:50.477331441 +0000 UTC m=+0.196400913 container init ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct  2 04:47:50 np0005465604 podman[375629]: 2025-10-02 08:47:50.488685075 +0000 UTC m=+0.207754437 container start ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 04:47:50 np0005465604 podman[375629]: 2025-10-02 08:47:50.494966301 +0000 UTC m=+0.214035783 container attach ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 04:47:50 np0005465604 nova_compute[260603]: 2025-10-02 08:47:50.520 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Successfully updated port: f66c7ec4-73d7-4000-99de-ce8f1674d0ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:47:50 np0005465604 nova_compute[260603]: 2025-10-02 08:47:50.538 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:47:50 np0005465604 nova_compute[260603]: 2025-10-02 08:47:50.539 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquired lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:47:50 np0005465604 nova_compute[260603]: 2025-10-02 08:47:50.539 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:47:50 np0005465604 nova_compute[260603]: 2025-10-02 08:47:50.649 2 DEBUG nova.compute.manager [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received event network-changed-f66c7ec4-73d7-4000-99de-ce8f1674d0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:47:50 np0005465604 nova_compute[260603]: 2025-10-02 08:47:50.649 2 DEBUG nova.compute.manager [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Refreshing instance network info cache due to event network-changed-f66c7ec4-73d7-4000-99de-ce8f1674d0ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:47:50 np0005465604 nova_compute[260603]: 2025-10-02 08:47:50.649 2 DEBUG oslo_concurrency.lockutils [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:47:51 np0005465604 nova_compute[260603]: 2025-10-02 08:47:51.336 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:47:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 208 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 848 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]: {
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "osd_id": 2,
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "type": "bluestore"
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:    },
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "osd_id": 1,
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "type": "bluestore"
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:    },
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "osd_id": 0,
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:        "type": "bluestore"
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]:    }
Oct  2 04:47:51 np0005465604 zealous_franklin[375646]: }
Oct  2 04:47:51 np0005465604 systemd[1]: libpod-ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f.scope: Deactivated successfully.
Oct  2 04:47:51 np0005465604 podman[375629]: 2025-10-02 08:47:51.515289383 +0000 UTC m=+1.234358775 container died ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:47:51 np0005465604 systemd[1]: libpod-ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f.scope: Consumed 1.024s CPU time.
Oct  2 04:47:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3788657d3a069f301942b93bc2ec4e1849e15b05a04c2ca1ea0f156f34b7e18d-merged.mount: Deactivated successfully.
Oct  2 04:47:51 np0005465604 podman[375629]: 2025-10-02 08:47:51.699925017 +0000 UTC m=+1.418994389 container remove ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:47:51 np0005465604 systemd[1]: libpod-conmon-ebf4745bea6227d444a32e2496ace1bd6c73fd080da23c86e79f64f418a9001f.scope: Deactivated successfully.
Oct  2 04:47:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:47:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:47:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:47:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:47:51 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ab93ea77-0581-43bd-8acb-bdd586dde5da does not exist
Oct  2 04:47:51 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8e59717d-8c65-4e80-a33a-2482cc521d9b does not exist
Oct  2 04:47:52 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:47:52 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:47:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.421 2 DEBUG nova.network.neutron [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updating instance_info_cache with network_info: [{"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.444 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Releasing lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.445 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Instance network_info: |[{"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.446 2 DEBUG oslo_concurrency.lockutils [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.446 2 DEBUG nova.network.neutron [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Refreshing network info cache for port f66c7ec4-73d7-4000-99de-ce8f1674d0ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.455 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Start _get_guest_xml network_info=[{"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.462 2 WARNING nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:47:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 924 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.471 2 DEBUG nova.virt.libvirt.host [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.473 2 DEBUG nova.virt.libvirt.host [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.483 2 DEBUG nova.virt.libvirt.host [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.484 2 DEBUG nova.virt.libvirt.host [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.485 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.486 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.486 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.487 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.487 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.488 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.488 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.489 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.489 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.490 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.491 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.491 2 DEBUG nova.virt.hardware [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.496 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:47:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047987986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.952 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.991 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:53 np0005465604 nova_compute[260603]: 2025-10-02 08:47:53.997 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.223 2 INFO nova.compute.manager [None req-9ec27381-5145-4842-8cdb-d031083c76d8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Get console output#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.233 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:47:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:47:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3248806711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.434 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.437 2 DEBUG nova.virt.libvirt.vif [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=115,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-igjgp1fk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:47Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=262d5706-35bc-45eb-809b-217369a86015,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.438 2 DEBUG nova.network.os_vif_util [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.439 2 DEBUG nova.network.os_vif_util [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.441 2 DEBUG nova.objects.instance [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'pci_devices' on Instance uuid 262d5706-35bc-45eb-809b-217369a86015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.460 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  <uuid>262d5706-35bc-45eb-809b-217369a86015</uuid>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  <name>instance-00000073</name>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723</nova:name>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:47:53</nova:creationTime>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <nova:user uuid="3dd1e04a123f47aa8a6b835785a1c569">tempest-TestSecurityGroupsBasicOps-204807017-project-member</nova:user>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <nova:project uuid="7ef9cbc1b038423984a64b4674aa34ff">tempest-TestSecurityGroupsBasicOps-204807017</nova:project>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <nova:port uuid="f66c7ec4-73d7-4000-99de-ce8f1674d0ee">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <entry name="serial">262d5706-35bc-45eb-809b-217369a86015</entry>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <entry name="uuid">262d5706-35bc-45eb-809b-217369a86015</entry>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/262d5706-35bc-45eb-809b-217369a86015_disk">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/262d5706-35bc-45eb-809b-217369a86015_disk.config">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:cf:dd:2b"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <target dev="tapf66c7ec4-73"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/console.log" append="off"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:47:54 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:47:54 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:47:54 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:47:54 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.462 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Preparing to wait for external event network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.463 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.463 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.464 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.465 2 DEBUG nova.virt.libvirt.vif [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:47:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=115,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-igjgp1fk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:47:47Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=262d5706-35bc-45eb-809b-217369a86015,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.466 2 DEBUG nova.network.os_vif_util [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.467 2 DEBUG nova.network.os_vif_util [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.468 2 DEBUG os_vif [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.471 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66c7ec4-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf66c7ec4-73, col_values=(('external_ids', {'iface-id': 'f66c7ec4-73d7-4000-99de-ce8f1674d0ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:dd:2b', 'vm-uuid': '262d5706-35bc-45eb-809b-217369a86015'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:54 np0005465604 NetworkManager[45129]: <info>  [1759394874.4806] manager: (tapf66c7ec4-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.491 2 INFO os_vif [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73')#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.525 2 INFO nova.compute.manager [None req-5a2b120a-4a0a-4e5e-bab7-d231244045c0 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Pausing#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.526 2 DEBUG nova.objects.instance [None req-5a2b120a-4a0a-4e5e-bab7-d231244045c0 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'flavor' on Instance uuid 42e467eb-b532-4383-91dd-4c8e9f68328c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.545 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.546 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.546 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] No VIF found with MAC fa:16:3e:cf:dd:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.546 2 INFO nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Using config drive#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.572 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.604 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394874.600825, 42e467eb-b532-4383-91dd-4c8e9f68328c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.605 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.606 2 DEBUG nova.compute.manager [None req-5a2b120a-4a0a-4e5e-bab7-d231244045c0 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.645 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.648 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:47:54 np0005465604 nova_compute[260603]: 2025-10-02 08:47:54.702 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct  2 04:47:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct  2 04:47:55 np0005465604 nova_compute[260603]: 2025-10-02 08:47:55.675 2 DEBUG nova.network.neutron [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updated VIF entry in instance network info cache for port f66c7ec4-73d7-4000-99de-ce8f1674d0ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:47:55 np0005465604 nova_compute[260603]: 2025-10-02 08:47:55.676 2 DEBUG nova.network.neutron [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updating instance_info_cache with network_info: [{"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:47:55 np0005465604 nova_compute[260603]: 2025-10-02 08:47:55.696 2 DEBUG oslo_concurrency.lockutils [req-e4d7ca1b-cc2a-4333-a6dc-21f149e2b819 req-286a4bf2-a047-4155-be99-10f9240f8d46 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:47:55 np0005465604 nova_compute[260603]: 2025-10-02 08:47:55.960 2 INFO nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Creating config drive at /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config#033[00m
Oct  2 04:47:55 np0005465604 nova_compute[260603]: 2025-10-02 08:47:55.968 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpasia5lxx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.114 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpasia5lxx" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.136 2 DEBUG nova.storage.rbd_utils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] rbd image 262d5706-35bc-45eb-809b-217369a86015_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.139 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config 262d5706-35bc-45eb-809b-217369a86015_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.331 2 DEBUG oslo_concurrency.processutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config 262d5706-35bc-45eb-809b-217369a86015_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.333 2 INFO nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Deleting local config drive /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015/disk.config because it was imported into RBD.#033[00m
Oct  2 04:47:56 np0005465604 kernel: tapf66c7ec4-73: entered promiscuous mode
Oct  2 04:47:56 np0005465604 NetworkManager[45129]: <info>  [1759394876.3940] manager: (tapf66c7ec4-73): new Tun device (/org/freedesktop/NetworkManager/Devices/462)
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:56Z|01168|binding|INFO|Claiming lport f66c7ec4-73d7-4000-99de-ce8f1674d0ee for this chassis.
Oct  2 04:47:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:56Z|01169|binding|INFO|f66c7ec4-73d7-4000-99de-ce8f1674d0ee: Claiming fa:16:3e:cf:dd:2b 10.100.0.6
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.455 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:dd:2b 10.100.0.6'], port_security=['fa:16:3e:cf:dd:2b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '262d5706-35bc-45eb-809b-217369a86015', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '2', 'neutron:security_group_ids': '580c7ec0-94be-443b-88b7-53672ca4c5d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2af26aac-073a-413a-88d9-fc16235c9487, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f66c7ec4-73d7-4000-99de-ce8f1674d0ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.457 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f66c7ec4-73d7-4000-99de-ce8f1674d0ee in datapath 828d3558-d7f1-4d90-8e6f-bf6eff4d744e bound to our chassis#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.459 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 828d3558-d7f1-4d90-8e6f-bf6eff4d744e#033[00m
Oct  2 04:47:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:56Z|01170|binding|INFO|Setting lport f66c7ec4-73d7-4000-99de-ce8f1674d0ee ovn-installed in OVS
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:47:56Z|01171|binding|INFO|Setting lport f66c7ec4-73d7-4000-99de-ce8f1674d0ee up in Southbound
Oct  2 04:47:56 np0005465604 systemd-udevd[375879]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:56 np0005465604 systemd-machined[214636]: New machine qemu-144-instance-00000073.
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.478 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c87b5bcc-6d17-42ce-a3d0-991ae5f82e42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:56 np0005465604 systemd[1]: Started Virtual Machine qemu-144-instance-00000073.
Oct  2 04:47:56 np0005465604 NetworkManager[45129]: <info>  [1759394876.4929] device (tapf66c7ec4-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:47:56 np0005465604 NetworkManager[45129]: <info>  [1759394876.4943] device (tapf66c7ec4-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.510 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9abfe7-7d4a-4fa5-807a-1907d60fce57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.514 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fb48a8f7-b9ea-433e-8449-145c7c496fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.546 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[66586c2e-8f90-4675-ab83-fbb29066f7b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.565 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[00bb8a2b-845e-4d07-bce7-6621f6ccbdfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap828d3558-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:2f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571794, 'reachable_time': 28448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375893, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.587 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e37b9cca-0fd0-437a-95c0-a993b5041ea0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap828d3558-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571808, 'tstamp': 571808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375895, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap828d3558-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571811, 'tstamp': 571811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375895, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.589 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap828d3558-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.592 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap828d3558-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.593 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.593 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap828d3558-d0, col_values=(('external_ids', {'iface-id': '8765a176-985e-405d-870e-44dc7d3390b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.593 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.712 2 DEBUG nova.compute.manager [req-6ecfbcb0-a6cf-48c4-8abf-2b46af835111 req-9fe0ba69-7c54-49dd-9e72-66df95beff21 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received event network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.713 2 DEBUG oslo_concurrency.lockutils [req-6ecfbcb0-a6cf-48c4-8abf-2b46af835111 req-9fe0ba69-7c54-49dd-9e72-66df95beff21 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.713 2 DEBUG oslo_concurrency.lockutils [req-6ecfbcb0-a6cf-48c4-8abf-2b46af835111 req-9fe0ba69-7c54-49dd-9e72-66df95beff21 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.714 2 DEBUG oslo_concurrency.lockutils [req-6ecfbcb0-a6cf-48c4-8abf-2b46af835111 req-9fe0ba69-7c54-49dd-9e72-66df95beff21 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.714 2 DEBUG nova.compute.manager [req-6ecfbcb0-a6cf-48c4-8abf-2b46af835111 req-9fe0ba69-7c54-49dd-9e72-66df95beff21 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Processing event network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:47:56 np0005465604 nova_compute[260603]: 2025-10-02 08:47:56.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.794 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:47:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:47:56.795 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:47:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct  2 04:47:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.063 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394878.0632343, 262d5706-35bc-45eb-809b-217369a86015 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.064 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] VM Started (Lifecycle Event)#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.066 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.068 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.072 2 INFO nova.virt.libvirt.driver [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] Instance spawned successfully.#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.073 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.093 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.104 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.109 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.109 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.110 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.110 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.111 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.111 2 DEBUG nova.virt.libvirt.driver [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.134 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.134 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394878.0640945, 262d5706-35bc-45eb-809b-217369a86015 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.135 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.158 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.161 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394878.0682342, 262d5706-35bc-45eb-809b-217369a86015 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.161 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.169 2 INFO nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Took 10.57 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.170 2 DEBUG nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.193 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.198 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.218 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.227 2 INFO nova.compute.manager [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Took 11.57 seconds to build instance.#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.249 2 DEBUG oslo_concurrency.lockutils [None req-a9c02268-3d0c-4ab0-861a-d42a836345d7 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.519 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.522 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.545 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.559 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.559 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Image id 420393e6-d62b-4055-afb9-674967e2c2b0 yields fingerprint 55fe19af44c773772fc736fd085017e37d622236 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.559 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] image 420393e6-d62b-4055-afb9-674967e2c2b0 at (/var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236): checking#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.560 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] image 420393e6-d62b-4055-afb9-674967e2c2b0 at (/var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.561 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.562 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] 36a84233-3256-49b2-ae05-1569eb78b50f is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.562 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] 42e467eb-b532-4383-91dd-4c8e9f68328c is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.562 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] 262d5706-35bc-45eb-809b-217369a86015 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.562 2 WARNING nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.563 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Active base files: /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.563 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Removable base files: /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.563 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.564 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.564 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.564 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.564 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.871 2 DEBUG nova.compute.manager [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received event network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.872 2 DEBUG oslo_concurrency.lockutils [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.872 2 DEBUG oslo_concurrency.lockutils [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.873 2 DEBUG oslo_concurrency.lockutils [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.873 2 DEBUG nova.compute.manager [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] No waiting events found dispatching network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.873 2 WARNING nova.compute.manager [req-0a293d64-cbb3-41e9-b52d-2abb69b57890 req-6ebae71a-7b88-4389-b38a-6c57f373493a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received unexpected event network-vif-plugged-f66c7ec4-73d7-4000-99de-ce8f1674d0ee for instance with vm_state active and task_state None.#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.974 2 INFO nova.compute.manager [None req-e596b27b-850c-4bef-accc-8a451fc19782 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Get console output#033[00m
Oct  2 04:47:58 np0005465604 nova_compute[260603]: 2025-10-02 08:47:58.979 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:47:59 np0005465604 nova_compute[260603]: 2025-10-02 08:47:59.164 2 INFO nova.compute.manager [None req-a4e6974c-f2bc-427e-aeb3-0a344cb24f99 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Unpausing#033[00m
Oct  2 04:47:59 np0005465604 nova_compute[260603]: 2025-10-02 08:47:59.165 2 DEBUG nova.objects.instance [None req-a4e6974c-f2bc-427e-aeb3-0a344cb24f99 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'flavor' on Instance uuid 42e467eb-b532-4383-91dd-4c8e9f68328c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:47:59 np0005465604 nova_compute[260603]: 2025-10-02 08:47:59.204 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394879.2045522, 42e467eb-b532-4383-91dd-4c8e9f68328c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:47:59 np0005465604 nova_compute[260603]: 2025-10-02 08:47:59.205 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:47:59 np0005465604 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct  2 04:47:59 np0005465604 nova_compute[260603]: 2025-10-02 08:47:59.209 2 DEBUG nova.virt.libvirt.guest [None req-a4e6974c-f2bc-427e-aeb3-0a344cb24f99 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 04:47:59 np0005465604 nova_compute[260603]: 2025-10-02 08:47:59.210 2 DEBUG nova.compute.manager [None req-a4e6974c-f2bc-427e-aeb3-0a344cb24f99 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:59 np0005465604 nova_compute[260603]: 2025-10-02 08:47:59.220 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:47:59 np0005465604 nova_compute[260603]: 2025-10-02 08:47:59.223 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:47:59 np0005465604 nova_compute[260603]: 2025-10-02 08:47:59.254 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct  2 04:47:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.9 MiB/s wr, 87 op/s
Oct  2 04:47:59 np0005465604 nova_compute[260603]: 2025-10-02 08:47:59.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:47:59 np0005465604 nova_compute[260603]: 2025-10-02 08:47:59.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:00 np0005465604 nova_compute[260603]: 2025-10-02 08:48:00.135 2 INFO nova.compute.manager [None req-33c9c66d-08f1-4e56-9025-db9ae7300f6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Get console output#033[00m
Oct  2 04:48:00 np0005465604 nova_compute[260603]: 2025-10-02 08:48:00.142 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:48:00 np0005465604 nova_compute[260603]: 2025-10-02 08:48:00.933 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:00 np0005465604 nova_compute[260603]: 2025-10-02 08:48:00.934 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:00 np0005465604 nova_compute[260603]: 2025-10-02 08:48:00.935 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:00 np0005465604 nova_compute[260603]: 2025-10-02 08:48:00.935 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:00 np0005465604 nova_compute[260603]: 2025-10-02 08:48:00.935 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:00 np0005465604 nova_compute[260603]: 2025-10-02 08:48:00.937 2 INFO nova.compute.manager [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Terminating instance#033[00m
Oct  2 04:48:00 np0005465604 nova_compute[260603]: 2025-10-02 08:48:00.939 2 DEBUG nova.compute.manager [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:48:01 np0005465604 kernel: tap98d1bfd0-f1 (unregistering): left promiscuous mode
Oct  2 04:48:01 np0005465604 NetworkManager[45129]: <info>  [1759394881.0105] device (tap98d1bfd0-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.013 2 DEBUG nova.compute.manager [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.014 2 DEBUG nova.compute.manager [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing instance network info cache due to event network-changed-98d1bfd0-f123-48c7-a32d-bca6b92ab19d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.015 2 DEBUG oslo_concurrency.lockutils [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.015 2 DEBUG oslo_concurrency.lockutils [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.016 2 DEBUG nova.network.neutron [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Refreshing network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:48:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:01Z|01172|binding|INFO|Releasing lport 98d1bfd0-f123-48c7-a32d-bca6b92ab19d from this chassis (sb_readonly=0)
Oct  2 04:48:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:01Z|01173|binding|INFO|Setting lport 98d1bfd0-f123-48c7-a32d-bca6b92ab19d down in Southbound
Oct  2 04:48:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:01Z|01174|binding|INFO|Removing iface tap98d1bfd0-f1 ovn-installed in OVS
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.038 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:15:11 10.100.0.12'], port_security=['fa:16:3e:cf:15:11 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '42e467eb-b532-4383-91dd-4c8e9f68328c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8c633b73-9b65-49fc-b72d-a746e692a924', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6683c145-41ec-4e92-b711-415f2e27650f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=98d1bfd0-f123-48c7-a32d-bca6b92ab19d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.039 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d in datapath d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 unbound from our chassis#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.040 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.042 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[76db684d-8073-4173-8463-6502c2223e07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.043 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 namespace which is not needed anymore#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:01 np0005465604 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Deactivated successfully.
Oct  2 04:48:01 np0005465604 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Consumed 12.577s CPU time.
Oct  2 04:48:01 np0005465604 systemd-machined[214636]: Machine qemu-143-instance-00000072 terminated.
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.184 2 INFO nova.virt.libvirt.driver [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Instance destroyed successfully.#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.184 2 DEBUG nova.objects.instance [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 42e467eb-b532-4383-91dd-4c8e9f68328c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.197 2 DEBUG nova.virt.libvirt.vif [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:47:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-440315055',display_name='tempest-TestNetworkAdvancedServerOps-server-440315055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-440315055',id=114,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAL7Jay6VGgIbZM/uTD3le9XaJZEa2h1Gi+bt0pESVUIc4MgOYJNkH9P9hwcT+CLKqZhGrKxvTkkVtq+Eg9Dj0D8XlduRBRW3I1LeGVNjAVQ1naigScKF/Jy3yqgcndKBQ==',key_name='tempest-TestNetworkAdvancedServerOps-1091937656',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:47:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-l13jj000',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:47:59Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=42e467eb-b532-4383-91dd-4c8e9f68328c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.197 2 DEBUG nova.network.os_vif_util [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.198 2 DEBUG nova.network.os_vif_util [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.198 2 DEBUG os_vif [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.200 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98d1bfd0-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.207 2 INFO os_vif [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:15:11,bridge_name='br-int',has_traffic_filtering=True,id=98d1bfd0-f123-48c7-a32d-bca6b92ab19d,network=Network(d4ef4078-5ea1-4ac7-a0f4-0c2647248d03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98d1bfd0-f1')#033[00m
Oct  2 04:48:01 np0005465604 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [NOTICE]   (374526) : haproxy version is 2.8.14-c23fe91
Oct  2 04:48:01 np0005465604 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [NOTICE]   (374526) : path to executable is /usr/sbin/haproxy
Oct  2 04:48:01 np0005465604 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [ALERT]    (374526) : Current worker (374528) exited with code 143 (Terminated)
Oct  2 04:48:01 np0005465604 neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03[374522]: [WARNING]  (374526) : All workers exited. Exiting... (0)
Oct  2 04:48:01 np0005465604 systemd[1]: libpod-9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514.scope: Deactivated successfully.
Oct  2 04:48:01 np0005465604 podman[375962]: 2025-10-02 08:48:01.232183994 +0000 UTC m=+0.096637674 container died 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 04:48:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514-userdata-shm.mount: Deactivated successfully.
Oct  2 04:48:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay-02087fd71ae88db67d2279610b147edf86c03fa40d0a2813a2b3379bd99958ff-merged.mount: Deactivated successfully.
Oct  2 04:48:01 np0005465604 podman[375962]: 2025-10-02 08:48:01.348620623 +0000 UTC m=+0.213074303 container cleanup 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:48:01 np0005465604 systemd[1]: libpod-conmon-9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514.scope: Deactivated successfully.
Oct  2 04:48:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 246 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Oct  2 04:48:01 np0005465604 podman[376018]: 2025-10-02 08:48:01.483513646 +0000 UTC m=+0.112378993 container remove 9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.490 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f5f212-4e8b-4d49-92f2-06ffb10a5b89]: (4, ('Thu Oct  2 08:48:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 (9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514)\n9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514\nThu Oct  2 08:48:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 (9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514)\n9fb32d7e9bc3e7b3bf1f01f8f3374f028066074b835052b73f5b292db904e514\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.492 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[85641175-4e00-404a-bd79-9420d88c6620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.493 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4ef4078-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:01 np0005465604 kernel: tapd4ef4078-50: left promiscuous mode
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.513 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe04e92-e97b-4327-897a-b4506f291ddc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.544 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9276d924-f755-4249-82d7-68dba1531b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.546 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[90f6e09b-a16c-4a84-8592-50297e0c3561]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.563 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0456f3-05f8-452f-8ca2-fbd76e454401]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573410, 'reachable_time': 21827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376033, 'error': None, 'target': 'ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:01 np0005465604 systemd[1]: run-netns-ovnmeta\x2dd4ef4078\x2d5ea1\x2d4ac7\x2da0f4\x2d0c2647248d03.mount: Deactivated successfully.
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.568 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d4ef4078-5ea1-4ac7-a0f4-0c2647248d03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:48:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:01.568 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f6540b3b-4f6a-49fa-be99-eadb779b92c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.708 2 DEBUG nova.compute.manager [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-unplugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.708 2 DEBUG oslo_concurrency.lockutils [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.708 2 DEBUG oslo_concurrency.lockutils [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.708 2 DEBUG oslo_concurrency.lockutils [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.709 2 DEBUG nova.compute.manager [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] No waiting events found dispatching network-vif-unplugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:48:01 np0005465604 nova_compute[260603]: 2025-10-02 08:48:01.709 2 DEBUG nova.compute.manager [req-b5450ac9-e181-4fa5-8752-c9207ecb0b53 req-58e466ef-abb5-46c8-8be0-64436c8f1fc2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-unplugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:48:02 np0005465604 nova_compute[260603]: 2025-10-02 08:48:02.529 2 INFO nova.virt.libvirt.driver [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Deleting instance files /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c_del#033[00m
Oct  2 04:48:02 np0005465604 nova_compute[260603]: 2025-10-02 08:48:02.530 2 INFO nova.virt.libvirt.driver [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Deletion of /var/lib/nova/instances/42e467eb-b532-4383-91dd-4c8e9f68328c_del complete#033[00m
Oct  2 04:48:02 np0005465604 nova_compute[260603]: 2025-10-02 08:48:02.594 2 INFO nova.compute.manager [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Took 1.65 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:48:02 np0005465604 nova_compute[260603]: 2025-10-02 08:48:02.595 2 DEBUG oslo.service.loopingcall [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:48:02 np0005465604 nova_compute[260603]: 2025-10-02 08:48:02.595 2 DEBUG nova.compute.manager [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:48:02 np0005465604 nova_compute[260603]: 2025-10-02 08:48:02.595 2 DEBUG nova.network.neutron [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:48:02 np0005465604 nova_compute[260603]: 2025-10-02 08:48:02.824 2 DEBUG nova.network.neutron [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updated VIF entry in instance network info cache for port 98d1bfd0-f123-48c7-a32d-bca6b92ab19d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:48:02 np0005465604 nova_compute[260603]: 2025-10-02 08:48:02.824 2 DEBUG nova.network.neutron [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updating instance_info_cache with network_info: [{"id": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "address": "fa:16:3e:cf:15:11", "network": {"id": "d4ef4078-5ea1-4ac7-a0f4-0c2647248d03", "bridge": "br-int", "label": "tempest-network-smoke--511203014", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98d1bfd0-f1", "ovs_interfaceid": "98d1bfd0-f123-48c7-a32d-bca6b92ab19d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:48:02 np0005465604 nova_compute[260603]: 2025-10-02 08:48:02.852 2 DEBUG oslo_concurrency.lockutils [req-f1bba362-6ff3-45c4-928e-ab1d932e6d58 req-683833e1-f981-424a-912b-09ffe3effa3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-42e467eb-b532-4383-91dd-4c8e9f68328c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:48:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.096 2 DEBUG nova.compute.manager [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received event network-changed-f66c7ec4-73d7-4000-99de-ce8f1674d0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.097 2 DEBUG nova.compute.manager [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Refreshing instance network info cache due to event network-changed-f66c7ec4-73d7-4000-99de-ce8f1674d0ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.098 2 DEBUG oslo_concurrency.lockutils [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.098 2 DEBUG oslo_concurrency.lockutils [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.099 2 DEBUG nova.network.neutron [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Refreshing network info cache for port f66c7ec4-73d7-4000-99de-ce8f1674d0ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:48:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 167 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 134 op/s
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.812 2 DEBUG nova.compute.manager [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.813 2 DEBUG oslo_concurrency.lockutils [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.813 2 DEBUG oslo_concurrency.lockutils [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.814 2 DEBUG oslo_concurrency.lockutils [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.814 2 DEBUG nova.compute.manager [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] No waiting events found dispatching network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.814 2 WARNING nova.compute.manager [req-be5404dd-7bbd-4d48-bdce-16a0ae51f275 req-ed9b69be-7ec6-437a-b5b4-fa309af72daf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received unexpected event network-vif-plugged-98d1bfd0-f123-48c7-a32d-bca6b92ab19d for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.965 2 DEBUG nova.network.neutron [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:48:03 np0005465604 nova_compute[260603]: 2025-10-02 08:48:03.992 2 INFO nova.compute.manager [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Took 1.40 seconds to deallocate network for instance.#033[00m
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.038 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.038 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.149 2 DEBUG oslo_concurrency.processutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:48:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3754006147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.626 2 DEBUG oslo_concurrency.processutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.633 2 DEBUG nova.compute.provider_tree [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.653 2 DEBUG nova.scheduler.client.report [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.676 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.728 2 INFO nova.scheduler.client.report [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Deleted allocations for instance 42e467eb-b532-4383-91dd-4c8e9f68328c#033[00m
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.795 2 DEBUG oslo_concurrency.lockutils [None req-331991ac-df11-40f1-a54d-71e23a78bb0e 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "42e467eb-b532-4383-91dd-4c8e9f68328c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.997 2 DEBUG nova.network.neutron [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updated VIF entry in instance network info cache for port f66c7ec4-73d7-4000-99de-ce8f1674d0ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:48:04 np0005465604 nova_compute[260603]: 2025-10-02 08:48:04.998 2 DEBUG nova.network.neutron [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updating instance_info_cache with network_info: [{"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:48:05 np0005465604 nova_compute[260603]: 2025-10-02 08:48:05.024 2 DEBUG oslo_concurrency.lockutils [req-b4fc4131-d25c-44a7-8f29-8ddd1488f4c0 req-b7ac1ff4-6472-451c-a265-3727e4008a26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-262d5706-35bc-45eb-809b-217369a86015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:48:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 167 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 99 op/s
Oct  2 04:48:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:05.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:06 np0005465604 nova_compute[260603]: 2025-10-02 08:48:06.020 2 DEBUG nova.compute.manager [req-ce18ae08-d95c-4958-b2dc-fbcf2d35b8b0 req-fda6b97e-b93d-4724-95ec-bc081a4518b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Received event network-vif-deleted-98d1bfd0-f123-48c7-a32d-bca6b92ab19d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:06 np0005465604 nova_compute[260603]: 2025-10-02 08:48:06.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 167 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 102 op/s
Oct  2 04:48:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:09Z|01175|binding|INFO|Releasing lport 8765a176-985e-405d-870e-44dc7d3390b4 from this chassis (sb_readonly=0)
Oct  2 04:48:09 np0005465604 nova_compute[260603]: 2025-10-02 08:48:09.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 102 op/s
Oct  2 04:48:09 np0005465604 nova_compute[260603]: 2025-10-02 08:48:09.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:11 np0005465604 podman[376059]: 2025-10-02 08:48:11.032976459 +0000 UTC m=+0.092093151 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:48:11 np0005465604 podman[376058]: 2025-10-02 08:48:11.052701744 +0000 UTC m=+0.121769886 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:48:11 np0005465604 nova_compute[260603]: 2025-10-02 08:48:11.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.8 KiB/s wr, 96 op/s
Oct  2 04:48:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:12Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:dd:2b 10.100.0.6
Oct  2 04:48:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:12Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:dd:2b 10.100.0.6
Oct  2 04:48:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 160 op/s
Oct  2 04:48:14 np0005465604 nova_compute[260603]: 2025-10-02 08:48:14.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:15 np0005465604 nova_compute[260603]: 2025-10-02 08:48:15.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 04:48:16 np0005465604 nova_compute[260603]: 2025-10-02 08:48:16.184 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394881.182366, 42e467eb-b532-4383-91dd-4c8e9f68328c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:48:16 np0005465604 nova_compute[260603]: 2025-10-02 08:48:16.185 2 INFO nova.compute.manager [-] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] VM Stopped (Lifecycle Event)
Oct  2 04:48:16 np0005465604 nova_compute[260603]: 2025-10-02 08:48:16.222 2 DEBUG nova.compute.manager [None req-b062d4bc-de0d-48c5-9350-761caf36b850 - - - - - -] [instance: 42e467eb-b532-4383-91dd-4c8e9f68328c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:48:16 np0005465604 nova_compute[260603]: 2025-10-02 08:48:16.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct  2 04:48:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:18 np0005465604 podman[376099]: 2025-10-02 08:48:18.977529588 +0000 UTC m=+0.050405873 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 04:48:18 np0005465604 podman[376100]: 2025-10-02 08:48:18.977612471 +0000 UTC m=+0.047554473 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 04:48:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 04:48:19 np0005465604 nova_compute[260603]: 2025-10-02 08:48:19.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:19 np0005465604 nova_compute[260603]: 2025-10-02 08:48:19.703 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:48:19 np0005465604 nova_compute[260603]: 2025-10-02 08:48:19.704 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:48:19 np0005465604 nova_compute[260603]: 2025-10-02 08:48:19.704 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "262d5706-35bc-45eb-809b-217369a86015-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:48:19 np0005465604 nova_compute[260603]: 2025-10-02 08:48:19.704 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:48:19 np0005465604 nova_compute[260603]: 2025-10-02 08:48:19.705 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:48:19 np0005465604 nova_compute[260603]: 2025-10-02 08:48:19.706 2 INFO nova.compute.manager [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Terminating instance
Oct  2 04:48:19 np0005465604 nova_compute[260603]: 2025-10-02 08:48:19.708 2 DEBUG nova.compute.manager [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:48:19 np0005465604 kernel: tapf66c7ec4-73 (unregistering): left promiscuous mode
Oct  2 04:48:19 np0005465604 NetworkManager[45129]: <info>  [1759394899.8899] device (tapf66c7ec4-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:48:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:19Z|01176|binding|INFO|Releasing lport f66c7ec4-73d7-4000-99de-ce8f1674d0ee from this chassis (sb_readonly=0)
Oct  2 04:48:19 np0005465604 nova_compute[260603]: 2025-10-02 08:48:19.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:19Z|01177|binding|INFO|Setting lport f66c7ec4-73d7-4000-99de-ce8f1674d0ee down in Southbound
Oct  2 04:48:19 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:19Z|01178|binding|INFO|Removing iface tapf66c7ec4-73 ovn-installed in OVS
Oct  2 04:48:19 np0005465604 nova_compute[260603]: 2025-10-02 08:48:19.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.909 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:dd:2b 10.100.0.6'], port_security=['fa:16:3e:cf:dd:2b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '262d5706-35bc-45eb-809b-217369a86015', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7705270d-7f1f-45f4-8a90-d1c36237d9ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2af26aac-073a-413a-88d9-fc16235c9487, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=f66c7ec4-73d7-4000-99de-ce8f1674d0ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:48:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.911 162357 INFO neutron.agent.ovn.metadata.agent [-] Port f66c7ec4-73d7-4000-99de-ce8f1674d0ee in datapath 828d3558-d7f1-4d90-8e6f-bf6eff4d744e unbound from our chassis
Oct  2 04:48:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.914 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 828d3558-d7f1-4d90-8e6f-bf6eff4d744e
Oct  2 04:48:19 np0005465604 nova_compute[260603]: 2025-10-02 08:48:19.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.939 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e11d49fa-ec98-4ccb-96e8-263dcc63f573]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:48:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.984 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8200707b-2fa0-482d-a6e9-51658e5dc193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:48:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:19.988 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[32a64ec6-cc98-4f5f-953d-8f4e9b66cb85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:20 np0005465604 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct  2 04:48:20 np0005465604 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Consumed 14.596s CPU time.
Oct  2 04:48:20 np0005465604 systemd-machined[214636]: Machine qemu-144-instance-00000073 terminated.
Oct  2 04:48:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.030 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3630da86-e508-416c-89d3-cc141673b66e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:48:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.062 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[af8390a5-b8ac-48d2-88e8-2f234553294a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap828d3558-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:2f:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571794, 'reachable_time': 26922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376149, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.088 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7e17f7-b3a0-41c4-a89f-3cb1e0967cca]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap828d3558-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571808, 'tstamp': 571808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376150, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap828d3558-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571811, 'tstamp': 571811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376150, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.091 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap828d3558-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.102 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap828d3558-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:48:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.103 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:48:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.103 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap828d3558-d0, col_values=(('external_ids', {'iface-id': '8765a176-985e-405d-870e-44dc7d3390b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:48:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:20.104 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.154 2 INFO nova.virt.libvirt.driver [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] Instance destroyed successfully.
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.154 2 DEBUG nova.objects.instance [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid 262d5706-35bc-45eb-809b-217369a86015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.182 2 DEBUG nova.virt.libvirt.vif [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:47:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-gen-1-1512666723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-gen',id=115,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:47:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-igjgp1fk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:47:58Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=262d5706-35bc-45eb-809b-217369a86015,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.183 2 DEBUG nova.network.os_vif_util [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "address": "fa:16:3e:cf:dd:2b", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66c7ec4-73", "ovs_interfaceid": "f66c7ec4-73d7-4000-99de-ce8f1674d0ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.183 2 DEBUG nova.network.os_vif_util [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.184 2 DEBUG os_vif [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.186 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66c7ec4-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:20 np0005465604 nova_compute[260603]: 2025-10-02 08:48:20.192 2 INFO os_vif [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:dd:2b,bridge_name='br-int',has_traffic_filtering=True,id=f66c7ec4-73d7-4000-99de-ce8f1674d0ee,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66c7ec4-73')#033[00m
Oct  2 04:48:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 200 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 04:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 34K writes, 136K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s#012Cumulative WAL: 34K writes, 12K syncs, 2.76 writes per sync, written: 0.13 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4613 writes, 18K keys, 4613 commit groups, 1.0 writes per commit group, ingest: 20.20 MB, 0.03 MB/s#012Interval WAL: 4613 writes, 1808 syncs, 2.55 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:48:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859926937' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:48:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859926937' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:48:22 np0005465604 nova_compute[260603]: 2025-10-02 08:48:22.560 2 INFO nova.virt.libvirt.driver [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Deleting instance files /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015_del#033[00m
Oct  2 04:48:22 np0005465604 nova_compute[260603]: 2025-10-02 08:48:22.561 2 INFO nova.virt.libvirt.driver [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Deletion of /var/lib/nova/instances/262d5706-35bc-45eb-809b-217369a86015_del complete#033[00m
Oct  2 04:48:22 np0005465604 nova_compute[260603]: 2025-10-02 08:48:22.614 2 INFO nova.compute.manager [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Took 2.91 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:48:22 np0005465604 nova_compute[260603]: 2025-10-02 08:48:22.615 2 DEBUG oslo.service.loopingcall [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:48:22 np0005465604 nova_compute[260603]: 2025-10-02 08:48:22.615 2 DEBUG nova.compute.manager [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:48:22 np0005465604 nova_compute[260603]: 2025-10-02 08:48:22.615 2 DEBUG nova.network.neutron [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct  2 04:48:23 np0005465604 nova_compute[260603]: 2025-10-02 08:48:23.720 2 DEBUG nova.network.neutron [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:48:23 np0005465604 nova_compute[260603]: 2025-10-02 08:48:23.741 2 INFO nova.compute.manager [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] Took 1.13 seconds to deallocate network for instance.#033[00m
Oct  2 04:48:23 np0005465604 nova_compute[260603]: 2025-10-02 08:48:23.810 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:23 np0005465604 nova_compute[260603]: 2025-10-02 08:48:23.811 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:23 np0005465604 nova_compute[260603]: 2025-10-02 08:48:23.936 2 DEBUG nova.compute.manager [req-6580af2b-f44b-4c91-9aed-df03b7a27e2b req-7ac875a6-217d-4c66-bcdc-8b9448c49860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 262d5706-35bc-45eb-809b-217369a86015] Received event network-vif-deleted-f66c7ec4-73d7-4000-99de-ce8f1674d0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:23 np0005465604 nova_compute[260603]: 2025-10-02 08:48:23.939 2 DEBUG oslo_concurrency.processutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:48:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785040472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.417 2 DEBUG oslo_concurrency.processutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.424 2 DEBUG nova.compute.provider_tree [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.442 2 DEBUG nova.scheduler.client.report [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.467 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.512 2 INFO nova.scheduler.client.report [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance 262d5706-35bc-45eb-809b-217369a86015#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.516 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.517 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.539 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.564 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.565 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.678 2 DEBUG oslo_concurrency.lockutils [None req-a5e53291-baee-442b-9a24-a6af4bc9806c 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "262d5706-35bc-45eb-809b-217369a86015" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.696 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.696 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.706 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.707 2 INFO nova.compute.claims [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:48:24 np0005465604 nova_compute[260603]: 2025-10-02 08:48:24.847 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:48:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1436378918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.374 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.383 2 DEBUG nova.compute.provider_tree [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.425 2 DEBUG nova.scheduler.client.report [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.465 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.467 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:48:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.534 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.534 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.555 2 INFO nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.584 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.697 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.699 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.699 2 INFO nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Creating image(s)#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.725 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.751 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.777 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.782 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.872 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.873 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.874 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.874 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.898 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:48:25 np0005465604 nova_compute[260603]: 2025-10-02 08:48:25.902 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 dcf324a5-8e22-40e4-8f75-469fe7a04756_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:26 np0005465604 nova_compute[260603]: 2025-10-02 08:48:26.217 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 dcf324a5-8e22-40e4-8f75-469fe7a04756_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:26 np0005465604 nova_compute[260603]: 2025-10-02 08:48:26.308 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:48:26 np0005465604 nova_compute[260603]: 2025-10-02 08:48:26.437 2 DEBUG nova.objects.instance [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid dcf324a5-8e22-40e4-8f75-469fe7a04756 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:48:26 np0005465604 nova_compute[260603]: 2025-10-02 08:48:26.442 2 DEBUG nova.policy [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7767630a5b1049f48d7e0fed29e221ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c86b416fdb524f21b0228639a3a14116', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 37K writes, 141K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s#012Cumulative WAL: 37K writes, 13K syncs, 2.75 writes per sync, written: 0.13 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5854 writes, 21K keys, 5854 commit groups, 1.0 writes per commit group, ingest: 22.85 MB, 0.04 MB/s#012Interval WAL: 5854 writes, 2385 syncs, 2.45 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:48:26 np0005465604 nova_compute[260603]: 2025-10-02 08:48:26.528 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:48:26 np0005465604 nova_compute[260603]: 2025-10-02 08:48:26.528 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Ensure instance console log exists: /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:48:26 np0005465604 nova_compute[260603]: 2025-10-02 08:48:26.529 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:26 np0005465604 nova_compute[260603]: 2025-10-02 08:48:26.529 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:26 np0005465604 nova_compute[260603]: 2025-10-02 08:48:26.529 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 127 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 215 KiB/s wr, 30 op/s
Oct  2 04:48:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:48:28
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'backups', '.mgr']
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.429 2 DEBUG nova.compute.manager [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.430 2 DEBUG nova.compute.manager [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing instance network info cache due to event network-changed-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.430 2 DEBUG oslo_concurrency.lockutils [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.430 2 DEBUG oslo_concurrency.lockutils [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.431 2 DEBUG nova.network.neutron [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Refreshing network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.592 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.592 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.593 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.593 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.594 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.596 2 INFO nova.compute.manager [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Terminating instance#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.598 2 DEBUG nova.compute.manager [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:48:28 np0005465604 kernel: tapd1e15184-f1 (unregistering): left promiscuous mode
Oct  2 04:48:28 np0005465604 NetworkManager[45129]: <info>  [1759394908.8122] device (tapd1e15184-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:28Z|01179|binding|INFO|Releasing lport d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 from this chassis (sb_readonly=0)
Oct  2 04:48:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:28Z|01180|binding|INFO|Setting lport d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 down in Southbound
Oct  2 04:48:28 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:28Z|01181|binding|INFO|Removing iface tapd1e15184-f1 ovn-installed in OVS
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:28.835 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:d6:a0 10.100.0.7'], port_security=['fa:16:3e:5e:d6:a0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '36a84233-3256-49b2-ae05-1569eb78b50f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ef9cbc1b038423984a64b4674aa34ff', 'neutron:revision_number': '4', 'neutron:security_group_ids': '57638211-3760-47a5-8fdb-d4470031f4cf 580c7ec0-94be-443b-88b7-53672ca4c5d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2af26aac-073a-413a-88d9-fc16235c9487, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:48:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:28.837 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 in datapath 828d3558-d7f1-4d90-8e6f-bf6eff4d744e unbound from our chassis#033[00m
Oct  2 04:48:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:28.838 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 828d3558-d7f1-4d90-8e6f-bf6eff4d744e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:48:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:28.839 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[38ba11c1-ea1f-4f1a-b1f9-db377a3f2892]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:28.840 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e namespace which is not needed anymore#033[00m
Oct  2 04:48:28 np0005465604 nova_compute[260603]: 2025-10-02 08:48:28.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:28 np0005465604 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct  2 04:48:28 np0005465604 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Consumed 15.690s CPU time.
Oct  2 04:48:28 np0005465604 systemd-machined[214636]: Machine qemu-142-instance-00000071 terminated.
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.044 2 INFO nova.virt.libvirt.driver [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Instance destroyed successfully.#033[00m
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.045 2 DEBUG nova.objects.instance [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lazy-loading 'resources' on Instance uuid 36a84233-3256-49b2-ae05-1569eb78b50f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.064 2 DEBUG nova.virt.libvirt.vif [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:47:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-204807017-access_point-2101904742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-204807017-acc',id=113,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIUeT37zgtyGodwUyb3iGY7HM/3FmTKFRI0uJBAJFSKjatvrWQTObfDMZE8YBSZZGIkssGkh11uDOpS/CEO9ZlzyZpSQ7kHVbROBWfktxGuem67Vh+IUVlFakqZKCOP1Eg==',key_name='tempest-TestSecurityGroupsBasicOps-112352132',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:47:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ef9cbc1b038423984a64b4674aa34ff',ramdisk_id='',reservation_id='r-cn883z4w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-204807017',owner_user_name='tempest-TestSecurityGroupsBasicOps-204807017-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:47:20Z,user_data=None,user_id='3dd1e04a123f47aa8a6b835785a1c569',uuid=36a84233-3256-49b2-ae05-1569eb78b50f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.065 2 DEBUG nova.network.os_vif_util [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converting VIF {"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.066 2 DEBUG nova.network.os_vif_util [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.066 2 DEBUG os_vif [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1e15184-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:48:29 np0005465604 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [NOTICE]   (374061) : haproxy version is 2.8.14-c23fe91
Oct  2 04:48:29 np0005465604 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [NOTICE]   (374061) : path to executable is /usr/sbin/haproxy
Oct  2 04:48:29 np0005465604 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [WARNING]  (374061) : Exiting Master process...
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.109 2 INFO os_vif [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:d6:a0,bridge_name='br-int',has_traffic_filtering=True,id=d1e15184-f166-48f4-a04d-1c6fa2a9f2a2,network=Network(828d3558-d7f1-4d90-8e6f-bf6eff4d744e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e15184-f1')#033[00m
Oct  2 04:48:29 np0005465604 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [ALERT]    (374061) : Current worker (374063) exited with code 143 (Terminated)
Oct  2 04:48:29 np0005465604 neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e[374057]: [WARNING]  (374061) : All workers exited. Exiting... (0)
Oct  2 04:48:29 np0005465604 systemd[1]: libpod-ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985.scope: Deactivated successfully.
Oct  2 04:48:29 np0005465604 podman[376417]: 2025-10-02 08:48:29.119679702 +0000 UTC m=+0.139521300 container died ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 04:48:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-09d31a5b55753d611210a316cb1a0d76d716a9026841d73fcc07cd5dc9fe4ce7-merged.mount: Deactivated successfully.
Oct  2 04:48:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985-userdata-shm.mount: Deactivated successfully.
Oct  2 04:48:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.561 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Successfully created port: a488a1b0-7749-499c-971f-662c8a9ee29b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:48:29 np0005465604 podman[376417]: 2025-10-02 08:48:29.580112062 +0000 UTC m=+0.599953660 container cleanup ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 04:48:29 np0005465604 systemd[1]: libpod-conmon-ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985.scope: Deactivated successfully.
Oct  2 04:48:29 np0005465604 podman[376477]: 2025-10-02 08:48:29.886959637 +0000 UTC m=+0.272862546 container remove ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:48:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.897 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3a46b3-ae57-44c4-baac-5eacf027ad52]: (4, ('Thu Oct  2 08:48:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e (ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985)\nef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985\nThu Oct  2 08:48:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e (ef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985)\nef2b0ea31911c97bee25a94b757101425530be32a62f9c304e1893daa5c9d985\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.900 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22b81870-e924-483c-8341-26eae47a9223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.901 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap828d3558-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:29 np0005465604 kernel: tap828d3558-d0: left promiscuous mode
Oct  2 04:48:29 np0005465604 nova_compute[260603]: 2025-10-02 08:48:29.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.939 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[21d9573a-74bd-4f31-9b33-c0aa561e8ee9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.985 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f90a738f-6234-4536-ad62-8557e0d5df91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:29.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[17192e48-24a6-4046-976b-5ed11c176890]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:30.015 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[826f24ea-4fce-4eaf-acc8-bd713ae7fada]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571784, 'reachable_time': 31758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376492, 'error': None, 'target': 'ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:30.019 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-828d3558-d7f1-4d90-8e6f-bf6eff4d744e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:48:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:30.019 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[6610980e-9ead-4b32-969e-7805e4fce4da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:30 np0005465604 systemd[1]: run-netns-ovnmeta\x2d828d3558\x2dd7f1\x2d4d90\x2d8e6f\x2dbf6eff4d744e.mount: Deactivated successfully.
Oct  2 04:48:30 np0005465604 nova_compute[260603]: 2025-10-02 08:48:30.074 2 DEBUG nova.compute.manager [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-unplugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:30 np0005465604 nova_compute[260603]: 2025-10-02 08:48:30.075 2 DEBUG oslo_concurrency.lockutils [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:30 np0005465604 nova_compute[260603]: 2025-10-02 08:48:30.075 2 DEBUG oslo_concurrency.lockutils [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:30 np0005465604 nova_compute[260603]: 2025-10-02 08:48:30.076 2 DEBUG oslo_concurrency.lockutils [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:30 np0005465604 nova_compute[260603]: 2025-10-02 08:48:30.076 2 DEBUG nova.compute.manager [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] No waiting events found dispatching network-vif-unplugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:48:30 np0005465604 nova_compute[260603]: 2025-10-02 08:48:30.076 2 DEBUG nova.compute.manager [req-c44bb5ed-aa16-4f8a-b733-02aae0c74077 req-0a1ac4c8-1b5e-4e9b-ad6a-8b74ad820d47 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-unplugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:48:30 np0005465604 nova_compute[260603]: 2025-10-02 08:48:30.437 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Successfully updated port: a488a1b0-7749-499c-971f-662c8a9ee29b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:48:30 np0005465604 nova_compute[260603]: 2025-10-02 08:48:30.450 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:48:30 np0005465604 nova_compute[260603]: 2025-10-02 08:48:30.450 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:48:30 np0005465604 nova_compute[260603]: 2025-10-02 08:48:30.450 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:48:30 np0005465604 nova_compute[260603]: 2025-10-02 08:48:30.662 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.072 2 DEBUG nova.compute.manager [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.073 2 DEBUG nova.compute.manager [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing instance network info cache due to event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.074 2 DEBUG oslo_concurrency.lockutils [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.355 2 DEBUG nova.network.neutron [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updated VIF entry in instance network info cache for port d1e15184-f166-48f4-a04d-1c6fa2a9f2a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.356 2 DEBUG nova.network.neutron [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updating instance_info_cache with network_info: [{"id": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "address": "fa:16:3e:5e:d6:a0", "network": {"id": "828d3558-d7f1-4d90-8e6f-bf6eff4d744e", "bridge": "br-int", "label": "tempest-network-smoke--743596258", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ef9cbc1b038423984a64b4674aa34ff", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e15184-f1", "ovs_interfaceid": "d1e15184-f166-48f4-a04d-1c6fa2a9f2a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.391 2 DEBUG oslo_concurrency.lockutils [req-6207316d-aac9-4eb6-99cd-83452b7e1604 req-661ddb82-4f33-4857-bd1d-6a7aae66edeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-36a84233-3256-49b2-ae05-1569eb78b50f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 29K writes, 111K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 29K writes, 10K syncs, 2.77 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3746 writes, 14K keys, 3746 commit groups, 1.0 writes per commit group, ingest: 15.33 MB, 0.03 MB/s#012Interval WAL: 3746 writes, 1490 syncs, 2.51 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.441 2 INFO nova.virt.libvirt.driver [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Deleting instance files /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f_del#033[00m
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.442 2 INFO nova.virt.libvirt.driver [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Deletion of /var/lib/nova/instances/36a84233-3256-49b2-ae05-1569eb78b50f_del complete#033[00m
Oct  2 04:48:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 167 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.623 2 INFO nova.compute.manager [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Took 3.02 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.624 2 DEBUG oslo.service.loopingcall [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.624 2 DEBUG nova.compute.manager [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:48:31 np0005465604 nova_compute[260603]: 2025-10-02 08:48:31.624 2 DEBUG nova.network.neutron [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.162 2 DEBUG nova.compute.manager [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.163 2 DEBUG oslo_concurrency.lockutils [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.163 2 DEBUG oslo_concurrency.lockutils [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.164 2 DEBUG oslo_concurrency.lockutils [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.164 2 DEBUG nova.compute.manager [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] No waiting events found dispatching network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.164 2 WARNING nova.compute.manager [req-060ae9fc-f276-4ea2-a17d-c919681e4db8 req-6c8dfa40-f004-45b9-a4e5-e037c78c674b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received unexpected event network-vif-plugged-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:48:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.501 2 DEBUG nova.network.neutron [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.532 2 INFO nova.compute.manager [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Took 0.91 seconds to deallocate network for instance.#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.548 2 DEBUG nova.network.neutron [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.586 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.586 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.587 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.587 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance network_info: |[{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.588 2 DEBUG oslo_concurrency.lockutils [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.588 2 DEBUG nova.network.neutron [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.592 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Start _get_guest_xml network_info=[{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.601 2 WARNING nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.605 2 DEBUG nova.virt.libvirt.host [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.605 2 DEBUG nova.virt.libvirt.host [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.612 2 DEBUG nova.virt.libvirt.host [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.612 2 DEBUG nova.virt.libvirt.host [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.613 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.613 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.613 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.614 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.614 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.614 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.614 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.614 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.615 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.615 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.615 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.615 2 DEBUG nova.virt.hardware [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.618 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:32 np0005465604 nova_compute[260603]: 2025-10-02 08:48:32.686 2 DEBUG oslo_concurrency.processutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:48:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1259085605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.057 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.090 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.095 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:48:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1406254739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.189 2 DEBUG oslo_concurrency.processutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.199 2 DEBUG nova.compute.provider_tree [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.221 2 DEBUG nova.scheduler.client.report [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.248 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.269 2 INFO nova.scheduler.client.report [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Deleted allocations for instance 36a84233-3256-49b2-ae05-1569eb78b50f#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.338 2 DEBUG oslo_concurrency.lockutils [None req-69fffe9a-03a6-43ea-b2b5-afec6a29a7be 3dd1e04a123f47aa8a6b835785a1c569 7ef9cbc1b038423984a64b4674aa34ff - - default default] Lock "36a84233-3256-49b2-ae05-1569eb78b50f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct  2 04:48:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:48:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1220778805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.544 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.546 2 DEBUG nova.virt.libvirt.vif [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:48:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-457680791',display_name='tempest-TestNetworkAdvancedServerOps-server-457680791',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-457680791',id=116,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLWb168hMylok2hbHIreuvgrfJq9EYhmSfhH2YBEWlnwrl2sEeWEt1hD3O/DPiepMH4x8+byvmbAISpkoWCoZjQ5z/Keocqhs6SeEf4SxPYxes1ihT4KVXi2eUj3jZBOrQ==',key_name='tempest-TestNetworkAdvancedServerOps-74461562',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-mqv16bk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:48:25Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=dcf324a5-8e22-40e4-8f75-469fe7a04756,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.547 2 DEBUG nova.network.os_vif_util [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.548 2 DEBUG nova.network.os_vif_util [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.551 2 DEBUG nova.objects.instance [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid dcf324a5-8e22-40e4-8f75-469fe7a04756 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.574 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  <uuid>dcf324a5-8e22-40e4-8f75-469fe7a04756</uuid>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  <name>instance-00000074</name>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-457680791</nova:name>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:48:32</nova:creationTime>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <nova:port uuid="a488a1b0-7749-499c-971f-662c8a9ee29b">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <entry name="serial">dcf324a5-8e22-40e4-8f75-469fe7a04756</entry>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <entry name="uuid">dcf324a5-8e22-40e4-8f75-469fe7a04756</entry>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/dcf324a5-8e22-40e4-8f75-469fe7a04756_disk">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:f7:28:cb"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <target dev="tapa488a1b0-77"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/console.log" append="off"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:48:33 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:48:33 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:48:33 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:48:33 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.576 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Preparing to wait for external event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.577 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.578 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.578 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.579 2 DEBUG nova.virt.libvirt.vif [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:48:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-457680791',display_name='tempest-TestNetworkAdvancedServerOps-server-457680791',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-457680791',id=116,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLWb168hMylok2hbHIreuvgrfJq9EYhmSfhH2YBEWlnwrl2sEeWEt1hD3O/DPiepMH4x8+byvmbAISpkoWCoZjQ5z/Keocqhs6SeEf4SxPYxes1ihT4KVXi2eUj3jZBOrQ==',key_name='tempest-TestNetworkAdvancedServerOps-74461562',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-mqv16bk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:48:25Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=dcf324a5-8e22-40e4-8f75-469fe7a04756,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.580 2 DEBUG nova.network.os_vif_util [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.581 2 DEBUG nova.network.os_vif_util [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.582 2 DEBUG os_vif [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.590 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa488a1b0-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.591 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa488a1b0-77, col_values=(('external_ids', {'iface-id': 'a488a1b0-7749-499c-971f-662c8a9ee29b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:28:cb', 'vm-uuid': 'dcf324a5-8e22-40e4-8f75-469fe7a04756'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:33 np0005465604 NetworkManager[45129]: <info>  [1759394913.5952] manager: (tapa488a1b0-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.602 2 INFO os_vif [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77')#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.700 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.700 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.700 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:f7:28:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.701 2 INFO nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Using config drive#033[00m
Oct  2 04:48:33 np0005465604 nova_compute[260603]: 2025-10-02 08:48:33.724 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.262 2 DEBUG nova.compute.manager [req-28893072-43d2-43f1-9273-7958dcd1a527 req-69b395ab-68ef-43f6-98fd-d0edc2973f3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Received event network-vif-deleted-d1e15184-f166-48f4-a04d-1c6fa2a9f2a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.405 2 INFO nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Creating config drive at /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.410 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv84zgs72 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.571 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv84zgs72" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.597 2 DEBUG nova.storage.rbd_utils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.601 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.709 2 DEBUG nova.network.neutron [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updated VIF entry in instance network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.710 2 DEBUG nova.network.neutron [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.729 2 DEBUG oslo_concurrency.lockutils [req-64f90273-d601-4089-b1dd-0c6f7b6d914e req-48181d67-bd5d-4ffe-bbc0-c3d41a0963d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:48:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:34.829 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:34.830 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:34.830 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.998 2 DEBUG oslo_concurrency.processutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config dcf324a5-8e22-40e4-8f75-469fe7a04756_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:34 np0005465604 nova_compute[260603]: 2025-10-02 08:48:34.999 2 INFO nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Deleting local config drive /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756/disk.config because it was imported into RBD.#033[00m
Oct  2 04:48:35 np0005465604 kernel: tapa488a1b0-77: entered promiscuous mode
Oct  2 04:48:35 np0005465604 NetworkManager[45129]: <info>  [1759394915.0792] manager: (tapa488a1b0-77): new Tun device (/org/freedesktop/NetworkManager/Devices/464)
Oct  2 04:48:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:35Z|01182|binding|INFO|Claiming lport a488a1b0-7749-499c-971f-662c8a9ee29b for this chassis.
Oct  2 04:48:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:35Z|01183|binding|INFO|a488a1b0-7749-499c-971f-662c8a9ee29b: Claiming fa:16:3e:f7:28:cb 10.100.0.12
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.089 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:28:cb 10.100.0.12'], port_security=['fa:16:3e:f7:28:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dcf324a5-8e22-40e4-8f75-469fe7a04756', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0bfd547f-13c2-4a12-bb9c-fb5dbeb2009f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=47273294-f17c-4793-9e46-5229ffbf5fc3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a488a1b0-7749-499c-971f-662c8a9ee29b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.091 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a488a1b0-7749-499c-971f-662c8a9ee29b in datapath d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 bound to our chassis#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.094 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6#033[00m
Oct  2 04:48:35 np0005465604 systemd-udevd[376650]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.107 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ddba6b86-0ed8-4067-baf8-b8876254450e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.108 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd95e5b25-31 in ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:48:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:35Z|01184|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b ovn-installed in OVS
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.111 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd95e5b25-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.111 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f6593aba-3809-4a46-b57f-1ce996d9846b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:35Z|01185|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b up in Southbound
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.113 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c45d16f-fa2a-4753-ad7d-2c1bbf227d3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:35 np0005465604 NetworkManager[45129]: <info>  [1759394915.1310] device (tapa488a1b0-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.130 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[db02ce65-4a31-4576-a7fb-9b0de8e7d40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 NetworkManager[45129]: <info>  [1759394915.1326] device (tapa488a1b0-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:48:35 np0005465604 systemd-machined[214636]: New machine qemu-145-instance-00000074.
Oct  2 04:48:35 np0005465604 systemd[1]: Started Virtual Machine qemu-145-instance-00000074.
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.151 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394900.1503158, 262d5706-35bc-45eb-809b-217369a86015 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.151 2 INFO nova.compute.manager [-] [instance: 262d5706-35bc-45eb-809b-217369a86015] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c84c8f99-590e-4bab-be90-9fbf24e4df3b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.186 2 DEBUG nova.compute.manager [None req-1a6a91b6-1281-4bf0-a683-ee45c4687ac9 - - - - - -] [instance: 262d5706-35bc-45eb-809b-217369a86015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.204 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9f5603-f5f3-41b4-836d-1e5a459936bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 systemd-udevd[376656]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.210 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[543998da-6663-45e5-ab69-5e53a6c136b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 NetworkManager[45129]: <info>  [1759394915.2120] manager: (tapd95e5b25-30): new Veth device (/org/freedesktop/NetworkManager/Devices/465)
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.248 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[55c5ea34-b454-40ba-ac0d-0c43536960e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.252 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[11105366-9c01-4251-844e-128c85784b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 NetworkManager[45129]: <info>  [1759394915.2828] device (tapd95e5b25-30): carrier: link connected
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.290 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d551d929-0066-44ab-8e58-221304e10fa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.315 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[40429f81-a23b-4512-a0ed-10d0b75eacda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd95e5b25-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:8b:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579388, 'reachable_time': 32942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376685, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.342 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[091661e7-bfc1-4b1c-8456-bde449e7d501]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:8b57'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579388, 'tstamp': 579388}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376686, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.358 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2410d7d5-7a6f-4da8-abe4-d3092d703913]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd95e5b25-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:8b:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579388, 'reachable_time': 32942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376687, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.392 2 DEBUG nova.compute.manager [req-bd932a02-d299-4cca-b40b-b669b0b97459 req-e85258ae-08d9-463f-b500-1b98ee43b28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.393 2 DEBUG oslo_concurrency.lockutils [req-bd932a02-d299-4cca-b40b-b669b0b97459 req-e85258ae-08d9-463f-b500-1b98ee43b28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.394 2 DEBUG oslo_concurrency.lockutils [req-bd932a02-d299-4cca-b40b-b669b0b97459 req-e85258ae-08d9-463f-b500-1b98ee43b28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.394 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ae53ff-d8c7-48bf-b7c8-54f49dde58c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.394 2 DEBUG oslo_concurrency.lockutils [req-bd932a02-d299-4cca-b40b-b669b0b97459 req-e85258ae-08d9-463f-b500-1b98ee43b28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.395 2 DEBUG nova.compute.manager [req-bd932a02-d299-4cca-b40b-b669b0b97459 req-e85258ae-08d9-463f-b500-1b98ee43b28f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Processing event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.468 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7262b31-cea3-4575-a205-a772add8c659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.469 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95e5b25-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.469 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.470 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd95e5b25-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:35 np0005465604 kernel: tapd95e5b25-30: entered promiscuous mode
Oct  2 04:48:35 np0005465604 NetworkManager[45129]: <info>  [1759394915.4728] manager: (tapd95e5b25-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.476 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd95e5b25-30, col_values=(('external_ids', {'iface-id': 'b700b6c1-06ca-44eb-af9b-8fe8d11dd204'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:35Z|01186|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct  2 04:48:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct  2 04:48:35 np0005465604 nova_compute[260603]: 2025-10-02 08:48:35.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.499 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.500 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6159529a-06dd-480f-b67e-45569a86aea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.501 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:48:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:35.501 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'env', 'PROCESS_TAG=haproxy-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:48:35 np0005465604 podman[376761]: 2025-10-02 08:48:35.86900706 +0000 UTC m=+0.033661731 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:48:36 np0005465604 podman[376761]: 2025-10-02 08:48:36.124490002 +0000 UTC m=+0.289144643 container create b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.137 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.139 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394916.1372008, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.140 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Started (Lifecycle Event)#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.144 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.150 2 INFO nova.virt.libvirt.driver [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance spawned successfully.#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.151 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.192 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:48:36 np0005465604 systemd[1]: Started libpod-conmon-b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7.scope.
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.202 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.207 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.208 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.209 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.211 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.212 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.213 2 DEBUG nova.virt.libvirt.driver [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:48:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:48:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fddb7fd1c24079af1e2d9c723ccd3113c5ea87904fa4f101a52504ba886f8ae2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.247 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.248 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394916.1383905, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.248 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:48:36 np0005465604 podman[376761]: 2025-10-02 08:48:36.270609827 +0000 UTC m=+0.435264568 container init b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:48:36 np0005465604 podman[376761]: 2025-10-02 08:48:36.277073178 +0000 UTC m=+0.441727859 container start b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.278 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.283 2 INFO nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Took 10.59 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.284 2 DEBUG nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.288 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394916.1412237, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.289 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:48:36 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [NOTICE]   (376780) : New worker (376782) forked
Oct  2 04:48:36 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [NOTICE]   (376780) : Loading success.
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.355 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.364 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.392 2 INFO nova.compute.manager [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Took 11.72 seconds to build instance.#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.408 2 DEBUG oslo_concurrency.lockutils [None req-34a690a9-1285-4464-a046-3a2ec29d5ae4 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:36 np0005465604 nova_compute[260603]: 2025-10-02 08:48:36.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct  2 04:48:37 np0005465604 nova_compute[260603]: 2025-10-02 08:48:37.533 2 DEBUG nova.compute.manager [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:37 np0005465604 nova_compute[260603]: 2025-10-02 08:48:37.534 2 DEBUG oslo_concurrency.lockutils [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:37 np0005465604 nova_compute[260603]: 2025-10-02 08:48:37.534 2 DEBUG oslo_concurrency.lockutils [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:37 np0005465604 nova_compute[260603]: 2025-10-02 08:48:37.535 2 DEBUG oslo_concurrency.lockutils [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:37 np0005465604 nova_compute[260603]: 2025-10-02 08:48:37.535 2 DEBUG nova.compute.manager [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:48:37 np0005465604 nova_compute[260603]: 2025-10-02 08:48:37.536 2 WARNING nova.compute.manager [req-132739d1-b838-409a-9b1a-9523c1317d3e req-e4b8d2ae-2b7c-4959-9a95-493b68134b41 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state active and task_state None.#033[00m
Oct  2 04:48:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:38 np0005465604 nova_compute[260603]: 2025-10-02 08:48:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:38 np0005465604 nova_compute[260603]: 2025-10-02 08:48:38.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:48:38 np0005465604 nova_compute[260603]: 2025-10-02 08:48:38.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:48:38 np0005465604 nova_compute[260603]: 2025-10-02 08:48:38.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:48:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:48:39 np0005465604 nova_compute[260603]: 2025-10-02 08:48:39.375 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:48:39 np0005465604 nova_compute[260603]: 2025-10-02 08:48:39.376 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:48:39 np0005465604 nova_compute[260603]: 2025-10-02 08:48:39.376 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:48:39 np0005465604 nova_compute[260603]: 2025-10-02 08:48:39.377 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dcf324a5-8e22-40e4-8f75-469fe7a04756 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:48:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 125 op/s
Oct  2 04:48:39 np0005465604 nova_compute[260603]: 2025-10-02 08:48:39.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.423 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.446 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.446 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.447 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.549 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.550 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:41Z|01187|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:41Z|01188|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct  2 04:48:41 np0005465604 nova_compute[260603]: 2025-10-02 08:48:41.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:48:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968116699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:48:42 np0005465604 podman[376814]: 2025-10-02 08:48:42.019261264 +0000 UTC m=+0.074632347 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 04:48:42 np0005465604 nova_compute[260603]: 2025-10-02 08:48:42.056 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:42 np0005465604 podman[376813]: 2025-10-02 08:48:42.074400242 +0000 UTC m=+0.134163602 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 04:48:42 np0005465604 nova_compute[260603]: 2025-10-02 08:48:42.163 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:48:42 np0005465604 nova_compute[260603]: 2025-10-02 08:48:42.164 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:48:42 np0005465604 nova_compute[260603]: 2025-10-02 08:48:42.412 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:48:42 np0005465604 nova_compute[260603]: 2025-10-02 08:48:42.414 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3671MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:48:42 np0005465604 nova_compute[260603]: 2025-10-02 08:48:42.415 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:42 np0005465604 nova_compute[260603]: 2025-10-02 08:48:42.415 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:42 np0005465604 nova_compute[260603]: 2025-10-02 08:48:42.625 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance dcf324a5-8e22-40e4-8f75-469fe7a04756 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:48:42 np0005465604 nova_compute[260603]: 2025-10-02 08:48:42.626 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:48:42 np0005465604 nova_compute[260603]: 2025-10-02 08:48:42.626 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:48:42 np0005465604 nova_compute[260603]: 2025-10-02 08:48:42.913 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:48:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:48:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857537261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.391 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.402 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.440 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.464 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.465 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:48:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 04:48:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:43Z|01189|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct  2 04:48:43 np0005465604 NetworkManager[45129]: <info>  [1759394923.6028] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:43 np0005465604 NetworkManager[45129]: <info>  [1759394923.6043] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:43Z|01190|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.826 2 DEBUG nova.compute.manager [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.826 2 DEBUG nova.compute.manager [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing instance network info cache due to event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.827 2 DEBUG oslo_concurrency.lockutils [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.827 2 DEBUG oslo_concurrency.lockutils [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:48:43 np0005465604 nova_compute[260603]: 2025-10-02 08:48:43.827 2 DEBUG nova.network.neutron [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:48:44 np0005465604 nova_compute[260603]: 2025-10-02 08:48:44.040 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394909.0397034, 36a84233-3256-49b2-ae05-1569eb78b50f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:48:44 np0005465604 nova_compute[260603]: 2025-10-02 08:48:44.041 2 INFO nova.compute.manager [-] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:48:44 np0005465604 nova_compute[260603]: 2025-10-02 08:48:44.072 2 DEBUG nova.compute.manager [None req-8f7e35af-a9cf-4a23-8418-903e0e1ebf3f - - - - - -] [instance: 36a84233-3256-49b2-ae05-1569eb78b50f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:48:44 np0005465604 nova_compute[260603]: 2025-10-02 08:48:44.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:45 np0005465604 nova_compute[260603]: 2025-10-02 08:48:45.330 2 DEBUG nova.network.neutron [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updated VIF entry in instance network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:48:45 np0005465604 nova_compute[260603]: 2025-10-02 08:48:45.332 2 DEBUG nova.network.neutron [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:48:45 np0005465604 nova_compute[260603]: 2025-10-02 08:48:45.354 2 DEBUG oslo_concurrency.lockutils [req-f108d774-fce6-48e9-8159-683c51c93dba req-29c0c0d2-8f23-4a5b-9d4e-08a1159c12ba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:48:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 04:48:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 90 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 366 KiB/s wr, 80 op/s
Oct  2 04:48:47 np0005465604 nova_compute[260603]: 2025-10-02 08:48:47.534 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:47 np0005465604 nova_compute[260603]: 2025-10-02 08:48:47.535 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:47 np0005465604 nova_compute[260603]: 2025-10-02 08:48:47.535 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 04:48:47 np0005465604 nova_compute[260603]: 2025-10-02 08:48:47.558 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 04:48:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:48 np0005465604 nova_compute[260603]: 2025-10-02 08:48:48.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:48Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:28:cb 10.100.0.12
Oct  2 04:48:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:48Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:28:cb 10.100.0.12
Oct  2 04:48:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 101 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 91 op/s
Oct  2 04:48:49 np0005465604 nova_compute[260603]: 2025-10-02 08:48:49.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:49 np0005465604 nova_compute[260603]: 2025-10-02 08:48:49.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:50 np0005465604 podman[376882]: 2025-10-02 08:48:50.033507576 +0000 UTC m=+0.093050851 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 04:48:50 np0005465604 podman[376883]: 2025-10-02 08:48:50.046056377 +0000 UTC m=+0.098038486 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct  2 04:48:51 np0005465604 nova_compute[260603]: 2025-10-02 08:48:51.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 101 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.6 MiB/s wr, 35 op/s
Oct  2 04:48:51 np0005465604 nova_compute[260603]: 2025-10-02 08:48:51.526 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:48:51 np0005465604 nova_compute[260603]: 2025-10-02 08:48:51.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:48:53 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 80ca897a-2a6f-4045-b742-b09557197a1f does not exist
Oct  2 04:48:53 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6e470c5f-2c64-43d4-ae4a-68e02830ca84 does not exist
Oct  2 04:48:53 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4756f1fa-bd74-47f6-a516-d26bd9afcdb3 does not exist
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:48:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:48:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 04:48:53 np0005465604 nova_compute[260603]: 2025-10-02 08:48:53.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:54 np0005465604 podman[377312]: 2025-10-02 08:48:54.149878047 +0000 UTC m=+0.075316068 container create 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:48:54 np0005465604 systemd[1]: Started libpod-conmon-1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22.scope.
Oct  2 04:48:54 np0005465604 podman[377312]: 2025-10-02 08:48:54.124776975 +0000 UTC m=+0.050215066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:48:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:48:54 np0005465604 podman[377312]: 2025-10-02 08:48:54.257554344 +0000 UTC m=+0.182992405 container init 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 04:48:54 np0005465604 podman[377312]: 2025-10-02 08:48:54.271811838 +0000 UTC m=+0.197249859 container start 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 04:48:54 np0005465604 podman[377312]: 2025-10-02 08:48:54.275373979 +0000 UTC m=+0.200812040 container attach 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:48:54 np0005465604 sharp_lederberg[377328]: 167 167
Oct  2 04:48:54 np0005465604 systemd[1]: libpod-1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22.scope: Deactivated successfully.
Oct  2 04:48:54 np0005465604 conmon[377328]: conmon 1e5e73cee0d021d034b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22.scope/container/memory.events
Oct  2 04:48:54 np0005465604 podman[377312]: 2025-10-02 08:48:54.285240566 +0000 UTC m=+0.210678617 container died 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 04:48:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0773656a8574ad998601fb10fe258eab4b68a0bc582e0d46483210c058a3cd85-merged.mount: Deactivated successfully.
Oct  2 04:48:54 np0005465604 podman[377312]: 2025-10-02 08:48:54.340577301 +0000 UTC m=+0.266015342 container remove 1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct  2 04:48:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:48:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:48:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:48:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:48:54 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:48:54 np0005465604 systemd[1]: libpod-conmon-1e5e73cee0d021d034b874a5fa01b9a693ebbbd24db90f5c0a8724bb214cff22.scope: Deactivated successfully.
Oct  2 04:48:54 np0005465604 nova_compute[260603]: 2025-10-02 08:48:54.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:54 np0005465604 nova_compute[260603]: 2025-10-02 08:48:54.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:54 np0005465604 podman[377351]: 2025-10-02 08:48:54.608541224 +0000 UTC m=+0.055954936 container create b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 04:48:54 np0005465604 systemd[1]: Started libpod-conmon-b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f.scope.
Oct  2 04:48:54 np0005465604 podman[377351]: 2025-10-02 08:48:54.589443358 +0000 UTC m=+0.036857070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:48:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:48:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:54 np0005465604 podman[377351]: 2025-10-02 08:48:54.726891372 +0000 UTC m=+0.174305104 container init b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:48:54 np0005465604 podman[377351]: 2025-10-02 08:48:54.739907797 +0000 UTC m=+0.187321499 container start b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 04:48:54 np0005465604 podman[377351]: 2025-10-02 08:48:54.743398856 +0000 UTC m=+0.190812568 container attach b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 04:48:55 np0005465604 nova_compute[260603]: 2025-10-02 08:48:55.151 2 INFO nova.compute.manager [None req-0a34b517-eede-4798-83ed-981f97945204 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Get console output#033[00m
Oct  2 04:48:55 np0005465604 nova_compute[260603]: 2025-10-02 08:48:55.157 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:48:55 np0005465604 nova_compute[260603]: 2025-10-02 08:48:55.438 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:48:55 np0005465604 nova_compute[260603]: 2025-10-02 08:48:55.439 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:48:55 np0005465604 nova_compute[260603]: 2025-10-02 08:48:55.439 2 INFO nova.compute.manager [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Rebooting instance#033[00m
Oct  2 04:48:55 np0005465604 nova_compute[260603]: 2025-10-02 08:48:55.453 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:48:55 np0005465604 nova_compute[260603]: 2025-10-02 08:48:55.453 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:48:55 np0005465604 nova_compute[260603]: 2025-10-02 08:48:55.453 2 DEBUG nova.network.neutron [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:48:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 04:48:55 np0005465604 magical_meitner[377368]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:48:55 np0005465604 magical_meitner[377368]: --> relative data size: 1.0
Oct  2 04:48:55 np0005465604 magical_meitner[377368]: --> All data devices are unavailable
Oct  2 04:48:55 np0005465604 systemd[1]: libpod-b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f.scope: Deactivated successfully.
Oct  2 04:48:55 np0005465604 podman[377351]: 2025-10-02 08:48:55.819832937 +0000 UTC m=+1.267246629 container died b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:48:55 np0005465604 systemd[1]: libpod-b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f.scope: Consumed 1.013s CPU time.
Oct  2 04:48:55 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f4a529dafe8a49c1fa241ebc6ec0ac2fa0bbf10c73eac8ad748839f2430facf0-merged.mount: Deactivated successfully.
Oct  2 04:48:55 np0005465604 podman[377351]: 2025-10-02 08:48:55.872197239 +0000 UTC m=+1.319610921 container remove b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 04:48:55 np0005465604 systemd[1]: libpod-conmon-b57f80dfd1d1f4f7796adf877557d72994a2bf23e4c30262ec3878bfa5936e0f.scope: Deactivated successfully.
Oct  2 04:48:56 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:56Z|01191|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct  2 04:48:56 np0005465604 nova_compute[260603]: 2025-10-02 08:48:56.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:56 np0005465604 podman[377549]: 2025-10-02 08:48:56.610958446 +0000 UTC m=+0.054303484 container create 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:48:56 np0005465604 nova_compute[260603]: 2025-10-02 08:48:56.623 2 DEBUG nova.network.neutron [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:48:56 np0005465604 nova_compute[260603]: 2025-10-02 08:48:56.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:56 np0005465604 nova_compute[260603]: 2025-10-02 08:48:56.644 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:48:56 np0005465604 nova_compute[260603]: 2025-10-02 08:48:56.645 2 DEBUG nova.compute.manager [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:48:56 np0005465604 systemd[1]: Started libpod-conmon-1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e.scope.
Oct  2 04:48:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:48:56 np0005465604 podman[377549]: 2025-10-02 08:48:56.578556216 +0000 UTC m=+0.021901294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:48:56 np0005465604 podman[377549]: 2025-10-02 08:48:56.691068683 +0000 UTC m=+0.134413771 container init 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:48:56 np0005465604 podman[377549]: 2025-10-02 08:48:56.698574237 +0000 UTC m=+0.141919275 container start 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:48:56 np0005465604 podman[377549]: 2025-10-02 08:48:56.702177669 +0000 UTC m=+0.145522717 container attach 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct  2 04:48:56 np0005465604 magical_heyrovsky[377565]: 167 167
Oct  2 04:48:56 np0005465604 systemd[1]: libpod-1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e.scope: Deactivated successfully.
Oct  2 04:48:56 np0005465604 podman[377549]: 2025-10-02 08:48:56.704280834 +0000 UTC m=+0.147625882 container died 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 04:48:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f65d13435db74fe3922cd185474521f9207d2b03e08c90c726053f6dd44ccbd6-merged.mount: Deactivated successfully.
Oct  2 04:48:56 np0005465604 podman[377549]: 2025-10-02 08:48:56.75578873 +0000 UTC m=+0.199133768 container remove 1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_heyrovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:48:56 np0005465604 systemd[1]: libpod-conmon-1e996720d8913598e65c5c83560e8a23c238913f2b2a5a151648bd1c44ac238e.scope: Deactivated successfully.
Oct  2 04:48:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:56.904 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:48:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:56.907 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 04:48:56 np0005465604 nova_compute[260603]: 2025-10-02 08:48:56.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:56 np0005465604 podman[377590]: 2025-10-02 08:48:56.950173478 +0000 UTC m=+0.053764006 container create a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:48:56 np0005465604 systemd[1]: Started libpod-conmon-a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42.scope.
Oct  2 04:48:57 np0005465604 podman[377590]: 2025-10-02 08:48:56.925461258 +0000 UTC m=+0.029051886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:48:57 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:48:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed3e5c27fc5f1d267d9e19083f0324678777f5a46357d80a9890351c272a07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed3e5c27fc5f1d267d9e19083f0324678777f5a46357d80a9890351c272a07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed3e5c27fc5f1d267d9e19083f0324678777f5a46357d80a9890351c272a07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:57 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed3e5c27fc5f1d267d9e19083f0324678777f5a46357d80a9890351c272a07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:57 np0005465604 podman[377590]: 2025-10-02 08:48:57.042202616 +0000 UTC m=+0.145793154 container init a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:48:57 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:57Z|01192|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct  2 04:48:57 np0005465604 podman[377590]: 2025-10-02 08:48:57.05193725 +0000 UTC m=+0.155527808 container start a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 04:48:57 np0005465604 podman[377590]: 2025-10-02 08:48:57.05737848 +0000 UTC m=+0.160969008 container attach a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct  2 04:48:57 np0005465604 nova_compute[260603]: 2025-10-02 08:48:57.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]: {
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:    "0": [
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:        {
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "devices": [
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "/dev/loop3"
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            ],
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_name": "ceph_lv0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_size": "21470642176",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "name": "ceph_lv0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "tags": {
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.cluster_name": "ceph",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.crush_device_class": "",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.encrypted": "0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.osd_id": "0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.type": "block",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.vdo": "0"
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            },
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "type": "block",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "vg_name": "ceph_vg0"
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:        }
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:    ],
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:    "1": [
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:        {
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "devices": [
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "/dev/loop4"
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            ],
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_name": "ceph_lv1",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_size": "21470642176",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "name": "ceph_lv1",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "tags": {
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.cluster_name": "ceph",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.crush_device_class": "",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.encrypted": "0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.osd_id": "1",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.type": "block",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.vdo": "0"
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            },
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "type": "block",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "vg_name": "ceph_vg1"
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:        }
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:    ],
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:    "2": [
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:        {
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "devices": [
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "/dev/loop5"
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            ],
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_name": "ceph_lv2",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_size": "21470642176",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "name": "ceph_lv2",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "tags": {
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.cluster_name": "ceph",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.crush_device_class": "",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.encrypted": "0",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.osd_id": "2",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.type": "block",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:                "ceph.vdo": "0"
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            },
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "type": "block",
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:            "vg_name": "ceph_vg2"
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:        }
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]:    ]
Oct  2 04:48:57 np0005465604 hardcore_cray[377608]: }
Oct  2 04:48:57 np0005465604 systemd[1]: libpod-a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42.scope: Deactivated successfully.
Oct  2 04:48:57 np0005465604 podman[377590]: 2025-10-02 08:48:57.882848739 +0000 UTC m=+0.986439307 container died a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:48:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a0ed3e5c27fc5f1d267d9e19083f0324678777f5a46357d80a9890351c272a07-merged.mount: Deactivated successfully.
Oct  2 04:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:48:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:48:57 np0005465604 podman[377590]: 2025-10-02 08:48:57.956497894 +0000 UTC m=+1.060088432 container remove a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cray, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 04:48:57 np0005465604 systemd[1]: libpod-conmon-a2e87e8f9d55e993234de2d5449efb747cbf421c14c56ee07f47a87d0982ca42.scope: Deactivated successfully.
Oct  2 04:48:58 np0005465604 nova_compute[260603]: 2025-10-02 08:48:58.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:48:58 np0005465604 podman[377768]: 2025-10-02 08:48:58.802493572 +0000 UTC m=+0.057992818 container create 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 04:48:58 np0005465604 systemd[1]: Started libpod-conmon-364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484.scope.
Oct  2 04:48:58 np0005465604 podman[377768]: 2025-10-02 08:48:58.772423075 +0000 UTC m=+0.027922371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:48:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:48:58 np0005465604 podman[377768]: 2025-10-02 08:48:58.897968088 +0000 UTC m=+0.153467334 container init 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 04:48:58 np0005465604 podman[377768]: 2025-10-02 08:48:58.910389485 +0000 UTC m=+0.165888701 container start 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 04:48:58 np0005465604 podman[377768]: 2025-10-02 08:48:58.914333758 +0000 UTC m=+0.169833004 container attach 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 04:48:58 np0005465604 hungry_torvalds[377784]: 167 167
Oct  2 04:48:58 np0005465604 systemd[1]: libpod-364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484.scope: Deactivated successfully.
Oct  2 04:48:58 np0005465604 podman[377768]: 2025-10-02 08:48:58.918119296 +0000 UTC m=+0.173618532 container died 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:48:58 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9ff3841355b070469f04cfc3bebeb0bcbc90716130470634b99bca7f4bf5c452-merged.mount: Deactivated successfully.
Oct  2 04:48:58 np0005465604 podman[377768]: 2025-10-02 08:48:58.965555435 +0000 UTC m=+0.221054691 container remove 364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 04:48:58 np0005465604 systemd[1]: libpod-conmon-364b0709e152e3c8a95603dad56ecccbb25c13360bb034eb35ff5a23f719e484.scope: Deactivated successfully.
Oct  2 04:48:59 np0005465604 kernel: tapa488a1b0-77 (unregistering): left promiscuous mode
Oct  2 04:48:59 np0005465604 NetworkManager[45129]: <info>  [1759394939.0865] device (tapa488a1b0-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:48:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:59Z|01193|binding|INFO|Releasing lport a488a1b0-7749-499c-971f-662c8a9ee29b from this chassis (sb_readonly=0)
Oct  2 04:48:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:59Z|01194|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b down in Southbound
Oct  2 04:48:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:59Z|01195|binding|INFO|Removing iface tapa488a1b0-77 ovn-installed in OVS
Oct  2 04:48:59 np0005465604 nova_compute[260603]: 2025-10-02 08:48:59.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.101 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:28:cb 10.100.0.12'], port_security=['fa:16:3e:f7:28:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dcf324a5-8e22-40e4-8f75-469fe7a04756', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0bfd547f-13c2-4a12-bb9c-fb5dbeb2009f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=47273294-f17c-4793-9e46-5229ffbf5fc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a488a1b0-7749-499c-971f-662c8a9ee29b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.103 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a488a1b0-7749-499c-971f-662c8a9ee29b in datapath d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 unbound from our chassis#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.105 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.109 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b598bcd2-95f1-495b-8aac-fe743d828448]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.110 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 namespace which is not needed anymore#033[00m
Oct  2 04:48:59 np0005465604 nova_compute[260603]: 2025-10-02 08:48:59.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:59 np0005465604 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Deactivated successfully.
Oct  2 04:48:59 np0005465604 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Consumed 13.311s CPU time.
Oct  2 04:48:59 np0005465604 systemd-machined[214636]: Machine qemu-145-instance-00000074 terminated.
Oct  2 04:48:59 np0005465604 podman[377819]: 2025-10-02 08:48:59.23006934 +0000 UTC m=+0.051270829 container create 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:48:59 np0005465604 systemd[1]: Started libpod-conmon-1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca.scope.
Oct  2 04:48:59 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [NOTICE]   (376780) : haproxy version is 2.8.14-c23fe91
Oct  2 04:48:59 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [NOTICE]   (376780) : path to executable is /usr/sbin/haproxy
Oct  2 04:48:59 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [WARNING]  (376780) : Exiting Master process...
Oct  2 04:48:59 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [WARNING]  (376780) : Exiting Master process...
Oct  2 04:48:59 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [ALERT]    (376780) : Current worker (376782) exited with code 143 (Terminated)
Oct  2 04:48:59 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[376776]: [WARNING]  (376780) : All workers exited. Exiting... (0)
Oct  2 04:48:59 np0005465604 systemd[1]: libpod-b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7.scope: Deactivated successfully.
Oct  2 04:48:59 np0005465604 podman[377843]: 2025-10-02 08:48:59.289370968 +0000 UTC m=+0.061214609 container died b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 04:48:59 np0005465604 podman[377819]: 2025-10-02 08:48:59.208884669 +0000 UTC m=+0.030086208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:48:59 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:48:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d91b9582040d159ce93f4656410ffd40b16e4747bb6b61b0f11bec3525319/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d91b9582040d159ce93f4656410ffd40b16e4747bb6b61b0f11bec3525319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d91b9582040d159ce93f4656410ffd40b16e4747bb6b61b0f11bec3525319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:59 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d91b9582040d159ce93f4656410ffd40b16e4747bb6b61b0f11bec3525319/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:48:59 np0005465604 nova_compute[260603]: 2025-10-02 08:48:59.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:59 np0005465604 podman[377819]: 2025-10-02 08:48:59.332544953 +0000 UTC m=+0.153746492 container init 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 04:48:59 np0005465604 nova_compute[260603]: 2025-10-02 08:48:59.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7-userdata-shm.mount: Deactivated successfully.
Oct  2 04:48:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fddb7fd1c24079af1e2d9c723ccd3113c5ea87904fa4f101a52504ba886f8ae2-merged.mount: Deactivated successfully.
Oct  2 04:48:59 np0005465604 podman[377819]: 2025-10-02 08:48:59.341475492 +0000 UTC m=+0.162676981 container start 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:48:59 np0005465604 podman[377819]: 2025-10-02 08:48:59.348554472 +0000 UTC m=+0.169755971 container attach 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:48:59 np0005465604 podman[377843]: 2025-10-02 08:48:59.352696632 +0000 UTC m=+0.124540233 container cleanup b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 04:48:59 np0005465604 systemd[1]: libpod-conmon-b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7.scope: Deactivated successfully.
Oct  2 04:48:59 np0005465604 podman[377894]: 2025-10-02 08:48:59.434412648 +0000 UTC m=+0.052070814 container remove b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.439 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[522e2ff1-d730-478e-9ebc-975335a7d88f]: (4, ('Thu Oct  2 08:48:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 (b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7)\nb9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7\nThu Oct  2 08:48:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 (b9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7)\nb9b13740227268d550ab66473dd524c04a94cc51c39eb722f907c672e34b57a7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.441 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[59b82b6e-e40f-4dc5-b306-c2641f06af0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.442 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95e5b25-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:48:59 np0005465604 nova_compute[260603]: 2025-10-02 08:48:59.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:59 np0005465604 kernel: tapd95e5b25-30: left promiscuous mode
Oct  2 04:48:59 np0005465604 nova_compute[260603]: 2025-10-02 08:48:59.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.471 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff8fd5e-567f-4195-8463-b48d684c133e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.504 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44513d22-08a3-4436-b4f1-541a5ed82d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.506 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7e99d8-b79f-4763-bd4a-fd082486b45b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d51af675-adfd-4f92-baff-0503a3d66289]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579379, 'reachable_time': 41621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377913, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.527 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.528 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[65e38658-fdf7-43bb-bf2e-4169bb8c68f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:59 np0005465604 nova_compute[260603]: 2025-10-02 08:48:59.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:59 np0005465604 nova_compute[260603]: 2025-10-02 08:48:59.791 2 INFO nova.virt.libvirt.driver [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance shutdown successfully.#033[00m
Oct  2 04:48:59 np0005465604 systemd[1]: run-netns-ovnmeta\x2dd95e5b25\x2d314c\x2d4a6b\x2db468\x2d4c3f1e9ba0f6.mount: Deactivated successfully.
Oct  2 04:48:59 np0005465604 kernel: tapa488a1b0-77: entered promiscuous mode
Oct  2 04:48:59 np0005465604 systemd-udevd[377807]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:48:59 np0005465604 NetworkManager[45129]: <info>  [1759394939.8891] manager: (tapa488a1b0-77): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Oct  2 04:48:59 np0005465604 nova_compute[260603]: 2025-10-02 08:48:59.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:59Z|01196|binding|INFO|Claiming lport a488a1b0-7749-499c-971f-662c8a9ee29b for this chassis.
Oct  2 04:48:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:59Z|01197|binding|INFO|a488a1b0-7749-499c-971f-662c8a9ee29b: Claiming fa:16:3e:f7:28:cb 10.100.0.12
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.906 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:28:cb 10.100.0.12'], port_security=['fa:16:3e:f7:28:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dcf324a5-8e22-40e4-8f75-469fe7a04756', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0bfd547f-13c2-4a12-bb9c-fb5dbeb2009f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=47273294-f17c-4793-9e46-5229ffbf5fc3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a488a1b0-7749-499c-971f-662c8a9ee29b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:48:59 np0005465604 NetworkManager[45129]: <info>  [1759394939.9130] device (tapa488a1b0-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.915 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a488a1b0-7749-499c-971f-662c8a9ee29b in datapath d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 bound to our chassis#033[00m
Oct  2 04:48:59 np0005465604 NetworkManager[45129]: <info>  [1759394939.9161] device (tapa488a1b0-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.918 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6#033[00m
Oct  2 04:48:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:59Z|01198|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b ovn-installed in OVS
Oct  2 04:48:59 np0005465604 ovn_controller[152344]: 2025-10-02T08:48:59Z|01199|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b up in Southbound
Oct  2 04:48:59 np0005465604 nova_compute[260603]: 2025-10-02 08:48:59.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.938 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78381410-e5d4-4edc-9089-b8e4055714e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.939 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd95e5b25-31 in ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.942 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd95e5b25-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.942 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d42c4d6-621c-4a94-b772-f2ee4d427433]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.943 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[60fd623a-7343-4960-a613-fc7dfcf51554]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:48:59 np0005465604 systemd-machined[214636]: New machine qemu-146-instance-00000074.
Oct  2 04:48:59 np0005465604 systemd[1]: Started Virtual Machine qemu-146-instance-00000074.
Oct  2 04:48:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:48:59.970 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5b437fc2-82ab-4870-b776-305368459c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.010 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[04e7e87c-db0c-4fc0-bca0-85874a25568c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.057 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ddcd384c-4c3f-4978-8568-2be0fdfad640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 NetworkManager[45129]: <info>  [1759394940.0670] manager: (tapd95e5b25-30): new Veth device (/org/freedesktop/NetworkManager/Devices/470)
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.065 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e91748-0fc6-40da-923d-33ee6b9f32d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.128 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f7638d28-a26a-4ffd-9e45-1e83916ae162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.136 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0cda82-1a60-4054-bebb-0134d19e4b09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 NetworkManager[45129]: <info>  [1759394940.1771] device (tapd95e5b25-30): carrier: link connected
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.178 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[58b241f8-e791-4893-aedb-c3c2ee936a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.198 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3627bc0f-dbaf-4ce0-9676-52bcd4a3919d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd95e5b25-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:8b:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581877, 'reachable_time': 26454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377971, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.228 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5587b8-e04b-42e2-8962-93f21aa277b5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:8b57'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581877, 'tstamp': 581877}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377975, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.251 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4022229d-5c4b-4aa9-94fe-ea60d2431a33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd95e5b25-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:8b:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581877, 'reachable_time': 26454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377979, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.284 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[291d21ef-7e76-4f4e-b082-e9648a66e32d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.344 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d0bb68a2-9e83-4f61-be6f-dfa19c42a9d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.348 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95e5b25-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.348 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.349 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd95e5b25-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:00 np0005465604 nova_compute[260603]: 2025-10-02 08:49:00.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:00 np0005465604 NetworkManager[45129]: <info>  [1759394940.3521] manager: (tapd95e5b25-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]: {
Oct  2 04:49:00 np0005465604 kernel: tapd95e5b25-30: entered promiscuous mode
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "osd_id": 2,
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "type": "bluestore"
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:    },
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "osd_id": 1,
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "type": "bluestore"
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:    },
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "osd_id": 0,
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:        "type": "bluestore"
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]:    }
Oct  2 04:49:00 np0005465604 stupefied_easley[377861]: }
Oct  2 04:49:00 np0005465604 nova_compute[260603]: 2025-10-02 08:49:00.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.358 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd95e5b25-30, col_values=(('external_ids', {'iface-id': 'b700b6c1-06ca-44eb-af9b-8fe8d11dd204'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:00 np0005465604 nova_compute[260603]: 2025-10-02 08:49:00.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:00Z|01200|binding|INFO|Releasing lport b700b6c1-06ca-44eb-af9b-8fe8d11dd204 from this chassis (sb_readonly=0)
Oct  2 04:49:00 np0005465604 nova_compute[260603]: 2025-10-02 08:49:00.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.380 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.380 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18967c0e-c9e0-46ff-93ff-22e701180bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.381 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.pid.haproxy
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:49:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:00.382 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'env', 'PROCESS_TAG=haproxy-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:49:00 np0005465604 systemd[1]: libpod-1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca.scope: Deactivated successfully.
Oct  2 04:49:00 np0005465604 systemd[1]: libpod-1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca.scope: Consumed 1.026s CPU time.
Oct  2 04:49:00 np0005465604 podman[377819]: 2025-10-02 08:49:00.389865729 +0000 UTC m=+1.211067218 container died 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 04:49:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-670d91b9582040d159ce93f4656410ffd40b16e4747bb6b61b0f11bec3525319-merged.mount: Deactivated successfully.
Oct  2 04:49:00 np0005465604 podman[377819]: 2025-10-02 08:49:00.440167746 +0000 UTC m=+1.261369225 container remove 1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_easley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:49:00 np0005465604 systemd[1]: libpod-conmon-1ab5858c021e70e466be93a5bdb7d0ed9f5bb5ade67faeeee4bf00b43d5a71ca.scope: Deactivated successfully.
Oct  2 04:49:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:49:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:49:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:49:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:49:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4121721e-2cb2-4a64-8548-0b62cadafded does not exist
Oct  2 04:49:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 3000fde1-6e69-43ac-a66c-1771d86fe92d does not exist
Oct  2 04:49:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:49:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:49:00 np0005465604 podman[378118]: 2025-10-02 08:49:00.792709775 +0000 UTC m=+0.056449201 container create 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:49:00 np0005465604 systemd[1]: Started libpod-conmon-52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d.scope.
Oct  2 04:49:00 np0005465604 podman[378118]: 2025-10-02 08:49:00.763536855 +0000 UTC m=+0.027276291 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:49:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:49:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7278ca594e124300e4494e2616a3c5112ca3087c80f96b44c8579fb6561b13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:49:00 np0005465604 podman[378118]: 2025-10-02 08:49:00.889420999 +0000 UTC m=+0.153160445 container init 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 04:49:00 np0005465604 podman[378118]: 2025-10-02 08:49:00.901449504 +0000 UTC m=+0.165188930 container start 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 04:49:00 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [NOTICE]   (378142) : New worker (378144) forked
Oct  2 04:49:00 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [NOTICE]   (378142) : Loading success.
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.298 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for dcf324a5-8e22-40e4-8f75-469fe7a04756 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.299 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394941.2981935, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.299 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.306 2 INFO nova.virt.libvirt.driver [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance running successfully.#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.307 2 INFO nova.virt.libvirt.driver [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance soft rebooted successfully.#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.307 2 DEBUG nova.compute.manager [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.339 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.344 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.377 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.377 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394941.298443, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.378 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Started (Lifecycle Event)#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.388 2 DEBUG oslo_concurrency.lockutils [None req-b18a2a34-ce11-4f1e-b55f-64cf40297faa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.400 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:49:01 np0005465604 nova_compute[260603]: 2025-10-02 08:49:01.404 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:49:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 594 KiB/s wr, 32 op/s
Oct  2 04:49:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 594 KiB/s wr, 99 op/s
Oct  2 04:49:03 np0005465604 nova_compute[260603]: 2025-10-02 08:49:03.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:03.909 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:04 np0005465604 nova_compute[260603]: 2025-10-02 08:49:04.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:04 np0005465604 nova_compute[260603]: 2025-10-02 08:49:04.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 70 op/s
Oct  2 04:49:05 np0005465604 nova_compute[260603]: 2025-10-02 08:49:05.534 2 DEBUG nova.compute.manager [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:05 np0005465604 nova_compute[260603]: 2025-10-02 08:49:05.534 2 DEBUG oslo_concurrency.lockutils [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:05 np0005465604 nova_compute[260603]: 2025-10-02 08:49:05.535 2 DEBUG oslo_concurrency.lockutils [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:05 np0005465604 nova_compute[260603]: 2025-10-02 08:49:05.535 2 DEBUG oslo_concurrency.lockutils [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:05 np0005465604 nova_compute[260603]: 2025-10-02 08:49:05.535 2 DEBUG nova.compute.manager [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:49:05 np0005465604 nova_compute[260603]: 2025-10-02 08:49:05.536 2 WARNING nova.compute.manager [req-61136a2d-7c5f-4178-9dfa-c1275e737ee4 req-d535d685-fc07-4b91-bceb-a83ec4dc2310 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state active and task_state None.#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 72 op/s
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.693 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.693 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.694 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.694 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.695 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.695 2 WARNING nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state active and task_state None.#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.695 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.696 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.696 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.696 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.697 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.697 2 WARNING nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state active and task_state None.#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.697 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.698 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.698 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.699 2 DEBUG oslo_concurrency.lockutils [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.699 2 DEBUG nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:49:07 np0005465604 nova_compute[260603]: 2025-10-02 08:49:07.700 2 WARNING nova.compute.manager [req-8309a254-28db-4d3c-9993-7498dfa8d47b req-58904f33-1105-4f42-8573-8fc600e54761 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state active and task_state None.#033[00m
Oct  2 04:49:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:08 np0005465604 nova_compute[260603]: 2025-10-02 08:49:08.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 71 op/s
Oct  2 04:49:09 np0005465604 nova_compute[260603]: 2025-10-02 08:49:09.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 69 op/s
Oct  2 04:49:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:13 np0005465604 podman[378154]: 2025-10-02 08:49:13.03542401 +0000 UTC m=+0.068159216 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct  2 04:49:13 np0005465604 podman[378153]: 2025-10-02 08:49:13.057511689 +0000 UTC m=+0.096833890 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 04:49:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 111 op/s
Oct  2 04:49:13 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:13Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:28:cb 10.100.0.12
Oct  2 04:49:13 np0005465604 nova_compute[260603]: 2025-10-02 08:49:13.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:14 np0005465604 nova_compute[260603]: 2025-10-02 08:49:14.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 579 KiB/s rd, 12 KiB/s wr, 45 op/s
Oct  2 04:49:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 12 KiB/s wr, 46 op/s
Oct  2 04:49:17 np0005465604 nova_compute[260603]: 2025-10-02 08:49:17.804 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:17 np0005465604 nova_compute[260603]: 2025-10-02 08:49:17.804 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:17 np0005465604 nova_compute[260603]: 2025-10-02 08:49:17.828 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:49:17 np0005465604 nova_compute[260603]: 2025-10-02 08:49:17.908 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:17 np0005465604 nova_compute[260603]: 2025-10-02 08:49:17.909 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:17 np0005465604 nova_compute[260603]: 2025-10-02 08:49:17.921 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:49:17 np0005465604 nova_compute[260603]: 2025-10-02 08:49:17.921 2 INFO nova.compute.claims [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:49:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.040 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:49:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3575857951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.489 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.498 2 DEBUG nova.compute.provider_tree [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.515 2 DEBUG nova.scheduler.client.report [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.538 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.539 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.594 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.595 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.611 2 INFO nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.626 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.639 2 INFO nova.compute.manager [None req-9e0f94d9-bdca-4ea0-944b-96670c2a4bdc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Get console output#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.646 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.701 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.702 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.702 2 INFO nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Creating image(s)#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.730 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.757 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.785 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.789 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.830 2 DEBUG nova.policy [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ba06f614b3ef4ffe8b1a2d1b848554a9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fe0d9bd380934b368dfa78fea95edb29', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.875 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.876 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.876 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.876 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.896 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:18 np0005465604 nova_compute[260603]: 2025-10-02 08:49:18.899 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:19 np0005465604 nova_compute[260603]: 2025-10-02 08:49:19.203 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:19 np0005465604 nova_compute[260603]: 2025-10-02 08:49:19.256 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] resizing rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:49:19 np0005465604 nova_compute[260603]: 2025-10-02 08:49:19.333 2 DEBUG nova.objects.instance [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d39bd73-ba92-45a5-9d69-f1f1901aa895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:49:19 np0005465604 nova_compute[260603]: 2025-10-02 08:49:19.350 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:49:19 np0005465604 nova_compute[260603]: 2025-10-02 08:49:19.351 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Ensure instance console log exists: /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:49:19 np0005465604 nova_compute[260603]: 2025-10-02 08:49:19.351 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:19 np0005465604 nova_compute[260603]: 2025-10-02 08:49:19.352 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:19 np0005465604 nova_compute[260603]: 2025-10-02 08:49:19.352 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 44 op/s
Oct  2 04:49:19 np0005465604 nova_compute[260603]: 2025-10-02 08:49:19.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.043 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.043 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.044 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.045 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.045 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.047 2 INFO nova.compute.manager [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Terminating instance#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.049 2 DEBUG nova.compute.manager [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:49:20 np0005465604 kernel: tapa488a1b0-77 (unregistering): left promiscuous mode
Oct  2 04:49:20 np0005465604 NetworkManager[45129]: <info>  [1759394960.1108] device (tapa488a1b0-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.123 2 DEBUG nova.compute.manager [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.124 2 DEBUG nova.compute.manager [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing instance network info cache due to event network-changed-a488a1b0-7749-499c-971f-662c8a9ee29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.125 2 DEBUG oslo_concurrency.lockutils [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:49:20 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:20Z|01201|binding|INFO|Releasing lport a488a1b0-7749-499c-971f-662c8a9ee29b from this chassis (sb_readonly=0)
Oct  2 04:49:20 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:20Z|01202|binding|INFO|Setting lport a488a1b0-7749-499c-971f-662c8a9ee29b down in Southbound
Oct  2 04:49:20 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:20Z|01203|binding|INFO|Removing iface tapa488a1b0-77 ovn-installed in OVS
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.126 2 DEBUG oslo_concurrency.lockutils [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.127 2 DEBUG nova.network.neutron [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Refreshing network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.137 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:28:cb 10.100.0.12'], port_security=['fa:16:3e:f7:28:cb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dcf324a5-8e22-40e4-8f75-469fe7a04756', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0bfd547f-13c2-4a12-bb9c-fb5dbeb2009f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=47273294-f17c-4793-9e46-5229ffbf5fc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=a488a1b0-7749-499c-971f-662c8a9ee29b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.140 162357 INFO neutron.agent.ovn.metadata.agent [-] Port a488a1b0-7749-499c-971f-662c8a9ee29b in datapath d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 unbound from our chassis#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.143 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.146 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[689cf18c-dd50-4f64-8872-07821d5e2707]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.147 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 namespace which is not needed anymore#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:20 np0005465604 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000074.scope: Deactivated successfully.
Oct  2 04:49:20 np0005465604 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000074.scope: Consumed 12.713s CPU time.
Oct  2 04:49:20 np0005465604 systemd-machined[214636]: Machine qemu-146-instance-00000074 terminated.
Oct  2 04:49:20 np0005465604 podman[378385]: 2025-10-02 08:49:20.248352647 +0000 UTC m=+0.102267739 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:49:20 np0005465604 podman[378389]: 2025-10-02 08:49:20.249368948 +0000 UTC m=+0.096916331 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.291 2 INFO nova.virt.libvirt.driver [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Instance destroyed successfully.#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.292 2 DEBUG nova.objects.instance [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid dcf324a5-8e22-40e4-8f75-469fe7a04756 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.307 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Successfully created port: 8add63d4-81e0-4c57-a383-69310529c53b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.315 2 DEBUG nova.virt.libvirt.vif [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:48:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-457680791',display_name='tempest-TestNetworkAdvancedServerOps-server-457680791',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-457680791',id=116,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLWb168hMylok2hbHIreuvgrfJq9EYhmSfhH2YBEWlnwrl2sEeWEt1hD3O/DPiepMH4x8+byvmbAISpkoWCoZjQ5z/Keocqhs6SeEf4SxPYxes1ihT4KVXi2eUj3jZBOrQ==',key_name='tempest-TestNetworkAdvancedServerOps-74461562',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:48:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-mqv16bk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:49:01Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=dcf324a5-8e22-40e4-8f75-469fe7a04756,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.316 2 DEBUG nova.network.os_vif_util [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.317 2 DEBUG nova.network.os_vif_util [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.318 2 DEBUG os_vif [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa488a1b0-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.327 2 INFO os_vif [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:28:cb,bridge_name='br-int',has_traffic_filtering=True,id=a488a1b0-7749-499c-971f-662c8a9ee29b,network=Network(d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa488a1b0-77')#033[00m
Oct  2 04:49:20 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [NOTICE]   (378142) : haproxy version is 2.8.14-c23fe91
Oct  2 04:49:20 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [NOTICE]   (378142) : path to executable is /usr/sbin/haproxy
Oct  2 04:49:20 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [WARNING]  (378142) : Exiting Master process...
Oct  2 04:49:20 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [ALERT]    (378142) : Current worker (378144) exited with code 143 (Terminated)
Oct  2 04:49:20 np0005465604 neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6[378138]: [WARNING]  (378142) : All workers exited. Exiting... (0)
Oct  2 04:49:20 np0005465604 systemd[1]: libpod-52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d.scope: Deactivated successfully.
Oct  2 04:49:20 np0005465604 podman[378446]: 2025-10-02 08:49:20.36169482 +0000 UTC m=+0.079218310 container died 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:49:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d-userdata-shm.mount: Deactivated successfully.
Oct  2 04:49:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2a7278ca594e124300e4494e2616a3c5112ca3087c80f96b44c8579fb6561b13-merged.mount: Deactivated successfully.
Oct  2 04:49:20 np0005465604 podman[378446]: 2025-10-02 08:49:20.437476482 +0000 UTC m=+0.154999982 container cleanup 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:49:20 np0005465604 systemd[1]: libpod-conmon-52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d.scope: Deactivated successfully.
Oct  2 04:49:20 np0005465604 podman[378500]: 2025-10-02 08:49:20.544673863 +0000 UTC m=+0.070984714 container remove 52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.553 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[27c3f038-50c2-4684-b8e2-1c2b1236c2e4]: (4, ('Thu Oct  2 08:49:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 (52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d)\n52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d\nThu Oct  2 08:49:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 (52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d)\n52dfc35bc63be3a0f19fbc4bd78347385bfa46113bcda9a3f66d89490f69d87d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.556 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[68dc94ca-185f-4b4a-8848-095ef33c6f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.557 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd95e5b25-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:20 np0005465604 kernel: tapd95e5b25-30: left promiscuous mode
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.579 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10702663-6a96-4f17-aa29-95c5323872ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.604 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c9a666-a4ca-487a-b665-bdb0d49e14cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.606 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6eaf865-b026-4e8f-bee2-dd7c1d819aaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.628 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e79c700e-baf1-40d1-999e-77de93e591f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581865, 'reachable_time': 41659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378516, 'error': None, 'target': 'ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.631 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:49:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:20.631 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b80a77fd-16c6-489e-bf67-c35f7f315204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:20 np0005465604 systemd[1]: run-netns-ovnmeta\x2dd95e5b25\x2d314c\x2d4a6b\x2db468\x2d4c3f1e9ba0f6.mount: Deactivated successfully.
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.770 2 INFO nova.virt.libvirt.driver [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Deleting instance files /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756_del#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.771 2 INFO nova.virt.libvirt.driver [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Deletion of /var/lib/nova/instances/dcf324a5-8e22-40e4-8f75-469fe7a04756_del complete#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.975 2 INFO nova.compute.manager [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.976 2 DEBUG oslo.service.loopingcall [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.976 2 DEBUG nova.compute.manager [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:49:20 np0005465604 nova_compute[260603]: 2025-10-02 08:49:20.977 2 DEBUG nova.network.neutron [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.297 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Successfully updated port: 8add63d4-81e0-4c57-a383-69310529c53b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.314 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.314 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquired lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.314 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.424 2 DEBUG nova.compute.manager [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-changed-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.425 2 DEBUG nova.compute.manager [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Refreshing instance network info cache due to event network-changed-8add63d4-81e0-4c57-a383-69310529c53b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.425 2 DEBUG oslo_concurrency.lockutils [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:49:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 121 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 44 op/s
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.505 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.622 2 DEBUG nova.network.neutron [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updated VIF entry in instance network info cache for port a488a1b0-7749-499c-971f-662c8a9ee29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.623 2 DEBUG nova.network.neutron [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [{"id": "a488a1b0-7749-499c-971f-662c8a9ee29b", "address": "fa:16:3e:f7:28:cb", "network": {"id": "d95e5b25-314c-4a6b-b468-4c3f1e9ba0f6", "bridge": "br-int", "label": "tempest-network-smoke--1764258343", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa488a1b0-77", "ovs_interfaceid": "a488a1b0-7749-499c-971f-662c8a9ee29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.643 2 DEBUG oslo_concurrency.lockutils [req-b8109346-2de2-4a10-9590-81de29441fb2 req-94de3ce6-3013-4b14-b5b9-0f35cc0e1ea2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-dcf324a5-8e22-40e4-8f75-469fe7a04756" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.741 2 DEBUG nova.network.neutron [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.760 2 INFO nova.compute.manager [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Took 0.78 seconds to deallocate network for instance.#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.807 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.808 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:21 np0005465604 nova_compute[260603]: 2025-10-02 08:49:21.871 2 DEBUG oslo_concurrency.processutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:49:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2480251863' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:49:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:49:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2480251863' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.212 2 DEBUG nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.212 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.213 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.213 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.213 2 DEBUG nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.213 2 WARNING nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-unplugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.214 2 DEBUG nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.214 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.214 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.214 2 DEBUG oslo_concurrency.lockutils [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.215 2 DEBUG nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] No waiting events found dispatching network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.215 2 WARNING nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received unexpected event network-vif-plugged-a488a1b0-7749-499c-971f-662c8a9ee29b for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.215 2 DEBUG nova.compute.manager [req-0529f46a-690e-4252-8057-af1b56430ea9 req-22e3bc9b-9d36-4f13-bad2-9402a10f6d62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Received event network-vif-deleted-a488a1b0-7749-499c-971f-662c8a9ee29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:49:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380823278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.432 2 DEBUG oslo_concurrency.processutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.437 2 DEBUG nova.compute.provider_tree [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.455 2 DEBUG nova.scheduler.client.report [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.484 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.512 2 INFO nova.scheduler.client.report [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Deleted allocations for instance dcf324a5-8e22-40e4-8f75-469fe7a04756#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.568 2 DEBUG oslo_concurrency.lockutils [None req-442a9642-7837-4cb6-b00a-962e31384719 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "dcf324a5-8e22-40e4-8f75-469fe7a04756" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.908 2 DEBUG nova.network.neutron [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updating instance_info_cache with network_info: [{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.926 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Releasing lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.927 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Instance network_info: |[{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.927 2 DEBUG oslo_concurrency.lockutils [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.928 2 DEBUG nova.network.neutron [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Refreshing network info cache for port 8add63d4-81e0-4c57-a383-69310529c53b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.933 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Start _get_guest_xml network_info=[{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.939 2 WARNING nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:49:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.951 2 DEBUG nova.virt.libvirt.host [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.953 2 DEBUG nova.virt.libvirt.host [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.959 2 DEBUG nova.virt.libvirt.host [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.959 2 DEBUG nova.virt.libvirt.host [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.960 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.960 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.961 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.962 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.962 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.962 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.963 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.963 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.964 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.964 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.964 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.965 2 DEBUG nova.virt.hardware [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:49:22 np0005465604 nova_compute[260603]: 2025-10-02 08:49:22.970 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:49:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958514284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:49:23 np0005465604 nova_compute[260603]: 2025-10-02 08:49:23.444 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:23 np0005465604 nova_compute[260603]: 2025-10-02 08:49:23.473 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:23 np0005465604 nova_compute[260603]: 2025-10-02 08:49:23.477 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 88 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 04:49:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:49:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1686313764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.001 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.003 2 DEBUG nova.virt.libvirt.vif [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1419201510',display_name='tempest-TestServerBasicOps-server-1419201510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1419201510',id=117,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxIiQkCuUb4hsjuixWL10UGBkKUMnwTZeKLoBPdzp7C/XvwvYOMF2Ky32VWHt/hKjva7kq8MtJ3oMOLi7ZAeECvKOs+ESXsUFj3RXVpjLuEPsjWq3UA/7zA5pWt+U9M2g==',key_name='tempest-TestServerBasicOps-36899864',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe0d9bd380934b368dfa78fea95edb29',ramdisk_id='',reservation_id='r-xw0wp2kg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1506272321',owner_user_name='tempest-TestServerBasicOps-1506272321-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:49:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba06f614b3ef4ffe8b1a2d1b848554a9',uuid=5d39bd73-ba92-45a5-9d69-f1f1901aa895,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.003 2 DEBUG nova.network.os_vif_util [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converting VIF {"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.005 2 DEBUG nova.network.os_vif_util [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.006 2 DEBUG nova.objects.instance [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d39bd73-ba92-45a5-9d69-f1f1901aa895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.027 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  <uuid>5d39bd73-ba92-45a5-9d69-f1f1901aa895</uuid>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  <name>instance-00000075</name>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestServerBasicOps-server-1419201510</nova:name>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:49:22</nova:creationTime>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <nova:user uuid="ba06f614b3ef4ffe8b1a2d1b848554a9">tempest-TestServerBasicOps-1506272321-project-member</nova:user>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <nova:project uuid="fe0d9bd380934b368dfa78fea95edb29">tempest-TestServerBasicOps-1506272321</nova:project>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <nova:port uuid="8add63d4-81e0-4c57-a383-69310529c53b">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <entry name="serial">5d39bd73-ba92-45a5-9d69-f1f1901aa895</entry>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <entry name="uuid">5d39bd73-ba92-45a5-9d69-f1f1901aa895</entry>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:c1:8f:fc"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <target dev="tap8add63d4-81"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/console.log" append="off"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:49:24 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:49:24 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:49:24 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:49:24 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.028 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Preparing to wait for external event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.028 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.029 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.029 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.030 2 DEBUG nova.virt.libvirt.vif [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1419201510',display_name='tempest-TestServerBasicOps-server-1419201510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1419201510',id=117,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxIiQkCuUb4hsjuixWL10UGBkKUMnwTZeKLoBPdzp7C/XvwvYOMF2Ky32VWHt/hKjva7kq8MtJ3oMOLi7ZAeECvKOs+ESXsUFj3RXVpjLuEPsjWq3UA/7zA5pWt+U9M2g==',key_name='tempest-TestServerBasicOps-36899864',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe0d9bd380934b368dfa78fea95edb29',ramdisk_id='',reservation_id='r-xw0wp2kg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1506272321',owner_user_name='tempest-TestServerBasicOps-1506272321-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:49:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba06f614b3ef4ffe8b1a2d1b848554a9',uuid=5d39bd73-ba92-45a5-9d69-f1f1901aa895,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.030 2 DEBUG nova.network.os_vif_util [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converting VIF {"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.031 2 DEBUG nova.network.os_vif_util [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.031 2 DEBUG os_vif [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8add63d4-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8add63d4-81, col_values=(('external_ids', {'iface-id': '8add63d4-81e0-4c57-a383-69310529c53b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:8f:fc', 'vm-uuid': '5d39bd73-ba92-45a5-9d69-f1f1901aa895'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:24 np0005465604 NetworkManager[45129]: <info>  [1759394964.0390] manager: (tap8add63d4-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.045 2 INFO os_vif [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81')#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.112 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.112 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.112 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] No VIF found with MAC fa:16:3e:c1:8f:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.113 2 INFO nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Using config drive#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.137 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.527 2 INFO nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Creating config drive at /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.540 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp6hsavpa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.697 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp6hsavpa" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.738 2 DEBUG nova.storage.rbd_utils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] rbd image 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.742 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.946 2 DEBUG oslo_concurrency.processutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config 5d39bd73-ba92-45a5-9d69-f1f1901aa895_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:24 np0005465604 nova_compute[260603]: 2025-10-02 08:49:24.947 2 INFO nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Deleting local config drive /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895/disk.config because it was imported into RBD.#033[00m
Oct  2 04:49:25 np0005465604 kernel: tap8add63d4-81: entered promiscuous mode
Oct  2 04:49:25 np0005465604 NetworkManager[45129]: <info>  [1759394965.0019] manager: (tap8add63d4-81): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Oct  2 04:49:25 np0005465604 systemd-udevd[378671]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:49:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:25Z|01204|binding|INFO|Claiming lport 8add63d4-81e0-4c57-a383-69310529c53b for this chassis.
Oct  2 04:49:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:25Z|01205|binding|INFO|8add63d4-81e0-4c57-a383-69310529c53b: Claiming fa:16:3e:c1:8f:fc 10.100.0.11
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.046 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:8f:fc 10.100.0.11'], port_security=['fa:16:3e:c1:8f:fc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5d39bd73-ba92-45a5-9d69-f1f1901aa895', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe0d9bd380934b368dfa78fea95edb29', 'neutron:revision_number': '2', 'neutron:security_group_ids': '685b2755-a769-4920-a3c6-7a8b24097a10 ece867f3-4b20-4c54-9ee4-8b5852490079', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86495746-eacb-4aba-a3c0-a2e0aa4fb357, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8add63d4-81e0-4c57-a383-69310529c53b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:49:25 np0005465604 NetworkManager[45129]: <info>  [1759394965.0499] device (tap8add63d4-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:49:25 np0005465604 NetworkManager[45129]: <info>  [1759394965.0511] device (tap8add63d4-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.049 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8add63d4-81e0-4c57-a383-69310529c53b in datapath 86d7b4c5-3981-43ff-aa9f-1933a61f6bec bound to our chassis#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.051 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 86d7b4c5-3981-43ff-aa9f-1933a61f6bec#033[00m
Oct  2 04:49:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:25Z|01206|binding|INFO|Setting lport 8add63d4-81e0-4c57-a383-69310529c53b ovn-installed in OVS
Oct  2 04:49:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:25Z|01207|binding|INFO|Setting lport 8add63d4-81e0-4c57-a383-69310529c53b up in Southbound
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:25 np0005465604 systemd-machined[214636]: New machine qemu-147-instance-00000075.
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.069 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c42dae59-00df-4f02-bc0a-aaf355ef6fb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.070 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap86d7b4c5-31 in ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.072 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap86d7b4c5-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.072 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cad07def-168d-4b57-8f21-1e9a83007c36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.074 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0ef425-9da7-4f3c-a8f8-1066ee72944e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 systemd[1]: Started Virtual Machine qemu-147-instance-00000075.
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.087 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[aa347af6-be91-4a06-b618-da841f37fe81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.111 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[73338777-9583-4860-9606-9525d97164b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.153 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[25e561ff-6a78-43cf-9939-71dd8fd3fffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.158 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[baa997c8-76e3-4985-822f-47474983ee1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 NetworkManager[45129]: <info>  [1759394965.1599] manager: (tap86d7b4c5-30): new Veth device (/org/freedesktop/NetworkManager/Devices/474)
Oct  2 04:49:25 np0005465604 systemd-udevd[378675]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.200 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e01b61-8a1b-4cb0-9419-158c5ccf201c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.203 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[677b4c58-1337-4d52-98f5-2818ebe5a55f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 NetworkManager[45129]: <info>  [1759394965.2278] device (tap86d7b4c5-30): carrier: link connected
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.238 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a3563eee-b264-433d-bfed-22bd71bb2cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.258 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[daf4e84c-e429-4f90-bafe-1c81f121d92b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86d7b4c5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:68:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584383, 'reachable_time': 17605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378707, 'error': None, 'target': 'ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.276 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc5f05d-f360-4e8e-af83-e19d1306f0cc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:68b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584383, 'tstamp': 584383}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378708, 'error': None, 'target': 'ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.299 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[64d8f2d0-18e5-429e-8b86-62d69a1c49f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86d7b4c5-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:68:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584383, 'reachable_time': 17605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378709, 'error': None, 'target': 'ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.335 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7507d5-6374-488d-a2bc-0f7765db5bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.400 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[650fc49f-bf8c-4780-a932-38ad23af1075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.401 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86d7b4c5-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.401 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.402 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86d7b4c5-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:25 np0005465604 NetworkManager[45129]: <info>  [1759394965.4036] manager: (tap86d7b4c5-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Oct  2 04:49:25 np0005465604 kernel: tap86d7b4c5-30: entered promiscuous mode
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.406 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap86d7b4c5-30, col_values=(('external_ids', {'iface-id': 'cff74aee-8cf5-4a00-8641-877b88ad7399'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:25Z|01208|binding|INFO|Releasing lport cff74aee-8cf5-4a00-8641-877b88ad7399 from this chassis (sb_readonly=0)
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.421 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/86d7b4c5-3981-43ff-aa9f-1933a61f6bec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/86d7b4c5-3981-43ff-aa9f-1933a61f6bec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.422 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfd5f37-f6de-4b64-bcdf-ad6cb8684b50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.422 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-86d7b4c5-3981-43ff-aa9f-1933a61f6bec
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/86d7b4c5-3981-43ff-aa9f-1933a61f6bec.pid.haproxy
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 86d7b4c5-3981-43ff-aa9f-1933a61f6bec
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:49:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:25.423 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'env', 'PROCESS_TAG=haproxy-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/86d7b4c5-3981-43ff-aa9f-1933a61f6bec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:49:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 88 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.718 2 DEBUG nova.network.neutron [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updated VIF entry in instance network info cache for port 8add63d4-81e0-4c57-a383-69310529c53b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.719 2 DEBUG nova.network.neutron [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updating instance_info_cache with network_info: [{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.739 2 DEBUG oslo_concurrency.lockutils [req-094ce944-f3bc-46f5-9e0d-928b9a1f8c95 req-8ddccf3f-2b10-49a6-91e2-c413de78507d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:49:25 np0005465604 podman[378783]: 2025-10-02 08:49:25.741275344 +0000 UTC m=+0.020334205 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.861 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394965.86115, 5d39bd73-ba92-45a5-9d69-f1f1901aa895 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.862 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] VM Started (Lifecycle Event)#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.883 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.887 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394965.8613257, 5d39bd73-ba92-45a5-9d69-f1f1901aa895 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.887 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.905 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.908 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:49:25 np0005465604 nova_compute[260603]: 2025-10-02 08:49:25.944 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:49:25 np0005465604 podman[378783]: 2025-10-02 08:49:25.997985235 +0000 UTC m=+0.277044116 container create f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:49:26 np0005465604 systemd[1]: Started libpod-conmon-f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff.scope.
Oct  2 04:49:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:49:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b46779ccf78646eb1fa1b179b99646c119a819bb4ab4cf83b813387cadabddf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:49:26 np0005465604 podman[378783]: 2025-10-02 08:49:26.434621944 +0000 UTC m=+0.713680835 container init f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:49:26 np0005465604 podman[378783]: 2025-10-02 08:49:26.447443304 +0000 UTC m=+0.726502195 container start f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:49:26 np0005465604 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [NOTICE]   (378802) : New worker (378804) forked
Oct  2 04:49:26 np0005465604 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [NOTICE]   (378802) : Loading success.
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.500 2 DEBUG nova.compute.manager [req-5fc21c4f-a8c8-4188-86bf-115f49a0c5d7 req-92379df9-ce03-4001-bb73-d79ee2d45c5f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.500 2 DEBUG oslo_concurrency.lockutils [req-5fc21c4f-a8c8-4188-86bf-115f49a0c5d7 req-92379df9-ce03-4001-bb73-d79ee2d45c5f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.501 2 DEBUG oslo_concurrency.lockutils [req-5fc21c4f-a8c8-4188-86bf-115f49a0c5d7 req-92379df9-ce03-4001-bb73-d79ee2d45c5f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.501 2 DEBUG oslo_concurrency.lockutils [req-5fc21c4f-a8c8-4188-86bf-115f49a0c5d7 req-92379df9-ce03-4001-bb73-d79ee2d45c5f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.502 2 DEBUG nova.compute.manager [req-5fc21c4f-a8c8-4188-86bf-115f49a0c5d7 req-92379df9-ce03-4001-bb73-d79ee2d45c5f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Processing event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.503 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.509 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394966.5093637, 5d39bd73-ba92-45a5-9d69-f1f1901aa895 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.510 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.512 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:49:26 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:26Z|01209|binding|INFO|Releasing lport cff74aee-8cf5-4a00-8641-877b88ad7399 from this chassis (sb_readonly=0)
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.516 2 INFO nova.virt.libvirt.driver [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Instance spawned successfully.#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.516 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.537 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.544 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.560 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.561 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.561 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.561 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.562 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.562 2 DEBUG nova.virt.libvirt.driver [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.570 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.649 2 INFO nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Took 7.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.650 2 DEBUG nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:49:26 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:26Z|01210|binding|INFO|Releasing lport cff74aee-8cf5-4a00-8641-877b88ad7399 from this chassis (sb_readonly=0)
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.730 2 INFO nova.compute.manager [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Took 8.86 seconds to build instance.#033[00m
Oct  2 04:49:26 np0005465604 nova_compute[260603]: 2025-10-02 08:49:26.753 2 DEBUG oslo_concurrency.lockutils [None req-83ae3009-c594-437b-8128-bf57a2bb2850 ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 88 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct  2 04:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:49:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:49:28
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'default.rgw.control', 'backups', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta']
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:49:28 np0005465604 nova_compute[260603]: 2025-10-02 08:49:28.638 2 DEBUG nova.compute.manager [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:49:28 np0005465604 nova_compute[260603]: 2025-10-02 08:49:28.638 2 DEBUG oslo_concurrency.lockutils [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:49:28 np0005465604 nova_compute[260603]: 2025-10-02 08:49:28.639 2 DEBUG oslo_concurrency.lockutils [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:49:28 np0005465604 nova_compute[260603]: 2025-10-02 08:49:28.639 2 DEBUG oslo_concurrency.lockutils [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:49:28 np0005465604 nova_compute[260603]: 2025-10-02 08:49:28.639 2 DEBUG nova.compute.manager [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] No waiting events found dispatching network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:49:28 np0005465604 nova_compute[260603]: 2025-10-02 08:49:28.640 2 WARNING nova.compute.manager [req-272c2df5-57fd-4cea-a6b6-a35a0cec09eb req-bd7dbcd0-dabe-4dfb-8183-91f104596755 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received unexpected event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b for instance with vm_state active and task_state None.
Oct  2 04:49:28 np0005465604 NetworkManager[45129]: <info>  [1759394968.9183] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Oct  2 04:49:28 np0005465604 NetworkManager[45129]: <info>  [1759394968.9193] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Oct  2 04:49:28 np0005465604 nova_compute[260603]: 2025-10-02 08:49:28.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:49:29 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:29Z|01211|binding|INFO|Releasing lport cff74aee-8cf5-4a00-8641-877b88ad7399 from this chassis (sb_readonly=0)
Oct  2 04:49:29 np0005465604 nova_compute[260603]: 2025-10-02 08:49:29.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:49:29 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:29Z|01212|binding|INFO|Releasing lport cff74aee-8cf5-4a00-8641-877b88ad7399 from this chassis (sb_readonly=0)
Oct  2 04:49:29 np0005465604 nova_compute[260603]: 2025-10-02 08:49:29.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:49:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Oct  2 04:49:29 np0005465604 nova_compute[260603]: 2025-10-02 08:49:29.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:49:29 np0005465604 nova_compute[260603]: 2025-10-02 08:49:29.723 2 DEBUG nova.compute.manager [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-changed-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:49:29 np0005465604 nova_compute[260603]: 2025-10-02 08:49:29.724 2 DEBUG nova.compute.manager [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Refreshing instance network info cache due to event network-changed-8add63d4-81e0-4c57-a383-69310529c53b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:49:29 np0005465604 nova_compute[260603]: 2025-10-02 08:49:29.724 2 DEBUG oslo_concurrency.lockutils [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:49:29 np0005465604 nova_compute[260603]: 2025-10-02 08:49:29.724 2 DEBUG oslo_concurrency.lockutils [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:49:29 np0005465604 nova_compute[260603]: 2025-10-02 08:49:29.725 2 DEBUG nova.network.neutron [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Refreshing network info cache for port 8add63d4-81e0-4c57-a383-69310529c53b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:49:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.947567) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394971947600, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1915, "num_deletes": 254, "total_data_size": 3044248, "memory_usage": 3091120, "flush_reason": "Manual Compaction"}
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394971961170, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2980023, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43148, "largest_seqno": 45062, "table_properties": {"data_size": 2971193, "index_size": 5452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18353, "raw_average_key_size": 20, "raw_value_size": 2953498, "raw_average_value_size": 3288, "num_data_blocks": 242, "num_entries": 898, "num_filter_entries": 898, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394783, "oldest_key_time": 1759394783, "file_creation_time": 1759394971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 13639 microseconds, and 7331 cpu microseconds.
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.961207) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2980023 bytes OK
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.961225) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.963033) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.963080) EVENT_LOG_v1 {"time_micros": 1759394971963073, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.963100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 3036061, prev total WAL file size 3036061, number of live WAL files 2.
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.964241) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2910KB)], [101(7260KB)]
Oct  2 04:49:31 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394971964342, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10415131, "oldest_snapshot_seqno": -1}
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6568 keys, 8699750 bytes, temperature: kUnknown
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394972036562, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8699750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8656166, "index_size": 26064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 170820, "raw_average_key_size": 26, "raw_value_size": 8538892, "raw_average_value_size": 1300, "num_data_blocks": 1019, "num_entries": 6568, "num_filter_entries": 6568, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759394971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.036973) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8699750 bytes
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.038184) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.9 rd, 120.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 7.1 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 7091, records dropped: 523 output_compression: NoCompression
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.038206) EVENT_LOG_v1 {"time_micros": 1759394972038196, "job": 60, "event": "compaction_finished", "compaction_time_micros": 72372, "compaction_time_cpu_micros": 41589, "output_level": 6, "num_output_files": 1, "total_output_size": 8699750, "num_input_records": 7091, "num_output_records": 6568, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394972038946, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759394972040630, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:31.964025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.040726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.040733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.040735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.040737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:49:32.040739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:49:32 np0005465604 nova_compute[260603]: 2025-10-02 08:49:32.059 2 DEBUG nova.network.neutron [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updated VIF entry in instance network info cache for port 8add63d4-81e0-4c57-a383-69310529c53b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 04:49:32 np0005465604 nova_compute[260603]: 2025-10-02 08:49:32.059 2 DEBUG nova.network.neutron [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updating instance_info_cache with network_info: [{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:49:32 np0005465604 nova_compute[260603]: 2025-10-02 08:49:32.081 2 DEBUG oslo_concurrency.lockutils [req-7c42360f-817c-401d-80ae-239030901a2f req-b7672fb4-c11d-4a5d-bf44-b5a8feff90d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:49:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:32 np0005465604 nova_compute[260603]: 2025-10-02 08:49:32.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:49:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Oct  2 04:49:34 np0005465604 nova_compute[260603]: 2025-10-02 08:49:34.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:49:34 np0005465604 nova_compute[260603]: 2025-10-02 08:49:34.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:49:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:34.830 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:49:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:34.832 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:49:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:34.833 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:49:35 np0005465604 nova_compute[260603]: 2025-10-02 08:49:35.290 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394960.2894125, dcf324a5-8e22-40e4-8f75-469fe7a04756 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:49:35 np0005465604 nova_compute[260603]: 2025-10-02 08:49:35.291 2 INFO nova.compute.manager [-] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] VM Stopped (Lifecycle Event)
Oct  2 04:49:35 np0005465604 nova_compute[260603]: 2025-10-02 08:49:35.319 2 DEBUG nova.compute.manager [None req-3b454f31-9788-4b82-a393-7c16e2a80b01 - - - - - -] [instance: dcf324a5-8e22-40e4-8f75-469fe7a04756] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:49:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 146 op/s
Oct  2 04:49:36 np0005465604 nova_compute[260603]: 2025-10-02 08:49:36.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:49:37 np0005465604 nova_compute[260603]: 2025-10-02 08:49:37.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:49:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 88 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 147 op/s
Oct  2 04:49:37 np0005465604 nova_compute[260603]: 2025-10-02 08:49:37.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:49:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:38Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:8f:fc 10.100.0.11
Oct  2 04:49:38 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:38Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:8f:fc 10.100.0.11
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487950323956502 of space, bias 1.0, pg target 0.10463850971869505 quantized to 32 (current 32)
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:49:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:49:39 np0005465604 nova_compute[260603]: 2025-10-02 08:49:39.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:49:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 102 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 154 op/s
Oct  2 04:49:39 np0005465604 nova_compute[260603]: 2025-10-02 08:49:39.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 04:49:39 np0005465604 nova_compute[260603]: 2025-10-02 08:49:39.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 04:49:39 np0005465604 nova_compute[260603]: 2025-10-02 08:49:39.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 04:49:39 np0005465604 nova_compute[260603]: 2025-10-02 08:49:39.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:49:40 np0005465604 nova_compute[260603]: 2025-10-02 08:49:40.435 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:49:40 np0005465604 nova_compute[260603]: 2025-10-02 08:49:40.436 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:49:40 np0005465604 nova_compute[260603]: 2025-10-02 08:49:40.436 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 04:49:40 np0005465604 nova_compute[260603]: 2025-10-02 08:49:40.437 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d39bd73-ba92-45a5-9d69-f1f1901aa895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:49:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 102 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 112 op/s
Oct  2 04:49:42 np0005465604 nova_compute[260603]: 2025-10-02 08:49:42.940 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:49:42 np0005465604 nova_compute[260603]: 2025-10-02 08:49:42.940 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:49:42 np0005465604 nova_compute[260603]: 2025-10-02 08:49:42.957 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:49:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.029 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.029 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.040 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.040 2 INFO nova.compute.claims [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.048 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updating instance_info_cache with network_info: [{"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.065 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-5d39bd73-ba92-45a5-9d69-f1f1901aa895" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.065 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.066 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.066 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.066 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.090 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.135 2 DEBUG nova.scheduler.client.report [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.153 2 DEBUG nova.scheduler.client.report [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.153 2 DEBUG nova.compute.provider_tree [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.169 2 DEBUG nova.scheduler.client.report [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.205 2 DEBUG nova.scheduler.client.report [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.263 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Oct  2 04:49:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:49:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688360750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.733 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.742 2 DEBUG nova.compute.provider_tree [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.758 2 DEBUG nova.scheduler.client.report [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.780 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.782 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.789 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.790 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.790 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.791 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.882 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.882 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.901 2 INFO nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:49:43 np0005465604 nova_compute[260603]: 2025-10-02 08:49:43.919 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.015 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.018 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.019 2 INFO nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating image(s)#033[00m
Oct  2 04:49:44 np0005465604 podman[378842]: 2025-10-02 08:49:44.028985913 +0000 UTC m=+0.083905466 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 04:49:44 np0005465604 podman[378841]: 2025-10-02 08:49:44.065812371 +0000 UTC m=+0.124198112 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.074 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.098 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.121 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.124 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.171 2 DEBUG nova.policy [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7767630a5b1049f48d7e0fed29e221ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c86b416fdb524f21b0228639a3a14116', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.211 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.212 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.213 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.213 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.238 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.241 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:49:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649381719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.296 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.400 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.400 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.604 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.666 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.757 2 DEBUG nova.objects.instance [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.774 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.774 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Ensure instance console log exists: /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.775 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.775 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.775 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.792 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.793 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3594MB free_disk=59.9428596496582GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.793 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.794 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.878 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5d39bd73-ba92-45a5-9d69-f1f1901aa895 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.878 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.878 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.879 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:49:44 np0005465604 nova_compute[260603]: 2025-10-02 08:49:44.948 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:49:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1434462225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:49:45 np0005465604 nova_compute[260603]: 2025-10-02 08:49:45.376 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:45 np0005465604 nova_compute[260603]: 2025-10-02 08:49:45.382 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:49:45 np0005465604 nova_compute[260603]: 2025-10-02 08:49:45.402 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:49:45 np0005465604 nova_compute[260603]: 2025-10-02 08:49:45.431 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:49:45 np0005465604 nova_compute[260603]: 2025-10-02 08:49:45.431 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:45 np0005465604 nova_compute[260603]: 2025-10-02 08:49:45.498 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Successfully created port: ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:49:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 121 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  2 04:49:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:46.355 162644 DEBUG eventlet.wsgi.server [-] (162644) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct  2 04:49:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:46.357 162644 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Oct  2 04:49:46 np0005465604 ovn_metadata_agent[162328]: Accept: */*#015
Oct  2 04:49:46 np0005465604 ovn_metadata_agent[162328]: Connection: close#015
Oct  2 04:49:46 np0005465604 ovn_metadata_agent[162328]: Content-Type: text/plain#015
Oct  2 04:49:46 np0005465604 ovn_metadata_agent[162328]: Host: 169.254.169.254#015
Oct  2 04:49:46 np0005465604 ovn_metadata_agent[162328]: User-Agent: curl/7.84.0#015
Oct  2 04:49:46 np0005465604 ovn_metadata_agent[162328]: X-Forwarded-For: 10.100.0.11#015
Oct  2 04:49:46 np0005465604 ovn_metadata_agent[162328]: X-Ovn-Network-Id: 86d7b4c5-3981-43ff-aa9f-1933a61f6bec __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct  2 04:49:46 np0005465604 nova_compute[260603]: 2025-10-02 08:49:46.547 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Successfully updated port: ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:49:46 np0005465604 nova_compute[260603]: 2025-10-02 08:49:46.569 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:49:46 np0005465604 nova_compute[260603]: 2025-10-02 08:49:46.569 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:49:46 np0005465604 nova_compute[260603]: 2025-10-02 08:49:46.570 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:49:46 np0005465604 nova_compute[260603]: 2025-10-02 08:49:46.676 2 DEBUG nova.compute.manager [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:46 np0005465604 nova_compute[260603]: 2025-10-02 08:49:46.677 2 DEBUG nova.compute.manager [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing instance network info cache due to event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:49:46 np0005465604 nova_compute[260603]: 2025-10-02 08:49:46.678 2 DEBUG oslo_concurrency.lockutils [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:49:46 np0005465604 nova_compute[260603]: 2025-10-02 08:49:46.738 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:49:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 142 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:47.812 162644 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:47.813 162644 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.4563913#033[00m
Oct  2 04:49:47 np0005465604 haproxy-metadata-proxy-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378804]: 10.100.0.11:42998 [02/Oct/2025:08:49:46.353] listener listener/metadata 0/0/0/1460/1460 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.844 2 DEBUG nova.network.neutron [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.865 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.865 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance network_info: |[{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.866 2 DEBUG oslo_concurrency.lockutils [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.866 2 DEBUG nova.network.neutron [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.874 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start _get_guest_xml network_info=[{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.882 2 WARNING nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.895 2 DEBUG nova.virt.libvirt.host [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.896 2 DEBUG nova.virt.libvirt.host [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.901 2 DEBUG nova.virt.libvirt.host [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.902 2 DEBUG nova.virt.libvirt.host [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.903 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.903 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.904 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.905 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.905 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.906 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.906 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.907 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.908 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.908 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.909 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.909 2 DEBUG nova.virt.hardware [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:49:47 np0005465604 nova_compute[260603]: 2025-10-02 08:49:47.916 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:47.936 162644 DEBUG eventlet.wsgi.server [-] (162644) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:47.938 162644 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: Accept: */*#015
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: Connection: close#015
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: Content-Length: 100#015
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: Content-Type: application/x-www-form-urlencoded#015
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: Host: 169.254.169.254#015
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: User-Agent: curl/7.84.0#015
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: X-Forwarded-For: 10.100.0.11#015
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: X-Ovn-Network-Id: 86d7b4c5-3981-43ff-aa9f-1933a61f6bec#015
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: #015
Oct  2 04:49:47 np0005465604 ovn_metadata_agent[162328]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct  2 04:49:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:48.211 162644 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct  2 04:49:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:48.211 162644 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2735717#033[00m
Oct  2 04:49:48 np0005465604 haproxy-metadata-proxy-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378804]: 10.100.0.11:43000 [02/Oct/2025:08:49:47.935] listener listener/metadata 0/0/0/275/275 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct  2 04:49:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:49:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/567280329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.389 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.411 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.415 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:49:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507854933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.830 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.833 2 DEBUG nova.virt.libvirt.vif [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:49:43Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.833 2 DEBUG nova.network.os_vif_util [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.835 2 DEBUG nova.network.os_vif_util [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.836 2 DEBUG nova.objects.instance [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.854 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  <uuid>9d79c6a7-edec-4a60-af9f-7e1401bb9a64</uuid>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  <name>instance-00000076</name>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1109879286</nova:name>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:49:47</nova:creationTime>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <nova:port uuid="ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <entry name="serial">9d79c6a7-edec-4a60-af9f-7e1401bb9a64</entry>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <entry name="uuid">9d79c6a7-edec-4a60-af9f-7e1401bb9a64</entry>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:4a:e6:c0"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <target dev="tapac5b8c6c-8d"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/console.log" append="off"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:49:48 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:49:48 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:49:48 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:49:48 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.856 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Preparing to wait for external event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.856 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.857 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.857 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.859 2 DEBUG nova.virt.libvirt.vif [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:49:43Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.859 2 DEBUG nova.network.os_vif_util [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.860 2 DEBUG nova.network.os_vif_util [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.861 2 DEBUG os_vif [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.863 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.868 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac5b8c6c-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac5b8c6c-8d, col_values=(('external_ids', {'iface-id': 'ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:e6:c0', 'vm-uuid': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:48 np0005465604 NetworkManager[45129]: <info>  [1759394988.8718] manager: (tapac5b8c6c-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.878 2 INFO os_vif [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d')#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.933 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.933 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.933 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:4a:e6:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.934 2 INFO nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Using config drive#033[00m
Oct  2 04:49:48 np0005465604 nova_compute[260603]: 2025-10-02 08:49:48.955 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:49 np0005465604 nova_compute[260603]: 2025-10-02 08:49:49.413 2 INFO nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating config drive at /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config#033[00m
Oct  2 04:49:49 np0005465604 nova_compute[260603]: 2025-10-02 08:49:49.425 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_t0f4_lv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:49 np0005465604 nova_compute[260603]: 2025-10-02 08:49:49.507 2 DEBUG nova.network.neutron [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updated VIF entry in instance network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:49:49 np0005465604 nova_compute[260603]: 2025-10-02 08:49:49.508 2 DEBUG nova.network.neutron [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:49:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 167 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct  2 04:49:49 np0005465604 nova_compute[260603]: 2025-10-02 08:49:49.526 2 DEBUG oslo_concurrency.lockutils [req-9918245b-1acf-47a7-bcd5-0787061176b9 req-f3313721-fb16-4a99-86f1-fb5d8f0faf15 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:49:49 np0005465604 nova_compute[260603]: 2025-10-02 08:49:49.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:49 np0005465604 nova_compute[260603]: 2025-10-02 08:49:49.600 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_t0f4_lv" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:49 np0005465604 nova_compute[260603]: 2025-10-02 08:49:49.628 2 DEBUG nova.storage.rbd_utils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:49:49 np0005465604 nova_compute[260603]: 2025-10-02 08:49:49.632 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.006 2 DEBUG oslo_concurrency.processutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.007 2 INFO nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deleting local config drive /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config because it was imported into RBD.#033[00m
Oct  2 04:49:50 np0005465604 NetworkManager[45129]: <info>  [1759394990.0800] manager: (tapac5b8c6c-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/479)
Oct  2 04:49:50 np0005465604 kernel: tapac5b8c6c-8d: entered promiscuous mode
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:50Z|01213|binding|INFO|Claiming lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for this chassis.
Oct  2 04:49:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:50Z|01214|binding|INFO|ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da: Claiming fa:16:3e:4a:e6:c0 10.100.0.11
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.094 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.097 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 bound to our chassis#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.100 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ceec3f2-374d-41aa-a1df-26773af00fd7#033[00m
Oct  2 04:49:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:50Z|01215|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da ovn-installed in OVS
Oct  2 04:49:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:50Z|01216|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da up in Southbound
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.122 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d00cbb-2835-49d4-bc7d-0e940b821f07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.125 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ceec3f2-31 in ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.128 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ceec3f2-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.129 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[adc4be6a-9f3a-45dc-a0fa-02cebea16b1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.130 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc65f45-db93-4b16-8279-6d6e2d23ff1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 systemd-udevd[379234]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.150 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5c3918-dcd8-42e5-8502-7887cffbd381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 systemd-machined[214636]: New machine qemu-148-instance-00000076.
Oct  2 04:49:50 np0005465604 NetworkManager[45129]: <info>  [1759394990.1612] device (tapac5b8c6c-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:49:50 np0005465604 NetworkManager[45129]: <info>  [1759394990.1632] device (tapac5b8c6c-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:49:50 np0005465604 systemd[1]: Started Virtual Machine qemu-148-instance-00000076.
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.182 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.184 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.184 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.184 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.185 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78dfade3-b60e-48fb-a8c5-621bbb51fe66]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.189 2 INFO nova.compute.manager [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Terminating instance#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.191 2 DEBUG nova.compute.manager [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.226 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ec50fa2c-2c99-427e-b7a9-b5f653fede3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.231 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[43dba610-764d-45fa-9f3b-bb4d59d584a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 NetworkManager[45129]: <info>  [1759394990.2384] manager: (tap9ceec3f2-30): new Veth device (/org/freedesktop/NetworkManager/Devices/480)
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.291 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dd459b67-f39f-4a34-a722-cf77c4ec4c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.295 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[27e4c914-fce8-4680-9121-d5c49fe211ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 NetworkManager[45129]: <info>  [1759394990.3232] device (tap9ceec3f2-30): carrier: link connected
Oct  2 04:49:50 np0005465604 kernel: tap8add63d4-81 (unregistering): left promiscuous mode
Oct  2 04:49:50 np0005465604 NetworkManager[45129]: <info>  [1759394990.3290] device (tap8add63d4-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.331 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[21c3d667-6bd5-4408-86c4-620dbfafa5ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:50Z|01217|binding|INFO|Releasing lport 8add63d4-81e0-4c57-a383-69310529c53b from this chassis (sb_readonly=0)
Oct  2 04:49:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:50Z|01218|binding|INFO|Setting lport 8add63d4-81e0-4c57-a383-69310529c53b down in Southbound
Oct  2 04:49:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:50Z|01219|binding|INFO|Removing iface tap8add63d4-81 ovn-installed in OVS
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.349 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:8f:fc 10.100.0.11'], port_security=['fa:16:3e:c1:8f:fc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5d39bd73-ba92-45a5-9d69-f1f1901aa895', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe0d9bd380934b368dfa78fea95edb29', 'neutron:revision_number': '4', 'neutron:security_group_ids': '685b2755-a769-4920-a3c6-7a8b24097a10 ece867f3-4b20-4c54-9ee4-8b5852490079', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86495746-eacb-4aba-a3c0-a2e0aa4fb357, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8add63d4-81e0-4c57-a383-69310529c53b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.363 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fc62bef0-e6de-4c72-9a0a-adcb03041d6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ceec3f2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:37:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586892, 'reachable_time': 35326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379269, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.382 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ade3528d-eb43-4ce5-8e62-39e83f15e377]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:3769'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586892, 'tstamp': 586892}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379285, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct  2 04:49:50 np0005465604 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000075.scope: Consumed 13.955s CPU time.
Oct  2 04:49:50 np0005465604 systemd-machined[214636]: Machine qemu-147-instance-00000075 terminated.
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.397 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a95a6843-bd85-4aff-82ca-377ea5238575]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ceec3f2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:37:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586892, 'reachable_time': 35326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379291, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.426 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.426 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.440 2 INFO nova.virt.libvirt.driver [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Instance destroyed successfully.#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.440 2 DEBUG nova.objects.instance [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lazy-loading 'resources' on Instance uuid 5d39bd73-ba92-45a5-9d69-f1f1901aa895 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:49:50 np0005465604 podman[379270]: 2025-10-02 08:49:50.447148417 +0000 UTC m=+0.086877079 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.447 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0e05240d-41b7-4a57-828a-47d79bb8426b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.454 2 DEBUG nova.virt.libvirt.vif [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:49:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1419201510',display_name='tempest-TestServerBasicOps-server-1419201510',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1419201510',id=117,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOxIiQkCuUb4hsjuixWL10UGBkKUMnwTZeKLoBPdzp7C/XvwvYOMF2Ky32VWHt/hKjva7kq8MtJ3oMOLi7ZAeECvKOs+ESXsUFj3RXVpjLuEPsjWq3UA/7zA5pWt+U9M2g==',key_name='tempest-TestServerBasicOps-36899864',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:49:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fe0d9bd380934b368dfa78fea95edb29',ramdisk_id='',reservation_id='r-xw0wp2kg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1506272321',owner_user_name='tempest-TestServerBasicOps-1506272321-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:49:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba06f614b3ef4ffe8b1a2d1b848554a9',uuid=5d39bd73-ba92-45a5-9d69-f1f1901aa895,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": 
"fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.455 2 DEBUG nova.network.os_vif_util [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converting VIF {"id": "8add63d4-81e0-4c57-a383-69310529c53b", "address": "fa:16:3e:c1:8f:fc", "network": {"id": "86d7b4c5-3981-43ff-aa9f-1933a61f6bec", "bridge": "br-int", "label": "tempest-TestServerBasicOps-723749024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe0d9bd380934b368dfa78fea95edb29", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8add63d4-81", "ovs_interfaceid": "8add63d4-81e0-4c57-a383-69310529c53b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.456 2 DEBUG nova.network.os_vif_util [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.457 2 DEBUG os_vif [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8add63d4-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.465 2 INFO os_vif [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:8f:fc,bridge_name='br-int',has_traffic_filtering=True,id=8add63d4-81e0-4c57-a383-69310529c53b,network=Network(86d7b4c5-3981-43ff-aa9f-1933a61f6bec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8add63d4-81')#033[00m
Oct  2 04:49:50 np0005465604 podman[379266]: 2025-10-02 08:49:50.466576882 +0000 UTC m=+0.110303628 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.512 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2744bb08-9665-465a-b159-4d3c25082605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.515 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ceec3f2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.515 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.516 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ceec3f2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 NetworkManager[45129]: <info>  [1759394990.5186] manager: (tap9ceec3f2-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Oct  2 04:49:50 np0005465604 kernel: tap9ceec3f2-30: entered promiscuous mode
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.523 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ceec3f2-30, col_values=(('external_ids', {'iface-id': '251c75ac-afb0-42a1-a89b-23c6beb7b3f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 ovn_controller[152344]: 2025-10-02T08:49:50Z|01220|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.545 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.547 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba5e835-5478-4192-af3e-f71e531e60ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.548 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-9ceec3f2-374d-41aa-a1df-26773af00fd7
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 9ceec3f2-374d-41aa-a1df-26773af00fd7
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:49:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:50.548 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'env', 'PROCESS_TAG=haproxy-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ceec3f2-374d-41aa-a1df-26773af00fd7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.778 2 DEBUG nova.compute.manager [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-unplugged-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.778 2 DEBUG oslo_concurrency.lockutils [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.778 2 DEBUG oslo_concurrency.lockutils [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.778 2 DEBUG oslo_concurrency.lockutils [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.778 2 DEBUG nova.compute.manager [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] No waiting events found dispatching network-vif-unplugged-8add63d4-81e0-4c57-a383-69310529c53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.779 2 DEBUG nova.compute.manager [req-45751aa2-5ee3-40ed-b3ff-e163ee4d9c78 req-a9dc8d3a-7f00-4895-8eae-bee39cbcf770 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-unplugged-8add63d4-81e0-4c57-a383-69310529c53b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.930 2 DEBUG nova.compute.manager [req-a2143ef4-ce10-4e31-8422-d49e0cb7d511 req-32fd6338-effe-4cdf-a990-0e988456047b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.930 2 DEBUG oslo_concurrency.lockutils [req-a2143ef4-ce10-4e31-8422-d49e0cb7d511 req-32fd6338-effe-4cdf-a990-0e988456047b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.930 2 DEBUG oslo_concurrency.lockutils [req-a2143ef4-ce10-4e31-8422-d49e0cb7d511 req-32fd6338-effe-4cdf-a990-0e988456047b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.931 2 DEBUG oslo_concurrency.lockutils [req-a2143ef4-ce10-4e31-8422-d49e0cb7d511 req-32fd6338-effe-4cdf-a990-0e988456047b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:50 np0005465604 nova_compute[260603]: 2025-10-02 08:49:50.931 2 DEBUG nova.compute.manager [req-a2143ef4-ce10-4e31-8422-d49e0cb7d511 req-32fd6338-effe-4cdf-a990-0e988456047b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Processing event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:49:50 np0005465604 podman[379409]: 2025-10-02 08:49:50.946842422 +0000 UTC m=+0.100307698 container create c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 04:49:50 np0005465604 podman[379409]: 2025-10-02 08:49:50.867898742 +0000 UTC m=+0.021364028 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:49:51 np0005465604 systemd[1]: Started libpod-conmon-c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957.scope.
Oct  2 04:49:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:49:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37bbf101800d2f34c5533d4db9bfd73f0e4cf42dd09496c7c2a5063e08e8353/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:49:51 np0005465604 podman[379409]: 2025-10-02 08:49:51.075496312 +0000 UTC m=+0.228961588 container init c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:49:51 np0005465604 podman[379409]: 2025-10-02 08:49:51.080979092 +0000 UTC m=+0.234444358 container start c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.106 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.108 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394991.1060913, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.108 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Started (Lifecycle Event)#033[00m
Oct  2 04:49:51 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [NOTICE]   (379429) : New worker (379431) forked
Oct  2 04:49:51 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [NOTICE]   (379429) : Loading success.
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.111 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.115 2 INFO nova.virt.libvirt.driver [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance spawned successfully.#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.115 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.144 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.147 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.153 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.153 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.153 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.154 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.154 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.155 2 DEBUG nova.virt.libvirt.driver [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.179 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.179 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394991.1078343, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.179 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:49:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.191 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8add63d4-81e0-4c57-a383-69310529c53b in datapath 86d7b4c5-3981-43ff-aa9f-1933a61f6bec unbound from our chassis#033[00m
Oct  2 04:49:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.194 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 86d7b4c5-3981-43ff-aa9f-1933a61f6bec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:49:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.195 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[66c0e86f-75d6-4b8c-b8c5-c0616234e6f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.196 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec namespace which is not needed anymore#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.213 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.216 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759394991.1107974, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.216 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.234 2 INFO nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Took 7.22 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.234 2 DEBUG nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.241 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.244 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.268 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.298 2 INFO nova.compute.manager [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Took 8.30 seconds to build instance.#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.316 2 DEBUG oslo_concurrency.lockutils [None req-35169bd1-703a-4f3a-9c29-ec5e4dc7acbc 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:51 np0005465604 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [NOTICE]   (378802) : haproxy version is 2.8.14-c23fe91
Oct  2 04:49:51 np0005465604 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [NOTICE]   (378802) : path to executable is /usr/sbin/haproxy
Oct  2 04:49:51 np0005465604 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [WARNING]  (378802) : Exiting Master process...
Oct  2 04:49:51 np0005465604 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [ALERT]    (378802) : Current worker (378804) exited with code 143 (Terminated)
Oct  2 04:49:51 np0005465604 neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec[378798]: [WARNING]  (378802) : All workers exited. Exiting... (0)
Oct  2 04:49:51 np0005465604 systemd[1]: libpod-f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff.scope: Deactivated successfully.
Oct  2 04:49:51 np0005465604 podman[379457]: 2025-10-02 08:49:51.364596682 +0000 UTC m=+0.081502771 container died f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 04:49:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 167 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 251 KiB/s rd, 2.6 MiB/s wr, 68 op/s
Oct  2 04:49:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff-userdata-shm.mount: Deactivated successfully.
Oct  2 04:49:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5b46779ccf78646eb1fa1b179b99646c119a819bb4ab4cf83b813387cadabddf-merged.mount: Deactivated successfully.
Oct  2 04:49:51 np0005465604 podman[379457]: 2025-10-02 08:49:51.775898902 +0000 UTC m=+0.492804971 container cleanup f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 04:49:51 np0005465604 systemd[1]: libpod-conmon-f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff.scope: Deactivated successfully.
Oct  2 04:49:51 np0005465604 podman[379488]: 2025-10-02 08:49:51.977272969 +0000 UTC m=+0.175222003 container remove f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 04:49:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.984 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[810e3064-810a-4fc0-a8e8-3177dbbed25e]: (4, ('Thu Oct  2 08:49:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec (f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff)\nf19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff\nThu Oct  2 08:49:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec (f19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff)\nf19a0b34c6c066997d79ba478de9ad1a0a870c0f50074aa2b3f07858d30281ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddb11cc-a963-4721-8788-ba67fc78e10f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:51.987 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86d7b4c5-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:49:51 np0005465604 nova_compute[260603]: 2025-10-02 08:49:51.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:52 np0005465604 kernel: tap86d7b4c5-30: left promiscuous mode
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.014 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[154e541f-f4b4-47f5-a448-a78d2b81f18c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.041 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6b12b94f-0498-4e34-a39e-c13cfb06b96a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.043 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a672946e-c6ff-4911-9b3d-ba7721eacf54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.059 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[959b027f-5cf5-4314-83d8-9b1213440f50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584374, 'reachable_time': 32110, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379504, 'error': None, 'target': 'ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:52 np0005465604 systemd[1]: run-netns-ovnmeta\x2d86d7b4c5\x2d3981\x2d43ff\x2daa9f\x2d1933a61f6bec.mount: Deactivated successfully.
Oct  2 04:49:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.063 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-86d7b4c5-3981-43ff-aa9f-1933a61f6bec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:49:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:49:52.063 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff89304-4cc3-4fe9-965b-94e455a19d71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.425 2 INFO nova.virt.libvirt.driver [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Deleting instance files /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895_del#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.426 2 INFO nova.virt.libvirt.driver [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Deletion of /var/lib/nova/instances/5d39bd73-ba92-45a5-9d69-f1f1901aa895_del complete#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.503 2 INFO nova.compute.manager [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Took 2.31 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.503 2 DEBUG oslo.service.loopingcall [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.503 2 DEBUG nova.compute.manager [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.504 2 DEBUG nova.network.neutron [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.860 2 DEBUG nova.compute.manager [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.860 2 DEBUG oslo_concurrency.lockutils [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.861 2 DEBUG oslo_concurrency.lockutils [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.861 2 DEBUG oslo_concurrency.lockutils [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.861 2 DEBUG nova.compute.manager [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] No waiting events found dispatching network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:49:52 np0005465604 nova_compute[260603]: 2025-10-02 08:49:52.862 2 WARNING nova.compute.manager [req-e2c11d70-8211-42e1-8735-85f117883c53 req-43b0f21b-d893-44f3-8aa3-06ef0695ed43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received unexpected event network-vif-plugged-8add63d4-81e0-4c57-a383-69310529c53b for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:49:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.012 2 DEBUG nova.compute.manager [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.012 2 DEBUG oslo_concurrency.lockutils [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.012 2 DEBUG oslo_concurrency.lockutils [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.013 2 DEBUG oslo_concurrency.lockutils [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.013 2 DEBUG nova.compute.manager [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.013 2 WARNING nova.compute.manager [req-67bf16e6-7e0b-475e-be99-e695598f63a5 req-be554120-b52c-426c-a6b8-a0ce4de54b10 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state None.#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.273 2 DEBUG nova.network.neutron [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.306 2 INFO nova.compute.manager [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Took 0.80 seconds to deallocate network for instance.#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.377 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.377 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.465 2 DEBUG oslo_concurrency.processutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:49:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 88 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 158 op/s
Oct  2 04:49:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:49:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646485759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.912 2 DEBUG oslo_concurrency.processutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.918 2 DEBUG nova.compute.provider_tree [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.937 2 DEBUG nova.scheduler.client.report [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:49:53 np0005465604 nova_compute[260603]: 2025-10-02 08:49:53.965 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:54 np0005465604 nova_compute[260603]: 2025-10-02 08:49:54.003 2 INFO nova.scheduler.client.report [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Deleted allocations for instance 5d39bd73-ba92-45a5-9d69-f1f1901aa895#033[00m
Oct  2 04:49:54 np0005465604 nova_compute[260603]: 2025-10-02 08:49:54.098 2 DEBUG oslo_concurrency.lockutils [None req-b35f093d-f2b4-4f77-9a04-cbbc5ad4dffc ba06f614b3ef4ffe8b1a2d1b848554a9 fe0d9bd380934b368dfa78fea95edb29 - - default default] Lock "5d39bd73-ba92-45a5-9d69-f1f1901aa895" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:49:54 np0005465604 nova_compute[260603]: 2025-10-02 08:49:54.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:54 np0005465604 nova_compute[260603]: 2025-10-02 08:49:54.950 2 DEBUG nova.compute.manager [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:54 np0005465604 nova_compute[260603]: 2025-10-02 08:49:54.950 2 DEBUG nova.compute.manager [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing instance network info cache due to event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:49:54 np0005465604 nova_compute[260603]: 2025-10-02 08:49:54.950 2 DEBUG oslo_concurrency.lockutils [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:49:54 np0005465604 nova_compute[260603]: 2025-10-02 08:49:54.950 2 DEBUG oslo_concurrency.lockutils [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:49:54 np0005465604 nova_compute[260603]: 2025-10-02 08:49:54.951 2 DEBUG nova.network.neutron [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:49:55 np0005465604 nova_compute[260603]: 2025-10-02 08:49:55.116 2 DEBUG nova.compute.manager [req-342cf86a-e5b1-4f2e-95f9-d47dcaa79c23 req-5bbe38a5-c62d-4fd8-b2cf-fc1153fc6cba 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Received event network-vif-deleted-8add63d4-81e0-4c57-a383-69310529c53b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:49:55 np0005465604 nova_compute[260603]: 2025-10-02 08:49:55.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:49:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 88 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Oct  2 04:49:56 np0005465604 nova_compute[260603]: 2025-10-02 08:49:56.653 2 DEBUG nova.network.neutron [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updated VIF entry in instance network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:49:56 np0005465604 nova_compute[260603]: 2025-10-02 08:49:56.654 2 DEBUG nova.network.neutron [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:49:56 np0005465604 nova_compute[260603]: 2025-10-02 08:49:56.678 2 DEBUG oslo_concurrency.lockutils [req-3da6d977-7b5b-4f48-8c10-288036663b38 req-3b12d4ce-d937-4dfe-ae13-45e88214a237 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:49:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct  2 04:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:49:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:49:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 117 op/s
Oct  2 04:49:59 np0005465604 nova_compute[260603]: 2025-10-02 08:49:59.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:00 np0005465604 nova_compute[260603]: 2025-10-02 08:50:00.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:00Z|01221|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct  2 04:50:00 np0005465604 nova_compute[260603]: 2025-10-02 08:50:00.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:50:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:50:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 88 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 104 op/s
Oct  2 04:50:01 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  2 04:50:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:02.096 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:50:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:02.097 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:50:02 np0005465604 nova_compute[260603]: 2025-10-02 08:50:02.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:02 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:02 np0005465604 podman[379916]: 2025-10-02 08:50:02.969542231 +0000 UTC m=+0.049060090 container create 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 04:50:03 np0005465604 systemd[1]: Started libpod-conmon-7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b.scope.
Oct  2 04:50:03 np0005465604 podman[379916]: 2025-10-02 08:50:02.946607116 +0000 UTC m=+0.026124945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:50:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:50:03 np0005465604 podman[379916]: 2025-10-02 08:50:03.071787028 +0000 UTC m=+0.151304867 container init 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 04:50:03 np0005465604 podman[379916]: 2025-10-02 08:50:03.084158013 +0000 UTC m=+0.163675852 container start 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:50:03 np0005465604 podman[379916]: 2025-10-02 08:50:03.088167989 +0000 UTC m=+0.167685828 container attach 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:50:03 np0005465604 ecstatic_merkle[379932]: 167 167
Oct  2 04:50:03 np0005465604 systemd[1]: libpod-7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b.scope: Deactivated successfully.
Oct  2 04:50:03 np0005465604 conmon[379932]: conmon 7b8982814b215b24fc0d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b.scope/container/memory.events
Oct  2 04:50:03 np0005465604 podman[379916]: 2025-10-02 08:50:03.095990263 +0000 UTC m=+0.175508122 container died 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:50:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c3cf6c2866a2f68fd2902759bcb50b1496189a9b885a97acb09ee7afadd136fa-merged.mount: Deactivated successfully.
Oct  2 04:50:03 np0005465604 podman[379916]: 2025-10-02 08:50:03.148075866 +0000 UTC m=+0.227593675 container remove 7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 04:50:03 np0005465604 systemd[1]: libpod-conmon-7b8982814b215b24fc0dcc186fe260f5e8e67502c434197e1b52851e4df0052b.scope: Deactivated successfully.
Oct  2 04:50:03 np0005465604 podman[379956]: 2025-10-02 08:50:03.308707092 +0000 UTC m=+0.037273962 container create c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:50:03 np0005465604 systemd[1]: Started libpod-conmon-c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d.scope.
Oct  2 04:50:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:50:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2aab9b04ed6dd1d94db7e5c36114546f4aca655df7c874bfd43892465198df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2aab9b04ed6dd1d94db7e5c36114546f4aca655df7c874bfd43892465198df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2aab9b04ed6dd1d94db7e5c36114546f4aca655df7c874bfd43892465198df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2aab9b04ed6dd1d94db7e5c36114546f4aca655df7c874bfd43892465198df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:03 np0005465604 podman[379956]: 2025-10-02 08:50:03.292381183 +0000 UTC m=+0.020948073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:50:03 np0005465604 podman[379956]: 2025-10-02 08:50:03.399928626 +0000 UTC m=+0.128495576 container init c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 04:50:03 np0005465604 podman[379956]: 2025-10-02 08:50:03.410051521 +0000 UTC m=+0.138618401 container start c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:50:03 np0005465604 podman[379956]: 2025-10-02 08:50:03.413831559 +0000 UTC m=+0.142398479 container attach c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:50:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 115 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Oct  2 04:50:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:03Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:e6:c0 10.100.0.11
Oct  2 04:50:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:03Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:e6:c0 10.100.0.11
Oct  2 04:50:04 np0005465604 nova_compute[260603]: 2025-10-02 08:50:04.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]: [
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:    {
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:        "available": false,
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:        "ceph_device": false,
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:        "lsm_data": {},
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:        "lvs": [],
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:        "path": "/dev/sr0",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:        "rejected_reasons": [
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "Has a FileSystem",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "Insufficient space (<5GB)"
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:        ],
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:        "sys_api": {
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "actuators": null,
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "device_nodes": "sr0",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "devname": "sr0",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "human_readable_size": "482.00 KB",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "id_bus": "ata",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "model": "QEMU DVD-ROM",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "nr_requests": "2",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "parent": "/dev/sr0",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "partitions": {},
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "path": "/dev/sr0",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "removable": "1",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "rev": "2.5+",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "ro": "0",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "rotational": "0",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "sas_address": "",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "sas_device_handle": "",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "scheduler_mode": "mq-deadline",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "sectors": 0,
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "sectorsize": "2048",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "size": 493568.0,
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "support_discard": "2048",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "type": "disk",
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:            "vendor": "QEMU"
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:        }
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]:    }
Oct  2 04:50:04 np0005465604 beautiful_moore[379972]: ]
Oct  2 04:50:04 np0005465604 systemd[1]: libpod-c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d.scope: Deactivated successfully.
Oct  2 04:50:04 np0005465604 podman[379956]: 2025-10-02 08:50:04.81272427 +0000 UTC m=+1.541291140 container died c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 04:50:04 np0005465604 systemd[1]: libpod-c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d.scope: Consumed 1.441s CPU time.
Oct  2 04:50:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7b2aab9b04ed6dd1d94db7e5c36114546f4aca655df7c874bfd43892465198df-merged.mount: Deactivated successfully.
Oct  2 04:50:04 np0005465604 podman[379956]: 2025-10-02 08:50:04.866509487 +0000 UTC m=+1.595076347 container remove c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:50:04 np0005465604 systemd[1]: libpod-conmon-c70236f3d7f9b2158f7a324162141ab1781f9408fe08cd9cfd9fab09ab66d27d.scope: Deactivated successfully.
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:04 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5b62f3a9-4742-4945-a567-2d43f25a8f7e does not exist
Oct  2 04:50:04 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8670188f-637f-4bbf-87af-d53351442408 does not exist
Oct  2 04:50:04 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ecaba74a-f8da-4ee3-a6d6-655a6715fa19 does not exist
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:50:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:50:05 np0005465604 nova_compute[260603]: 2025-10-02 08:50:05.433 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759394990.432356, 5d39bd73-ba92-45a5-9d69-f1f1901aa895 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:50:05 np0005465604 nova_compute[260603]: 2025-10-02 08:50:05.433 2 INFO nova.compute.manager [-] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:50:05 np0005465604 nova_compute[260603]: 2025-10-02 08:50:05.454 2 DEBUG nova.compute.manager [None req-2c0da99a-a2dc-4f3b-beb9-419ec3a3e696 - - - - - -] [instance: 5d39bd73-ba92-45a5-9d69-f1f1901aa895] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:50:05 np0005465604 nova_compute[260603]: 2025-10-02 08:50:05.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 115 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Oct  2 04:50:05 np0005465604 podman[382076]: 2025-10-02 08:50:05.846822352 +0000 UTC m=+0.076275629 container create c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:50:05 np0005465604 systemd[1]: Started libpod-conmon-c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659.scope.
Oct  2 04:50:05 np0005465604 podman[382076]: 2025-10-02 08:50:05.818034265 +0000 UTC m=+0.047487592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:50:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:50:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:50:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:50:05 np0005465604 podman[382076]: 2025-10-02 08:50:05.961678682 +0000 UTC m=+0.191132009 container init c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:50:05 np0005465604 podman[382076]: 2025-10-02 08:50:05.974320926 +0000 UTC m=+0.203774213 container start c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:50:05 np0005465604 podman[382076]: 2025-10-02 08:50:05.978469794 +0000 UTC m=+0.207923141 container attach c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 04:50:05 np0005465604 flamboyant_chaum[382092]: 167 167
Oct  2 04:50:05 np0005465604 systemd[1]: libpod-c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659.scope: Deactivated successfully.
Oct  2 04:50:05 np0005465604 conmon[382092]: conmon c56141c9c1065709bb96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659.scope/container/memory.events
Oct  2 04:50:05 np0005465604 podman[382076]: 2025-10-02 08:50:05.987205517 +0000 UTC m=+0.216658794 container died c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:50:06 np0005465604 systemd[1]: var-lib-containers-storage-overlay-26b29776e7fbfe51b456914e1587bfc46720df1555a887b94ce27d601e1a7524-merged.mount: Deactivated successfully.
Oct  2 04:50:06 np0005465604 podman[382076]: 2025-10-02 08:50:06.063066082 +0000 UTC m=+0.292519369 container remove c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:50:06 np0005465604 systemd[1]: libpod-conmon-c56141c9c1065709bb96d8aeef2dfa2ce8c8df97781ed425c8da13fdaac06659.scope: Deactivated successfully.
Oct  2 04:50:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:06Z|01222|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct  2 04:50:06 np0005465604 podman[382118]: 2025-10-02 08:50:06.291774341 +0000 UTC m=+0.051928010 container create 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:50:06 np0005465604 nova_compute[260603]: 2025-10-02 08:50:06.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:06 np0005465604 systemd[1]: Started libpod-conmon-37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2.scope.
Oct  2 04:50:06 np0005465604 podman[382118]: 2025-10-02 08:50:06.273842181 +0000 UTC m=+0.033995870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:50:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:50:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:06 np0005465604 podman[382118]: 2025-10-02 08:50:06.424829946 +0000 UTC m=+0.184983715 container init 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:50:06 np0005465604 podman[382118]: 2025-10-02 08:50:06.436223252 +0000 UTC m=+0.196376921 container start 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:50:06 np0005465604 podman[382118]: 2025-10-02 08:50:06.440380202 +0000 UTC m=+0.200533911 container attach 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:50:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 732 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct  2 04:50:07 np0005465604 friendly_nash[382133]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:50:07 np0005465604 friendly_nash[382133]: --> relative data size: 1.0
Oct  2 04:50:07 np0005465604 friendly_nash[382133]: --> All data devices are unavailable
Oct  2 04:50:07 np0005465604 systemd[1]: libpod-37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2.scope: Deactivated successfully.
Oct  2 04:50:07 np0005465604 systemd[1]: libpod-37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2.scope: Consumed 1.120s CPU time.
Oct  2 04:50:07 np0005465604 podman[382118]: 2025-10-02 08:50:07.609894124 +0000 UTC m=+1.370047833 container died 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 04:50:07 np0005465604 systemd[1]: var-lib-containers-storage-overlay-167d190fb33d2d97dc8db421d89b06a87f5d566a8505d880d61517e8955895b8-merged.mount: Deactivated successfully.
Oct  2 04:50:07 np0005465604 podman[382118]: 2025-10-02 08:50:07.681984211 +0000 UTC m=+1.442137880 container remove 37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nash, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:50:07 np0005465604 systemd[1]: libpod-conmon-37bba8977df69a4526c3afa54ca59be73bf28e80c8fa3908ef23a677cace2ba2.scope: Deactivated successfully.
Oct  2 04:50:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:08 np0005465604 podman[382317]: 2025-10-02 08:50:08.548932012 +0000 UTC m=+0.079511259 container create 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  2 04:50:08 np0005465604 systemd[1]: Started libpod-conmon-06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00.scope.
Oct  2 04:50:08 np0005465604 podman[382317]: 2025-10-02 08:50:08.517311627 +0000 UTC m=+0.047890944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:50:08 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:50:08 np0005465604 podman[382317]: 2025-10-02 08:50:08.657499607 +0000 UTC m=+0.188078874 container init 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 04:50:08 np0005465604 podman[382317]: 2025-10-02 08:50:08.674678042 +0000 UTC m=+0.205257299 container start 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:50:08 np0005465604 podman[382317]: 2025-10-02 08:50:08.679299986 +0000 UTC m=+0.209879313 container attach 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:50:08 np0005465604 flamboyant_shockley[382333]: 167 167
Oct  2 04:50:08 np0005465604 systemd[1]: libpod-06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00.scope: Deactivated successfully.
Oct  2 04:50:08 np0005465604 podman[382317]: 2025-10-02 08:50:08.689266397 +0000 UTC m=+0.219845644 container died 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:50:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8f269de981be268fe3ab139d5945030c2f083d1c7f0a1c420d7c1fbbc2f17af5-merged.mount: Deactivated successfully.
Oct  2 04:50:08 np0005465604 podman[382317]: 2025-10-02 08:50:08.73461154 +0000 UTC m=+0.265190797 container remove 06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:50:08 np0005465604 systemd[1]: libpod-conmon-06c42e37de359d00072c66a0bffb1dcc1a81c02d0c8a6aec39f8e7d5b1cbcb00.scope: Deactivated successfully.
Oct  2 04:50:08 np0005465604 podman[382358]: 2025-10-02 08:50:08.991803546 +0000 UTC m=+0.055471950 container create b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 04:50:09 np0005465604 systemd[1]: Started libpod-conmon-b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df.scope.
Oct  2 04:50:09 np0005465604 podman[382358]: 2025-10-02 08:50:08.969363777 +0000 UTC m=+0.033032211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:50:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:50:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bdf3409dbd86d27ab71077ad7431d24b1a38585f9b48cdbab417b676e90fa2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bdf3409dbd86d27ab71077ad7431d24b1a38585f9b48cdbab417b676e90fa2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bdf3409dbd86d27ab71077ad7431d24b1a38585f9b48cdbab417b676e90fa2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bdf3409dbd86d27ab71077ad7431d24b1a38585f9b48cdbab417b676e90fa2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:09 np0005465604 podman[382358]: 2025-10-02 08:50:09.115106079 +0000 UTC m=+0.178774493 container init b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:50:09 np0005465604 podman[382358]: 2025-10-02 08:50:09.129679574 +0000 UTC m=+0.193347968 container start b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:50:09 np0005465604 podman[382358]: 2025-10-02 08:50:09.132689307 +0000 UTC m=+0.196357731 container attach b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:50:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:50:09 np0005465604 nova_compute[260603]: 2025-10-02 08:50:09.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]: {
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:    "0": [
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:        {
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "devices": [
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "/dev/loop3"
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            ],
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_name": "ceph_lv0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_size": "21470642176",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "name": "ceph_lv0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "tags": {
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.cluster_name": "ceph",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.crush_device_class": "",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.encrypted": "0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.osd_id": "0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.type": "block",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.vdo": "0"
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            },
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "type": "block",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "vg_name": "ceph_vg0"
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:        }
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:    ],
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:    "1": [
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:        {
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "devices": [
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "/dev/loop4"
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            ],
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_name": "ceph_lv1",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_size": "21470642176",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "name": "ceph_lv1",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "tags": {
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.cluster_name": "ceph",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.crush_device_class": "",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.encrypted": "0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.osd_id": "1",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.type": "block",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.vdo": "0"
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            },
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "type": "block",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "vg_name": "ceph_vg1"
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:        }
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:    ],
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:    "2": [
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:        {
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "devices": [
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "/dev/loop5"
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            ],
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_name": "ceph_lv2",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_size": "21470642176",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "name": "ceph_lv2",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "tags": {
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.cluster_name": "ceph",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.crush_device_class": "",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.encrypted": "0",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.osd_id": "2",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.type": "block",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:                "ceph.vdo": "0"
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            },
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "type": "block",
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:            "vg_name": "ceph_vg2"
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:        }
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]:    ]
Oct  2 04:50:09 np0005465604 practical_chandrasekhar[382374]: }
Oct  2 04:50:09 np0005465604 podman[382358]: 2025-10-02 08:50:09.92749474 +0000 UTC m=+0.991163144 container died b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 04:50:09 np0005465604 systemd[1]: libpod-b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df.scope: Deactivated successfully.
Oct  2 04:50:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1bdf3409dbd86d27ab71077ad7431d24b1a38585f9b48cdbab417b676e90fa2f-merged.mount: Deactivated successfully.
Oct  2 04:50:09 np0005465604 podman[382358]: 2025-10-02 08:50:09.983488655 +0000 UTC m=+1.047157059 container remove b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:50:09 np0005465604 systemd[1]: libpod-conmon-b744bfc10c729ea252810495415251da1d0f4082d929b43941f10698112e71df.scope: Deactivated successfully.
Oct  2 04:50:10 np0005465604 nova_compute[260603]: 2025-10-02 08:50:10.035 2 INFO nova.compute.manager [None req-62e34207-d7bd-4cdd-bc8f-52699248f1b8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Get console output#033[00m
Oct  2 04:50:10 np0005465604 nova_compute[260603]: 2025-10-02 08:50:10.044 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:50:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:10.099 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:10 np0005465604 nova_compute[260603]: 2025-10-02 08:50:10.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:10 np0005465604 podman[382532]: 2025-10-02 08:50:10.704883469 +0000 UTC m=+0.071511070 container create ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:50:10 np0005465604 systemd[1]: Started libpod-conmon-ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee.scope.
Oct  2 04:50:10 np0005465604 podman[382532]: 2025-10-02 08:50:10.676493955 +0000 UTC m=+0.043121606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:50:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:50:10 np0005465604 podman[382532]: 2025-10-02 08:50:10.820420931 +0000 UTC m=+0.187048582 container init ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 04:50:10 np0005465604 podman[382532]: 2025-10-02 08:50:10.828192293 +0000 UTC m=+0.194819864 container start ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:50:10 np0005465604 podman[382532]: 2025-10-02 08:50:10.831459785 +0000 UTC m=+0.198087426 container attach ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:50:10 np0005465604 upbeat_goldberg[382548]: 167 167
Oct  2 04:50:10 np0005465604 systemd[1]: libpod-ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee.scope: Deactivated successfully.
Oct  2 04:50:10 np0005465604 podman[382532]: 2025-10-02 08:50:10.833627502 +0000 UTC m=+0.200255073 container died ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:50:10 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b034d43316f54d882f8c54ac9b58d7b608eb24a58a4f6bbed67e07adde9bc807-merged.mount: Deactivated successfully.
Oct  2 04:50:10 np0005465604 podman[382532]: 2025-10-02 08:50:10.872947318 +0000 UTC m=+0.239574919 container remove ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goldberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:50:10 np0005465604 systemd[1]: libpod-conmon-ff57193b1f05c8ea02a84025e1d3ed4bb2b179ce4618ce654d472aa4e8b9d1ee.scope: Deactivated successfully.
Oct  2 04:50:11 np0005465604 podman[382571]: 2025-10-02 08:50:11.124449707 +0000 UTC m=+0.068732763 container create 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:50:11 np0005465604 systemd[1]: Started libpod-conmon-602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442.scope.
Oct  2 04:50:11 np0005465604 podman[382571]: 2025-10-02 08:50:11.097036573 +0000 UTC m=+0.041319679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:50:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:50:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb4136ab49e5e455510e49384d704c31824faf1523c3abd554c8ff68cd070e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb4136ab49e5e455510e49384d704c31824faf1523c3abd554c8ff68cd070e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb4136ab49e5e455510e49384d704c31824faf1523c3abd554c8ff68cd070e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb4136ab49e5e455510e49384d704c31824faf1523c3abd554c8ff68cd070e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:11 np0005465604 podman[382571]: 2025-10-02 08:50:11.241537846 +0000 UTC m=+0.185820942 container init 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:50:11 np0005465604 podman[382571]: 2025-10-02 08:50:11.253496239 +0000 UTC m=+0.197779295 container start 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 04:50:11 np0005465604 podman[382571]: 2025-10-02 08:50:11.258573968 +0000 UTC m=+0.202857024 container attach 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:50:11 np0005465604 nova_compute[260603]: 2025-10-02 08:50:11.355 2 INFO nova.compute.manager [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Rebuilding instance#033[00m
Oct  2 04:50:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:50:11 np0005465604 nova_compute[260603]: 2025-10-02 08:50:11.620 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:50:11 np0005465604 nova_compute[260603]: 2025-10-02 08:50:11.642 2 DEBUG nova.compute.manager [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:50:11 np0005465604 nova_compute[260603]: 2025-10-02 08:50:11.695 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:50:11 np0005465604 nova_compute[260603]: 2025-10-02 08:50:11.711 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:50:11 np0005465604 nova_compute[260603]: 2025-10-02 08:50:11.727 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:50:11 np0005465604 nova_compute[260603]: 2025-10-02 08:50:11.740 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:50:11 np0005465604 nova_compute[260603]: 2025-10-02 08:50:11.752 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:50:11 np0005465604 nova_compute[260603]: 2025-10-02 08:50:11.757 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]: {
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "osd_id": 2,
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "type": "bluestore"
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:    },
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "osd_id": 1,
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "type": "bluestore"
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:    },
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "osd_id": 0,
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:        "type": "bluestore"
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]:    }
Oct  2 04:50:12 np0005465604 hopeful_kapitsa[382587]: }
Oct  2 04:50:12 np0005465604 systemd[1]: libpod-602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442.scope: Deactivated successfully.
Oct  2 04:50:12 np0005465604 podman[382571]: 2025-10-02 08:50:12.383367295 +0000 UTC m=+1.327650311 container died 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:50:12 np0005465604 systemd[1]: libpod-602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442.scope: Consumed 1.133s CPU time.
Oct  2 04:50:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bdb4136ab49e5e455510e49384d704c31824faf1523c3abd554c8ff68cd070e6-merged.mount: Deactivated successfully.
Oct  2 04:50:12 np0005465604 podman[382571]: 2025-10-02 08:50:12.43099906 +0000 UTC m=+1.375282076 container remove 602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 04:50:12 np0005465604 systemd[1]: libpod-conmon-602de9b5e095814f2fef118a6f063d8492c71864a18b22625cbe020ddb348442.scope: Deactivated successfully.
Oct  2 04:50:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:50:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:50:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:12 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8f87997f-15bc-4d2f-b6d5-6ccdf8bbb3e5 does not exist
Oct  2 04:50:12 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev bc63ba61-d6a6-4bbf-aef5-94461a7f65a9 does not exist
Oct  2 04:50:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:50:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct  2 04:50:13 np0005465604 nova_compute[260603]: 2025-10-02 08:50:13.936 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:50:13 np0005465604 nova_compute[260603]: 2025-10-02 08:50:13.961 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 04:50:13 np0005465604 nova_compute[260603]: 2025-10-02 08:50:13.962 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:13 np0005465604 nova_compute[260603]: 2025-10-02 08:50:13.962 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:13 np0005465604 nova_compute[260603]: 2025-10-02 08:50:13.963 2 INFO nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] During sync_power_state the instance has a pending task (rebuilding). Skip.#033[00m
Oct  2 04:50:13 np0005465604 nova_compute[260603]: 2025-10-02 08:50:13.963 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:14 np0005465604 kernel: tapac5b8c6c-8d (unregistering): left promiscuous mode
Oct  2 04:50:14 np0005465604 NetworkManager[45129]: <info>  [1759395014.1013] device (tapac5b8c6c-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01223|binding|INFO|Releasing lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da from this chassis (sb_readonly=0)
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01224|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da down in Southbound
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01225|binding|INFO|Removing iface tapac5b8c6c-8d ovn-installed in OVS
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.121 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.123 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 unbound from our chassis#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.128 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ceec3f2-374d-41aa-a1df-26773af00fd7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.129 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a60d360-e4b7-4778-824b-2853845d5aee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.130 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 namespace which is not needed anymore#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct  2 04:50:14 np0005465604 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Consumed 12.960s CPU time.
Oct  2 04:50:14 np0005465604 systemd-machined[214636]: Machine qemu-148-instance-00000076 terminated.
Oct  2 04:50:14 np0005465604 podman[382686]: 2025-10-02 08:50:14.207570143 +0000 UTC m=+0.076063482 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct  2 04:50:14 np0005465604 podman[382682]: 2025-10-02 08:50:14.219642779 +0000 UTC m=+0.095242289 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  2 04:50:14 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [NOTICE]   (379429) : haproxy version is 2.8.14-c23fe91
Oct  2 04:50:14 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [NOTICE]   (379429) : path to executable is /usr/sbin/haproxy
Oct  2 04:50:14 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [ALERT]    (379429) : Current worker (379431) exited with code 143 (Terminated)
Oct  2 04:50:14 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[379425]: [WARNING]  (379429) : All workers exited. Exiting... (0)
Oct  2 04:50:14 np0005465604 systemd[1]: libpod-c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957.scope: Deactivated successfully.
Oct  2 04:50:14 np0005465604 podman[382750]: 2025-10-02 08:50:14.285543623 +0000 UTC m=+0.047232773 container died c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:50:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957-userdata-shm.mount: Deactivated successfully.
Oct  2 04:50:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d37bbf101800d2f34c5533d4db9bfd73f0e4cf42dd09496c7c2a5063e08e8353-merged.mount: Deactivated successfully.
Oct  2 04:50:14 np0005465604 kernel: tapac5b8c6c-8d: entered promiscuous mode
Oct  2 04:50:14 np0005465604 NetworkManager[45129]: <info>  [1759395014.3334] manager: (tapac5b8c6c-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/482)
Oct  2 04:50:14 np0005465604 systemd-udevd[382715]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 kernel: tapac5b8c6c-8d (unregistering): left promiscuous mode
Oct  2 04:50:14 np0005465604 podman[382750]: 2025-10-02 08:50:14.370056967 +0000 UTC m=+0.131746107 container cleanup c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01226|binding|INFO|Claiming lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for this chassis.
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01227|binding|INFO|ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da: Claiming fa:16:3e:4a:e6:c0 10.100.0.11
Oct  2 04:50:14 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.379 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:50:14 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct  2 04:50:14 np0005465604 systemd[1]: libpod-conmon-c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957.scope: Deactivated successfully.
Oct  2 04:50:14 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct  2 04:50:14 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01228|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da ovn-installed in OVS
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01229|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da up in Southbound
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01230|binding|INFO|Releasing lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da from this chassis (sb_readonly=1)
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01231|binding|INFO|Removing iface tapac5b8c6c-8d ovn-installed in OVS
Oct  2 04:50:14 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01232|if_status|INFO|Dropped 2 log messages in last 593 seconds (most recently, 593 seconds ago) due to excessive rate
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01233|if_status|INFO|Not setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da down as sb is readonly
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01234|binding|INFO|Releasing lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da from this chassis (sb_readonly=0)
Oct  2 04:50:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:14Z|01235|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da down in Southbound
Oct  2 04:50:14 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.410 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:50:14 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 virtnodedevd[260919]: ethtool ioctl error on tapac5b8c6c-8d: No such device
Oct  2 04:50:14 np0005465604 podman[382786]: 2025-10-02 08:50:14.458718291 +0000 UTC m=+0.057380530 container remove c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.467 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[66023eb1-d838-42c6-8b55-8698448d288e]: (4, ('Thu Oct  2 08:50:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 (c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957)\nc6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957\nThu Oct  2 08:50:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 (c6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957)\nc6cda32dc8be24d3f3804f6daf420572134ff8b3d777c4d913cb62580ef63957\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.470 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1722dd0-01b2-4848-aa06-e729af998e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.471 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ceec3f2-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 kernel: tap9ceec3f2-30: left promiscuous mode
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.491 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d5844144-2517-4c72-b705-ed6808908010]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.518 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[300e3f10-7c71-4417-bbb2-2d50884894d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.519 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b0b92c-d3d3-4929-aee2-5140f8056497]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.537 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd164ea1-542e-4257-aa7d-b4fb2625a1fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586882, 'reachable_time': 18168, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382820, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.540 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:50:14 np0005465604 systemd[1]: run-netns-ovnmeta\x2d9ceec3f2\x2d374d\x2d41aa\x2da1df\x2d26773af00fd7.mount: Deactivated successfully.
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.540 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a79f6504-f9f9-434e-88e3-2244e4aed85f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.540 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 unbound from our chassis#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.541 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ceec3f2-374d-41aa-a1df-26773af00fd7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.542 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[17e9d969-2ac6-4484-90f0-c6926c202b04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.542 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 unbound from our chassis#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.543 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ceec3f2-374d-41aa-a1df-26773af00fd7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:50:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:14.543 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa98740-9096-4017-b253-b1b23223dbe0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.589 2 DEBUG nova.compute.manager [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.589 2 DEBUG oslo_concurrency.lockutils [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.590 2 DEBUG oslo_concurrency.lockutils [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.591 2 DEBUG oslo_concurrency.lockutils [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.591 2 DEBUG nova.compute.manager [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.591 2 WARNING nova.compute.manager [req-fef72a42-6e69-4031-9a10-708f25d07be3 req-3be58cee-9bc2-4619-85e2-d4e5517e16a0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuilding.#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.781 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.793 2 INFO nova.virt.libvirt.driver [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance destroyed successfully.#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.799 2 INFO nova.virt.libvirt.driver [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance destroyed successfully.#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.800 2 DEBUG nova.virt.libvirt.vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:49:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:50:10Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.800 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.801 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.801 2 DEBUG os_vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac5b8c6c-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:14 np0005465604 nova_compute[260603]: 2025-10-02 08:50:14.808 2 INFO os_vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d')#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.199 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deleting instance files /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_del#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.200 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deletion of /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_del complete#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.373 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.374 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating image(s)#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.399 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.429 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.456 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.462 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 121 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 117 KiB/s wr, 22 op/s
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.590 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.592 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.594 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.594 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "c0fdc067b2937ea086be0c187b6d99f3c486af28" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.630 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:15 np0005465604 nova_compute[260603]: 2025-10-02 08:50:15.638 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.031 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.117 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.207 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.208 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Ensure instance console log exists: /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.209 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.209 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.210 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.212 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start _get_guest_xml network_info=[{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.217 2 WARNING nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.224 2 DEBUG nova.virt.libvirt.host [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.224 2 DEBUG nova.virt.libvirt.host [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.228 2 DEBUG nova.virt.libvirt.host [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.228 2 DEBUG nova.virt.libvirt.host [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.229 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.229 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:22Z,direct_url=<?>,disk_format='qcow2',id=eeb8c9a4-e143-4b44-a997-e04d544bc537,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:23Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.229 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.230 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.230 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.230 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.231 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.231 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.231 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.232 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.232 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.232 2 DEBUG nova.virt.hardware [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.232 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.249 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:50:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:50:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1970559105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:50:16 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.688 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.689 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.689 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.690 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.690 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.690 2 WARNING nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.690 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.691 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.691 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.691 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.691 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.692 2 WARNING nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.692 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.692 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.692 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.692 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.693 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.693 2 WARNING nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.693 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.693 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.694 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.694 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.694 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.694 2 WARNING nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.695 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.695 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.695 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.695 2 DEBUG oslo_concurrency.lockutils [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.696 2 DEBUG nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.696 2 WARNING nova.compute.manager [req-c3fd959d-6d95-4f75-804e-92f8428aa183 req-9ff94818-9f2d-44eb-9b17-7840c4973f7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.696 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.716 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 04:50:16 np0005465604 nova_compute[260603]: 2025-10-02 08:50:16.720 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 04:50:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:50:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921506983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.199 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.201 2 DEBUG nova.virt.libvirt.vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:49:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:50:15Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.201 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.202 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.204 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  <uuid>9d79c6a7-edec-4a60-af9f-7e1401bb9a64</uuid>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  <name>instance-00000076</name>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1109879286</nova:name>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:50:16</nova:creationTime>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="eeb8c9a4-e143-4b44-a997-e04d544bc537"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <nova:port uuid="ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <entry name="serial">9d79c6a7-edec-4a60-af9f-7e1401bb9a64</entry>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <entry name="uuid">9d79c6a7-edec-4a60-af9f-7e1401bb9a64</entry>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:4a:e6:c0"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <target dev="tapac5b8c6c-8d"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/console.log" append="off"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:50:17 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:50:17 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:50:17 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:50:17 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.206 2 DEBUG nova.virt.libvirt.vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:49:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:50:15Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.206 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.206 2 DEBUG nova.network.os_vif_util [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.207 2 DEBUG os_vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac5b8c6c-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.212 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac5b8c6c-8d, col_values=(('external_ids', {'iface-id': 'ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:e6:c0', 'vm-uuid': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:17 np0005465604 NetworkManager[45129]: <info>  [1759395017.2151] manager: (tapac5b8c6c-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.223 2 INFO os_vif [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d')#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.290 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.290 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.290 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:4a:e6:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.291 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Using config drive#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.315 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.338 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.363 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'keypairs' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:50:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 117 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.757 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Creating config drive at /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.761 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuj986w_1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.911 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuj986w_1" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.947 2 DEBUG nova.storage.rbd_utils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:17 np0005465604 nova_compute[260603]: 2025-10-02 08:50:17.952 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.126 2 DEBUG oslo_concurrency.processutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config 9d79c6a7-edec-4a60-af9f-7e1401bb9a64_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.128 2 INFO nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deleting local config drive /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64/disk.config because it was imported into RBD.#033[00m
Oct  2 04:50:18 np0005465604 kernel: tapac5b8c6c-8d: entered promiscuous mode
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:18 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:18Z|01236|binding|INFO|Claiming lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for this chassis.
Oct  2 04:50:18 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:18Z|01237|binding|INFO|ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da: Claiming fa:16:3e:4a:e6:c0 10.100.0.11
Oct  2 04:50:18 np0005465604 NetworkManager[45129]: <info>  [1759395018.2170] manager: (tapac5b8c6c-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/484)
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.227 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '7', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.228 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 bound to our chassis#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.230 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ceec3f2-374d-41aa-a1df-26773af00fd7#033[00m
Oct  2 04:50:18 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:18Z|01238|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da ovn-installed in OVS
Oct  2 04:50:18 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:18Z|01239|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da up in Southbound
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.247 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff382342-4245-4b44-95bf-ca89ac9a969c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.248 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ceec3f2-31 in ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.250 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ceec3f2-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.250 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[90003908-baf5-4885-8b81-e340303b453c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.251 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6fd401-b748-40bb-8146-b108c17639fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.270 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[683c7532-0388-44a6-8ea2-fcba3e6ff38e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 systemd-machined[214636]: New machine qemu-149-instance-00000076.
Oct  2 04:50:18 np0005465604 systemd[1]: Started Virtual Machine qemu-149-instance-00000076.
Oct  2 04:50:18 np0005465604 systemd-udevd[383145]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.296 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ef92e6-943c-4c8e-bc7a-8907f82c01e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 NetworkManager[45129]: <info>  [1759395018.3103] device (tapac5b8c6c-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:50:18 np0005465604 NetworkManager[45129]: <info>  [1759395018.3154] device (tapac5b8c6c-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.329 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7e60b02f-3f93-479f-9cb1-5f844630a2b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 NetworkManager[45129]: <info>  [1759395018.3367] manager: (tap9ceec3f2-30): new Veth device (/org/freedesktop/NetworkManager/Devices/485)
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.335 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb6e294-741f-4a7d-abde-331491b33e3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.374 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[721862ff-03bc-4967-9174-049467a9809b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.377 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1b18d1a1-f4ef-4c17-8082-74bbb33db28a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 NetworkManager[45129]: <info>  [1759395018.4009] device (tap9ceec3f2-30): carrier: link connected
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.408 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cf12e2fe-d5cd-4c3b-93f6-44155b81f6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.424 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ff38eda5-db3a-421f-9cff-5748e7aa008f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ceec3f2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:37:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589700, 'reachable_time': 28367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383175, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.442 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dea6a320-98e8-44fb-a439-dfc2430292d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:3769'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589700, 'tstamp': 589700}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383176, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.460 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[81cbb336-8628-493a-b932-7049b575a4b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ceec3f2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:37:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589700, 'reachable_time': 28367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383177, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.491 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[96951f0c-283a-4a65-8651-c9e2280af6e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.552 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1357741e-6786-4389-952e-23777d2bdfa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.554 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ceec3f2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.554 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.554 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ceec3f2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:18 np0005465604 NetworkManager[45129]: <info>  [1759395018.5571] manager: (tap9ceec3f2-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Oct  2 04:50:18 np0005465604 kernel: tap9ceec3f2-30: entered promiscuous mode
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.561 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ceec3f2-30, col_values=(('external_ids', {'iface-id': '251c75ac-afb0-42a1-a89b-23c6beb7b3f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:18 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:18Z|01240|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.584 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.584 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[edc36f49-752c-4158-afcd-9e7e968a0f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.585 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-9ceec3f2-374d-41aa-a1df-26773af00fd7
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/9ceec3f2-374d-41aa-a1df-26773af00fd7.pid.haproxy
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 9ceec3f2-374d-41aa-a1df-26773af00fd7
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:50:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:18.586 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'env', 'PROCESS_TAG=haproxy-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ceec3f2-374d-41aa-a1df-26773af00fd7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.791 2 DEBUG nova.compute.manager [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.791 2 DEBUG oslo_concurrency.lockutils [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.792 2 DEBUG oslo_concurrency.lockutils [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.792 2 DEBUG oslo_concurrency.lockutils [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.793 2 DEBUG nova.compute.manager [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:50:18 np0005465604 nova_compute[260603]: 2025-10-02 08:50:18.793 2 WARNING nova.compute.manager [req-e6ad55d9-5e72-4919-aa15-5f50988d80ed req-346108e3-ccaa-4e47-8112-3e16327fead5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 04:50:19 np0005465604 podman[383251]: 2025-10-02 08:50:19.070969339 +0000 UTC m=+0.058977760 container create a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.085 2 DEBUG nova.compute.manager [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.086 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.086 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.087 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395019.0855837, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.087 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.091 2 INFO nova.virt.libvirt.driver [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance spawned successfully.#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.091 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.109 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.113 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.114 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.114 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.114 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.115 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.115 2 DEBUG nova.virt.libvirt.driver [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.119 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:50:19 np0005465604 systemd[1]: Started libpod-conmon-a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677.scope.
Oct  2 04:50:19 np0005465604 podman[383251]: 2025-10-02 08:50:19.040424927 +0000 UTC m=+0.028433378 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.151 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.151 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395019.085648, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.152 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Started (Lifecycle Event)#033[00m
Oct  2 04:50:19 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:50:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/784dcec7337b162a43bf6659c2be12ccdf58165f84be64f967db39bb20ec5671/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.172 2 DEBUG nova.compute.manager [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.174 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.181 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:50:19 np0005465604 podman[383251]: 2025-10-02 08:50:19.18139195 +0000 UTC m=+0.169400381 container init a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:50:19 np0005465604 podman[383251]: 2025-10-02 08:50:19.18713893 +0000 UTC m=+0.175147341 container start a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:50:19 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [NOTICE]   (383270) : New worker (383272) forked
Oct  2 04:50:19 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [NOTICE]   (383270) : Loading success.
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.215 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.240 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.240 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.240 2 DEBUG nova.objects.instance [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.301 2 DEBUG oslo_concurrency.lockutils [None req-2fa4548d-0bd4-45f5-8a08-e217a5f04711 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct  2 04:50:19 np0005465604 nova_compute[260603]: 2025-10-02 08:50:19.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:20 np0005465604 nova_compute[260603]: 2025-10-02 08:50:20.917 2 DEBUG nova.compute.manager [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:50:20 np0005465604 nova_compute[260603]: 2025-10-02 08:50:20.918 2 DEBUG oslo_concurrency.lockutils [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:20 np0005465604 nova_compute[260603]: 2025-10-02 08:50:20.918 2 DEBUG oslo_concurrency.lockutils [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:20 np0005465604 nova_compute[260603]: 2025-10-02 08:50:20.919 2 DEBUG oslo_concurrency.lockutils [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:20 np0005465604 nova_compute[260603]: 2025-10-02 08:50:20.919 2 DEBUG nova.compute.manager [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:50:20 np0005465604 nova_compute[260603]: 2025-10-02 08:50:20.920 2 WARNING nova.compute.manager [req-26f6a5ec-834d-406a-bd77-cb39000fdfd0 req-381e8944-459f-4328-9562-4326f39de6fe 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state active and task_state None.#033[00m
Oct  2 04:50:20 np0005465604 podman[383282]: 2025-10-02 08:50:20.994239354 +0000 UTC m=+0.063268883 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:50:21 np0005465604 podman[383281]: 2025-10-02 08:50:21.010211602 +0000 UTC m=+0.072147490 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:50:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Oct  2 04:50:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:50:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2010036876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:50:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:50:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2010036876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:50:22 np0005465604 nova_compute[260603]: 2025-10-02 08:50:22.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Oct  2 04:50:24 np0005465604 nova_compute[260603]: 2025-10-02 08:50:24.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:24 np0005465604 nova_compute[260603]: 2025-10-02 08:50:24.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct  2 04:50:26 np0005465604 nova_compute[260603]: 2025-10-02 08:50:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:50:26 np0005465604 nova_compute[260603]: 2025-10-02 08:50:26.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:50:27 np0005465604 nova_compute[260603]: 2025-10-02 08:50:27.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct  2 04:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:50:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:50:28
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'vms', 'volumes', 'images', '.mgr', 'backups']
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:50:29 np0005465604 nova_compute[260603]: 2025-10-02 08:50:29.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 86 KiB/s wr, 95 op/s
Oct  2 04:50:29 np0005465604 nova_compute[260603]: 2025-10-02 08:50:29.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 88 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 70 op/s
Oct  2 04:50:32 np0005465604 nova_compute[260603]: 2025-10-02 08:50:32.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:32 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:32Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:e6:c0 10.100.0.11
Oct  2 04:50:32 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:32Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:e6:c0 10.100.0.11
Oct  2 04:50:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 109 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Oct  2 04:50:34 np0005465604 nova_compute[260603]: 2025-10-02 08:50:34.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:34.831 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:34.832 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:34.833 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 109 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 187 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Oct  2 04:50:36 np0005465604 nova_compute[260603]: 2025-10-02 08:50:36.431 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:36 np0005465604 nova_compute[260603]: 2025-10-02 08:50:36.432 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:36 np0005465604 nova_compute[260603]: 2025-10-02 08:50:36.466 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:50:36 np0005465604 nova_compute[260603]: 2025-10-02 08:50:36.647 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:36 np0005465604 nova_compute[260603]: 2025-10-02 08:50:36.648 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:36 np0005465604 nova_compute[260603]: 2025-10-02 08:50:36.658 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:50:36 np0005465604 nova_compute[260603]: 2025-10-02 08:50:36.659 2 INFO nova.compute.claims [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:50:36 np0005465604 nova_compute[260603]: 2025-10-02 08:50:36.920 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:50:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949143917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.398 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.405 2 DEBUG nova.compute.provider_tree [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.421 2 DEBUG nova.scheduler.client.report [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.444 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.445 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.499 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.500 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.524 2 INFO nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:50:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 117 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.540 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.654 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.655 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.655 2 INFO nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Creating image(s)#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.682 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.709 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.735 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.739 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.807 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.808 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.810 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.810 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.845 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:37 np0005465604 nova_compute[260603]: 2025-10-02 08:50:37.850 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:38 np0005465604 nova_compute[260603]: 2025-10-02 08:50:38.437 2 DEBUG nova.policy [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aceb8b0273154f1abe964d78a6261936', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:50:38 np0005465604 nova_compute[260603]: 2025-10-02 08:50:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:50:38 np0005465604 nova_compute[260603]: 2025-10-02 08:50:38.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:50:38 np0005465604 nova_compute[260603]: 2025-10-02 08:50:38.526 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:38 np0005465604 nova_compute[260603]: 2025-10-02 08:50:38.619 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] resizing rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000754758111121964 of space, bias 1.0, pg target 0.22642743333658918 quantized to 32 (current 32)
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:50:38 np0005465604 nova_compute[260603]: 2025-10-02 08:50:38.864 2 DEBUG nova.objects.instance [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'migration_context' on Instance uuid d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:50:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:50:38 np0005465604 nova_compute[260603]: 2025-10-02 08:50:38.962 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:50:38 np0005465604 nova_compute[260603]: 2025-10-02 08:50:38.963 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Ensure instance console log exists: /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:50:38 np0005465604 nova_compute[260603]: 2025-10-02 08:50:38.964 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:38 np0005465604 nova_compute[260603]: 2025-10-02 08:50:38.965 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:38 np0005465604 nova_compute[260603]: 2025-10-02 08:50:38.966 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 121 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  2 04:50:39 np0005465604 nova_compute[260603]: 2025-10-02 08:50:39.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:39 np0005465604 nova_compute[260603]: 2025-10-02 08:50:39.921 2 INFO nova.compute.manager [None req-df2a7b0e-c963-487f-8157-7cd0aa470225 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Get console output#033[00m
Oct  2 04:50:39 np0005465604 nova_compute[260603]: 2025-10-02 08:50:39.930 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:50:40 np0005465604 nova_compute[260603]: 2025-10-02 08:50:40.415 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Successfully created port: 236df88e-d54e-410f-9a5c-cf5fa95debd1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:50:40 np0005465604 nova_compute[260603]: 2025-10-02 08:50:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:50:40 np0005465604 nova_compute[260603]: 2025-10-02 08:50:40.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:50:40 np0005465604 nova_compute[260603]: 2025-10-02 08:50:40.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:50:40 np0005465604 nova_compute[260603]: 2025-10-02 08:50:40.553 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 04:50:40 np0005465604 nova_compute[260603]: 2025-10-02 08:50:40.791 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:50:40 np0005465604 nova_compute[260603]: 2025-10-02 08:50:40.792 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:50:40 np0005465604 nova_compute[260603]: 2025-10-02 08:50:40.793 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:50:40 np0005465604 nova_compute[260603]: 2025-10-02 08:50:40.793 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.131 2 DEBUG nova.compute.manager [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.132 2 DEBUG nova.compute.manager [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing instance network info cache due to event network-changed-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.133 2 DEBUG oslo_concurrency.lockutils [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.288 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.289 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.290 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.290 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.291 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.293 2 INFO nova.compute.manager [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Terminating instance#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.295 2 DEBUG nova.compute.manager [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:50:41 np0005465604 kernel: tapac5b8c6c-8d (unregistering): left promiscuous mode
Oct  2 04:50:41 np0005465604 NetworkManager[45129]: <info>  [1759395041.3603] device (tapac5b8c6c-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:41Z|01241|binding|INFO|Releasing lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da from this chassis (sb_readonly=0)
Oct  2 04:50:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:41Z|01242|binding|INFO|Setting lport ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da down in Southbound
Oct  2 04:50:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:41Z|01243|binding|INFO|Removing iface tapac5b8c6c-8d ovn-installed in OVS
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:41Z|01244|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.392 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:e6:c0 10.100.0.11'], port_security=['fa:16:3e:4a:e6:c0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9d79c6a7-edec-4a60-af9f-7e1401bb9a64', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '8', 'neutron:security_group_ids': '42b6327c-2fd8-40d7-ba08-912b6664d7e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=250114f0-0578-4b19-ab29-d2499c25ead0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.393 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da in datapath 9ceec3f2-374d-41aa-a1df-26773af00fd7 unbound from our chassis#033[00m
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.396 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ceec3f2-374d-41aa-a1df-26773af00fd7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.397 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9a892080-696f-490c-b88c-be52912aac2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.398 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 namespace which is not needed anymore#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 121 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  2 04:50:41 np0005465604 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct  2 04:50:41 np0005465604 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000076.scope: Consumed 13.178s CPU time.
Oct  2 04:50:41 np0005465604 systemd-machined[214636]: Machine qemu-149-instance-00000076 terminated.
Oct  2 04:50:41 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [NOTICE]   (383270) : haproxy version is 2.8.14-c23fe91
Oct  2 04:50:41 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [NOTICE]   (383270) : path to executable is /usr/sbin/haproxy
Oct  2 04:50:41 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [WARNING]  (383270) : Exiting Master process...
Oct  2 04:50:41 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [WARNING]  (383270) : Exiting Master process...
Oct  2 04:50:41 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [ALERT]    (383270) : Current worker (383272) exited with code 143 (Terminated)
Oct  2 04:50:41 np0005465604 neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7[383266]: [WARNING]  (383270) : All workers exited. Exiting... (0)
Oct  2 04:50:41 np0005465604 systemd[1]: libpod-a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677.scope: Deactivated successfully.
Oct  2 04:50:41 np0005465604 podman[383537]: 2025-10-02 08:50:41.576014687 +0000 UTC m=+0.067586158 container died a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:50:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:41Z|01245|binding|INFO|Releasing lport 251c75ac-afb0-42a1-a89b-23c6beb7b3f6 from this chassis (sb_readonly=0)
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-784dcec7337b162a43bf6659c2be12ccdf58165f84be64f967db39bb20ec5671-merged.mount: Deactivated successfully.
Oct  2 04:50:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677-userdata-shm.mount: Deactivated successfully.
Oct  2 04:50:41 np0005465604 podman[383537]: 2025-10-02 08:50:41.621795183 +0000 UTC m=+0.113366644 container cleanup a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 04:50:41 np0005465604 systemd[1]: libpod-conmon-a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677.scope: Deactivated successfully.
Oct  2 04:50:41 np0005465604 podman[383567]: 2025-10-02 08:50:41.686457649 +0000 UTC m=+0.043488646 container remove a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.695 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a379b2-4343-4318-85d6-f4c313c6a8ba]: (4, ('Thu Oct  2 08:50:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 (a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677)\na6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677\nThu Oct  2 08:50:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 (a6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677)\na6a2c9813bc8ff891882e48c4a9c8fe956ea6160e1749d6a1383481d9e005677\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.696 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c949c3d-922a-4876-9d5f-8c22c793e2c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.697 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ceec3f2-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:41 np0005465604 kernel: tap9ceec3f2-30: left promiscuous mode
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.716 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9f9edf-1f47-497e-9fc3-e3bff2879afe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:41 np0005465604 NetworkManager[45129]: <info>  [1759395041.7206] manager: (tapac5b8c6c-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/487)
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.740 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[126b60cf-557c-4b1a-b805-66ca636611ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.741 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0d28ab-f27c-4a7d-9a95-f9a670d07534]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.740 2 INFO nova.virt.libvirt.driver [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Instance destroyed successfully.#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.740 2 DEBUG nova.objects.instance [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.754 2 DEBUG nova.virt.libvirt.vif [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1109879286',display_name='tempest-TestNetworkAdvancedServerOps-server-1109879286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1109879286',id=118,image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+sVd8qjo0SYazKDVirRw3g5x5Vnd8QovbWt8+eK0+qPC8DTt4OfcPnD5uSpSSNv9pvlZO4xpKtw3lzuxGU0DKfNjlzDyLY6S9ZZDG7PGtaZb0/k3fS7lf7qHC3kDB/wg==',key_name='tempest-TestNetworkAdvancedServerOps-514118472',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:50:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-552rpvj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='eeb8c9a4-e143-4b44-a997-e04d544bc537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:50:19Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=9d79c6a7-edec-4a60-af9f-7e1401bb9a64,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.755 2 DEBUG nova.network.os_vif_util [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.755 2 DEBUG nova.network.os_vif_util [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.756 2 DEBUG os_vif [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.755 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[41b679ca-aa1b-4a58-ad9b-c87ecb44e4fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589692, 'reachable_time': 16748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383593, 'error': None, 'target': 'ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.759 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac5b8c6c-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:41 np0005465604 systemd[1]: run-netns-ovnmeta\x2d9ceec3f2\x2d374d\x2d41aa\x2da1df\x2d26773af00fd7.mount: Deactivated successfully.
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.758 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ceec3f2-374d-41aa-a1df-26773af00fd7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:50:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:41.758 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4620b791-48c9-4548-ac8e-7ba7c8ee43d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.768 2 INFO os_vif [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:e6:c0,bridge_name='br-int',has_traffic_filtering=True,id=ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da,network=Network(9ceec3f2-374d-41aa-a1df-26773af00fd7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5b8c6c-8d')#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.797 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Successfully updated port: 236df88e-d54e-410f-9a5c-cf5fa95debd1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.826 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.826 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquired lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.826 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.915 2 DEBUG nova.compute.manager [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.915 2 DEBUG nova.compute.manager [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing instance network info cache due to event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:50:41 np0005465604 nova_compute[260603]: 2025-10-02 08:50:41.917 2 DEBUG oslo_concurrency.lockutils [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.025 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.161 2 INFO nova.virt.libvirt.driver [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deleting instance files /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_del#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.163 2 INFO nova.virt.libvirt.driver [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deletion of /var/lib/nova/instances/9d79c6a7-edec-4a60-af9f-7e1401bb9a64_del complete#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.212 2 INFO nova.compute.manager [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.213 2 DEBUG oslo.service.loopingcall [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.214 2 DEBUG nova.compute.manager [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.214 2 DEBUG nova.network.neutron [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.475 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.505 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.506 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.507 2 DEBUG oslo_concurrency.lockutils [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.507 2 DEBUG nova.network.neutron [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Refreshing network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.509 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.510 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.539 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.539 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.540 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.540 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.541 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.801 2 DEBUG nova.network.neutron [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.826 2 INFO nova.compute.manager [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Took 0.61 seconds to deallocate network for instance.#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.879 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.879 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:42 np0005465604 nova_compute[260603]: 2025-10-02 08:50:42.941 2 DEBUG oslo_concurrency.processutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:50:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729854280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.012 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.320 2 DEBUG nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.321 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.321 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.321 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.321 2 DEBUG nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.322 2 WARNING nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-unplugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.322 2 DEBUG nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.322 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.322 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.323 2 DEBUG oslo_concurrency.lockutils [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.323 2 DEBUG nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] No waiting events found dispatching network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.323 2 WARNING nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received unexpected event network-vif-plugged-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.323 2 DEBUG nova.compute.manager [req-c88590c8-ba3d-47d9-acda-5d5bd6b6cfc8 req-a0222273-831d-4880-b5dc-1ff99efa0157 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Received event network-vif-deleted-ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.354 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.355 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3710MB free_disk=59.94289016723633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.355 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:50:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/877712512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.470 2 DEBUG nova.network.neutron [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.482 2 DEBUG oslo_concurrency.processutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.489 2 DEBUG nova.compute.provider_tree [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.499 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Releasing lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.500 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Instance network_info: |[{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.500 2 DEBUG oslo_concurrency.lockutils [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.501 2 DEBUG nova.network.neutron [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.506 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Start _get_guest_xml network_info=[{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.512 2 DEBUG nova.scheduler.client.report [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.519 2 WARNING nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.529 2 DEBUG nova.virt.libvirt.host [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.530 2 DEBUG nova.virt.libvirt.host [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.533 2 DEBUG nova.virt.libvirt.host [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.534 2 DEBUG nova.virt.libvirt.host [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.535 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.535 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.536 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.536 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.537 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.537 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.538 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.538 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.539 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.539 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.539 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.540 2 DEBUG nova.virt.hardware [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:50:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 116 op/s
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.545 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.582 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.588 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.624 2 INFO nova.scheduler.client.report [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Deleted allocations for instance 9d79c6a7-edec-4a60-af9f-7e1401bb9a64#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.660 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.660 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.661 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.691 2 DEBUG oslo_concurrency.lockutils [None req-0bdd4778-8d27-478a-ac28-d19be66d55aa 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "9d79c6a7-edec-4a60-af9f-7e1401bb9a64" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:43 np0005465604 nova_compute[260603]: 2025-10-02 08:50:43.723 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:50:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1287723593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.057 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.083 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.086 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:50:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346735743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.160 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.166 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.187 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.212 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.212 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.300 2 DEBUG nova.network.neutron [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updated VIF entry in instance network info cache for port ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.301 2 DEBUG nova.network.neutron [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Updating instance_info_cache with network_info: [{"id": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "address": "fa:16:3e:4a:e6:c0", "network": {"id": "9ceec3f2-374d-41aa-a1df-26773af00fd7", "bridge": "br-int", "label": "tempest-network-smoke--282300763", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5b8c6c-8d", "ovs_interfaceid": "ac5b8c6c-8dbb-4b9f-b3e1-f92aed6c72da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.320 2 DEBUG oslo_concurrency.lockutils [req-cee22917-ab37-4f34-b228-a0d9cca175d6 req-9d576af1-6f5f-4e17-8e7c-45e996258572 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-9d79c6a7-edec-4a60-af9f-7e1401bb9a64" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:50:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:50:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411603649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.509 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.510 2 DEBUG nova.virt.libvirt.vif [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:50:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1240552507',display_name='tempest-TestSnapshotPattern-server-1240552507',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1240552507',id=119,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-s694dn34',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:50:37Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=d5e3e825-fcee-4f1b-8c05-5a9ee07013d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.511 2 DEBUG nova.network.os_vif_util [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.512 2 DEBUG nova.network.os_vif_util [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.513 2 DEBUG nova.objects.instance [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.532 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  <uuid>d5e3e825-fcee-4f1b-8c05-5a9ee07013d7</uuid>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  <name>instance-00000077</name>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestSnapshotPattern-server-1240552507</nova:name>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:50:43</nova:creationTime>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <nova:user uuid="aceb8b0273154f1abe964d78a6261936">tempest-TestSnapshotPattern-495107275-project-member</nova:user>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <nova:project uuid="bfcc44155f2d45ff9f66fe254a7b21c7">tempest-TestSnapshotPattern-495107275</nova:project>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <nova:port uuid="236df88e-d54e-410f-9a5c-cf5fa95debd1">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <entry name="serial">d5e3e825-fcee-4f1b-8c05-5a9ee07013d7</entry>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <entry name="uuid">d5e3e825-fcee-4f1b-8c05-5a9ee07013d7</entry>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:de:8a:1b"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <target dev="tap236df88e-d5"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/console.log" append="off"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:50:44 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:50:44 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:50:44 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:50:44 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.534 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Preparing to wait for external event network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.534 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.534 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.535 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.536 2 DEBUG nova.virt.libvirt.vif [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:50:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1240552507',display_name='tempest-TestSnapshotPattern-server-1240552507',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1240552507',id=119,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-s694dn34',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:50:37Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=d5e3e825-fcee-4f1b-8c05-5a9ee07013d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.536 2 DEBUG nova.network.os_vif_util [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.537 2 DEBUG nova.network.os_vif_util [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.537 2 DEBUG os_vif [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.538 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap236df88e-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap236df88e-d5, col_values=(('external_ids', {'iface-id': '236df88e-d54e-410f-9a5c-cf5fa95debd1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:8a:1b', 'vm-uuid': 'd5e3e825-fcee-4f1b-8c05-5a9ee07013d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:44 np0005465604 NetworkManager[45129]: <info>  [1759395044.5850] manager: (tap236df88e-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.592 2 INFO os_vif [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5')#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.657 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.657 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.658 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No VIF found with MAC fa:16:3e:de:8a:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.659 2 INFO nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Using config drive#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.689 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:44 np0005465604 nova_compute[260603]: 2025-10-02 08:50:44.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:45 np0005465604 podman[383765]: 2025-10-02 08:50:45.019016921 +0000 UTC m=+0.071726697 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:50:45 np0005465604 podman[383764]: 2025-10-02 08:50:45.101298996 +0000 UTC m=+0.159812873 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.221 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.221 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.305 2 INFO nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Creating config drive at /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config#033[00m
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.311 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpylwz_x1z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.358 2 DEBUG nova.network.neutron [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updated VIF entry in instance network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.359 2 DEBUG nova.network.neutron [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.389 2 DEBUG oslo_concurrency.lockutils [req-569f99c3-09a7-4bda-8811-a411f5233096 req-7f7e7c3c-a235-4daf-a990-9d031e23629d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.475 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpylwz_x1z" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.512 2 DEBUG nova.storage.rbd_utils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.515 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:50:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 1.9 MiB/s wr, 72 op/s
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.707 2 DEBUG oslo_concurrency.processutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.709 2 INFO nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Deleting local config drive /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7/disk.config because it was imported into RBD.#033[00m
Oct  2 04:50:45 np0005465604 kernel: tap236df88e-d5: entered promiscuous mode
Oct  2 04:50:45 np0005465604 NetworkManager[45129]: <info>  [1759395045.7892] manager: (tap236df88e-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/489)
Oct  2 04:50:45 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:45Z|01246|binding|INFO|Claiming lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 for this chassis.
Oct  2 04:50:45 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:45Z|01247|binding|INFO|236df88e-d54e-410f-9a5c-cf5fa95debd1: Claiming fa:16:3e:de:8a:1b 10.100.0.3
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.803 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:8a:1b 10.100.0.3'], port_security=['fa:16:3e:de:8a:1b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5e3e825-fcee-4f1b-8c05-5a9ee07013d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=236df88e-d54e-410f-9a5c-cf5fa95debd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.806 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 236df88e-d54e-410f-9a5c-cf5fa95debd1 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 bound to our chassis#033[00m
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.809 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e8372a1-ae46-455d-aa74-54645f729e73#033[00m
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.830 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48421315-5aab-479b-91e1-b150e9a6a6dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.831 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5e8372a1-a1 in ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.835 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5e8372a1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.835 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[74c5386c-18b3-4192-9056-e9c1a1d45cd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.837 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[77461d47-cdc4-4279-9fd7-2d2600e07881]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:45 np0005465604 systemd-udevd[383861]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:50:45 np0005465604 systemd-machined[214636]: New machine qemu-150-instance-00000077.
Oct  2 04:50:45 np0005465604 NetworkManager[45129]: <info>  [1759395045.8674] device (tap236df88e-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:50:45 np0005465604 NetworkManager[45129]: <info>  [1759395045.8688] device (tap236df88e-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.868 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[00cc90bc-6c4a-4c3a-9213-d49c4a9302ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:45 np0005465604 systemd[1]: Started Virtual Machine qemu-150-instance-00000077.
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.892 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed7406e-440b-4885-93d3-c28230f7ae47]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:45 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:45Z|01248|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 ovn-installed in OVS
Oct  2 04:50:45 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:45Z|01249|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 up in Southbound
Oct  2 04:50:45 np0005465604 nova_compute[260603]: 2025-10-02 08:50:45.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.933 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[45f4aeaf-5270-460a-879a-7dbdc28d4e8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:45 np0005465604 systemd-udevd[383865]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:50:45 np0005465604 NetworkManager[45129]: <info>  [1759395045.9426] manager: (tap5e8372a1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/490)
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.941 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a433757-8104-447e-8d57-5b841c49f079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:45.988 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[281ef230-e4c8-48d7-959b-36a15a853f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.090 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dff92745-5d42-422e-9ee5-534faf4bcf42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:46 np0005465604 NetworkManager[45129]: <info>  [1759395046.1165] device (tap5e8372a1-a0): carrier: link connected
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.122 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7fde9017-4f5e-4337-a9c5-d4b532f1ea99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.137 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7d76d5d0-1b31-4b07-b6ed-1313a04a8b76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e8372a1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:60:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592471, 'reachable_time': 17032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383894, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.151 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[caca82ee-6a87-43ec-b12e-eaaaba235ac6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feee:6077'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592471, 'tstamp': 592471}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383895, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.165 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[043c76d5-fc45-4ce0-bb60-e1a80e4629c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e8372a1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:60:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592471, 'reachable_time': 17032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383896, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.192 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ff9946-e15e-4b93-9620-3c52956ea417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.271 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[68be29d0-c12c-4e11-8ebc-8397ef8bf2ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.276 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e8372a1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.277 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.278 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e8372a1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:46 np0005465604 kernel: tap5e8372a1-a0: entered promiscuous mode
Oct  2 04:50:46 np0005465604 NetworkManager[45129]: <info>  [1759395046.2821] manager: (tap5e8372a1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/491)
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.286 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e8372a1-a0, col_values=(('external_ids', {'iface-id': '6972524a-7a8a-4c19-a841-08e51ccd9aaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:46Z|01250|binding|INFO|Releasing lport 6972524a-7a8a-4c19-a841-08e51ccd9aaa from this chassis (sb_readonly=0)
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.291 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5e8372a1-ae46-455d-aa74-54645f729e73.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5e8372a1-ae46-455d-aa74-54645f729e73.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.294 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2af8a7-8d37-41f7-9738-50e79e0d7a78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.295 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-5e8372a1-ae46-455d-aa74-54645f729e73
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/5e8372a1-ae46-455d-aa74-54645f729e73.pid.haproxy
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 5e8372a1-ae46-455d-aa74-54645f729e73
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:50:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:50:46.296 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'env', 'PROCESS_TAG=haproxy-5e8372a1-ae46-455d-aa74-54645f729e73', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5e8372a1-ae46-455d-aa74-54645f729e73.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.698 2 DEBUG nova.compute.manager [req-84ed9cf5-1a6f-4325-b58c-bbbb4da7d873 req-ec360967-84bd-47a6-bf78-2ac3e0d41dfa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.699 2 DEBUG oslo_concurrency.lockutils [req-84ed9cf5-1a6f-4325-b58c-bbbb4da7d873 req-ec360967-84bd-47a6-bf78-2ac3e0d41dfa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.701 2 DEBUG oslo_concurrency.lockutils [req-84ed9cf5-1a6f-4325-b58c-bbbb4da7d873 req-ec360967-84bd-47a6-bf78-2ac3e0d41dfa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.701 2 DEBUG oslo_concurrency.lockutils [req-84ed9cf5-1a6f-4325-b58c-bbbb4da7d873 req-ec360967-84bd-47a6-bf78-2ac3e0d41dfa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.702 2 DEBUG nova.compute.manager [req-84ed9cf5-1a6f-4325-b58c-bbbb4da7d873 req-ec360967-84bd-47a6-bf78-2ac3e0d41dfa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Processing event network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:50:46 np0005465604 podman[383970]: 2025-10-02 08:50:46.757429404 +0000 UTC m=+0.069995582 container create 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 04:50:46 np0005465604 systemd[1]: Started libpod-conmon-36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a.scope.
Oct  2 04:50:46 np0005465604 podman[383970]: 2025-10-02 08:50:46.725243202 +0000 UTC m=+0.037809420 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:50:46 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:50:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55629b6f151fe9b01df698b94aebf7c6e06eef4cd3a7619588ea9563a64ff3ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.854 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395046.8535216, d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.855 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] VM Started (Lifecycle Event)#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.860 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.864 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.868 2 INFO nova.virt.libvirt.driver [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Instance spawned successfully.#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.869 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:50:46 np0005465604 podman[383970]: 2025-10-02 08:50:46.874008358 +0000 UTC m=+0.186574616 container init 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.876 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.882 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:50:46 np0005465604 podman[383970]: 2025-10-02 08:50:46.883592757 +0000 UTC m=+0.196158975 container start 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.896 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.897 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.898 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.899 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.899 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.900 2 DEBUG nova.virt.libvirt.driver [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:50:46 np0005465604 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [NOTICE]   (383989) : New worker (383991) forked
Oct  2 04:50:46 np0005465604 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [NOTICE]   (383989) : Loading success.
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.952 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.952 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395046.8539078, d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.952 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.989 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.993 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395046.8630984, d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:50:46 np0005465604 nova_compute[260603]: 2025-10-02 08:50:46.993 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:50:47 np0005465604 nova_compute[260603]: 2025-10-02 08:50:47.016 2 INFO nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Took 9.36 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:50:47 np0005465604 nova_compute[260603]: 2025-10-02 08:50:47.017 2 DEBUG nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:50:47 np0005465604 nova_compute[260603]: 2025-10-02 08:50:47.020 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:50:47 np0005465604 nova_compute[260603]: 2025-10-02 08:50:47.025 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:50:47 np0005465604 nova_compute[260603]: 2025-10-02 08:50:47.056 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:50:47 np0005465604 nova_compute[260603]: 2025-10-02 08:50:47.081 2 INFO nova.compute.manager [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Took 10.48 seconds to build instance.#033[00m
Oct  2 04:50:47 np0005465604 nova_compute[260603]: 2025-10-02 08:50:47.100 2 DEBUG oslo_concurrency.lockutils [None req-80741428-f674-481b-ba5a-6e9f7b5044ce aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 88 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 1.9 MiB/s wr, 76 op/s
Oct  2 04:50:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:48 np0005465604 nova_compute[260603]: 2025-10-02 08:50:48.838 2 DEBUG nova.compute.manager [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:50:48 np0005465604 nova_compute[260603]: 2025-10-02 08:50:48.839 2 DEBUG oslo_concurrency.lockutils [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:50:48 np0005465604 nova_compute[260603]: 2025-10-02 08:50:48.839 2 DEBUG oslo_concurrency.lockutils [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:50:48 np0005465604 nova_compute[260603]: 2025-10-02 08:50:48.840 2 DEBUG oslo_concurrency.lockutils [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:50:48 np0005465604 nova_compute[260603]: 2025-10-02 08:50:48.840 2 DEBUG nova.compute.manager [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] No waiting events found dispatching network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:50:48 np0005465604 nova_compute[260603]: 2025-10-02 08:50:48.840 2 WARNING nova.compute.manager [req-05878fde-14b8-4572-be49-fcb9f60e686e req-803688b6-cda9-45e9-a4c3-b72dc6981b7c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received unexpected event network-vif-plugged-236df88e-d54e-410f-9a5c-cf5fa95debd1 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:50:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 807 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Oct  2 04:50:49 np0005465604 nova_compute[260603]: 2025-10-02 08:50:49.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:49 np0005465604 nova_compute[260603]: 2025-10-02 08:50:49.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:50 np0005465604 nova_compute[260603]: 2025-10-02 08:50:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:50:50 np0005465604 nova_compute[260603]: 2025-10-02 08:50:50.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:50 np0005465604 NetworkManager[45129]: <info>  [1759395050.9093] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Oct  2 04:50:50 np0005465604 NetworkManager[45129]: <info>  [1759395050.9104] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Oct  2 04:50:51 np0005465604 nova_compute[260603]: 2025-10-02 08:50:51.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:51 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:51Z|01251|binding|INFO|Releasing lport 6972524a-7a8a-4c19-a841-08e51ccd9aaa from this chassis (sb_readonly=0)
Oct  2 04:50:51 np0005465604 nova_compute[260603]: 2025-10-02 08:50:51.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 795 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct  2 04:50:51 np0005465604 nova_compute[260603]: 2025-10-02 08:50:51.763 2 DEBUG nova.compute.manager [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:50:51 np0005465604 nova_compute[260603]: 2025-10-02 08:50:51.763 2 DEBUG nova.compute.manager [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing instance network info cache due to event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:50:51 np0005465604 nova_compute[260603]: 2025-10-02 08:50:51.764 2 DEBUG oslo_concurrency.lockutils [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:50:51 np0005465604 nova_compute[260603]: 2025-10-02 08:50:51.764 2 DEBUG oslo_concurrency.lockutils [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:50:51 np0005465604 nova_compute[260603]: 2025-10-02 08:50:51.764 2 DEBUG nova.network.neutron [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:50:52 np0005465604 podman[384002]: 2025-10-02 08:50:52.035483083 +0000 UTC m=+0.082098599 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:50:52 np0005465604 podman[384001]: 2025-10-02 08:50:52.054682102 +0000 UTC m=+0.101238947 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:50:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:53 np0005465604 nova_compute[260603]: 2025-10-02 08:50:53.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:50:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct  2 04:50:53 np0005465604 nova_compute[260603]: 2025-10-02 08:50:53.608 2 DEBUG nova.network.neutron [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updated VIF entry in instance network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:50:53 np0005465604 nova_compute[260603]: 2025-10-02 08:50:53.609 2 DEBUG nova.network.neutron [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:50:53 np0005465604 nova_compute[260603]: 2025-10-02 08:50:53.635 2 DEBUG oslo_concurrency.lockutils [req-e7687c3d-3458-42ed-9ede-9b197fc4f558 req-b2a449ee-55b6-44df-9fe5-a961268ceb25 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:50:54 np0005465604 nova_compute[260603]: 2025-10-02 08:50:54.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:54 np0005465604 nova_compute[260603]: 2025-10-02 08:50:54.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:54 np0005465604 nova_compute[260603]: 2025-10-02 08:50:54.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 04:50:56 np0005465604 nova_compute[260603]: 2025-10-02 08:50:56.736 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395041.7356882, 9d79c6a7-edec-4a60-af9f-7e1401bb9a64 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:50:56 np0005465604 nova_compute[260603]: 2025-10-02 08:50:56.736 2 INFO nova.compute.manager [-] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:50:56 np0005465604 nova_compute[260603]: 2025-10-02 08:50:56.758 2 DEBUG nova.compute.manager [None req-6f09ed6a-0597-4818-a383-fbc03c66b1ad - - - - - -] [instance: 9d79c6a7-edec-4a60-af9f-7e1401bb9a64] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:50:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 88 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 81 op/s
Oct  2 04:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:50:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:50:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:50:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:58Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:8a:1b 10.100.0.3
Oct  2 04:50:58 np0005465604 ovn_controller[152344]: 2025-10-02T08:50:58Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:8a:1b 10.100.0.3
Oct  2 04:50:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 102 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Oct  2 04:50:59 np0005465604 nova_compute[260603]: 2025-10-02 08:50:59.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:50:59 np0005465604 nova_compute[260603]: 2025-10-02 08:50:59.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:00 np0005465604 nova_compute[260603]: 2025-10-02 08:51:00.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 102 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 54 op/s
Oct  2 04:51:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:02.318 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:51:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:02.320 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:51:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:02.321 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:02 np0005465604 nova_compute[260603]: 2025-10-02 08:51:02.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:03 np0005465604 nova_compute[260603]: 2025-10-02 08:51:03.374 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:03 np0005465604 nova_compute[260603]: 2025-10-02 08:51:03.375 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:03 np0005465604 nova_compute[260603]: 2025-10-02 08:51:03.395 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:51:03 np0005465604 nova_compute[260603]: 2025-10-02 08:51:03.492 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:03 np0005465604 nova_compute[260603]: 2025-10-02 08:51:03.493 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:03 np0005465604 nova_compute[260603]: 2025-10-02 08:51:03.505 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:51:03 np0005465604 nova_compute[260603]: 2025-10-02 08:51:03.505 2 INFO nova.compute.claims [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:51:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 121 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct  2 04:51:03 np0005465604 nova_compute[260603]: 2025-10-02 08:51:03.658 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:51:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4090223476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.127 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.137 2 DEBUG nova.compute.provider_tree [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.158 2 DEBUG nova.scheduler.client.report [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.187 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.188 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.250 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.250 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.274 2 INFO nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.295 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.415 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.416 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.417 2 INFO nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Creating image(s)#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.448 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.479 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.509 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.514 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.622 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.624 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.625 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.625 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.658 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.663 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.718 2 DEBUG nova.policy [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7767630a5b1049f48d7e0fed29e221ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c86b416fdb524f21b0228639a3a14116', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:04 np0005465604 nova_compute[260603]: 2025-10-02 08:51:04.992 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:05 np0005465604 nova_compute[260603]: 2025-10-02 08:51:05.050 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:51:05 np0005465604 nova_compute[260603]: 2025-10-02 08:51:05.175 2 DEBUG nova.objects.instance [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:05 np0005465604 nova_compute[260603]: 2025-10-02 08:51:05.194 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:51:05 np0005465604 nova_compute[260603]: 2025-10-02 08:51:05.195 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Ensure instance console log exists: /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:51:05 np0005465604 nova_compute[260603]: 2025-10-02 08:51:05.195 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:05 np0005465604 nova_compute[260603]: 2025-10-02 08:51:05.196 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:05 np0005465604 nova_compute[260603]: 2025-10-02 08:51:05.197 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 121 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:51:05 np0005465604 nova_compute[260603]: 2025-10-02 08:51:05.557 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Successfully created port: ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:51:06 np0005465604 nova_compute[260603]: 2025-10-02 08:51:06.645 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Successfully updated port: ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:51:06 np0005465604 nova_compute[260603]: 2025-10-02 08:51:06.671 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:51:06 np0005465604 nova_compute[260603]: 2025-10-02 08:51:06.672 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:51:06 np0005465604 nova_compute[260603]: 2025-10-02 08:51:06.672 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:51:06 np0005465604 nova_compute[260603]: 2025-10-02 08:51:06.875 2 DEBUG nova.compute.manager [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:51:06 np0005465604 nova_compute[260603]: 2025-10-02 08:51:06.876 2 DEBUG nova.compute.manager [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing instance network info cache due to event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:51:06 np0005465604 nova_compute[260603]: 2025-10-02 08:51:06.877 2 DEBUG oslo_concurrency.lockutils [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:51:06 np0005465604 nova_compute[260603]: 2025-10-02 08:51:06.927 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:51:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 154 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 3.4 MiB/s wr, 67 op/s
Oct  2 04:51:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.286 2 DEBUG nova.compute.manager [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.347 2 INFO nova.compute.manager [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] instance snapshotting#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.637 2 INFO nova.virt.libvirt.driver [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Beginning live snapshot process#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.690 2 DEBUG nova.network.neutron [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.717 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.718 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance network_info: |[{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.719 2 DEBUG oslo_concurrency.lockutils [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.719 2 DEBUG nova.network.neutron [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.725 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start _get_guest_xml network_info=[{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.818 2 DEBUG nova.virt.libvirt.imagebackend [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.824 2 WARNING nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.830 2 DEBUG nova.virt.libvirt.host [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.831 2 DEBUG nova.virt.libvirt.host [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.833 2 DEBUG nova.virt.libvirt.host [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.834 2 DEBUG nova.virt.libvirt.host [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.834 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.834 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.835 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.835 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.835 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.835 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.836 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.836 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.836 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.836 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.837 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.837 2 DEBUG nova.virt.hardware [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:51:08 np0005465604 nova_compute[260603]: 2025-10-02 08:51:08.839 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.098 2 DEBUG nova.storage.rbd_utils [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] creating snapshot(1105335cf2ba49d4b4fa4d2098cd8e03) on rbd image(d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:51:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:51:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274241326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.316 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.339 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.343 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 167 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 04:51:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:51:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117980269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.902 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.905 2 DEBUG nova.virt.libvirt.vif [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:51:04Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.906 2 DEBUG nova.network.os_vif_util [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.908 2 DEBUG nova.network.os_vif_util [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.911 2 DEBUG nova.objects.instance [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.930 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  <uuid>0239090c-f1eb-4b8b-8f45-94efee345fa5</uuid>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  <name>instance-00000078</name>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1642420573</nova:name>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:51:08</nova:creationTime>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <nova:port uuid="ceb50499-7e3c-4d47-a2dc-05ce86dbbde1">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <entry name="serial">0239090c-f1eb-4b8b-8f45-94efee345fa5</entry>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <entry name="uuid">0239090c-f1eb-4b8b-8f45-94efee345fa5</entry>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/0239090c-f1eb-4b8b-8f45-94efee345fa5_disk">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:80:51:fb"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <target dev="tapceb50499-7e"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/console.log" append="off"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:51:09 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:51:09 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:51:09 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:51:09 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.932 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Preparing to wait for external event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.932 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.933 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.933 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.933 2 DEBUG nova.virt.libvirt.vif [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:51:04Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.934 2 DEBUG nova.network.os_vif_util [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.934 2 DEBUG nova.network.os_vif_util [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.934 2 DEBUG os_vif [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.936 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.938 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapceb50499-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.938 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapceb50499-7e, col_values=(('external_ids', {'iface-id': 'ceb50499-7e3c-4d47-a2dc-05ce86dbbde1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:51:fb', 'vm-uuid': '0239090c-f1eb-4b8b-8f45-94efee345fa5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:09 np0005465604 NetworkManager[45129]: <info>  [1759395069.9412] manager: (tapceb50499-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/494)
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:09 np0005465604 nova_compute[260603]: 2025-10-02 08:51:09.952 2 INFO os_vif [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e')#033[00m
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.018 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.019 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.019 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:80:51:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.020 2 INFO nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Using config drive#033[00m
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.055 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct  2 04:51:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct  2 04:51:10 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.181 2 DEBUG nova.storage.rbd_utils [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] cloning vms/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk@1105335cf2ba49d4b4fa4d2098cd8e03 to images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.283 2 DEBUG nova.storage.rbd_utils [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] flattening images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:51:10 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.611 2 INFO nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Creating config drive at /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config#033[00m
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.621 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj1gfcyxm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.785 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj1gfcyxm" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.817 2 DEBUG nova.storage.rbd_utils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.821 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:10 np0005465604 nova_compute[260603]: 2025-10-02 08:51:10.950 2 DEBUG nova.storage.rbd_utils [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] removing snapshot(1105335cf2ba49d4b4fa4d2098cd8e03) on rbd image(d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.033 2 DEBUG oslo_concurrency.processutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config 0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.034 2 INFO nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Deleting local config drive /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/disk.config because it was imported into RBD.#033[00m
Oct  2 04:51:11 np0005465604 NetworkManager[45129]: <info>  [1759395071.0845] manager: (tapceb50499-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Oct  2 04:51:11 np0005465604 kernel: tapceb50499-7e: entered promiscuous mode
Oct  2 04:51:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:11Z|01252|binding|INFO|Claiming lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for this chassis.
Oct  2 04:51:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:11Z|01253|binding|INFO|ceb50499-7e3c-4d47-a2dc-05ce86dbbde1: Claiming fa:16:3e:80:51:fb 10.100.0.11
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.095 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:51:fb 10.100.0.11'], port_security=['fa:16:3e:80:51:fb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0239090c-f1eb-4b8b-8f45-94efee345fa5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a398fde7-e9b6-47e4-afc2-221c0b15c74f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=633447ab-1fed-4ecc-895b-fd3d7334df6b, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.096 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 in datapath 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 bound to our chassis#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.097 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:11Z|01254|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 ovn-installed in OVS
Oct  2 04:51:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:11Z|01255|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 up in Southbound
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.108 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c25d67f7-f348-47fb-9c75-131397a45d27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.111 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap21ddf177-31 in ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.114 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap21ddf177-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.114 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09849c76-9fd8-4274-8eb7-8681865ed451]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.115 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3574f12d-4c5d-4b2e-99c4-adc4a3d25680]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 systemd-udevd[384489]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:51:11 np0005465604 systemd-machined[214636]: New machine qemu-151-instance-00000078.
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.126 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[028e075e-b183-4f28-9562-3fcf03489057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct  2 04:51:11 np0005465604 NetworkManager[45129]: <info>  [1759395071.1360] device (tapceb50499-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:51:11 np0005465604 NetworkManager[45129]: <info>  [1759395071.1381] device (tapceb50499-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:51:11 np0005465604 systemd[1]: Started Virtual Machine qemu-151-instance-00000078.
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.142 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18e1b5b8-ec87-46f9-8bb3-dfe668c3d73e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct  2 04:51:11 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.172 2 DEBUG nova.storage.rbd_utils [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] creating snapshot(snap) on rbd image(bfda7a79-c502-4c19-8d8e-da8dbbb22d04) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.186 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b6375ff4-adb8-49ee-b572-4883bfb7f482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 systemd-udevd[384494]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.192 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[889ea84e-f3c8-4760-b4ce-137b3935b7b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 NetworkManager[45129]: <info>  [1759395071.1936] manager: (tap21ddf177-30): new Veth device (/org/freedesktop/NetworkManager/Devices/496)
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.229 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f795a6c5-192f-45b0-962c-55570d0515a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.233 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed054c3-5c20-4bcb-b34b-d3f69b10fe12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 NetworkManager[45129]: <info>  [1759395071.2607] device (tap21ddf177-30): carrier: link connected
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.267 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb6088a-98e5-4f86-82e6-1fbd43c23df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.288 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83737bc9-90dc-4a32-8a78-74007622de40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21ddf177-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:39:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594986, 'reachable_time': 28269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384540, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.305 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[25258b77-2469-4977-a23b-834d1e6b86a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:390d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 594986, 'tstamp': 594986}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384541, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.327 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a565b21b-621d-4e44-9de6-d81c3215a6e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21ddf177-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:39:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594986, 'reachable_time': 28269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384542, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.361 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[77ba12d5-28a3-4710-b38d-3645f0395913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.416 2 DEBUG nova.compute.manager [req-ae8b7ff3-f8d5-4f2f-935a-2f58dba6bcd0 req-9f2b517d-5fb3-4243-b37a-2f471eeb19c9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.417 2 DEBUG oslo_concurrency.lockutils [req-ae8b7ff3-f8d5-4f2f-935a-2f58dba6bcd0 req-9f2b517d-5fb3-4243-b37a-2f471eeb19c9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.417 2 DEBUG oslo_concurrency.lockutils [req-ae8b7ff3-f8d5-4f2f-935a-2f58dba6bcd0 req-9f2b517d-5fb3-4243-b37a-2f471eeb19c9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.417 2 DEBUG oslo_concurrency.lockutils [req-ae8b7ff3-f8d5-4f2f-935a-2f58dba6bcd0 req-9f2b517d-5fb3-4243-b37a-2f471eeb19c9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.417 2 DEBUG nova.compute.manager [req-ae8b7ff3-f8d5-4f2f-935a-2f58dba6bcd0 req-9f2b517d-5fb3-4243-b37a-2f471eeb19c9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Processing event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.448 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79b32e08-6024-46a4-9525-2ac131a81509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.449 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21ddf177-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.450 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.450 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21ddf177-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:11 np0005465604 NetworkManager[45129]: <info>  [1759395071.4531] manager: (tap21ddf177-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Oct  2 04:51:11 np0005465604 kernel: tap21ddf177-30: entered promiscuous mode
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.457 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21ddf177-30, col_values=(('external_ids', {'iface-id': 'da5470e2-37c6-4448-8dab-7b3e1d890a66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:11 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:11Z|01256|binding|INFO|Releasing lport da5470e2-37c6-4448-8dab-7b3e1d890a66 from this chassis (sb_readonly=0)
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.493 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.494 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f821c61-d88d-4849-976e-a3772f578c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.495 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:51:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:11.495 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'env', 'PROCESS_TAG=haproxy-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.544 2 DEBUG nova.network.neutron [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updated VIF entry in instance network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.544 2 DEBUG nova.network.neutron [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:51:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 167 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.7 MiB/s wr, 42 op/s
Oct  2 04:51:11 np0005465604 nova_compute[260603]: 2025-10-02 08:51:11.568 2 DEBUG oslo_concurrency.lockutils [req-983e4dd2-175c-4b86-a0c3-008c9a707a5a req-6486e8cc-2740-4496-9d97-7ce74f858f12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:51:11 np0005465604 podman[384589]: 2025-10-02 08:51:11.945981033 +0000 UTC m=+0.090323526 container create f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:51:11 np0005465604 podman[384589]: 2025-10-02 08:51:11.909203216 +0000 UTC m=+0.053545789 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:51:12 np0005465604 systemd[1]: Started libpod-conmon-f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3.scope.
Oct  2 04:51:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:51:12 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a077a86bcce7e5489f801b5d34f88a8eb9ddfc03b2537cc12437d7144a2267e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:12 np0005465604 podman[384589]: 2025-10-02 08:51:12.06715043 +0000 UTC m=+0.211492903 container init f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:51:12 np0005465604 podman[384589]: 2025-10-02 08:51:12.085653947 +0000 UTC m=+0.229996420 container start f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 04:51:12 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [NOTICE]   (384636) : New worker (384638) forked
Oct  2 04:51:12 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [NOTICE]   (384636) : Loading success.
Oct  2 04:51:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct  2 04:51:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct  2 04:51:12 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.419 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395072.4181633, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.419 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Started (Lifecycle Event)#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.422 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.426 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.431 2 INFO nova.virt.libvirt.driver [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance spawned successfully.#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.432 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.453 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.459 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.464 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.464 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.464 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.465 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.465 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.466 2 DEBUG nova.virt.libvirt.driver [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.480 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.480 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395072.418307, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.480 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.509 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.512 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395072.4249125, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.513 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.529 2 INFO nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Took 8.11 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.529 2 DEBUG nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.530 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.538 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.609 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.615 2 INFO nova.compute.manager [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Took 9.16 seconds to build instance.#033[00m
Oct  2 04:51:12 np0005465604 nova_compute[260603]: 2025-10-02 08:51:12.629 2 DEBUG oslo_concurrency.lockutils [None req-4a885624-51d6-46dd-ba40-4936b08d1e6d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:13 np0005465604 podman[384819]: 2025-10-02 08:51:13.440730403 +0000 UTC m=+0.068803666 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:51:13 np0005465604 nova_compute[260603]: 2025-10-02 08:51:13.473 2 INFO nova.virt.libvirt.driver [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Snapshot image upload complete
Oct  2 04:51:13 np0005465604 nova_compute[260603]: 2025-10-02 08:51:13.474 2 INFO nova.compute.manager [None req-c476894d-4255-4008-8b25-47816a5aa7ae aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Took 5.12 seconds to snapshot the instance on the hypervisor.
Oct  2 04:51:13 np0005465604 nova_compute[260603]: 2025-10-02 08:51:13.491 2 DEBUG nova.compute.manager [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:51:13 np0005465604 nova_compute[260603]: 2025-10-02 08:51:13.492 2 DEBUG oslo_concurrency.lockutils [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:51:13 np0005465604 nova_compute[260603]: 2025-10-02 08:51:13.493 2 DEBUG oslo_concurrency.lockutils [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:51:13 np0005465604 nova_compute[260603]: 2025-10-02 08:51:13.493 2 DEBUG oslo_concurrency.lockutils [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:51:13 np0005465604 nova_compute[260603]: 2025-10-02 08:51:13.494 2 DEBUG nova.compute.manager [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:51:13 np0005465604 nova_compute[260603]: 2025-10-02 08:51:13.494 2 WARNING nova.compute.manager [req-39744ba4-8a1a-46ed-b718-93a803a37c56 req-9d6ba5c4-5230-4a2e-9844-bc880612b1d9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state active and task_state None.
Oct  2 04:51:13 np0005465604 podman[384819]: 2025-10-02 08:51:13.537892311 +0000 UTC m=+0.165965474 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:51:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 8.8 MiB/s wr, 239 op/s
Oct  2 04:51:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:51:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:51:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:51:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:51:14 np0005465604 nova_compute[260603]: 2025-10-02 08:51:14.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:51:14 np0005465604 nova_compute[260603]: 2025-10-02 08:51:14.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:51:15 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0e50915f-49a1-4c61-82e6-cdd9804f7774 does not exist
Oct  2 04:51:15 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 22a74ab9-e60d-4604-b1c6-f4255d105b16 does not exist
Oct  2 04:51:15 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 1afe87f1-1bdd-48db-86c4-65653b7e57d3 does not exist
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:51:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:51:15 np0005465604 podman[385135]: 2025-10-02 08:51:15.346005437 +0000 UTC m=+0.057853084 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  2 04:51:15 np0005465604 podman[385134]: 2025-10-02 08:51:15.373723331 +0000 UTC m=+0.086324442 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:51:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 7.8 MiB/s wr, 192 op/s
Oct  2 04:51:15 np0005465604 nova_compute[260603]: 2025-10-02 08:51:15.772 2 DEBUG nova.compute.manager [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:51:15 np0005465604 nova_compute[260603]: 2025-10-02 08:51:15.773 2 DEBUG nova.compute.manager [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing instance network info cache due to event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:51:15 np0005465604 nova_compute[260603]: 2025-10-02 08:51:15.773 2 DEBUG oslo_concurrency.lockutils [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:51:15 np0005465604 nova_compute[260603]: 2025-10-02 08:51:15.773 2 DEBUG oslo_concurrency.lockutils [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:51:15 np0005465604 nova_compute[260603]: 2025-10-02 08:51:15.773 2 DEBUG nova.network.neutron [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:51:15 np0005465604 podman[385296]: 2025-10-02 08:51:15.801450583 +0000 UTC m=+0.052080125 container create 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 04:51:15 np0005465604 systemd[1]: Started libpod-conmon-13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35.scope.
Oct  2 04:51:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:51:15 np0005465604 podman[385296]: 2025-10-02 08:51:15.782063428 +0000 UTC m=+0.032692990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:51:15 np0005465604 podman[385296]: 2025-10-02 08:51:15.881546339 +0000 UTC m=+0.132175971 container init 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:51:15 np0005465604 podman[385296]: 2025-10-02 08:51:15.893919615 +0000 UTC m=+0.144549157 container start 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:51:15 np0005465604 podman[385296]: 2025-10-02 08:51:15.897137184 +0000 UTC m=+0.147766726 container attach 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 04:51:15 np0005465604 gifted_cray[385311]: 167 167
Oct  2 04:51:15 np0005465604 systemd[1]: libpod-13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35.scope: Deactivated successfully.
Oct  2 04:51:15 np0005465604 conmon[385311]: conmon 13bf8f55f760cad54c00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35.scope/container/memory.events
Oct  2 04:51:15 np0005465604 podman[385296]: 2025-10-02 08:51:15.899300412 +0000 UTC m=+0.149929954 container died 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:51:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay-936ab3dc185968f5543489d1595bfa247c7688d24667f0a037bd2d0b33ea59b9-merged.mount: Deactivated successfully.
Oct  2 04:51:15 np0005465604 podman[385296]: 2025-10-02 08:51:15.936018007 +0000 UTC m=+0.186647549 container remove 13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct  2 04:51:15 np0005465604 systemd[1]: libpod-conmon-13bf8f55f760cad54c00420a955b487017a3aa3f3c107648327cb06534c21b35.scope: Deactivated successfully.
Oct  2 04:51:16 np0005465604 podman[385335]: 2025-10-02 08:51:16.116050178 +0000 UTC m=+0.041138153 container create 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:51:16 np0005465604 systemd[1]: Started libpod-conmon-4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379.scope.
Oct  2 04:51:16 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:51:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:16 np0005465604 podman[385335]: 2025-10-02 08:51:16.101336069 +0000 UTC m=+0.026424084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:51:16 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 04:51:16 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:51:16 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:51:16 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:51:16 np0005465604 podman[385335]: 2025-10-02 08:51:16.225791398 +0000 UTC m=+0.150879473 container init 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 04:51:16 np0005465604 podman[385335]: 2025-10-02 08:51:16.236834663 +0000 UTC m=+0.161922678 container start 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:51:16 np0005465604 podman[385335]: 2025-10-02 08:51:16.252433219 +0000 UTC m=+0.177521244 container attach 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.153 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.155 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.212 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 04:51:17 np0005465604 wonderful_lewin[385352]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:51:17 np0005465604 wonderful_lewin[385352]: --> relative data size: 1.0
Oct  2 04:51:17 np0005465604 wonderful_lewin[385352]: --> All data devices are unavailable
Oct  2 04:51:17 np0005465604 systemd[1]: libpod-4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379.scope: Deactivated successfully.
Oct  2 04:51:17 np0005465604 podman[385335]: 2025-10-02 08:51:17.272188913 +0000 UTC m=+1.197276908 container died 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:51:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-01e7b227dbee8be176950fc3c368ffd17e8bdee1d6358cf65d0e124a98a4a466-merged.mount: Deactivated successfully.
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.306 2 DEBUG nova.network.neutron [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updated VIF entry in instance network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.307 2 DEBUG nova.network.neutron [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.330 2 DEBUG oslo_concurrency.lockutils [req-62c4b79f-ba97-4b31-93af-7e895cb1c546 req-42821ef1-533c-4788-b42f-d373fc07bf77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:51:17 np0005465604 podman[385335]: 2025-10-02 08:51:17.336876249 +0000 UTC m=+1.261964264 container remove 4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.336 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.337 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.349 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.349 2 INFO nova.compute.claims [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:51:17 np0005465604 systemd[1]: libpod-conmon-4bdfe9173554c46b7c8970eba25ac073f22e748076ef578730dab6486ab2a379.scope: Deactivated successfully.
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.507 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 6.3 MiB/s wr, 239 op/s
Oct  2 04:51:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:51:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3651839947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.961 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.968 2 DEBUG nova.compute.provider_tree [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:51:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct  2 04:51:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct  2 04:51:17 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct  2 04:51:17 np0005465604 nova_compute[260603]: 2025-10-02 08:51:17.985 2 DEBUG nova.scheduler.client.report [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.013 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.014 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:51:18 np0005465604 podman[385555]: 2025-10-02 08:51:18.052540586 +0000 UTC m=+0.036827490 container create 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.063 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.064 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.085 2 INFO nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:51:18 np0005465604 systemd[1]: Started libpod-conmon-2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b.scope.
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.107 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:51:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:51:18 np0005465604 podman[385555]: 2025-10-02 08:51:18.037274279 +0000 UTC m=+0.021561183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:51:18 np0005465604 podman[385555]: 2025-10-02 08:51:18.143031436 +0000 UTC m=+0.127318350 container init 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct  2 04:51:18 np0005465604 podman[385555]: 2025-10-02 08:51:18.155261217 +0000 UTC m=+0.139548111 container start 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:51:18 np0005465604 podman[385555]: 2025-10-02 08:51:18.15920787 +0000 UTC m=+0.143494784 container attach 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:51:18 np0005465604 great_johnson[385571]: 167 167
Oct  2 04:51:18 np0005465604 systemd[1]: libpod-2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b.scope: Deactivated successfully.
Oct  2 04:51:18 np0005465604 podman[385555]: 2025-10-02 08:51:18.163304908 +0000 UTC m=+0.147591822 container died 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 04:51:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3e4ceaadd6d083aa073e5162a1ec8a774980b2ba651d661c8829a34b9a73544b-merged.mount: Deactivated successfully.
Oct  2 04:51:18 np0005465604 podman[385555]: 2025-10-02 08:51:18.216597148 +0000 UTC m=+0.200884072 container remove 2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_johnson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.229 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.233 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.234 2 INFO nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Creating image(s)#033[00m
Oct  2 04:51:18 np0005465604 systemd[1]: libpod-conmon-2b5f26d9f774ae462e9392d007f1f4e9285eb2904973d5ec0fc29982b75c187b.scope: Deactivated successfully.
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.288 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.323 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.384 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.394 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "61b3d24d77efae9abe448d859237f603a5c1f35c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.395 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "61b3d24d77efae9abe448d859237f603a5c1f35c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.462 2 DEBUG nova.policy [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aceb8b0273154f1abe964d78a6261936', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:51:18 np0005465604 podman[385648]: 2025-10-02 08:51:18.469528062 +0000 UTC m=+0.062149848 container create 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:51:18 np0005465604 systemd[1]: Started libpod-conmon-4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5.scope.
Oct  2 04:51:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:51:18 np0005465604 podman[385648]: 2025-10-02 08:51:18.450723336 +0000 UTC m=+0.043345152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:51:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0b79f3aff3d90abeca45195365c26bc30b13a1f521e7db162132afd5a98d8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0b79f3aff3d90abeca45195365c26bc30b13a1f521e7db162132afd5a98d8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0b79f3aff3d90abeca45195365c26bc30b13a1f521e7db162132afd5a98d8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0b79f3aff3d90abeca45195365c26bc30b13a1f521e7db162132afd5a98d8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:18 np0005465604 podman[385648]: 2025-10-02 08:51:18.565915447 +0000 UTC m=+0.158537273 container init 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 04:51:18 np0005465604 podman[385648]: 2025-10-02 08:51:18.572814961 +0000 UTC m=+0.165436767 container start 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:51:18 np0005465604 podman[385648]: 2025-10-02 08:51:18.576710033 +0000 UTC m=+0.169331919 container attach 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.628 2 DEBUG nova.virt.libvirt.imagebackend [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.720 2 DEBUG nova.virt.libvirt.imagebackend [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Selected location: {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.721 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] cloning images/bfda7a79-c502-4c19-8d8e-da8dbbb22d04@snap to None/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.847 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "61b3d24d77efae9abe448d859237f603a5c1f35c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.964 2 DEBUG nova.objects.instance [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 062b2a3e-b612-42bf-b96c-fb9bdd9008ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.982 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.983 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Ensure instance console log exists: /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.983 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.984 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:18 np0005465604 nova_compute[260603]: 2025-10-02 08:51:18.984 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:19 np0005465604 condescending_carson[385665]: {
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:    "0": [
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:        {
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "devices": [
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "/dev/loop3"
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            ],
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_name": "ceph_lv0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_size": "21470642176",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "name": "ceph_lv0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "tags": {
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.cluster_name": "ceph",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.crush_device_class": "",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.encrypted": "0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.osd_id": "0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.type": "block",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.vdo": "0"
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            },
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "type": "block",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "vg_name": "ceph_vg0"
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:        }
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:    ],
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:    "1": [
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:        {
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "devices": [
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "/dev/loop4"
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            ],
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_name": "ceph_lv1",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_size": "21470642176",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "name": "ceph_lv1",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "tags": {
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.cluster_name": "ceph",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.crush_device_class": "",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.encrypted": "0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.osd_id": "1",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.type": "block",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.vdo": "0"
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            },
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "type": "block",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "vg_name": "ceph_vg1"
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:        }
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:    ],
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:    "2": [
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:        {
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "devices": [
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "/dev/loop5"
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            ],
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_name": "ceph_lv2",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_size": "21470642176",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "name": "ceph_lv2",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "tags": {
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.cluster_name": "ceph",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.crush_device_class": "",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.encrypted": "0",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.osd_id": "2",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.type": "block",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:                "ceph.vdo": "0"
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            },
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "type": "block",
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:            "vg_name": "ceph_vg2"
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:        }
Oct  2 04:51:19 np0005465604 condescending_carson[385665]:    ]
Oct  2 04:51:19 np0005465604 condescending_carson[385665]: }
Oct  2 04:51:19 np0005465604 systemd[1]: libpod-4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5.scope: Deactivated successfully.
Oct  2 04:51:19 np0005465604 podman[385648]: 2025-10-02 08:51:19.344542175 +0000 UTC m=+0.937163971 container died 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:51:19 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6f0b79f3aff3d90abeca45195365c26bc30b13a1f521e7db162132afd5a98d8a-merged.mount: Deactivated successfully.
Oct  2 04:51:19 np0005465604 podman[385648]: 2025-10-02 08:51:19.42266363 +0000 UTC m=+1.015285426 container remove 4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_carson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 04:51:19 np0005465604 systemd[1]: libpod-conmon-4e71fc0b0dcc6dfc547d6310b723d7f2ac38b504fa95dec75d99fdde9e7c92b5.scope: Deactivated successfully.
Oct  2 04:51:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 246 op/s
Oct  2 04:51:19 np0005465604 nova_compute[260603]: 2025-10-02 08:51:19.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:19 np0005465604 nova_compute[260603]: 2025-10-02 08:51:19.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:20 np0005465604 podman[385951]: 2025-10-02 08:51:20.102575462 +0000 UTC m=+0.041468223 container create a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 04:51:20 np0005465604 systemd[1]: Started libpod-conmon-a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a.scope.
Oct  2 04:51:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:51:20 np0005465604 podman[385951]: 2025-10-02 08:51:20.178355654 +0000 UTC m=+0.117248495 container init a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:51:20 np0005465604 podman[385951]: 2025-10-02 08:51:20.08548609 +0000 UTC m=+0.024378871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:51:20 np0005465604 podman[385951]: 2025-10-02 08:51:20.186610652 +0000 UTC m=+0.125503433 container start a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:51:20 np0005465604 podman[385951]: 2025-10-02 08:51:20.190097559 +0000 UTC m=+0.128990420 container attach a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 04:51:20 np0005465604 amazing_bardeen[385965]: 167 167
Oct  2 04:51:20 np0005465604 systemd[1]: libpod-a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a.scope: Deactivated successfully.
Oct  2 04:51:20 np0005465604 podman[385951]: 2025-10-02 08:51:20.19266393 +0000 UTC m=+0.131556691 container died a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:51:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-50073cabf32ad22f7a353d7d862624d82ad2ed468d8087112e9d10972707a87c-merged.mount: Deactivated successfully.
Oct  2 04:51:20 np0005465604 podman[385951]: 2025-10-02 08:51:20.249183202 +0000 UTC m=+0.188076003 container remove a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 04:51:20 np0005465604 systemd[1]: libpod-conmon-a38f4ed27f6fda9e552d3eb2dc957284e8a9e2a220fa64900507e9afc661e40a.scope: Deactivated successfully.
Oct  2 04:51:20 np0005465604 podman[385991]: 2025-10-02 08:51:20.489782861 +0000 UTC m=+0.053991924 container create 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:51:20 np0005465604 systemd[1]: Started libpod-conmon-48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778.scope.
Oct  2 04:51:20 np0005465604 podman[385991]: 2025-10-02 08:51:20.466547286 +0000 UTC m=+0.030756439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:51:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:51:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c4cb8e24e824f2f879981b3507483141b8b6da0482d4ecd2eeda44cb319b1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c4cb8e24e824f2f879981b3507483141b8b6da0482d4ecd2eeda44cb319b1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c4cb8e24e824f2f879981b3507483141b8b6da0482d4ecd2eeda44cb319b1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:20 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c4cb8e24e824f2f879981b3507483141b8b6da0482d4ecd2eeda44cb319b1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:20 np0005465604 podman[385991]: 2025-10-02 08:51:20.591821081 +0000 UTC m=+0.156030164 container init 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 04:51:20 np0005465604 podman[385991]: 2025-10-02 08:51:20.603163335 +0000 UTC m=+0.167372408 container start 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:51:20 np0005465604 podman[385991]: 2025-10-02 08:51:20.607075397 +0000 UTC m=+0.171284480 container attach 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 04:51:21 np0005465604 nova_compute[260603]: 2025-10-02 08:51:21.020 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Successfully created port: 06e58664-5a1a-4f08-9eb3-6a275e07a062 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]: {
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "osd_id": 2,
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "type": "bluestore"
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:    },
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "osd_id": 1,
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "type": "bluestore"
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:    },
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "osd_id": 0,
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:        "type": "bluestore"
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]:    }
Oct  2 04:51:21 np0005465604 thirsty_merkle[386008]: }
Oct  2 04:51:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 5.0 MiB/s wr, 209 op/s
Oct  2 04:51:21 np0005465604 systemd[1]: libpod-48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778.scope: Deactivated successfully.
Oct  2 04:51:21 np0005465604 podman[385991]: 2025-10-02 08:51:21.580395303 +0000 UTC m=+1.144604386 container died 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:51:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d8c4cb8e24e824f2f879981b3507483141b8b6da0482d4ecd2eeda44cb319b1a-merged.mount: Deactivated successfully.
Oct  2 04:51:21 np0005465604 podman[385991]: 2025-10-02 08:51:21.686449829 +0000 UTC m=+1.250658902 container remove 48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_merkle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:51:21 np0005465604 systemd[1]: libpod-conmon-48ff4b3de85b2c0cb451851d5e476b998e4222ea7f8057dc9fb0394395d26778.scope: Deactivated successfully.
Oct  2 04:51:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:51:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:51:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:51:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:51:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b0895a0b-30d1-48e7-9971-9128fd013df3 does not exist
Oct  2 04:51:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev fe385605-b082-4473-add0-d028a94f29b6 does not exist
Oct  2 04:51:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:51:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641325275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:51:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:51:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641325275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:51:22 np0005465604 nova_compute[260603]: 2025-10-02 08:51:22.271 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Successfully updated port: 06e58664-5a1a-4f08-9eb3-6a275e07a062 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:51:22 np0005465604 nova_compute[260603]: 2025-10-02 08:51:22.289 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:51:22 np0005465604 nova_compute[260603]: 2025-10-02 08:51:22.290 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquired lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:51:22 np0005465604 nova_compute[260603]: 2025-10-02 08:51:22.290 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:51:22 np0005465604 nova_compute[260603]: 2025-10-02 08:51:22.386 2 DEBUG nova.compute.manager [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:51:22 np0005465604 nova_compute[260603]: 2025-10-02 08:51:22.387 2 DEBUG nova.compute.manager [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing instance network info cache due to event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:51:22 np0005465604 nova_compute[260603]: 2025-10-02 08:51:22.387 2 DEBUG oslo_concurrency.lockutils [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:51:22 np0005465604 nova_compute[260603]: 2025-10-02 08:51:22.504 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:51:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:51:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:51:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:23 np0005465604 podman[386105]: 2025-10-02 08:51:23.055104148 +0000 UTC m=+0.100364839 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct  2 04:51:23 np0005465604 podman[386106]: 2025-10-02 08:51:23.059310099 +0000 UTC m=+0.104337333 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.450 2 DEBUG nova.network.neutron [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updating instance_info_cache with network_info: [{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.470 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Releasing lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.471 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Instance network_info: |[{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.471 2 DEBUG oslo_concurrency.lockutils [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.471 2 DEBUG nova.network.neutron [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.474 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Start _get_guest_xml network_info=[{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:51:08Z,direct_url=<?>,disk_format='raw',id=bfda7a79-c502-4c19-8d8e-da8dbbb22d04,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-694582967',owner='bfcc44155f2d45ff9f66fe254a7b21c7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:51:14Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': 'bfda7a79-c502-4c19-8d8e-da8dbbb22d04'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.481 2 WARNING nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.487 2 DEBUG nova.virt.libvirt.host [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.488 2 DEBUG nova.virt.libvirt.host [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.492 2 DEBUG nova.virt.libvirt.host [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.493 2 DEBUG nova.virt.libvirt.host [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.493 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.493 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:51:08Z,direct_url=<?>,disk_format='raw',id=bfda7a79-c502-4c19-8d8e-da8dbbb22d04,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-694582967',owner='bfcc44155f2d45ff9f66fe254a7b21c7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:51:14Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.494 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.494 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.494 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.495 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.495 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.495 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.495 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.496 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.496 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.496 2 DEBUG nova.virt.hardware [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.500 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 249 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 283 KiB/s wr, 118 op/s
Oct  2 04:51:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:51:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600227038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.948 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.985 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:23 np0005465604 nova_compute[260603]: 2025-10-02 08:51:23.991 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:51:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1948412629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.460 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.462 2 DEBUG nova.virt.libvirt.vif [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1642693838',display_name='tempest-TestSnapshotPattern-server-1642693838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1642693838',id=121,image_ref='bfda7a79-c502-4c19-8d8e-da8dbbb22d04',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-uis0qfoh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d5e3e825-fcee-4f1b-8c05-5a9ee07013d7',image_min_disk='1',image_min_ram='0',image_owner_id='bfcc44155f2d45ff9f66fe254a7b21c7',image_owner_project_name='tempest-TestSnapshotPattern-495107275',image_owner_user_name='tempest-TestSnapshotPattern-495107275-project-member',image_user_id='aceb8b0273154f1abe964d78a6261936',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:51:18Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=062b2a3e-b612-42bf-b96c-fb9bdd9008ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.462 2 DEBUG nova.network.os_vif_util [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.463 2 DEBUG nova.network.os_vif_util [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.464 2 DEBUG nova.objects.instance [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 062b2a3e-b612-42bf-b96c-fb9bdd9008ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.495 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  <uuid>062b2a3e-b612-42bf-b96c-fb9bdd9008ee</uuid>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  <name>instance-00000079</name>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestSnapshotPattern-server-1642693838</nova:name>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:51:23</nova:creationTime>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <nova:user uuid="aceb8b0273154f1abe964d78a6261936">tempest-TestSnapshotPattern-495107275-project-member</nova:user>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <nova:project uuid="bfcc44155f2d45ff9f66fe254a7b21c7">tempest-TestSnapshotPattern-495107275</nova:project>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="bfda7a79-c502-4c19-8d8e-da8dbbb22d04"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <nova:port uuid="06e58664-5a1a-4f08-9eb3-6a275e07a062">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <entry name="serial">062b2a3e-b612-42bf-b96c-fb9bdd9008ee</entry>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <entry name="uuid">062b2a3e-b612-42bf-b96c-fb9bdd9008ee</entry>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:e1:f8:b9"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <target dev="tap06e58664-5a"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/console.log" append="off"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <input type="keyboard" bus="usb"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:51:24 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:51:24 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:51:24 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:51:24 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.496 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Preparing to wait for external event network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.497 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.497 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.498 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.499 2 DEBUG nova.virt.libvirt.vif [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1642693838',display_name='tempest-TestSnapshotPattern-server-1642693838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1642693838',id=121,image_ref='bfda7a79-c502-4c19-8d8e-da8dbbb22d04',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-uis0qfoh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d5e3e825-fcee-4f1b-8c05-5a9ee07013d7',image_min_disk='1',image_min_ram='0',image_owner_id='bfcc44155f2d45ff9f66fe254a7b21c7',image_owner_project_name='tempest-TestSnapshotPattern-495107275',image_owner_user_name='tempest-TestSnapshotPattern-495107275-project-member',image_user_id='aceb8b0273154f1abe964d78a6261936',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:51:18Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=062b2a3e-b612-42bf-b96c-fb9bdd9008ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.500 2 DEBUG nova.network.os_vif_util [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.501 2 DEBUG nova.network.os_vif_util [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.501 2 DEBUG os_vif [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.503 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06e58664-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.507 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06e58664-5a, col_values=(('external_ids', {'iface-id': '06e58664-5a1a-4f08-9eb3-6a275e07a062', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:f8:b9', 'vm-uuid': '062b2a3e-b612-42bf-b96c-fb9bdd9008ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:24 np0005465604 NetworkManager[45129]: <info>  [1759395084.5096] manager: (tap06e58664-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.517 2 INFO os_vif [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a')#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.586 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.586 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.587 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] No VIF found with MAC fa:16:3e:e1:f8:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.587 2 INFO nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Using config drive#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.607 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:24 np0005465604 nova_compute[260603]: 2025-10-02 08:51:24.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:25Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:51:fb 10.100.0.11
Oct  2 04:51:25 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:25Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:51:fb 10.100.0.11
Oct  2 04:51:25 np0005465604 nova_compute[260603]: 2025-10-02 08:51:25.547 2 INFO nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Creating config drive at /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config#033[00m
Oct  2 04:51:25 np0005465604 nova_compute[260603]: 2025-10-02 08:51:25.557 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe1haa8op execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 249 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 283 KiB/s wr, 118 op/s
Oct  2 04:51:25 np0005465604 nova_compute[260603]: 2025-10-02 08:51:25.727 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe1haa8op" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:25 np0005465604 nova_compute[260603]: 2025-10-02 08:51:25.756 2 DEBUG nova.storage.rbd_utils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] rbd image 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:51:25 np0005465604 nova_compute[260603]: 2025-10-02 08:51:25.760 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:25 np0005465604 nova_compute[260603]: 2025-10-02 08:51:25.812 2 DEBUG nova.network.neutron [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updated VIF entry in instance network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:51:25 np0005465604 nova_compute[260603]: 2025-10-02 08:51:25.813 2 DEBUG nova.network.neutron [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updating instance_info_cache with network_info: [{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:51:25 np0005465604 nova_compute[260603]: 2025-10-02 08:51:25.840 2 DEBUG oslo_concurrency.lockutils [req-22d839d5-e5a4-4abd-bfba-1862ca002348 req-4d57ce01-c86e-405c-a843-d630890ad079 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:51:25 np0005465604 nova_compute[260603]: 2025-10-02 08:51:25.970 2 DEBUG oslo_concurrency.processutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config 062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:25 np0005465604 nova_compute[260603]: 2025-10-02 08:51:25.972 2 INFO nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Deleting local config drive /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee/disk.config because it was imported into RBD.#033[00m
Oct  2 04:51:26 np0005465604 kernel: tap06e58664-5a: entered promiscuous mode
Oct  2 04:51:26 np0005465604 NetworkManager[45129]: <info>  [1759395086.0353] manager: (tap06e58664-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/499)
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:26 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:26Z|01257|binding|INFO|Claiming lport 06e58664-5a1a-4f08-9eb3-6a275e07a062 for this chassis.
Oct  2 04:51:26 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:26Z|01258|binding|INFO|06e58664-5a1a-4f08-9eb3-6a275e07a062: Claiming fa:16:3e:e1:f8:b9 10.100.0.8
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.048 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:f8:b9 10.100.0.8'], port_security=['fa:16:3e:e1:f8:b9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '062b2a3e-b612-42bf-b96c-fb9bdd9008ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=06e58664-5a1a-4f08-9eb3-6a275e07a062) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.050 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 06e58664-5a1a-4f08-9eb3-6a275e07a062 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 bound to our chassis#033[00m
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.053 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e8372a1-ae46-455d-aa74-54645f729e73#033[00m
Oct  2 04:51:26 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:26Z|01259|binding|INFO|Setting lport 06e58664-5a1a-4f08-9eb3-6a275e07a062 ovn-installed in OVS
Oct  2 04:51:26 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:26Z|01260|binding|INFO|Setting lport 06e58664-5a1a-4f08-9eb3-6a275e07a062 up in Southbound
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.070 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6bddbc90-097b-4c3b-b21e-090b2d01582a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:26 np0005465604 systemd-machined[214636]: New machine qemu-152-instance-00000079.
Oct  2 04:51:26 np0005465604 systemd[1]: Started Virtual Machine qemu-152-instance-00000079.
Oct  2 04:51:26 np0005465604 systemd-udevd[386283]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.106 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f9067118-80b2-4070-97e7-b10395b04f96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.113 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[58d69353-66a0-4a65-afe0-d67b5dcf5fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:26 np0005465604 NetworkManager[45129]: <info>  [1759395086.1169] device (tap06e58664-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:51:26 np0005465604 NetworkManager[45129]: <info>  [1759395086.1183] device (tap06e58664-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.143 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[63c2711a-7897-4062-9f11-0fe01f97b0e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.167 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a18c9a18-2d6d-429a-8ea2-2f1a32e31ce6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e8372a1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:60:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592471, 'reachable_time': 17032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386293, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62529704-a1fa-4f7d-ac17-5367e5bec152]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e8372a1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592482, 'tstamp': 592482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386294, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e8372a1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592486, 'tstamp': 592486}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386294, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.185 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e8372a1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.222 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e8372a1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.222 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.222 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e8372a1-a0, col_values=(('external_ids', {'iface-id': '6972524a-7a8a-4c19-a841-08e51ccd9aaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:26.223 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.660 2 DEBUG nova.compute.manager [req-a2431e80-ddc5-4f18-b625-a22de7cf8299 req-ea40eb60-7e1e-429d-8a27-2e2e60387d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.661 2 DEBUG oslo_concurrency.lockutils [req-a2431e80-ddc5-4f18-b625-a22de7cf8299 req-ea40eb60-7e1e-429d-8a27-2e2e60387d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.662 2 DEBUG oslo_concurrency.lockutils [req-a2431e80-ddc5-4f18-b625-a22de7cf8299 req-ea40eb60-7e1e-429d-8a27-2e2e60387d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.662 2 DEBUG oslo_concurrency.lockutils [req-a2431e80-ddc5-4f18-b625-a22de7cf8299 req-ea40eb60-7e1e-429d-8a27-2e2e60387d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.662 2 DEBUG nova.compute.manager [req-a2431e80-ddc5-4f18-b625-a22de7cf8299 req-ea40eb60-7e1e-429d-8a27-2e2e60387d6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Processing event network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.940 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395086.9399688, 062b2a3e-b612-42bf-b96c-fb9bdd9008ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.941 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] VM Started (Lifecycle Event)#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.943 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.948 2 DEBUG nova.virt.libvirt.driver [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.952 2 INFO nova.virt.libvirt.driver [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Instance spawned successfully.#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.952 2 INFO nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Took 8.72 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.953 2 DEBUG nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.963 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:26 np0005465604 nova_compute[260603]: 2025-10-02 08:51:26.967 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:51:27 np0005465604 nova_compute[260603]: 2025-10-02 08:51:27.000 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:51:27 np0005465604 nova_compute[260603]: 2025-10-02 08:51:27.000 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395086.9409995, 062b2a3e-b612-42bf-b96c-fb9bdd9008ee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:51:27 np0005465604 nova_compute[260603]: 2025-10-02 08:51:27.001 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:51:27 np0005465604 nova_compute[260603]: 2025-10-02 08:51:27.033 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:27 np0005465604 nova_compute[260603]: 2025-10-02 08:51:27.035 2 INFO nova.compute.manager [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Took 9.75 seconds to build instance.#033[00m
Oct  2 04:51:27 np0005465604 nova_compute[260603]: 2025-10-02 08:51:27.039 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395086.9468055, 062b2a3e-b612-42bf-b96c-fb9bdd9008ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:51:27 np0005465604 nova_compute[260603]: 2025-10-02 08:51:27.039 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:51:27 np0005465604 nova_compute[260603]: 2025-10-02 08:51:27.062 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:27 np0005465604 nova_compute[260603]: 2025-10-02 08:51:27.065 2 DEBUG oslo_concurrency.lockutils [None req-03bda302-6e08-41ef-ace7-b1207c78afe2 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:27 np0005465604 nova_compute[260603]: 2025-10-02 08:51:27.067 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:51:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 269 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 741 KiB/s rd, 1.7 MiB/s wr, 105 op/s
Oct  2 04:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:51:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:51:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:51:28
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'images', 'vms', '.rgw.root', '.mgr', 'default.rgw.meta']
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:51:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:51:28 np0005465604 nova_compute[260603]: 2025-10-02 08:51:28.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:51:28 np0005465604 nova_compute[260603]: 2025-10-02 08:51:28.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:51:28 np0005465604 nova_compute[260603]: 2025-10-02 08:51:28.776 2 DEBUG nova.compute.manager [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:51:28 np0005465604 nova_compute[260603]: 2025-10-02 08:51:28.777 2 DEBUG oslo_concurrency.lockutils [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:28 np0005465604 nova_compute[260603]: 2025-10-02 08:51:28.779 2 DEBUG oslo_concurrency.lockutils [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:28 np0005465604 nova_compute[260603]: 2025-10-02 08:51:28.779 2 DEBUG oslo_concurrency.lockutils [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:28 np0005465604 nova_compute[260603]: 2025-10-02 08:51:28.780 2 DEBUG nova.compute.manager [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] No waiting events found dispatching network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:51:28 np0005465604 nova_compute[260603]: 2025-10-02 08:51:28.781 2 WARNING nova.compute.manager [req-8a1dfd64-2d46-48df-870b-92e5aa61fa54 req-3f6141a8-860c-4eba-8591-614241604ceb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received unexpected event network-vif-plugged-06e58664-5a1a-4f08-9eb3-6a275e07a062 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:51:29 np0005465604 nova_compute[260603]: 2025-10-02 08:51:29.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1006 KiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct  2 04:51:29 np0005465604 nova_compute[260603]: 2025-10-02 08:51:29.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 541 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct  2 04:51:31 np0005465604 nova_compute[260603]: 2025-10-02 08:51:31.563 2 INFO nova.compute.manager [None req-3360325b-e0d4-438a-b8b2-a4d1d31d78f9 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Get console output#033[00m
Oct  2 04:51:31 np0005465604 nova_compute[260603]: 2025-10-02 08:51:31.569 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:51:31 np0005465604 nova_compute[260603]: 2025-10-02 08:51:31.908 2 DEBUG oslo_concurrency.lockutils [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:31 np0005465604 nova_compute[260603]: 2025-10-02 08:51:31.909 2 DEBUG oslo_concurrency.lockutils [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:31 np0005465604 nova_compute[260603]: 2025-10-02 08:51:31.909 2 DEBUG nova.compute.manager [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:31 np0005465604 nova_compute[260603]: 2025-10-02 08:51:31.914 2 DEBUG nova.compute.manager [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  2 04:51:31 np0005465604 nova_compute[260603]: 2025-10-02 08:51:31.915 2 DEBUG nova.objects.instance [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'flavor' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:31 np0005465604 nova_compute[260603]: 2025-10-02 08:51:31.941 2 DEBUG nova.virt.libvirt.driver [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:51:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:33 np0005465604 nova_compute[260603]: 2025-10-02 08:51:33.488 2 DEBUG nova.compute.manager [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:51:33 np0005465604 nova_compute[260603]: 2025-10-02 08:51:33.488 2 DEBUG nova.compute.manager [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing instance network info cache due to event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:51:33 np0005465604 nova_compute[260603]: 2025-10-02 08:51:33.488 2 DEBUG oslo_concurrency.lockutils [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:51:33 np0005465604 nova_compute[260603]: 2025-10-02 08:51:33.488 2 DEBUG oslo_concurrency.lockutils [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:51:33 np0005465604 nova_compute[260603]: 2025-10-02 08:51:33.489 2 DEBUG nova.network.neutron [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:51:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Oct  2 04:51:34 np0005465604 kernel: tapceb50499-7e (unregistering): left promiscuous mode
Oct  2 04:51:34 np0005465604 NetworkManager[45129]: <info>  [1759395094.2336] device (tapceb50499-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:34Z|01261|binding|INFO|Releasing lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 from this chassis (sb_readonly=0)
Oct  2 04:51:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:34Z|01262|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 down in Southbound
Oct  2 04:51:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:34Z|01263|binding|INFO|Removing iface tapceb50499-7e ovn-installed in OVS
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.252 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:51:fb 10.100.0.11'], port_security=['fa:16:3e:80:51:fb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0239090c-f1eb-4b8b-8f45-94efee345fa5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a398fde7-e9b6-47e4-afc2-221c0b15c74f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=633447ab-1fed-4ecc-895b-fd3d7334df6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.253 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 in datapath 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 unbound from our chassis#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.254 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.256 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4871c549-7a61-4e88-89fd-0bdeccff6119]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.257 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 namespace which is not needed anymore#033[00m
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:34 np0005465604 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct  2 04:51:34 np0005465604 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Consumed 13.173s CPU time.
Oct  2 04:51:34 np0005465604 systemd-machined[214636]: Machine qemu-151-instance-00000078 terminated.
Oct  2 04:51:34 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [NOTICE]   (384636) : haproxy version is 2.8.14-c23fe91
Oct  2 04:51:34 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [NOTICE]   (384636) : path to executable is /usr/sbin/haproxy
Oct  2 04:51:34 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [WARNING]  (384636) : Exiting Master process...
Oct  2 04:51:34 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [ALERT]    (384636) : Current worker (384638) exited with code 143 (Terminated)
Oct  2 04:51:34 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[384631]: [WARNING]  (384636) : All workers exited. Exiting... (0)
Oct  2 04:51:34 np0005465604 systemd[1]: libpod-f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3.scope: Deactivated successfully.
Oct  2 04:51:34 np0005465604 podman[386362]: 2025-10-02 08:51:34.418002522 +0000 UTC m=+0.052947270 container died f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:51:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3-userdata-shm.mount: Deactivated successfully.
Oct  2 04:51:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4a077a86bcce7e5489f801b5d34f88a8eb9ddfc03b2537cc12437d7144a2267e-merged.mount: Deactivated successfully.
Oct  2 04:51:34 np0005465604 podman[386362]: 2025-10-02 08:51:34.47664485 +0000 UTC m=+0.111589588 container cleanup f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:51:34 np0005465604 systemd[1]: libpod-conmon-f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3.scope: Deactivated successfully.
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:34 np0005465604 podman[386396]: 2025-10-02 08:51:34.597842688 +0000 UTC m=+0.059869437 container remove f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.604 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b5710436-8545-4f5a-92c6-e2c5d8d92b1c]: (4, ('Thu Oct  2 08:51:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 (f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3)\nf261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3\nThu Oct  2 08:51:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 (f261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3)\nf261fcd3d8dd3dea559510f319bf56b1db30ff3952f36fee29249208e2ee66d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.607 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91970ce8-b63d-4295-8506-41c9dd6ef68b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.608 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21ddf177-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:34 np0005465604 kernel: tap21ddf177-30: left promiscuous mode
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.632 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa91ca3-4212-44b5-9ef9-dd0bcbc559cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.650 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0241a934-2018-45b5-9dcc-a041edd5b38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.651 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6403598-abf9-4e59-9ffb-06a6e2d3a9b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.667 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ea7ba9-c035-45b9-b3b0-73f99fed59c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594978, 'reachable_time': 35841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386415, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.670 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.670 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5c861e-d72a-455b-9629-7c7025d1870d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:34 np0005465604 systemd[1]: run-netns-ovnmeta\x2d21ddf177\x2d3d8e\x2d4ce2\x2d9cd9\x2dfa17adfabd73.mount: Deactivated successfully.
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.832 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.833 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:34.834 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.959 2 INFO nova.virt.libvirt.driver [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.972 2 INFO nova.virt.libvirt.driver [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance destroyed successfully.#033[00m
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.973 2 DEBUG nova.objects.instance [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:34 np0005465604 nova_compute[260603]: 2025-10-02 08:51:34.988 2 DEBUG nova.compute.manager [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:35 np0005465604 nova_compute[260603]: 2025-10-02 08:51:35.036 2 DEBUG oslo_concurrency.lockutils [None req-5f72936b-91ca-4aa1-a9b7-2ecc72e9cffb 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:35 np0005465604 nova_compute[260603]: 2025-10-02 08:51:35.219 2 DEBUG nova.network.neutron [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updated VIF entry in instance network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:51:35 np0005465604 nova_compute[260603]: 2025-10-02 08:51:35.220 2 DEBUG nova.network.neutron [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updating instance_info_cache with network_info: [{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:51:35 np0005465604 nova_compute[260603]: 2025-10-02 08:51:35.238 2 DEBUG oslo_concurrency.lockutils [req-afecb621-66ec-4bdd-8de9-707e68a04793 req-66f0fe5c-4b39-43d9-be30-b14677c82974 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:51:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 140 op/s
Oct  2 04:51:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 140 op/s
Oct  2 04:51:37 np0005465604 nova_compute[260603]: 2025-10-02 08:51:37.758 2 DEBUG nova.compute.manager [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:51:37 np0005465604 nova_compute[260603]: 2025-10-02 08:51:37.758 2 DEBUG oslo_concurrency.lockutils [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:37 np0005465604 nova_compute[260603]: 2025-10-02 08:51:37.758 2 DEBUG oslo_concurrency.lockutils [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:37 np0005465604 nova_compute[260603]: 2025-10-02 08:51:37.759 2 DEBUG oslo_concurrency.lockutils [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:37 np0005465604 nova_compute[260603]: 2025-10-02 08:51:37.759 2 DEBUG nova.compute.manager [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:51:37 np0005465604 nova_compute[260603]: 2025-10-02 08:51:37.759 2 WARNING nova.compute.manager [req-51a0e2a0-e79e-4ff9-a6ff-cf2d94445e76 req-142366ee-ecc1-4578-8256-4a8fc06c4c68 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 04:51:37 np0005465604 nova_compute[260603]: 2025-10-02 08:51:37.885 2 INFO nova.compute.manager [None req-59e53893-93dd-4111-9be1-70be2ccc7669 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Get console output#033[00m
Oct  2 04:51:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:38 np0005465604 nova_compute[260603]: 2025-10-02 08:51:38.098 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'flavor' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:38 np0005465604 nova_compute[260603]: 2025-10-02 08:51:38.124 2 DEBUG oslo_concurrency.lockutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:51:38 np0005465604 nova_compute[260603]: 2025-10-02 08:51:38.124 2 DEBUG oslo_concurrency.lockutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:51:38 np0005465604 nova_compute[260603]: 2025-10-02 08:51:38.124 2 DEBUG nova.network.neutron [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:51:38 np0005465604 nova_compute[260603]: 2025-10-02 08:51:38.124 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'info_cache' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:38 np0005465604 nova_compute[260603]: 2025-10-02 08:51:38.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015229338615940613 of space, bias 1.0, pg target 0.4568801584782184 quantized to 32 (current 32)
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014239867202253044 of space, bias 1.0, pg target 0.42719601606759133 quantized to 32 (current 32)
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:51:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:51:39 np0005465604 nova_compute[260603]: 2025-10-02 08:51:39.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 757 KiB/s wr, 100 op/s
Oct  2 04:51:39 np0005465604 nova_compute[260603]: 2025-10-02 08:51:39.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:39 np0005465604 nova_compute[260603]: 2025-10-02 08:51:39.855 2 DEBUG nova.compute.manager [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:51:39 np0005465604 nova_compute[260603]: 2025-10-02 08:51:39.855 2 DEBUG oslo_concurrency.lockutils [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:39 np0005465604 nova_compute[260603]: 2025-10-02 08:51:39.855 2 DEBUG oslo_concurrency.lockutils [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:39 np0005465604 nova_compute[260603]: 2025-10-02 08:51:39.856 2 DEBUG oslo_concurrency.lockutils [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:39 np0005465604 nova_compute[260603]: 2025-10-02 08:51:39.856 2 DEBUG nova.compute.manager [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:51:39 np0005465604 nova_compute[260603]: 2025-10-02 08:51:39.856 2 WARNING nova.compute.manager [req-afe8d924-2c2a-4c5f-9e4d-d1921cf2a5d0 req-ac1ee12e-431a-405b-9353-6583c00982f1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  2 04:51:40 np0005465604 nova_compute[260603]: 2025-10-02 08:51:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:51:40 np0005465604 nova_compute[260603]: 2025-10-02 08:51:40.961 2 DEBUG nova.network.neutron [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:51:40 np0005465604 nova_compute[260603]: 2025-10-02 08:51:40.986 2 DEBUG oslo_concurrency.lockutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.010 2 INFO nova.virt.libvirt.driver [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance destroyed successfully.#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.011 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.030 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.047 2 DEBUG nova.virt.libvirt.vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:51:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:51:35Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.048 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.048 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.049 2 DEBUG os_vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapceb50499-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.055 2 INFO os_vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e')#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.061 2 DEBUG nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start _get_guest_xml network_info=[{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.064 2 WARNING nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.070 2 DEBUG nova.virt.libvirt.host [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.070 2 DEBUG nova.virt.libvirt.host [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.073 2 DEBUG nova.virt.libvirt.host [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.074 2 DEBUG nova.virt.libvirt.host [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.074 2 DEBUG nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.074 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.075 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.075 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.075 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.075 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.076 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.076 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.076 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.076 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.076 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.077 2 DEBUG nova.virt.hardware [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.077 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.097 2 DEBUG oslo_concurrency.processutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:51:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104328999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.515 2 DEBUG oslo_concurrency.processutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.554 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.555 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.555 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:51:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 279 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 30 KiB/s wr, 61 op/s
Oct  2 04:51:41 np0005465604 nova_compute[260603]: 2025-10-02 08:51:41.565 2 DEBUG oslo_concurrency.processutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:41Z|00135|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.8
Oct  2 04:51:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:41Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e1:f8:b9 10.100.0.8
Oct  2 04:51:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:51:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3804192273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.021 2 DEBUG oslo_concurrency.processutils [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.022 2 DEBUG nova.virt.libvirt.vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:51:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:51:35Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.023 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.023 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.025 2 DEBUG nova.objects.instance [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.037 2 DEBUG nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  <uuid>0239090c-f1eb-4b8b-8f45-94efee345fa5</uuid>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  <name>instance-00000078</name>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1642420573</nova:name>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:51:41</nova:creationTime>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <nova:port uuid="ceb50499-7e3c-4d47-a2dc-05ce86dbbde1">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <entry name="serial">0239090c-f1eb-4b8b-8f45-94efee345fa5</entry>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <entry name="uuid">0239090c-f1eb-4b8b-8f45-94efee345fa5</entry>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/0239090c-f1eb-4b8b-8f45-94efee345fa5_disk">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/0239090c-f1eb-4b8b-8f45-94efee345fa5_disk.config">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:80:51:fb"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <target dev="tapceb50499-7e"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5/console.log" append="off"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <input type="keyboard" bus="usb"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:51:42 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:51:42 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:51:42 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:51:42 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.040 2 DEBUG nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.040 2 DEBUG nova.virt.libvirt.driver [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.042 2 DEBUG nova.virt.libvirt.vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:51:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:51:35Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.042 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.043 2 DEBUG nova.network.os_vif_util [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.044 2 DEBUG os_vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.051 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapceb50499-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapceb50499-7e, col_values=(('external_ids', {'iface-id': 'ceb50499-7e3c-4d47-a2dc-05ce86dbbde1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:51:fb', 'vm-uuid': '0239090c-f1eb-4b8b-8f45-94efee345fa5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:42 np0005465604 NetworkManager[45129]: <info>  [1759395102.0556] manager: (tapceb50499-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.061 2 INFO os_vif [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e')#033[00m
Oct  2 04:51:42 np0005465604 kernel: tapceb50499-7e: entered promiscuous mode
Oct  2 04:51:42 np0005465604 NetworkManager[45129]: <info>  [1759395102.1412] manager: (tapceb50499-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:42Z|01264|binding|INFO|Claiming lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for this chassis.
Oct  2 04:51:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:42Z|01265|binding|INFO|ceb50499-7e3c-4d47-a2dc-05ce86dbbde1: Claiming fa:16:3e:80:51:fb 10.100.0.11
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.154 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:51:fb 10.100.0.11'], port_security=['fa:16:3e:80:51:fb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0239090c-f1eb-4b8b-8f45-94efee345fa5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a398fde7-e9b6-47e4-afc2-221c0b15c74f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=633447ab-1fed-4ecc-895b-fd3d7334df6b, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.155 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 in datapath 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 bound to our chassis#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.156 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73#033[00m
Oct  2 04:51:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:42Z|01266|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 ovn-installed in OVS
Oct  2 04:51:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:42Z|01267|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 up in Southbound
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.173 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e745fc-fae5-46e8-8cbc-5b58fcec8336]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.174 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap21ddf177-31 in ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.178 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap21ddf177-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.178 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7e8652-85bc-4996-9187-2c8f6a929c4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.179 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[971b6ca9-3887-41cd-9431-a82d526d357f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 systemd-machined[214636]: New machine qemu-153-instance-00000078.
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.191 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b00c6ae6-082c-4c7c-a86a-a94329e50246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 systemd[1]: Started Virtual Machine qemu-153-instance-00000078.
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.211 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d86e141b-e855-443c-9a03-c1422ba3b7ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 systemd-udevd[386501]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:51:42 np0005465604 NetworkManager[45129]: <info>  [1759395102.2402] device (tapceb50499-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:51:42 np0005465604 NetworkManager[45129]: <info>  [1759395102.2413] device (tapceb50499-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.249 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4f8ac3-ae1f-454f-8b5b-c52aa959fae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 systemd-udevd[386506]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:51:42 np0005465604 NetworkManager[45129]: <info>  [1759395102.2570] manager: (tap21ddf177-30): new Veth device (/org/freedesktop/NetworkManager/Devices/502)
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.256 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[08c25fb5-0d63-4ffd-b53b-902285ab4c17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.296 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6dbe0c0a-6b67-465f-99db-029aa00dcd23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.300 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1917d7a6-9ccb-4fd1-abfb-0d0e9ab3881a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 NetworkManager[45129]: <info>  [1759395102.3238] device (tap21ddf177-30): carrier: link connected
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.330 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[50f0e135-84b0-4562-a415-46a36d2925db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.350 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e10b6da6-c6f3-4c9f-ae3f-8c73be315fd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21ddf177-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:39:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598092, 'reachable_time': 17190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386530, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.367 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9e51be-9900-4f04-9e13-c613b1628107]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:390d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598092, 'tstamp': 598092}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386531, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[84951224-8c46-462b-8234-ef8a113c0d6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21ddf177-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:39:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598092, 'reachable_time': 17190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386536, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.418 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2df0502-bbce-47e3-b808-cba11a4eb958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.483 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8102d05-2863-4120-a183-ae2abbeda929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.485 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21ddf177-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.485 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.486 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21ddf177-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:42 np0005465604 NetworkManager[45129]: <info>  [1759395102.4886] manager: (tap21ddf177-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Oct  2 04:51:42 np0005465604 kernel: tap21ddf177-30: entered promiscuous mode
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.494 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21ddf177-30, col_values=(('external_ids', {'iface-id': 'da5470e2-37c6-4448-8dab-7b3e1d890a66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:51:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:42Z|01268|binding|INFO|Releasing lport da5470e2-37c6-4448-8dab-7b3e1d890a66 from this chassis (sb_readonly=0)
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.499 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.500 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac4663b-d718-40ab-a8eb-19b041d8c54f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.501 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.pid.haproxy
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:51:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:51:42.502 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'env', 'PROCESS_TAG=haproxy-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/21ddf177-3d8e-4ce2-9cd9-fa17adfabd73.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.507 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.507 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.507 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.508 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.722 2 DEBUG nova.compute.manager [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.723 2 DEBUG oslo_concurrency.lockutils [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.723 2 DEBUG oslo_concurrency.lockutils [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.723 2 DEBUG oslo_concurrency.lockutils [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.724 2 DEBUG nova.compute.manager [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.724 2 WARNING nova.compute.manager [req-8458ac16-58d4-4054-a820-350ae3e5ae46 req-1ad74504-f8c0-44a5-bd81-3605a47e6dcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.933 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 0239090c-f1eb-4b8b-8f45-94efee345fa5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.934 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395102.9308743, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.934 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.945 2 DEBUG nova.compute.manager [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.949 2 INFO nova.virt.libvirt.driver [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance rebooted successfully.#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.949 2 DEBUG nova.compute.manager [None req-c25731c5-ff12-41c6-a535-bb14dd9f8a2d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.976 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:42 np0005465604 nova_compute[260603]: 2025-10-02 08:51:42.980 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:51:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:42 np0005465604 podman[386606]: 2025-10-02 08:51:42.900278882 +0000 UTC m=+0.031544282 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:51:42 np0005465604 podman[386606]: 2025-10-02 08:51:42.995627375 +0000 UTC m=+0.126892725 container create c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 04:51:43 np0005465604 nova_compute[260603]: 2025-10-02 08:51:43.012 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Oct  2 04:51:43 np0005465604 nova_compute[260603]: 2025-10-02 08:51:43.013 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395102.9449165, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:51:43 np0005465604 nova_compute[260603]: 2025-10-02 08:51:43.014 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Started (Lifecycle Event)#033[00m
Oct  2 04:51:43 np0005465604 systemd[1]: Started libpod-conmon-c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5.scope.
Oct  2 04:51:43 np0005465604 nova_compute[260603]: 2025-10-02 08:51:43.044 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:51:43 np0005465604 nova_compute[260603]: 2025-10-02 08:51:43.047 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:51:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:51:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce46867541b1cf76ec1f6225ba651561c209bb33faf6a185de6c04d8cdfff8a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:51:43 np0005465604 podman[386606]: 2025-10-02 08:51:43.068248466 +0000 UTC m=+0.199513836 container init c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 04:51:43 np0005465604 podman[386606]: 2025-10-02 08:51:43.074003239 +0000 UTC m=+0.205268589 container start c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:51:43 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [NOTICE]   (386626) : New worker (386628) forked
Oct  2 04:51:43 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [NOTICE]   (386626) : Loading success.
Oct  2 04:51:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 294 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 529 KiB/s wr, 116 op/s
Oct  2 04:51:44 np0005465604 nova_compute[260603]: 2025-10-02 08:51:44.826 2 DEBUG nova.compute.manager [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:51:44 np0005465604 nova_compute[260603]: 2025-10-02 08:51:44.828 2 DEBUG oslo_concurrency.lockutils [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:44 np0005465604 nova_compute[260603]: 2025-10-02 08:51:44.828 2 DEBUG oslo_concurrency.lockutils [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:44 np0005465604 nova_compute[260603]: 2025-10-02 08:51:44.829 2 DEBUG oslo_concurrency.lockutils [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:44 np0005465604 nova_compute[260603]: 2025-10-02 08:51:44.829 2 DEBUG nova.compute.manager [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:51:44 np0005465604 nova_compute[260603]: 2025-10-02 08:51:44.830 2 WARNING nova.compute.manager [req-fa853198-e6e5-45ed-a738-89bbc7b312e2 req-40ff427f-eaff-429d-b924-2b296af3d06f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:51:44 np0005465604 nova_compute[260603]: 2025-10-02 08:51:44.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 294 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 55 op/s
Oct  2 04:51:45 np0005465604 nova_compute[260603]: 2025-10-02 08:51:45.797 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:51:45 np0005465604 nova_compute[260603]: 2025-10-02 08:51:45.825 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:51:45 np0005465604 nova_compute[260603]: 2025-10-02 08:51:45.826 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:51:45 np0005465604 nova_compute[260603]: 2025-10-02 08:51:45.827 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:51:45 np0005465604 nova_compute[260603]: 2025-10-02 08:51:45.827 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:51:45 np0005465604 nova_compute[260603]: 2025-10-02 08:51:45.827 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:51:45 np0005465604 nova_compute[260603]: 2025-10-02 08:51:45.863 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:45 np0005465604 nova_compute[260603]: 2025-10-02 08:51:45.864 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:45 np0005465604 nova_compute[260603]: 2025-10-02 08:51:45.864 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:45 np0005465604 nova_compute[260603]: 2025-10-02 08:51:45.865 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:51:45 np0005465604 nova_compute[260603]: 2025-10-02 08:51:45.865 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:45 np0005465604 podman[386639]: 2025-10-02 08:51:45.989532201 +0000 UTC m=+0.053149186 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:51:46 np0005465604 podman[386638]: 2025-10-02 08:51:46.015188265 +0000 UTC m=+0.086537165 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 04:51:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:51:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2413806608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.329 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.396 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.396 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.399 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.399 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.402 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.403 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:51:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:46Z|00137|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.8
Oct  2 04:51:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:46Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e1:f8:b9 10.100.0.8
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.551 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.552 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3183MB free_disk=59.8912467956543GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.552 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.623 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.623 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 0239090c-f1eb-4b8b-8f45-94efee345fa5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.623 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 062b2a3e-b612-42bf-b96c-fb9bdd9008ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.624 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.624 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:51:46 np0005465604 nova_compute[260603]: 2025-10-02 08:51:46.677 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:51:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:46Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:f8:b9 10.100.0.8
Oct  2 04:51:46 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:46Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:f8:b9 10.100.0.8
Oct  2 04:51:47 np0005465604 nova_compute[260603]: 2025-10-02 08:51:47.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:51:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1011446166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:51:47 np0005465604 nova_compute[260603]: 2025-10-02 08:51:47.093 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:51:47 np0005465604 nova_compute[260603]: 2025-10-02 08:51:47.098 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:51:47 np0005465604 nova_compute[260603]: 2025-10-02 08:51:47.114 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:51:47 np0005465604 nova_compute[260603]: 2025-10-02 08:51:47.136 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:51:47 np0005465604 nova_compute[260603]: 2025-10-02 08:51:47.136 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:51:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 546 KiB/s wr, 109 op/s
Oct  2 04:51:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 545 KiB/s wr, 124 op/s
Oct  2 04:51:49 np0005465604 nova_compute[260603]: 2025-10-02 08:51:49.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 545 KiB/s wr, 124 op/s
Oct  2 04:51:51 np0005465604 nova_compute[260603]: 2025-10-02 08:51:51.828 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:51:51 np0005465604 nova_compute[260603]: 2025-10-02 08:51:51.829 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:51:52 np0005465604 nova_compute[260603]: 2025-10-02 08:51:52.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 549 KiB/s wr, 124 op/s
Oct  2 04:51:54 np0005465604 podman[386726]: 2025-10-02 08:51:53.999486846 +0000 UTC m=+0.067241013 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:51:54 np0005465604 podman[386727]: 2025-10-02 08:51:54.044884905 +0000 UTC m=+0.096752728 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:51:54 np0005465604 nova_compute[260603]: 2025-10-02 08:51:54.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:51:54Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:51:fb 10.100.0.11
Oct  2 04:51:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 50 KiB/s wr, 69 op/s
Oct  2 04:51:57 np0005465604 nova_compute[260603]: 2025-10-02 08:51:57.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:51:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 60 KiB/s wr, 99 op/s
Oct  2 04:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:51:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:51:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:51:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1021 KiB/s rd, 15 KiB/s wr, 59 op/s
Oct  2 04:51:59 np0005465604 nova_compute[260603]: 2025-10-02 08:51:59.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:00 np0005465604 nova_compute[260603]: 2025-10-02 08:52:00.157 2 INFO nova.compute.manager [None req-2cd302c7-df42-4ade-acca-e6e914f44ab9 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Get console output#033[00m
Oct  2 04:52:00 np0005465604 nova_compute[260603]: 2025-10-02 08:52:00.164 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:52:00 np0005465604 nova_compute[260603]: 2025-10-02 08:52:00.907 2 DEBUG nova.compute.manager [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:00 np0005465604 nova_compute[260603]: 2025-10-02 08:52:00.907 2 DEBUG nova.compute.manager [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing instance network info cache due to event network-changed-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:52:00 np0005465604 nova_compute[260603]: 2025-10-02 08:52:00.908 2 DEBUG oslo_concurrency.lockutils [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:52:00 np0005465604 nova_compute[260603]: 2025-10-02 08:52:00.909 2 DEBUG oslo_concurrency.lockutils [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:52:00 np0005465604 nova_compute[260603]: 2025-10-02 08:52:00.909 2 DEBUG nova.network.neutron [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Refreshing network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.008 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.008 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.009 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.009 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.009 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.011 2 INFO nova.compute.manager [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Terminating instance#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.012 2 DEBUG nova.compute.manager [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:52:01 np0005465604 kernel: tapceb50499-7e (unregistering): left promiscuous mode
Oct  2 04:52:01 np0005465604 NetworkManager[45129]: <info>  [1759395121.0647] device (tapceb50499-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:52:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:01Z|01269|binding|INFO|Releasing lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 from this chassis (sb_readonly=0)
Oct  2 04:52:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:01Z|01270|binding|INFO|Setting lport ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 down in Southbound
Oct  2 04:52:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:01Z|01271|binding|INFO|Removing iface tapceb50499-7e ovn-installed in OVS
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.093 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:51:fb 10.100.0.11'], port_security=['fa:16:3e:80:51:fb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0239090c-f1eb-4b8b-8f45-94efee345fa5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a398fde7-e9b6-47e4-afc2-221c0b15c74f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=633447ab-1fed-4ecc-895b-fd3d7334df6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.096 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 in datapath 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 unbound from our chassis#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.099 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.101 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[21f7503d-b315-42e0-b135-ccdb6d5a8d06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.101 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 namespace which is not needed anymore#033[00m
Oct  2 04:52:01 np0005465604 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct  2 04:52:01 np0005465604 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000078.scope: Consumed 12.360s CPU time.
Oct  2 04:52:01 np0005465604 systemd-machined[214636]: Machine qemu-153-instance-00000078 terminated.
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.261 2 INFO nova.virt.libvirt.driver [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Instance destroyed successfully.#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.263 2 DEBUG nova.objects.instance [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 0239090c-f1eb-4b8b-8f45-94efee345fa5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:52:01 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [NOTICE]   (386626) : haproxy version is 2.8.14-c23fe91
Oct  2 04:52:01 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [NOTICE]   (386626) : path to executable is /usr/sbin/haproxy
Oct  2 04:52:01 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [WARNING]  (386626) : Exiting Master process...
Oct  2 04:52:01 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [ALERT]    (386626) : Current worker (386628) exited with code 143 (Terminated)
Oct  2 04:52:01 np0005465604 neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73[386621]: [WARNING]  (386626) : All workers exited. Exiting... (0)
Oct  2 04:52:01 np0005465604 systemd[1]: libpod-c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5.scope: Deactivated successfully.
Oct  2 04:52:01 np0005465604 podman[386790]: 2025-10-02 08:52:01.282444132 +0000 UTC m=+0.073211502 container died c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.282 2 DEBUG nova.virt.libvirt.vif [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:51:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1642420573',display_name='tempest-TestNetworkAdvancedServerOps-server-1642420573',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1642420573',id=120,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA4QvS560h5Nt86MUax5zsZlHkfG3ItwowrPNScykbxoYWCKz1GitCrHEDcUIk0uoHK8H9Y5yTAmbaorVjHeOMl3in6cmQv3hOwWfkcNBCwehb1AIqliYtOYjjLnnPR64Q==',key_name='tempest-TestNetworkAdvancedServerOps-1971503056',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:51:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-vx22hflz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:51:43Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=0239090c-f1eb-4b8b-8f45-94efee345fa5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.283 2 DEBUG nova.network.os_vif_util [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.284 2 DEBUG nova.network.os_vif_util [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.284 2 DEBUG os_vif [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.286 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapceb50499-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.297 2 INFO os_vif [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:51:fb,bridge_name='br-int',has_traffic_filtering=True,id=ceb50499-7e3c-4d47-a2dc-05ce86dbbde1,network=Network(21ddf177-3d8e-4ce2-9cd9-fa17adfabd73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapceb50499-7e')#033[00m
Oct  2 04:52:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ce46867541b1cf76ec1f6225ba651561c209bb33faf6a185de6c04d8cdfff8a2-merged.mount: Deactivated successfully.
Oct  2 04:52:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5-userdata-shm.mount: Deactivated successfully.
Oct  2 04:52:01 np0005465604 podman[386790]: 2025-10-02 08:52:01.342050802 +0000 UTC m=+0.132818172 container cleanup c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 04:52:01 np0005465604 systemd[1]: libpod-conmon-c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5.scope: Deactivated successfully.
Oct  2 04:52:01 np0005465604 podman[386845]: 2025-10-02 08:52:01.429304898 +0000 UTC m=+0.058709822 container remove c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.438 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a36cae46-d40c-4af6-8d2c-85a6dd2356eb]: (4, ('Thu Oct  2 08:52:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 (c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5)\nc06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5\nThu Oct  2 08:52:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 (c06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5)\nc06c3b14bd7b827986c9869438a52fa0c9ee82beb5f07eff2b6dba390881bbd5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.442 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[433f2d7a-4553-4c4a-ac3d-ba0ff1345d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.444 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21ddf177-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:01 np0005465604 kernel: tap21ddf177-30: left promiscuous mode
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.487 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0451e5a-ced4-430f-855b-f664cc3ef0b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.522 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c12f3edf-373a-4a16-b34c-6a3e86d6180f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.523 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1fcc2142-c8f0-435e-bcb8-07ed1cebcc54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.548 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[16da16db-ad5f-4a99-9dd0-354d2df51290]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598084, 'reachable_time': 15525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386861, 'error': None, 'target': 'ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.550 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-21ddf177-3d8e-4ce2-9cd9-fa17adfabd73 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:52:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:01.550 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8251e9cf-af91-4934-8e3f-96b684939684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:01 np0005465604 systemd[1]: run-netns-ovnmeta\x2d21ddf177\x2d3d8e\x2d4ce2\x2d9cd9\x2dfa17adfabd73.mount: Deactivated successfully.
Oct  2 04:52:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 297 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 523 KiB/s rd, 15 KiB/s wr, 43 op/s
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.774 2 INFO nova.virt.libvirt.driver [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Deleting instance files /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5_del#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.774 2 INFO nova.virt.libvirt.driver [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Deletion of /var/lib/nova/instances/0239090c-f1eb-4b8b-8f45-94efee345fa5_del complete#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.833 2 INFO nova.compute.manager [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.833 2 DEBUG oslo.service.loopingcall [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.834 2 DEBUG nova.compute.manager [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:52:01 np0005465604 nova_compute[260603]: 2025-10-02 08:52:01.834 2 DEBUG nova.network.neutron [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:52:02 np0005465604 nova_compute[260603]: 2025-10-02 08:52:02.683 2 DEBUG nova.network.neutron [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updated VIF entry in instance network info cache for port ceb50499-7e3c-4d47-a2dc-05ce86dbbde1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:52:02 np0005465604 nova_compute[260603]: 2025-10-02 08:52:02.684 2 DEBUG nova.network.neutron [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [{"id": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "address": "fa:16:3e:80:51:fb", "network": {"id": "21ddf177-3d8e-4ce2-9cd9-fa17adfabd73", "bridge": "br-int", "label": "tempest-network-smoke--460980015", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapceb50499-7e", "ovs_interfaceid": "ceb50499-7e3c-4d47-a2dc-05ce86dbbde1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:52:02 np0005465604 nova_compute[260603]: 2025-10-02 08:52:02.720 2 DEBUG oslo_concurrency.lockutils [req-d9b4a8f1-c8ab-4815-8879-2122c7edc872 req-951e2142-93fe-47c7-8f5d-403c3a817bee 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-0239090c-f1eb-4b8b-8f45-94efee345fa5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:52:02 np0005465604 nova_compute[260603]: 2025-10-02 08:52:02.743 2 DEBUG nova.network.neutron [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:52:02 np0005465604 nova_compute[260603]: 2025-10-02 08:52:02.763 2 INFO nova.compute.manager [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Took 0.93 seconds to deallocate network for instance.#033[00m
Oct  2 04:52:02 np0005465604 nova_compute[260603]: 2025-10-02 08:52:02.828 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:02 np0005465604 nova_compute[260603]: 2025-10-02 08:52:02.829 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:02.889 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:52:02 np0005465604 nova_compute[260603]: 2025-10-02 08:52:02.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:02.892 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:52:02 np0005465604 nova_compute[260603]: 2025-10-02 08:52:02.923 2 DEBUG oslo_concurrency.processutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.010 2 DEBUG nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.011 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.011 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.012 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.012 2 DEBUG nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.012 2 WARNING nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-unplugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.012 2 DEBUG nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.012 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.013 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.013 2 DEBUG oslo_concurrency.lockutils [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.013 2 DEBUG nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] No waiting events found dispatching network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.013 2 WARNING nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received unexpected event network-vif-plugged-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.014 2 DEBUG nova.compute.manager [req-a6e60fa5-5a4e-4475-8d17-06761858bdbc req-26ae5be2-f07a-4346-a231-3ae1b45545cd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Received event network-vif-deleted-ceb50499-7e3c-4d47-a2dc-05ce86dbbde1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:52:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/262064444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.405 2 DEBUG oslo_concurrency.processutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.411 2 DEBUG nova.compute.provider_tree [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.428 2 DEBUG nova.scheduler.client.report [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.450 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.482 2 INFO nova.scheduler.client.report [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Deleted allocations for instance 0239090c-f1eb-4b8b-8f45-94efee345fa5#033[00m
Oct  2 04:52:03 np0005465604 nova_compute[260603]: 2025-10-02 08:52:03.549 2 DEBUG oslo_concurrency.lockutils [None req-c5e864c6-eec4-4cdd-bdb6-651428743ae8 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "0239090c-f1eb-4b8b-8f45-94efee345fa5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 217 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 30 KiB/s wr, 73 op/s
Oct  2 04:52:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:03.895 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:04 np0005465604 nova_compute[260603]: 2025-10-02 08:52:04.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 217 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 27 KiB/s wr, 72 op/s
Oct  2 04:52:06 np0005465604 nova_compute[260603]: 2025-10-02 08:52:06.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:06 np0005465604 nova_compute[260603]: 2025-10-02 08:52:06.425 2 DEBUG nova.compute.manager [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:52:06 np0005465604 nova_compute[260603]: 2025-10-02 08:52:06.484 2 INFO nova.compute.manager [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] instance snapshotting#033[00m
Oct  2 04:52:06 np0005465604 nova_compute[260603]: 2025-10-02 08:52:06.753 2 INFO nova.virt.libvirt.driver [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Beginning live snapshot process#033[00m
Oct  2 04:52:06 np0005465604 nova_compute[260603]: 2025-10-02 08:52:06.898 2 DEBUG nova.storage.rbd_utils [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] creating snapshot(8863bfc66ade4ce6966d26e758a216a9) on rbd image(062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:52:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct  2 04:52:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct  2 04:52:07 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct  2 04:52:07 np0005465604 nova_compute[260603]: 2025-10-02 08:52:07.188 2 DEBUG nova.storage.rbd_utils [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] cloning vms/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk@8863bfc66ade4ce6966d26e758a216a9 to images/f103e592-76b3-4821-8bf4-1ab4b6adc845 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:52:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:07Z|01272|binding|INFO|Releasing lport 6972524a-7a8a-4c19-a841-08e51ccd9aaa from this chassis (sb_readonly=0)
Oct  2 04:52:07 np0005465604 nova_compute[260603]: 2025-10-02 08:52:07.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:07 np0005465604 nova_compute[260603]: 2025-10-02 08:52:07.374 2 DEBUG nova.storage.rbd_utils [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] flattening images/f103e592-76b3-4821-8bf4-1ab4b6adc845 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:52:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 217 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 21 KiB/s wr, 65 op/s
Oct  2 04:52:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:08 np0005465604 nova_compute[260603]: 2025-10-02 08:52:08.415 2 DEBUG nova.storage.rbd_utils [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] removing snapshot(8863bfc66ade4ce6966d26e758a216a9) on rbd image(062b2a3e-b612-42bf-b96c-fb9bdd9008ee_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:52:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct  2 04:52:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct  2 04:52:09 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct  2 04:52:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 237 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct  2 04:52:09 np0005465604 nova_compute[260603]: 2025-10-02 08:52:09.637 2 DEBUG nova.storage.rbd_utils [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] creating snapshot(snap) on rbd image(f103e592-76b3-4821-8bf4-1ab4b6adc845) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:52:09 np0005465604 nova_compute[260603]: 2025-10-02 08:52:09.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct  2 04:52:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct  2 04:52:10 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct  2 04:52:11 np0005465604 nova_compute[260603]: 2025-10-02 08:52:11.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 237 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 55 op/s
Oct  2 04:52:12 np0005465604 nova_compute[260603]: 2025-10-02 08:52:12.222 2 INFO nova.virt.libvirt.driver [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Snapshot image upload complete#033[00m
Oct  2 04:52:12 np0005465604 nova_compute[260603]: 2025-10-02 08:52:12.223 2 INFO nova.compute.manager [None req-f078661b-29b3-45a5-94dd-25860b6990dc aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Took 5.74 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 04:52:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct  2 04:52:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct  2 04:52:12 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct  2 04:52:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 295 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 15 MiB/s wr, 191 op/s
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.857 2 DEBUG nova.compute.manager [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.858 2 DEBUG nova.compute.manager [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing instance network info cache due to event network-changed-06e58664-5a1a-4f08-9eb3-6a275e07a062. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.859 2 DEBUG oslo_concurrency.lockutils [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.860 2 DEBUG oslo_concurrency.lockutils [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.860 2 DEBUG nova.network.neutron [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Refreshing network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.866 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.867 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.868 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.869 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.869 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.871 2 INFO nova.compute.manager [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Terminating instance#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.876 2 DEBUG nova.compute.manager [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:52:14 np0005465604 nova_compute[260603]: 2025-10-02 08:52:14.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:14 np0005465604 kernel: tap06e58664-5a (unregistering): left promiscuous mode
Oct  2 04:52:14 np0005465604 NetworkManager[45129]: <info>  [1759395134.9800] device (tap06e58664-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:52:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:15Z|01273|binding|INFO|Releasing lport 06e58664-5a1a-4f08-9eb3-6a275e07a062 from this chassis (sb_readonly=0)
Oct  2 04:52:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:15Z|01274|binding|INFO|Setting lport 06e58664-5a1a-4f08-9eb3-6a275e07a062 down in Southbound
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:15Z|01275|binding|INFO|Removing iface tap06e58664-5a ovn-installed in OVS
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.065 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:f8:b9 10.100.0.8'], port_security=['fa:16:3e:e1:f8:b9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '062b2a3e-b612-42bf-b96c-fb9bdd9008ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=06e58664-5a1a-4f08-9eb3-6a275e07a062) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.066 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 06e58664-5a1a-4f08-9eb3-6a275e07a062 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 unbound from our chassis#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.067 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e8372a1-ae46-455d-aa74-54645f729e73#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.094 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[84d11dac-1cc4-4bd8-9916-aac416a70822]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:15 np0005465604 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct  2 04:52:15 np0005465604 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Consumed 16.524s CPU time.
Oct  2 04:52:15 np0005465604 systemd-machined[214636]: Machine qemu-152-instance-00000079 terminated.
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.137 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[112ccdca-a765-495d-b887-dde30612c583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.140 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8d53b5ff-77f1-48b4-9d30-a87aad699436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.178 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca5a9e5-9d7a-4c9c-943f-0aa1b3dcdfcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.207 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d827ce56-232f-4ab6-8421-03f517b1485a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e8372a1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:60:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592471, 'reachable_time': 17032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387038, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.226 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f7297d-9c58-4062-93fd-823e66dbecdf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5e8372a1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592482, 'tstamp': 592482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387039, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5e8372a1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592486, 'tstamp': 592486}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387039, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.228 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e8372a1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.239 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e8372a1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.239 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.241 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e8372a1-a0, col_values=(('external_ids', {'iface-id': '6972524a-7a8a-4c19-a841-08e51ccd9aaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:15.242 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.329 2 INFO nova.virt.libvirt.driver [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Instance destroyed successfully.#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.329 2 DEBUG nova.objects.instance [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'resources' on Instance uuid 062b2a3e-b612-42bf-b96c-fb9bdd9008ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.343 2 DEBUG nova.virt.libvirt.vif [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1642693838',display_name='tempest-TestSnapshotPattern-server-1642693838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1642693838',id=121,image_ref='bfda7a79-c502-4c19-8d8e-da8dbbb22d04',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:51:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-uis0qfoh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='d5e3e825-fcee-4f1b-8c05-5a9ee07013d7',image_min_disk='1',image_min_ram='0',image_owner_id='bfcc44155f2d45ff9f66fe254a7b21c7',image_owner_project_name='tempest-TestSnapshotPattern-495107275',image_owner_user_name='tempest-TestSnapshotPattern-495107275-project-member',image_user_id='aceb8b0273154f1abe964d78a6261936',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:52:12Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=062b2a3e-b612-42bf-b96c-fb9bdd9008ee,vcpu_model=<?>
,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.344 2 DEBUG nova.network.os_vif_util [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.345 2 DEBUG nova.network.os_vif_util [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.345 2 DEBUG os_vif [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06e58664-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.354 2 INFO os_vif [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:f8:b9,bridge_name='br-int',has_traffic_filtering=True,id=06e58664-5a1a-4f08-9eb3-6a275e07a062,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e58664-5a')#033[00m
Oct  2 04:52:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 295 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 152 op/s
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.740 2 INFO nova.virt.libvirt.driver [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Deleting instance files /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_del#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.741 2 INFO nova.virt.libvirt.driver [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Deletion of /var/lib/nova/instances/062b2a3e-b612-42bf-b96c-fb9bdd9008ee_del complete#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.816 2 INFO nova.compute.manager [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.817 2 DEBUG oslo.service.loopingcall [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.817 2 DEBUG nova.compute.manager [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:52:15 np0005465604 nova_compute[260603]: 2025-10-02 08:52:15.818 2 DEBUG nova.network.neutron [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:52:16 np0005465604 nova_compute[260603]: 2025-10-02 08:52:16.260 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395121.2589183, 0239090c-f1eb-4b8b-8f45-94efee345fa5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:52:16 np0005465604 nova_compute[260603]: 2025-10-02 08:52:16.261 2 INFO nova.compute.manager [-] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:52:16 np0005465604 nova_compute[260603]: 2025-10-02 08:52:16.282 2 DEBUG nova.compute.manager [None req-d1312103-c3b5-4c25-9a20-185c8b39333f - - - - - -] [instance: 0239090c-f1eb-4b8b-8f45-94efee345fa5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:52:16 np0005465604 nova_compute[260603]: 2025-10-02 08:52:16.833 2 DEBUG nova.network.neutron [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:52:16 np0005465604 nova_compute[260603]: 2025-10-02 08:52:16.849 2 INFO nova.compute.manager [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Took 1.03 seconds to deallocate network for instance.#033[00m
Oct  2 04:52:16 np0005465604 nova_compute[260603]: 2025-10-02 08:52:16.905 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:16 np0005465604 nova_compute[260603]: 2025-10-02 08:52:16.906 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:16 np0005465604 nova_compute[260603]: 2025-10-02 08:52:16.933 2 DEBUG nova.compute.manager [req-cfc1e01c-d23b-43ea-8e6f-c7077be969fd req-a327c7b6-6bfd-4bc7-b439-05432cfb2013 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Received event network-vif-deleted-06e58664-5a1a-4f08-9eb3-6a275e07a062 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:17 np0005465604 nova_compute[260603]: 2025-10-02 08:52:16.989 2 DEBUG oslo_concurrency.processutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:17 np0005465604 podman[387072]: 2025-10-02 08:52:17.014795437 +0000 UTC m=+0.073825451 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct  2 04:52:17 np0005465604 podman[387071]: 2025-10-02 08:52:17.038709865 +0000 UTC m=+0.099456263 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct  2 04:52:17 np0005465604 nova_compute[260603]: 2025-10-02 08:52:17.424 2 DEBUG nova.network.neutron [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updated VIF entry in instance network info cache for port 06e58664-5a1a-4f08-9eb3-6a275e07a062. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:52:17 np0005465604 nova_compute[260603]: 2025-10-02 08:52:17.425 2 DEBUG nova.network.neutron [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Updating instance_info_cache with network_info: [{"id": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "address": "fa:16:3e:e1:f8:b9", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e58664-5a", "ovs_interfaceid": "06e58664-5a1a-4f08-9eb3-6a275e07a062", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:52:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:52:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1397368750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:52:17 np0005465604 nova_compute[260603]: 2025-10-02 08:52:17.445 2 DEBUG oslo_concurrency.lockutils [req-8b7dc983-449b-4210-b56b-ab37d5e5f7a9 req-a22127ba-5332-40a9-bf25-8405e882ebfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-062b2a3e-b612-42bf-b96c-fb9bdd9008ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:52:17 np0005465604 nova_compute[260603]: 2025-10-02 08:52:17.466 2 DEBUG oslo_concurrency.processutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:17 np0005465604 nova_compute[260603]: 2025-10-02 08:52:17.475 2 DEBUG nova.compute.provider_tree [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:52:17 np0005465604 nova_compute[260603]: 2025-10-02 08:52:17.491 2 DEBUG nova.scheduler.client.report [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:52:17 np0005465604 nova_compute[260603]: 2025-10-02 08:52:17.515 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:17 np0005465604 nova_compute[260603]: 2025-10-02 08:52:17.541 2 INFO nova.scheduler.client.report [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Deleted allocations for instance 062b2a3e-b612-42bf-b96c-fb9bdd9008ee#033[00m
Oct  2 04:52:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 207 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 9.5 MiB/s wr, 169 op/s
Oct  2 04:52:17 np0005465604 nova_compute[260603]: 2025-10-02 08:52:17.598 2 DEBUG oslo_concurrency.lockutils [None req-ccf34d19-a354-47d4-97ee-bae483bbc86e aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "062b2a3e-b612-42bf-b96c-fb9bdd9008ee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct  2 04:52:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct  2 04:52:18 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct  2 04:52:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct  2 04:52:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct  2 04:52:19 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct  2 04:52:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 1.4 MiB/s wr, 85 op/s
Oct  2 04:52:19 np0005465604 nova_compute[260603]: 2025-10-02 08:52:19.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:20 np0005465604 nova_compute[260603]: 2025-10-02 08:52:20.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 3.2 KiB/s wr, 67 op/s
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:52:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3660420233' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:52:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:52:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3660420233' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.163 2 DEBUG nova.compute.manager [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.163 2 DEBUG nova.compute.manager [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing instance network info cache due to event network-changed-236df88e-d54e-410f-9a5c-cf5fa95debd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.164 2 DEBUG oslo_concurrency.lockutils [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.164 2 DEBUG oslo_concurrency.lockutils [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.164 2 DEBUG nova.network.neutron [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Refreshing network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.230 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.231 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.231 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.231 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.231 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.233 2 INFO nova.compute.manager [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Terminating instance#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.234 2 DEBUG nova.compute.manager [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:52:22 np0005465604 kernel: tap236df88e-d5 (unregistering): left promiscuous mode
Oct  2 04:52:22 np0005465604 NetworkManager[45129]: <info>  [1759395142.2979] device (tap236df88e-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01276|binding|INFO|Releasing lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 from this chassis (sb_readonly=0)
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01277|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 down in Southbound
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01278|binding|INFO|Removing iface tap236df88e-d5 ovn-installed in OVS
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.310 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:8a:1b 10.100.0.3'], port_security=['fa:16:3e:de:8a:1b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5e3e825-fcee-4f1b-8c05-5a9ee07013d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=236df88e-d54e-410f-9a5c-cf5fa95debd1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.311 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 236df88e-d54e-410f-9a5c-cf5fa95debd1 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 unbound from our chassis#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.312 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e8372a1-ae46-455d-aa74-54645f729e73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.313 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[94ddc038-1455-4404-a4e5-14d45f83d208]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.314 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73 namespace which is not needed anymore#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct  2 04:52:22 np0005465604 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000077.scope: Consumed 16.041s CPU time.
Oct  2 04:52:22 np0005465604 systemd-machined[214636]: Machine qemu-150-instance-00000077 terminated.
Oct  2 04:52:22 np0005465604 kernel: tap236df88e-d5: entered promiscuous mode
Oct  2 04:52:22 np0005465604 NetworkManager[45129]: <info>  [1759395142.4589] manager: (tap236df88e-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/504)
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 kernel: tap236df88e-d5 (unregistering): left promiscuous mode
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01279|binding|INFO|Claiming lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 for this chassis.
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01280|binding|INFO|236df88e-d54e-410f-9a5c-cf5fa95debd1: Claiming fa:16:3e:de:8a:1b 10.100.0.3
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.520 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:8a:1b 10.100.0.3'], port_security=['fa:16:3e:de:8a:1b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5e3e825-fcee-4f1b-8c05-5a9ee07013d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=236df88e-d54e-410f-9a5c-cf5fa95debd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.532 2 INFO nova.virt.libvirt.driver [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Instance destroyed successfully.#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.532 2 DEBUG nova.objects.instance [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lazy-loading 'resources' on Instance uuid d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01281|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 ovn-installed in OVS
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01282|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 up in Southbound
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01283|binding|INFO|Releasing lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 from this chassis (sb_readonly=1)
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01284|if_status|INFO|Dropped 1 log messages in last 128 seconds (most recently, 128 seconds ago) due to excessive rate
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01285|if_status|INFO|Not setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 down as sb is readonly
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01286|binding|INFO|Removing iface tap236df88e-d5 ovn-installed in OVS
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.551 2 DEBUG nova.virt.libvirt.vif [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:50:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1240552507',display_name='tempest-TestSnapshotPattern-server-1240552507',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1240552507',id=119,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCpLz1iGKUc/R0jTBlojNlaCcKVn52HrOCGXK3cRl7ZwI3LdmmDfGB817F44mQzj+4scFHSvX8lk2zoI/a0vux5P2hegs1HkAovNnUXiH3pFHVXQWoAuzDtOhefGzzHniQ==',key_name='tempest-TestSnapshotPattern-624580712',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:50:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfcc44155f2d45ff9f66fe254a7b21c7',ramdisk_id='',reservation_id='r-s694dn34',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-495107275',owner_user_name='tempest-TestSnapshotPattern-495107275-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:51:13Z,user_data=None,user_id='aceb8b0273154f1abe964d78a6261936',uuid=d5e3e825-fcee-4f1b-8c05-5a9ee07013d7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.552 2 DEBUG nova.network.os_vif_util [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converting VIF {"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:52:22 np0005465604 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [NOTICE]   (383989) : haproxy version is 2.8.14-c23fe91
Oct  2 04:52:22 np0005465604 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [NOTICE]   (383989) : path to executable is /usr/sbin/haproxy
Oct  2 04:52:22 np0005465604 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [WARNING]  (383989) : Exiting Master process...
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.552 2 DEBUG nova.network.os_vif_util [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.554 2 DEBUG os_vif [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01287|binding|INFO|Releasing lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 from this chassis (sb_readonly=0)
Oct  2 04:52:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:22Z|01288|binding|INFO|Setting lport 236df88e-d54e-410f-9a5c-cf5fa95debd1 down in Southbound
Oct  2 04:52:22 np0005465604 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [WARNING]  (383989) : Exiting Master process...
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.556 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap236df88e-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [ALERT]    (383989) : Current worker (383991) exited with code 143 (Terminated)
Oct  2 04:52:22 np0005465604 neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73[383985]: [WARNING]  (383989) : All workers exited. Exiting... (0)
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 systemd[1]: libpod-36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a.scope: Deactivated successfully.
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.564 2 INFO os_vif [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=236df88e-d54e-410f-9a5c-cf5fa95debd1,network=Network(5e8372a1-ae46-455d-aa74-54645f729e73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap236df88e-d5')#033[00m
Oct  2 04:52:22 np0005465604 podman[387261]: 2025-10-02 08:52:22.567120528 +0000 UTC m=+0.113237521 container died 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.571 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:8a:1b 10.100.0.3'], port_security=['fa:16:3e:de:8a:1b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd5e3e825-fcee-4f1b-8c05-5a9ee07013d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8372a1-ae46-455d-aa74-54645f729e73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfcc44155f2d45ff9f66fe254a7b21c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b23ca944-49a2-456f-a94b-c77445a13bdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10547b4a-e45d-47e8-b4d5-6979908e5ff3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=236df88e-d54e-410f-9a5c-cf5fa95debd1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:52:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a-userdata-shm.mount: Deactivated successfully.
Oct  2 04:52:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-55629b6f151fe9b01df698b94aebf7c6e06eef4cd3a7619588ea9563a64ff3ac-merged.mount: Deactivated successfully.
Oct  2 04:52:22 np0005465604 podman[387261]: 2025-10-02 08:52:22.609970226 +0000 UTC m=+0.156087189 container cleanup 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:52:22 np0005465604 systemd[1]: libpod-conmon-36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a.scope: Deactivated successfully.
Oct  2 04:52:22 np0005465604 podman[387323]: 2025-10-02 08:52:22.687952138 +0000 UTC m=+0.055549522 container remove 36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.695 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9700df3-476b-497e-b6c0-9965fd30ac39]: (4, ('Thu Oct  2 08:52:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73 (36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a)\n36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a\nThu Oct  2 08:52:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73 (36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a)\n36438d6494b515338315b85df0ceaee92aa48d4193f75ea7f49c283a044c323a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.698 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ce58611b-b355-4360-8c9b-cbbc77baa6db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.699 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e8372a1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 kernel: tap5e8372a1-a0: left promiscuous mode
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.717 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4a9372-37b8-424e-9fca-c6dd1d2a638f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7d66db18-490e-4668-bdcf-5bd76e721395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.745 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09964408-5520-4604-8f61-09a1ee4ae89c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.760 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fb20268a-b66e-45f7-a3fd-6cf7be8c702b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592452, 'reachable_time': 36995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387345, 'error': None, 'target': 'ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.762 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5e8372a1-ae46-455d-aa74-54645f729e73 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.763 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9485a7e9-39bc-448e-89a2-edeb5f746e30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:22 np0005465604 systemd[1]: run-netns-ovnmeta\x2d5e8372a1\x2dae46\x2d455d\x2daa74\x2d54645f729e73.mount: Deactivated successfully.
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.764 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 236df88e-d54e-410f-9a5c-cf5fa95debd1 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 unbound from our chassis#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.766 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e8372a1-ae46-455d-aa74-54645f729e73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.767 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[845d22f3-17e8-4082-b1de-fee6603daeef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.768 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 236df88e-d54e-410f-9a5c-cf5fa95debd1 in datapath 5e8372a1-ae46-455d-aa74-54645f729e73 unbound from our chassis#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.770 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e8372a1-ae46-455d-aa74-54645f729e73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:52:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:22.770 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[67582e2b-2ee6-467d-8b8f-cc7b2842c32b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.924 2 INFO nova.virt.libvirt.driver [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Deleting instance files /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_del#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.925 2 INFO nova.virt.libvirt.driver [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Deletion of /var/lib/nova/instances/d5e3e825-fcee-4f1b-8c05-5a9ee07013d7_del complete#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.971 2 INFO nova.compute.manager [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.973 2 DEBUG oslo.service.loopingcall [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.973 2 DEBUG nova.compute.manager [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:52:22 np0005465604 nova_compute[260603]: 2025-10-02 08:52:22.974 2 DEBUG nova.network.neutron [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:52:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:52:23 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 64bfcaf4-9d8d-4204-bfcf-5746f5c73a13 does not exist
Oct  2 04:52:23 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 30de6dfa-4662-4299-bac8-5f9699482a6f does not exist
Oct  2 04:52:23 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5c5563d6-7e23-4baa-b9ce-e107d41d93e2 does not exist
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:52:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:52:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 86 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 5.5 KiB/s wr, 121 op/s
Oct  2 04:52:23 np0005465604 podman[387502]: 2025-10-02 08:52:23.707595214 +0000 UTC m=+0.064993761 container create 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:52:23 np0005465604 systemd[1]: Started libpod-conmon-3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40.scope.
Oct  2 04:52:23 np0005465604 podman[387502]: 2025-10-02 08:52:23.674357161 +0000 UTC m=+0.031755768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:52:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:52:23 np0005465604 podman[387502]: 2025-10-02 08:52:23.813270355 +0000 UTC m=+0.170668942 container init 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:52:23 np0005465604 podman[387502]: 2025-10-02 08:52:23.823739766 +0000 UTC m=+0.181138313 container start 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 04:52:23 np0005465604 podman[387502]: 2025-10-02 08:52:23.827990051 +0000 UTC m=+0.185388658 container attach 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:52:23 np0005465604 sad_mendeleev[387518]: 167 167
Oct  2 04:52:23 np0005465604 systemd[1]: libpod-3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40.scope: Deactivated successfully.
Oct  2 04:52:23 np0005465604 conmon[387518]: conmon 3a80ef3f06d5a61122cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40.scope/container/memory.events
Oct  2 04:52:23 np0005465604 podman[387502]: 2025-10-02 08:52:23.831807412 +0000 UTC m=+0.189205959 container died 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:52:23 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c247211939cfa4ff808d8aed7ad2a623e5fbdd17d1e7dad2b410887a50b8ec1e-merged.mount: Deactivated successfully.
Oct  2 04:52:23 np0005465604 podman[387502]: 2025-10-02 08:52:23.887782667 +0000 UTC m=+0.245181184 container remove 3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mendeleev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 04:52:23 np0005465604 systemd[1]: libpod-conmon-3a80ef3f06d5a61122cc38ecdcca81c97e85f26b346ec384b27718e82d8a4f40.scope: Deactivated successfully.
Oct  2 04:52:24 np0005465604 podman[387544]: 2025-10-02 08:52:24.074134445 +0000 UTC m=+0.046418753 container create 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 04:52:24 np0005465604 systemd[1]: Started libpod-conmon-8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d.scope.
Oct  2 04:52:24 np0005465604 podman[387544]: 2025-10-02 08:52:24.055241666 +0000 UTC m=+0.027526024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:52:24 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:52:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:24 np0005465604 podman[387560]: 2025-10-02 08:52:24.189854084 +0000 UTC m=+0.069187065 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 04:52:24 np0005465604 podman[387544]: 2025-10-02 08:52:24.197898088 +0000 UTC m=+0.170182376 container init 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 04:52:24 np0005465604 podman[387544]: 2025-10-02 08:52:24.208121053 +0000 UTC m=+0.180405321 container start 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 04:52:24 np0005465604 podman[387544]: 2025-10-02 08:52:24.211174019 +0000 UTC m=+0.183458307 container attach 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 04:52:24 np0005465604 podman[387558]: 2025-10-02 08:52:24.247177011 +0000 UTC m=+0.123487457 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct  2 04:52:24 np0005465604 nova_compute[260603]: 2025-10-02 08:52:24.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:25 np0005465604 magical_ellis[387577]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:52:25 np0005465604 magical_ellis[387577]: --> relative data size: 1.0
Oct  2 04:52:25 np0005465604 magical_ellis[387577]: --> All data devices are unavailable
Oct  2 04:52:25 np0005465604 systemd[1]: libpod-8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d.scope: Deactivated successfully.
Oct  2 04:52:25 np0005465604 podman[387544]: 2025-10-02 08:52:25.395683653 +0000 UTC m=+1.367967941 container died 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:52:25 np0005465604 systemd[1]: libpod-8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d.scope: Consumed 1.116s CPU time.
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.413 2 DEBUG nova.network.neutron [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:52:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8f051d6d8559a35a225d32f817c527185c3e88c1070d0465174cd8e31328b62b-merged.mount: Deactivated successfully.
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.436 2 INFO nova.compute.manager [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Took 2.46 seconds to deallocate network for instance.#033[00m
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.460 2 DEBUG nova.network.neutron [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updated VIF entry in instance network info cache for port 236df88e-d54e-410f-9a5c-cf5fa95debd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.460 2 DEBUG nova.network.neutron [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [{"id": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "address": "fa:16:3e:de:8a:1b", "network": {"id": "5e8372a1-ae46-455d-aa74-54645f729e73", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-737831824-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfcc44155f2d45ff9f66fe254a7b21c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap236df88e-d5", "ovs_interfaceid": "236df88e-d54e-410f-9a5c-cf5fa95debd1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:52:25 np0005465604 podman[387544]: 2025-10-02 08:52:25.466161167 +0000 UTC m=+1.438445445 container remove 8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ellis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 04:52:25 np0005465604 systemd[1]: libpod-conmon-8792e214360f09b0b6b40787489c581423a064c0910c5eac10c4ca5739892d8d.scope: Deactivated successfully.
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.498 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.499 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.501 2 DEBUG oslo_concurrency.lockutils [req-ee5c83fd-6d19-4cc3-8390-a103255e6024 req-df72dd47-14da-4263-aa1c-6b71224556eb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.520 2 DEBUG nova.compute.manager [req-5ad345f9-7856-422b-aa53-ea21d3a0d624 req-957e2f9f-f599-49f9-94af-f7b9fd2b30fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Received event network-vif-deleted-236df88e-d54e-410f-9a5c-cf5fa95debd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.521 2 INFO nova.compute.manager [req-5ad345f9-7856-422b-aa53-ea21d3a0d624 req-957e2f9f-f599-49f9-94af-f7b9fd2b30fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Neutron deleted interface 236df88e-d54e-410f-9a5c-cf5fa95debd1; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.521 2 DEBUG nova.network.neutron [req-5ad345f9-7856-422b-aa53-ea21d3a0d624 req-957e2f9f-f599-49f9-94af-f7b9fd2b30fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.561 2 DEBUG nova.compute.manager [req-5ad345f9-7856-422b-aa53-ea21d3a0d624 req-957e2f9f-f599-49f9-94af-f7b9fd2b30fd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Detach interface failed, port_id=236df88e-d54e-410f-9a5c-cf5fa95debd1, reason: Instance d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 04:52:25 np0005465604 nova_compute[260603]: 2025-10-02 08:52:25.581 2 DEBUG oslo_concurrency.processutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 86 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.0 KiB/s wr, 73 op/s
Oct  2 04:52:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:52:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3142570958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:52:26 np0005465604 nova_compute[260603]: 2025-10-02 08:52:26.043 2 DEBUG oslo_concurrency.processutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:26 np0005465604 nova_compute[260603]: 2025-10-02 08:52:26.052 2 DEBUG nova.compute.provider_tree [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:52:26 np0005465604 nova_compute[260603]: 2025-10-02 08:52:26.073 2 DEBUG nova.scheduler.client.report [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:52:26 np0005465604 nova_compute[260603]: 2025-10-02 08:52:26.096 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:26 np0005465604 nova_compute[260603]: 2025-10-02 08:52:26.125 2 INFO nova.scheduler.client.report [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Deleted allocations for instance d5e3e825-fcee-4f1b-8c05-5a9ee07013d7#033[00m
Oct  2 04:52:26 np0005465604 nova_compute[260603]: 2025-10-02 08:52:26.204 2 DEBUG oslo_concurrency.lockutils [None req-81abbc77-028f-4a82-8767-ae4e0d0f19a1 aceb8b0273154f1abe964d78a6261936 bfcc44155f2d45ff9f66fe254a7b21c7 - - default default] Lock "d5e3e825-fcee-4f1b-8c05-5a9ee07013d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:26 np0005465604 podman[387809]: 2025-10-02 08:52:26.256938197 +0000 UTC m=+0.046386682 container create 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 04:52:26 np0005465604 systemd[1]: Started libpod-conmon-6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd.scope.
Oct  2 04:52:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:52:26 np0005465604 podman[387809]: 2025-10-02 08:52:26.331186851 +0000 UTC m=+0.120635336 container init 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:52:26 np0005465604 podman[387809]: 2025-10-02 08:52:26.338028538 +0000 UTC m=+0.127477023 container start 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:52:26 np0005465604 podman[387809]: 2025-10-02 08:52:26.243251883 +0000 UTC m=+0.032700388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:52:26 np0005465604 podman[387809]: 2025-10-02 08:52:26.341917411 +0000 UTC m=+0.131365926 container attach 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 04:52:26 np0005465604 stupefied_aryabhata[387825]: 167 167
Oct  2 04:52:26 np0005465604 systemd[1]: libpod-6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd.scope: Deactivated successfully.
Oct  2 04:52:26 np0005465604 conmon[387825]: conmon 6b8fc817dbccb640ca66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd.scope/container/memory.events
Oct  2 04:52:26 np0005465604 podman[387809]: 2025-10-02 08:52:26.344531774 +0000 UTC m=+0.133980259 container died 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:52:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3473bd7bd6cc6ab123d10ada981f33d9377ed9eb389aa8d310801e3b6d7c2513-merged.mount: Deactivated successfully.
Oct  2 04:52:26 np0005465604 podman[387809]: 2025-10-02 08:52:26.378123609 +0000 UTC m=+0.167572094 container remove 6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:52:26 np0005465604 systemd[1]: libpod-conmon-6b8fc817dbccb640ca66a148c0fcc1aed7610dd52fc5c9b0a11049de12e89ccd.scope: Deactivated successfully.
Oct  2 04:52:26 np0005465604 podman[387848]: 2025-10-02 08:52:26.534124975 +0000 UTC m=+0.048196759 container create 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 04:52:26 np0005465604 systemd[1]: Started libpod-conmon-1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802.scope.
Oct  2 04:52:26 np0005465604 podman[387848]: 2025-10-02 08:52:26.509398781 +0000 UTC m=+0.023470625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:52:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:52:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad398c56f2ada02be8d5c14ae4534ab8635290c6aebdf935a35368fc013c1a69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad398c56f2ada02be8d5c14ae4534ab8635290c6aebdf935a35368fc013c1a69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad398c56f2ada02be8d5c14ae4534ab8635290c6aebdf935a35368fc013c1a69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad398c56f2ada02be8d5c14ae4534ab8635290c6aebdf935a35368fc013c1a69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:26 np0005465604 podman[387848]: 2025-10-02 08:52:26.639596908 +0000 UTC m=+0.153668662 container init 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:52:26 np0005465604 podman[387848]: 2025-10-02 08:52:26.651396643 +0000 UTC m=+0.165468397 container start 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:52:26 np0005465604 podman[387848]: 2025-10-02 08:52:26.654614515 +0000 UTC m=+0.168686269 container attach 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:52:26 np0005465604 nova_compute[260603]: 2025-10-02 08:52:26.940 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:26 np0005465604 nova_compute[260603]: 2025-10-02 08:52:26.942 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:26 np0005465604 nova_compute[260603]: 2025-10-02 08:52:26.965 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.032 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.032 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.039 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.040 2 INFO nova.compute.claims [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.124 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:27 np0005465604 great_chatelet[387864]: {
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:    "0": [
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:        {
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "devices": [
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "/dev/loop3"
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            ],
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_name": "ceph_lv0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_size": "21470642176",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "name": "ceph_lv0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "tags": {
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.cluster_name": "ceph",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.crush_device_class": "",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.encrypted": "0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.osd_id": "0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.type": "block",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.vdo": "0"
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            },
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "type": "block",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "vg_name": "ceph_vg0"
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:        }
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:    ],
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:    "1": [
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:        {
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "devices": [
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "/dev/loop4"
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            ],
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_name": "ceph_lv1",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_size": "21470642176",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "name": "ceph_lv1",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "tags": {
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.cluster_name": "ceph",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.crush_device_class": "",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.encrypted": "0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.osd_id": "1",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.type": "block",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.vdo": "0"
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            },
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "type": "block",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "vg_name": "ceph_vg1"
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:        }
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:    ],
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:    "2": [
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:        {
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "devices": [
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "/dev/loop5"
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            ],
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_name": "ceph_lv2",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_size": "21470642176",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "name": "ceph_lv2",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "tags": {
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.cluster_name": "ceph",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.crush_device_class": "",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.encrypted": "0",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.osd_id": "2",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.type": "block",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:                "ceph.vdo": "0"
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            },
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "type": "block",
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:            "vg_name": "ceph_vg2"
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:        }
Oct  2 04:52:27 np0005465604 great_chatelet[387864]:    ]
Oct  2 04:52:27 np0005465604 great_chatelet[387864]: }
Oct  2 04:52:27 np0005465604 systemd[1]: libpod-1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802.scope: Deactivated successfully.
Oct  2 04:52:27 np0005465604 podman[387848]: 2025-10-02 08:52:27.437623489 +0000 UTC m=+0.951695283 container died 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 04:52:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ad398c56f2ada02be8d5c14ae4534ab8635290c6aebdf935a35368fc013c1a69-merged.mount: Deactivated successfully.
Oct  2 04:52:27 np0005465604 podman[387848]: 2025-10-02 08:52:27.508406563 +0000 UTC m=+1.022478317 container remove 1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:52:27 np0005465604 systemd[1]: libpod-conmon-1ef7d83a2d3883455ea786a58800045b45445744f37f02f23e2e524293940802.scope: Deactivated successfully.
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.5 KiB/s wr, 79 op/s
Oct  2 04:52:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:52:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589371837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.645 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.656 2 DEBUG nova.compute.provider_tree [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.675 2 DEBUG nova.scheduler.client.report [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.696 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.697 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.746 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.747 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.766 2 INFO nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.783 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.881 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.882 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.883 2 INFO nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Creating image(s)#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.915 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:52:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.945 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.971 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:52:27 np0005465604 nova_compute[260603]: 2025-10-02 08:52:27.975 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct  2 04:52:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct  2 04:52:28 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:52:28
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.080 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.081 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.081 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.081 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.106 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.109 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:52:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:52:28 np0005465604 podman[388138]: 2025-10-02 08:52:28.403783989 +0000 UTC m=+0.054117796 container create 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.413 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:28 np0005465604 systemd[1]: Started libpod-conmon-589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa.scope.
Oct  2 04:52:28 np0005465604 podman[388138]: 2025-10-02 08:52:28.380116839 +0000 UTC m=+0.030450696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:52:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.484 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] resizing rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:52:28 np0005465604 podman[388138]: 2025-10-02 08:52:28.500778355 +0000 UTC m=+0.151112192 container init 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 04:52:28 np0005465604 podman[388138]: 2025-10-02 08:52:28.514207621 +0000 UTC m=+0.164541448 container start 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:52:28 np0005465604 podman[388138]: 2025-10-02 08:52:28.519431776 +0000 UTC m=+0.169765633 container attach 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:52:28 np0005465604 distracted_maxwell[388173]: 167 167
Oct  2 04:52:28 np0005465604 systemd[1]: libpod-589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa.scope: Deactivated successfully.
Oct  2 04:52:28 np0005465604 podman[388138]: 2025-10-02 08:52:28.524097844 +0000 UTC m=+0.174431651 container died 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:52:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f2f5d90b66cdd8ffd3a66d722219e0ab2e616680456d4e85fb0ec9916fcc05e7-merged.mount: Deactivated successfully.
Oct  2 04:52:28 np0005465604 podman[388138]: 2025-10-02 08:52:28.565710993 +0000 UTC m=+0.216044800 container remove 589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_maxwell, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:52:28 np0005465604 systemd[1]: libpod-conmon-589a03c17a1b86cc960d5bc5841a51c9769f2761db902a0b9062cd4692ad0dfa.scope: Deactivated successfully.
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.586 2 DEBUG nova.objects.instance [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'migration_context' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.610 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.610 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Ensure instance console log exists: /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.611 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.612 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.612 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:28 np0005465604 nova_compute[260603]: 2025-10-02 08:52:28.656 2 DEBUG nova.policy [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7767630a5b1049f48d7e0fed29e221ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c86b416fdb524f21b0228639a3a14116', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:52:28 np0005465604 podman[388251]: 2025-10-02 08:52:28.78673848 +0000 UTC m=+0.057399780 container create afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 04:52:28 np0005465604 systemd[1]: Started libpod-conmon-afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de.scope.
Oct  2 04:52:28 np0005465604 podman[388251]: 2025-10-02 08:52:28.759208188 +0000 UTC m=+0.029869548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:52:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:52:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdecddb408a271405c3c1fe962f1c0ffe436fb60e58d6b4acb7495edaf8e5a22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdecddb408a271405c3c1fe962f1c0ffe436fb60e58d6b4acb7495edaf8e5a22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdecddb408a271405c3c1fe962f1c0ffe436fb60e58d6b4acb7495edaf8e5a22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdecddb408a271405c3c1fe962f1c0ffe436fb60e58d6b4acb7495edaf8e5a22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:28 np0005465604 podman[388251]: 2025-10-02 08:52:28.903256404 +0000 UTC m=+0.173917744 container init afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 04:52:28 np0005465604 podman[388251]: 2025-10-02 08:52:28.913837909 +0000 UTC m=+0.184499179 container start afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:52:28 np0005465604 podman[388251]: 2025-10-02 08:52:28.91858865 +0000 UTC m=+0.189249910 container attach afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:52:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Oct  2 04:52:29 np0005465604 nova_compute[260603]: 2025-10-02 08:52:29.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]: {
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "osd_id": 2,
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "type": "bluestore"
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:    },
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "osd_id": 1,
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "type": "bluestore"
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:    },
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "osd_id": 0,
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:        "type": "bluestore"
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]:    }
Oct  2 04:52:29 np0005465604 wonderful_rubin[388268]: }
Oct  2 04:52:30 np0005465604 systemd[1]: libpod-afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de.scope: Deactivated successfully.
Oct  2 04:52:30 np0005465604 podman[388251]: 2025-10-02 08:52:30.013584005 +0000 UTC m=+1.284245305 container died afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:52:30 np0005465604 systemd[1]: libpod-afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de.scope: Consumed 1.105s CPU time.
Oct  2 04:52:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fdecddb408a271405c3c1fe962f1c0ffe436fb60e58d6b4acb7495edaf8e5a22-merged.mount: Deactivated successfully.
Oct  2 04:52:30 np0005465604 podman[388251]: 2025-10-02 08:52:30.086554309 +0000 UTC m=+1.357215599 container remove afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rubin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 04:52:30 np0005465604 systemd[1]: libpod-conmon-afcf64be3c39627fcdb6cc0dc4fefda7c278c14ed3de46f132fe8b200482b6de.scope: Deactivated successfully.
Oct  2 04:52:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:52:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:52:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:52:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:52:30 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b404ed55-95a0-433c-95d5-79d2cb4e5c3f does not exist
Oct  2 04:52:30 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 64904732-6785-4aac-bc9a-fa768ff05f5d does not exist
Oct  2 04:52:30 np0005465604 nova_compute[260603]: 2025-10-02 08:52:30.328 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395135.326101, 062b2a3e-b612-42bf-b96c-fb9bdd9008ee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:52:30 np0005465604 nova_compute[260603]: 2025-10-02 08:52:30.329 2 INFO nova.compute.manager [-] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:52:30 np0005465604 nova_compute[260603]: 2025-10-02 08:52:30.361 2 DEBUG nova.compute.manager [None req-71e12c87-40bf-4889-9653-311681661fb8 - - - - - -] [instance: 062b2a3e-b612-42bf-b96c-fb9bdd9008ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:52:30 np0005465604 nova_compute[260603]: 2025-10-02 08:52:30.961 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Successfully created port: 4092976f-1133-4b84-91bd-87043169cb4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:52:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:52:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:52:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.1 KiB/s wr, 61 op/s
Oct  2 04:52:32 np0005465604 nova_compute[260603]: 2025-10-02 08:52:32.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:32 np0005465604 nova_compute[260603]: 2025-10-02 08:52:32.607 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Successfully updated port: 4092976f-1133-4b84-91bd-87043169cb4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:52:32 np0005465604 nova_compute[260603]: 2025-10-02 08:52:32.628 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:52:32 np0005465604 nova_compute[260603]: 2025-10-02 08:52:32.628 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:52:32 np0005465604 nova_compute[260603]: 2025-10-02 08:52:32.628 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:52:32 np0005465604 nova_compute[260603]: 2025-10-02 08:52:32.743 2 DEBUG nova.compute.manager [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-changed-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:32 np0005465604 nova_compute[260603]: 2025-10-02 08:52:32.743 2 DEBUG nova.compute.manager [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing instance network info cache due to event network-changed-4092976f-1133-4b84-91bd-87043169cb4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:52:32 np0005465604 nova_compute[260603]: 2025-10-02 08:52:32.744 2 DEBUG oslo_concurrency.lockutils [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:52:32 np0005465604 nova_compute[260603]: 2025-10-02 08:52:32.817 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:52:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.752 2 DEBUG nova.network.neutron [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.779 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.780 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance network_info: |[{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.781 2 DEBUG oslo_concurrency.lockutils [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.782 2 DEBUG nova.network.neutron [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.786 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Start _get_guest_xml network_info=[{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.793 2 WARNING nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.800 2 DEBUG nova.virt.libvirt.host [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.800 2 DEBUG nova.virt.libvirt.host [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.806 2 DEBUG nova.virt.libvirt.host [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.806 2 DEBUG nova.virt.libvirt.host [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.807 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.807 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.808 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.808 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.808 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.808 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.809 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.809 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.809 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.809 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.810 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.810 2 DEBUG nova.virt.hardware [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.812 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:34.833 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:34.834 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:34.834 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:34 np0005465604 nova_compute[260603]: 2025-10-02 08:52:34.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:52:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3594699734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.317 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.351 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.355 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct  2 04:52:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:52:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1176357024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.824 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.825 2 DEBUG nova.virt.libvirt.vif [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1775312205',display_name='tempest-TestNetworkAdvancedServerOps-server-1775312205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1775312205',id=122,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjAoXXSuQzVTrLfupmZ2yj1eQO64gMLoVS/w1fXN3YlLWUVSU8Ny9eVy5tfjHnq8vP2d+YlXvRC/+xl37WQ+3lkHjliAoJJEZ9n209ktTnU2lK2CJZClltvCqZ21bE87w==',key_name='tempest-TestNetworkAdvancedServerOps-1329763099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-gkp0e6eh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:52:27Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=094b6b24-7323-4ad5-bd0d-e449c0c96f6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.826 2 DEBUG nova.network.os_vif_util [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.827 2 DEBUG nova.network.os_vif_util [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.828 2 DEBUG nova.objects.instance [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.860 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  <uuid>094b6b24-7323-4ad5-bd0d-e449c0c96f6f</uuid>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  <name>instance-0000007a</name>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1775312205</nova:name>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:52:34</nova:creationTime>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <nova:user uuid="7767630a5b1049f48d7e0fed29e221ba">tempest-TestNetworkAdvancedServerOps-19684921-project-member</nova:user>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <nova:project uuid="c86b416fdb524f21b0228639a3a14116">tempest-TestNetworkAdvancedServerOps-19684921</nova:project>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <nova:port uuid="4092976f-1133-4b84-91bd-87043169cb4f">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <entry name="serial">094b6b24-7323-4ad5-bd0d-e449c0c96f6f</entry>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <entry name="uuid">094b6b24-7323-4ad5-bd0d-e449c0c96f6f</entry>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:9e:3a:a7"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <target dev="tap4092976f-11"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/console.log" append="off"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:52:35 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:52:35 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:52:35 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:52:35 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.861 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Preparing to wait for external event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.862 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.862 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.862 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.863 2 DEBUG nova.virt.libvirt.vif [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1775312205',display_name='tempest-TestNetworkAdvancedServerOps-server-1775312205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1775312205',id=122,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjAoXXSuQzVTrLfupmZ2yj1eQO64gMLoVS/w1fXN3YlLWUVSU8Ny9eVy5tfjHnq8vP2d+YlXvRC/+xl37WQ+3lkHjliAoJJEZ9n209ktTnU2lK2CJZClltvCqZ21bE87w==',key_name='tempest-TestNetworkAdvancedServerOps-1329763099',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-gkp0e6eh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:52:27Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=094b6b24-7323-4ad5-bd0d-e449c0c96f6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.863 2 DEBUG nova.network.os_vif_util [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.864 2 DEBUG nova.network.os_vif_util [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.865 2 DEBUG os_vif [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.866 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.866 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4092976f-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4092976f-11, col_values=(('external_ids', {'iface-id': '4092976f-1133-4b84-91bd-87043169cb4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:3a:a7', 'vm-uuid': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:35 np0005465604 NetworkManager[45129]: <info>  [1759395155.9016] manager: (tap4092976f-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.910 2 INFO os_vif [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11')#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.973 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.976 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.976 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] No VIF found with MAC fa:16:3e:9e:3a:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:52:35 np0005465604 nova_compute[260603]: 2025-10-02 08:52:35.977 2 INFO nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Using config drive#033[00m
Oct  2 04:52:36 np0005465604 nova_compute[260603]: 2025-10-02 08:52:36.007 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:52:36 np0005465604 nova_compute[260603]: 2025-10-02 08:52:36.629 2 INFO nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Creating config drive at /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config#033[00m
Oct  2 04:52:36 np0005465604 nova_compute[260603]: 2025-10-02 08:52:36.640 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz3jxxvc9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:36 np0005465604 nova_compute[260603]: 2025-10-02 08:52:36.817 2 DEBUG nova.network.neutron [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updated VIF entry in instance network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:52:36 np0005465604 nova_compute[260603]: 2025-10-02 08:52:36.819 2 DEBUG nova.network.neutron [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:52:36 np0005465604 nova_compute[260603]: 2025-10-02 08:52:36.823 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz3jxxvc9" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:36 np0005465604 nova_compute[260603]: 2025-10-02 08:52:36.863 2 DEBUG nova.storage.rbd_utils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] rbd image 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:52:36 np0005465604 nova_compute[260603]: 2025-10-02 08:52:36.868 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:36 np0005465604 nova_compute[260603]: 2025-10-02 08:52:36.927 2 DEBUG oslo_concurrency.lockutils [req-696f60a6-69c1-4bf9-aec2-3087766f5aac req-f4fd0131-ac1b-4ed6-b4eb-d80aa5a2dcb9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.060 2 DEBUG oslo_concurrency.processutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config 094b6b24-7323-4ad5-bd0d-e449c0c96f6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.062 2 INFO nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Deleting local config drive /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f/disk.config because it was imported into RBD.#033[00m
Oct  2 04:52:37 np0005465604 kernel: tap4092976f-11: entered promiscuous mode
Oct  2 04:52:37 np0005465604 NetworkManager[45129]: <info>  [1759395157.1397] manager: (tap4092976f-11): new Tun device (/org/freedesktop/NetworkManager/Devices/506)
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:37Z|01289|binding|INFO|Claiming lport 4092976f-1133-4b84-91bd-87043169cb4f for this chassis.
Oct  2 04:52:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:37Z|01290|binding|INFO|4092976f-1133-4b84-91bd-87043169cb4f: Claiming fa:16:3e:9e:3a:a7 10.100.0.3
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.199 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:3a:a7 10.100.0.3'], port_security=['fa:16:3e:9e:3a:a7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca65236d-124f-4d37-afbf-f114cc90e015', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c5f2f44-40d9-415b-8ae8-affca47a93a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a762f3-67f7-4fbc-8632-5b2810d06f8f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4092976f-1133-4b84-91bd-87043169cb4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.200 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4092976f-1133-4b84-91bd-87043169cb4f in datapath ca65236d-124f-4d37-afbf-f114cc90e015 bound to our chassis#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.203 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca65236d-124f-4d37-afbf-f114cc90e015#033[00m
Oct  2 04:52:37 np0005465604 systemd-udevd[388503]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:52:37 np0005465604 systemd-machined[214636]: New machine qemu-154-instance-0000007a.
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.226 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[71f3c494-4566-4324-b22f-ab1235a76011]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.227 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca65236d-11 in ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.230 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca65236d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.230 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b4889272-2dcc-4a58-afff-24fd2ca269ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.232 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[56da630e-7980-4f9f-93be-e23ad7b24d17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.246 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1d6cbd-2a3e-451e-8a4e-29e8fa6612af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 NetworkManager[45129]: <info>  [1759395157.2551] device (tap4092976f-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:52:37 np0005465604 NetworkManager[45129]: <info>  [1759395157.2558] device (tap4092976f-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:52:37 np0005465604 systemd[1]: Started Virtual Machine qemu-154-instance-0000007a.
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.276 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[abd50e35-1219-46ec-b0b9-dd32ff708a71]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:37Z|01291|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f ovn-installed in OVS
Oct  2 04:52:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:37Z|01292|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f up in Southbound
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.313 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d6957a81-e3c7-4c05-8950-1fd091cf8fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.325 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2fd6f6-4c5f-4af3-8e71-da43813f04b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 NetworkManager[45129]: <info>  [1759395157.3267] manager: (tapca65236d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/507)
Oct  2 04:52:37 np0005465604 systemd-udevd[388507]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.372 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[706fbf47-29d5-459e-a031-0ccc11dc3d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.377 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9a6eee3d-d89d-4a0a-92f4-a5707d57ba18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 NetworkManager[45129]: <info>  [1759395157.4063] device (tapca65236d-10): carrier: link connected
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.412 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c39960ed-2e5f-4f2f-817d-29b800570524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b642de-dde7-4382-b507-5c4a593cd61e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca65236d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:d7:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603600, 'reachable_time': 44887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388535, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.464 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ddd31c-c875-488f-9ab3-53ff22eb4b71]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:d753'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603600, 'tstamp': 603600}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388536, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.489 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[884e8e58-f9cf-4856-8adc-82df75ecab3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca65236d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:d7:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603600, 'reachable_time': 44887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388537, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.508 2 DEBUG nova.compute.manager [req-3ede35a0-4ea8-489b-95a7-22854d4fdcfc req-168580ad-b9af-48a4-abaf-a54c51c2586e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.508 2 DEBUG oslo_concurrency.lockutils [req-3ede35a0-4ea8-489b-95a7-22854d4fdcfc req-168580ad-b9af-48a4-abaf-a54c51c2586e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.508 2 DEBUG oslo_concurrency.lockutils [req-3ede35a0-4ea8-489b-95a7-22854d4fdcfc req-168580ad-b9af-48a4-abaf-a54c51c2586e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.508 2 DEBUG oslo_concurrency.lockutils [req-3ede35a0-4ea8-489b-95a7-22854d4fdcfc req-168580ad-b9af-48a4-abaf-a54c51c2586e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.509 2 DEBUG nova.compute.manager [req-3ede35a0-4ea8-489b-95a7-22854d4fdcfc req-168580ad-b9af-48a4-abaf-a54c51c2586e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Processing event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.527 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48fadc4d-7f4e-4833-9f66-7691973a64e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.529 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395142.528547, d5e3e825-fcee-4f1b-8c05-5a9ee07013d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.530 2 INFO nova.compute.manager [-] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.553 2 DEBUG nova.compute.manager [None req-1ce90d65-4a7a-4aaa-8ff0-32558516d18e - - - - - -] [instance: d5e3e825-fcee-4f1b-8c05-5a9ee07013d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:52:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 38 op/s
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.601 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[88f87964-2e66-4d6e-8b62-5fb025790ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca65236d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.603 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.603 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca65236d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:37 np0005465604 NetworkManager[45129]: <info>  [1759395157.6060] manager: (tapca65236d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:37 np0005465604 kernel: tapca65236d-10: entered promiscuous mode
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.611 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca65236d-10, col_values=(('external_ids', {'iface-id': '11510ab0-6469-4123-a903-78f5dba4ba14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:37Z|01293|binding|INFO|Releasing lport 11510ab0-6469-4123-a903-78f5dba4ba14 from this chassis (sb_readonly=0)
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:37 np0005465604 nova_compute[260603]: 2025-10-02 08:52:37.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.633 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.634 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6a908d15-e81c-4e6a-901a-3786ba2fcacf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.635 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-ca65236d-124f-4d37-afbf-f114cc90e015
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID ca65236d-124f-4d37-afbf-f114cc90e015
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:52:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:37.636 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'env', 'PROCESS_TAG=haproxy-ca65236d-124f-4d37-afbf-f114cc90e015', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca65236d-124f-4d37-afbf-f114cc90e015.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:52:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:38 np0005465604 podman[388609]: 2025-10-02 08:52:38.038921965 +0000 UTC m=+0.076919720 container create 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:52:38 np0005465604 systemd[1]: Started libpod-conmon-795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66.scope.
Oct  2 04:52:38 np0005465604 podman[388609]: 2025-10-02 08:52:37.989687504 +0000 UTC m=+0.027685289 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:52:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:52:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bb29d2f3499731a67cce06b19a7eb0e193f73fce6d50e29c10b23f7511d4a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:52:38 np0005465604 podman[388609]: 2025-10-02 08:52:38.141610011 +0000 UTC m=+0.179607836 container init 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:52:38 np0005465604 podman[388609]: 2025-10-02 08:52:38.152044852 +0000 UTC m=+0.190042617 container start 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.166 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395158.1655385, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.166 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Started (Lifecycle Event)#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.168 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:52:38 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [NOTICE]   (388629) : New worker (388631) forked
Oct  2 04:52:38 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [NOTICE]   (388629) : Loading success.
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.189 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.198 2 INFO nova.virt.libvirt.driver [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance spawned successfully.#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.198 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.203 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.209 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.226 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.227 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.227 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.227 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.228 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.228 2 DEBUG nova.virt.libvirt.driver [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.242 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.242 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395158.1680305, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.242 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.272 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.274 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395158.1709986, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.274 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.302 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.304 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.314 2 INFO nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Took 10.43 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.315 2 DEBUG nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.325 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.392 2 INFO nova.compute.manager [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Took 11.38 seconds to build instance.#033[00m
Oct  2 04:52:38 np0005465604 nova_compute[260603]: 2025-10-02 08:52:38.410 2 DEBUG oslo_concurrency.lockutils [None req-b994678f-4775-4c7e-b4c9-38867e35344a 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:52:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:52:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct  2 04:52:39 np0005465604 nova_compute[260603]: 2025-10-02 08:52:39.601 2 DEBUG nova.compute.manager [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:39 np0005465604 nova_compute[260603]: 2025-10-02 08:52:39.602 2 DEBUG oslo_concurrency.lockutils [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:39 np0005465604 nova_compute[260603]: 2025-10-02 08:52:39.602 2 DEBUG oslo_concurrency.lockutils [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:39 np0005465604 nova_compute[260603]: 2025-10-02 08:52:39.603 2 DEBUG oslo_concurrency.lockutils [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:39 np0005465604 nova_compute[260603]: 2025-10-02 08:52:39.603 2 DEBUG nova.compute.manager [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:52:39 np0005465604 nova_compute[260603]: 2025-10-02 08:52:39.603 2 WARNING nova.compute.manager [req-9a211d68-f44f-49e9-a735-f8a57c5529ce req-c82fafa9-6d73-485e-854f-99794de29ef5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state active and task_state None.#033[00m
Oct  2 04:52:39 np0005465604 nova_compute[260603]: 2025-10-02 08:52:39.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:40 np0005465604 nova_compute[260603]: 2025-10-02 08:52:40.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:52:40 np0005465604 nova_compute[260603]: 2025-10-02 08:52:40.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct  2 04:52:42 np0005465604 nova_compute[260603]: 2025-10-02 08:52:42.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:52:42 np0005465604 nova_compute[260603]: 2025-10-02 08:52:42.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:52:42 np0005465604 nova_compute[260603]: 2025-10-02 08:52:42.557 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:52:42 np0005465604 nova_compute[260603]: 2025-10-02 08:52:42.557 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:52:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:43Z|01294|binding|INFO|Releasing lport 11510ab0-6469-4123-a903-78f5dba4ba14 from this chassis (sb_readonly=0)
Oct  2 04:52:43 np0005465604 NetworkManager[45129]: <info>  [1759395163.1269] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Oct  2 04:52:43 np0005465604 NetworkManager[45129]: <info>  [1759395163.1289] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:43 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:43Z|01295|binding|INFO|Releasing lport 11510ab0-6469-4123-a903-78f5dba4ba14 from this chassis (sb_readonly=0)
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.544 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.545 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.732 2 DEBUG nova.compute.manager [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-changed-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.732 2 DEBUG nova.compute.manager [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing instance network info cache due to event network-changed-4092976f-1133-4b84-91bd-87043169cb4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.733 2 DEBUG oslo_concurrency.lockutils [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.733 2 DEBUG oslo_concurrency.lockutils [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:52:43 np0005465604 nova_compute[260603]: 2025-10-02 08:52:43.733 2 DEBUG nova.network.neutron [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:52:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:52:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445214993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.020 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.109 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.110 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.328 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.329 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3546MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.330 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.330 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.403 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 094b6b24-7323-4ad5-bd0d-e449c0c96f6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.404 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.404 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.442 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:52:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:52:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2147402747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.879 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.888 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.916 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.977 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:52:44 np0005465604 nova_compute[260603]: 2025-10-02 08:52:44.978 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 04:52:45 np0005465604 nova_compute[260603]: 2025-10-02 08:52:45.641 2 DEBUG nova.network.neutron [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updated VIF entry in instance network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:52:45 np0005465604 nova_compute[260603]: 2025-10-02 08:52:45.642 2 DEBUG nova.network.neutron [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:52:45 np0005465604 nova_compute[260603]: 2025-10-02 08:52:45.664 2 DEBUG oslo_concurrency.lockutils [req-5d229d7d-e50a-4781-8a85-f8f387763411 req-4be7a83c-db43-437e-8fe3-117383da7469 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:52:45 np0005465604 nova_compute[260603]: 2025-10-02 08:52:45.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 04:52:47 np0005465604 nova_compute[260603]: 2025-10-02 08:52:47.974 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:52:47 np0005465604 nova_compute[260603]: 2025-10-02 08:52:47.974 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:52:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:48 np0005465604 podman[388687]: 2025-10-02 08:52:48.038510446 +0000 UTC m=+0.088415214 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:52:48 np0005465604 podman[388686]: 2025-10-02 08:52:48.068020082 +0000 UTC m=+0.121304368 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:52:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Oct  2 04:52:49 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:49Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:3a:a7 10.100.0.3
Oct  2 04:52:49 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:49Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:3a:a7 10.100.0.3
Oct  2 04:52:49 np0005465604 nova_compute[260603]: 2025-10-02 08:52:49.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:50 np0005465604 nova_compute[260603]: 2025-10-02 08:52:50.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:51 np0005465604 nova_compute[260603]: 2025-10-02 08:52:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:52:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct  2 04:52:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct  2 04:52:54 np0005465604 nova_compute[260603]: 2025-10-02 08:52:54.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:55 np0005465604 podman[388731]: 2025-10-02 08:52:55.011423741 +0000 UTC m=+0.070107273 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 04:52:55 np0005465604 podman[388730]: 2025-10-02 08:52:55.017613918 +0000 UTC m=+0.078905022 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 04:52:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 04:52:55 np0005465604 nova_compute[260603]: 2025-10-02 08:52:55.621 2 INFO nova.compute.manager [None req-30ea783e-e4b5-4579-9b92-151299b25866 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Get console output#033[00m
Oct  2 04:52:55 np0005465604 nova_compute[260603]: 2025-10-02 08:52:55.629 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:52:55 np0005465604 nova_compute[260603]: 2025-10-02 08:52:55.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:56 np0005465604 nova_compute[260603]: 2025-10-02 08:52:56.394 2 DEBUG nova.objects.instance [None req-876a88ad-a9f1-4b9c-ab37-67d5d6b67d8d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:52:56 np0005465604 nova_compute[260603]: 2025-10-02 08:52:56.424 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395176.4235923, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:52:56 np0005465604 nova_compute[260603]: 2025-10-02 08:52:56.424 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:52:56 np0005465604 nova_compute[260603]: 2025-10-02 08:52:56.445 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:52:56 np0005465604 nova_compute[260603]: 2025-10-02 08:52:56.451 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:52:56 np0005465604 nova_compute[260603]: 2025-10-02 08:52:56.473 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  2 04:52:56 np0005465604 nova_compute[260603]: 2025-10-02 08:52:56.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.073895) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177073951, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2127, "num_deletes": 255, "total_data_size": 3394690, "memory_usage": 3448808, "flush_reason": "Manual Compaction"}
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177101031, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3303781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45063, "largest_seqno": 47189, "table_properties": {"data_size": 3294116, "index_size": 6095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19987, "raw_average_key_size": 20, "raw_value_size": 3274734, "raw_average_value_size": 3351, "num_data_blocks": 269, "num_entries": 977, "num_filter_entries": 977, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759394972, "oldest_key_time": 1759394972, "file_creation_time": 1759395177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 27181 microseconds, and 9229 cpu microseconds.
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.101081) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3303781 bytes OK
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.101103) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.102453) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.102805) EVENT_LOG_v1 {"time_micros": 1759395177102799, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.102826) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3385702, prev total WAL file size 3385702, number of live WAL files 2.
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.104380) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3226KB)], [104(8495KB)]
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177104449, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12003531, "oldest_snapshot_seqno": -1}
Oct  2 04:52:57 np0005465604 kernel: tap4092976f-11 (unregistering): left promiscuous mode
Oct  2 04:52:57 np0005465604 NetworkManager[45129]: <info>  [1759395177.1506] device (tap4092976f-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:52:57 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:57Z|01296|binding|INFO|Releasing lport 4092976f-1133-4b84-91bd-87043169cb4f from this chassis (sb_readonly=0)
Oct  2 04:52:57 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:57Z|01297|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f down in Southbound
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:57 np0005465604 ovn_controller[152344]: 2025-10-02T08:52:57Z|01298|binding|INFO|Removing iface tap4092976f-11 ovn-installed in OVS
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.171 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:3a:a7 10.100.0.3'], port_security=['fa:16:3e:9e:3a:a7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca65236d-124f-4d37-afbf-f114cc90e015', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c5f2f44-40d9-415b-8ae8-affca47a93a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a762f3-67f7-4fbc-8632-5b2810d06f8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4092976f-1133-4b84-91bd-87043169cb4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.175 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4092976f-1133-4b84-91bd-87043169cb4f in datapath ca65236d-124f-4d37-afbf-f114cc90e015 unbound from our chassis#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.177 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca65236d-124f-4d37-afbf-f114cc90e015, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.178 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5458460e-45fe-45e2-a829-de5fb9fcc205]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.179 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 namespace which is not needed anymore#033[00m
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7021 keys, 10324194 bytes, temperature: kUnknown
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177183075, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10324194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10275896, "index_size": 29626, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 180990, "raw_average_key_size": 25, "raw_value_size": 10149031, "raw_average_value_size": 1445, "num_data_blocks": 1167, "num_entries": 7021, "num_filter_entries": 7021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.183293) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10324194 bytes
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.184294) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.5 rd, 131.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.3 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7545, records dropped: 524 output_compression: NoCompression
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.184310) EVENT_LOG_v1 {"time_micros": 1759395177184303, "job": 62, "event": "compaction_finished", "compaction_time_micros": 78705, "compaction_time_cpu_micros": 42361, "output_level": 6, "num_output_files": 1, "total_output_size": 10324194, "num_input_records": 7545, "num_output_records": 7021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177185024, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395177186800, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.104249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.187817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.187827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.187831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.187835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:52:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:52:57.187839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:57 np0005465604 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct  2 04:52:57 np0005465604 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Consumed 13.146s CPU time.
Oct  2 04:52:57 np0005465604 systemd-machined[214636]: Machine qemu-154-instance-0000007a terminated.
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.296 2 DEBUG nova.compute.manager [None req-876a88ad-a9f1-4b9c-ab37-67d5d6b67d8d 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:52:57 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [NOTICE]   (388629) : haproxy version is 2.8.14-c23fe91
Oct  2 04:52:57 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [NOTICE]   (388629) : path to executable is /usr/sbin/haproxy
Oct  2 04:52:57 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [WARNING]  (388629) : Exiting Master process...
Oct  2 04:52:57 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [WARNING]  (388629) : Exiting Master process...
Oct  2 04:52:57 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [ALERT]    (388629) : Current worker (388631) exited with code 143 (Terminated)
Oct  2 04:52:57 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388625]: [WARNING]  (388629) : All workers exited. Exiting... (0)
Oct  2 04:52:57 np0005465604 systemd[1]: libpod-795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66.scope: Deactivated successfully.
Oct  2 04:52:57 np0005465604 podman[388800]: 2025-10-02 08:52:57.350035304 +0000 UTC m=+0.058755254 container died 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:52:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-88bb29d2f3499731a67cce06b19a7eb0e193f73fce6d50e29c10b23f7511d4a2-merged.mount: Deactivated successfully.
Oct  2 04:52:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66-userdata-shm.mount: Deactivated successfully.
Oct  2 04:52:57 np0005465604 podman[388800]: 2025-10-02 08:52:57.389191135 +0000 UTC m=+0.097911085 container cleanup 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:52:57 np0005465604 systemd[1]: libpod-conmon-795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66.scope: Deactivated successfully.
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.446 2 DEBUG nova.compute.manager [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.446 2 DEBUG oslo_concurrency.lockutils [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.447 2 DEBUG oslo_concurrency.lockutils [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.447 2 DEBUG oslo_concurrency.lockutils [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.447 2 DEBUG nova.compute.manager [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.447 2 WARNING nova.compute.manager [req-9095d607-99dc-42c7-beef-8162cc557eaf req-3611fa71-6056-4d11-841a-9dd2ee435c78 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state suspended and task_state None.#033[00m
Oct  2 04:52:57 np0005465604 podman[388832]: 2025-10-02 08:52:57.457636686 +0000 UTC m=+0.048238701 container remove 795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.467 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0443b151-90bc-4fff-bd8b-96240b919648]: (4, ('Thu Oct  2 08:52:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 (795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66)\n795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66\nThu Oct  2 08:52:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 (795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66)\n795c2d86c52d57f2590180ac0a306637e1557b9418bb6d7118a66a8ed1225b66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.468 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[80e71a20-ec3a-43fc-9ae0-14fd4e8d8622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.469 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca65236d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:57 np0005465604 kernel: tapca65236d-10: left promiscuous mode
Oct  2 04:52:57 np0005465604 nova_compute[260603]: 2025-10-02 08:52:57.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.525 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8420fbae-3a1a-47dd-b8ea-aa62b7b4cd43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.566 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ad2838-3e47-4e46-b090-39a9b688aa4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.567 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6ec4ec-07d0-43aa-ad89-4bff334649ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.589 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1c1fafd-6b1b-4c09-8908-5b48791daf2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603591, 'reachable_time': 32463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388851, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.593 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:52:57 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:52:57.594 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6cd7b5-8d49-46f6-9f07-ef4691cbc976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:52:57 np0005465604 systemd[1]: run-netns-ovnmeta\x2dca65236d\x2d124f\x2d4d37\x2dafbf\x2df114cc90e015.mount: Deactivated successfully.
Oct  2 04:52:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 04:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:52:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:52:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:52:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct  2 04:52:59 np0005465604 nova_compute[260603]: 2025-10-02 08:52:59.619 2 DEBUG nova.compute.manager [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:52:59 np0005465604 nova_compute[260603]: 2025-10-02 08:52:59.620 2 DEBUG oslo_concurrency.lockutils [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:52:59 np0005465604 nova_compute[260603]: 2025-10-02 08:52:59.621 2 DEBUG oslo_concurrency.lockutils [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:52:59 np0005465604 nova_compute[260603]: 2025-10-02 08:52:59.621 2 DEBUG oslo_concurrency.lockutils [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:52:59 np0005465604 nova_compute[260603]: 2025-10-02 08:52:59.622 2 DEBUG nova.compute.manager [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:52:59 np0005465604 nova_compute[260603]: 2025-10-02 08:52:59.622 2 WARNING nova.compute.manager [req-d6f8798c-d059-4bbe-b737-5ff60a86e402 req-2e3666a6-5459-477d-afcd-14ae7fb51a13 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state suspended and task_state None.#033[00m
Oct  2 04:52:59 np0005465604 nova_compute[260603]: 2025-10-02 08:52:59.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:00 np0005465604 nova_compute[260603]: 2025-10-02 08:53:00.182 2 INFO nova.compute.manager [None req-8174d3f2-90e6-4535-b808-136a879bd745 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Get console output#033[00m
Oct  2 04:53:00 np0005465604 nova_compute[260603]: 2025-10-02 08:53:00.769 2 INFO nova.compute.manager [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Resuming#033[00m
Oct  2 04:53:00 np0005465604 nova_compute[260603]: 2025-10-02 08:53:00.770 2 DEBUG nova.objects.instance [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'flavor' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:53:00 np0005465604 nova_compute[260603]: 2025-10-02 08:53:00.807 2 DEBUG oslo_concurrency.lockutils [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:53:00 np0005465604 nova_compute[260603]: 2025-10-02 08:53:00.807 2 DEBUG oslo_concurrency.lockutils [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquired lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:53:00 np0005465604 nova_compute[260603]: 2025-10-02 08:53:00.808 2 DEBUG nova.network.neutron [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:53:00 np0005465604 nova_compute[260603]: 2025-10-02 08:53:00.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 04:53:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.353 2 DEBUG nova.network.neutron [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.381 2 DEBUG oslo_concurrency.lockutils [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Releasing lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.389 2 DEBUG nova.virt.libvirt.vif [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1775312205',display_name='tempest-TestNetworkAdvancedServerOps-server-1775312205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1775312205',id=122,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjAoXXSuQzVTrLfupmZ2yj1eQO64gMLoVS/w1fXN3YlLWUVSU8Ny9eVy5tfjHnq8vP2d+YlXvRC/+xl37WQ+3lkHjliAoJJEZ9n209ktTnU2lK2CJZClltvCqZ21bE87w==',key_name='tempest-TestNetworkAdvancedServerOps-1329763099',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:52:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-gkp0e6eh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:52:57Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=094b6b24-7323-4ad5-bd0d-e449c0c96f6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.389 2 DEBUG nova.network.os_vif_util [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.391 2 DEBUG nova.network.os_vif_util [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.391 2 DEBUG os_vif [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4092976f-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4092976f-11, col_values=(('external_ids', {'iface-id': '4092976f-1133-4b84-91bd-87043169cb4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:3a:a7', 'vm-uuid': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.398 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.398 2 INFO os_vif [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11')#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.433 2 DEBUG nova.objects.instance [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'numa_topology' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:53:03 np0005465604 NetworkManager[45129]: <info>  [1759395183.5265] manager: (tap4092976f-11): new Tun device (/org/freedesktop/NetworkManager/Devices/511)
Oct  2 04:53:03 np0005465604 kernel: tap4092976f-11: entered promiscuous mode
Oct  2 04:53:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:03Z|01299|binding|INFO|Claiming lport 4092976f-1133-4b84-91bd-87043169cb4f for this chassis.
Oct  2 04:53:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:03Z|01300|binding|INFO|4092976f-1133-4b84-91bd-87043169cb4f: Claiming fa:16:3e:9e:3a:a7 10.100.0.3
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.589 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:3a:a7 10.100.0.3'], port_security=['fa:16:3e:9e:3a:a7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca65236d-124f-4d37-afbf-f114cc90e015', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1c5f2f44-40d9-415b-8ae8-affca47a93a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a762f3-67f7-4fbc-8632-5b2810d06f8f, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4092976f-1133-4b84-91bd-87043169cb4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.591 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4092976f-1133-4b84-91bd-87043169cb4f in datapath ca65236d-124f-4d37-afbf-f114cc90e015 bound to our chassis#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.594 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca65236d-124f-4d37-afbf-f114cc90e015#033[00m
Oct  2 04:53:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 04:53:03 np0005465604 systemd-udevd[388866]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:53:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:03Z|01301|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f ovn-installed in OVS
Oct  2 04:53:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:03Z|01302|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f up in Southbound
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.608 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[178e4c72-ab00-4063-ae07-2364d7f59030]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.609 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca65236d-11 in ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.612 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca65236d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.612 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3caa3884-d460-4805-8bc3-6446e72b84d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fe698746-bd45-4b4c-889b-1e386621c106]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 NetworkManager[45129]: <info>  [1759395183.6263] device (tap4092976f-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:53:03 np0005465604 NetworkManager[45129]: <info>  [1759395183.6270] device (tap4092976f-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.625 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e548fdcf-b162-4f39-a09c-4fc4b22da48c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 systemd-machined[214636]: New machine qemu-155-instance-0000007a.
Oct  2 04:53:03 np0005465604 systemd[1]: Started Virtual Machine qemu-155-instance-0000007a.
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.656 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f703766a-68bf-4627-8d5d-f90e038a0435]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.688 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f192fe1a-8257-4f77-88d7-4e6869bea14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.693 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f0fe75-0dc5-4619-98eb-3d7b5229a870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 NetworkManager[45129]: <info>  [1759395183.6968] manager: (tapca65236d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/512)
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.748 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d80a886-459b-4c1d-aa89-646a131fd7ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.750 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2eb8a4-2d7f-4560-ade9-ff70e5661d01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 NetworkManager[45129]: <info>  [1759395183.7883] device (tapca65236d-10): carrier: link connected
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.798 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[768426df-fa41-4916-8e03-42eea1fb4547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.824 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb5e013-531d-4fbe-b0e3-c0aa4a227772]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca65236d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:d7:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606239, 'reachable_time': 32654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388899, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.829 2 DEBUG nova.compute.manager [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.829 2 DEBUG oslo_concurrency.lockutils [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.829 2 DEBUG oslo_concurrency.lockutils [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.830 2 DEBUG oslo_concurrency.lockutils [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.830 2 DEBUG nova.compute.manager [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.830 2 WARNING nova.compute.manager [req-c0b14a11-8348-42bb-8dca-0ce0ebc810b4 req-56b2bd6c-49ec-4e8a-aa7c-37f523840b51 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state suspended and task_state resuming.#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.848 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec84c17-142f-4a4b-becc-012747714903]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:d753'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606239, 'tstamp': 606239}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388900, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.878 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0d9f01-c4f5-42ca-978c-58fed357e588]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca65236d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:d7:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606239, 'reachable_time': 32654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388901, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.922 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[08288e86-8698-4448-b146-625472b2bbf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:03 np0005465604 nova_compute[260603]: 2025-10-02 08:53:03.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:03.960 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.019 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d79401-bb7a-431b-adad-83bb4400935d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.020 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca65236d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.021 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.021 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca65236d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:04 np0005465604 nova_compute[260603]: 2025-10-02 08:53:04.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:04 np0005465604 kernel: tapca65236d-10: entered promiscuous mode
Oct  2 04:53:04 np0005465604 NetworkManager[45129]: <info>  [1759395184.0259] manager: (tapca65236d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/513)
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.030 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca65236d-10, col_values=(('external_ids', {'iface-id': '11510ab0-6469-4123-a903-78f5dba4ba14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:04Z|01303|binding|INFO|Releasing lport 11510ab0-6469-4123-a903-78f5dba4ba14 from this chassis (sb_readonly=0)
Oct  2 04:53:04 np0005465604 nova_compute[260603]: 2025-10-02 08:53:04.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:04 np0005465604 nova_compute[260603]: 2025-10-02 08:53:04.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:04 np0005465604 nova_compute[260603]: 2025-10-02 08:53:04.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.062 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.063 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ea6073-b09e-41fa-97d9-082b1d4362b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.063 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-ca65236d-124f-4d37-afbf-f114cc90e015
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/ca65236d-124f-4d37-afbf-f114cc90e015.pid.haproxy
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID ca65236d-124f-4d37-afbf-f114cc90e015
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.064 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'env', 'PROCESS_TAG=haproxy-ca65236d-124f-4d37-afbf-f114cc90e015', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca65236d-124f-4d37-afbf-f114cc90e015.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:53:04 np0005465604 podman[388970]: 2025-10-02 08:53:04.475010891 +0000 UTC m=+0.066940253 container create 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 04:53:04 np0005465604 podman[388970]: 2025-10-02 08:53:04.437724488 +0000 UTC m=+0.029653880 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:53:04 np0005465604 systemd[1]: Started libpod-conmon-33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425.scope.
Oct  2 04:53:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:53:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df3af0a887f56125e18a5dff2994a58aeee1c818086f552ccc3946b82615b0ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:04 np0005465604 podman[388970]: 2025-10-02 08:53:04.598934979 +0000 UTC m=+0.190864361 container init 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:53:04 np0005465604 podman[388970]: 2025-10-02 08:53:04.611963413 +0000 UTC m=+0.203892795 container start 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:53:04 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [NOTICE]   (388994) : New worker (388996) forked
Oct  2 04:53:04 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [NOTICE]   (388994) : Loading success.
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.667 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:53:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:04.669 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:04 np0005465604 nova_compute[260603]: 2025-10-02 08:53:04.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.118 2 DEBUG nova.virt.libvirt.host [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Removed pending event for 094b6b24-7323-4ad5-bd0d-e449c0c96f6f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.118 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395185.1176782, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.119 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Started (Lifecycle Event)#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.138 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.148 2 DEBUG nova.compute.manager [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.149 2 DEBUG nova.objects.instance [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'pci_devices' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.153 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.169 2 INFO nova.virt.libvirt.driver [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance running successfully.#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.171 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.171 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395185.1230593, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:53:05 np0005465604 virtqemud[260328]: argument unsupported: QEMU guest agent is not configured
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.172 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.175 2 DEBUG nova.virt.libvirt.guest [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.176 2 DEBUG nova.compute.manager [None req-d19f6831-7fc7-4d2b-b618-970293386a49 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.444 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.449 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:53:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 12 KiB/s wr, 0 op/s
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.937 2 DEBUG nova.compute.manager [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.937 2 DEBUG oslo_concurrency.lockutils [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.938 2 DEBUG oslo_concurrency.lockutils [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.938 2 DEBUG oslo_concurrency.lockutils [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.938 2 DEBUG nova.compute.manager [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:53:05 np0005465604 nova_compute[260603]: 2025-10-02 08:53:05.939 2 WARNING nova.compute.manager [req-2be64af2-a569-45b1-92f1-5961499b591b req-f6fcc5ce-e761-4bdc-bc18-0b3dcc1c3058 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state active and task_state None.#033[00m
Oct  2 04:53:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct  2 04:53:07 np0005465604 nova_compute[260603]: 2025-10-02 08:53:07.929 2 INFO nova.compute.manager [None req-265eb2d1-7b16-48a8-86c9-64fbc1af29f9 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Get console output#033[00m
Oct  2 04:53:07 np0005465604 nova_compute[260603]: 2025-10-02 08:53:07.934 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:53:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 0 B/s wr, 5 op/s
Oct  2 04:53:09 np0005465604 nova_compute[260603]: 2025-10-02 08:53:09.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.690 2 DEBUG nova.compute.manager [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-changed-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.691 2 DEBUG nova.compute.manager [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing instance network info cache due to event network-changed-4092976f-1133-4b84-91bd-87043169cb4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.691 2 DEBUG oslo_concurrency.lockutils [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.692 2 DEBUG oslo_concurrency.lockutils [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.692 2 DEBUG nova.network.neutron [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Refreshing network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.798 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.799 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.799 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.800 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.800 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.802 2 INFO nova.compute.manager [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Terminating instance#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.804 2 DEBUG nova.compute.manager [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:53:10 np0005465604 kernel: tap4092976f-11 (unregistering): left promiscuous mode
Oct  2 04:53:10 np0005465604 NetworkManager[45129]: <info>  [1759395190.8801] device (tap4092976f-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:10Z|01304|binding|INFO|Releasing lport 4092976f-1133-4b84-91bd-87043169cb4f from this chassis (sb_readonly=0)
Oct  2 04:53:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:10Z|01305|binding|INFO|Setting lport 4092976f-1133-4b84-91bd-87043169cb4f down in Southbound
Oct  2 04:53:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:10Z|01306|binding|INFO|Removing iface tap4092976f-11 ovn-installed in OVS
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:10.899 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:3a:a7 10.100.0.3'], port_security=['fa:16:3e:9e:3a:a7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '094b6b24-7323-4ad5-bd0d-e449c0c96f6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca65236d-124f-4d37-afbf-f114cc90e015', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c86b416fdb524f21b0228639a3a14116', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1c5f2f44-40d9-415b-8ae8-affca47a93a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0a762f3-67f7-4fbc-8632-5b2810d06f8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4092976f-1133-4b84-91bd-87043169cb4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:53:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:10.901 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4092976f-1133-4b84-91bd-87043169cb4f in datapath ca65236d-124f-4d37-afbf-f114cc90e015 unbound from our chassis#033[00m
Oct  2 04:53:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:10.904 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca65236d-124f-4d37-afbf-f114cc90e015, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:53:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:10.905 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9ce422-50cf-4413-96d1-b58488628c13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:10 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:10.906 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 namespace which is not needed anymore#033[00m
Oct  2 04:53:10 np0005465604 nova_compute[260603]: 2025-10-02 08:53:10.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:10 np0005465604 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct  2 04:53:10 np0005465604 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007a.scope: Consumed 1.619s CPU time.
Oct  2 04:53:10 np0005465604 systemd-machined[214636]: Machine qemu-155-instance-0000007a terminated.
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.083 2 INFO nova.virt.libvirt.driver [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Instance destroyed successfully.#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.084 2 DEBUG nova.objects.instance [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lazy-loading 'resources' on Instance uuid 094b6b24-7323-4ad5-bd0d-e449c0c96f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.104 2 DEBUG nova.virt.libvirt.vif [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:52:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1775312205',display_name='tempest-TestNetworkAdvancedServerOps-server-1775312205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1775312205',id=122,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHjAoXXSuQzVTrLfupmZ2yj1eQO64gMLoVS/w1fXN3YlLWUVSU8Ny9eVy5tfjHnq8vP2d+YlXvRC/+xl37WQ+3lkHjliAoJJEZ9n209ktTnU2lK2CJZClltvCqZ21bE87w==',key_name='tempest-TestNetworkAdvancedServerOps-1329763099',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:52:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c86b416fdb524f21b0228639a3a14116',ramdisk_id='',reservation_id='r-gkp0e6eh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-19684921',owner_user_name='tempest-TestNetworkAdvancedServerOps-19684921-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:53:05Z,user_data=None,user_id='7767630a5b1049f48d7e0fed29e221ba',uuid=094b6b24-7323-4ad5-bd0d-e449c0c96f6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.105 2 DEBUG nova.network.os_vif_util [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converting VIF {"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.106 2 DEBUG nova.network.os_vif_util [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.106 2 DEBUG os_vif [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4092976f-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:11 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [NOTICE]   (388994) : haproxy version is 2.8.14-c23fe91
Oct  2 04:53:11 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [NOTICE]   (388994) : path to executable is /usr/sbin/haproxy
Oct  2 04:53:11 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [WARNING]  (388994) : Exiting Master process...
Oct  2 04:53:11 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [WARNING]  (388994) : Exiting Master process...
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:11 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [ALERT]    (388994) : Current worker (388996) exited with code 143 (Terminated)
Oct  2 04:53:11 np0005465604 neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015[388990]: [WARNING]  (388994) : All workers exited. Exiting... (0)
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:11 np0005465604 systemd[1]: libpod-33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425.scope: Deactivated successfully.
Oct  2 04:53:11 np0005465604 conmon[388990]: conmon 33fc396ec372afccd690 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425.scope/container/memory.events
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.116 2 INFO os_vif [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:3a:a7,bridge_name='br-int',has_traffic_filtering=True,id=4092976f-1133-4b84-91bd-87043169cb4f,network=Network(ca65236d-124f-4d37-afbf-f114cc90e015),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4092976f-11')#033[00m
Oct  2 04:53:11 np0005465604 podman[389030]: 2025-10-02 08:53:11.120247126 +0000 UTC m=+0.051457452 container died 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:53:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425-userdata-shm.mount: Deactivated successfully.
Oct  2 04:53:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay-df3af0a887f56125e18a5dff2994a58aeee1c818086f552ccc3946b82615b0ba-merged.mount: Deactivated successfully.
Oct  2 04:53:11 np0005465604 podman[389030]: 2025-10-02 08:53:11.162787615 +0000 UTC m=+0.093997941 container cleanup 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:53:11 np0005465604 systemd[1]: libpod-conmon-33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425.scope: Deactivated successfully.
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.198 2 DEBUG nova.compute.manager [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.198 2 DEBUG oslo_concurrency.lockutils [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.198 2 DEBUG oslo_concurrency.lockutils [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.199 2 DEBUG oslo_concurrency.lockutils [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.199 2 DEBUG nova.compute.manager [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.199 2 DEBUG nova.compute.manager [req-9e249330-2190-49d0-b5e5-64966af5278b req-f25ef52b-606d-4b25-b346-42885d6603b9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-unplugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:53:11 np0005465604 podman[389084]: 2025-10-02 08:53:11.243345779 +0000 UTC m=+0.046856967 container remove 33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:53:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.254 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cd49cc-905d-4ccb-b34e-ccc7c999690f]: (4, ('Thu Oct  2 08:53:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 (33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425)\n33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425\nThu Oct  2 08:53:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 (33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425)\n33fc396ec372afccd69096ad5599397b34f00164e667318302e4faabcf0cf425\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.257 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3321fb90-3df5-4514-8b22-731f4c46bf24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.258 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca65236d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:11 np0005465604 kernel: tapca65236d-10: left promiscuous mode
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.282 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab24571c-8310-455d-9447-0699180bdc21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.314 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c19a0b-541c-4e62-a4e0-b5af7a9be127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.316 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8cbf18a-c337-4cf5-a38c-46343c3a23f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.350 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9baa68c9-13ba-46e7-8013-65b2176dd32e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606228, 'reachable_time': 44539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389099, 'error': None, 'target': 'ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:11 np0005465604 systemd[1]: run-netns-ovnmeta\x2dca65236d\x2d124f\x2d4d37\x2dafbf\x2df114cc90e015.mount: Deactivated successfully.
Oct  2 04:53:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.354 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca65236d-124f-4d37-afbf-f114cc90e015 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:53:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:11.354 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[da7aa489-1667-4fb6-a278-14289b222ed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.620 2 INFO nova.virt.libvirt.driver [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Deleting instance files /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f_del#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.621 2 INFO nova.virt.libvirt.driver [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Deletion of /var/lib/nova/instances/094b6b24-7323-4ad5-bd0d-e449c0c96f6f_del complete#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.672 2 INFO nova.compute.manager [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.673 2 DEBUG oslo.service.loopingcall [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.673 2 DEBUG nova.compute.manager [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:53:11 np0005465604 nova_compute[260603]: 2025-10-02 08:53:11.674 2 DEBUG nova.network.neutron [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:53:12 np0005465604 nova_compute[260603]: 2025-10-02 08:53:12.937 2 DEBUG nova.network.neutron [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:53:12 np0005465604 nova_compute[260603]: 2025-10-02 08:53:12.956 2 INFO nova.compute.manager [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Took 1.28 seconds to deallocate network for instance.#033[00m
Oct  2 04:53:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.009 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.010 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.072 2 DEBUG oslo_concurrency.processutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.294 2 DEBUG nova.compute.manager [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.295 2 DEBUG oslo_concurrency.lockutils [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.295 2 DEBUG oslo_concurrency.lockutils [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.296 2 DEBUG oslo_concurrency.lockutils [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.296 2 DEBUG nova.compute.manager [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] No waiting events found dispatching network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.296 2 WARNING nova.compute.manager [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received unexpected event network-vif-plugged-4092976f-1133-4b84-91bd-87043169cb4f for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.297 2 DEBUG nova.compute.manager [req-b1cc8c2a-ba3b-4d4b-885c-b9d5067fb4ff req-058f5c7b-f161-42b3-b5fb-87d28ab524de 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Received event network-vif-deleted-4092976f-1133-4b84-91bd-87043169cb4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.411 2 DEBUG nova.network.neutron [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updated VIF entry in instance network info cache for port 4092976f-1133-4b84-91bd-87043169cb4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.412 2 DEBUG nova.network.neutron [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Updating instance_info_cache with network_info: [{"id": "4092976f-1133-4b84-91bd-87043169cb4f", "address": "fa:16:3e:9e:3a:a7", "network": {"id": "ca65236d-124f-4d37-afbf-f114cc90e015", "bridge": "br-int", "label": "tempest-network-smoke--1558167580", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c86b416fdb524f21b0228639a3a14116", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4092976f-11", "ovs_interfaceid": "4092976f-1133-4b84-91bd-87043169cb4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.438 2 DEBUG oslo_concurrency.lockutils [req-4a3c313e-b43d-43c8-b1c0-26bc079c5e91 req-83e391f9-2eb6-4005-875a-b9588fffcaca 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-094b6b24-7323-4ad5-bd0d-e449c0c96f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:53:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:53:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901527271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.549 2 DEBUG oslo_concurrency.processutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.556 2 DEBUG nova.compute.provider_tree [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.570 2 DEBUG nova.scheduler.client.report [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.593 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.618 2 INFO nova.scheduler.client.report [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Deleted allocations for instance 094b6b24-7323-4ad5-bd0d-e449c0c96f6f#033[00m
Oct  2 04:53:13 np0005465604 nova_compute[260603]: 2025-10-02 08:53:13.680 2 DEBUG oslo_concurrency.lockutils [None req-6e5b9f53-5571-4e76-882b-56a95fc4079c 7767630a5b1049f48d7e0fed29e221ba c86b416fdb524f21b0228639a3a14116 - - default default] Lock "094b6b24-7323-4ad5-bd0d-e449c0c96f6f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:14 np0005465604 nova_compute[260603]: 2025-10-02 08:53:14.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 41 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct  2 04:53:16 np0005465604 nova_compute[260603]: 2025-10-02 08:53:16.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct  2 04:53:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:19 np0005465604 podman[389125]: 2025-10-02 08:53:19.04284505 +0000 UTC m=+0.091918525 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:53:19 np0005465604 nova_compute[260603]: 2025-10-02 08:53:19.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:19 np0005465604 podman[389124]: 2025-10-02 08:53:19.131433269 +0000 UTC m=+0.183079325 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:53:19 np0005465604 nova_compute[260603]: 2025-10-02 08:53:19.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Oct  2 04:53:19 np0005465604 nova_compute[260603]: 2025-10-02 08:53:19.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:21 np0005465604 nova_compute[260603]: 2025-10-02 08:53:21.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct  2 04:53:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:53:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3417358275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:53:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:53:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3417358275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.018508) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203018583, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 452, "num_deletes": 255, "total_data_size": 380591, "memory_usage": 390808, "flush_reason": "Manual Compaction"}
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203024100, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 377416, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47190, "largest_seqno": 47641, "table_properties": {"data_size": 374779, "index_size": 673, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6019, "raw_average_key_size": 18, "raw_value_size": 369629, "raw_average_value_size": 1106, "num_data_blocks": 31, "num_entries": 334, "num_filter_entries": 334, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395178, "oldest_key_time": 1759395178, "file_creation_time": 1759395203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 5657 microseconds, and 2796 cpu microseconds.
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.024176) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 377416 bytes OK
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.024199) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.025636) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.025656) EVENT_LOG_v1 {"time_micros": 1759395203025649, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.025677) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 377844, prev total WAL file size 377844, number of live WAL files 2.
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.026281) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373533' seq:72057594037927935, type:22 .. '6C6F676D0032303034' seq:0, type:0; will stop at (end)
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(368KB)], [107(10082KB)]
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203026323, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10701610, "oldest_snapshot_seqno": -1}
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6837 keys, 10583870 bytes, temperature: kUnknown
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203111278, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10583870, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10535905, "index_size": 29741, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178083, "raw_average_key_size": 26, "raw_value_size": 10411423, "raw_average_value_size": 1522, "num_data_blocks": 1169, "num_entries": 6837, "num_filter_entries": 6837, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.111582) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10583870 bytes
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.113381) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.8 rd, 124.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(56.4) write-amplify(28.0) OK, records in: 7355, records dropped: 518 output_compression: NoCompression
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.113411) EVENT_LOG_v1 {"time_micros": 1759395203113397, "job": 64, "event": "compaction_finished", "compaction_time_micros": 85043, "compaction_time_cpu_micros": 49435, "output_level": 6, "num_output_files": 1, "total_output_size": 10583870, "num_input_records": 7355, "num_output_records": 6837, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203113723, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395203117438, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.026182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.117512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.117518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.117521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.117524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:53:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:53:23.117527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:53:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct  2 04:53:24 np0005465604 nova_compute[260603]: 2025-10-02 08:53:24.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:53:26 np0005465604 podman[389169]: 2025-10-02 08:53:25.999739828 +0000 UTC m=+0.062805232 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid)
Oct  2 04:53:26 np0005465604 podman[389168]: 2025-10-02 08:53:26.036654728 +0000 UTC m=+0.091841682 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct  2 04:53:26 np0005465604 nova_compute[260603]: 2025-10-02 08:53:26.082 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395191.0809329, 094b6b24-7323-4ad5-bd0d-e449c0c96f6f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:53:26 np0005465604 nova_compute[260603]: 2025-10-02 08:53:26.082 2 INFO nova.compute.manager [-] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:53:26 np0005465604 nova_compute[260603]: 2025-10-02 08:53:26.103 2 DEBUG nova.compute.manager [None req-6d4ea745-a3d2-4f3e-8f5e-a40031873548 - - - - - -] [instance: 094b6b24-7323-4ad5-bd0d-e449c0c96f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:53:26 np0005465604 nova_compute[260603]: 2025-10-02 08:53:26.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:53:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:53:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:53:28
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'volumes', 'backups']
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:53:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:53:28 np0005465604 nova_compute[260603]: 2025-10-02 08:53:28.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:28 np0005465604 nova_compute[260603]: 2025-10-02 08:53:28.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:53:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:53:29 np0005465604 nova_compute[260603]: 2025-10-02 08:53:29.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:31 np0005465604 nova_compute[260603]: 2025-10-02 08:53:31.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:53:31 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev cfcb97f9-09e3-45b3-a7aa-bb10ac5581ab does not exist
Oct  2 04:53:31 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 24da39d4-8461-495f-a8fa-d36de2fb2eb1 does not exist
Oct  2 04:53:31 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0c12582d-9e08-4949-aff8-28849b32dd68 does not exist
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:53:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:53:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:53:32 np0005465604 podman[389478]: 2025-10-02 08:53:32.117309595 +0000 UTC m=+0.053930630 container create 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:53:32 np0005465604 systemd[1]: Started libpod-conmon-81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274.scope.
Oct  2 04:53:32 np0005465604 podman[389478]: 2025-10-02 08:53:32.101395751 +0000 UTC m=+0.038016806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:53:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:53:32 np0005465604 podman[389478]: 2025-10-02 08:53:32.222999796 +0000 UTC m=+0.159620861 container init 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 04:53:32 np0005465604 podman[389478]: 2025-10-02 08:53:32.236884206 +0000 UTC m=+0.173505281 container start 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:53:32 np0005465604 podman[389478]: 2025-10-02 08:53:32.241393419 +0000 UTC m=+0.178014494 container attach 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:53:32 np0005465604 nervous_neumann[389494]: 167 167
Oct  2 04:53:32 np0005465604 systemd[1]: libpod-81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274.scope: Deactivated successfully.
Oct  2 04:53:32 np0005465604 conmon[389494]: conmon 81e5ace31b6a22b287fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274.scope/container/memory.events
Oct  2 04:53:32 np0005465604 podman[389478]: 2025-10-02 08:53:32.249352551 +0000 UTC m=+0.185973626 container died 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:53:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-64c60e31b2732dfdb292db9cf1a968bcb0ab1a62a53d7ce5b9e77fd2c8536df3-merged.mount: Deactivated successfully.
Oct  2 04:53:32 np0005465604 podman[389478]: 2025-10-02 08:53:32.305242194 +0000 UTC m=+0.241863279 container remove 81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:53:32 np0005465604 systemd[1]: libpod-conmon-81e5ace31b6a22b287faa6039157971b7b395849955cbbcaa1370e21eccc5274.scope: Deactivated successfully.
Oct  2 04:53:32 np0005465604 podman[389519]: 2025-10-02 08:53:32.53820626 +0000 UTC m=+0.067299495 container create 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:53:32 np0005465604 systemd[1]: Started libpod-conmon-11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf.scope.
Oct  2 04:53:32 np0005465604 podman[389519]: 2025-10-02 08:53:32.513960821 +0000 UTC m=+0.043054136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:53:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:53:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:32 np0005465604 podman[389519]: 2025-10-02 08:53:32.656094477 +0000 UTC m=+0.185187772 container init 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 04:53:32 np0005465604 podman[389519]: 2025-10-02 08:53:32.666855097 +0000 UTC m=+0.195948352 container start 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:53:32 np0005465604 podman[389519]: 2025-10-02 08:53:32.671406662 +0000 UTC m=+0.200499917 container attach 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 04:53:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:53:33 np0005465604 interesting_poincare[389536]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:53:33 np0005465604 interesting_poincare[389536]: --> relative data size: 1.0
Oct  2 04:53:33 np0005465604 interesting_poincare[389536]: --> All data devices are unavailable
Oct  2 04:53:33 np0005465604 systemd[1]: libpod-11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf.scope: Deactivated successfully.
Oct  2 04:53:33 np0005465604 podman[389519]: 2025-10-02 08:53:33.95228806 +0000 UTC m=+1.481381315 container died 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 04:53:33 np0005465604 systemd[1]: libpod-11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf.scope: Consumed 1.229s CPU time.
Oct  2 04:53:33 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a05b3f6ac2474417a99710534cd6fa2b1e4a10d655fa2b83016876c1d9f14db4-merged.mount: Deactivated successfully.
Oct  2 04:53:34 np0005465604 podman[389519]: 2025-10-02 08:53:34.01094563 +0000 UTC m=+1.540038865 container remove 11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poincare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 04:53:34 np0005465604 systemd[1]: libpod-conmon-11daa1a53418bd408934c9ba86a1edd1e579c62b5d10abdae4e0ec999d348ebf.scope: Deactivated successfully.
Oct  2 04:53:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:34.834 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:34 np0005465604 podman[389720]: 2025-10-02 08:53:34.918805372 +0000 UTC m=+0.074876805 container create dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:53:34 np0005465604 systemd[1]: Started libpod-conmon-dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63.scope.
Oct  2 04:53:34 np0005465604 podman[389720]: 2025-10-02 08:53:34.887606923 +0000 UTC m=+0.043678396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:53:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:53:34 np0005465604 nova_compute[260603]: 2025-10-02 08:53:34.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:35 np0005465604 podman[389720]: 2025-10-02 08:53:35.00862169 +0000 UTC m=+0.164693153 container init dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:53:35 np0005465604 podman[389720]: 2025-10-02 08:53:35.018116141 +0000 UTC m=+0.174187544 container start dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 04:53:35 np0005465604 podman[389720]: 2025-10-02 08:53:35.021546409 +0000 UTC m=+0.177617912 container attach dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct  2 04:53:35 np0005465604 goofy_driscoll[389736]: 167 167
Oct  2 04:53:35 np0005465604 systemd[1]: libpod-dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63.scope: Deactivated successfully.
Oct  2 04:53:35 np0005465604 podman[389720]: 2025-10-02 08:53:35.026147906 +0000 UTC m=+0.182219339 container died dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 04:53:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-02e7f122343f2fb14c2db925f5321338bb245e4c972523f47b273646f491d546-merged.mount: Deactivated successfully.
Oct  2 04:53:35 np0005465604 podman[389720]: 2025-10-02 08:53:35.075928204 +0000 UTC m=+0.231999637 container remove dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 04:53:35 np0005465604 systemd[1]: libpod-conmon-dfc580899791871a971ddbf13a4e4457bb7a3a5862baf5a2cd7bfee63be35e63.scope: Deactivated successfully.
Oct  2 04:53:35 np0005465604 podman[389762]: 2025-10-02 08:53:35.324336279 +0000 UTC m=+0.064204356 container create bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 04:53:35 np0005465604 systemd[1]: Started libpod-conmon-bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0.scope.
Oct  2 04:53:35 np0005465604 podman[389762]: 2025-10-02 08:53:35.296299351 +0000 UTC m=+0.036167468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:53:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:53:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18cd734e860a51889aa1938ee9519b4348c478c4d2857a4e4c6626dccc5c0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18cd734e860a51889aa1938ee9519b4348c478c4d2857a4e4c6626dccc5c0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18cd734e860a51889aa1938ee9519b4348c478c4d2857a4e4c6626dccc5c0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18cd734e860a51889aa1938ee9519b4348c478c4d2857a4e4c6626dccc5c0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:35 np0005465604 podman[389762]: 2025-10-02 08:53:35.438259371 +0000 UTC m=+0.178127448 container init bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 04:53:35 np0005465604 podman[389762]: 2025-10-02 08:53:35.453045579 +0000 UTC m=+0.192913656 container start bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 04:53:35 np0005465604 podman[389762]: 2025-10-02 08:53:35.457589304 +0000 UTC m=+0.197457441 container attach bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:53:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:53:36 np0005465604 nova_compute[260603]: 2025-10-02 08:53:36.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]: {
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:    "0": [
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:        {
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "devices": [
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "/dev/loop3"
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            ],
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_name": "ceph_lv0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_size": "21470642176",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "name": "ceph_lv0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "tags": {
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.cluster_name": "ceph",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.crush_device_class": "",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.encrypted": "0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.osd_id": "0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.type": "block",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.vdo": "0"
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            },
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "type": "block",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "vg_name": "ceph_vg0"
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:        }
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:    ],
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:    "1": [
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:        {
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "devices": [
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "/dev/loop4"
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            ],
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_name": "ceph_lv1",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_size": "21470642176",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "name": "ceph_lv1",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "tags": {
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.cluster_name": "ceph",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.crush_device_class": "",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.encrypted": "0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.osd_id": "1",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.type": "block",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.vdo": "0"
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            },
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "type": "block",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "vg_name": "ceph_vg1"
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:        }
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:    ],
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:    "2": [
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:        {
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "devices": [
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "/dev/loop5"
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            ],
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_name": "ceph_lv2",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_size": "21470642176",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "name": "ceph_lv2",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "tags": {
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.cluster_name": "ceph",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.crush_device_class": "",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.encrypted": "0",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.osd_id": "2",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.type": "block",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:                "ceph.vdo": "0"
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            },
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "type": "block",
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:            "vg_name": "ceph_vg2"
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:        }
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]:    ]
Oct  2 04:53:36 np0005465604 inspiring_taussig[389779]: }
Oct  2 04:53:36 np0005465604 systemd[1]: libpod-bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0.scope: Deactivated successfully.
Oct  2 04:53:36 np0005465604 podman[389762]: 2025-10-02 08:53:36.194366212 +0000 UTC m=+0.934234249 container died bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:53:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3f18cd734e860a51889aa1938ee9519b4348c478c4d2857a4e4c6626dccc5c0a-merged.mount: Deactivated successfully.
Oct  2 04:53:36 np0005465604 podman[389762]: 2025-10-02 08:53:36.253165346 +0000 UTC m=+0.993033383 container remove bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_taussig, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:53:36 np0005465604 systemd[1]: libpod-conmon-bb38b9957173422c510654a448838f465b2dd4cf243b1b7cfbb49b6aac6fead0.scope: Deactivated successfully.
Oct  2 04:53:37 np0005465604 podman[389941]: 2025-10-02 08:53:37.039719982 +0000 UTC m=+0.069906307 container create a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct  2 04:53:37 np0005465604 systemd[1]: Started libpod-conmon-a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e.scope.
Oct  2 04:53:37 np0005465604 podman[389941]: 2025-10-02 08:53:37.010372672 +0000 UTC m=+0.040559037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:53:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:53:37 np0005465604 podman[389941]: 2025-10-02 08:53:37.158811958 +0000 UTC m=+0.188998293 container init a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:53:37 np0005465604 podman[389941]: 2025-10-02 08:53:37.169993203 +0000 UTC m=+0.200179518 container start a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:53:37 np0005465604 podman[389941]: 2025-10-02 08:53:37.174386092 +0000 UTC m=+0.204572407 container attach a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 04:53:37 np0005465604 gifted_mahavira[389957]: 167 167
Oct  2 04:53:37 np0005465604 systemd[1]: libpod-a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e.scope: Deactivated successfully.
Oct  2 04:53:37 np0005465604 podman[389941]: 2025-10-02 08:53:37.178232654 +0000 UTC m=+0.208418979 container died a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:53:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c60578550ff433f79edca02bf1f05e3806d3db3ec6fc3de3c699f386c2d7f2c5-merged.mount: Deactivated successfully.
Oct  2 04:53:37 np0005465604 podman[389941]: 2025-10-02 08:53:37.232917328 +0000 UTC m=+0.263103643 container remove a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_mahavira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:53:37 np0005465604 systemd[1]: libpod-conmon-a2ef04c58917a57d50c0fcf867fcbb54d5a9336a829bb0c9e3ef3944253fd47e.scope: Deactivated successfully.
Oct  2 04:53:37 np0005465604 podman[389980]: 2025-10-02 08:53:37.496918227 +0000 UTC m=+0.075571606 container create 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 04:53:37 np0005465604 systemd[1]: Started libpod-conmon-014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100.scope.
Oct  2 04:53:37 np0005465604 podman[389980]: 2025-10-02 08:53:37.466545955 +0000 UTC m=+0.045199374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:53:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:53:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129eb30fb1a81395cc661d69e8a159dd0c2ae09dd968c646d35e28de3b081ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129eb30fb1a81395cc661d69e8a159dd0c2ae09dd968c646d35e28de3b081ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129eb30fb1a81395cc661d69e8a159dd0c2ae09dd968c646d35e28de3b081ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0129eb30fb1a81395cc661d69e8a159dd0c2ae09dd968c646d35e28de3b081ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:53:37 np0005465604 podman[389980]: 2025-10-02 08:53:37.627256529 +0000 UTC m=+0.205909898 container init 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 04:53:37 np0005465604 podman[389980]: 2025-10-02 08:53:37.640542641 +0000 UTC m=+0.219196020 container start 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:53:37 np0005465604 podman[389980]: 2025-10-02 08:53:37.645038633 +0000 UTC m=+0.223692052 container attach 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 04:53:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]: {
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "osd_id": 2,
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "type": "bluestore"
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:    },
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "osd_id": 1,
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "type": "bluestore"
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:    },
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "osd_id": 0,
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:        "type": "bluestore"
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]:    }
Oct  2 04:53:38 np0005465604 stupefied_golick[389997]: }
Oct  2 04:53:38 np0005465604 systemd[1]: libpod-014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100.scope: Deactivated successfully.
Oct  2 04:53:38 np0005465604 systemd[1]: libpod-014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100.scope: Consumed 1.089s CPU time.
Oct  2 04:53:38 np0005465604 podman[390031]: 2025-10-02 08:53:38.770416821 +0000 UTC m=+0.024983113 container died 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 04:53:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0129eb30fb1a81395cc661d69e8a159dd0c2ae09dd968c646d35e28de3b081ab-merged.mount: Deactivated successfully.
Oct  2 04:53:38 np0005465604 podman[390031]: 2025-10-02 08:53:38.863123621 +0000 UTC m=+0.117689873 container remove 014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:53:38 np0005465604 systemd[1]: libpod-conmon-014847982c2228b339083ad6511c84a7e013d2782a9ed5849d6dc608344cd100.scope: Deactivated successfully.
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:53:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:53:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:53:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:53:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 42c27c1c-c518-44f0-b24d-18a086314123 does not exist
Oct  2 04:53:38 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4c18ad71-7e6c-46fd-937a-d1832f4ad690 does not exist
Oct  2 04:53:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:53:39 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:53:39 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:53:40 np0005465604 nova_compute[260603]: 2025-10-02 08:53:40.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:41 np0005465604 nova_compute[260603]: 2025-10-02 08:53:41.008 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:41 np0005465604 nova_compute[260603]: 2025-10-02 08:53:41.008 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:41 np0005465604 nova_compute[260603]: 2025-10-02 08:53:41.089 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:53:41 np0005465604 nova_compute[260603]: 2025-10-02 08:53:41.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:41 np0005465604 nova_compute[260603]: 2025-10-02 08:53:41.245 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:41 np0005465604 nova_compute[260603]: 2025-10-02 08:53:41.245 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:41 np0005465604 nova_compute[260603]: 2025-10-02 08:53:41.256 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:53:41 np0005465604 nova_compute[260603]: 2025-10-02 08:53:41.257 2 INFO nova.compute.claims [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:53:41 np0005465604 nova_compute[260603]: 2025-10-02 08:53:41.471 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:41 np0005465604 nova_compute[260603]: 2025-10-02 08:53:41.522 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 41 MiB data, 824 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:53:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:53:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3348612134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:53:41 np0005465604 nova_compute[260603]: 2025-10-02 08:53:41.998 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.003 2 DEBUG nova.compute.provider_tree [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.016 2 DEBUG nova.scheduler.client.report [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.051 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.052 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.185 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.186 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.212 2 INFO nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.235 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.431 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.433 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.434 2 INFO nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Creating image(s)#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.466 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.500 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.533 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.538 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.581 2 DEBUG nova.policy [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.615 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.615 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.616 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.616 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.639 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.643 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ce9a5c17-646f-4ba2-a974-90e4b864872e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:42 np0005465604 nova_compute[260603]: 2025-10-02 08:53:42.947 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ce9a5c17-646f-4ba2-a974-90e4b864872e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.049 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.133 2 DEBUG nova.objects.instance [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid ce9a5c17-646f-4ba2-a974-90e4b864872e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.165 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.166 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Ensure instance console log exists: /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.166 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.166 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.167 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.538 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.538 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:53:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 54 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 655 KiB/s wr, 23 op/s
Oct  2 04:53:43 np0005465604 nova_compute[260603]: 2025-10-02 08:53:43.661 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Successfully created port: 4fd5381b-e8ba-485f-9cb6-692a37b716a1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:53:44 np0005465604 nova_compute[260603]: 2025-10-02 08:53:44.418 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Successfully updated port: 4fd5381b-e8ba-485f-9cb6-692a37b716a1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:53:44 np0005465604 nova_compute[260603]: 2025-10-02 08:53:44.432 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:53:44 np0005465604 nova_compute[260603]: 2025-10-02 08:53:44.432 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:53:44 np0005465604 nova_compute[260603]: 2025-10-02 08:53:44.432 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:53:44 np0005465604 nova_compute[260603]: 2025-10-02 08:53:44.500 2 DEBUG nova.compute.manager [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:53:44 np0005465604 nova_compute[260603]: 2025-10-02 08:53:44.500 2 DEBUG nova.compute.manager [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing instance network info cache due to event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:53:44 np0005465604 nova_compute[260603]: 2025-10-02 08:53:44.500 2 DEBUG oslo_concurrency.lockutils [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:53:44 np0005465604 nova_compute[260603]: 2025-10-02 08:53:44.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:44 np0005465604 nova_compute[260603]: 2025-10-02 08:53:44.576 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.282 2 DEBUG nova.network.neutron [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.304 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.305 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance network_info: |[{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.305 2 DEBUG oslo_concurrency.lockutils [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.306 2 DEBUG nova.network.neutron [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.308 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Start _get_guest_xml network_info=[{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.314 2 WARNING nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.320 2 DEBUG nova.virt.libvirt.host [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.321 2 DEBUG nova.virt.libvirt.host [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.330 2 DEBUG nova.virt.libvirt.host [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.331 2 DEBUG nova.virt.libvirt.host [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.332 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.333 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.334 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.334 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.335 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.335 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.335 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.336 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.336 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.336 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.337 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.337 2 DEBUG nova.virt.hardware [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.342 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.541 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.543 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.543 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 54 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 655 KiB/s wr, 23 op/s
Oct  2 04:53:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:53:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292277673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.793 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.823 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:45 np0005465604 nova_compute[260603]: 2025-10-02 08:53:45.827 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:53:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099875373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.034 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.235 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.237 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.98080825805664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.237 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.237 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:53:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1070282520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.271 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.272 2 DEBUG nova.virt.libvirt.vif [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1285499332',display_name='tempest-TestNetworkBasicOps-server-1285499332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1285499332',id=123,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAz8IMZ7Jf5vn2RyFjWWITiqepr2aLtS8oy68xeil523gnhXxTAuBsHLCjnOZ53PJL0p7gfBo3vI6+ggEPUbO2Sg7r3zAGgPHkKwwrkmc/GeJgKH1QAb2g1ODGvS4w3VLw==',key_name='tempest-TestNetworkBasicOps-2122334356',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5j8y63ir',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:53:42Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=ce9a5c17-646f-4ba2-a974-90e4b864872e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.273 2 DEBUG nova.network.os_vif_util [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.274 2 DEBUG nova.network.os_vif_util [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.275 2 DEBUG nova.objects.instance [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce9a5c17-646f-4ba2-a974-90e4b864872e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.306 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  <uuid>ce9a5c17-646f-4ba2-a974-90e4b864872e</uuid>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  <name>instance-0000007b</name>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-1285499332</nova:name>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:53:45</nova:creationTime>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <nova:port uuid="4fd5381b-e8ba-485f-9cb6-692a37b716a1">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <entry name="serial">ce9a5c17-646f-4ba2-a974-90e4b864872e</entry>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <entry name="uuid">ce9a5c17-646f-4ba2-a974-90e4b864872e</entry>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ce9a5c17-646f-4ba2-a974-90e4b864872e_disk">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:04:0b:fe"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <target dev="tap4fd5381b-e8"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/console.log" append="off"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:53:46 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:53:46 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:53:46 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:53:46 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.308 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Preparing to wait for external event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.308 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.308 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.308 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.309 2 DEBUG nova.virt.libvirt.vif [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1285499332',display_name='tempest-TestNetworkBasicOps-server-1285499332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1285499332',id=123,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAz8IMZ7Jf5vn2RyFjWWITiqepr2aLtS8oy68xeil523gnhXxTAuBsHLCjnOZ53PJL0p7gfBo3vI6+ggEPUbO2Sg7r3zAGgPHkKwwrkmc/GeJgKH1QAb2g1ODGvS4w3VLw==',key_name='tempest-TestNetworkBasicOps-2122334356',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5j8y63ir',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:53:42Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=ce9a5c17-646f-4ba2-a974-90e4b864872e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.310 2 DEBUG nova.network.os_vif_util [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.310 2 DEBUG nova.network.os_vif_util [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.311 2 DEBUG os_vif [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.312 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.313 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd5381b-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fd5381b-e8, col_values=(('external_ids', {'iface-id': '4fd5381b-e8ba-485f-9cb6-692a37b716a1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:0b:fe', 'vm-uuid': 'ce9a5c17-646f-4ba2-a974-90e4b864872e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:46 np0005465604 NetworkManager[45129]: <info>  [1759395226.3472] manager: (tap4fd5381b-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/514)
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.353 2 INFO os_vif [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8')#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.412 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.412 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.413 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:04:0b:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.413 2 INFO nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Using config drive#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.435 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.461 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ce9a5c17-646f-4ba2-a974-90e4b864872e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.461 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.462 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.596 2 DEBUG nova.network.neutron [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updated VIF entry in instance network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.597 2 DEBUG nova.network.neutron [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.602 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.659 2 DEBUG oslo_concurrency.lockutils [req-bdb819b5-cf7d-4f00-a996-16e59f3fcd62 req-6cdb6ddf-fd98-47df-a103-d931ce48ab2c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.844 2 INFO nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Creating config drive at /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config#033[00m
Oct  2 04:53:46 np0005465604 nova_compute[260603]: 2025-10-02 08:53:46.850 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbjpw9e8i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.015 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbjpw9e8i" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.056 2 DEBUG nova.storage.rbd_utils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.061 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:53:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3013958535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.119 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.129 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.147 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.169 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.170 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.275 2 DEBUG oslo_concurrency.processutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config ce9a5c17-646f-4ba2-a974-90e4b864872e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.275 2 INFO nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Deleting local config drive /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e/disk.config because it was imported into RBD.#033[00m
Oct  2 04:53:47 np0005465604 kernel: tap4fd5381b-e8: entered promiscuous mode
Oct  2 04:53:47 np0005465604 NetworkManager[45129]: <info>  [1759395227.3536] manager: (tap4fd5381b-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:47Z|01307|binding|INFO|Claiming lport 4fd5381b-e8ba-485f-9cb6-692a37b716a1 for this chassis.
Oct  2 04:53:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:47Z|01308|binding|INFO|4fd5381b-e8ba-485f-9cb6-692a37b716a1: Claiming fa:16:3e:04:0b:fe 10.100.0.10
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.373 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:0b:fe 10.100.0.10'], port_security=['fa:16:3e:04:0b:fe 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ce9a5c17-646f-4ba2-a974-90e4b864872e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7fe3b6c1-e179-4b79-8e83-9d4b983838b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=300ebc04-b3b0-45d9-94e7-4b6ae68b1331, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4fd5381b-e8ba-485f-9cb6-692a37b716a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.374 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4fd5381b-e8ba-485f-9cb6-692a37b716a1 in datapath 7511e8c2-7c26-4eea-b465-32e904aba1a9 bound to our chassis#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.376 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7511e8c2-7c26-4eea-b465-32e904aba1a9#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.397 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f40d67-da28-4845-a828-6a8d9a261f29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.398 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7511e8c2-71 in ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:53:47 np0005465604 systemd-udevd[390464]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.401 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7511e8c2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.402 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1c34c4fe-0165-4697-a123-2736b541d4d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.403 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[056faaba-dd58-4a0b-ba5c-a1c072f3b2f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 systemd-machined[214636]: New machine qemu-156-instance-0000007b.
Oct  2 04:53:47 np0005465604 NetworkManager[45129]: <info>  [1759395227.4146] device (tap4fd5381b-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:53:47 np0005465604 NetworkManager[45129]: <info>  [1759395227.4162] device (tap4fd5381b-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.421 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[3221b740-bbaa-4001-9cd7-6367a5a39604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 systemd[1]: Started Virtual Machine qemu-156-instance-0000007b.
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.455 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3d70c249-6867-4a2b-84b4-75bdada2844e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:47Z|01309|binding|INFO|Setting lport 4fd5381b-e8ba-485f-9cb6-692a37b716a1 ovn-installed in OVS
Oct  2 04:53:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:47Z|01310|binding|INFO|Setting lport 4fd5381b-e8ba-485f-9cb6-692a37b716a1 up in Southbound
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.487 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6126e26a-568a-48d5-8f23-b417d6c74a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 NetworkManager[45129]: <info>  [1759395227.5071] manager: (tap7511e8c2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/516)
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.506 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3c27d6-e298-4d99-9d95-f43e4b9e4809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.540 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5b31e849-23e1-4f8a-bde7-9976e6a20905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.542 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e4346170-91de-4c9e-89af-9752bbbde23c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 NetworkManager[45129]: <info>  [1759395227.5680] device (tap7511e8c2-70): carrier: link connected
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.576 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[21d8cf89-59c7-4dbe-857d-8b85609a9163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.594 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[831f4731-8ceb-4ee7-81f5-29c5cce72bb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7511e8c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:ab:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610617, 'reachable_time': 28651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390496, 'error': None, 'target': 'ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.610 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[323430ba-7861-426b-a7fc-82a2c26f099c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:ab2e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610617, 'tstamp': 610617}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390497, 'error': None, 'target': 'ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.626 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e417a2-7189-4148-8453-68cd1b81e389]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7511e8c2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:ab:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610617, 'reachable_time': 28651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390498, 'error': None, 'target': 'ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.655 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[af1995fd-ac24-4e9f-9dbf-0490599d0b03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.702 2 DEBUG nova.compute.manager [req-e3b11257-b53a-46d0-97a0-ba82423a0f0b req-94d6fcb4-c17b-4867-8662-4876e5d30168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.703 2 DEBUG oslo_concurrency.lockutils [req-e3b11257-b53a-46d0-97a0-ba82423a0f0b req-94d6fcb4-c17b-4867-8662-4876e5d30168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.703 2 DEBUG oslo_concurrency.lockutils [req-e3b11257-b53a-46d0-97a0-ba82423a0f0b req-94d6fcb4-c17b-4867-8662-4876e5d30168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.703 2 DEBUG oslo_concurrency.lockutils [req-e3b11257-b53a-46d0-97a0-ba82423a0f0b req-94d6fcb4-c17b-4867-8662-4876e5d30168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.703 2 DEBUG nova.compute.manager [req-e3b11257-b53a-46d0-97a0-ba82423a0f0b req-94d6fcb4-c17b-4867-8662-4876e5d30168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Processing event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.708 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a73fd3f3-b2fe-4839-9422-ec5cbfa8ef8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.709 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7511e8c2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.709 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.710 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7511e8c2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:47 np0005465604 NetworkManager[45129]: <info>  [1759395227.7122] manager: (tap7511e8c2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/517)
Oct  2 04:53:47 np0005465604 kernel: tap7511e8c2-70: entered promiscuous mode
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.716 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7511e8c2-70, col_values=(('external_ids', {'iface-id': 'f80df61f-a234-49f9-816c-2f176ad94b02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:47 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:47Z|01311|binding|INFO|Releasing lport f80df61f-a234-49f9-816c-2f176ad94b02 from this chassis (sb_readonly=0)
Oct  2 04:53:47 np0005465604 nova_compute[260603]: 2025-10-02 08:53:47.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.730 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7511e8c2-7c26-4eea-b465-32e904aba1a9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7511e8c2-7c26-4eea-b465-32e904aba1a9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.734 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[41c9c83a-9913-4177-aa55-ff37d442e728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.734 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-7511e8c2-7c26-4eea-b465-32e904aba1a9
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/7511e8c2-7c26-4eea-b465-32e904aba1a9.pid.haproxy
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 7511e8c2-7c26-4eea-b465-32e904aba1a9
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:53:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:53:47.735 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'env', 'PROCESS_TAG=haproxy-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7511e8c2-7c26-4eea-b465-32e904aba1a9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:53:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:48 np0005465604 podman[390530]: 2025-10-02 08:53:48.109396581 +0000 UTC m=+0.055624604 container create bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:53:48 np0005465604 systemd[1]: Started libpod-conmon-bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a.scope.
Oct  2 04:53:48 np0005465604 podman[390530]: 2025-10-02 08:53:48.076121797 +0000 UTC m=+0.022349810 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:53:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:53:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841e1ce90c14f958dd89444a0f9f00fd4fb528b405b275066734c073b28f4568/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:53:48 np0005465604 podman[390530]: 2025-10-02 08:53:48.192182096 +0000 UTC m=+0.138410159 container init bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:53:48 np0005465604 podman[390530]: 2025-10-02 08:53:48.197777123 +0000 UTC m=+0.144005156 container start bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 04:53:48 np0005465604 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [NOTICE]   (390549) : New worker (390551) forked
Oct  2 04:53:48 np0005465604 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [NOTICE]   (390549) : Loading success.
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.847 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395228.8468075, ce9a5c17-646f-4ba2-a974-90e4b864872e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.848 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] VM Started (Lifecycle Event)#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.851 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.854 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.858 2 INFO nova.virt.libvirt.driver [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance spawned successfully.#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.858 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.875 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.885 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.888 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.889 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.889 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.889 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.890 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.890 2 DEBUG nova.virt.libvirt.driver [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.914 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.914 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395228.8471572, ce9a5c17-646f-4ba2-a974-90e4b864872e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.915 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.946 2 INFO nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Took 6.51 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.946 2 DEBUG nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.948 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.954 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395228.854169, ce9a5c17-646f-4ba2-a974-90e4b864872e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:53:48 np0005465604 nova_compute[260603]: 2025-10-02 08:53:48.954 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:53:49 np0005465604 nova_compute[260603]: 2025-10-02 08:53:49.001 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:53:49 np0005465604 nova_compute[260603]: 2025-10-02 08:53:49.004 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:53:49 np0005465604 nova_compute[260603]: 2025-10-02 08:53:49.023 2 INFO nova.compute.manager [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Took 7.82 seconds to build instance.#033[00m
Oct  2 04:53:49 np0005465604 nova_compute[260603]: 2025-10-02 08:53:49.042 2 DEBUG oslo_concurrency.lockutils [None req-1e8a93e4-2af1-4bde-957c-d6a64ccdf8fa ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct  2 04:53:50 np0005465604 nova_compute[260603]: 2025-10-02 08:53:50.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:50 np0005465604 podman[390603]: 2025-10-02 08:53:50.043490339 +0000 UTC m=+0.090390017 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 04:53:50 np0005465604 nova_compute[260603]: 2025-10-02 08:53:50.065 2 DEBUG nova.compute.manager [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:53:50 np0005465604 nova_compute[260603]: 2025-10-02 08:53:50.066 2 DEBUG oslo_concurrency.lockutils [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:50 np0005465604 nova_compute[260603]: 2025-10-02 08:53:50.066 2 DEBUG oslo_concurrency.lockutils [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:50 np0005465604 nova_compute[260603]: 2025-10-02 08:53:50.066 2 DEBUG oslo_concurrency.lockutils [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:50 np0005465604 nova_compute[260603]: 2025-10-02 08:53:50.067 2 DEBUG nova.compute.manager [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] No waiting events found dispatching network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:53:50 np0005465604 nova_compute[260603]: 2025-10-02 08:53:50.067 2 WARNING nova.compute.manager [req-1102cea7-a77c-4577-9ef1-950045f2bf20 req-07ca1f04-e5bc-4ab0-8e59-ba72adc57e5b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received unexpected event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:53:50 np0005465604 podman[390602]: 2025-10-02 08:53:50.078792668 +0000 UTC m=+0.125736367 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 04:53:50 np0005465604 nova_compute[260603]: 2025-10-02 08:53:50.165 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:50 np0005465604 nova_compute[260603]: 2025-10-02 08:53:50.166 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:51 np0005465604 NetworkManager[45129]: <info>  [1759395231.3279] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Oct  2 04:53:51 np0005465604 NetworkManager[45129]: <info>  [1759395231.3287] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.345 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.345 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.362 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:51 np0005465604 ovn_controller[152344]: 2025-10-02T08:53:51Z|01312|binding|INFO|Releasing lport f80df61f-a234-49f9-816c-2f176ad94b02 from this chassis (sb_readonly=0)
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.441 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.442 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.450 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.451 2 INFO nova.compute.claims [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.588 2 DEBUG nova.compute.manager [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.588 2 DEBUG nova.compute.manager [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing instance network info cache due to event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.589 2 DEBUG oslo_concurrency.lockutils [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.589 2 DEBUG oslo_concurrency.lockutils [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.590 2 DEBUG nova.network.neutron [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:53:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 88 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct  2 04:53:51 np0005465604 nova_compute[260603]: 2025-10-02 08:53:51.638 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:53:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4083111441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.127 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.137 2 DEBUG nova.compute.provider_tree [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.183 2 DEBUG nova.scheduler.client.report [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.232 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.233 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.300 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.300 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.337 2 INFO nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.372 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.501 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.502 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.502 2 INFO nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating image(s)#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.520 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.543 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.561 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.564 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.606 2 DEBUG nova.policy [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bd8daf03c9d144d7a60fe3f81abdfbb4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '21220064aba34c77a8af713fad28c08b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.611 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.648 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.649 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.650 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.650 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.670 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.673 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.920 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:52 np0005465604 nova_compute[260603]: 2025-10-02 08:53:52.969 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] resizing rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:53:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:53 np0005465604 nova_compute[260603]: 2025-10-02 08:53:53.063 2 DEBUG nova.objects.instance [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'migration_context' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:53:53 np0005465604 nova_compute[260603]: 2025-10-02 08:53:53.126 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:53:53 np0005465604 nova_compute[260603]: 2025-10-02 08:53:53.127 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Ensure instance console log exists: /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:53:53 np0005465604 nova_compute[260603]: 2025-10-02 08:53:53.128 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:53 np0005465604 nova_compute[260603]: 2025-10-02 08:53:53.129 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:53 np0005465604 nova_compute[260603]: 2025-10-02 08:53:53.129 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 111 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.1 MiB/s wr, 102 op/s
Oct  2 04:53:55 np0005465604 nova_compute[260603]: 2025-10-02 08:53:55.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:55 np0005465604 nova_compute[260603]: 2025-10-02 08:53:55.187 2 DEBUG nova.network.neutron [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updated VIF entry in instance network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:53:55 np0005465604 nova_compute[260603]: 2025-10-02 08:53:55.188 2 DEBUG nova.network.neutron [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:53:55 np0005465604 nova_compute[260603]: 2025-10-02 08:53:55.293 2 DEBUG oslo_concurrency.lockutils [req-0c48afb7-7dcb-48a4-9fdc-b918a1c5140f req-e33bcf68-a966-4f35-bf53-f28db147b9c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:53:55 np0005465604 nova_compute[260603]: 2025-10-02 08:53:55.316 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Successfully created port: 6c4955ab-011a-4e64-9871-c01b4818d740 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:53:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 111 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 78 op/s
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.138 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Successfully updated port: 6c4955ab-011a-4e64-9871-c01b4818d740 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.220 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.221 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.221 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.286 2 DEBUG nova.compute.manager [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.287 2 DEBUG nova.compute.manager [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing instance network info cache due to event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.287 2 DEBUG oslo_concurrency.lockutils [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.396 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 04:53:56 np0005465604 nova_compute[260603]: 2025-10-02 08:53:56.539 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:53:57 np0005465604 podman[390835]: 2025-10-02 08:53:57.008163993 +0000 UTC m=+0.069715832 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct  2 04:53:57 np0005465604 podman[390834]: 2025-10-02 08:53:57.011875161 +0000 UTC m=+0.075904368 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 04:53:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 134 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 104 op/s
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.824 2 DEBUG nova.network.neutron [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.850 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.851 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance network_info: |[{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.852 2 DEBUG oslo_concurrency.lockutils [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.853 2 DEBUG nova.network.neutron [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.859 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start _get_guest_xml network_info=[{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.867 2 WARNING nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.880 2 DEBUG nova.virt.libvirt.host [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.882 2 DEBUG nova.virt.libvirt.host [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.888 2 DEBUG nova.virt.libvirt.host [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.889 2 DEBUG nova.virt.libvirt.host [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.890 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.891 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.892 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.893 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.893 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.894 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.895 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.895 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.896 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.897 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.897 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.898 2 DEBUG nova.virt.hardware [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:53:57 np0005465604 nova_compute[260603]: 2025-10-02 08:53:57.905 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:53:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:53:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:53:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:53:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:53:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:53:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:53:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:53:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021823407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.383 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.413 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.418 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:53:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1682623345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.938 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.940 2 DEBUG nova.virt.libvirt.vif [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzsQ/6b3tjUGybuPyR4SVVOx0MdNuIeiSvtBwT6p2uLvkn4tE1YW+Uq6JJ1YItW22s6E/D+cvCAL9Uj4JCY+wAR12IR6juEk+h0nnTQwb0m9GcGi0Su/6v36I6xXc+YZw==',key_name='tempest-TestShelveInstance-203364505',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:53:52Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.941 2 DEBUG nova.network.os_vif_util [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.942 2 DEBUG nova.network.os_vif_util [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.943 2 DEBUG nova.objects.instance [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'pci_devices' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.964 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  <uuid>29e25f63-82b1-45cf-8916-41d9acc44ac9</uuid>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  <name>instance-0000007c</name>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestShelveInstance-server-199210370</nova:name>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:53:57</nova:creationTime>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <nova:user uuid="bd8daf03c9d144d7a60fe3f81abdfbb4">tempest-TestShelveInstance-783621896-project-member</nova:user>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <nova:project uuid="21220064aba34c77a8af713fad28c08b">tempest-TestShelveInstance-783621896</nova:project>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <nova:port uuid="6c4955ab-011a-4e64-9871-c01b4818d740">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <entry name="serial">29e25f63-82b1-45cf-8916-41d9acc44ac9</entry>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <entry name="uuid">29e25f63-82b1-45cf-8916-41d9acc44ac9</entry>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:ec:f3:67"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <target dev="tap6c4955ab-01"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/console.log" append="off"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:53:58 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:53:58 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:53:58 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:53:58 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.966 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Preparing to wait for external event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.966 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.967 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.967 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.968 2 DEBUG nova.virt.libvirt.vif [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzsQ/6b3tjUGybuPyR4SVVOx0MdNuIeiSvtBwT6p2uLvkn4tE1YW+Uq6JJ1YItW22s6E/D+cvCAL9Uj4JCY+wAR12IR6juEk+h0nnTQwb0m9GcGi0Su/6v36I6xXc+YZw==',key_name='tempest-TestShelveInstance-203364505',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:53:52Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.969 2 DEBUG nova.network.os_vif_util [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.970 2 DEBUG nova.network.os_vif_util [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.970 2 DEBUG os_vif [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.972 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.972 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.976 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c4955ab-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.977 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c4955ab-01, col_values=(('external_ids', {'iface-id': '6c4955ab-011a-4e64-9871-c01b4818d740', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:f3:67', 'vm-uuid': '29e25f63-82b1-45cf-8916-41d9acc44ac9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:58 np0005465604 NetworkManager[45129]: <info>  [1759395238.9794] manager: (tap6c4955ab-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/520)
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:53:58 np0005465604 nova_compute[260603]: 2025-10-02 08:53:58.989 2 INFO os_vif [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01')#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.051 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.051 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.052 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No VIF found with MAC fa:16:3e:ec:f3:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.052 2 INFO nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Using config drive#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.086 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 134 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.692 2 INFO nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating config drive at /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.697 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvn_ke8og execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.846 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvn_ke8og" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.885 2 DEBUG nova.storage.rbd_utils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.889 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.928 2 DEBUG nova.network.neutron [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updated VIF entry in instance network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.929 2 DEBUG nova.network.neutron [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:53:59 np0005465604 nova_compute[260603]: 2025-10-02 08:53:59.949 2 DEBUG oslo_concurrency.lockutils [req-3fd826de-9e2f-4083-af4c-2d02d9727eac req-8b7a1e6f-436a-4210-bcdc-ce8139b84122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.081 2 DEBUG oslo_concurrency.processutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.082 2 INFO nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deleting local config drive /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config because it was imported into RBD.#033[00m
Oct  2 04:54:00 np0005465604 kernel: tap6c4955ab-01: entered promiscuous mode
Oct  2 04:54:00 np0005465604 NetworkManager[45129]: <info>  [1759395240.1356] manager: (tap6c4955ab-01): new Tun device (/org/freedesktop/NetworkManager/Devices/521)
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:00Z|01313|binding|INFO|Claiming lport 6c4955ab-011a-4e64-9871-c01b4818d740 for this chassis.
Oct  2 04:54:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:00Z|01314|binding|INFO|6c4955ab-011a-4e64-9871-c01b4818d740: Claiming fa:16:3e:ec:f3:67 10.100.0.4
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.153 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.155 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 bound to our chassis#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.157 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c4ca2f6-ca60-420d-aded-392a44195bf1#033[00m
Oct  2 04:54:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:00Z|01315|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 ovn-installed in OVS
Oct  2 04:54:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:00Z|01316|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 up in Southbound
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.171 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e657e71f-c279-492f-b718-29b1edf30eb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.171 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c4ca2f6-c1 in ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:54:00 np0005465604 systemd-udevd[391008]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.173 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c4ca2f6-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.174 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6731f28b-3b29-4391-8dbe-e67b59cdc9d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.174 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[05086e11-f4d0-4516-9076-a49c75e76950]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 systemd-machined[214636]: New machine qemu-157-instance-0000007c.
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.192 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b4c5bb-9310-427d-b97c-2ad9127f392c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:00Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:0b:fe 10.100.0.10
Oct  2 04:54:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:00Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:0b:fe 10.100.0.10
Oct  2 04:54:00 np0005465604 NetworkManager[45129]: <info>  [1759395240.1945] device (tap6c4955ab-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:54:00 np0005465604 NetworkManager[45129]: <info>  [1759395240.1956] device (tap6c4955ab-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:54:00 np0005465604 systemd[1]: Started Virtual Machine qemu-157-instance-0000007c.
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.217 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1600d5-47e2-48cf-bd5c-b2393e4b0484]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.247 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d2fb02-11dc-4e9c-b1f2-305076620626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.251 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8991a5-3f2f-4a6f-a9cd-3c3bb14c931d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 NetworkManager[45129]: <info>  [1759395240.2524] manager: (tap5c4ca2f6-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/522)
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.293 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfab5c0-ef9c-4b36-ae5b-45ec353b4133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.298 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d532a3fd-5ebc-4d8b-9b85-cc6cf9792ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 NetworkManager[45129]: <info>  [1759395240.3276] device (tap5c4ca2f6-c0): carrier: link connected
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.334 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e1653457-ef77-4e25-b1d8-9650baf0414e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.358 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fb587f9d-6614-4200-8269-84b443e61e38]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c4ca2f6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:2b:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611893, 'reachable_time': 43974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391041, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.373 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[364e7e6c-15e1-42a7-a178-5049000e4eca]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:2b16'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611893, 'tstamp': 611893}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391042, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.392 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3e037b50-5d24-4436-9810-517012393857]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c4ca2f6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:2b:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611893, 'reachable_time': 43974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391043, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.423 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c941b092-3a26-4806-bbe2-9a89d5d51173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.493 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[322a4831-3c85-494c-80ea-9564b0ec8551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.494 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c4ca2f6-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.495 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.495 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c4ca2f6-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:00 np0005465604 NetworkManager[45129]: <info>  [1759395240.4982] manager: (tap5c4ca2f6-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/523)
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:00 np0005465604 kernel: tap5c4ca2f6-c0: entered promiscuous mode
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.502 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c4ca2f6-c0, col_values=(('external_ids', {'iface-id': '2b6a47ce-d685-4242-a053-f63d07d5d559'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:00 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:00Z|01317|binding|INFO|Releasing lport 2b6a47ce-d685-4242-a053-f63d07d5d559 from this chassis (sb_readonly=0)
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.518 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.519 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7cda5733-dea1-4b98-829d-d3aa5b2ca75b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.520 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-5c4ca2f6-ca60-420d-aded-392a44195bf1
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 5c4ca2f6-ca60-420d-aded-392a44195bf1
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:54:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:00.522 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'env', 'PROCESS_TAG=haproxy-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c4ca2f6-ca60-420d-aded-392a44195bf1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.640 2 DEBUG nova.compute.manager [req-cbcb1686-592a-41d4-b61a-8b6bfaba86f7 req-7b07d386-75af-40be-a851-6d03d8b56f62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.641 2 DEBUG oslo_concurrency.lockutils [req-cbcb1686-592a-41d4-b61a-8b6bfaba86f7 req-7b07d386-75af-40be-a851-6d03d8b56f62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.641 2 DEBUG oslo_concurrency.lockutils [req-cbcb1686-592a-41d4-b61a-8b6bfaba86f7 req-7b07d386-75af-40be-a851-6d03d8b56f62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.642 2 DEBUG oslo_concurrency.lockutils [req-cbcb1686-592a-41d4-b61a-8b6bfaba86f7 req-7b07d386-75af-40be-a851-6d03d8b56f62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.642 2 DEBUG nova.compute.manager [req-cbcb1686-592a-41d4-b61a-8b6bfaba86f7 req-7b07d386-75af-40be-a851-6d03d8b56f62 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Processing event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:54:00 np0005465604 podman[391116]: 2025-10-02 08:54:00.958156303 +0000 UTC m=+0.071927032 container create 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.978 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395240.9774995, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.978 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Started (Lifecycle Event)#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.982 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.985 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.989 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance spawned successfully.#033[00m
Oct  2 04:54:00 np0005465604 nova_compute[260603]: 2025-10-02 08:54:00.990 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:54:01 np0005465604 systemd[1]: Started libpod-conmon-64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54.scope.
Oct  2 04:54:01 np0005465604 podman[391116]: 2025-10-02 08:54:00.921397737 +0000 UTC m=+0.035168586 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.014 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.020 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.039 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.040 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.041 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.041 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.042 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.042 2 DEBUG nova.virt.libvirt.driver [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:54:01 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.054 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.054 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395240.9776886, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.055 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:54:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d580230bf0e4e9fc32b97474e3cee5deeb211aa3deb462ec3ffd85a1c2b12f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:01 np0005465604 podman[391116]: 2025-10-02 08:54:01.081821752 +0000 UTC m=+0.195592491 container init 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:54:01 np0005465604 podman[391116]: 2025-10-02 08:54:01.090169877 +0000 UTC m=+0.203940596 container start 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:54:01 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [NOTICE]   (391135) : New worker (391137) forked
Oct  2 04:54:01 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [NOTICE]   (391135) : Loading success.
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.138 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.144 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395240.9849544, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.144 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.171 2 INFO nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Took 8.67 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.171 2 DEBUG nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.176 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.185 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.238 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.254 2 INFO nova.compute.manager [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Took 9.84 seconds to build instance.#033[00m
Oct  2 04:54:01 np0005465604 nova_compute[260603]: 2025-10-02 08:54:01.272 2 DEBUG oslo_concurrency.lockutils [None req-fd07f6f9-66e5-43e5-99a6-7ecf66fc4b79 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 134 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct  2 04:54:02 np0005465604 nova_compute[260603]: 2025-10-02 08:54:02.761 2 DEBUG nova.compute.manager [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:02 np0005465604 nova_compute[260603]: 2025-10-02 08:54:02.762 2 DEBUG oslo_concurrency.lockutils [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:02 np0005465604 nova_compute[260603]: 2025-10-02 08:54:02.762 2 DEBUG oslo_concurrency.lockutils [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:02 np0005465604 nova_compute[260603]: 2025-10-02 08:54:02.763 2 DEBUG oslo_concurrency.lockutils [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:02 np0005465604 nova_compute[260603]: 2025-10-02 08:54:02.763 2 DEBUG nova.compute.manager [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:54:02 np0005465604 nova_compute[260603]: 2025-10-02 08:54:02.764 2 WARNING nova.compute.manager [req-2ecced72-c33a-4010-974e-26f0172f4493 req-082a9aa4-8d0e-4fad-9139-625734235bbc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:54:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 236 op/s
Oct  2 04:54:03 np0005465604 nova_compute[260603]: 2025-10-02 08:54:03.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:04.199 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:54:04 np0005465604 nova_compute[260603]: 2025-10-02 08:54:04.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:04.203 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:54:04 np0005465604 nova_compute[260603]: 2025-10-02 08:54:04.944 2 DEBUG nova.compute.manager [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:04 np0005465604 nova_compute[260603]: 2025-10-02 08:54:04.945 2 DEBUG nova.compute.manager [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing instance network info cache due to event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:54:04 np0005465604 nova_compute[260603]: 2025-10-02 08:54:04.945 2 DEBUG oslo_concurrency.lockutils [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:54:04 np0005465604 nova_compute[260603]: 2025-10-02 08:54:04.946 2 DEBUG oslo_concurrency.lockutils [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:54:04 np0005465604 nova_compute[260603]: 2025-10-02 08:54:04.946 2 DEBUG nova.network.neutron [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:54:05 np0005465604 nova_compute[260603]: 2025-10-02 08:54:05.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Oct  2 04:54:06 np0005465604 nova_compute[260603]: 2025-10-02 08:54:06.456 2 DEBUG nova.network.neutron [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updated VIF entry in instance network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:54:06 np0005465604 nova_compute[260603]: 2025-10-02 08:54:06.457 2 DEBUG nova.network.neutron [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:54:06 np0005465604 nova_compute[260603]: 2025-10-02 08:54:06.486 2 DEBUG oslo_concurrency.lockutils [req-4bbbad52-501b-41f0-90d0-d9c253bf4343 req-caaa094c-1c4f-47d2-8156-f48925b4cb90 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:54:07 np0005465604 nova_compute[260603]: 2025-10-02 08:54:07.407 2 INFO nova.compute.manager [None req-4a1a1ce2-f670-4033-9885-36ba87b3f749 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Get console output#033[00m
Oct  2 04:54:07 np0005465604 nova_compute[260603]: 2025-10-02 08:54:07.414 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:54:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 164 op/s
Oct  2 04:54:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:08 np0005465604 nova_compute[260603]: 2025-10-02 08:54:08.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct  2 04:54:10 np0005465604 nova_compute[260603]: 2025-10-02 08:54:10.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct  2 04:54:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 170 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 152 op/s
Oct  2 04:54:13 np0005465604 nova_compute[260603]: 2025-10-02 08:54:13.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:14Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:f3:67 10.100.0.4
Oct  2 04:54:14 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:14Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:f3:67 10.100.0.4
Oct  2 04:54:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:14.206 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:15 np0005465604 nova_compute[260603]: 2025-10-02 08:54:15.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 170 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 360 KiB/s wr, 14 op/s
Oct  2 04:54:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 199 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct  2 04:54:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:19 np0005465604 nova_compute[260603]: 2025-10-02 08:54:19.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 04:54:20 np0005465604 nova_compute[260603]: 2025-10-02 08:54:20.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:21 np0005465604 podman[391149]: 2025-10-02 08:54:21.021546401 +0000 UTC m=+0.078620283 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:54:21 np0005465604 podman[391148]: 2025-10-02 08:54:21.053577487 +0000 UTC m=+0.123644401 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:54:21 np0005465604 nova_compute[260603]: 2025-10-02 08:54:21.130 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:21 np0005465604 nova_compute[260603]: 2025-10-02 08:54:21.130 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:21 np0005465604 nova_compute[260603]: 2025-10-02 08:54:21.131 2 INFO nova.compute.manager [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Shelving#033[00m
Oct  2 04:54:21 np0005465604 nova_compute[260603]: 2025-10-02 08:54:21.163 2 DEBUG nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 04:54:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 04:54:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:54:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705302853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:54:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:54:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705302853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:54:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:23 np0005465604 kernel: tap6c4955ab-01 (unregistering): left promiscuous mode
Oct  2 04:54:23 np0005465604 NetworkManager[45129]: <info>  [1759395263.4760] device (tap6c4955ab-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01318|binding|INFO|Releasing lport 6c4955ab-011a-4e64-9871-c01b4818d740 from this chassis (sb_readonly=0)
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01319|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 down in Southbound
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01320|binding|INFO|Removing iface tap6c4955ab-01 ovn-installed in OVS
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.497 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.499 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 unbound from our chassis#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.500 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c4ca2f6-ca60-420d-aded-392a44195bf1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.502 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30352437-283f-4dba-9659-5d94b270839d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.502 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 namespace which is not needed anymore#033[00m
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:23 np0005465604 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct  2 04:54:23 np0005465604 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Consumed 14.436s CPU time.
Oct  2 04:54:23 np0005465604 systemd-machined[214636]: Machine qemu-157-instance-0000007c terminated.
Oct  2 04:54:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Oct  2 04:54:23 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [NOTICE]   (391135) : haproxy version is 2.8.14-c23fe91
Oct  2 04:54:23 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [NOTICE]   (391135) : path to executable is /usr/sbin/haproxy
Oct  2 04:54:23 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [WARNING]  (391135) : Exiting Master process...
Oct  2 04:54:23 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [ALERT]    (391135) : Current worker (391137) exited with code 143 (Terminated)
Oct  2 04:54:23 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[391131]: [WARNING]  (391135) : All workers exited. Exiting... (0)
Oct  2 04:54:23 np0005465604 systemd[1]: libpod-64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54.scope: Deactivated successfully.
Oct  2 04:54:23 np0005465604 podman[391216]: 2025-10-02 08:54:23.661223879 +0000 UTC m=+0.048304832 container died 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:54:23 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54-userdata-shm.mount: Deactivated successfully.
Oct  2 04:54:23 np0005465604 kernel: tap6c4955ab-01: entered promiscuous mode
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01321|binding|INFO|Claiming lport 6c4955ab-011a-4e64-9871-c01b4818d740 for this chassis.
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01322|binding|INFO|6c4955ab-011a-4e64-9871-c01b4818d740: Claiming fa:16:3e:ec:f3:67 10.100.0.4
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:23 np0005465604 kernel: tap6c4955ab-01 (unregistering): left promiscuous mode
Oct  2 04:54:23 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e9d580230bf0e4e9fc32b97474e3cee5deeb211aa3deb462ec3ffd85a1c2b12f-merged.mount: Deactivated successfully.
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.724 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:54:23 np0005465604 podman[391216]: 2025-10-02 08:54:23.734199102 +0000 UTC m=+0.121280075 container cleanup 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01323|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 ovn-installed in OVS
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01324|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 up in Southbound
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01325|binding|INFO|Releasing lport 6c4955ab-011a-4e64-9871-c01b4818d740 from this chassis (sb_readonly=1)
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01326|if_status|INFO|Dropped 4 log messages in last 121 seconds (most recently, 121 seconds ago) due to excessive rate
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01327|if_status|INFO|Not setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 down as sb is readonly
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01328|binding|INFO|Removing iface tap6c4955ab-01 ovn-installed in OVS
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01329|binding|INFO|Releasing lport 6c4955ab-011a-4e64-9871-c01b4818d740 from this chassis (sb_readonly=0)
Oct  2 04:54:23 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:23Z|01330|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 down in Southbound
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.762 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:54:23 np0005465604 systemd[1]: libpod-conmon-64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54.scope: Deactivated successfully.
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:23 np0005465604 podman[391255]: 2025-10-02 08:54:23.829345099 +0000 UTC m=+0.054981004 container remove 64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.841 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[91239f45-e29d-42eb-8a45-1f3335c8e293]: (4, ('Thu Oct  2 08:54:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 (64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54)\n64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54\nThu Oct  2 08:54:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 (64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54)\n64804180508c7f968e2ca28a3e3d83ce08695843fe448df677dbf82cb5485b54\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.843 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[229d2e25-da79-4a0d-b315-6fd298ff8ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.845 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c4ca2f6-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:23 np0005465604 kernel: tap5c4ca2f6-c0: left promiscuous mode
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.872 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[827c7ba4-b455-4710-b4c6-44846e9c999c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.881 2 DEBUG nova.compute.manager [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.883 2 DEBUG oslo_concurrency.lockutils [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.883 2 DEBUG oslo_concurrency.lockutils [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.883 2 DEBUG oslo_concurrency.lockutils [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.884 2 DEBUG nova.compute.manager [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.884 2 WARNING nova.compute.manager [req-feeab2c3-8487-42d6-ae98-f205310a2ef0 req-457eef20-33df-420b-a0f3-b7618bb7dfda 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving.#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.896 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[71af15d8-a5c1-4618-8b24-f47a52fb67a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.897 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c92a63-660c-49a3-897f-8cbbd735ace1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.909 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.910 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.917 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f75c9e2e-11c9-441d-9952-bd000d9ed810]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611884, 'reachable_time': 41284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391273, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:23 np0005465604 systemd[1]: run-netns-ovnmeta\x2d5c4ca2f6\x2dca60\x2d420d\x2daded\x2d392a44195bf1.mount: Deactivated successfully.
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.921 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.921 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e6537c59-4a96-4355-be51-b910336d01f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.922 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 unbound from our chassis#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.924 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c4ca2f6-ca60-420d-aded-392a44195bf1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.925 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e99bcd85-c67f-4fea-95b1-27fe61363bdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.925 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 unbound from our chassis#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.927 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c4ca2f6-ca60-420d-aded-392a44195bf1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:54:23 np0005465604 nova_compute[260603]: 2025-10-02 08:54:23.928 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:54:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:23.928 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86a7cb29-3ba1-4d8a-bd51-d61289e4e42a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.007 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.008 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.020 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.020 2 INFO nova.compute.claims [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.183 2 INFO nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.190 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance destroyed successfully.#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.190 2 DEBUG nova.objects.instance [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'numa_topology' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.208 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.598 2 INFO nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Beginning cold snapshot process#033[00m
Oct  2 04:54:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:54:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469341986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.697 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.705 2 DEBUG nova.compute.provider_tree [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.788 2 DEBUG nova.scheduler.client.report [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.803 2 DEBUG nova.virt.libvirt.imagebackend [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No parent info for 420393e6-d62b-4055-afb9-674967e2c2b0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.814 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.816 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.870 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.872 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.897 2 INFO nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:54:24 np0005465604 nova_compute[260603]: 2025-10-02 08:54:24.917 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.016 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.018 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.018 2 INFO nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Creating image(s)#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.050 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.086 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.122 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.132 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.198 2 DEBUG nova.storage.rbd_utils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] creating snapshot(46bbf4a8538e4f82909a89b0898209ee) on rbd image(29e25f63-82b1-45cf-8916-41d9acc44ac9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.247 2 DEBUG nova.policy [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.257 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.259 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.260 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.260 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.286 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.290 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.562 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.631 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:54:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 200 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.738 2 DEBUG nova.objects.instance [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 1e566f75-d068-4a59-bf53-0a71ad8c5e45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.760 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.761 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Ensure instance console log exists: /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.761 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.762 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.762 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct  2 04:54:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct  2 04:54:25 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.857 2 DEBUG nova.storage.rbd_utils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] cloning vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk@46bbf4a8538e4f82909a89b0898209ee to images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:54:25 np0005465604 nova_compute[260603]: 2025-10-02 08:54:25.972 2 DEBUG nova.storage.rbd_utils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] flattening images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.040 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.040 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.041 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.041 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.042 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.043 2 WARNING nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.044 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.044 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.044 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.045 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.045 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.045 2 WARNING nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.046 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.046 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.046 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.047 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.047 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.047 2 WARNING nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.048 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.048 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.048 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.049 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.049 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.050 2 WARNING nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.050 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.050 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.051 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.051 2 DEBUG oslo_concurrency.lockutils [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.052 2 DEBUG nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.052 2 WARNING nova.compute.manager [req-13b9dd59-a343-4351-acb8-6ff7db80b004 req-35d1027b-ea63-4e79-b3d1-cba0736ddcfd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.358 2 DEBUG nova.storage.rbd_utils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] removing snapshot(46bbf4a8538e4f82909a89b0898209ee) on rbd image(29e25f63-82b1-45cf-8916-41d9acc44ac9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 04:54:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct  2 04:54:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct  2 04:54:26 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct  2 04:54:26 np0005465604 nova_compute[260603]: 2025-10-02 08:54:26.850 2 DEBUG nova.storage.rbd_utils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] creating snapshot(snap) on rbd image(fb02f3f9-de2f-45c4-91bc-9579a3c7316c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 04:54:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 287 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.5 MiB/s wr, 102 op/s
Oct  2 04:54:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct  2 04:54:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct  2 04:54:27 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct  2 04:54:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:54:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:54:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:54:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:54:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:54:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:54:28 np0005465604 podman[391603]: 2025-10-02 08:54:28.007027456 +0000 UTC m=+0.079935205 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:54:28
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.log', 'volumes', '.mgr']
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:54:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:28 np0005465604 podman[391604]: 2025-10-02 08:54:28.040363723 +0000 UTC m=+0.100882729 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:54:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:54:28 np0005465604 nova_compute[260603]: 2025-10-02 08:54:28.640 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Successfully created port: 0114a231-1abb-410d-8d5f-29713ba6ca28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:54:29 np0005465604 nova_compute[260603]: 2025-10-02 08:54:29.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:29 np0005465604 nova_compute[260603]: 2025-10-02 08:54:29.330 2 INFO nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Snapshot image upload complete#033[00m
Oct  2 04:54:29 np0005465604 nova_compute[260603]: 2025-10-02 08:54:29.331 2 DEBUG nova.compute.manager [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:54:29 np0005465604 nova_compute[260603]: 2025-10-02 08:54:29.383 2 INFO nova.compute.manager [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Shelve offloading#033[00m
Oct  2 04:54:29 np0005465604 nova_compute[260603]: 2025-10-02 08:54:29.394 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance destroyed successfully.#033[00m
Oct  2 04:54:29 np0005465604 nova_compute[260603]: 2025-10-02 08:54:29.395 2 DEBUG nova.compute.manager [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:54:29 np0005465604 nova_compute[260603]: 2025-10-02 08:54:29.398 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:54:29 np0005465604 nova_compute[260603]: 2025-10-02 08:54:29.399 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:54:29 np0005465604 nova_compute[260603]: 2025-10-02 08:54:29.399 2 DEBUG nova.network.neutron [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:54:29 np0005465604 nova_compute[260603]: 2025-10-02 08:54:29.552 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:54:29 np0005465604 nova_compute[260603]: 2025-10-02 08:54:29.552 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:54:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 200 op/s
Oct  2 04:54:30 np0005465604 nova_compute[260603]: 2025-10-02 08:54:30.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:30 np0005465604 nova_compute[260603]: 2025-10-02 08:54:30.547 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Successfully updated port: 0114a231-1abb-410d-8d5f-29713ba6ca28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:54:30 np0005465604 nova_compute[260603]: 2025-10-02 08:54:30.565 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:54:30 np0005465604 nova_compute[260603]: 2025-10-02 08:54:30.565 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:54:30 np0005465604 nova_compute[260603]: 2025-10-02 08:54:30.565 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:54:31 np0005465604 nova_compute[260603]: 2025-10-02 08:54:31.553 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:54:31 np0005465604 nova_compute[260603]: 2025-10-02 08:54:31.607 2 DEBUG nova.compute.manager [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-changed-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:31 np0005465604 nova_compute[260603]: 2025-10-02 08:54:31.608 2 DEBUG nova.compute.manager [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Refreshing instance network info cache due to event network-changed-0114a231-1abb-410d-8d5f-29713ba6ca28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:54:31 np0005465604 nova_compute[260603]: 2025-10-02 08:54:31.608 2 DEBUG oslo_concurrency.lockutils [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:54:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 200 op/s
Oct  2 04:54:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct  2 04:54:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct  2 04:54:33 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct  2 04:54:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 10 MiB/s wr, 200 op/s
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.539 2 DEBUG nova.network.neutron [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.541 2 DEBUG nova.network.neutron [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Updating instance_info_cache with network_info: [{"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.568 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.573 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.574 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Instance network_info: |[{"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.575 2 DEBUG oslo_concurrency.lockutils [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.576 2 DEBUG nova.network.neutron [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Refreshing network info cache for port 0114a231-1abb-410d-8d5f-29713ba6ca28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.582 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Start _get_guest_xml network_info=[{"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.589 2 WARNING nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.600 2 DEBUG nova.virt.libvirt.host [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.601 2 DEBUG nova.virt.libvirt.host [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.613 2 DEBUG nova.virt.libvirt.host [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.614 2 DEBUG nova.virt.libvirt.host [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.614 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.615 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.616 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.616 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.617 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.617 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.618 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.618 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.619 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.619 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.619 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.620 2 DEBUG nova.virt.hardware [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:54:34 np0005465604 nova_compute[260603]: 2025-10-02 08:54:34.625 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:34.835 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:34.836 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:34.837 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:54:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914923318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.091 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.115 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.121 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:54:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/205786419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.560 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.561 2 DEBUG nova.virt.libvirt.vif [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:54:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1505974006',display_name='tempest-TestNetworkBasicOps-server-1505974006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1505974006',id=125,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOBsPfGQIKXrU0ixL8zdO1Id1043n5Wya2XViCYfbc8+hdkDDiOW4w1UIZN3cliMo4JpT768RX8ZTyGW3MECALsE1YSIbBikkYwsG7A75q2etUDqr7ZHw9V2TylyqodwPA==',key_name='tempest-TestNetworkBasicOps-1332028269',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-q4dl0p75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:54:24Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=1e566f75-d068-4a59-bf53-0a71ad8c5e45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.562 2 DEBUG nova.network.os_vif_util [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.563 2 DEBUG nova.network.os_vif_util [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.564 2 DEBUG nova.objects.instance [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e566f75-d068-4a59-bf53-0a71ad8c5e45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.588 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  <uuid>1e566f75-d068-4a59-bf53-0a71ad8c5e45</uuid>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  <name>instance-0000007d</name>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-1505974006</nova:name>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:54:34</nova:creationTime>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <nova:port uuid="0114a231-1abb-410d-8d5f-29713ba6ca28">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <entry name="serial">1e566f75-d068-4a59-bf53-0a71ad8c5e45</entry>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <entry name="uuid">1e566f75-d068-4a59-bf53-0a71ad8c5e45</entry>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:33:dd:de"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <target dev="tap0114a231-1a"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/console.log" append="off"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:54:35 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:54:35 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:54:35 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:54:35 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.590 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Preparing to wait for external event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.591 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.591 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.592 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.593 2 DEBUG nova.virt.libvirt.vif [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:54:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1505974006',display_name='tempest-TestNetworkBasicOps-server-1505974006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1505974006',id=125,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOBsPfGQIKXrU0ixL8zdO1Id1043n5Wya2XViCYfbc8+hdkDDiOW4w1UIZN3cliMo4JpT768RX8ZTyGW3MECALsE1YSIbBikkYwsG7A75q2etUDqr7ZHw9V2TylyqodwPA==',key_name='tempest-TestNetworkBasicOps-1332028269',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-q4dl0p75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:54:24Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=1e566f75-d068-4a59-bf53-0a71ad8c5e45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.593 2 DEBUG nova.network.os_vif_util [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.594 2 DEBUG nova.network.os_vif_util [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.595 2 DEBUG os_vif [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0114a231-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.602 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0114a231-1a, col_values=(('external_ids', {'iface-id': '0114a231-1abb-410d-8d5f-29713ba6ca28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:dd:de', 'vm-uuid': '1e566f75-d068-4a59-bf53-0a71ad8c5e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:35 np0005465604 NetworkManager[45129]: <info>  [1759395275.6054] manager: (tap0114a231-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.613 2 INFO os_vif [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a')#033[00m
Oct  2 04:54:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.684 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.685 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.686 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:33:dd:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.687 2 INFO nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Using config drive#033[00m
Oct  2 04:54:35 np0005465604 nova_compute[260603]: 2025-10-02 08:54:35.712 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:36 np0005465604 nova_compute[260603]: 2025-10-02 08:54:36.529 2 INFO nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Creating config drive at /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config#033[00m
Oct  2 04:54:36 np0005465604 nova_compute[260603]: 2025-10-02 08:54:36.538 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5dkdczu5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:36 np0005465604 nova_compute[260603]: 2025-10-02 08:54:36.708 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5dkdczu5" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:36 np0005465604 nova_compute[260603]: 2025-10-02 08:54:36.750 2 DEBUG nova.storage.rbd_utils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:36 np0005465604 nova_compute[260603]: 2025-10-02 08:54:36.755 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:36 np0005465604 nova_compute[260603]: 2025-10-02 08:54:36.974 2 DEBUG oslo_concurrency.processutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config 1e566f75-d068-4a59-bf53-0a71ad8c5e45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:36 np0005465604 nova_compute[260603]: 2025-10-02 08:54:36.976 2 INFO nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Deleting local config drive /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45/disk.config because it was imported into RBD.#033[00m
Oct  2 04:54:37 np0005465604 kernel: tap0114a231-1a: entered promiscuous mode
Oct  2 04:54:37 np0005465604 NetworkManager[45129]: <info>  [1759395277.0582] manager: (tap0114a231-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:37Z|01331|binding|INFO|Claiming lport 0114a231-1abb-410d-8d5f-29713ba6ca28 for this chassis.
Oct  2 04:54:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:37Z|01332|binding|INFO|0114a231-1abb-410d-8d5f-29713ba6ca28: Claiming fa:16:3e:33:dd:de 10.100.0.26
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.081 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:dd:de 10.100.0.26'], port_security=['fa:16:3e:33:dd:de 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '1e566f75-d068-4a59-bf53-0a71ad8c5e45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bb814552-631a-471e-89ab-a7fd4a866ac7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999292f0-e1e9-4e7e-82ce-9e16433e3e2c, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0114a231-1abb-410d-8d5f-29713ba6ca28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.083 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0114a231-1abb-410d-8d5f-29713ba6ca28 in datapath fcd51a14-855e-486f-92b0-d9c9ee06ef45 bound to our chassis#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.085 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fcd51a14-855e-486f-92b0-d9c9ee06ef45#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.105 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f34809e6-e797-4d54-8b65-787f20b21ff9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.106 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfcd51a14-81 in ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:54:37 np0005465604 systemd-udevd[391784]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.113 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfcd51a14-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.113 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ae56a8a2-6d6f-4102-a73e-8379fe809199]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.115 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[236af463-50af-4bd5-b87a-0fe32db74f49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 systemd-machined[214636]: New machine qemu-158-instance-0000007d.
Oct  2 04:54:37 np0005465604 NetworkManager[45129]: <info>  [1759395277.1354] device (tap0114a231-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:54:37 np0005465604 systemd[1]: Started Virtual Machine qemu-158-instance-0000007d.
Oct  2 04:54:37 np0005465604 NetworkManager[45129]: <info>  [1759395277.1376] device (tap0114a231-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.139 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d803898c-8752-4353-b66d-c2163a46ffd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:37Z|01333|binding|INFO|Setting lport 0114a231-1abb-410d-8d5f-29713ba6ca28 ovn-installed in OVS
Oct  2 04:54:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:37Z|01334|binding|INFO|Setting lport 0114a231-1abb-410d-8d5f-29713ba6ca28 up in Southbound
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.152 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance destroyed successfully.#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.157 2 DEBUG nova.objects.instance [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'resources' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.166 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a378cd-7dac-412a-982c-e08a60617f12]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.181 2 DEBUG nova.virt.libvirt.vif [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzsQ/6b3tjUGybuPyR4SVVOx0MdNuIeiSvtBwT6p2uLvkn4tE1YW+Uq6JJ1YItW22s6E/D+cvCAL9Uj4JCY+wAR12IR6juEk+h0nnTQwb0m9GcGi0Su/6v36I6xXc+YZw==',key_name='tempest-TestShelveInstance-203364505',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:54:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member',shelved_at='2025-10-02T08:54:29.331405',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fb02f3f9-de2f-45c4-91bc-9579a3c7316c'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:54:24Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.182 2 DEBUG nova.network.os_vif_util [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.183 2 DEBUG nova.network.os_vif_util [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.184 2 DEBUG os_vif [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c4955ab-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.200 2 INFO os_vif [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01')#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.226 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3875ef-9c41-452b-ae7a-0fc114bc1b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.234 2 DEBUG nova.compute.manager [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.235 2 DEBUG nova.compute.manager [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing instance network info cache due to event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.235 2 DEBUG oslo_concurrency.lockutils [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.237 2 DEBUG oslo_concurrency.lockutils [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.237 2 DEBUG nova.network.neutron [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.244 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7aca103e-3cea-4a69-8f67-b6fe655f9fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 NetworkManager[45129]: <info>  [1759395277.2467] manager: (tapfcd51a14-80): new Veth device (/org/freedesktop/NetworkManager/Devices/526)
Oct  2 04:54:37 np0005465604 systemd-udevd[391787]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.295 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6be222d3-e4af-4c57-baea-7d82e2eeb50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.300 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1a315321-459b-43c0-9fce-831e37c98d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 NetworkManager[45129]: <info>  [1759395277.3309] device (tapfcd51a14-80): carrier: link connected
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.336 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9c0e18-d02d-4f66-b368-ceae2c48f236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.364 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[905a2b9f-2523-453a-85f7-ae2504a18aee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfcd51a14-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 377], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615593, 'reachable_time': 35538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391837, 'error': None, 'target': 'ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.385 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df144f46-ba78-42c6-88b5-1328c3c85535]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:4cb2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 615593, 'tstamp': 615593}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391838, 'error': None, 'target': 'ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.416 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ea3fe8-4951-4e87-b96e-521d203f70da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfcd51a14-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 377], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615593, 'reachable_time': 35538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391839, 'error': None, 'target': 'ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.425 2 DEBUG nova.network.neutron [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Updated VIF entry in instance network info cache for port 0114a231-1abb-410d-8d5f-29713ba6ca28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.426 2 DEBUG nova.network.neutron [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Updating instance_info_cache with network_info: [{"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.453 2 DEBUG oslo_concurrency.lockutils [req-be0949c6-d7ec-4cc6-842d-a4b2f4b7d3c1 req-69cc8c2c-6dc5-490b-beeb-5a45399232d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1e566f75-d068-4a59-bf53-0a71ad8c5e45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.474 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7da2bda1-6343-44e3-9dcf-69d6ec65ca6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.577 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[245ec138-c1ef-4b25-8e4c-5b4165f18593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.579 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcd51a14-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.580 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.580 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfcd51a14-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:37 np0005465604 kernel: tapfcd51a14-80: entered promiscuous mode
Oct  2 04:54:37 np0005465604 NetworkManager[45129]: <info>  [1759395277.5851] manager: (tapfcd51a14-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.588 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfcd51a14-80, col_values=(('external_ids', {'iface-id': 'ed953b14-70f0-43c8-ae8c-e9063bcdd7db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:37 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:37Z|01335|binding|INFO|Releasing lport ed953b14-70f0-43c8-ae8c-e9063bcdd7db from this chassis (sb_readonly=0)
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.624 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fcd51a14-855e-486f-92b0-d9c9ee06ef45.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fcd51a14-855e-486f-92b0-d9c9ee06ef45.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.626 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[15f6414f-7e62-43fd-8c9a-e065afa8aa8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.627 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-fcd51a14-855e-486f-92b0-d9c9ee06ef45
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/fcd51a14-855e-486f-92b0-d9c9ee06ef45.pid.haproxy
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID fcd51a14-855e-486f-92b0-d9c9ee06ef45
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:54:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:37.627 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'env', 'PROCESS_TAG=haproxy-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fcd51a14-855e-486f-92b0-d9c9ee06ef45.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:54:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 325 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 67 op/s
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.676 2 INFO nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deleting instance files /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9_del#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.678 2 INFO nova.virt.libvirt.driver [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deletion of /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9_del complete#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.840 2 INFO nova.scheduler.client.report [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Deleted allocations for instance 29e25f63-82b1-45cf-8916-41d9acc44ac9#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.896 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.898 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:37 np0005465604 nova_compute[260603]: 2025-10-02 08:54:37.982 2 DEBUG oslo_concurrency.processutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:38 np0005465604 podman[391914]: 2025-10-02 08:54:38.021953622 +0000 UTC m=+0.056136590 container create 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 04:54:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:38 np0005465604 systemd[1]: Started libpod-conmon-42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403.scope.
Oct  2 04:54:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:54:38 np0005465604 podman[391914]: 2025-10-02 08:54:37.993679406 +0000 UTC m=+0.027862384 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:54:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19e482ac1bfb83fa0f30e489e044044eba2a82cab9b325cf7843b422f0931f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:38 np0005465604 podman[391914]: 2025-10-02 08:54:38.105133909 +0000 UTC m=+0.139316897 container init 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:54:38 np0005465604 podman[391914]: 2025-10-02 08:54:38.110452748 +0000 UTC m=+0.144635706 container start 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:54:38 np0005465604 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [NOTICE]   (391934) : New worker (391954) forked
Oct  2 04:54:38 np0005465604 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [NOTICE]   (391934) : Loading success.
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.223 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395278.222527, 1e566f75-d068-4a59-bf53-0a71ad8c5e45 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.224 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] VM Started (Lifecycle Event)
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.248 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.251 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395278.2226515, 1e566f75-d068-4a59-bf53-0a71ad8c5e45 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.252 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] VM Paused (Lifecycle Event)
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.275 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.279 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.302 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:54:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:54:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401185990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.442 2 DEBUG oslo_concurrency.processutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.449 2 DEBUG nova.compute.provider_tree [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.470 2 DEBUG nova.scheduler.client.report [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.499 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.563 2 DEBUG oslo_concurrency.lockutils [None req-a15bb34c-7912-4f2d-a507-789b0f455b54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 17.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.730 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395263.7298799, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.731 2 INFO nova.compute.manager [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Stopped (Lifecycle Event)
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.759 2 DEBUG nova.compute.manager [None req-44bec396-a4eb-41aa-a179-00adb4d6af62 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018669595909031713 of space, bias 1.0, pg target 0.5600878772709514 quantized to 32 (current 32)
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014249405808426125 of space, bias 1.0, pg target 0.42748217425278373 quantized to 32 (current 32)
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:54:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.943 2 DEBUG nova.network.neutron [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updated VIF entry in instance network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.944 2 DEBUG nova.network.neutron [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": null, "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap6c4955ab-01", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 04:54:38 np0005465604 nova_compute[260603]: 2025-10-02 08:54:38.970 2 DEBUG oslo_concurrency.lockutils [req-2d765da3-2988-4028-99e0-8467708e818e req-c23be098-15ae-4968-8804-9d5a6a6ca0e7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.346 2 DEBUG nova.compute.manager [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.347 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.348 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.349 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.349 2 DEBUG nova.compute.manager [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Processing event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.350 2 DEBUG nova.compute.manager [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.351 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.351 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.352 2 DEBUG oslo_concurrency.lockutils [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.352 2 DEBUG nova.compute.manager [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] No waiting events found dispatching network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.353 2 WARNING nova.compute.manager [req-caf20fc3-bcd5-4439-8e26-d159c8836b7f req-7b2c4b82-780a-4bec-a8df-aedde4aba224 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received unexpected event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 for instance with vm_state building and task_state spawning.
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.354 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.358 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395279.3581648, 1e566f75-d068-4a59-bf53-0a71ad8c5e45 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.358 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] VM Resumed (Lifecycle Event)
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.361 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.366 2 INFO nova.virt.libvirt.driver [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Instance spawned successfully.
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.367 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.386 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.396 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.402 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.403 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.404 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.405 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.406 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.407 2 DEBUG nova.virt.libvirt.driver [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.455 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.495 2 INFO nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Took 14.48 seconds to spawn the instance on the hypervisor.
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.496 2 DEBUG nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.576 2 INFO nova.compute.manager [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Took 15.61 seconds to build instance.
Oct  2 04:54:39 np0005465604 nova_compute[260603]: 2025-10-02 08:54:39.595 2 DEBUG oslo_concurrency.lockutils [None req-9da7a5ef-4a3a-4cc5-8fbe-0bc603cb4926 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:54:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 298 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 18 KiB/s wr, 24 op/s
Oct  2 04:54:40 np0005465604 nova_compute[260603]: 2025-10-02 08:54:40.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:54:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0e67ec3b-ffdc-46ce-afff-85c14a9cccca does not exist
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:54:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0443375a-8f02-43a1-b6a6-2e154b0d269b does not exist
Oct  2 04:54:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4af6aecd-ed27-4f06-afe8-0d09ae598972 does not exist
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:54:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:54:40 np0005465604 podman[392234]: 2025-10-02 08:54:40.787730617 +0000 UTC m=+0.073411928 container create a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 04:54:40 np0005465604 systemd[1]: Started libpod-conmon-a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20.scope.
Oct  2 04:54:40 np0005465604 podman[392234]: 2025-10-02 08:54:40.758191731 +0000 UTC m=+0.043873132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:54:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:54:40 np0005465604 podman[392234]: 2025-10-02 08:54:40.882333347 +0000 UTC m=+0.168014698 container init a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 04:54:40 np0005465604 podman[392234]: 2025-10-02 08:54:40.888483031 +0000 UTC m=+0.174164382 container start a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 04:54:40 np0005465604 podman[392234]: 2025-10-02 08:54:40.892038695 +0000 UTC m=+0.177720086 container attach a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:54:40 np0005465604 unruffled_goldberg[392250]: 167 167
Oct  2 04:54:40 np0005465604 systemd[1]: libpod-a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20.scope: Deactivated successfully.
Oct  2 04:54:40 np0005465604 podman[392234]: 2025-10-02 08:54:40.899214502 +0000 UTC m=+0.184895853 container died a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:54:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e926b3787a29684bbb8b93386e1aad3a1986634cd30f435aab4fec66881ba274-merged.mount: Deactivated successfully.
Oct  2 04:54:40 np0005465604 podman[392234]: 2025-10-02 08:54:40.956403065 +0000 UTC m=+0.242084416 container remove a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:54:40 np0005465604 systemd[1]: libpod-conmon-a3c426ce8b1dc2edc0fbe07f5d2e20a47af9c084f15305ac83180a81350e0a20.scope: Deactivated successfully.
Oct  2 04:54:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:54:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:54:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:54:41 np0005465604 podman[392274]: 2025-10-02 08:54:41.248593028 +0000 UTC m=+0.081366261 container create 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 04:54:41 np0005465604 systemd[1]: Started libpod-conmon-0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756.scope.
Oct  2 04:54:41 np0005465604 podman[392274]: 2025-10-02 08:54:41.208115055 +0000 UTC m=+0.040888368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:54:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:54:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:41 np0005465604 podman[392274]: 2025-10-02 08:54:41.347719801 +0000 UTC m=+0.180493114 container init 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 04:54:41 np0005465604 podman[392274]: 2025-10-02 08:54:41.355322422 +0000 UTC m=+0.188095675 container start 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:54:41 np0005465604 podman[392274]: 2025-10-02 08:54:41.3587192 +0000 UTC m=+0.191492463 container attach 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:54:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 298 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 18 KiB/s wr, 24 op/s
Oct  2 04:54:42 np0005465604 nova_compute[260603]: 2025-10-02 08:54:42.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:42 np0005465604 suspicious_golick[392290]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:54:42 np0005465604 suspicious_golick[392290]: --> relative data size: 1.0
Oct  2 04:54:42 np0005465604 suspicious_golick[392290]: --> All data devices are unavailable
Oct  2 04:54:42 np0005465604 systemd[1]: libpod-0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756.scope: Deactivated successfully.
Oct  2 04:54:42 np0005465604 podman[392274]: 2025-10-02 08:54:42.509199174 +0000 UTC m=+1.341972437 container died 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 04:54:42 np0005465604 systemd[1]: libpod-0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756.scope: Consumed 1.070s CPU time.
Oct  2 04:54:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-22d6a37f2648b61bb18f4abece52060f78ae71539cb16b9cff1987b0f7aee270-merged.mount: Deactivated successfully.
Oct  2 04:54:42 np0005465604 podman[392274]: 2025-10-02 08:54:42.582483847 +0000 UTC m=+1.415257070 container remove 0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_golick, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:54:42 np0005465604 systemd[1]: libpod-conmon-0bc1e1ebf2c67d2571ed999a37a0b4aa60d621c384fad9fd5d29fe922a8fd756.scope: Deactivated successfully.
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.044572) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283044604, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 965, "num_deletes": 251, "total_data_size": 1272138, "memory_usage": 1294584, "flush_reason": "Manual Compaction"}
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283052214, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 812145, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47642, "largest_seqno": 48606, "table_properties": {"data_size": 808200, "index_size": 1597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10571, "raw_average_key_size": 20, "raw_value_size": 799624, "raw_average_value_size": 1586, "num_data_blocks": 71, "num_entries": 504, "num_filter_entries": 504, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395203, "oldest_key_time": 1759395203, "file_creation_time": 1759395283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 7706 microseconds, and 3624 cpu microseconds.
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.052273) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 812145 bytes OK
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.052298) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.053802) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.053828) EVENT_LOG_v1 {"time_micros": 1759395283053819, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.053851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1267484, prev total WAL file size 1267484, number of live WAL files 2.
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.054605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373532' seq:72057594037927935, type:22 .. '6D6772737461740032303033' seq:0, type:0; will stop at (end)
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(793KB)], [110(10MB)]
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283054627, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11396015, "oldest_snapshot_seqno": -1}
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6857 keys, 8538723 bytes, temperature: kUnknown
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283102954, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8538723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8494333, "index_size": 26166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178739, "raw_average_key_size": 26, "raw_value_size": 8373118, "raw_average_value_size": 1221, "num_data_blocks": 1022, "num_entries": 6857, "num_filter_entries": 6857, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.103313) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8538723 bytes
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.106132) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.3 rd, 176.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(24.5) write-amplify(10.5) OK, records in: 7341, records dropped: 484 output_compression: NoCompression
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.106165) EVENT_LOG_v1 {"time_micros": 1759395283106151, "job": 66, "event": "compaction_finished", "compaction_time_micros": 48439, "compaction_time_cpu_micros": 20038, "output_level": 6, "num_output_files": 1, "total_output_size": 8538723, "num_input_records": 7341, "num_output_records": 6857, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283106602, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395283110398, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.054540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.110468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.110474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.110478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.110481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:54:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:54:43.110483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:54:43 np0005465604 podman[392471]: 2025-10-02 08:54:43.362804626 +0000 UTC m=+0.053491797 container create 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:54:43 np0005465604 systemd[1]: Started libpod-conmon-7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216.scope.
Oct  2 04:54:43 np0005465604 podman[392471]: 2025-10-02 08:54:43.340618153 +0000 UTC m=+0.031305364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:54:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:54:43 np0005465604 podman[392471]: 2025-10-02 08:54:43.474352033 +0000 UTC m=+0.165039294 container init 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 04:54:43 np0005465604 podman[392471]: 2025-10-02 08:54:43.481618353 +0000 UTC m=+0.172305524 container start 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 04:54:43 np0005465604 vigilant_heisenberg[392486]: 167 167
Oct  2 04:54:43 np0005465604 systemd[1]: libpod-7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216.scope: Deactivated successfully.
Oct  2 04:54:43 np0005465604 podman[392471]: 2025-10-02 08:54:43.489185422 +0000 UTC m=+0.179872673 container attach 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:54:43 np0005465604 podman[392471]: 2025-10-02 08:54:43.490209175 +0000 UTC m=+0.180896376 container died 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:54:43 np0005465604 nova_compute[260603]: 2025-10-02 08:54:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:54:43 np0005465604 nova_compute[260603]: 2025-10-02 08:54:43.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:54:43 np0005465604 nova_compute[260603]: 2025-10-02 08:54:43.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:54:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ba94bb24485ccd622723c680953348650b65bb3baa90b2becaf4f8257c15c213-merged.mount: Deactivated successfully.
Oct  2 04:54:43 np0005465604 podman[392471]: 2025-10-02 08:54:43.543202985 +0000 UTC m=+0.233890146 container remove 7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct  2 04:54:43 np0005465604 systemd[1]: libpod-conmon-7705d39b8370726eb871e6f6c8820107d21df3d6711d20234e13ce143fe7b216.scope: Deactivated successfully.
Oct  2 04:54:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 16 KiB/s wr, 114 op/s
Oct  2 04:54:43 np0005465604 nova_compute[260603]: 2025-10-02 08:54:43.757 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:54:43 np0005465604 nova_compute[260603]: 2025-10-02 08:54:43.757 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:54:43 np0005465604 nova_compute[260603]: 2025-10-02 08:54:43.758 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:54:43 np0005465604 nova_compute[260603]: 2025-10-02 08:54:43.758 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ce9a5c17-646f-4ba2-a974-90e4b864872e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:43 np0005465604 podman[392510]: 2025-10-02 08:54:43.771358398 +0000 UTC m=+0.079920254 container create 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:54:43 np0005465604 systemd[1]: Started libpod-conmon-16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956.scope.
Oct  2 04:54:43 np0005465604 podman[392510]: 2025-10-02 08:54:43.738662261 +0000 UTC m=+0.047224137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:54:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:54:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466649ac5d278b686e2b9722dd49fd945aa21a61825d602023116c68970cac18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466649ac5d278b686e2b9722dd49fd945aa21a61825d602023116c68970cac18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466649ac5d278b686e2b9722dd49fd945aa21a61825d602023116c68970cac18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/466649ac5d278b686e2b9722dd49fd945aa21a61825d602023116c68970cac18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:43 np0005465604 podman[392510]: 2025-10-02 08:54:43.896352231 +0000 UTC m=+0.204914097 container init 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:54:43 np0005465604 podman[392510]: 2025-10-02 08:54:43.910097547 +0000 UTC m=+0.218659423 container start 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 04:54:43 np0005465604 podman[392510]: 2025-10-02 08:54:43.914197937 +0000 UTC m=+0.222759893 container attach 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.179 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.180 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.180 2 INFO nova.compute.manager [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Unshelving#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.306 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.307 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.313 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'pci_requests' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.326 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'numa_topology' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.370 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.370 2 INFO nova.compute.claims [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.501 2 DEBUG nova.scheduler.client.report [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.533 2 DEBUG nova.scheduler.client.report [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.534 2 DEBUG nova.compute.provider_tree [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.557 2 DEBUG nova.scheduler.client.report [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.595 2 DEBUG nova.scheduler.client.report [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 04:54:44 np0005465604 nova_compute[260603]: 2025-10-02 08:54:44.686 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]: {
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:    "0": [
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:        {
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "devices": [
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "/dev/loop3"
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            ],
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_name": "ceph_lv0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_size": "21470642176",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "name": "ceph_lv0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "tags": {
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.cluster_name": "ceph",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.crush_device_class": "",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.encrypted": "0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.osd_id": "0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.type": "block",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.vdo": "0"
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            },
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "type": "block",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "vg_name": "ceph_vg0"
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:        }
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:    ],
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:    "1": [
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:        {
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "devices": [
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "/dev/loop4"
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            ],
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_name": "ceph_lv1",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_size": "21470642176",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "name": "ceph_lv1",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "tags": {
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.cluster_name": "ceph",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.crush_device_class": "",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.encrypted": "0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.osd_id": "1",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.type": "block",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.vdo": "0"
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            },
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "type": "block",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "vg_name": "ceph_vg1"
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:        }
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:    ],
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:    "2": [
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:        {
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "devices": [
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "/dev/loop5"
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            ],
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_name": "ceph_lv2",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_size": "21470642176",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "name": "ceph_lv2",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "tags": {
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.cluster_name": "ceph",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.crush_device_class": "",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.encrypted": "0",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.osd_id": "2",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.type": "block",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:                "ceph.vdo": "0"
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            },
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "type": "block",
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:            "vg_name": "ceph_vg2"
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:        }
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]:    ]
Oct  2 04:54:44 np0005465604 reverent_proskuriakova[392527]: }
Oct  2 04:54:44 np0005465604 systemd[1]: libpod-16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956.scope: Deactivated successfully.
Oct  2 04:54:44 np0005465604 podman[392510]: 2025-10-02 08:54:44.757844104 +0000 UTC m=+1.066406000 container died 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 04:54:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-466649ac5d278b686e2b9722dd49fd945aa21a61825d602023116c68970cac18-merged.mount: Deactivated successfully.
Oct  2 04:54:44 np0005465604 podman[392510]: 2025-10-02 08:54:44.835775144 +0000 UTC m=+1.144336990 container remove 16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 04:54:44 np0005465604 systemd[1]: libpod-conmon-16b792f64936c12e3fb64e0a58407b59b425b044a6ca20fb23050378790f0956.scope: Deactivated successfully.
Oct  2 04:54:45 np0005465604 nova_compute[260603]: 2025-10-02 08:54:45.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:54:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/345732408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:54:45 np0005465604 nova_compute[260603]: 2025-10-02 08:54:45.191 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:45 np0005465604 nova_compute[260603]: 2025-10-02 08:54:45.203 2 DEBUG nova.compute.provider_tree [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:54:45 np0005465604 nova_compute[260603]: 2025-10-02 08:54:45.227 2 DEBUG nova.scheduler.client.report [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:54:45 np0005465604 nova_compute[260603]: 2025-10-02 08:54:45.258 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:45 np0005465604 nova_compute[260603]: 2025-10-02 08:54:45.498 2 INFO nova.network.neutron [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating port 6c4955ab-011a-4e64-9871-c01b4818d740 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 04:54:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct  2 04:54:45 np0005465604 podman[392712]: 2025-10-02 08:54:45.695612844 +0000 UTC m=+0.043333605 container create 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:54:45 np0005465604 systemd[1]: Started libpod-conmon-7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6.scope.
Oct  2 04:54:45 np0005465604 nova_compute[260603]: 2025-10-02 08:54:45.736 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:54:45 np0005465604 nova_compute[260603]: 2025-10-02 08:54:45.754 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:54:45 np0005465604 nova_compute[260603]: 2025-10-02 08:54:45.755 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:54:45 np0005465604 nova_compute[260603]: 2025-10-02 08:54:45.755 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:54:45 np0005465604 nova_compute[260603]: 2025-10-02 08:54:45.756 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:54:45 np0005465604 podman[392712]: 2025-10-02 08:54:45.675405153 +0000 UTC m=+0.023125914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:54:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:54:45 np0005465604 podman[392712]: 2025-10-02 08:54:45.789907683 +0000 UTC m=+0.137628454 container init 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 04:54:45 np0005465604 podman[392712]: 2025-10-02 08:54:45.796007287 +0000 UTC m=+0.143728008 container start 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 04:54:45 np0005465604 podman[392712]: 2025-10-02 08:54:45.79928252 +0000 UTC m=+0.147003291 container attach 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:54:45 np0005465604 stupefied_swartz[392729]: 167 167
Oct  2 04:54:45 np0005465604 systemd[1]: libpod-7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6.scope: Deactivated successfully.
Oct  2 04:54:45 np0005465604 podman[392712]: 2025-10-02 08:54:45.801419389 +0000 UTC m=+0.149140110 container died 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 04:54:45 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8b73da2309655c36b3cc5791c153228a05faf88e28c7ef51943e3584f76ea1a4-merged.mount: Deactivated successfully.
Oct  2 04:54:45 np0005465604 podman[392712]: 2025-10-02 08:54:45.837655457 +0000 UTC m=+0.185376188 container remove 7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_swartz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 04:54:45 np0005465604 systemd[1]: libpod-conmon-7cc76fcb0ee4ca6b2db6a216236e266e85dcbc74f7acfdb25d6b4522fd9b89d6.scope: Deactivated successfully.
Oct  2 04:54:46 np0005465604 podman[392754]: 2025-10-02 08:54:46.054903675 +0000 UTC m=+0.071055804 container create 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:54:46 np0005465604 systemd[1]: Started libpod-conmon-3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1.scope.
Oct  2 04:54:46 np0005465604 podman[392754]: 2025-10-02 08:54:46.021888018 +0000 UTC m=+0.038040207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:54:46 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:54:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf8bbd6eeac31f1ec93cb9914ec2692dd898b44bbc388cc438de719de3d1d1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf8bbd6eeac31f1ec93cb9914ec2692dd898b44bbc388cc438de719de3d1d1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf8bbd6eeac31f1ec93cb9914ec2692dd898b44bbc388cc438de719de3d1d1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:46 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf8bbd6eeac31f1ec93cb9914ec2692dd898b44bbc388cc438de719de3d1d1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:46 np0005465604 podman[392754]: 2025-10-02 08:54:46.180449905 +0000 UTC m=+0.196602094 container init 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:54:46 np0005465604 podman[392754]: 2025-10-02 08:54:46.194382226 +0000 UTC m=+0.210534355 container start 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 04:54:46 np0005465604 podman[392754]: 2025-10-02 08:54:46.198472507 +0000 UTC m=+0.214624696 container attach 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 04:54:46 np0005465604 nova_compute[260603]: 2025-10-02 08:54:46.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:54:46 np0005465604 nova_compute[260603]: 2025-10-02 08:54:46.732 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:54:46 np0005465604 nova_compute[260603]: 2025-10-02 08:54:46.733 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:54:46 np0005465604 nova_compute[260603]: 2025-10-02 08:54:46.734 2 DEBUG nova.network.neutron [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:54:46 np0005465604 nova_compute[260603]: 2025-10-02 08:54:46.844 2 DEBUG nova.compute.manager [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:46 np0005465604 nova_compute[260603]: 2025-10-02 08:54:46.846 2 DEBUG nova.compute.manager [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing instance network info cache due to event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:54:46 np0005465604 nova_compute[260603]: 2025-10-02 08:54:46.846 2 DEBUG oslo_concurrency.lockutils [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:54:47 np0005465604 nova_compute[260603]: 2025-10-02 08:54:47.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:47 np0005465604 goofy_keller[392771]: {
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "osd_id": 2,
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "type": "bluestore"
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:    },
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "osd_id": 1,
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "type": "bluestore"
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:    },
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "osd_id": 0,
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:        "type": "bluestore"
Oct  2 04:54:47 np0005465604 goofy_keller[392771]:    }
Oct  2 04:54:47 np0005465604 goofy_keller[392771]: }
Oct  2 04:54:47 np0005465604 podman[392754]: 2025-10-02 08:54:47.275464911 +0000 UTC m=+1.291617000 container died 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:54:47 np0005465604 systemd[1]: libpod-3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1.scope: Deactivated successfully.
Oct  2 04:54:47 np0005465604 systemd[1]: libpod-3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1.scope: Consumed 1.083s CPU time.
Oct  2 04:54:47 np0005465604 systemd[1]: var-lib-containers-storage-overlay-abf8bbd6eeac31f1ec93cb9914ec2692dd898b44bbc388cc438de719de3d1d1a-merged.mount: Deactivated successfully.
Oct  2 04:54:47 np0005465604 podman[392754]: 2025-10-02 08:54:47.358880035 +0000 UTC m=+1.375032134 container remove 3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_keller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 04:54:47 np0005465604 systemd[1]: libpod-conmon-3c74f2868c838347a468874c76426018a1f26d2a0173fc52e7efa76a95d2c8d1.scope: Deactivated successfully.
Oct  2 04:54:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:54:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:54:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:54:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:54:47 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9dc656b6-7f9c-4bf4-958b-0043e896a332 does not exist
Oct  2 04:54:47 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2b2ba02a-b577-46f8-abfe-91425eea59bb does not exist
Oct  2 04:54:47 np0005465604 nova_compute[260603]: 2025-10-02 08:54:47.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:54:47 np0005465604 nova_compute[260603]: 2025-10-02 08:54:47.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:47 np0005465604 nova_compute[260603]: 2025-10-02 08:54:47.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:47 np0005465604 nova_compute[260603]: 2025-10-02 08:54:47.551 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:47 np0005465604 nova_compute[260603]: 2025-10-02 08:54:47.552 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:54:47 np0005465604 nova_compute[260603]: 2025-10-02 08:54:47.552 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct  2 04:54:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:54:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1361077569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.005 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.145 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.146 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.155 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.155 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:54:48 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:54:48 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.519 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.521 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3362MB free_disk=59.92179870605469GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.616 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ce9a5c17-646f-4ba2-a974-90e4b864872e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.616 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 1e566f75-d068-4a59-bf53-0a71ad8c5e45 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.617 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 29e25f63-82b1-45cf-8916-41d9acc44ac9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.617 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.617 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.734 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:48 np0005465604 nova_compute[260603]: 2025-10-02 08:54:48.978 2 DEBUG nova.network.neutron [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.011 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.015 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.016 2 INFO nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating image(s)#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.049 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.055 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'trusted_certs' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.058 2 DEBUG oslo_concurrency.lockutils [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.059 2 DEBUG nova.network.neutron [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.106 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.138 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.143 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "01a08b635df1a702c6eaac83ad3113a321b2a912" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.144 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "01a08b635df1a702c6eaac83ad3113a321b2a912" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:54:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015154286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.319 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.327 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.349 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.373 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.373 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.437 2 DEBUG nova.virt.libvirt.imagebackend [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Image locations are: [{'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.495 2 DEBUG nova.virt.libvirt.imagebackend [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Selected location: {'url': 'rbd://a52e644f-f702-594c-a648-813e3e0df2b1/images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.496 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] cloning images/fb02f3f9-de2f-45c4-91bc-9579a3c7316c@snap to None/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.625 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "01a08b635df1a702c6eaac83ad3113a321b2a912" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.812 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'migration_context' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:49 np0005465604 nova_compute[260603]: 2025-10-02 08:54:49.921 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] flattening vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.363 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Image rbd:vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.364 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.364 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Ensure instance console log exists: /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.365 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.365 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.366 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.370 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start _get_guest_xml network_info=[{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:54:21Z,direct_url=<?>,disk_format='raw',id=fb02f3f9-de2f-45c4-91bc-9579a3c7316c,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-199210370-shelved',owner='21220064aba34c77a8af713fad28c08b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:54:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.375 2 WARNING nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.382 2 DEBUG nova.virt.libvirt.host [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.383 2 DEBUG nova.virt.libvirt.host [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.386 2 DEBUG nova.virt.libvirt.host [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.387 2 DEBUG nova.virt.libvirt.host [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.387 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.388 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T08:54:21Z,direct_url=<?>,disk_format='raw',id=fb02f3f9-de2f-45c4-91bc-9579a3c7316c,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-199210370-shelved',owner='21220064aba34c77a8af713fad28c08b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T08:54:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.389 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.389 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.389 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.390 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.390 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.391 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.392 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.392 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.393 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.394 2 DEBUG nova.virt.hardware [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.394 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.423 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.917 2 DEBUG nova.network.neutron [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updated VIF entry in instance network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.918 2 DEBUG nova.network.neutron [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:54:50 np0005465604 nova_compute[260603]: 2025-10-02 08:54:50.933 2 DEBUG oslo_concurrency.lockutils [req-36d1fed2-3d81-4fde-962d-55bc831ecde7 req-b14e2463-3654-4631-bb9f-b23fabfd1d86 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:54:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:54:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788032437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.011 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.036 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.040 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.369 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.371 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:54:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:54:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113836995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.556 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.558 2 DEBUG nova.virt.libvirt.vif [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='fb02f3f9-de2f-45c4-91bc-9579a3c7316c',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-203364505',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:54:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member',shelved_at='2025-10-02T08:54:29.331405',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fb02f3f9-de2f-45c4-91bc-9579a3c7316c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:54:44Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.559 2 DEBUG nova.network.os_vif_util [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.560 2 DEBUG nova.network.os_vif_util [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.562 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'pci_devices' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.587 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  <uuid>29e25f63-82b1-45cf-8916-41d9acc44ac9</uuid>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  <name>instance-0000007c</name>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestShelveInstance-server-199210370</nova:name>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:54:50</nova:creationTime>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <nova:user uuid="bd8daf03c9d144d7a60fe3f81abdfbb4">tempest-TestShelveInstance-783621896-project-member</nova:user>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <nova:project uuid="21220064aba34c77a8af713fad28c08b">tempest-TestShelveInstance-783621896</nova:project>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="fb02f3f9-de2f-45c4-91bc-9579a3c7316c"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <nova:port uuid="6c4955ab-011a-4e64-9871-c01b4818d740">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <entry name="serial">29e25f63-82b1-45cf-8916-41d9acc44ac9</entry>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <entry name="uuid">29e25f63-82b1-45cf-8916-41d9acc44ac9</entry>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:ec:f3:67"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <target dev="tap6c4955ab-01"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/console.log" append="off"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <input type="keyboard" bus="usb"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:54:51 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:54:51 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:54:51 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:54:51 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.589 2 DEBUG nova.compute.manager [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Preparing to wait for external event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.589 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.590 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.590 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.590 2 DEBUG nova.virt.libvirt.vif [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='fb02f3f9-de2f-45c4-91bc-9579a3c7316c',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-203364505',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:54:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member',shelved_at='2025-10-02T08:54:29.331405',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fb02f3f9-de2f-45c4-91bc-9579a3c7316c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:54:44Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.591 2 DEBUG nova.network.os_vif_util [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.591 2 DEBUG nova.network.os_vif_util [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.592 2 DEBUG os_vif [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c4955ab-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c4955ab-01, col_values=(('external_ids', {'iface-id': '6c4955ab-011a-4e64-9871-c01b4818d740', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:f3:67', 'vm-uuid': '29e25f63-82b1-45cf-8916-41d9acc44ac9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:51 np0005465604 NetworkManager[45129]: <info>  [1759395291.6018] manager: (tap6c4955ab-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.610 2 INFO os_vif [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01')#033[00m
Oct  2 04:54:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 246 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 93 op/s
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.668 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.669 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.669 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] No VIF found with MAC fa:16:3e:ec:f3:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.670 2 INFO nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Using config drive#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.695 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.721 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'ec2_ids' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:51 np0005465604 nova_compute[260603]: 2025-10-02 08:54:51.780 2 DEBUG nova.objects.instance [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'keypairs' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:54:52 np0005465604 podman[393209]: 2025-10-02 08:54:52.052396276 +0000 UTC m=+0.109199413 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:54:52 np0005465604 podman[393208]: 2025-10-02 08:54:52.071281925 +0000 UTC m=+0.128852896 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:54:52 np0005465604 nova_compute[260603]: 2025-10-02 08:54:52.250 2 INFO nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Creating config drive at /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config#033[00m
Oct  2 04:54:52 np0005465604 nova_compute[260603]: 2025-10-02 08:54:52.260 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpef0_4ucw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:52 np0005465604 nova_compute[260603]: 2025-10-02 08:54:52.443 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpef0_4ucw" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:52 np0005465604 nova_compute[260603]: 2025-10-02 08:54:52.482 2 DEBUG nova.storage.rbd_utils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] rbd image 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:54:52 np0005465604 nova_compute[260603]: 2025-10-02 08:54:52.487 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:54:52 np0005465604 nova_compute[260603]: 2025-10-02 08:54:52.535 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:54:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:52Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:dd:de 10.100.0.26
Oct  2 04:54:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:52Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:dd:de 10.100.0.26
Oct  2 04:54:52 np0005465604 nova_compute[260603]: 2025-10-02 08:54:52.712 2 DEBUG oslo_concurrency.processutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config 29e25f63-82b1-45cf-8916-41d9acc44ac9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:54:52 np0005465604 nova_compute[260603]: 2025-10-02 08:54:52.713 2 INFO nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deleting local config drive /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9/disk.config because it was imported into RBD.#033[00m
Oct  2 04:54:52 np0005465604 kernel: tap6c4955ab-01: entered promiscuous mode
Oct  2 04:54:52 np0005465604 NetworkManager[45129]: <info>  [1759395292.7743] manager: (tap6c4955ab-01): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Oct  2 04:54:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:52Z|01336|binding|INFO|Claiming lport 6c4955ab-011a-4e64-9871-c01b4818d740 for this chassis.
Oct  2 04:54:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:52Z|01337|binding|INFO|6c4955ab-011a-4e64-9871-c01b4818d740: Claiming fa:16:3e:ec:f3:67 10.100.0.4
Oct  2 04:54:52 np0005465604 nova_compute[260603]: 2025-10-02 08:54:52.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.786 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.787 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 bound to our chassis#033[00m
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.789 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c4ca2f6-ca60-420d-aded-392a44195bf1#033[00m
Oct  2 04:54:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:52Z|01338|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 ovn-installed in OVS
Oct  2 04:54:52 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:52Z|01339|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 up in Southbound
Oct  2 04:54:52 np0005465604 nova_compute[260603]: 2025-10-02 08:54:52.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:52 np0005465604 nova_compute[260603]: 2025-10-02 08:54:52.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.806 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5543afc9-e562-4de1-86ff-7161ffcfcdd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.807 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c4ca2f6-c1 in ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.811 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c4ca2f6-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.812 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9abfc91f-fe65-492c-aa08-48e8cef15a36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.814 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a71c7a-6652-4be5-a6f8-11e1df43a9e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:52 np0005465604 systemd-udevd[393306]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.826 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1717ae-3961-422b-9688-82a48d928a7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:52 np0005465604 systemd-machined[214636]: New machine qemu-159-instance-0000007c.
Oct  2 04:54:52 np0005465604 NetworkManager[45129]: <info>  [1759395292.8411] device (tap6c4955ab-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:54:52 np0005465604 NetworkManager[45129]: <info>  [1759395292.8427] device (tap6c4955ab-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:54:52 np0005465604 systemd[1]: Started Virtual Machine qemu-159-instance-0000007c.
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.858 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f706e4f8-58ab-4520-9528-8009e26d6468]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.892 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[46f39986-9b36-4339-b1ca-b0e0c9d9141a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:52 np0005465604 systemd-udevd[393311]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:54:52 np0005465604 NetworkManager[45129]: <info>  [1759395292.9056] manager: (tap5c4ca2f6-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/530)
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.901 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[60f16b78-6e0a-4220-8491-13d1a79e89ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.936 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb47d600-6196-4da6-bb8b-1fbde9f473e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.939 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cb72fb93-c5a6-4728-8ae4-a9d8098c7780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:52 np0005465604 NetworkManager[45129]: <info>  [1759395292.9729] device (tap5c4ca2f6-c0): carrier: link connected
Oct  2 04:54:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:52.979 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c9925ce2-8e77-4c1a-9fb0-5decb2eb09c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.001 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6efae54a-42b8-43cd-a20f-ba2c26558f6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c4ca2f6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:2b:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617157, 'reachable_time': 33543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393339, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.019 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a7884f34-ce6a-424b-9443-fcbe7a80dc5a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:2b16'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617157, 'tstamp': 617157}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393340, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.034 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[482c369b-6e44-47d2-aea3-084220701c70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c4ca2f6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:2b:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617157, 'reachable_time': 33543, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393341, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.074 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd81147-8ac4-49a4-8c69-25919856b567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.152 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c47a3f1b-ea6e-4e2b-9243-201e0a574a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.154 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c4ca2f6-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.155 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.155 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c4ca2f6-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:53 np0005465604 NetworkManager[45129]: <info>  [1759395293.1586] manager: (tap5c4ca2f6-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/531)
Oct  2 04:54:53 np0005465604 kernel: tap5c4ca2f6-c0: entered promiscuous mode
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.162 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c4ca2f6-c0, col_values=(('external_ids', {'iface-id': '2b6a47ce-d685-4242-a053-f63d07d5d559'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:54:53 np0005465604 ovn_controller[152344]: 2025-10-02T08:54:53Z|01340|binding|INFO|Releasing lport 2b6a47ce-d685-4242-a053-f63d07d5d559 from this chassis (sb_readonly=0)
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.185 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.186 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b43abed4-907b-4eb9-998a-288e98cb412d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.189 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-5c4ca2f6-ca60-420d-aded-392a44195bf1
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/5c4ca2f6-ca60-420d-aded-392a44195bf1.pid.haproxy
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 5c4ca2f6-ca60-420d-aded-392a44195bf1
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:54:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:54:53.190 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'env', 'PROCESS_TAG=haproxy-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c4ca2f6-ca60-420d-aded-392a44195bf1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:54:53 np0005465604 podman[393415]: 2025-10-02 08:54:53.624341532 +0000 UTC m=+0.076113715 container create 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:54:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 358 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 6.0 MiB/s wr, 240 op/s
Oct  2 04:54:53 np0005465604 podman[393415]: 2025-10-02 08:54:53.591549962 +0000 UTC m=+0.043322145 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:54:53 np0005465604 systemd[1]: Started libpod-conmon-910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293.scope.
Oct  2 04:54:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:54:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5854661b36c6863990b03e1ef3fd12b61c77135bce82b5bcc5ab4d576bfc16fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:54:53 np0005465604 podman[393415]: 2025-10-02 08:54:53.72364807 +0000 UTC m=+0.175420263 container init 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:54:53 np0005465604 podman[393415]: 2025-10-02 08:54:53.729503345 +0000 UTC m=+0.181275498 container start 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.757 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395293.7565522, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.758 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Started (Lifecycle Event)#033[00m
Oct  2 04:54:53 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [NOTICE]   (393434) : New worker (393436) forked
Oct  2 04:54:53 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [NOTICE]   (393434) : Loading success.
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.797 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.801 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395293.7576892, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.801 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.840 2 DEBUG nova.compute.manager [req-b368290f-8328-4639-b905-fe68502b87bc req-0f354b1d-c594-41b1-bd00-d898fdf03852 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.841 2 DEBUG oslo_concurrency.lockutils [req-b368290f-8328-4639-b905-fe68502b87bc req-0f354b1d-c594-41b1-bd00-d898fdf03852 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.841 2 DEBUG oslo_concurrency.lockutils [req-b368290f-8328-4639-b905-fe68502b87bc req-0f354b1d-c594-41b1-bd00-d898fdf03852 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.842 2 DEBUG oslo_concurrency.lockutils [req-b368290f-8328-4639-b905-fe68502b87bc req-0f354b1d-c594-41b1-bd00-d898fdf03852 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.842 2 DEBUG nova.compute.manager [req-b368290f-8328-4639-b905-fe68502b87bc req-0f354b1d-c594-41b1-bd00-d898fdf03852 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Processing event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.844 2 DEBUG nova.compute.manager [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.845 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.849 2 DEBUG nova.virt.libvirt.driver [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.853 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395293.8488777, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.854 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.858 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance spawned successfully.#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.875 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.879 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:54:53 np0005465604 nova_compute[260603]: 2025-10-02 08:54:53.900 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:54:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct  2 04:54:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct  2 04:54:54 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct  2 04:54:55 np0005465604 nova_compute[260603]: 2025-10-02 08:54:55.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:55 np0005465604 nova_compute[260603]: 2025-10-02 08:54:55.175 2 DEBUG nova.compute.manager [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:54:55 np0005465604 nova_compute[260603]: 2025-10-02 08:54:55.272 2 DEBUG oslo_concurrency.lockutils [None req-1de2a414-46b2-4390-9810-95ca9a58c086 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 358 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 7.2 MiB/s wr, 175 op/s
Oct  2 04:54:55 np0005465604 nova_compute[260603]: 2025-10-02 08:54:55.973 2 DEBUG nova.compute.manager [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:54:55 np0005465604 nova_compute[260603]: 2025-10-02 08:54:55.973 2 DEBUG oslo_concurrency.lockutils [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:54:55 np0005465604 nova_compute[260603]: 2025-10-02 08:54:55.974 2 DEBUG oslo_concurrency.lockutils [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:54:55 np0005465604 nova_compute[260603]: 2025-10-02 08:54:55.975 2 DEBUG oslo_concurrency.lockutils [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:54:55 np0005465604 nova_compute[260603]: 2025-10-02 08:54:55.976 2 DEBUG nova.compute.manager [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:54:55 np0005465604 nova_compute[260603]: 2025-10-02 08:54:55.976 2 WARNING nova.compute.manager [req-64afbaaa-99ad-45af-9471-0e7aa9b58c0c req-e788365a-6f4a-4287-9e77-a215025619ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:54:56 np0005465604 nova_compute[260603]: 2025-10-02 08:54:56.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:54:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 307 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 7.2 MiB/s wr, 238 op/s
Oct  2 04:54:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:54:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:54:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:54:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:54:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:54:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:54:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:54:58 np0005465604 nova_compute[260603]: 2025-10-02 08:54:58.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:54:59 np0005465604 podman[393446]: 2025-10-02 08:54:59.057380067 +0000 UTC m=+0.104772702 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:54:59 np0005465604 podman[393445]: 2025-10-02 08:54:59.06188723 +0000 UTC m=+0.112481607 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 04:54:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.3 MiB/s wr, 287 op/s
Oct  2 04:55:00 np0005465604 nova_compute[260603]: 2025-10-02 08:55:00.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:01 np0005465604 nova_compute[260603]: 2025-10-02 08:55:01.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 7.2 MiB/s wr, 287 op/s
Oct  2 04:55:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct  2 04:55:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct  2 04:55:03 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct  2 04:55:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 30 KiB/s wr, 127 op/s
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.610 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.611 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.612 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.613 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.613 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.616 2 INFO nova.compute.manager [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Terminating instance#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.618 2 DEBUG nova.compute.manager [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:55:04 np0005465604 kernel: tap0114a231-1a (unregistering): left promiscuous mode
Oct  2 04:55:04 np0005465604 NetworkManager[45129]: <info>  [1759395304.6713] device (tap0114a231-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:55:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:04Z|01341|binding|INFO|Releasing lport 0114a231-1abb-410d-8d5f-29713ba6ca28 from this chassis (sb_readonly=0)
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:04Z|01342|binding|INFO|Setting lport 0114a231-1abb-410d-8d5f-29713ba6ca28 down in Southbound
Oct  2 04:55:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:04Z|01343|binding|INFO|Removing iface tap0114a231-1a ovn-installed in OVS
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:04.695 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:dd:de 10.100.0.26'], port_security=['fa:16:3e:33:dd:de 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '1e566f75-d068-4a59-bf53-0a71ad8c5e45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bb814552-631a-471e-89ab-a7fd4a866ac7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=999292f0-e1e9-4e7e-82ce-9e16433e3e2c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0114a231-1abb-410d-8d5f-29713ba6ca28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:55:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:04.697 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0114a231-1abb-410d-8d5f-29713ba6ca28 in datapath fcd51a14-855e-486f-92b0-d9c9ee06ef45 unbound from our chassis#033[00m
Oct  2 04:55:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:04.699 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fcd51a14-855e-486f-92b0-d9c9ee06ef45, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:55:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:04.700 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a433e1b0-b857-42ac-b7f3-acf43c43a31c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:04.701 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45 namespace which is not needed anymore#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:04 np0005465604 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct  2 04:55:04 np0005465604 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Consumed 13.141s CPU time.
Oct  2 04:55:04 np0005465604 systemd-machined[214636]: Machine qemu-158-instance-0000007d terminated.
Oct  2 04:55:04 np0005465604 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [NOTICE]   (391934) : haproxy version is 2.8.14-c23fe91
Oct  2 04:55:04 np0005465604 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [NOTICE]   (391934) : path to executable is /usr/sbin/haproxy
Oct  2 04:55:04 np0005465604 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [WARNING]  (391934) : Exiting Master process...
Oct  2 04:55:04 np0005465604 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [WARNING]  (391934) : Exiting Master process...
Oct  2 04:55:04 np0005465604 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [ALERT]    (391934) : Current worker (391954) exited with code 143 (Terminated)
Oct  2 04:55:04 np0005465604 neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45[391929]: [WARNING]  (391934) : All workers exited. Exiting... (0)
Oct  2 04:55:04 np0005465604 systemd[1]: libpod-42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403.scope: Deactivated successfully.
Oct  2 04:55:04 np0005465604 podman[393511]: 2025-10-02 08:55:04.868758457 +0000 UTC m=+0.053127245 container died 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.872 2 INFO nova.virt.libvirt.driver [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Instance destroyed successfully.#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.873 2 DEBUG nova.objects.instance [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 1e566f75-d068-4a59-bf53-0a71ad8c5e45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.889 2 DEBUG nova.virt.libvirt.vif [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:54:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1505974006',display_name='tempest-TestNetworkBasicOps-server-1505974006',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1505974006',id=125,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOBsPfGQIKXrU0ixL8zdO1Id1043n5Wya2XViCYfbc8+hdkDDiOW4w1UIZN3cliMo4JpT768RX8ZTyGW3MECALsE1YSIbBikkYwsG7A75q2etUDqr7ZHw9V2TylyqodwPA==',key_name='tempest-TestNetworkBasicOps-1332028269',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:54:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-q4dl0p75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:54:39Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=1e566f75-d068-4a59-bf53-0a71ad8c5e45,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.890 2 DEBUG nova.network.os_vif_util [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0114a231-1abb-410d-8d5f-29713ba6ca28", "address": "fa:16:3e:33:dd:de", "network": {"id": "fcd51a14-855e-486f-92b0-d9c9ee06ef45", "bridge": "br-int", "label": "tempest-network-smoke--1339197261", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0114a231-1a", "ovs_interfaceid": "0114a231-1abb-410d-8d5f-29713ba6ca28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.890 2 DEBUG nova.network.os_vif_util [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.891 2 DEBUG os_vif [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0114a231-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.899 2 INFO os_vif [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:dd:de,bridge_name='br-int',has_traffic_filtering=True,id=0114a231-1abb-410d-8d5f-29713ba6ca28,network=Network(fcd51a14-855e-486f-92b0-d9c9ee06ef45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0114a231-1a')#033[00m
Oct  2 04:55:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403-userdata-shm.mount: Deactivated successfully.
Oct  2 04:55:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d19e482ac1bfb83fa0f30e489e044044eba2a82cab9b325cf7843b422f0931f0-merged.mount: Deactivated successfully.
Oct  2 04:55:04 np0005465604 podman[393511]: 2025-10-02 08:55:04.919005131 +0000 UTC m=+0.103373909 container cleanup 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 04:55:04 np0005465604 systemd[1]: libpod-conmon-42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403.scope: Deactivated successfully.
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.950 2 DEBUG nova.compute.manager [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-unplugged-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.950 2 DEBUG oslo_concurrency.lockutils [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.951 2 DEBUG oslo_concurrency.lockutils [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.951 2 DEBUG oslo_concurrency.lockutils [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.951 2 DEBUG nova.compute.manager [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] No waiting events found dispatching network-vif-unplugged-0114a231-1abb-410d-8d5f-29713ba6ca28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:55:04 np0005465604 nova_compute[260603]: 2025-10-02 08:55:04.951 2 DEBUG nova.compute.manager [req-4f2fe99c-ab5a-4747-9e01-0ca025249102 req-a61e5641-efa5-407e-b8ca-967d14fe4ecb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-unplugged-0114a231-1abb-410d-8d5f-29713ba6ca28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:55:05 np0005465604 podman[393571]: 2025-10-02 08:55:05.008113105 +0000 UTC m=+0.053805846 container remove 42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 04:55:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.015 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc06c8d-7eae-4b23-a3b2-a7698553da40]: (4, ('Thu Oct  2 08:55:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45 (42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403)\n42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403\nThu Oct  2 08:55:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45 (42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403)\n42dffc7c0b75d4d78e58695b6ae97961cca9601c30e44437af7f165198794403\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.018 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e362efd-adc3-4db1-8457-a99afe449827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.019 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcd51a14-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:55:05 np0005465604 nova_compute[260603]: 2025-10-02 08:55:05.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:05 np0005465604 kernel: tapfcd51a14-80: left promiscuous mode
Oct  2 04:55:05 np0005465604 nova_compute[260603]: 2025-10-02 08:55:05.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.046 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4234c23e-86c1-49e0-9e1b-3b87ef238e88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.077 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df7eaccb-451c-484e-ab80-0541bffdc64a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.078 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[404d8e38-002e-49ab-9f47-ce5b16f50361]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.101 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1006602f-e9cf-4191-96b0-6e52ef0c903d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 615581, 'reachable_time': 41535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393586, 'error': None, 'target': 'ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:05 np0005465604 systemd[1]: run-netns-ovnmeta\x2dfcd51a14\x2d855e\x2d486f\x2d92b0\x2dd9c9ee06ef45.mount: Deactivated successfully.
Oct  2 04:55:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.105 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fcd51a14-855e-486f-92b0-d9c9ee06ef45 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:55:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:05.105 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a7551fbb-b1cc-4899-97fe-48eb899608db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:05 np0005465604 nova_compute[260603]: 2025-10-02 08:55:05.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:05 np0005465604 nova_compute[260603]: 2025-10-02 08:55:05.234 2 INFO nova.virt.libvirt.driver [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Deleting instance files /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45_del#033[00m
Oct  2 04:55:05 np0005465604 nova_compute[260603]: 2025-10-02 08:55:05.235 2 INFO nova.virt.libvirt.driver [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Deletion of /var/lib/nova/instances/1e566f75-d068-4a59-bf53-0a71ad8c5e45_del complete#033[00m
Oct  2 04:55:05 np0005465604 nova_compute[260603]: 2025-10-02 08:55:05.295 2 INFO nova.compute.manager [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:55:05 np0005465604 nova_compute[260603]: 2025-10-02 08:55:05.295 2 DEBUG oslo.service.loopingcall [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:55:05 np0005465604 nova_compute[260603]: 2025-10-02 08:55:05.296 2 DEBUG nova.compute.manager [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:55:05 np0005465604 nova_compute[260603]: 2025-10-02 08:55:05.296 2 DEBUG nova.network.neutron [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:55:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 279 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 27 KiB/s wr, 113 op/s
Oct  2 04:55:06 np0005465604 nova_compute[260603]: 2025-10-02 08:55:06.542 2 DEBUG nova.network.neutron [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:55:06 np0005465604 nova_compute[260603]: 2025-10-02 08:55:06.560 2 INFO nova.compute.manager [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Took 1.26 seconds to deallocate network for instance.#033[00m
Oct  2 04:55:06 np0005465604 nova_compute[260603]: 2025-10-02 08:55:06.603 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:06 np0005465604 nova_compute[260603]: 2025-10-02 08:55:06.604 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:06 np0005465604 nova_compute[260603]: 2025-10-02 08:55:06.682 2 DEBUG oslo_concurrency.processutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.051 2 DEBUG nova.compute.manager [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.052 2 DEBUG oslo_concurrency.lockutils [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.052 2 DEBUG oslo_concurrency.lockutils [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.053 2 DEBUG oslo_concurrency.lockutils [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.053 2 DEBUG nova.compute.manager [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] No waiting events found dispatching network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.054 2 WARNING nova.compute.manager [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received unexpected event network-vif-plugged-0114a231-1abb-410d-8d5f-29713ba6ca28 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.054 2 DEBUG nova.compute.manager [req-cdd0f0a9-3b81-4d61-afb3-eaeeb313b067 req-a8812904-dd50-4d38-a4bc-01826f279f6d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Received event network-vif-deleted-0114a231-1abb-410d-8d5f-29713ba6ca28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:55:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3590211806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.122 2 DEBUG oslo_concurrency.processutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.129 2 DEBUG nova.compute.provider_tree [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.151 2 DEBUG nova.scheduler.client.report [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.176 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.214 2 INFO nova.scheduler.client.report [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 1e566f75-d068-4a59-bf53-0a71ad8c5e45#033[00m
Oct  2 04:55:07 np0005465604 nova_compute[260603]: 2025-10-02 08:55:07.305 2 DEBUG oslo_concurrency.lockutils [None req-cec6e80a-1f3e-4453-9eb1-203540e312b8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "1e566f75-d068-4a59-bf53-0a71ad8c5e45" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:07 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:07Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:f3:67 10.100.0.4
Oct  2 04:55:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 227 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 41 KiB/s wr, 105 op/s
Oct  2 04:55:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 200 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 33 KiB/s wr, 89 op/s
Oct  2 04:55:09 np0005465604 nova_compute[260603]: 2025-10-02 08:55:09.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:55:10 np0005465604 nova_compute[260603]: 2025-10-02 08:55:10.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:55:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:10Z|01344|binding|INFO|Releasing lport 2b6a47ce-d685-4242-a053-f63d07d5d559 from this chassis (sb_readonly=0)
Oct  2 04:55:10 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:10Z|01345|binding|INFO|Releasing lport f80df61f-a234-49f9-816c-2f176ad94b02 from this chassis (sb_readonly=0)
Oct  2 04:55:10 np0005465604 nova_compute[260603]: 2025-10-02 08:55:10.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:55:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 200 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 669 KiB/s rd, 33 KiB/s wr, 89 op/s
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.715 2 DEBUG nova.compute.manager [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.715 2 DEBUG nova.compute.manager [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing instance network info cache due to event network-changed-4fd5381b-e8ba-485f-9cb6-692a37b716a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.716 2 DEBUG oslo_concurrency.lockutils [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.716 2 DEBUG oslo_concurrency.lockutils [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.716 2 DEBUG nova.network.neutron [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Refreshing network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.791 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.792 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.792 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.793 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.793 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.795 2 INFO nova.compute.manager [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Terminating instance
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.797 2 DEBUG nova.compute.manager [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 04:55:12 np0005465604 kernel: tap4fd5381b-e8 (unregistering): left promiscuous mode
Oct  2 04:55:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.847 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:55:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.848 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 04:55:12 np0005465604 NetworkManager[45129]: <info>  [1759395312.8898] device (tap4fd5381b-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:55:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:12Z|01346|binding|INFO|Releasing lport 4fd5381b-e8ba-485f-9cb6-692a37b716a1 from this chassis (sb_readonly=0)
Oct  2 04:55:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:12Z|01347|binding|INFO|Setting lport 4fd5381b-e8ba-485f-9cb6-692a37b716a1 down in Southbound
Oct  2 04:55:12 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:12Z|01348|binding|INFO|Removing iface tap4fd5381b-e8 ovn-installed in OVS
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:55:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.901 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:0b:fe 10.100.0.10'], port_security=['fa:16:3e:04:0b:fe 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ce9a5c17-646f-4ba2-a974-90e4b864872e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fe3b6c1-e179-4b79-8e83-9d4b983838b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=300ebc04-b3b0-45d9-94e7-4b6ae68b1331, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4fd5381b-e8ba-485f-9cb6-692a37b716a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:55:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.902 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4fd5381b-e8ba-485f-9cb6-692a37b716a1 in datapath 7511e8c2-7c26-4eea-b465-32e904aba1a9 unbound from our chassis
Oct  2 04:55:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.904 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7511e8c2-7c26-4eea-b465-32e904aba1a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 04:55:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.905 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc87fb6-03b3-4096-8c32-22ce9810a658]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:55:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:12.905 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9 namespace which is not needed anymore
Oct  2 04:55:12 np0005465604 nova_compute[260603]: 2025-10-02 08:55:12.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:55:12 np0005465604 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct  2 04:55:12 np0005465604 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Consumed 15.993s CPU time.
Oct  2 04:55:12 np0005465604 systemd-machined[214636]: Machine qemu-156-instance-0000007b terminated.
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.039 2 INFO nova.virt.libvirt.driver [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance destroyed successfully.
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.040 2 DEBUG nova.objects.instance [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid ce9a5c17-646f-4ba2-a974-90e4b864872e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 04:55:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.052 2 DEBUG nova.virt.libvirt.vif [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:53:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1285499332',display_name='tempest-TestNetworkBasicOps-server-1285499332',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1285499332',id=123,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAz8IMZ7Jf5vn2RyFjWWITiqepr2aLtS8oy68xeil523gnhXxTAuBsHLCjnOZ53PJL0p7gfBo3vI6+ggEPUbO2Sg7r3zAGgPHkKwwrkmc/GeJgKH1QAb2g1ODGvS4w3VLw==',key_name='tempest-TestNetworkBasicOps-2122334356',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:53:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5j8y63ir',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:53:49Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=ce9a5c17-646f-4ba2-a974-90e4b864872e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.053 2 DEBUG nova.network.os_vif_util [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.055 2 DEBUG nova.network.os_vif_util [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 04:55:13 np0005465604 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [NOTICE]   (390549) : haproxy version is 2.8.14-c23fe91
Oct  2 04:55:13 np0005465604 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [NOTICE]   (390549) : path to executable is /usr/sbin/haproxy
Oct  2 04:55:13 np0005465604 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [WARNING]  (390549) : Exiting Master process...
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.056 2 DEBUG os_vif [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 04:55:13 np0005465604 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [ALERT]    (390549) : Current worker (390551) exited with code 143 (Terminated)
Oct  2 04:55:13 np0005465604 neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9[390545]: [WARNING]  (390549) : All workers exited. Exiting... (0)
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.060 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd5381b-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:55:13 np0005465604 systemd[1]: libpod-bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a.scope: Deactivated successfully.
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 04:55:13 np0005465604 podman[393634]: 2025-10-02 08:55:13.068941234 +0000 UTC m=+0.057465733 container died bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.067 2 INFO os_vif [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:0b:fe,bridge_name='br-int',has_traffic_filtering=True,id=4fd5381b-e8ba-485f-9cb6-692a37b716a1,network=Network(7511e8c2-7c26-4eea-b465-32e904aba1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fd5381b-e8')
Oct  2 04:55:13 np0005465604 systemd[1]: var-lib-containers-storage-overlay-841e1ce90c14f958dd89444a0f9f00fd4fb528b405b275066734c073b28f4568-merged.mount: Deactivated successfully.
Oct  2 04:55:13 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a-userdata-shm.mount: Deactivated successfully.
Oct  2 04:55:13 np0005465604 podman[393634]: 2025-10-02 08:55:13.103641523 +0000 UTC m=+0.092165952 container cleanup bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:55:13 np0005465604 systemd[1]: libpod-conmon-bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a.scope: Deactivated successfully.
Oct  2 04:55:13 np0005465604 podman[393689]: 2025-10-02 08:55:13.184988342 +0000 UTC m=+0.048416306 container remove bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:55:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.193 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[49162d11-84cc-4951-ac85-4c967289f6c5]: (4, ('Thu Oct  2 08:55:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9 (bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a)\nbd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a\nThu Oct  2 08:55:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9 (bd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a)\nbd0a9544c26885f1dbb58c99019a08dcea8eaf918e1e594d87c31856c0dd060a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:55:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.196 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae4b5ad-71f2-4443-b7e5-aa463d4ddcfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:55:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.197 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7511e8c2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:55:13 np0005465604 kernel: tap7511e8c2-70: left promiscuous mode
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:55:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.237 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c2275ece-f5f1-4544-b3bf-f34be25cfc1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:55:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.267 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[675827a4-12e4-4a88-8364-d6de7e6def18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:55:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.269 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[634ffde8-2ac0-4415-a52b-f29b905bd2a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:55:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.290 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6de0d24-26cd-4459-8bb6-91ffae66b4dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610608, 'reachable_time': 43695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393706, 'error': None, 'target': 'ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 04:55:13 np0005465604 systemd[1]: run-netns-ovnmeta\x2d7511e8c2\x2d7c26\x2d4eea\x2db465\x2d32e904aba1a9.mount: Deactivated successfully.
Oct  2 04:55:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.293 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7511e8c2-7c26-4eea-b465-32e904aba1a9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:55:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:13.293 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[30bf4fd1-0ccc-4fe7-9671-7f3f829054ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.496 2 INFO nova.virt.libvirt.driver [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Deleting instance files /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e_del#033[00m
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.498 2 INFO nova.virt.libvirt.driver [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Deletion of /var/lib/nova/instances/ce9a5c17-646f-4ba2-a974-90e4b864872e_del complete#033[00m
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.582 2 INFO nova.compute.manager [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.583 2 DEBUG oslo.service.loopingcall [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.583 2 DEBUG nova.compute.manager [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:55:13 np0005465604 nova_compute[260603]: 2025-10-02 08:55:13.584 2 DEBUG nova.network.neutron [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:55:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 189 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 641 KiB/s rd, 41 KiB/s wr, 96 op/s
Oct  2 04:55:14 np0005465604 nova_compute[260603]: 2025-10-02 08:55:14.613 2 DEBUG nova.compute.manager [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-unplugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:14 np0005465604 nova_compute[260603]: 2025-10-02 08:55:14.614 2 DEBUG oslo_concurrency.lockutils [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:14 np0005465604 nova_compute[260603]: 2025-10-02 08:55:14.615 2 DEBUG oslo_concurrency.lockutils [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:14 np0005465604 nova_compute[260603]: 2025-10-02 08:55:14.615 2 DEBUG oslo_concurrency.lockutils [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:14 np0005465604 nova_compute[260603]: 2025-10-02 08:55:14.616 2 DEBUG nova.compute.manager [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] No waiting events found dispatching network-vif-unplugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:55:14 np0005465604 nova_compute[260603]: 2025-10-02 08:55:14.616 2 DEBUG nova.compute.manager [req-1fdd247b-2498-455d-8082-3d1a19c9aa49 req-b1a9d915-750d-413b-b39e-a3fa6f70c4b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-unplugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:55:14 np0005465604 nova_compute[260603]: 2025-10-02 08:55:14.867 2 DEBUG nova.compute.manager [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:14 np0005465604 nova_compute[260603]: 2025-10-02 08:55:14.868 2 DEBUG nova.compute.manager [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing instance network info cache due to event network-changed-6c4955ab-011a-4e64-9871-c01b4818d740. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:55:14 np0005465604 nova_compute[260603]: 2025-10-02 08:55:14.868 2 DEBUG oslo_concurrency.lockutils [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:55:14 np0005465604 nova_compute[260603]: 2025-10-02 08:55:14.869 2 DEBUG oslo_concurrency.lockutils [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:55:14 np0005465604 nova_compute[260603]: 2025-10-02 08:55:14.869 2 DEBUG nova.network.neutron [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Refreshing network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.021 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.022 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.022 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.023 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.023 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.025 2 INFO nova.compute.manager [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Terminating instance#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.027 2 DEBUG nova.compute.manager [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:55:15 np0005465604 kernel: tap6c4955ab-01 (unregistering): left promiscuous mode
Oct  2 04:55:15 np0005465604 NetworkManager[45129]: <info>  [1759395315.0824] device (tap6c4955ab-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:15Z|01349|binding|INFO|Releasing lport 6c4955ab-011a-4e64-9871-c01b4818d740 from this chassis (sb_readonly=0)
Oct  2 04:55:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:15Z|01350|binding|INFO|Setting lport 6c4955ab-011a-4e64-9871-c01b4818d740 down in Southbound
Oct  2 04:55:15 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:15Z|01351|binding|INFO|Removing iface tap6c4955ab-01 ovn-installed in OVS
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.150 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:f3:67 10.100.0.4'], port_security=['fa:16:3e:ec:f3:67 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29e25f63-82b1-45cf-8916-41d9acc44ac9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21220064aba34c77a8af713fad28c08b', 'neutron:revision_number': '11', 'neutron:security_group_ids': '19fc7080-3cba-4840-a004-532eaa7c989b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee760329-6e77-4a17-b501-5dd001a2d022, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6c4955ab-011a-4e64-9871-c01b4818d740) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.152 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6c4955ab-011a-4e64-9871-c01b4818d740 in datapath 5c4ca2f6-ca60-420d-aded-392a44195bf1 unbound from our chassis#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.154 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c4ca2f6-ca60-420d-aded-392a44195bf1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.157 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[99bb3090-4a84-4ffd-a563-a2a0d63bcdca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.157 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 namespace which is not needed anymore#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:15 np0005465604 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct  2 04:55:15 np0005465604 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007c.scope: Consumed 14.362s CPU time.
Oct  2 04:55:15 np0005465604 systemd-machined[214636]: Machine qemu-159-instance-0000007c terminated.
Oct  2 04:55:15 np0005465604 NetworkManager[45129]: <info>  [1759395315.2530] manager: (tap6c4955ab-01): new Tun device (/org/freedesktop/NetworkManager/Devices/532)
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.272 2 INFO nova.virt.libvirt.driver [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Instance destroyed successfully.#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.273 2 DEBUG nova.objects.instance [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lazy-loading 'resources' on Instance uuid 29e25f63-82b1-45cf-8916-41d9acc44ac9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.289 2 DEBUG nova.virt.libvirt.vif [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T08:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-199210370',display_name='tempest-TestShelveInstance-server-199210370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-199210370',id=124,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzsQ/6b3tjUGybuPyR4SVVOx0MdNuIeiSvtBwT6p2uLvkn4tE1YW+Uq6JJ1YItW22s6E/D+cvCAL9Uj4JCY+wAR12IR6juEk+h0nnTQwb0m9GcGi0Su/6v36I6xXc+YZw==',key_name='tempest-TestShelveInstance-203364505',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:54:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='21220064aba34c77a8af713fad28c08b',ramdisk_id='',reservation_id='r-2iy01e2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-783621896',owner_user_name='tempest-TestShelveInstance-783621896-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:54:55Z,user_data=None,user_id='bd8daf03c9d144d7a60fe3f81abdfbb4',uuid=29e25f63-82b1-45cf-8916-41d9acc44ac9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.290 2 DEBUG nova.network.os_vif_util [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converting VIF {"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.291 2 DEBUG nova.network.os_vif_util [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.292 2 DEBUG os_vif [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.295 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c4955ab-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.303 2 INFO os_vif [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:f3:67,bridge_name='br-int',has_traffic_filtering=True,id=6c4955ab-011a-4e64-9871-c01b4818d740,network=Network(5c4ca2f6-ca60-420d-aded-392a44195bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c4955ab-01')#033[00m
Oct  2 04:55:15 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [NOTICE]   (393434) : haproxy version is 2.8.14-c23fe91
Oct  2 04:55:15 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [NOTICE]   (393434) : path to executable is /usr/sbin/haproxy
Oct  2 04:55:15 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [WARNING]  (393434) : Exiting Master process...
Oct  2 04:55:15 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [ALERT]    (393434) : Current worker (393436) exited with code 143 (Terminated)
Oct  2 04:55:15 np0005465604 neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1[393430]: [WARNING]  (393434) : All workers exited. Exiting... (0)
Oct  2 04:55:15 np0005465604 systemd[1]: libpod-910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293.scope: Deactivated successfully.
Oct  2 04:55:15 np0005465604 conmon[393430]: conmon 910f5d907573da932248 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293.scope/container/memory.events
Oct  2 04:55:15 np0005465604 podman[393741]: 2025-10-02 08:55:15.381661276 +0000 UTC m=+0.060848371 container died 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:55:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5854661b36c6863990b03e1ef3fd12b61c77135bce82b5bcc5ab4d576bfc16fd-merged.mount: Deactivated successfully.
Oct  2 04:55:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293-userdata-shm.mount: Deactivated successfully.
Oct  2 04:55:15 np0005465604 podman[393741]: 2025-10-02 08:55:15.42344246 +0000 UTC m=+0.102629515 container cleanup 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 04:55:15 np0005465604 systemd[1]: libpod-conmon-910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293.scope: Deactivated successfully.
Oct  2 04:55:15 np0005465604 podman[393784]: 2025-10-02 08:55:15.514182117 +0000 UTC m=+0.058160605 container remove 910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.522 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[07d6dd22-60d5-4293-944d-e716e04a785b]: (4, ('Thu Oct  2 08:55:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 (910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293)\n910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293\nThu Oct  2 08:55:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 (910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293)\n910f5d907573da9322483d7bec2745f1762debcde4a20736bcb8baf212bfc293\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.524 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86852060-489a-49a9-89ad-219d24994888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.525 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c4ca2f6-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:15 np0005465604 kernel: tap5c4ca2f6-c0: left promiscuous mode
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.550 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7559249a-350d-4044-898a-85326fecb459]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.573 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[97b232c4-6738-416f-9076-1e81d5d34c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.574 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8fa53bf-d420-4b34-9844-9e18505a56e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.591 2 DEBUG nova.network.neutron [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.593 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cae825a5-080d-44d8-86ed-0c2c312a7d36]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617148, 'reachable_time': 43694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393804, 'error': None, 'target': 'ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.595 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c4ca2f6-ca60-420d-aded-392a44195bf1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:55:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:15.595 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0a386ac7-29a3-4e71-9ec9-7d70129e73ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:15 np0005465604 systemd[1]: run-netns-ovnmeta\x2d5c4ca2f6\x2dca60\x2d420d\x2daded\x2d392a44195bf1.mount: Deactivated successfully.
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.610 2 INFO nova.compute.manager [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Took 2.03 seconds to deallocate network for instance.#033[00m
Oct  2 04:55:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 189 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 28 KiB/s wr, 84 op/s
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.659 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.660 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.732 2 INFO nova.virt.libvirt.driver [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deleting instance files /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9_del#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.733 2 INFO nova.virt.libvirt.driver [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deletion of /var/lib/nova/instances/29e25f63-82b1-45cf-8916-41d9acc44ac9_del complete#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.739 2 DEBUG oslo_concurrency.processutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.808 2 INFO nova.compute.manager [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.809 2 DEBUG oslo.service.loopingcall [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.810 2 DEBUG nova.compute.manager [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.811 2 DEBUG nova.network.neutron [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.851 2 DEBUG nova.network.neutron [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updated VIF entry in instance network info cache for port 4fd5381b-e8ba-485f-9cb6-692a37b716a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.853 2 DEBUG nova.network.neutron [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Updating instance_info_cache with network_info: [{"id": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "address": "fa:16:3e:04:0b:fe", "network": {"id": "7511e8c2-7c26-4eea-b465-32e904aba1a9", "bridge": "br-int", "label": "tempest-network-smoke--375539865", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fd5381b-e8", "ovs_interfaceid": "4fd5381b-e8ba-485f-9cb6-692a37b716a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:55:15 np0005465604 nova_compute[260603]: 2025-10-02 08:55:15.883 2 DEBUG oslo_concurrency.lockutils [req-50e53a70-849c-4674-88c2-2deeaed08063 req-5cffba2d-da15-4c13-9a13-60528ad12122 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ce9a5c17-646f-4ba2-a974-90e4b864872e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:55:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:55:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884811579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.184 2 DEBUG oslo_concurrency.processutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.192 2 DEBUG nova.compute.provider_tree [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.221 2 DEBUG nova.scheduler.client.report [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.262 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.294 2 INFO nova.scheduler.client.report [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance ce9a5c17-646f-4ba2-a974-90e4b864872e#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.382 2 DEBUG oslo_concurrency.lockutils [None req-410f83f0-5448-41f1-8b34-f8ea67c320ad ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.781 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.782 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.783 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.783 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ce9a5c17-646f-4ba2-a974-90e4b864872e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.783 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] No waiting events found dispatching network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.784 2 WARNING nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received unexpected event network-vif-plugged-4fd5381b-e8ba-485f-9cb6-692a37b716a1 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.784 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Received event network-vif-deleted-4fd5381b-e8ba-485f-9cb6-692a37b716a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.785 2 INFO nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Neutron deleted interface 4fd5381b-e8ba-485f-9cb6-692a37b716a1; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.785 2 DEBUG nova.network.neutron [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.792 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Detach interface failed, port_id=4fd5381b-e8ba-485f-9cb6-692a37b716a1, reason: Instance ce9a5c17-646f-4ba2-a974-90e4b864872e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.793 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.793 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.794 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.794 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.794 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.795 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-unplugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.795 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.796 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.796 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.797 2 DEBUG oslo_concurrency.lockutils [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.797 2 DEBUG nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] No waiting events found dispatching network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.797 2 WARNING nova.compute.manager [req-6521e1cd-7ac7-4aa3-8368-07642be8886b req-d8a22c2a-df5e-4f9d-981d-7af952f3e708 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received unexpected event network-vif-plugged-6c4955ab-011a-4e64-9871-c01b4818d740 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.929 2 DEBUG nova.network.neutron [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updated VIF entry in instance network info cache for port 6c4955ab-011a-4e64-9871-c01b4818d740. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:55:16 np0005465604 nova_compute[260603]: 2025-10-02 08:55:16.930 2 DEBUG nova.network.neutron [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [{"id": "6c4955ab-011a-4e64-9871-c01b4818d740", "address": "fa:16:3e:ec:f3:67", "network": {"id": "5c4ca2f6-ca60-420d-aded-392a44195bf1", "bridge": "br-int", "label": "tempest-TestShelveInstance-132138046-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "21220064aba34c77a8af713fad28c08b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c4955ab-01", "ovs_interfaceid": "6c4955ab-011a-4e64-9871-c01b4818d740", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:55:17 np0005465604 nova_compute[260603]: 2025-10-02 08:55:17.013 2 DEBUG oslo_concurrency.lockutils [req-507e8c74-bc6e-4c8f-8052-a78d53b79e11 req-da2e83c0-e669-461d-b6e4-cc040f19cd81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-29e25f63-82b1-45cf-8916-41d9acc44ac9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:55:17 np0005465604 nova_compute[260603]: 2025-10-02 08:55:17.137 2 DEBUG nova.network.neutron [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:55:17 np0005465604 nova_compute[260603]: 2025-10-02 08:55:17.231 2 INFO nova.compute.manager [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Took 1.42 seconds to deallocate network for instance.#033[00m
Oct  2 04:55:17 np0005465604 nova_compute[260603]: 2025-10-02 08:55:17.291 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:17 np0005465604 nova_compute[260603]: 2025-10-02 08:55:17.292 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:17 np0005465604 nova_compute[260603]: 2025-10-02 08:55:17.344 2 DEBUG oslo_concurrency.processutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 87 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 30 KiB/s wr, 117 op/s
Oct  2 04:55:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:55:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4172470615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:55:17 np0005465604 nova_compute[260603]: 2025-10-02 08:55:17.840 2 DEBUG oslo_concurrency.processutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:17 np0005465604 nova_compute[260603]: 2025-10-02 08:55:17.847 2 DEBUG nova.compute.provider_tree [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:55:17 np0005465604 nova_compute[260603]: 2025-10-02 08:55:17.873 2 DEBUG nova.scheduler.client.report [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:55:17 np0005465604 nova_compute[260603]: 2025-10-02 08:55:17.893 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:17 np0005465604 nova_compute[260603]: 2025-10-02 08:55:17.927 2 INFO nova.scheduler.client.report [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Deleted allocations for instance 29e25f63-82b1-45cf-8916-41d9acc44ac9#033[00m
Oct  2 04:55:18 np0005465604 nova_compute[260603]: 2025-10-02 08:55:18.004 2 DEBUG oslo_concurrency.lockutils [None req-c2e77fe4-1d9c-40c3-8741-7622afb94f54 bd8daf03c9d144d7a60fe3f81abdfbb4 21220064aba34c77a8af713fad28c08b - - default default] Lock "29e25f63-82b1-45cf-8916-41d9acc44ac9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:18 np0005465604 nova_compute[260603]: 2025-10-02 08:55:18.884 2 DEBUG nova.compute.manager [req-fd8eb654-13ef-4e45-bd07-eac7fc215986 req-0433cea9-8bdc-4eef-8cf8-361fb0e69815 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Received event network-vif-deleted-6c4955ab-011a-4e64-9871-c01b4818d740 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 543 KiB/s rd, 17 KiB/s wr, 89 op/s
Oct  2 04:55:19 np0005465604 nova_compute[260603]: 2025-10-02 08:55:19.868 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395304.867757, 1e566f75-d068-4a59-bf53-0a71ad8c5e45 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:55:19 np0005465604 nova_compute[260603]: 2025-10-02 08:55:19.869 2 INFO nova.compute.manager [-] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:55:19 np0005465604 nova_compute[260603]: 2025-10-02 08:55:19.999 2 DEBUG nova.compute.manager [None req-f7af6bc1-22cf-4440-8eb9-aeec870fcfca - - - - - -] [instance: 1e566f75-d068-4a59-bf53-0a71ad8c5e45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:55:20 np0005465604 nova_compute[260603]: 2025-10-02 08:55:20.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:20 np0005465604 nova_compute[260603]: 2025-10-02 08:55:20.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct  2 04:55:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:55:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3779683840' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:55:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:55:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3779683840' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:55:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:22.849 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:55:22 np0005465604 nova_compute[260603]: 2025-10-02 08:55:22.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:23 np0005465604 podman[393850]: 2025-10-02 08:55:23.030433078 +0000 UTC m=+0.069736893 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 04:55:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:23 np0005465604 podman[393849]: 2025-10-02 08:55:23.072289405 +0000 UTC m=+0.123011831 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct  2 04:55:23 np0005465604 nova_compute[260603]: 2025-10-02 08:55:23.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 13 KiB/s wr, 61 op/s
Oct  2 04:55:25 np0005465604 nova_compute[260603]: 2025-10-02 08:55:25.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:25 np0005465604 nova_compute[260603]: 2025-10-02 08:55:25.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 241 KiB/s rd, 2.2 KiB/s wr, 50 op/s
Oct  2 04:55:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 241 KiB/s rd, 2.2 KiB/s wr, 50 op/s
Oct  2 04:55:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:55:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:55:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:55:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:55:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:55:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:55:28
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes']
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:55:28 np0005465604 nova_compute[260603]: 2025-10-02 08:55:28.037 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395313.0369906, ce9a5c17-646f-4ba2-a974-90e4b864872e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:55:28 np0005465604 nova_compute[260603]: 2025-10-02 08:55:28.038 2 INFO nova.compute.manager [-] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:55:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:28 np0005465604 nova_compute[260603]: 2025-10-02 08:55:28.085 2 DEBUG nova.compute.manager [None req-adfaf4c5-3baa-41cf-a7cb-6df1a8020d8f - - - - - -] [instance: ce9a5c17-646f-4ba2-a974-90e4b864872e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:55:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:55:29 np0005465604 nova_compute[260603]: 2025-10-02 08:55:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:55:29 np0005465604 nova_compute[260603]: 2025-10-02 08:55:29.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:55:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 511 B/s wr, 17 op/s
Oct  2 04:55:30 np0005465604 podman[393895]: 2025-10-02 08:55:30.033534099 +0000 UTC m=+0.094848087 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd)
Oct  2 04:55:30 np0005465604 podman[393896]: 2025-10-02 08:55:30.034550461 +0000 UTC m=+0.090926463 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:55:30 np0005465604 nova_compute[260603]: 2025-10-02 08:55:30.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:30 np0005465604 nova_compute[260603]: 2025-10-02 08:55:30.268 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395315.2672634, 29e25f63-82b1-45cf-8916-41d9acc44ac9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:55:30 np0005465604 nova_compute[260603]: 2025-10-02 08:55:30.269 2 INFO nova.compute.manager [-] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:55:30 np0005465604 nova_compute[260603]: 2025-10-02 08:55:30.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:30 np0005465604 nova_compute[260603]: 2025-10-02 08:55:30.303 2 DEBUG nova.compute.manager [None req-ce346288-d7df-4611-a3c2-4a2e62c655f3 - - - - - -] [instance: 29e25f63-82b1-45cf-8916-41d9acc44ac9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:55:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:55:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:55:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:34.837 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:34.837 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:35 np0005465604 nova_compute[260603]: 2025-10-02 08:55:35.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:35 np0005465604 nova_compute[260603]: 2025-10-02 08:55:35.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:55:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:55:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:55:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:55:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:55:40 np0005465604 nova_compute[260603]: 2025-10-02 08:55:40.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:40 np0005465604 nova_compute[260603]: 2025-10-02 08:55:40.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:55:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:43 np0005465604 nova_compute[260603]: 2025-10-02 08:55:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:55:43 np0005465604 nova_compute[260603]: 2025-10-02 08:55:43.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:55:43 np0005465604 nova_compute[260603]: 2025-10-02 08:55:43.553 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:55:43 np0005465604 nova_compute[260603]: 2025-10-02 08:55:43.554 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:55:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.007 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.007 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.033 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.127 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.127 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.140 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.140 2 INFO nova.compute.claims [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.254 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:55:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521662997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:55:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 41 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.690 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.696 2 DEBUG nova.compute.provider_tree [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.717 2 DEBUG nova.scheduler.client.report [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.737 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.738 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.795 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.796 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.821 2 INFO nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.851 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.959 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.961 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.961 2 INFO nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Creating image(s)#033[00m
Oct  2 04:55:45 np0005465604 nova_compute[260603]: 2025-10-02 08:55:45.981 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.000 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.019 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.022 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.069 2 DEBUG nova.policy [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.106 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.106 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.107 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.107 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.126 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.129 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.497 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.536 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.583 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.683 2 DEBUG nova.objects.instance [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.703 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.704 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Ensure instance console log exists: /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.704 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.705 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:46 np0005465604 nova_compute[260603]: 2025-10-02 08:55:46.705 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 71 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 MiB/s wr, 13 op/s
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:48 np0005465604 nova_compute[260603]: 2025-10-02 08:55:48.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:55:48 np0005465604 nova_compute[260603]: 2025-10-02 08:55:48.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:55:48 np0005465604 nova_compute[260603]: 2025-10-02 08:55:48.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:48 np0005465604 nova_compute[260603]: 2025-10-02 08:55:48.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:48 np0005465604 nova_compute[260603]: 2025-10-02 08:55:48.550 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:48 np0005465604 nova_compute[260603]: 2025-10-02 08:55:48.550 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:55:48 np0005465604 nova_compute[260603]: 2025-10-02 08:55:48.550 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:55:48 np0005465604 nova_compute[260603]: 2025-10-02 08:55:48.603 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Successfully created port: 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:55:48 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 274c8f6e-9512-4411-9c06-27b20753faf2 does not exist
Oct  2 04:55:48 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5fab23ab-a79d-42d0-aeb4-363191fea15e does not exist
Oct  2 04:55:48 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 14e78a10-6323-4558-96cc-b950cf3d72a0 does not exist
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:55:48 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:55:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:55:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922536804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.028 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.171 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.172 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3656MB free_disk=59.97496795654297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.172 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.172 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:49 np0005465604 podman[394420]: 2025-10-02 08:55:49.228433924 +0000 UTC m=+0.035229848 container create fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:55:49 np0005465604 systemd[1]: Started libpod-conmon-fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20.scope.
Oct  2 04:55:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.300 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.301 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.302 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:55:49 np0005465604 podman[394420]: 2025-10-02 08:55:49.211794997 +0000 UTC m=+0.018590921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:55:49 np0005465604 podman[394420]: 2025-10-02 08:55:49.314056409 +0000 UTC m=+0.120852363 container init fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:55:49 np0005465604 podman[394420]: 2025-10-02 08:55:49.319889803 +0000 UTC m=+0.126685717 container start fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 04:55:49 np0005465604 podman[394420]: 2025-10-02 08:55:49.323088165 +0000 UTC m=+0.129884079 container attach fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 04:55:49 np0005465604 fervent_edison[394436]: 167 167
Oct  2 04:55:49 np0005465604 systemd[1]: libpod-fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20.scope: Deactivated successfully.
Oct  2 04:55:49 np0005465604 podman[394420]: 2025-10-02 08:55:49.327578357 +0000 UTC m=+0.134374271 container died fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.350 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4e710210b2cfc285f19e80578c3d3328ea7dd995630914b528d3754cbf237296-merged.mount: Deactivated successfully.
Oct  2 04:55:49 np0005465604 podman[394420]: 2025-10-02 08:55:49.368799844 +0000 UTC m=+0.175595758 container remove fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:55:49 np0005465604 systemd[1]: libpod-conmon-fc361dc07dacc2e8e3209490ee130cc6d5b3ae99388b101d693b212778611a20.scope: Deactivated successfully.
Oct  2 04:55:49 np0005465604 podman[394460]: 2025-10-02 08:55:49.535724416 +0000 UTC m=+0.038701357 container create 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 04:55:49 np0005465604 systemd[1]: Started libpod-conmon-106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb.scope.
Oct  2 04:55:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:55:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:49 np0005465604 podman[394460]: 2025-10-02 08:55:49.520665379 +0000 UTC m=+0.023642340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:55:49 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:49 np0005465604 podman[394460]: 2025-10-02 08:55:49.632112812 +0000 UTC m=+0.135089753 container init 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:55:49 np0005465604 podman[394460]: 2025-10-02 08:55:49.640777937 +0000 UTC m=+0.143754868 container start 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:55:49 np0005465604 podman[394460]: 2025-10-02 08:55:49.643921176 +0000 UTC m=+0.146898117 container attach 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 04:55:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:55:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:55:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111831245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.834 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.840 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.871 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.939 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:55:49 np0005465604 nova_compute[260603]: 2025-10-02 08:55:49.940 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.237 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Successfully updated port: 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.257 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.258 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.258 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.423 2 DEBUG nova.compute.manager [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.426 2 DEBUG nova.compute.manager [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing instance network info cache due to event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.426 2 DEBUG oslo_concurrency.lockutils [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.522 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:55:50 np0005465604 cool_matsumoto[394497]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:55:50 np0005465604 cool_matsumoto[394497]: --> relative data size: 1.0
Oct  2 04:55:50 np0005465604 cool_matsumoto[394497]: --> All data devices are unavailable
Oct  2 04:55:50 np0005465604 systemd[1]: libpod-106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb.scope: Deactivated successfully.
Oct  2 04:55:50 np0005465604 podman[394460]: 2025-10-02 08:55:50.747354459 +0000 UTC m=+1.250331410 container died 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 04:55:50 np0005465604 systemd[1]: libpod-106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb.scope: Consumed 1.051s CPU time.
Oct  2 04:55:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8ac949bf0e619bcc39c422c5be502fe43c718e13602574e6b348b23a588cb4ed-merged.mount: Deactivated successfully.
Oct  2 04:55:50 np0005465604 podman[394460]: 2025-10-02 08:55:50.816833321 +0000 UTC m=+1.319810302 container remove 106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 04:55:50 np0005465604 systemd[1]: libpod-conmon-106fa1057de8a960c9b29bd8de17a1ac14ac9619dc7e1de96d1bb7e967309eeb.scope: Deactivated successfully.
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.939 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:55:50 np0005465604 nova_compute[260603]: 2025-10-02 08:55:50.940 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:55:51 np0005465604 podman[394682]: 2025-10-02 08:55:51.62108873 +0000 UTC m=+0.063690600 container create 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:55:51 np0005465604 systemd[1]: Started libpod-conmon-3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7.scope.
Oct  2 04:55:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:55:51 np0005465604 podman[394682]: 2025-10-02 08:55:51.595478227 +0000 UTC m=+0.038080147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:55:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:55:51 np0005465604 podman[394682]: 2025-10-02 08:55:51.709247155 +0000 UTC m=+0.151849035 container init 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:55:51 np0005465604 podman[394682]: 2025-10-02 08:55:51.717102534 +0000 UTC m=+0.159704404 container start 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:55:51 np0005465604 podman[394682]: 2025-10-02 08:55:51.720988227 +0000 UTC m=+0.163590137 container attach 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:55:51 np0005465604 musing_maxwell[394698]: 167 167
Oct  2 04:55:51 np0005465604 systemd[1]: libpod-3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7.scope: Deactivated successfully.
Oct  2 04:55:51 np0005465604 podman[394682]: 2025-10-02 08:55:51.723111954 +0000 UTC m=+0.165713824 container died 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:55:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2da188fa719b46c77b29ee2fcd9655993fde59171af5f3294bcbbee9e1917e8f-merged.mount: Deactivated successfully.
Oct  2 04:55:51 np0005465604 podman[394682]: 2025-10-02 08:55:51.772964844 +0000 UTC m=+0.215566684 container remove 3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:55:51 np0005465604 systemd[1]: libpod-conmon-3a5c68269eae1fc86888aeb7326a8180b0dfa4770a0c3ef7decfaa25652a72c7.scope: Deactivated successfully.
Oct  2 04:55:52 np0005465604 podman[394722]: 2025-10-02 08:55:52.005982232 +0000 UTC m=+0.053982362 container create ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:55:52 np0005465604 systemd[1]: Started libpod-conmon-ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9.scope.
Oct  2 04:55:52 np0005465604 podman[394722]: 2025-10-02 08:55:51.979381799 +0000 UTC m=+0.027381999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:55:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:55:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d4e8a54fcfecb2d4c4382fda4ac59b7a10f632697fee039bbcd6dda072c1c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d4e8a54fcfecb2d4c4382fda4ac59b7a10f632697fee039bbcd6dda072c1c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d4e8a54fcfecb2d4c4382fda4ac59b7a10f632697fee039bbcd6dda072c1c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8d4e8a54fcfecb2d4c4382fda4ac59b7a10f632697fee039bbcd6dda072c1c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:52 np0005465604 podman[394722]: 2025-10-02 08:55:52.108582365 +0000 UTC m=+0.156582575 container init ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:55:52 np0005465604 podman[394722]: 2025-10-02 08:55:52.123063113 +0000 UTC m=+0.171063233 container start ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 04:55:52 np0005465604 podman[394722]: 2025-10-02 08:55:52.127171664 +0000 UTC m=+0.175171874 container attach ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.662 2 DEBUG nova.network.neutron [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.685 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.686 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Instance network_info: |[{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.687 2 DEBUG oslo_concurrency.lockutils [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.687 2 DEBUG nova.network.neutron [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.691 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Start _get_guest_xml network_info=[{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.696 2 WARNING nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.704 2 DEBUG nova.virt.libvirt.host [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.704 2 DEBUG nova.virt.libvirt.host [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.708 2 DEBUG nova.virt.libvirt.host [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.709 2 DEBUG nova.virt.libvirt.host [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.709 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.709 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.710 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.710 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.711 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.711 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.711 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.711 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.712 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.712 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.712 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.713 2 DEBUG nova.virt.hardware [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:55:52 np0005465604 nova_compute[260603]: 2025-10-02 08:55:52.716 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]: {
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:    "0": [
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:        {
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "devices": [
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "/dev/loop3"
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            ],
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_name": "ceph_lv0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_size": "21470642176",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "name": "ceph_lv0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "tags": {
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.cluster_name": "ceph",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.crush_device_class": "",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.encrypted": "0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.osd_id": "0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.type": "block",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.vdo": "0"
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            },
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "type": "block",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "vg_name": "ceph_vg0"
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:        }
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:    ],
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:    "1": [
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:        {
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "devices": [
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "/dev/loop4"
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            ],
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_name": "ceph_lv1",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_size": "21470642176",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "name": "ceph_lv1",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "tags": {
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.cluster_name": "ceph",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.crush_device_class": "",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.encrypted": "0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.osd_id": "1",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.type": "block",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.vdo": "0"
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            },
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "type": "block",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "vg_name": "ceph_vg1"
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:        }
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:    ],
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:    "2": [
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:        {
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "devices": [
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "/dev/loop5"
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            ],
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_name": "ceph_lv2",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_size": "21470642176",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "name": "ceph_lv2",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "tags": {
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.cluster_name": "ceph",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.crush_device_class": "",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.encrypted": "0",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.osd_id": "2",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.type": "block",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:                "ceph.vdo": "0"
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            },
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "type": "block",
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:            "vg_name": "ceph_vg2"
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:        }
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]:    ]
Oct  2 04:55:52 np0005465604 elegant_hofstadter[394739]: }
Oct  2 04:55:52 np0005465604 systemd[1]: libpod-ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9.scope: Deactivated successfully.
Oct  2 04:55:52 np0005465604 podman[394722]: 2025-10-02 08:55:52.948065959 +0000 UTC m=+0.996066099 container died ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:55:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f8d4e8a54fcfecb2d4c4382fda4ac59b7a10f632697fee039bbcd6dda072c1c3-merged.mount: Deactivated successfully.
Oct  2 04:55:53 np0005465604 podman[394722]: 2025-10-02 08:55:53.035684097 +0000 UTC m=+1.083684217 container remove ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hofstadter, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:55:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:53 np0005465604 systemd[1]: libpod-conmon-ba16f7b1b02d7ac059518d6077290e7c7e5dce1678fab5f062b9ee86630162f9.scope: Deactivated successfully.
Oct  2 04:55:53 np0005465604 podman[394782]: 2025-10-02 08:55:53.172303249 +0000 UTC m=+0.075687692 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:55:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:55:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1257783275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.198 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.229 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.236 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:53 np0005465604 podman[394826]: 2025-10-02 08:55:53.300506143 +0000 UTC m=+0.105503996 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:55:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:55:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:55:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/395293971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.700 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.703 2 DEBUG nova.virt.libvirt.vif [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:55:45Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.703 2 DEBUG nova.network.os_vif_util [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.704 2 DEBUG nova.network.os_vif_util [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.706 2 DEBUG nova.objects.instance [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.728 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  <uuid>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</uuid>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  <name>instance-0000007e</name>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:55:52</nova:creationTime>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <entry name="serial">d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <entry name="uuid">d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:00:df:98"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <target dev="tap3e0191b3-54"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log" append="off"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:55:53 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:55:53 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:55:53 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:55:53 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.729 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Preparing to wait for external event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.729 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.729 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.730 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.730 2 DEBUG nova.virt.libvirt.vif [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:55:45Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.731 2 DEBUG nova.network.os_vif_util [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.731 2 DEBUG nova.network.os_vif_util [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.732 2 DEBUG os_vif [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.737 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e0191b3-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.737 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e0191b3-54, col_values=(('external_ids', {'iface-id': '3e0191b3-5405-4fe1-ba86-56a0b092a5d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:df:98', 'vm-uuid': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:53 np0005465604 NetworkManager[45129]: <info>  [1759395353.7405] manager: (tap3e0191b3-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/533)
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.752 2 INFO os_vif [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54')#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.813 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.813 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.813 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:00:df:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.814 2 INFO nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Using config drive#033[00m
Oct  2 04:55:53 np0005465604 nova_compute[260603]: 2025-10-02 08:55:53.837 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:55:53 np0005465604 podman[395012]: 2025-10-02 08:55:53.863036227 +0000 UTC m=+0.041159736 container create f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:55:53 np0005465604 systemd[1]: Started libpod-conmon-f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e.scope.
Oct  2 04:55:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:55:53 np0005465604 podman[395012]: 2025-10-02 08:55:53.844984884 +0000 UTC m=+0.023108383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:55:53 np0005465604 podman[395012]: 2025-10-02 08:55:53.953342499 +0000 UTC m=+0.131466028 container init f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:55:53 np0005465604 podman[395012]: 2025-10-02 08:55:53.959711681 +0000 UTC m=+0.137835150 container start f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 04:55:53 np0005465604 podman[395012]: 2025-10-02 08:55:53.962677836 +0000 UTC m=+0.140801355 container attach f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:55:53 np0005465604 elated_cori[395046]: 167 167
Oct  2 04:55:53 np0005465604 systemd[1]: libpod-f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e.scope: Deactivated successfully.
Oct  2 04:55:53 np0005465604 podman[395012]: 2025-10-02 08:55:53.966170176 +0000 UTC m=+0.144293685 container died f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 04:55:53 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9fbaeda1f040c2b8946a1ede03f073c08d56074ecc78ee4236a227b43cce4924-merged.mount: Deactivated successfully.
Oct  2 04:55:54 np0005465604 podman[395012]: 2025-10-02 08:55:54.016202972 +0000 UTC m=+0.194326451 container remove f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 04:55:54 np0005465604 systemd[1]: libpod-conmon-f0830d4a454e2ee0e1a00e8c1158a523d312735fc6a0bf57486c23a12622846e.scope: Deactivated successfully.
Oct  2 04:55:54 np0005465604 podman[395072]: 2025-10-02 08:55:54.240182293 +0000 UTC m=+0.060455608 container create b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:55:54 np0005465604 systemd[1]: Started libpod-conmon-b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148.scope.
Oct  2 04:55:54 np0005465604 podman[395072]: 2025-10-02 08:55:54.212078812 +0000 UTC m=+0.032352167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:55:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:55:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6dffbf82a5a905471494b97e1598a398fb8739d158d5e0cef27a4453c892b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6dffbf82a5a905471494b97e1598a398fb8739d158d5e0cef27a4453c892b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6dffbf82a5a905471494b97e1598a398fb8739d158d5e0cef27a4453c892b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6dffbf82a5a905471494b97e1598a398fb8739d158d5e0cef27a4453c892b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:54 np0005465604 podman[395072]: 2025-10-02 08:55:54.3580485 +0000 UTC m=+0.178321855 container init b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 04:55:54 np0005465604 podman[395072]: 2025-10-02 08:55:54.379494159 +0000 UTC m=+0.199767464 container start b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 04:55:54 np0005465604 podman[395072]: 2025-10-02 08:55:54.384019083 +0000 UTC m=+0.204292448 container attach b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.459 2 INFO nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Creating config drive at /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config#033[00m
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.463 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejvrc2v6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.604 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejvrc2v6" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.644 2 DEBUG nova.storage.rbd_utils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.649 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.723 2 DEBUG nova.network.neutron [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updated VIF entry in instance network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.724 2 DEBUG nova.network.neutron [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.749 2 DEBUG oslo_concurrency.lockutils [req-8292a0cf-911b-470d-8035-22a928f088c5 req-1c656d76-ce3a-4e30-959a-f217db15f9a2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.823 2 DEBUG oslo_concurrency.processutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.824 2 INFO nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Deleting local config drive /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/disk.config because it was imported into RBD.#033[00m
Oct  2 04:55:54 np0005465604 kernel: tap3e0191b3-54: entered promiscuous mode
Oct  2 04:55:54 np0005465604 NetworkManager[45129]: <info>  [1759395354.9045] manager: (tap3e0191b3-54): new Tun device (/org/freedesktop/NetworkManager/Devices/534)
Oct  2 04:55:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:54Z|01352|binding|INFO|Claiming lport 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 for this chassis.
Oct  2 04:55:54 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:54Z|01353|binding|INFO|3e0191b3-5405-4fe1-ba86-56a0b092a5d6: Claiming fa:16:3e:00:df:98 10.100.0.3
Oct  2 04:55:54 np0005465604 nova_compute[260603]: 2025-10-02 08:55:54.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:54 np0005465604 systemd-udevd[395143]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:55:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.932 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:df:98 10.100.0.3'], port_security=['fa:16:3e:00:df:98 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ca3ac45-6851-48dd-917a-457549b659ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f31abdd-db09-4203-b1b9-483c21cac467', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=95f926d6-852b-4212-98b8-ed01a74b48f6, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3e0191b3-5405-4fe1-ba86-56a0b092a5d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:55:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.936 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 in datapath 0ca3ac45-6851-48dd-917a-457549b659ba bound to our chassis#033[00m
Oct  2 04:55:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.939 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0ca3ac45-6851-48dd-917a-457549b659ba#033[00m
Oct  2 04:55:54 np0005465604 NetworkManager[45129]: <info>  [1759395354.9422] device (tap3e0191b3-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:55:54 np0005465604 NetworkManager[45129]: <info>  [1759395354.9442] device (tap3e0191b3-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:55:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.955 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[37c6442a-7756-4ac5-af7f-0d452231d8d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.956 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0ca3ac45-61 in ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:55:54 np0005465604 systemd-machined[214636]: New machine qemu-160-instance-0000007e.
Oct  2 04:55:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.961 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0ca3ac45-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:55:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.961 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[37c5c069-e105-4516-ba39-aee476719630]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.964 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d82de28-a18a-4178-b9ca-9df753eaaeca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:54 np0005465604 systemd[1]: Started Virtual Machine qemu-160-instance-0000007e.
Oct  2 04:55:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:54.978 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ff275f94-c11c-42ad-a539-80dd8e9a55ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.006 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dd153e0b-69d9-48d0-8132-89bf182e6fa0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:55Z|01354|binding|INFO|Setting lport 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 ovn-installed in OVS
Oct  2 04:55:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:55Z|01355|binding|INFO|Setting lport 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 up in Southbound
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.039 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a3570d-2866-440e-b051-7cd1ba600606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.043 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[69fca0da-4ecc-4d6c-be95-aa3b2248560e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 NetworkManager[45129]: <info>  [1759395355.0448] manager: (tap0ca3ac45-60): new Veth device (/org/freedesktop/NetworkManager/Devices/535)
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.091 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0ebb7a-4fc6-4731-919a-4ec857b48475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.094 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[fed9d15b-567f-4c08-9d1b-b2ef933f7c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 NetworkManager[45129]: <info>  [1759395355.1244] device (tap0ca3ac45-60): carrier: link connected
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.132 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8127cde4-5ded-47f7-894a-29ab361f2d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.161 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff9dd76-6aca-470f-a6f9-0768efc88a94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ca3ac45-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:46:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623372, 'reachable_time': 38556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395184, 'error': None, 'target': 'ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bc46e523-f7ec-410d-9f40-cb8e50fa1641]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:46db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 623372, 'tstamp': 623372}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395188, 'error': None, 'target': 'ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.205 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d95a3a-328a-4ad2-8dd3-4b41dc5f1a9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0ca3ac45-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:46:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623372, 'reachable_time': 38556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395189, 'error': None, 'target': 'ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.244 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4a482d-3edd-4606-b814-831535d0b16d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.302 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[45bf6096-c237-4fb4-a2b8-cf5a02949076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.304 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ca3ac45-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.304 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.304 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ca3ac45-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:55 np0005465604 NetworkManager[45129]: <info>  [1759395355.3068] manager: (tap0ca3ac45-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Oct  2 04:55:55 np0005465604 kernel: tap0ca3ac45-60: entered promiscuous mode
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0ca3ac45-60, col_values=(('external_ids', {'iface-id': 'ab6e362e-db4a-479c-a0fc-ea7c3db44980'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:55:55Z|01356|binding|INFO|Releasing lport ab6e362e-db4a-479c-a0fc-ea7c3db44980 from this chassis (sb_readonly=0)
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.325 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0ca3ac45-6851-48dd-917a-457549b659ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0ca3ac45-6851-48dd-917a-457549b659ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.326 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5719efbb-8a0d-40ea-9422-85d0e2338757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.328 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-0ca3ac45-6851-48dd-917a-457549b659ba
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/0ca3ac45-6851-48dd-917a-457549b659ba.pid.haproxy
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 0ca3ac45-6851-48dd-917a-457549b659ba
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:55:55 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:55:55.328 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba', 'env', 'PROCESS_TAG=haproxy-0ca3ac45-6851-48dd-917a-457549b659ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0ca3ac45-6851-48dd-917a-457549b659ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]: {
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "osd_id": 2,
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "type": "bluestore"
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:    },
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "osd_id": 1,
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "type": "bluestore"
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:    },
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "osd_id": 0,
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:        "type": "bluestore"
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]:    }
Oct  2 04:55:55 np0005465604 trusting_hofstadter[395088]: }
Oct  2 04:55:55 np0005465604 systemd[1]: libpod-b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148.scope: Deactivated successfully.
Oct  2 04:55:55 np0005465604 podman[395072]: 2025-10-02 08:55:55.423174497 +0000 UTC m=+1.243447792 container died b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:55:55 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ed6dffbf82a5a905471494b97e1598a398fb8739d158d5e0cef27a4453c892b6-merged.mount: Deactivated successfully.
Oct  2 04:55:55 np0005465604 podman[395072]: 2025-10-02 08:55:55.491577456 +0000 UTC m=+1.311850741 container remove b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:55:55 np0005465604 systemd[1]: libpod-conmon-b10eac018b762e4124aaebd18a70017933f87e7fd9f4a6b7185b4e8c47704148.scope: Deactivated successfully.
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.521 2 DEBUG nova.compute.manager [req-fa1844bf-5088-456d-82ae-5f4f1d0deb1f req-db9a1723-a130-4462-92ef-a094810cff63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.522 2 DEBUG oslo_concurrency.lockutils [req-fa1844bf-5088-456d-82ae-5f4f1d0deb1f req-db9a1723-a130-4462-92ef-a094810cff63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.522 2 DEBUG oslo_concurrency.lockutils [req-fa1844bf-5088-456d-82ae-5f4f1d0deb1f req-db9a1723-a130-4462-92ef-a094810cff63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.522 2 DEBUG oslo_concurrency.lockutils [req-fa1844bf-5088-456d-82ae-5f4f1d0deb1f req-db9a1723-a130-4462-92ef-a094810cff63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:55 np0005465604 nova_compute[260603]: 2025-10-02 08:55:55.523 2 DEBUG nova.compute.manager [req-fa1844bf-5088-456d-82ae-5f4f1d0deb1f req-db9a1723-a130-4462-92ef-a094810cff63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Processing event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:55:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:55:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:55:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:55:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:55:55 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 90a8f4db-9711-433e-b33c-0453efe3fc1f does not exist
Oct  2 04:55:55 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 636d9398-06f4-4e55-8eb2-e123640ca08f does not exist
Oct  2 04:55:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:55:55 np0005465604 podman[395301]: 2025-10-02 08:55:55.737588995 +0000 UTC m=+0.046712152 container create 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 04:55:55 np0005465604 systemd[1]: Started libpod-conmon-117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60.scope.
Oct  2 04:55:55 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:55:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931c6303f091c199c5789a98492e9e4f99efc35fbf41e4ee1e289eeeb8eb185b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:55:55 np0005465604 podman[395301]: 2025-10-02 08:55:55.712774878 +0000 UTC m=+0.021898085 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:55:55 np0005465604 podman[395301]: 2025-10-02 08:55:55.817345913 +0000 UTC m=+0.126469090 container init 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:55:55 np0005465604 podman[395301]: 2025-10-02 08:55:55.823597311 +0000 UTC m=+0.132720478 container start 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:55:55 np0005465604 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [NOTICE]   (395321) : New worker (395323) forked
Oct  2 04:55:55 np0005465604 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [NOTICE]   (395321) : Loading success.
Oct  2 04:55:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:55:56 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.031 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395357.0304892, d84a3f3a-a7fb-4222-9b59-c51b64d74a13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.032 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] VM Started (Lifecycle Event)#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.035 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.040 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.045 2 INFO nova.virt.libvirt.driver [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Instance spawned successfully.#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.046 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.061 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.071 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.077 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.077 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.078 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.079 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.080 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.081 2 DEBUG nova.virt.libvirt.driver [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.113 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.114 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395357.0369425, d84a3f3a-a7fb-4222-9b59-c51b64d74a13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.115 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.150 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.155 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395357.0395665, d84a3f3a-a7fb-4222-9b59-c51b64d74a13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.156 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.160 2 INFO nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Took 11.20 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.161 2 DEBUG nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.177 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.182 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.208 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.235 2 INFO nova.compute.manager [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Took 12.15 seconds to build instance.#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.260 2 DEBUG oslo_concurrency.lockutils [None req-888234bb-0c55-4551-a024-633aaed884b3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.631 2 DEBUG nova.compute.manager [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.632 2 DEBUG oslo_concurrency.lockutils [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.633 2 DEBUG oslo_concurrency.lockutils [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.633 2 DEBUG oslo_concurrency.lockutils [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.634 2 DEBUG nova.compute.manager [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:55:57 np0005465604 nova_compute[260603]: 2025-10-02 08:55:57.635 2 WARNING nova.compute.manager [req-551adfbb-c4e6-4eb3-a22e-f8008e096dfd req-d025032e-a318-4584-be7a-933a9529372d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:55:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  2 04:55:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:55:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:55:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:55:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:55:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:55:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:55:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:55:58 np0005465604 nova_compute[260603]: 2025-10-02 08:55:58.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:55:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 554 KiB/s rd, 665 KiB/s wr, 40 op/s
Oct  2 04:56:00 np0005465604 nova_compute[260603]: 2025-10-02 08:56:00.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:01 np0005465604 podman[395374]: 2025-10-02 08:56:01.035080204 +0000 UTC m=+0.088620010 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:56:01 np0005465604 podman[395375]: 2025-10-02 08:56:01.05103803 +0000 UTC m=+0.099093333 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:56:01 np0005465604 NetworkManager[45129]: <info>  [1759395361.3169] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Oct  2 04:56:01 np0005465604 NetworkManager[45129]: <info>  [1759395361.3190] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Oct  2 04:56:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:01Z|01357|binding|INFO|Releasing lport ab6e362e-db4a-479c-a0fc-ea7c3db44980 from this chassis (sb_readonly=0)
Oct  2 04:56:01 np0005465604 nova_compute[260603]: 2025-10-02 08:56:01.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:01 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:01Z|01358|binding|INFO|Releasing lport ab6e362e-db4a-479c-a0fc-ea7c3db44980 from this chassis (sb_readonly=0)
Oct  2 04:56:01 np0005465604 nova_compute[260603]: 2025-10-02 08:56:01.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:01 np0005465604 nova_compute[260603]: 2025-10-02 08:56:01.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 12 KiB/s wr, 27 op/s
Oct  2 04:56:01 np0005465604 nova_compute[260603]: 2025-10-02 08:56:01.921 2 DEBUG nova.compute.manager [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:56:01 np0005465604 nova_compute[260603]: 2025-10-02 08:56:01.922 2 DEBUG nova.compute.manager [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing instance network info cache due to event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:56:01 np0005465604 nova_compute[260603]: 2025-10-02 08:56:01.922 2 DEBUG oslo_concurrency.lockutils [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:56:01 np0005465604 nova_compute[260603]: 2025-10-02 08:56:01.922 2 DEBUG oslo_concurrency.lockutils [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:56:01 np0005465604 nova_compute[260603]: 2025-10-02 08:56:01.922 2 DEBUG nova.network.neutron [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:56:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:03 np0005465604 nova_compute[260603]: 2025-10-02 08:56:03.158 2 DEBUG nova.network.neutron [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updated VIF entry in instance network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:56:03 np0005465604 nova_compute[260603]: 2025-10-02 08:56:03.159 2 DEBUG nova.network.neutron [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:56:03 np0005465604 nova_compute[260603]: 2025-10-02 08:56:03.186 2 DEBUG oslo_concurrency.lockutils [req-a2c4010c-30f3-45a0-806f-def89dc38fa5 req-0fe600e0-cc7b-42f4-9ca0-c91a4a0eb661 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:56:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 04:56:03 np0005465604 nova_compute[260603]: 2025-10-02 08:56:03.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:05 np0005465604 nova_compute[260603]: 2025-10-02 08:56:05.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 04:56:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 88 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct  2 04:56:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:08 np0005465604 nova_compute[260603]: 2025-10-02 08:56:08.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:08Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:df:98 10.100.0.3
Oct  2 04:56:08 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:08Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:df:98 10.100.0.3
Oct  2 04:56:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 94 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 713 KiB/s wr, 84 op/s
Oct  2 04:56:10 np0005465604 nova_compute[260603]: 2025-10-02 08:56:10.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 94 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 700 KiB/s wr, 65 op/s
Oct  2 04:56:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct  2 04:56:13 np0005465604 nova_compute[260603]: 2025-10-02 08:56:13.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:14 np0005465604 nova_compute[260603]: 2025-10-02 08:56:14.867 2 INFO nova.compute.manager [None req-d81b26de-9148-4b0a-94ff-e98fe0edaa4b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Get console output#033[00m
Oct  2 04:56:14 np0005465604 nova_compute[260603]: 2025-10-02 08:56:14.876 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:56:15 np0005465604 nova_compute[260603]: 2025-10-02 08:56:15.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:56:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:56:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:18 np0005465604 nova_compute[260603]: 2025-10-02 08:56:18.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:56:20 np0005465604 nova_compute[260603]: 2025-10-02 08:56:20.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:21 np0005465604 nova_compute[260603]: 2025-10-02 08:56:21.392 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:21 np0005465604 nova_compute[260603]: 2025-10-02 08:56:21.393 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:21 np0005465604 nova_compute[260603]: 2025-10-02 08:56:21.393 2 DEBUG nova.objects.instance [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'flavor' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:56:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Oct  2 04:56:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:56:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2871296160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:56:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:56:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2871296160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:56:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:23 np0005465604 nova_compute[260603]: 2025-10-02 08:56:23.557 2 DEBUG nova.objects.instance [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_requests' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:56:23 np0005465604 nova_compute[260603]: 2025-10-02 08:56:23.579 2 DEBUG nova.network.neutron [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:56:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Oct  2 04:56:23 np0005465604 nova_compute[260603]: 2025-10-02 08:56:23.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:24 np0005465604 podman[395418]: 2025-10-02 08:56:24.026658477 +0000 UTC m=+0.079112609 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 04:56:24 np0005465604 podman[395417]: 2025-10-02 08:56:24.043658565 +0000 UTC m=+0.111944619 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:56:24 np0005465604 nova_compute[260603]: 2025-10-02 08:56:24.481 2 DEBUG nova.policy [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:56:25 np0005465604 nova_compute[260603]: 2025-10-02 08:56:25.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:25.376 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:56:25 np0005465604 nova_compute[260603]: 2025-10-02 08:56:25.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:25.380 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:56:25 np0005465604 nova_compute[260603]: 2025-10-02 08:56:25.450 2 DEBUG nova.network.neutron [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Successfully created port: 3d7e333e-5ee7-4373-b49b-3f78fb696f43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:56:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Oct  2 04:56:26 np0005465604 nova_compute[260603]: 2025-10-02 08:56:26.795 2 DEBUG nova.network.neutron [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Successfully updated port: 3d7e333e-5ee7-4373-b49b-3f78fb696f43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:56:26 np0005465604 nova_compute[260603]: 2025-10-02 08:56:26.813 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:56:26 np0005465604 nova_compute[260603]: 2025-10-02 08:56:26.814 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:56:26 np0005465604 nova_compute[260603]: 2025-10-02 08:56:26.814 2 DEBUG nova.network.neutron [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:56:26 np0005465604 nova_compute[260603]: 2025-10-02 08:56:26.902 2 DEBUG nova.compute.manager [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-changed-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:56:26 np0005465604 nova_compute[260603]: 2025-10-02 08:56:26.902 2 DEBUG nova.compute.manager [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing instance network info cache due to event network-changed-3d7e333e-5ee7-4373-b49b-3f78fb696f43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:56:26 np0005465604 nova_compute[260603]: 2025-10-02 08:56:26.903 2 DEBUG oslo_concurrency.lockutils [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:56:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:27.382 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Oct  2 04:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:56:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:56:28
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.meta', 'volumes']
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:56:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:56:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:56:28 np0005465604 nova_compute[260603]: 2025-10-02 08:56:28.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:29 np0005465604 nova_compute[260603]: 2025-10-02 08:56:29.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:56:29 np0005465604 nova_compute[260603]: 2025-10-02 08:56:29.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:56:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 4.0 KiB/s wr, 0 op/s
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.918469) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395389918526, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1168, "num_deletes": 252, "total_data_size": 1673677, "memory_usage": 1700816, "flush_reason": "Manual Compaction"}
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395389931256, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1645795, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48607, "largest_seqno": 49774, "table_properties": {"data_size": 1640239, "index_size": 2951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12147, "raw_average_key_size": 19, "raw_value_size": 1628911, "raw_average_value_size": 2674, "num_data_blocks": 132, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395283, "oldest_key_time": 1759395283, "file_creation_time": 1759395389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 12851 microseconds, and 8045 cpu microseconds.
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.931315) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1645795 bytes OK
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.931352) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.932971) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.932994) EVENT_LOG_v1 {"time_micros": 1759395389932986, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.933018) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1668321, prev total WAL file size 1668321, number of live WAL files 2.
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.934079) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1607KB)], [113(8338KB)]
Oct  2 04:56:29 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395389934118, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10184518, "oldest_snapshot_seqno": -1}
Oct  2 04:56:29 np0005465604 nova_compute[260603]: 2025-10-02 08:56:29.972 2 DEBUG nova.network.neutron [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6946 keys, 8467232 bytes, temperature: kUnknown
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395390000192, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8467232, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8422302, "index_size": 26450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 181257, "raw_average_key_size": 26, "raw_value_size": 8299604, "raw_average_value_size": 1194, "num_data_blocks": 1028, "num_entries": 6946, "num_filter_entries": 6946, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.000552) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8467232 bytes
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.002390) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.9 rd, 127.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(11.3) write-amplify(5.1) OK, records in: 7466, records dropped: 520 output_compression: NoCompression
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.002422) EVENT_LOG_v1 {"time_micros": 1759395390002407, "job": 68, "event": "compaction_finished", "compaction_time_micros": 66194, "compaction_time_cpu_micros": 40117, "output_level": 6, "num_output_files": 1, "total_output_size": 8467232, "num_input_records": 7466, "num_output_records": 6946, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395390003118, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395390006205, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:29.933969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.006253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.006259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.006 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.006262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.006266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:56:30 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-08:56:30.006269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.007 2 DEBUG oslo_concurrency.lockutils [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.008 2 DEBUG nova.network.neutron [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing network info cache for port 3d7e333e-5ee7-4373-b49b-3f78fb696f43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.013 2 DEBUG nova.virt.libvirt.vif [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.014 2 DEBUG nova.network.os_vif_util [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.015 2 DEBUG nova.network.os_vif_util [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.016 2 DEBUG os_vif [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d7e333e-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.024 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3d7e333e-5e, col_values=(('external_ids', {'iface-id': '3d7e333e-5ee7-4373-b49b-3f78fb696f43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:a9:81', 'vm-uuid': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:30 np0005465604 NetworkManager[45129]: <info>  [1759395390.0291] manager: (tap3d7e333e-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/539)
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.052 2 INFO os_vif [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e')#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.053 2 DEBUG nova.virt.libvirt.vif [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.054 2 DEBUG nova.network.os_vif_util [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.055 2 DEBUG nova.network.os_vif_util [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.059 2 DEBUG nova.virt.libvirt.guest [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] attach device xml: <interface type="ethernet">
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:26:a9:81"/>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <target dev="tap3d7e333e-5e"/>
Oct  2 04:56:30 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:56:30 np0005465604 nova_compute[260603]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 04:56:30 np0005465604 NetworkManager[45129]: <info>  [1759395390.0767] manager: (tap3d7e333e-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/540)
Oct  2 04:56:30 np0005465604 kernel: tap3d7e333e-5e: entered promiscuous mode
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:30Z|01359|binding|INFO|Claiming lport 3d7e333e-5ee7-4373-b49b-3f78fb696f43 for this chassis.
Oct  2 04:56:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:30Z|01360|binding|INFO|3d7e333e-5ee7-4373-b49b-3f78fb696f43: Claiming fa:16:3e:26:a9:81 10.100.0.27
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.093 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a9:81 10.100.0.27'], port_security=['fa:16:3e:26:a9:81 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-582d804a-4891-4cd3-8540-2cfd9f71444e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=614405e1-3a58-497d-8a87-2cf68394c005, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3d7e333e-5ee7-4373-b49b-3f78fb696f43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.094 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3d7e333e-5ee7-4373-b49b-3f78fb696f43 in datapath 582d804a-4891-4cd3-8540-2cfd9f71444e bound to our chassis#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.095 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 582d804a-4891-4cd3-8540-2cfd9f71444e#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.117 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb869a4-916c-44f7-ac70-74dc9e6f782b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.118 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap582d804a-41 in ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.121 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap582d804a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.121 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a03dcd5-e17c-4cac-8940-cf79f1f81001]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.122 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[337010d2-b881-45d5-b7d6-ef6207ba7224]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 systemd-udevd[395469]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.143 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddfabdb-9df2-448d-aa5e-02c023d86ffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:30Z|01361|binding|INFO|Setting lport 3d7e333e-5ee7-4373-b49b-3f78fb696f43 ovn-installed in OVS
Oct  2 04:56:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:30Z|01362|binding|INFO|Setting lport 3d7e333e-5ee7-4373-b49b-3f78fb696f43 up in Southbound
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:30 np0005465604 NetworkManager[45129]: <info>  [1759395390.1643] device (tap3d7e333e-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:56:30 np0005465604 NetworkManager[45129]: <info>  [1759395390.1655] device (tap3d7e333e-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.179 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[772249ba-afb3-453c-b113-e10762d60713]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.213 2 DEBUG nova.virt.libvirt.driver [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.214 2 DEBUG nova.virt.libvirt.driver [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.214 2 DEBUG nova.virt.libvirt.driver [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:00:df:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.214 2 DEBUG nova.virt.libvirt.driver [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:26:a9:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.233 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[79a97f6d-5e64-4196-8596-55f2807394ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 NetworkManager[45129]: <info>  [1759395390.2425] manager: (tap582d804a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/541)
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.241 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22851b80-c9bd-4b30-aeb5-5d62f64a017d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 systemd-udevd[395472]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.249 2 DEBUG nova.virt.libvirt.guest [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:56:30</nova:creationTime>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:56:30 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:    <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct  2 04:56:30 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:    <nova:port uuid="3d7e333e-5ee7-4373-b49b-3f78fb696f43">
Oct  2 04:56:30 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:56:30 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:56:30 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:56:30 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.284 2 DEBUG oslo_concurrency.lockutils [None req-f056cd46-8c33-4bcc-8e14-48ce4644845c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.292 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d71fb43f-a508-4e0b-a3bd-61e05a572fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.295 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a5cc7725-1754-4713-9966-4c8cc3ca288d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 NetworkManager[45129]: <info>  [1759395390.3295] device (tap582d804a-40): carrier: link connected
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.339 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b9fecf-db22-4284-b617-a7f7e7e23ff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.361 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9e4051-9dbe-4af3-8408-a1769ffb8178]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap582d804a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:8e:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626893, 'reachable_time': 37342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395494, 'error': None, 'target': 'ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.386 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6da29dda-ef93-42a1-b69f-5f0b75941cb9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:8e6b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626893, 'tstamp': 626893}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395495, 'error': None, 'target': 'ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.415 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42b68cb1-a750-4c48-9d4a-6261066e2952]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap582d804a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:8e:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626893, 'reachable_time': 37342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395496, 'error': None, 'target': 'ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.462 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e93f1a93-acc3-4763-b71f-2d8e1ca8f8f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.563 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e00541e8-d59f-4e44-bcdd-a5f073979548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.566 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap582d804a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.566 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.567 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap582d804a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:30 np0005465604 kernel: tap582d804a-40: entered promiscuous mode
Oct  2 04:56:30 np0005465604 NetworkManager[45129]: <info>  [1759395390.5722] manager: (tap582d804a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.574 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap582d804a-40, col_values=(('external_ids', {'iface-id': 'fd77efd3-8b3c-4ce5-bc3b-650b9665197c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:30 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:30Z|01363|binding|INFO|Releasing lport fd77efd3-8b3c-4ce5-bc3b-650b9665197c from this chassis (sb_readonly=0)
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.597 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/582d804a-4891-4cd3-8540-2cfd9f71444e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/582d804a-4891-4cd3-8540-2cfd9f71444e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.598 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f563d364-0c79-4856-993a-e6599982711b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.599 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-582d804a-4891-4cd3-8540-2cfd9f71444e
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/582d804a-4891-4cd3-8540-2cfd9f71444e.pid.haproxy
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 582d804a-4891-4cd3-8540-2cfd9f71444e
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:56:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:30.600 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e', 'env', 'PROCESS_TAG=haproxy-582d804a-4891-4cd3-8540-2cfd9f71444e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/582d804a-4891-4cd3-8540-2cfd9f71444e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.783 2 DEBUG nova.compute.manager [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.784 2 DEBUG oslo_concurrency.lockutils [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.784 2 DEBUG oslo_concurrency.lockutils [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.784 2 DEBUG oslo_concurrency.lockutils [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.785 2 DEBUG nova.compute.manager [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:56:30 np0005465604 nova_compute[260603]: 2025-10-02 08:56:30.785 2 WARNING nova.compute.manager [req-cb44ebce-a6e9-4da9-97e5-1eab001f08c6 req-8747b6bb-3a45-4476-96f3-85106eb10d2f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:56:31 np0005465604 podman[395528]: 2025-10-02 08:56:31.071730082 +0000 UTC m=+0.079720198 container create 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 04:56:31 np0005465604 podman[395528]: 2025-10-02 08:56:31.041277186 +0000 UTC m=+0.049267342 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:56:31 np0005465604 systemd[1]: Started libpod-conmon-47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5.scope.
Oct  2 04:56:31 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:56:31 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9dfa101d84a32dcff657b7d40a07ab3dd078bf7004b1c5d61a78dcd0c932fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:56:31 np0005465604 podman[395528]: 2025-10-02 08:56:31.2072933 +0000 UTC m=+0.215283446 container init 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:56:31 np0005465604 podman[395528]: 2025-10-02 08:56:31.214662763 +0000 UTC m=+0.222652879 container start 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:56:31 np0005465604 podman[395544]: 2025-10-02 08:56:31.246856263 +0000 UTC m=+0.091965106 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:56:31 np0005465604 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [NOTICE]   (395578) : New worker (395585) forked
Oct  2 04:56:31 np0005465604 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [NOTICE]   (395578) : Loading success.
Oct  2 04:56:31 np0005465604 podman[395541]: 2025-10-02 08:56:31.268885402 +0000 UTC m=+0.118636742 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.503 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-3d7e333e-5ee7-4373-b49b-3f78fb696f43" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.504 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-3d7e333e-5ee7-4373-b49b-3f78fb696f43" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.518 2 DEBUG nova.objects.instance [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'flavor' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.540 2 DEBUG nova.virt.libvirt.vif [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.540 2 DEBUG nova.network.os_vif_util [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.541 2 DEBUG nova.network.os_vif_util [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.546 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.548 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.550 2 DEBUG nova.virt.libvirt.driver [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Attempting to detach device tap3d7e333e-5e from instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.551 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] detach device xml: <interface type="ethernet">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:26:a9:81"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <target dev="tap3d7e333e-5e"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.557 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.560 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface>not found in domain: <domain type='kvm' id='160'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <name>instance-0000007e</name>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <uuid>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</uuid>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:56:30</nova:creationTime>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:port uuid="3d7e333e-5ee7-4373-b49b-3f78fb696f43">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='serial'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='uuid'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk' index='2'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config' index='1'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:00:df:98'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target dev='tap3e0191b3-54'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:26:a9:81'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target dev='tap3d7e333e-5e'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='net1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c271,c793</label>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c271,c793</imagelabel>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.563 2 INFO nova.virt.libvirt.driver [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully detached device tap3d7e333e-5e from instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 from the persistent domain config.
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.564 2 DEBUG nova.virt.libvirt.driver [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] (1/8): Attempting to detach device tap3d7e333e-5e with device alias net1 from instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.564 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] detach device xml: <interface type="ethernet">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:26:a9:81"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <target dev="tap3d7e333e-5e"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  2 04:56:31 np0005465604 kernel: tap3d7e333e-5e (unregistering): left promiscuous mode
Oct  2 04:56:31 np0005465604 NetworkManager[45129]: <info>  [1759395391.6721] device (tap3d7e333e-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:56:31 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:31Z|01364|binding|INFO|Releasing lport 3d7e333e-5ee7-4373-b49b-3f78fb696f43 from this chassis (sb_readonly=0)
Oct  2 04:56:31 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:31Z|01365|binding|INFO|Setting lport 3d7e333e-5ee7-4373-b49b-3f78fb696f43 down in Southbound
Oct  2 04:56:31 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:31Z|01366|binding|INFO|Removing iface tap3d7e333e-5e ovn-installed in OVS
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.694 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Received event <DeviceRemovedEvent: 1759395391.6931865, d84a3f3a-a7fb-4222-9b59-c51b64d74a13 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:56:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.697 2 DEBUG nova.virt.libvirt.driver [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Start waiting for the detach event from libvirt for device tap3d7e333e-5e with device alias net1 for instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.698 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 04:56:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:31.700 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a9:81 10.100.0.27'], port_security=['fa:16:3e:26:a9:81 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-582d804a-4891-4cd3-8540-2cfd9f71444e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=614405e1-3a58-497d-8a87-2cf68394c005, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3d7e333e-5ee7-4373-b49b-3f78fb696f43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 04:56:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:31.705 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3d7e333e-5ee7-4373-b49b-3f78fb696f43 in datapath 582d804a-4891-4cd3-8540-2cfd9f71444e unbound from our chassis
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.705 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface>not found in domain: <domain type='kvm' id='160'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <name>instance-0000007e</name>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <uuid>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</uuid>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:56:30</nova:creationTime>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:port uuid="3d7e333e-5ee7-4373-b49b-3f78fb696f43">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='serial'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='uuid'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk' index='2'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config' index='1'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:00:df:98'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target dev='tap3e0191b3-54'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c271,c793</label>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c271,c793</imagelabel>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.705 2 INFO nova.virt.libvirt.driver [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully detached device tap3d7e333e-5e from instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13 from the live domain config.
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.707 2 DEBUG nova.virt.libvirt.vif [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.707 2 DEBUG nova.network.os_vif_util [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:56:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:31.707 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 582d804a-4891-4cd3-8540-2cfd9f71444e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.708 2 DEBUG nova.network.os_vif_util [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.709 2 DEBUG os_vif [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:56:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:31.709 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[16c9af7a-777d-400c-82c2-600ecbb866c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d7e333e-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:31.710 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e namespace which is not needed anymore#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.722 2 INFO os_vif [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e')#033[00m
Oct  2 04:56:31 np0005465604 nova_compute[260603]: 2025-10-02 08:56:31.723 2 DEBUG nova.virt.libvirt.guest [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:56:31</nova:creationTime>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct  2 04:56:31 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:56:31 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:56:31 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:56:31 np0005465604 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [NOTICE]   (395578) : haproxy version is 2.8.14-c23fe91
Oct  2 04:56:31 np0005465604 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [NOTICE]   (395578) : path to executable is /usr/sbin/haproxy
Oct  2 04:56:31 np0005465604 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [WARNING]  (395578) : Exiting Master process...
Oct  2 04:56:31 np0005465604 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [ALERT]    (395578) : Current worker (395585) exited with code 143 (Terminated)
Oct  2 04:56:31 np0005465604 neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e[395545]: [WARNING]  (395578) : All workers exited. Exiting... (0)
Oct  2 04:56:31 np0005465604 systemd[1]: libpod-47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5.scope: Deactivated successfully.
Oct  2 04:56:31 np0005465604 podman[395615]: 2025-10-02 08:56:31.908067596 +0000 UTC m=+0.070012480 container died 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:56:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5-userdata-shm.mount: Deactivated successfully.
Oct  2 04:56:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8c9dfa101d84a32dcff657b7d40a07ab3dd078bf7004b1c5d61a78dcd0c932fc-merged.mount: Deactivated successfully.
Oct  2 04:56:31 np0005465604 podman[395615]: 2025-10-02 08:56:31.973179961 +0000 UTC m=+0.135124785 container cleanup 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 04:56:32 np0005465604 systemd[1]: libpod-conmon-47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5.scope: Deactivated successfully.
Oct  2 04:56:32 np0005465604 podman[395645]: 2025-10-02 08:56:32.076777605 +0000 UTC m=+0.063614268 container remove 47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 04:56:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.083 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7bdcbe-f569-4839-9a75-53fc4ee57439]: (4, ('Thu Oct  2 08:56:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e (47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5)\n47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5\nThu Oct  2 08:56:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e (47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5)\n47bbe2225928b23a00e9608daa7400a8b1bc5a5b475bce91ad19e525ffe6dac5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.085 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3314cb5e-0c4d-425c-8540-261463de944a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.087 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap582d804a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:32 np0005465604 kernel: tap582d804a-40: left promiscuous mode
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.107 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[896028fe-5e32-4cfe-b75a-15a7b30e5ff8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.136 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[99bbda72-4621-41f0-881c-9ab2c40af85e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.137 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e72ddf7e-eb3c-4e97-9f1e-c0e31aa0dd89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.155 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b67af64a-d9d3-4c1a-bbda-cc688c8fa8cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626882, 'reachable_time': 16983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395660, 'error': None, 'target': 'ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.158 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-582d804a-4891-4cd3-8540-2cfd9f71444e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:56:32 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:32.158 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[dbad63d6-d65e-4eb3-94f8-8a72ea0f0843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:32 np0005465604 systemd[1]: run-netns-ovnmeta\x2d582d804a\x2d4891\x2d4cd3\x2d8540\x2d2cfd9f71444e.mount: Deactivated successfully.
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.400 2 DEBUG nova.network.neutron [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updated VIF entry in instance network info cache for port 3d7e333e-5ee7-4373-b49b-3f78fb696f43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.401 2 DEBUG nova.network.neutron [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.420 2 DEBUG oslo_concurrency.lockutils [req-73bc643f-eb0a-4f0e-a839-b796c7b2573b req-d3e9fb38-c127-4ec0-bfd1-f6d621d67e3d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.445 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.445 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.446 2 DEBUG nova.network.neutron [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.483 2 DEBUG nova.compute.manager [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-deleted-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.483 2 INFO nova.compute.manager [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Neutron deleted interface 3d7e333e-5ee7-4373-b49b-3f78fb696f43; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.484 2 DEBUG nova.network.neutron [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.506 2 DEBUG nova.objects.instance [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'system_metadata' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.532 2 DEBUG nova.objects.instance [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'flavor' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.553 2 DEBUG nova.virt.libvirt.vif [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.554 2 DEBUG nova.network.os_vif_util [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.554 2 DEBUG nova.network.os_vif_util [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.558 2 DEBUG nova.virt.libvirt.guest [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.562 2 DEBUG nova.virt.libvirt.guest [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface>not found in domain: <domain type='kvm' id='160'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <name>instance-0000007e</name>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <uuid>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</uuid>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:56:31</nova:creationTime>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:56:32 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='serial'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='uuid'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk' index='2'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config' index='1'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:00:df:98'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target dev='tap3e0191b3-54'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c271,c793</label>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c271,c793</imagelabel>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:56:32 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:56:32 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.562 2 DEBUG nova.virt.libvirt.guest [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.567 2 DEBUG nova.virt.libvirt.guest [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:26:a9:81"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3d7e333e-5e"/></interface> not found in domain: <domain type='kvm' id='160'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <name>instance-0000007e</name>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <uuid>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</uuid>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:56:31</nova:creationTime>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:56:32 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <resource>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </resource>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='serial'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='uuid'>d84a3f3a-a7fb-4222-9b59-c51b64d74a13</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk' index='2'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_disk.config' index='1'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </controller>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:00:df:98'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target dev='tap3e0191b3-54'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      </target>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13/console.log' append='off'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </console>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </input>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c271,c793</label>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c271,c793</imagelabel>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 04:56:32 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:56:32 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.568 2 WARNING nova.virt.libvirt.driver [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Detaching interface fa:16:3e:26:a9:81 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap3d7e333e-5e' not found.#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.568 2 DEBUG nova.virt.libvirt.vif [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.569 2 DEBUG nova.network.os_vif_util [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "address": "fa:16:3e:26:a9:81", "network": {"id": "582d804a-4891-4cd3-8540-2cfd9f71444e", "bridge": "br-int", "label": "tempest-network-smoke--1505955463", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d7e333e-5e", "ovs_interfaceid": "3d7e333e-5ee7-4373-b49b-3f78fb696f43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.569 2 DEBUG nova.network.os_vif_util [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.570 2 DEBUG os_vif [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.571 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d7e333e-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.575 2 INFO os_vif [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a9:81,bridge_name='br-int',has_traffic_filtering=True,id=3d7e333e-5ee7-4373-b49b-3f78fb696f43,network=Network(582d804a-4891-4cd3-8540-2cfd9f71444e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d7e333e-5e')#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.575 2 DEBUG nova.virt.libvirt.guest [req-0a92d2b8-3cf6-4363-90d9-df243a537c17 req-786eb6a3-9099-40ca-bbb3-296046be497c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1254556339</nova:name>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:56:32</nova:creationTime>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    <nova:port uuid="3e0191b3-5405-4fe1-ba86-56a0b092a5d6">
Oct  2 04:56:32 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:56:32 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:56:32 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:56:32 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.871 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.871 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.872 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.872 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.872 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.872 2 WARNING nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.873 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-unplugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.873 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.873 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.873 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.873 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-unplugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.874 2 WARNING nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-unplugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.874 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.874 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.874 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.875 2 DEBUG oslo_concurrency.lockutils [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.875 2 DEBUG nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:56:32 np0005465604 nova_compute[260603]: 2025-10-02 08:56:32.875 2 WARNING nova.compute.manager [req-64f4def4-5fb6-4b2d-86b2-e3054ffd044f req-8f134600-6004-4869-9070-863f4edbfc93 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-plugged-3d7e333e-5ee7-4373-b49b-3f78fb696f43 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:56:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Oct  2 04:56:33 np0005465604 nova_compute[260603]: 2025-10-02 08:56:33.801 2 INFO nova.network.neutron [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Port 3d7e333e-5ee7-4373-b49b-3f78fb696f43 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 04:56:33 np0005465604 nova_compute[260603]: 2025-10-02 08:56:33.801 2 DEBUG nova.network.neutron [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:56:33 np0005465604 nova_compute[260603]: 2025-10-02 08:56:33.835 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:56:33 np0005465604 nova_compute[260603]: 2025-10-02 08:56:33.855 2 DEBUG oslo_concurrency.lockutils [None req-cbe96986-b4f0-41f8-9a70-ccf1871fa3c4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-d84a3f3a-a7fb-4222-9b59-c51b64d74a13-3d7e333e-5ee7-4373-b49b-3f78fb696f43" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:33 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:33Z|01367|binding|INFO|Releasing lport ab6e362e-db4a-479c-a0fc-ea7c3db44980 from this chassis (sb_readonly=0)
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.741 2 DEBUG nova.compute.manager [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.741 2 DEBUG nova.compute.manager [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing instance network info cache due to event network-changed-3e0191b3-5405-4fe1-ba86-56a0b092a5d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.742 2 DEBUG oslo_concurrency.lockutils [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.742 2 DEBUG oslo_concurrency.lockutils [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.743 2 DEBUG nova.network.neutron [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Refreshing network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:56:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:34.837 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.875 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.876 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.876 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.877 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.877 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.879 2 INFO nova.compute.manager [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Terminating instance#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.881 2 DEBUG nova.compute.manager [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:56:34 np0005465604 kernel: tap3e0191b3-54 (unregistering): left promiscuous mode
Oct  2 04:56:34 np0005465604 NetworkManager[45129]: <info>  [1759395394.9731] device (tap3e0191b3-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:34 np0005465604 nova_compute[260603]: 2025-10-02 08:56:34.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:34Z|01368|binding|INFO|Releasing lport 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 from this chassis (sb_readonly=0)
Oct  2 04:56:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:34Z|01369|binding|INFO|Setting lport 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 down in Southbound
Oct  2 04:56:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:56:34Z|01370|binding|INFO|Removing iface tap3e0191b3-54 ovn-installed in OVS
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:35 np0005465604 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct  2 04:56:35 np0005465604 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007e.scope: Consumed 15.290s CPU time.
Oct  2 04:56:35 np0005465604 systemd-machined[214636]: Machine qemu-160-instance-0000007e terminated.
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.071 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:df:98 10.100.0.3'], port_security=['fa:16:3e:00:df:98 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd84a3f3a-a7fb-4222-9b59-c51b64d74a13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ca3ac45-6851-48dd-917a-457549b659ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f31abdd-db09-4203-b1b9-483c21cac467', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=95f926d6-852b-4212-98b8-ed01a74b48f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3e0191b3-5405-4fe1-ba86-56a0b092a5d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.073 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6 in datapath 0ca3ac45-6851-48dd-917a-457549b659ba unbound from our chassis#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.075 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ca3ac45-6851-48dd-917a-457549b659ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.076 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6efac0-f3df-46fe-839c-58d715bf3c59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.077 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba namespace which is not needed anymore#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.152 2 INFO nova.virt.libvirt.driver [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Instance destroyed successfully.#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.153 2 DEBUG nova.objects.instance [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid d84a3f3a-a7fb-4222-9b59-c51b64d74a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.169 2 DEBUG nova.virt.libvirt.vif [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:55:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1254556339',display_name='tempest-TestNetworkBasicOps-server-1254556339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1254556339',id=126,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCV6ppIFHQsRK8BqpoHx4hB1U2Nvc+5EznbB1uzi/14IuMPKrn5t7EhvgkbYfjK4hho3uCvJrZSyE+lc9+zXrc8+KyMSVz0PD8Wom9eq1MMNuY+jwkMfim2/72V2zzH01Q==',key_name='tempest-TestNetworkBasicOps-2002344881',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:55:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-lv1agx0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:55:57Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=d84a3f3a-a7fb-4222-9b59-c51b64d74a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.170 2 DEBUG nova.network.os_vif_util [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.171 2 DEBUG nova.network.os_vif_util [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.172 2 DEBUG os_vif [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e0191b3-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.179 2 INFO os_vif [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:df:98,bridge_name='br-int',has_traffic_filtering=True,id=3e0191b3-5405-4fe1-ba86-56a0b092a5d6,network=Network(0ca3ac45-6851-48dd-917a-457549b659ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e0191b3-54')#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:35 np0005465604 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [NOTICE]   (395321) : haproxy version is 2.8.14-c23fe91
Oct  2 04:56:35 np0005465604 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [NOTICE]   (395321) : path to executable is /usr/sbin/haproxy
Oct  2 04:56:35 np0005465604 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [WARNING]  (395321) : Exiting Master process...
Oct  2 04:56:35 np0005465604 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [ALERT]    (395321) : Current worker (395323) exited with code 143 (Terminated)
Oct  2 04:56:35 np0005465604 neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba[395316]: [WARNING]  (395321) : All workers exited. Exiting... (0)
Oct  2 04:56:35 np0005465604 systemd[1]: libpod-117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60.scope: Deactivated successfully.
Oct  2 04:56:35 np0005465604 podman[395711]: 2025-10-02 08:56:35.267548223 +0000 UTC m=+0.052056241 container died 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:56:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60-userdata-shm.mount: Deactivated successfully.
Oct  2 04:56:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-931c6303f091c199c5789a98492e9e4f99efc35fbf41e4ee1e289eeeb8eb185b-merged.mount: Deactivated successfully.
Oct  2 04:56:35 np0005465604 podman[395711]: 2025-10-02 08:56:35.312887651 +0000 UTC m=+0.097395679 container cleanup 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 04:56:35 np0005465604 systemd[1]: libpod-conmon-117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60.scope: Deactivated successfully.
Oct  2 04:56:35 np0005465604 podman[395744]: 2025-10-02 08:56:35.405047082 +0000 UTC m=+0.063390250 container remove 117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.413 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c61ffbb5-6a6b-45a0-8565-b6e341812377]: (4, ('Thu Oct  2 08:56:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba (117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60)\n117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60\nThu Oct  2 08:56:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba (117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60)\n117d99daed4082d840b5a5374384a8e34f7dc332e23ad4736a994f4f93ab5f60\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.417 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4621ec30-7c7a-4a90-94dd-203c50871686]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.418 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ca3ac45-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:35 np0005465604 kernel: tap0ca3ac45-60: left promiscuous mode
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.453 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9dd8d3-67d4-42c2-a729-78ad46f9a152]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.482 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[216a8a4d-c66c-43d6-b54b-ae5972a58eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.484 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86a113c8-1c28-4959-bdca-0fe582ac4804]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.513 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7540c1-0193-416a-a753-5f67a4597255]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623363, 'reachable_time': 35620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395760, 'error': None, 'target': 'ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.516 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0ca3ac45-6851-48dd-917a-457549b659ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:56:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:56:35.516 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[cac5e413-b714-4791-a0ce-64de4113d8ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:56:35 np0005465604 systemd[1]: run-netns-ovnmeta\x2d0ca3ac45\x2d6851\x2d48dd\x2d917a\x2d457549b659ba.mount: Deactivated successfully.
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.596 2 INFO nova.virt.libvirt.driver [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Deleting instance files /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_del#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.596 2 INFO nova.virt.libvirt.driver [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Deletion of /var/lib/nova/instances/d84a3f3a-a7fb-4222-9b59-c51b64d74a13_del complete#033[00m
Oct  2 04:56:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1023 B/s wr, 0 op/s
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.799 2 INFO nova.compute.manager [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.801 2 DEBUG oslo.service.loopingcall [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.801 2 DEBUG nova.compute.manager [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:56:35 np0005465604 nova_compute[260603]: 2025-10-02 08:56:35.802 2 DEBUG nova.network.neutron [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.573 2 DEBUG nova.network.neutron [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updated VIF entry in instance network info cache for port 3e0191b3-5405-4fe1-ba86-56a0b092a5d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.574 2 DEBUG nova.network.neutron [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [{"id": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "address": "fa:16:3e:00:df:98", "network": {"id": "0ca3ac45-6851-48dd-917a-457549b659ba", "bridge": "br-int", "label": "tempest-network-smoke--1411041182", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e0191b3-54", "ovs_interfaceid": "3e0191b3-5405-4fe1-ba86-56a0b092a5d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.633 2 DEBUG oslo_concurrency.lockutils [req-bbcb895b-fd4f-4c0c-a1f1-8990303da53c req-c61bcf2e-02d0-41e4-914c-bfaf8bc1515d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d84a3f3a-a7fb-4222-9b59-c51b64d74a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.683 2 DEBUG nova.network.neutron [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.746 2 INFO nova.compute.manager [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Took 0.94 seconds to deallocate network for instance.#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.822 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.823 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.853 2 DEBUG nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-unplugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.854 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.854 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.855 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.855 2 DEBUG nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-unplugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.855 2 WARNING nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-unplugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.856 2 DEBUG nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.856 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.857 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.857 2 DEBUG oslo_concurrency.lockutils [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.857 2 DEBUG nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] No waiting events found dispatching network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.857 2 WARNING nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received unexpected event network-vif-plugged-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.858 2 DEBUG nova.compute.manager [req-ce948b08-c948-4589-a0ce-1e217fd009ce req-dc4f6baf-4f58-4ff2-acf0-71768c55e174 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Received event network-vif-deleted-3e0191b3-5405-4fe1-ba86-56a0b092a5d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:56:36 np0005465604 nova_compute[260603]: 2025-10-02 08:56:36.884 2 DEBUG oslo_concurrency.processutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:56:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:56:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557312375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:56:37 np0005465604 nova_compute[260603]: 2025-10-02 08:56:37.321 2 DEBUG oslo_concurrency.processutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:56:37 np0005465604 nova_compute[260603]: 2025-10-02 08:56:37.331 2 DEBUG nova.compute.provider_tree [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:56:37 np0005465604 nova_compute[260603]: 2025-10-02 08:56:37.353 2 DEBUG nova.scheduler.client.report [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:56:37 np0005465604 nova_compute[260603]: 2025-10-02 08:56:37.388 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:37 np0005465604 nova_compute[260603]: 2025-10-02 08:56:37.430 2 INFO nova.scheduler.client.report [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance d84a3f3a-a7fb-4222-9b59-c51b64d74a13#033[00m
Oct  2 04:56:37 np0005465604 nova_compute[260603]: 2025-10-02 08:56:37.501 2 DEBUG oslo_concurrency.lockutils [None req-a26c49c2-a9fa-4cdc-afef-16d313b1f3c9 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "d84a3f3a-a7fb-4222-9b59-c51b64d74a13" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 66 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Oct  2 04:56:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002623116697597187 of space, bias 1.0, pg target 0.0786935009279156 quantized to 32 (current 32)
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:56:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:56:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  2 04:56:40 np0005465604 nova_compute[260603]: 2025-10-02 08:56:40.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:40 np0005465604 nova_compute[260603]: 2025-10-02 08:56:40.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:40 np0005465604 nova_compute[260603]: 2025-10-02 08:56:40.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:40 np0005465604 nova_compute[260603]: 2025-10-02 08:56:40.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:56:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1458 writes, 6553 keys, 1458 commit groups, 1.0 writes per commit group, ingest: 9.16 MB, 0.02 MB/s#012Interval WAL: 1458 writes, 1458 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    109.7      0.55              0.23        34    0.016       0      0       0.0       0.0#012  L6      1/0    8.07 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.5    165.9    138.9      1.94              0.93        33    0.059    193K    18K       0.0       0.0#012 Sum      1/0    8.07 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.5    129.2    132.5      2.49              1.16        67    0.037    193K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.1    124.9    127.2      0.42              0.22        10    0.042     36K   2569       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    165.9    138.9      1.94              0.93        33    0.059    193K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    110.7      0.54              0.23        33    0.017       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.059, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.32 GB write, 0.08 MB/s write, 0.31 GB read, 0.08 MB/s read, 2.5 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 36.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000285 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2348,34.79 MB,11.4427%) FilterBlock(68,549.05 KB,0.176375%) IndexBlock(68,930.86 KB,0.299027%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 04:56:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  2 04:56:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:43 np0005465604 nova_compute[260603]: 2025-10-02 08:56:43.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:56:43 np0005465604 nova_compute[260603]: 2025-10-02 08:56:43.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:56:43 np0005465604 nova_compute[260603]: 2025-10-02 08:56:43.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:56:43 np0005465604 nova_compute[260603]: 2025-10-02 08:56:43.535 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:56:43 np0005465604 nova_compute[260603]: 2025-10-02 08:56:43.536 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:56:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  2 04:56:45 np0005465604 nova_compute[260603]: 2025-10-02 08:56:45.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:45 np0005465604 nova_compute[260603]: 2025-10-02 08:56:45.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 04:56:47 np0005465604 nova_compute[260603]: 2025-10-02 08:56:47.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:56:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 04:56:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:49 np0005465604 nova_compute[260603]: 2025-10-02 08:56:49.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:56:49 np0005465604 nova_compute[260603]: 2025-10-02 08:56:49.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:49 np0005465604 nova_compute[260603]: 2025-10-02 08:56:49.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:49 np0005465604 nova_compute[260603]: 2025-10-02 08:56:49.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:49 np0005465604 nova_compute[260603]: 2025-10-02 08:56:49.550 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:56:49 np0005465604 nova_compute[260603]: 2025-10-02 08:56:49.550 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:56:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 682 B/s wr, 8 op/s
Oct  2 04:56:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:56:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/123398550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.073 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.151 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395395.1505415, d84a3f3a-a7fb-4222-9b59-c51b64d74a13 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.152 2 INFO nova.compute.manager [-] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.175 2 DEBUG nova.compute.manager [None req-ad7ce0f9-e662-4aa1-b6cd-dcb8f72785a9 - - - - - -] [instance: d84a3f3a-a7fb-4222-9b59-c51b64d74a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.384 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.386 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3693MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.387 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.388 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.490 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.490 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:56:50 np0005465604 nova_compute[260603]: 2025-10-02 08:56:50.516 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:56:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:56:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738804684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:56:51 np0005465604 nova_compute[260603]: 2025-10-02 08:56:51.086 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:56:51 np0005465604 nova_compute[260603]: 2025-10-02 08:56:51.096 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:56:51 np0005465604 nova_compute[260603]: 2025-10-02 08:56:51.122 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:56:51 np0005465604 nova_compute[260603]: 2025-10-02 08:56:51.161 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:56:51 np0005465604 nova_compute[260603]: 2025-10-02 08:56:51.162 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:56:52 np0005465604 nova_compute[260603]: 2025-10-02 08:56:52.164 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:56:52 np0005465604 nova_compute[260603]: 2025-10-02 08:56:52.165 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:56:52 np0005465604 nova_compute[260603]: 2025-10-02 08:56:52.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:56:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:56:55 np0005465604 podman[395833]: 2025-10-02 08:56:55.063939958 +0000 UTC m=+0.112752897 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 04:56:55 np0005465604 podman[395832]: 2025-10-02 08:56:55.097277345 +0000 UTC m=+0.152686722 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:56:55 np0005465604 nova_compute[260603]: 2025-10-02 08:56:55.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:55 np0005465604 nova_compute[260603]: 2025-10-02 08:56:55.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:56:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:56:56 np0005465604 nova_compute[260603]: 2025-10-02 08:56:56.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:56:56 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8443ee5c-ef44-463d-9d4e-05023784aa43 does not exist
Oct  2 04:56:56 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a5afbcef-4129-4616-8141-bdabc7eb12eb does not exist
Oct  2 04:56:56 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9b68f3fd-6814-4ec4-8764-3e35dd67e807 does not exist
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:56:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:56:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:56:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:56:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:56:57 np0005465604 nova_compute[260603]: 2025-10-02 08:56:57.420 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:57 np0005465604 nova_compute[260603]: 2025-10-02 08:56:57.420 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:57 np0005465604 nova_compute[260603]: 2025-10-02 08:56:57.449 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:56:57 np0005465604 podman[396148]: 2025-10-02 08:56:57.521353047 +0000 UTC m=+0.052368562 container create 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:56:57 np0005465604 nova_compute[260603]: 2025-10-02 08:56:57.547 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:57 np0005465604 nova_compute[260603]: 2025-10-02 08:56:57.549 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:57 np0005465604 nova_compute[260603]: 2025-10-02 08:56:57.559 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:56:57 np0005465604 nova_compute[260603]: 2025-10-02 08:56:57.559 2 INFO nova.compute.claims [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:56:57 np0005465604 systemd[1]: Started libpod-conmon-36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f.scope.
Oct  2 04:56:57 np0005465604 podman[396148]: 2025-10-02 08:56:57.496590311 +0000 UTC m=+0.027605796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:56:57 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:56:57 np0005465604 podman[396148]: 2025-10-02 08:56:57.648607681 +0000 UTC m=+0.179623186 container init 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 04:56:57 np0005465604 podman[396148]: 2025-10-02 08:56:57.660973674 +0000 UTC m=+0.191989189 container start 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:56:57 np0005465604 podman[396148]: 2025-10-02 08:56:57.664444424 +0000 UTC m=+0.195459939 container attach 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 04:56:57 np0005465604 peaceful_swirles[396164]: 167 167
Oct  2 04:56:57 np0005465604 systemd[1]: libpod-36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f.scope: Deactivated successfully.
Oct  2 04:56:57 np0005465604 conmon[396164]: conmon 36670a450c502fe0509c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f.scope/container/memory.events
Oct  2 04:56:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:56:57 np0005465604 podman[396169]: 2025-10-02 08:56:57.735206846 +0000 UTC m=+0.041699102 container died 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:56:57 np0005465604 nova_compute[260603]: 2025-10-02 08:56:57.738 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:56:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bd3b859adf3aa941c183d995a77f083c383538817fdb0ed9c52ff96281600239-merged.mount: Deactivated successfully.
Oct  2 04:56:57 np0005465604 podman[396169]: 2025-10-02 08:56:57.779702187 +0000 UTC m=+0.086194453 container remove 36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:56:57 np0005465604 systemd[1]: libpod-conmon-36670a450c502fe0509cdbba0bfc3011a0815177a268fb4579b5cd1b773d7d7f.scope: Deactivated successfully.
Oct  2 04:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:56:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:56:58 np0005465604 podman[396211]: 2025-10-02 08:56:58.052547037 +0000 UTC m=+0.065728844 container create 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 04:56:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:56:58 np0005465604 systemd[1]: Started libpod-conmon-9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5.scope.
Oct  2 04:56:58 np0005465604 podman[396211]: 2025-10-02 08:56:58.032105889 +0000 UTC m=+0.045287726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:56:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:56:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:56:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:56:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:56:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:56:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:56:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:56:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637743910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:56:58 np0005465604 podman[396211]: 2025-10-02 08:56:58.216090903 +0000 UTC m=+0.229272790 container init 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 04:56:58 np0005465604 podman[396211]: 2025-10-02 08:56:58.225959445 +0000 UTC m=+0.239141252 container start 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 04:56:58 np0005465604 podman[396211]: 2025-10-02 08:56:58.229710544 +0000 UTC m=+0.242892441 container attach 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.232 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.242 2 DEBUG nova.compute.provider_tree [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.260 2 DEBUG nova.scheduler.client.report [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.293 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.294 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.371 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.372 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.401 2 INFO nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.422 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.545 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.547 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.548 2 INFO nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Creating image(s)#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.583 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.621 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.658 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.663 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.724 2 DEBUG nova.policy [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.773 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.775 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.775 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.776 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.801 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:56:58 np0005465604 nova_compute[260603]: 2025-10-02 08:56:58.806 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:56:59 np0005465604 nova_compute[260603]: 2025-10-02 08:56:59.136 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:56:59 np0005465604 nova_compute[260603]: 2025-10-02 08:56:59.240 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:56:59 np0005465604 relaxed_solomon[396227]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:56:59 np0005465604 relaxed_solomon[396227]: --> relative data size: 1.0
Oct  2 04:56:59 np0005465604 relaxed_solomon[396227]: --> All data devices are unavailable
Oct  2 04:56:59 np0005465604 nova_compute[260603]: 2025-10-02 08:56:59.362 2 DEBUG nova.objects.instance [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:56:59 np0005465604 systemd[1]: libpod-9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5.scope: Deactivated successfully.
Oct  2 04:56:59 np0005465604 podman[396211]: 2025-10-02 08:56:59.38373033 +0000 UTC m=+1.396912167 container died 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 04:56:59 np0005465604 nova_compute[260603]: 2025-10-02 08:56:59.384 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:56:59 np0005465604 systemd[1]: libpod-9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5.scope: Consumed 1.072s CPU time.
Oct  2 04:56:59 np0005465604 nova_compute[260603]: 2025-10-02 08:56:59.384 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Ensure instance console log exists: /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:56:59 np0005465604 nova_compute[260603]: 2025-10-02 08:56:59.385 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:56:59 np0005465604 nova_compute[260603]: 2025-10-02 08:56:59.385 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:56:59 np0005465604 nova_compute[260603]: 2025-10-02 08:56:59.385 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:56:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay-feed2b77eae23de7b5bce9565d9ce528bcccb103c9473f1f0789e1f7e1867cf9-merged.mount: Deactivated successfully.
Oct  2 04:56:59 np0005465604 podman[396211]: 2025-10-02 08:56:59.449604039 +0000 UTC m=+1.462785836 container remove 9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_solomon, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:56:59 np0005465604 systemd[1]: libpod-conmon-9c1e0451f2e82f99c63b66119502e3a17af2fbc7e794f31244d33ff18bd955f5.scope: Deactivated successfully.
Oct  2 04:56:59 np0005465604 nova_compute[260603]: 2025-10-02 08:56:59.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:56:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:56:59 np0005465604 nova_compute[260603]: 2025-10-02 08:56:59.720 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Successfully created port: d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:57:00 np0005465604 nova_compute[260603]: 2025-10-02 08:57:00.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:00 np0005465604 nova_compute[260603]: 2025-10-02 08:57:00.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:00 np0005465604 podman[396581]: 2025-10-02 08:57:00.342081304 +0000 UTC m=+0.050659818 container create 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 04:57:00 np0005465604 systemd[1]: Started libpod-conmon-0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4.scope.
Oct  2 04:57:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:57:00 np0005465604 podman[396581]: 2025-10-02 08:57:00.321314054 +0000 UTC m=+0.029892648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:57:00 np0005465604 podman[396581]: 2025-10-02 08:57:00.430207817 +0000 UTC m=+0.138786331 container init 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:57:00 np0005465604 podman[396581]: 2025-10-02 08:57:00.441904308 +0000 UTC m=+0.150482822 container start 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:57:00 np0005465604 podman[396581]: 2025-10-02 08:57:00.44543986 +0000 UTC m=+0.154018374 container attach 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 04:57:00 np0005465604 suspicious_pasteur[396599]: 167 167
Oct  2 04:57:00 np0005465604 systemd[1]: libpod-0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4.scope: Deactivated successfully.
Oct  2 04:57:00 np0005465604 podman[396581]: 2025-10-02 08:57:00.449304343 +0000 UTC m=+0.157882867 container died 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:57:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-aca76938ff4b65222f9daeaeb875b5820ae3c1be1bf7be924ce40d5c18999d45-merged.mount: Deactivated successfully.
Oct  2 04:57:00 np0005465604 podman[396581]: 2025-10-02 08:57:00.488191575 +0000 UTC m=+0.196770109 container remove 0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 04:57:00 np0005465604 systemd[1]: libpod-conmon-0d08a1277240c681d281898a9f0ae9da14ca9e171309caf80e58cffa1b5164f4.scope: Deactivated successfully.
Oct  2 04:57:00 np0005465604 nova_compute[260603]: 2025-10-02 08:57:00.684 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Successfully updated port: d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:57:00 np0005465604 podman[396623]: 2025-10-02 08:57:00.698427661 +0000 UTC m=+0.043545642 container create 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:57:00 np0005465604 nova_compute[260603]: 2025-10-02 08:57:00.704 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:57:00 np0005465604 nova_compute[260603]: 2025-10-02 08:57:00.705 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:57:00 np0005465604 nova_compute[260603]: 2025-10-02 08:57:00.705 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:57:00 np0005465604 systemd[1]: Started libpod-conmon-9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5.scope.
Oct  2 04:57:00 np0005465604 podman[396623]: 2025-10-02 08:57:00.677826227 +0000 UTC m=+0.022944208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:57:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:57:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4180288c5274885df5fb34efcc70ec10f7bab2e721f8a9a4c5fc661ee13fdc71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:57:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4180288c5274885df5fb34efcc70ec10f7bab2e721f8a9a4c5fc661ee13fdc71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:57:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4180288c5274885df5fb34efcc70ec10f7bab2e721f8a9a4c5fc661ee13fdc71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:57:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4180288c5274885df5fb34efcc70ec10f7bab2e721f8a9a4c5fc661ee13fdc71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:57:00 np0005465604 nova_compute[260603]: 2025-10-02 08:57:00.809 2 DEBUG nova.compute.manager [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:57:00 np0005465604 nova_compute[260603]: 2025-10-02 08:57:00.809 2 DEBUG nova.compute.manager [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing instance network info cache due to event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:57:00 np0005465604 nova_compute[260603]: 2025-10-02 08:57:00.810 2 DEBUG oslo_concurrency.lockutils [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:57:00 np0005465604 podman[396623]: 2025-10-02 08:57:00.810846505 +0000 UTC m=+0.155964536 container init 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:57:00 np0005465604 podman[396623]: 2025-10-02 08:57:00.819741017 +0000 UTC m=+0.164859018 container start 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:57:00 np0005465604 podman[396623]: 2025-10-02 08:57:00.824176528 +0000 UTC m=+0.169294579 container attach 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 04:57:00 np0005465604 nova_compute[260603]: 2025-10-02 08:57:00.899 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]: {
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:    "0": [
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:        {
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "devices": [
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "/dev/loop3"
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            ],
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_name": "ceph_lv0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_size": "21470642176",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "name": "ceph_lv0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "tags": {
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.cluster_name": "ceph",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.crush_device_class": "",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.encrypted": "0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.osd_id": "0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.type": "block",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.vdo": "0"
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            },
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "type": "block",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "vg_name": "ceph_vg0"
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:        }
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:    ],
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:    "1": [
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:        {
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "devices": [
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "/dev/loop4"
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            ],
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_name": "ceph_lv1",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_size": "21470642176",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "name": "ceph_lv1",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "tags": {
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.cluster_name": "ceph",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.crush_device_class": "",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.encrypted": "0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.osd_id": "1",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.type": "block",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.vdo": "0"
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            },
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "type": "block",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "vg_name": "ceph_vg1"
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:        }
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:    ],
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:    "2": [
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:        {
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "devices": [
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "/dev/loop5"
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            ],
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_name": "ceph_lv2",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_size": "21470642176",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "name": "ceph_lv2",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "tags": {
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.cluster_name": "ceph",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.crush_device_class": "",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.encrypted": "0",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.osd_id": "2",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.type": "block",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:                "ceph.vdo": "0"
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            },
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "type": "block",
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:            "vg_name": "ceph_vg2"
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:        }
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]:    ]
Oct  2 04:57:01 np0005465604 quirky_bhaskara[396639]: }
Oct  2 04:57:01 np0005465604 systemd[1]: libpod-9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5.scope: Deactivated successfully.
Oct  2 04:57:01 np0005465604 podman[396623]: 2025-10-02 08:57:01.557552918 +0000 UTC m=+0.902670879 container died 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:57:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4180288c5274885df5fb34efcc70ec10f7bab2e721f8a9a4c5fc661ee13fdc71-merged.mount: Deactivated successfully.
Oct  2 04:57:01 np0005465604 podman[396623]: 2025-10-02 08:57:01.631004267 +0000 UTC m=+0.976122238 container remove 9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 04:57:01 np0005465604 systemd[1]: libpod-conmon-9737fe7027245ae7a6cf9343b5aaa4cdf496e5a3405b0874d7261ca5055e1bf5.scope: Deactivated successfully.
Oct  2 04:57:01 np0005465604 podman[396658]: 2025-10-02 08:57:01.692800566 +0000 UTC m=+0.094929671 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:57:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 41 MiB data, 853 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:57:01 np0005465604 podman[396648]: 2025-10-02 08:57:01.716785036 +0000 UTC m=+0.116769433 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.879 2 DEBUG nova.network.neutron [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.908 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.908 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Instance network_info: |[{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.908 2 DEBUG oslo_concurrency.lockutils [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.908 2 DEBUG nova.network.neutron [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.911 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Start _get_guest_xml network_info=[{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.916 2 WARNING nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.921 2 DEBUG nova.virt.libvirt.host [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.922 2 DEBUG nova.virt.libvirt.host [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.925 2 DEBUG nova.virt.libvirt.host [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.925 2 DEBUG nova.virt.libvirt.host [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.925 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.926 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.926 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.926 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.927 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.927 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.927 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.927 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.927 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.928 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.928 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.928 2 DEBUG nova.virt.hardware [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:57:01 np0005465604 nova_compute[260603]: 2025-10-02 08:57:01.931 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:02 np0005465604 podman[396860]: 2025-10-02 08:57:02.31367477 +0000 UTC m=+0.050743070 container create c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 04:57:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:57:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2006322213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:57:02 np0005465604 systemd[1]: Started libpod-conmon-c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56.scope.
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.368 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:02 np0005465604 podman[396860]: 2025-10-02 08:57:02.291671542 +0000 UTC m=+0.028739842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:57:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:57:02 np0005465604 podman[396860]: 2025-10-02 08:57:02.400339087 +0000 UTC m=+0.137407367 container init c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:57:02 np0005465604 podman[396860]: 2025-10-02 08:57:02.410140258 +0000 UTC m=+0.147208518 container start c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.409 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:57:02 np0005465604 podman[396860]: 2025-10-02 08:57:02.413337349 +0000 UTC m=+0.150405609 container attach c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.413 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:02 np0005465604 funny_carson[396879]: 167 167
Oct  2 04:57:02 np0005465604 systemd[1]: libpod-c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56.scope: Deactivated successfully.
Oct  2 04:57:02 np0005465604 podman[396860]: 2025-10-02 08:57:02.418017948 +0000 UTC m=+0.155086298 container died c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 04:57:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ed77df4859749418d438cd2eb7d15d22706f844208b1287f7d6282d9b196ae71-merged.mount: Deactivated successfully.
Oct  2 04:57:02 np0005465604 podman[396860]: 2025-10-02 08:57:02.460555326 +0000 UTC m=+0.197623586 container remove c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:57:02 np0005465604 systemd[1]: libpod-conmon-c3a46ce8db945e727a7f232e7bc3be1d0ef8cbef6c72256135d8254332119b56.scope: Deactivated successfully.
Oct  2 04:57:02 np0005465604 podman[396921]: 2025-10-02 08:57:02.609793057 +0000 UTC m=+0.040109063 container create bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 04:57:02 np0005465604 systemd[1]: Started libpod-conmon-bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74.scope.
Oct  2 04:57:02 np0005465604 podman[396921]: 2025-10-02 08:57:02.591506428 +0000 UTC m=+0.021822444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:57:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:57:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d78b2ce007e66987db1213eb483ff70a259c848ec8290f42f3e9ffc223b7020/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:57:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d78b2ce007e66987db1213eb483ff70a259c848ec8290f42f3e9ffc223b7020/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:57:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d78b2ce007e66987db1213eb483ff70a259c848ec8290f42f3e9ffc223b7020/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:57:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d78b2ce007e66987db1213eb483ff70a259c848ec8290f42f3e9ffc223b7020/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:57:02 np0005465604 podman[396921]: 2025-10-02 08:57:02.713615459 +0000 UTC m=+0.143931535 container init bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:57:02 np0005465604 podman[396921]: 2025-10-02 08:57:02.724924417 +0000 UTC m=+0.155240433 container start bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:57:02 np0005465604 podman[396921]: 2025-10-02 08:57:02.729040768 +0000 UTC m=+0.159356804 container attach bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 04:57:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:57:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126124369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.885 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.888 2 DEBUG nova.virt.libvirt.vif [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:56:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1392343358',display_name='tempest-TestNetworkBasicOps-server-1392343358',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1392343358',id=127,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM7vAmstY/1wj8zilQusjnWxCH9HDy5ckxC0RSN69DfVAe97aVFxd/BU6NsgAvBbRW+DKoyLRAwETsr2HsjzpTSAXv0ZjguOo1+U6bZINpkB4yql6sk6skb6iUdvJViCMg==',key_name='tempest-TestNetworkBasicOps-399164218',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-k0j5xbkd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:56:58Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=44f2da71-ac35-415b-bb4b-6fbc3afe6cf9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.889 2 DEBUG nova.network.os_vif_util [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.889 2 DEBUG nova.network.os_vif_util [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.891 2 DEBUG nova.objects.instance [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.915 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  <uuid>44f2da71-ac35-415b-bb4b-6fbc3afe6cf9</uuid>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  <name>instance-0000007f</name>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-1392343358</nova:name>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:57:01</nova:creationTime>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <nova:port uuid="d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <entry name="serial">44f2da71-ac35-415b-bb4b-6fbc3afe6cf9</entry>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <entry name="uuid">44f2da71-ac35-415b-bb4b-6fbc3afe6cf9</entry>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:1a:a0:55"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <target dev="tapd9fd4c9a-6e"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/console.log" append="off"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:57:02 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:57:02 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:57:02 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:57:02 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.916 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Preparing to wait for external event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.917 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.917 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.917 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.918 2 DEBUG nova.virt.libvirt.vif [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:56:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1392343358',display_name='tempest-TestNetworkBasicOps-server-1392343358',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1392343358',id=127,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM7vAmstY/1wj8zilQusjnWxCH9HDy5ckxC0RSN69DfVAe97aVFxd/BU6NsgAvBbRW+DKoyLRAwETsr2HsjzpTSAXv0ZjguOo1+U6bZINpkB4yql6sk6skb6iUdvJViCMg==',key_name='tempest-TestNetworkBasicOps-399164218',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-k0j5xbkd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:56:58Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=44f2da71-ac35-415b-bb4b-6fbc3afe6cf9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.918 2 DEBUG nova.network.os_vif_util [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.918 2 DEBUG nova.network.os_vif_util [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.919 2 DEBUG os_vif [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.920 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9fd4c9a-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.924 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9fd4c9a-6e, col_values=(('external_ids', {'iface-id': 'd9fd4c9a-6e5f-4c8a-b17c-28f6fe795424', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:a0:55', 'vm-uuid': '44f2da71-ac35-415b-bb4b-6fbc3afe6cf9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:02 np0005465604 NetworkManager[45129]: <info>  [1759395422.9281] manager: (tapd9fd4c9a-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.935 2 INFO os_vif [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e')#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.987 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.988 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.988 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:1a:a0:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:57:02 np0005465604 nova_compute[260603]: 2025-10-02 08:57:02.988 2 INFO nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Using config drive#033[00m
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.004 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:57:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.156 2 DEBUG nova.network.neutron [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updated VIF entry in instance network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.157 2 DEBUG nova.network.neutron [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.176 2 DEBUG oslo_concurrency.lockutils [req-10812624-7780-4860-8fe9-8771af55d8df req-c6dba36a-7141-405d-b488-506d7e651edf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.352 2 INFO nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Creating config drive at /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config#033[00m
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.356 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5jh5is_6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.523 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5jh5is_6" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.552 2 DEBUG nova.storage.rbd_utils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.557 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.728 2 DEBUG oslo_concurrency.processutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.729 2 INFO nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Deleting local config drive /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9/disk.config because it was imported into RBD.#033[00m
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]: {
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "osd_id": 2,
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "type": "bluestore"
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:    },
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "osd_id": 1,
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "type": "bluestore"
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:    },
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "osd_id": 0,
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:        "type": "bluestore"
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]:    }
Oct  2 04:57:03 np0005465604 naughty_sinoussi[396956]: }
Oct  2 04:57:03 np0005465604 systemd[1]: libpod-bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74.scope: Deactivated successfully.
Oct  2 04:57:03 np0005465604 systemd[1]: libpod-bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74.scope: Consumed 1.023s CPU time.
Oct  2 04:57:03 np0005465604 podman[396921]: 2025-10-02 08:57:03.753082613 +0000 UTC m=+1.183398589 container died bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 04:57:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9d78b2ce007e66987db1213eb483ff70a259c848ec8290f42f3e9ffc223b7020-merged.mount: Deactivated successfully.
Oct  2 04:57:03 np0005465604 kernel: tapd9fd4c9a-6e: entered promiscuous mode
Oct  2 04:57:03 np0005465604 NetworkManager[45129]: <info>  [1759395423.8065] manager: (tapd9fd4c9a-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Oct  2 04:57:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:03Z|01371|binding|INFO|Claiming lport d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 for this chassis.
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:03Z|01372|binding|INFO|d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424: Claiming fa:16:3e:1a:a0:55 10.100.0.9
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:03 np0005465604 podman[396921]: 2025-10-02 08:57:03.819576732 +0000 UTC m=+1.249892708 container remove bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.825 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:a0:55 10.100.0.9'], port_security=['fa:16:3e:1a:a0:55 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '44f2da71-ac35-415b-bb4b-6fbc3afe6cf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50f973db-7622-438d-89a4-98949bc018c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '38d1b2fd-ca30-472b-9abc-fb39d0f98549', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6dd5c4a2-6437-4d79-b5d7-6dfc11d7b30a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.826 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 in datapath 50f973db-7622-438d-89a4-98949bc018c7 bound to our chassis#033[00m
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.827 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50f973db-7622-438d-89a4-98949bc018c7#033[00m
Oct  2 04:57:03 np0005465604 systemd[1]: libpod-conmon-bc06b9ccd1d5dc7f0750706d2fb61cb3d4b252e23db87b30012f8c501d20ff74.scope: Deactivated successfully.
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.840 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[491c54c4-557a-4417-a59f-24c2d248322b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.842 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50f973db-71 in ovnmeta-50f973db-7622-438d-89a4-98949bc018c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.843 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50f973db-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.844 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[629b2574-e653-4998-8384-cfb76e0cd022]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.844 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72729d87-e95e-4ec7-a70a-afe58c0ec815]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:03 np0005465604 systemd-udevd[397076]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:57:03 np0005465604 systemd-machined[214636]: New machine qemu-161-instance-0000007f.
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.855 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4461033e-2610-42f8-83f6-9e34e597cdf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:03 np0005465604 NetworkManager[45129]: <info>  [1759395423.8613] device (tapd9fd4c9a-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:57:03 np0005465604 NetworkManager[45129]: <info>  [1759395423.8620] device (tapd9fd4c9a-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:57:03 np0005465604 systemd[1]: Started Virtual Machine qemu-161-instance-0000007f.
Oct  2 04:57:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:57:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:57:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.885 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed5f0be-2455-470e-a21d-0f73244fb191]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:03Z|01373|binding|INFO|Setting lport d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 ovn-installed in OVS
Oct  2 04:57:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:03Z|01374|binding|INFO|Setting lport d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 up in Southbound
Oct  2 04:57:03 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 00cf357e-337a-4520-9f70-e52ecf4d11f6 does not exist
Oct  2 04:57:03 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 3f4e2b77-cfef-43b7-b133-5d599384b783 does not exist
Oct  2 04:57:03 np0005465604 nova_compute[260603]: 2025-10-02 08:57:03.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.942 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[515f21e0-0d26-46b6-b9e2-11d53d2b2433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.946 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b9843866-546a-439f-8fc8-20cb5ec5973e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:03 np0005465604 NetworkManager[45129]: <info>  [1759395423.9477] manager: (tap50f973db-70): new Veth device (/org/freedesktop/NetworkManager/Devices/545)
Oct  2 04:57:03 np0005465604 systemd-udevd[397079]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.974 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5ceaf987-36e2-4525-b763-0c0d3e9de4a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:03.977 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[20beffdb-6b71-4f41-b67e-96a832a68797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:04 np0005465604 NetworkManager[45129]: <info>  [1759395424.0003] device (tap50f973db-70): carrier: link connected
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.006 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[da5a044b-8d75-47a0-ab40-881956ef037e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.022 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29695432-4423-42ed-a6d8-e94e7a21b8d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50f973db-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:ed:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 389], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630260, 'reachable_time': 28777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397134, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.037 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5211b2-c9a5-4b04-a300-36df7b89be87]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:eddf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630260, 'tstamp': 630260}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397146, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.055 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd596933-489b-4bd8-9e98-1dbcf7a7af52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50f973db-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:ed:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 389], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630260, 'reachable_time': 28777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 397158, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.091 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[45562fe8-0b12-4b13-b1b4-09ebcc3cf3da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.170 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f42ac0c9-a155-4efe-b555-e67028bb14be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.172 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f973db-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.172 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.172 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50f973db-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:04 np0005465604 NetworkManager[45129]: <info>  [1759395424.1746] manager: (tap50f973db-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Oct  2 04:57:04 np0005465604 kernel: tap50f973db-70: entered promiscuous mode
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.178 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50f973db-70, col_values=(('external_ids', {'iface-id': '02679a4c-7fb0-490c-8428-8668ec8ddf0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:04Z|01375|binding|INFO|Releasing lport 02679a4c-7fb0-490c-8428-8668ec8ddf0b from this chassis (sb_readonly=0)
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.213 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50f973db-7622-438d-89a4-98949bc018c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50f973db-7622-438d-89a4-98949bc018c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.214 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[515f0575-cbef-4829-b204-391758fbeb1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.215 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-50f973db-7622-438d-89a4-98949bc018c7
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/50f973db-7622-438d-89a4-98949bc018c7.pid.haproxy
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 50f973db-7622-438d-89a4-98949bc018c7
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:57:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:04.215 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'env', 'PROCESS_TAG=haproxy-50f973db-7622-438d-89a4-98949bc018c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50f973db-7622-438d-89a4-98949bc018c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.620 2 DEBUG nova.compute.manager [req-9df5ef77-b2b4-4872-8c07-639917f3d955 req-aefec794-0706-4960-a95b-06ec842ffac8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.621 2 DEBUG oslo_concurrency.lockutils [req-9df5ef77-b2b4-4872-8c07-639917f3d955 req-aefec794-0706-4960-a95b-06ec842ffac8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.621 2 DEBUG oslo_concurrency.lockutils [req-9df5ef77-b2b4-4872-8c07-639917f3d955 req-aefec794-0706-4960-a95b-06ec842ffac8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.622 2 DEBUG oslo_concurrency.lockutils [req-9df5ef77-b2b4-4872-8c07-639917f3d955 req-aefec794-0706-4960-a95b-06ec842ffac8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.622 2 DEBUG nova.compute.manager [req-9df5ef77-b2b4-4872-8c07-639917f3d955 req-aefec794-0706-4960-a95b-06ec842ffac8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Processing event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:57:04 np0005465604 podman[397233]: 2025-10-02 08:57:04.633361392 +0000 UTC m=+0.053810638 container create ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 04:57:04 np0005465604 systemd[1]: Started libpod-conmon-ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f.scope.
Oct  2 04:57:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:57:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c715720152d167f38b377cf95f213b66460140411dee007ac5a683ebd4decaf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:57:04 np0005465604 podman[397233]: 2025-10-02 08:57:04.608170983 +0000 UTC m=+0.028620219 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:57:04 np0005465604 podman[397233]: 2025-10-02 08:57:04.711470198 +0000 UTC m=+0.131919544 container init ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:57:04 np0005465604 podman[397233]: 2025-10-02 08:57:04.716669823 +0000 UTC m=+0.137119099 container start ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 04:57:04 np0005465604 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [NOTICE]   (397252) : New worker (397254) forked
Oct  2 04:57:04 np0005465604 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [NOTICE]   (397252) : Loading success.
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.882 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.884 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395424.881668, 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.884 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] VM Started (Lifecycle Event)#033[00m
Oct  2 04:57:04 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:57:04 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.888 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.894 2 INFO nova.virt.libvirt.driver [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Instance spawned successfully.#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.895 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.920 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.928 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.934 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.935 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.936 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.937 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.937 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.938 2 DEBUG nova.virt.libvirt.driver [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.974 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.975 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395424.8824563, 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:57:04 np0005465604 nova_compute[260603]: 2025-10-02 08:57:04.975 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:57:05 np0005465604 nova_compute[260603]: 2025-10-02 08:57:05.008 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:57:05 np0005465604 nova_compute[260603]: 2025-10-02 08:57:05.013 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395424.8873615, 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:57:05 np0005465604 nova_compute[260603]: 2025-10-02 08:57:05.013 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:57:05 np0005465604 nova_compute[260603]: 2025-10-02 08:57:05.023 2 INFO nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Took 6.48 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:57:05 np0005465604 nova_compute[260603]: 2025-10-02 08:57:05.024 2 DEBUG nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:57:05 np0005465604 nova_compute[260603]: 2025-10-02 08:57:05.033 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:57:05 np0005465604 nova_compute[260603]: 2025-10-02 08:57:05.035 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:57:05 np0005465604 nova_compute[260603]: 2025-10-02 08:57:05.077 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:57:05 np0005465604 nova_compute[260603]: 2025-10-02 08:57:05.108 2 INFO nova.compute.manager [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Took 7.59 seconds to build instance.#033[00m
Oct  2 04:57:05 np0005465604 nova_compute[260603]: 2025-10-02 08:57:05.130 2 DEBUG oslo_concurrency.lockutils [None req-3f8380ed-3b2f-4e43-9ab6-1fd2b19dd6d1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:05 np0005465604 nova_compute[260603]: 2025-10-02 08:57:05.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:57:06 np0005465604 nova_compute[260603]: 2025-10-02 08:57:06.695 2 DEBUG nova.compute.manager [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:57:06 np0005465604 nova_compute[260603]: 2025-10-02 08:57:06.695 2 DEBUG oslo_concurrency.lockutils [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:06 np0005465604 nova_compute[260603]: 2025-10-02 08:57:06.695 2 DEBUG oslo_concurrency.lockutils [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:06 np0005465604 nova_compute[260603]: 2025-10-02 08:57:06.695 2 DEBUG oslo_concurrency.lockutils [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:06 np0005465604 nova_compute[260603]: 2025-10-02 08:57:06.696 2 DEBUG nova.compute.manager [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] No waiting events found dispatching network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:57:06 np0005465604 nova_compute[260603]: 2025-10-02 08:57:06.696 2 WARNING nova.compute.manager [req-eb32dccf-a9f8-4dd5-af35-bc514652f683 req-73d2aac8-70a4-479a-869c-d87946a4b073 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received unexpected event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:57:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Oct  2 04:57:07 np0005465604 nova_compute[260603]: 2025-10-02 08:57:07.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 04:57:09 np0005465604 nova_compute[260603]: 2025-10-02 08:57:09.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:09Z|01376|binding|INFO|Releasing lport 02679a4c-7fb0-490c-8428-8668ec8ddf0b from this chassis (sb_readonly=0)
Oct  2 04:57:09 np0005465604 NetworkManager[45129]: <info>  [1759395429.7798] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/547)
Oct  2 04:57:09 np0005465604 NetworkManager[45129]: <info>  [1759395429.7814] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Oct  2 04:57:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:09Z|01377|binding|INFO|Releasing lport 02679a4c-7fb0-490c-8428-8668ec8ddf0b from this chassis (sb_readonly=0)
Oct  2 04:57:09 np0005465604 nova_compute[260603]: 2025-10-02 08:57:09.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:09 np0005465604 nova_compute[260603]: 2025-10-02 08:57:09.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:10 np0005465604 nova_compute[260603]: 2025-10-02 08:57:10.145 2 DEBUG nova.compute.manager [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:57:10 np0005465604 nova_compute[260603]: 2025-10-02 08:57:10.145 2 DEBUG nova.compute.manager [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing instance network info cache due to event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:57:10 np0005465604 nova_compute[260603]: 2025-10-02 08:57:10.146 2 DEBUG oslo_concurrency.lockutils [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:57:10 np0005465604 nova_compute[260603]: 2025-10-02 08:57:10.146 2 DEBUG oslo_concurrency.lockutils [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:57:10 np0005465604 nova_compute[260603]: 2025-10-02 08:57:10.146 2 DEBUG nova.network.neutron [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:57:10 np0005465604 nova_compute[260603]: 2025-10-02 08:57:10.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 04:57:11 np0005465604 nova_compute[260603]: 2025-10-02 08:57:11.880 2 DEBUG nova.network.neutron [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updated VIF entry in instance network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:57:11 np0005465604 nova_compute[260603]: 2025-10-02 08:57:11.880 2 DEBUG nova.network.neutron [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:57:11 np0005465604 nova_compute[260603]: 2025-10-02 08:57:11.910 2 DEBUG oslo_concurrency.lockutils [req-f7cff3f7-bf9f-492b-ba07-0ebce4e65ca4 req-65e2004b-bfc5-45aa-bc37-7725563d709f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:57:12 np0005465604 nova_compute[260603]: 2025-10-02 08:57:12.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 04:57:15 np0005465604 nova_compute[260603]: 2025-10-02 08:57:15.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 88 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 04:57:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:17Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:a0:55 10.100.0.9
Oct  2 04:57:17 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:17Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:a0:55 10.100.0.9
Oct  2 04:57:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 103 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 96 op/s
Oct  2 04:57:17 np0005465604 nova_compute[260603]: 2025-10-02 08:57:17.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 114 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 929 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct  2 04:57:20 np0005465604 nova_compute[260603]: 2025-10-02 08:57:20.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 114 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct  2 04:57:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:57:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1835829058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:57:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:57:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1835829058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:57:22 np0005465604 nova_compute[260603]: 2025-10-02 08:57:22.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:23 np0005465604 nova_compute[260603]: 2025-10-02 08:57:23.443 2 INFO nova.compute.manager [None req-788b3887-a2da-42a8-a28c-6c0ed17f5037 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Get console output#033[00m
Oct  2 04:57:23 np0005465604 nova_compute[260603]: 2025-10-02 08:57:23.449 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:57:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:57:25 np0005465604 nova_compute[260603]: 2025-10-02 08:57:25.169 2 DEBUG nova.compute.manager [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:57:25 np0005465604 nova_compute[260603]: 2025-10-02 08:57:25.170 2 DEBUG nova.compute.manager [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing instance network info cache due to event network-changed-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:57:25 np0005465604 nova_compute[260603]: 2025-10-02 08:57:25.170 2 DEBUG oslo_concurrency.lockutils [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:57:25 np0005465604 nova_compute[260603]: 2025-10-02 08:57:25.170 2 DEBUG oslo_concurrency.lockutils [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:57:25 np0005465604 nova_compute[260603]: 2025-10-02 08:57:25.170 2 DEBUG nova.network.neutron [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Refreshing network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:57:25 np0005465604 nova_compute[260603]: 2025-10-02 08:57:25.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:57:26 np0005465604 podman[397265]: 2025-10-02 08:57:26.000102593 +0000 UTC m=+0.058903508 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:57:26 np0005465604 podman[397264]: 2025-10-02 08:57:26.020550081 +0000 UTC m=+0.084404886 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 04:57:26 np0005465604 nova_compute[260603]: 2025-10-02 08:57:26.210 2 DEBUG nova.network.neutron [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updated VIF entry in instance network info cache for port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:57:26 np0005465604 nova_compute[260603]: 2025-10-02 08:57:26.211 2 DEBUG nova.network.neutron [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:57:26 np0005465604 nova_compute[260603]: 2025-10-02 08:57:26.235 2 DEBUG oslo_concurrency.lockutils [req-68a68e13-83b6-403a-8967-8a380d198905 req-556357d6-85f5-4ab0-82c7-f2cac1e55ed0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:57:26 np0005465604 nova_compute[260603]: 2025-10-02 08:57:26.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:26.599 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:57:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:26.600 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:57:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:57:27 np0005465604 nova_compute[260603]: 2025-10-02 08:57:27.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:57:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:57:28
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log']
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:57:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:57:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:57:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 981 KiB/s wr, 41 op/s
Oct  2 04:57:30 np0005465604 nova_compute[260603]: 2025-10-02 08:57:30.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:30 np0005465604 nova_compute[260603]: 2025-10-02 08:57:30.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:57:30 np0005465604 nova_compute[260603]: 2025-10-02 08:57:30.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:57:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 41 KiB/s wr, 6 op/s
Oct  2 04:57:32 np0005465604 podman[397312]: 2025-10-02 08:57:32.008533654 +0000 UTC m=+0.064313919 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 04:57:32 np0005465604 podman[397311]: 2025-10-02 08:57:32.008547805 +0000 UTC m=+0.067390228 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 04:57:32 np0005465604 nova_compute[260603]: 2025-10-02 08:57:32.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:33 np0005465604 nova_compute[260603]: 2025-10-02 08:57:33.410 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:33 np0005465604 nova_compute[260603]: 2025-10-02 08:57:33.411 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:33 np0005465604 nova_compute[260603]: 2025-10-02 08:57:33.432 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:57:33 np0005465604 nova_compute[260603]: 2025-10-02 08:57:33.519 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:33 np0005465604 nova_compute[260603]: 2025-10-02 08:57:33.520 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:33 np0005465604 nova_compute[260603]: 2025-10-02 08:57:33.532 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:57:33 np0005465604 nova_compute[260603]: 2025-10-02 08:57:33.533 2 INFO nova.compute.claims [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:57:33 np0005465604 nova_compute[260603]: 2025-10-02 08:57:33.667 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 44 KiB/s wr, 6 op/s
Oct  2 04:57:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:57:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1406298978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.142 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.149 2 DEBUG nova.compute.provider_tree [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.349 2 DEBUG nova.scheduler.client.report [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.377 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.378 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.440 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.441 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.468 2 INFO nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.492 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:57:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:34.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.627 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.630 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.630 2 INFO nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Creating image(s)#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.667 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.707 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.744 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.750 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.810 2 DEBUG nova.policy [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:57:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.857 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.858 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.858 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.859 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.889 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:57:34 np0005465604 nova_compute[260603]: 2025-10-02 08:57:34.897 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 69862866-8141-4e02-bd49-4278bfdb7857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:35 np0005465604 nova_compute[260603]: 2025-10-02 08:57:35.261 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 69862866-8141-4e02-bd49-4278bfdb7857_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:35 np0005465604 nova_compute[260603]: 2025-10-02 08:57:35.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:35 np0005465604 nova_compute[260603]: 2025-10-02 08:57:35.373 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:57:35 np0005465604 nova_compute[260603]: 2025-10-02 08:57:35.499 2 DEBUG nova.objects.instance [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 69862866-8141-4e02-bd49-4278bfdb7857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:57:35 np0005465604 nova_compute[260603]: 2025-10-02 08:57:35.520 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:57:35 np0005465604 nova_compute[260603]: 2025-10-02 08:57:35.520 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Ensure instance console log exists: /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:57:35 np0005465604 nova_compute[260603]: 2025-10-02 08:57:35.521 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:35 np0005465604 nova_compute[260603]: 2025-10-02 08:57:35.521 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:35 np0005465604 nova_compute[260603]: 2025-10-02 08:57:35.522 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 121 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct  2 04:57:35 np0005465604 nova_compute[260603]: 2025-10-02 08:57:35.763 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Successfully created port: 81ef7b43-41dd-4a20-aa31-ed776a4299b0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:57:36 np0005465604 nova_compute[260603]: 2025-10-02 08:57:36.805 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Successfully updated port: 81ef7b43-41dd-4a20-aa31-ed776a4299b0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:57:36 np0005465604 nova_compute[260603]: 2025-10-02 08:57:36.831 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:57:36 np0005465604 nova_compute[260603]: 2025-10-02 08:57:36.832 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:57:36 np0005465604 nova_compute[260603]: 2025-10-02 08:57:36.832 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:57:36 np0005465604 nova_compute[260603]: 2025-10-02 08:57:36.934 2 DEBUG nova.compute.manager [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-changed-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:57:36 np0005465604 nova_compute[260603]: 2025-10-02 08:57:36.935 2 DEBUG nova.compute.manager [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Refreshing instance network info cache due to event network-changed-81ef7b43-41dd-4a20-aa31-ed776a4299b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:57:36 np0005465604 nova_compute[260603]: 2025-10-02 08:57:36.935 2 DEBUG oslo_concurrency.lockutils [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.067 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:57:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 146 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.1 MiB/s wr, 4 op/s
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.874 2 DEBUG nova.network.neutron [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updating instance_info_cache with network_info: [{"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.898 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.899 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Instance network_info: |[{"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.900 2 DEBUG oslo_concurrency.lockutils [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.900 2 DEBUG nova.network.neutron [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Refreshing network info cache for port 81ef7b43-41dd-4a20-aa31-ed776a4299b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.905 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Start _get_guest_xml network_info=[{"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.913 2 WARNING nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.926 2 DEBUG nova.virt.libvirt.host [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.927 2 DEBUG nova.virt.libvirt.host [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.932 2 DEBUG nova.virt.libvirt.host [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.933 2 DEBUG nova.virt.libvirt.host [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.934 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.934 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.935 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.936 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.936 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.936 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.937 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.937 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.938 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.938 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.939 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.939 2 DEBUG nova.virt.hardware [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:57:37 np0005465604 nova_compute[260603]: 2025-10-02 08:57:37.944 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:38 np0005465604 nova_compute[260603]: 2025-10-02 08:57:38.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:57:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882067832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:57:38 np0005465604 nova_compute[260603]: 2025-10-02 08:57:38.422 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:38 np0005465604 nova_compute[260603]: 2025-10-02 08:57:38.463 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:57:38 np0005465604 nova_compute[260603]: 2025-10-02 08:57:38.470 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:57:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/942956624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:57:38 np0005465604 nova_compute[260603]: 2025-10-02 08:57:38.943 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:38 np0005465604 nova_compute[260603]: 2025-10-02 08:57:38.945 2 DEBUG nova.virt.libvirt.vif [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:57:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-564779768',display_name='tempest-TestNetworkBasicOps-server-564779768',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-564779768',id=128,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhkz6+vSnvnT1XLzopmVXCIQtQIQCx9/0pOGIggxX0jLtkT3sEzs9rpRLJf4JbtSL6bjNbeilYykS+0k+nQ8Lt3eGZ1mjA3hrv/+JLu3lNSQ3HWi2ZIUcAuMwawnCYCeg==',key_name='tempest-TestNetworkBasicOps-620917724',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5t7zdpi1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:57:34Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=69862866-8141-4e02-bd49-4278bfdb7857,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:57:38 np0005465604 nova_compute[260603]: 2025-10-02 08:57:38.945 2 DEBUG nova.network.os_vif_util [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:57:38 np0005465604 nova_compute[260603]: 2025-10-02 08:57:38.946 2 DEBUG nova.network.os_vif_util [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:57:38 np0005465604 nova_compute[260603]: 2025-10-02 08:57:38.947 2 DEBUG nova.objects.instance [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69862866-8141-4e02-bd49-4278bfdb7857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009813318030865404 of space, bias 1.0, pg target 0.2943995409259621 quantized to 32 (current 32)
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:57:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.022 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  <uuid>69862866-8141-4e02-bd49-4278bfdb7857</uuid>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  <name>instance-00000080</name>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-564779768</nova:name>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:57:37</nova:creationTime>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <nova:port uuid="81ef7b43-41dd-4a20-aa31-ed776a4299b0">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <entry name="serial">69862866-8141-4e02-bd49-4278bfdb7857</entry>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <entry name="uuid">69862866-8141-4e02-bd49-4278bfdb7857</entry>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/69862866-8141-4e02-bd49-4278bfdb7857_disk">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/69862866-8141-4e02-bd49-4278bfdb7857_disk.config">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:0f:9b:69"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <target dev="tap81ef7b43-41"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/console.log" append="off"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:57:39 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:57:39 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:57:39 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:57:39 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.024 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Preparing to wait for external event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.024 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.025 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.025 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.026 2 DEBUG nova.virt.libvirt.vif [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:57:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-564779768',display_name='tempest-TestNetworkBasicOps-server-564779768',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-564779768',id=128,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhkz6+vSnvnT1XLzopmVXCIQtQIQCx9/0pOGIggxX0jLtkT3sEzs9rpRLJf4JbtSL6bjNbeilYykS+0k+nQ8Lt3eGZ1mjA3hrv/+JLu3lNSQ3HWi2ZIUcAuMwawnCYCeg==',key_name='tempest-TestNetworkBasicOps-620917724',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5t7zdpi1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:57:34Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=69862866-8141-4e02-bd49-4278bfdb7857,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.026 2 DEBUG nova.network.os_vif_util [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.027 2 DEBUG nova.network.os_vif_util [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.027 2 DEBUG os_vif [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.033 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81ef7b43-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.034 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81ef7b43-41, col_values=(('external_ids', {'iface-id': '81ef7b43-41dd-4a20-aa31-ed776a4299b0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:9b:69', 'vm-uuid': '69862866-8141-4e02-bd49-4278bfdb7857'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:39 np0005465604 NetworkManager[45129]: <info>  [1759395459.0375] manager: (tap81ef7b43-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/549)
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.047 2 INFO os_vif [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41')#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.109 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.110 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.110 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:0f:9b:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.111 2 INFO nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Using config drive#033[00m
Oct  2 04:57:39 np0005465604 nova_compute[260603]: 2025-10-02 08:57:39.138 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:57:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:57:40 np0005465604 nova_compute[260603]: 2025-10-02 08:57:40.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:41 np0005465604 nova_compute[260603]: 2025-10-02 08:57:41.635 2 INFO nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Creating config drive at /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config#033[00m
Oct  2 04:57:41 np0005465604 nova_compute[260603]: 2025-10-02 08:57:41.644 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdody5rxy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:57:41 np0005465604 nova_compute[260603]: 2025-10-02 08:57:41.790 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdody5rxy" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:41 np0005465604 nova_compute[260603]: 2025-10-02 08:57:41.828 2 DEBUG nova.storage.rbd_utils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 69862866-8141-4e02-bd49-4278bfdb7857_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:57:41 np0005465604 nova_compute[260603]: 2025-10-02 08:57:41.832 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config 69862866-8141-4e02-bd49-4278bfdb7857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:41 np0005465604 nova_compute[260603]: 2025-10-02 08:57:41.992 2 DEBUG oslo_concurrency.processutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config 69862866-8141-4e02-bd49-4278bfdb7857_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:41 np0005465604 nova_compute[260603]: 2025-10-02 08:57:41.993 2 INFO nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Deleting local config drive /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857/disk.config because it was imported into RBD.#033[00m
Oct  2 04:57:42 np0005465604 kernel: tap81ef7b43-41: entered promiscuous mode
Oct  2 04:57:42 np0005465604 NetworkManager[45129]: <info>  [1759395462.0420] manager: (tap81ef7b43-41): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Oct  2 04:57:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:42Z|01378|binding|INFO|Claiming lport 81ef7b43-41dd-4a20-aa31-ed776a4299b0 for this chassis.
Oct  2 04:57:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:42Z|01379|binding|INFO|81ef7b43-41dd-4a20-aa31-ed776a4299b0: Claiming fa:16:3e:0f:9b:69 10.100.0.10
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.051 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:9b:69 10.100.0.10'], port_security=['fa:16:3e:0f:9b:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '69862866-8141-4e02-bd49-4278bfdb7857', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50f973db-7622-438d-89a4-98949bc018c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ad0d47c-459b-4d77-aba0-ab968d7d2321', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6dd5c4a2-6437-4d79-b5d7-6dfc11d7b30a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=81ef7b43-41dd-4a20-aa31-ed776a4299b0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.052 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 81ef7b43-41dd-4a20-aa31-ed776a4299b0 in datapath 50f973db-7622-438d-89a4-98949bc018c7 bound to our chassis#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.053 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50f973db-7622-438d-89a4-98949bc018c7#033[00m
Oct  2 04:57:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:42Z|01380|binding|INFO|Setting lport 81ef7b43-41dd-4a20-aa31-ed776a4299b0 ovn-installed in OVS
Oct  2 04:57:42 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:42Z|01381|binding|INFO|Setting lport 81ef7b43-41dd-4a20-aa31-ed776a4299b0 up in Southbound
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:42 np0005465604 systemd-machined[214636]: New machine qemu-162-instance-00000080.
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.076 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0514d44e-3a8a-4ddc-bf07-8315da8e0281]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:42 np0005465604 systemd-udevd[397674]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:57:42 np0005465604 systemd[1]: Started Virtual Machine qemu-162-instance-00000080.
Oct  2 04:57:42 np0005465604 NetworkManager[45129]: <info>  [1759395462.0942] device (tap81ef7b43-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:57:42 np0005465604 NetworkManager[45129]: <info>  [1759395462.0951] device (tap81ef7b43-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.114 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9ff23f-8140-402b-8f35-4df86dbc65fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.117 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2531b553-bb8b-4d26-bf71-c83f3284212c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.147 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f2c5d3-620c-455e-a8d9-ad0e6026f43d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.161 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b94b11eb-38da-4ad8-884a-c5489c144483]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50f973db-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:ed:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 389], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630260, 'reachable_time': 28777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397687, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.178 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaa6f41-035e-4574-a68c-f233ec7482bd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap50f973db-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630272, 'tstamp': 630272}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397688, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap50f973db-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630276, 'tstamp': 630276}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397688, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.179 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f973db-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.182 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50f973db-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.182 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.182 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50f973db-70, col_values=(('external_ids', {'iface-id': '02679a4c-7fb0-490c-8428-8668ec8ddf0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:57:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:57:42.183 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.831 2 DEBUG nova.network.neutron [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updated VIF entry in instance network info cache for port 81ef7b43-41dd-4a20-aa31-ed776a4299b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.832 2 DEBUG nova.network.neutron [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updating instance_info_cache with network_info: [{"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.911 2 DEBUG oslo_concurrency.lockutils [req-fb9a388e-b4d9-4fbd-8dfb-c438d82f6544 req-db5b3588-9e47-44b7-8557-dfd1999cfb8b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.919 2 DEBUG nova.compute.manager [req-dff261fa-3687-4cc5-a640-4ccb8595b326 req-15c40680-d8be-43f5-b1f3-165166d136ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.921 2 DEBUG oslo_concurrency.lockutils [req-dff261fa-3687-4cc5-a640-4ccb8595b326 req-15c40680-d8be-43f5-b1f3-165166d136ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.922 2 DEBUG oslo_concurrency.lockutils [req-dff261fa-3687-4cc5-a640-4ccb8595b326 req-15c40680-d8be-43f5-b1f3-165166d136ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.922 2 DEBUG oslo_concurrency.lockutils [req-dff261fa-3687-4cc5-a640-4ccb8595b326 req-15c40680-d8be-43f5-b1f3-165166d136ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:42 np0005465604 nova_compute[260603]: 2025-10-02 08:57:42.923 2 DEBUG nova.compute.manager [req-dff261fa-3687-4cc5-a640-4ccb8595b326 req-15c40680-d8be-43f5-b1f3-165166d136ea 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Processing event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:57:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.271 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395463.2706242, 69862866-8141-4e02-bd49-4278bfdb7857 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.271 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] VM Started (Lifecycle Event)#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.273 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.276 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.279 2 INFO nova.virt.libvirt.driver [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Instance spawned successfully.#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.279 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.302 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.306 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.307 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.307 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.307 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.308 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.308 2 DEBUG nova.virt.libvirt.driver [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.312 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.350 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.351 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395463.27091, 69862866-8141-4e02-bd49-4278bfdb7857 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.351 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.374 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.376 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395463.275378, 69862866-8141-4e02-bd49-4278bfdb7857 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.376 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.395 2 INFO nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Took 8.77 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.395 2 DEBUG nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.397 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.402 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.530 2 INFO nova.compute.manager [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Took 10.04 seconds to build instance.#033[00m
Oct  2 04:57:43 np0005465604 nova_compute[260603]: 2025-10-02 08:57:43.549 2 DEBUG oslo_concurrency.lockutils [None req-254cdc91-d26c-4c78-b1a3-b90c6fe5ca71 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct  2 04:57:44 np0005465604 nova_compute[260603]: 2025-10-02 08:57:44.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:44 np0005465604 nova_compute[260603]: 2025-10-02 08:57:44.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.044 2 DEBUG nova.compute.manager [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.045 2 DEBUG oslo_concurrency.lockutils [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.047 2 DEBUG oslo_concurrency.lockutils [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.047 2 DEBUG oslo_concurrency.lockutils [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.048 2 DEBUG nova.compute.manager [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] No waiting events found dispatching network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.048 2 WARNING nova.compute.manager [req-84b7a2dc-24a2-4aea-b49b-1697701b17de req-ec9f0b4d-d5eb-4ea9-9c31-2e4a750bafdd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received unexpected event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:57:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.812 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.812 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.812 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:57:45 np0005465604 nova_compute[260603]: 2025-10-02 08:57:45.812 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:57:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct  2 04:57:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:48 np0005465604 nova_compute[260603]: 2025-10-02 08:57:48.983 2 DEBUG nova.compute.manager [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-changed-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:57:48 np0005465604 nova_compute[260603]: 2025-10-02 08:57:48.984 2 DEBUG nova.compute.manager [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Refreshing instance network info cache due to event network-changed-81ef7b43-41dd-4a20-aa31-ed776a4299b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:57:48 np0005465604 nova_compute[260603]: 2025-10-02 08:57:48.984 2 DEBUG oslo_concurrency.lockutils [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:57:48 np0005465604 nova_compute[260603]: 2025-10-02 08:57:48.984 2 DEBUG oslo_concurrency.lockutils [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:57:48 np0005465604 nova_compute[260603]: 2025-10-02 08:57:48.985 2 DEBUG nova.network.neutron [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Refreshing network info cache for port 81ef7b43-41dd-4a20-aa31-ed776a4299b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:57:49 np0005465604 nova_compute[260603]: 2025-10-02 08:57:49.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:49 np0005465604 nova_compute[260603]: 2025-10-02 08:57:49.081 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [{"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:57:49 np0005465604 nova_compute[260603]: 2025-10-02 08:57:49.109 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:57:49 np0005465604 nova_compute[260603]: 2025-10-02 08:57:49.113 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:57:49 np0005465604 nova_compute[260603]: 2025-10-02 08:57:49.115 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:57:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 666 KiB/s wr, 97 op/s
Oct  2 04:57:50 np0005465604 nova_compute[260603]: 2025-10-02 08:57:50.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:51 np0005465604 nova_compute[260603]: 2025-10-02 08:57:51.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:57:51 np0005465604 nova_compute[260603]: 2025-10-02 08:57:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:57:51 np0005465604 nova_compute[260603]: 2025-10-02 08:57:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:57:51 np0005465604 nova_compute[260603]: 2025-10-02 08:57:51.696 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:51 np0005465604 nova_compute[260603]: 2025-10-02 08:57:51.696 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:51 np0005465604 nova_compute[260603]: 2025-10-02 08:57:51.697 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:51 np0005465604 nova_compute[260603]: 2025-10-02 08:57:51.697 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:57:51 np0005465604 nova_compute[260603]: 2025-10-02 08:57:51.697 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct  2 04:57:51 np0005465604 nova_compute[260603]: 2025-10-02 08:57:51.762 2 DEBUG nova.network.neutron [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updated VIF entry in instance network info cache for port 81ef7b43-41dd-4a20-aa31-ed776a4299b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:57:51 np0005465604 nova_compute[260603]: 2025-10-02 08:57:51.767 2 DEBUG nova.network.neutron [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updating instance_info_cache with network_info: [{"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:57:51 np0005465604 nova_compute[260603]: 2025-10-02 08:57:51.795 2 DEBUG oslo_concurrency.lockutils [req-91c768b1-87da-43e3-af83-a4b015a4da97 req-8f1c0498-70b8-46cc-a99e-2ab4007df1da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-69862866-8141-4e02-bd49-4278bfdb7857" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:57:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:57:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142177687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.198 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.287 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.288 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.291 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.291 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.488 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.490 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3270MB free_disk=59.921817779541016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.490 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.491 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.597 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.597 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 69862866-8141-4e02-bd49-4278bfdb7857 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.597 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.597 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:57:52 np0005465604 nova_compute[260603]: 2025-10-02 08:57:52.651 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:57:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:57:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863687521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:57:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:53 np0005465604 nova_compute[260603]: 2025-10-02 08:57:53.092 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:57:53 np0005465604 nova_compute[260603]: 2025-10-02 08:57:53.099 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:57:53 np0005465604 nova_compute[260603]: 2025-10-02 08:57:53.118 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:57:53 np0005465604 nova_compute[260603]: 2025-10-02 08:57:53.143 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:57:53 np0005465604 nova_compute[260603]: 2025-10-02 08:57:53.143 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:57:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Oct  2 04:57:54 np0005465604 nova_compute[260603]: 2025-10-02 08:57:54.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:55Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:9b:69 10.100.0.10
Oct  2 04:57:55 np0005465604 ovn_controller[152344]: 2025-10-02T08:57:55Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:9b:69 10.100.0.10
Oct  2 04:57:55 np0005465604 nova_compute[260603]: 2025-10-02 08:57:55.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 167 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 65 op/s
Oct  2 04:57:56 np0005465604 nova_compute[260603]: 2025-10-02 08:57:56.140 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:57:57 np0005465604 podman[397778]: 2025-10-02 08:57:57.051569837 +0000 UTC m=+0.099871377 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 04:57:57 np0005465604 podman[397777]: 2025-10-02 08:57:57.054569932 +0000 UTC m=+0.110919947 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 04:57:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 192 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 93 op/s
Oct  2 04:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:57:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:57:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:57:58 np0005465604 nova_compute[260603]: 2025-10-02 08:57:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:57:59 np0005465604 nova_compute[260603]: 2025-10-02 08:57:59.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:57:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 200 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct  2 04:58:00 np0005465604 nova_compute[260603]: 2025-10-02 08:58:00.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 200 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.392 2 INFO nova.compute.manager [None req-d00bbb4c-41b6-4fb2-a628-8e73f5eee5b6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Get console output#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.398 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.732 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.733 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.733 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.733 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.734 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.735 2 INFO nova.compute.manager [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Terminating instance#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.736 2 DEBUG nova.compute.manager [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:58:02 np0005465604 kernel: tap81ef7b43-41 (unregistering): left promiscuous mode
Oct  2 04:58:02 np0005465604 NetworkManager[45129]: <info>  [1759395482.7870] device (tap81ef7b43-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:02Z|01382|binding|INFO|Releasing lport 81ef7b43-41dd-4a20-aa31-ed776a4299b0 from this chassis (sb_readonly=0)
Oct  2 04:58:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:02Z|01383|binding|INFO|Setting lport 81ef7b43-41dd-4a20-aa31-ed776a4299b0 down in Southbound
Oct  2 04:58:02 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:02Z|01384|binding|INFO|Removing iface tap81ef7b43-41 ovn-installed in OVS
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.867 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:9b:69 10.100.0.10'], port_security=['fa:16:3e:0f:9b:69 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '69862866-8141-4e02-bd49-4278bfdb7857', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50f973db-7622-438d-89a4-98949bc018c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ad0d47c-459b-4d77-aba0-ab968d7d2321', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.228'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6dd5c4a2-6437-4d79-b5d7-6dfc11d7b30a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=81ef7b43-41dd-4a20-aa31-ed776a4299b0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:58:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.870 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 81ef7b43-41dd-4a20-aa31-ed776a4299b0 in datapath 50f973db-7622-438d-89a4-98949bc018c7 unbound from our chassis#033[00m
Oct  2 04:58:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.872 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50f973db-7622-438d-89a4-98949bc018c7#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.900 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48deb66b-34c0-4db4-98e2-1e592a5370d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:02 np0005465604 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000080.scope: Deactivated successfully.
Oct  2 04:58:02 np0005465604 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000080.scope: Consumed 13.355s CPU time.
Oct  2 04:58:02 np0005465604 systemd-machined[214636]: Machine qemu-162-instance-00000080 terminated.
Oct  2 04:58:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.946 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd24b01-86ec-4a9f-b823-ba48d33e3004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.951 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1722d4ca-cbbe-480f-8696-3cbc964177d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:02 np0005465604 podman[397822]: 2025-10-02 08:58:02.970831568 +0000 UTC m=+0.097169011 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS)
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.978 2 INFO nova.virt.libvirt.driver [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Instance destroyed successfully.#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.979 2 DEBUG nova.objects.instance [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 69862866-8141-4e02-bd49-4278bfdb7857 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:58:02 np0005465604 podman[397825]: 2025-10-02 08:58:02.987703183 +0000 UTC m=+0.108668286 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  2 04:58:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:02.993 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3662e541-9dec-4851-8714-3f7d582e4862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.996 2 DEBUG nova.virt.libvirt.vif [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:57:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-564779768',display_name='tempest-TestNetworkBasicOps-server-564779768',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-564779768',id=128,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhkz6+vSnvnT1XLzopmVXCIQtQIQCx9/0pOGIggxX0jLtkT3sEzs9rpRLJf4JbtSL6bjNbeilYykS+0k+nQ8Lt3eGZ1mjA3hrv/+JLu3lNSQ3HWi2ZIUcAuMwawnCYCeg==',key_name='tempest-TestNetworkBasicOps-620917724',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:57:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-5t7zdpi1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:57:43Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=69862866-8141-4e02-bd49-4278bfdb7857,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.996 2 DEBUG nova.network.os_vif_util [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "address": "fa:16:3e:0f:9b:69", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81ef7b43-41", "ovs_interfaceid": "81ef7b43-41dd-4a20-aa31-ed776a4299b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.997 2 DEBUG nova.network.os_vif_util [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:58:02 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.998 2 DEBUG os_vif [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:02.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81ef7b43-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.005 2 INFO os_vif [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:9b:69,bridge_name='br-int',has_traffic_filtering=True,id=81ef7b43-41dd-4a20-aa31-ed776a4299b0,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81ef7b43-41')#033[00m
Oct  2 04:58:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.018 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[840a3de9-6adc-4a27-b107-d418b62830b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50f973db-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:ed:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 389], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630260, 'reachable_time': 28777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397883, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.038 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[739182dc-6091-42c0-b98f-b1b6329508f3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap50f973db-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630272, 'tstamp': 630272}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397898, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap50f973db-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630276, 'tstamp': 630276}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397898, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.040 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f973db-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.043 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50f973db-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.044 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:58:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.044 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50f973db-70, col_values=(('external_ids', {'iface-id': '02679a4c-7fb0-490c-8428-8668ec8ddf0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:03.044 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:58:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.388 2 INFO nova.virt.libvirt.driver [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Deleting instance files /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857_del#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.390 2 INFO nova.virt.libvirt.driver [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Deletion of /var/lib/nova/instances/69862866-8141-4e02-bd49-4278bfdb7857_del complete#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.458 2 INFO nova.compute.manager [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.458 2 DEBUG oslo.service.loopingcall [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.459 2 DEBUG nova.compute.manager [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:58:03 np0005465604 nova_compute[260603]: 2025-10-02 08:58:03.459 2 DEBUG nova.network.neutron [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:58:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 171 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct  2 04:58:04 np0005465604 nova_compute[260603]: 2025-10-02 08:58:04.008 2 DEBUG nova.compute.manager [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-unplugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:58:04 np0005465604 nova_compute[260603]: 2025-10-02 08:58:04.009 2 DEBUG oslo_concurrency.lockutils [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:04 np0005465604 nova_compute[260603]: 2025-10-02 08:58:04.010 2 DEBUG oslo_concurrency.lockutils [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:04 np0005465604 nova_compute[260603]: 2025-10-02 08:58:04.010 2 DEBUG oslo_concurrency.lockutils [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:04 np0005465604 nova_compute[260603]: 2025-10-02 08:58:04.010 2 DEBUG nova.compute.manager [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] No waiting events found dispatching network-vif-unplugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:58:04 np0005465604 nova_compute[260603]: 2025-10-02 08:58:04.011 2 DEBUG nova.compute.manager [req-fe1af442-5cac-4b04-82dd-723ba487099c req-2b23612f-b22e-47a9-80b0-5def5d81522a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-unplugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:58:05 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev fd88696e-892c-4b56-863c-4df5b396f19f does not exist
Oct  2 04:58:05 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b06ad103-6b34-4ab1-8d7f-3caec1f87123 does not exist
Oct  2 04:58:05 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5cb53a80-a0e6-48ce-ae1b-a0d3c7d380e0 does not exist
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:58:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:58:05 np0005465604 nova_compute[260603]: 2025-10-02 08:58:05.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:05 np0005465604 nova_compute[260603]: 2025-10-02 08:58:05.415 2 DEBUG nova.network.neutron [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:58:05 np0005465604 nova_compute[260603]: 2025-10-02 08:58:05.435 2 INFO nova.compute.manager [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Took 1.98 seconds to deallocate network for instance.#033[00m
Oct  2 04:58:05 np0005465604 nova_compute[260603]: 2025-10-02 08:58:05.505 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:05 np0005465604 nova_compute[260603]: 2025-10-02 08:58:05.505 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:05 np0005465604 nova_compute[260603]: 2025-10-02 08:58:05.582 2 DEBUG oslo_concurrency.processutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:58:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 171 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct  2 04:58:05 np0005465604 podman[398194]: 2025-10-02 08:58:05.865611602 +0000 UTC m=+0.053490117 container create 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:58:05 np0005465604 systemd[1]: Started libpod-conmon-65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1.scope.
Oct  2 04:58:05 np0005465604 podman[398194]: 2025-10-02 08:58:05.836683215 +0000 UTC m=+0.024561780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:58:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:58:05 np0005465604 podman[398194]: 2025-10-02 08:58:05.989146408 +0000 UTC m=+0.177024993 container init 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 04:58:06 np0005465604 podman[398194]: 2025-10-02 08:58:06.003610707 +0000 UTC m=+0.191489232 container start 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 04:58:06 np0005465604 podman[398194]: 2025-10-02 08:58:06.007837451 +0000 UTC m=+0.195715986 container attach 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 04:58:06 np0005465604 funny_bose[398211]: 167 167
Oct  2 04:58:06 np0005465604 systemd[1]: libpod-65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1.scope: Deactivated successfully.
Oct  2 04:58:06 np0005465604 podman[398194]: 2025-10-02 08:58:06.012971284 +0000 UTC m=+0.200849809 container died 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 04:58:06 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5cfb7a3649a6c1371ae0e4c0a506db2744990b8ad1f675ff57ea42619e5be318-merged.mount: Deactivated successfully.
Oct  2 04:58:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:58:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3183847465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:58:06 np0005465604 podman[398194]: 2025-10-02 08:58:06.061050918 +0000 UTC m=+0.248929443 container remove 65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.068 2 DEBUG oslo_concurrency.processutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.079 2 DEBUG nova.compute.provider_tree [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:58:06 np0005465604 systemd[1]: libpod-conmon-65a374989530d3d454f73488cb331c05a0810e4441f5296f6bda17e5dcbc67d1.scope: Deactivated successfully.
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.105 2 DEBUG nova.compute.manager [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.106 2 DEBUG oslo_concurrency.lockutils [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "69862866-8141-4e02-bd49-4278bfdb7857-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.106 2 DEBUG oslo_concurrency.lockutils [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.107 2 DEBUG oslo_concurrency.lockutils [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.107 2 DEBUG nova.compute.manager [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] No waiting events found dispatching network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.108 2 WARNING nova.compute.manager [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received unexpected event network-vif-plugged-81ef7b43-41dd-4a20-aa31-ed776a4299b0 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.108 2 DEBUG nova.compute.manager [req-ca0a8f09-3af7-4618-8ecd-4685a28ae717 req-4956404f-943a-4af1-a53f-a6ef9febe42a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Received event network-vif-deleted-81ef7b43-41dd-4a20-aa31-ed776a4299b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.126 2 DEBUG nova.scheduler.client.report [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.151 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.193 2 INFO nova.scheduler.client.report [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 69862866-8141-4e02-bd49-4278bfdb7857#033[00m
Oct  2 04:58:06 np0005465604 podman[398238]: 2025-10-02 08:58:06.263192207 +0000 UTC m=+0.060751748 container create 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 04:58:06 np0005465604 nova_compute[260603]: 2025-10-02 08:58:06.304 2 DEBUG oslo_concurrency.lockutils [None req-c054ab89-db3f-4334-9e37-34ff514870f7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "69862866-8141-4e02-bd49-4278bfdb7857" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:06 np0005465604 systemd[1]: Started libpod-conmon-535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6.scope.
Oct  2 04:58:06 np0005465604 podman[398238]: 2025-10-02 08:58:06.228719464 +0000 UTC m=+0.026279045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:58:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:58:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:06 np0005465604 podman[398238]: 2025-10-02 08:58:06.381122405 +0000 UTC m=+0.178681976 container init 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 04:58:06 np0005465604 podman[398238]: 2025-10-02 08:58:06.388490078 +0000 UTC m=+0.186049589 container start 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:58:06 np0005465604 podman[398238]: 2025-10-02 08:58:06.391864936 +0000 UTC m=+0.189424457 container attach 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:58:07 np0005465604 eloquent_kepler[398254]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:58:07 np0005465604 eloquent_kepler[398254]: --> relative data size: 1.0
Oct  2 04:58:07 np0005465604 eloquent_kepler[398254]: --> All data devices are unavailable
Oct  2 04:58:07 np0005465604 systemd[1]: libpod-535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6.scope: Deactivated successfully.
Oct  2 04:58:07 np0005465604 systemd[1]: libpod-535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6.scope: Consumed 1.131s CPU time.
Oct  2 04:58:07 np0005465604 podman[398283]: 2025-10-02 08:58:07.648521866 +0000 UTC m=+0.044865773 container died 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 04:58:07 np0005465604 systemd[1]: var-lib-containers-storage-overlay-488d5fea5ed2d749c2cdc5de69b7520bee246d3666735a5e644c8ea82634ec6d-merged.mount: Deactivated successfully.
Oct  2 04:58:07 np0005465604 podman[398283]: 2025-10-02 08:58:07.731516597 +0000 UTC m=+0.127860484 container remove 535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:58:07 np0005465604 systemd[1]: libpod-conmon-535995db684384445d0ce75e8fe656cd696e01821599bb7889f4a89246ec4cd6.scope: Deactivated successfully.
Oct  2 04:58:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 121 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct  2 04:58:08 np0005465604 nova_compute[260603]: 2025-10-02 08:58:08.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:08 np0005465604 podman[398436]: 2025-10-02 08:58:08.56685703 +0000 UTC m=+0.058380842 container create 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 04:58:08 np0005465604 systemd[1]: Started libpod-conmon-055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9.scope.
Oct  2 04:58:08 np0005465604 podman[398436]: 2025-10-02 08:58:08.545409831 +0000 UTC m=+0.036933653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:58:08 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:58:08 np0005465604 podman[398436]: 2025-10-02 08:58:08.672099947 +0000 UTC m=+0.163623769 container init 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:58:08 np0005465604 podman[398436]: 2025-10-02 08:58:08.683897341 +0000 UTC m=+0.175421153 container start 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 04:58:08 np0005465604 podman[398436]: 2025-10-02 08:58:08.68826836 +0000 UTC m=+0.179792222 container attach 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct  2 04:58:08 np0005465604 inspiring_beaver[398453]: 167 167
Oct  2 04:58:08 np0005465604 systemd[1]: libpod-055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9.scope: Deactivated successfully.
Oct  2 04:58:08 np0005465604 podman[398436]: 2025-10-02 08:58:08.693179845 +0000 UTC m=+0.184703657 container died 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:58:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2b106b768b3a0abc6d243f26a0643f8e7c187608d73e84992b6a7ccf721c9d18-merged.mount: Deactivated successfully.
Oct  2 04:58:08 np0005465604 podman[398436]: 2025-10-02 08:58:08.744498042 +0000 UTC m=+0.236021854 container remove 055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_beaver, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 04:58:08 np0005465604 systemd[1]: libpod-conmon-055fdf6242c787ef5969ad2f540ad749bb0233a3ed178074b25061ecff28bbc9.scope: Deactivated successfully.
Oct  2 04:58:08 np0005465604 podman[398476]: 2025-10-02 08:58:08.967049278 +0000 UTC m=+0.055927604 container create 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:58:09 np0005465604 systemd[1]: Started libpod-conmon-0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22.scope.
Oct  2 04:58:09 np0005465604 podman[398476]: 2025-10-02 08:58:08.944707429 +0000 UTC m=+0.033585735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:58:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:58:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53ea5e78cf662bfdf4fba776218df43e7279a42e8c7cde50c4af0af067fff46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53ea5e78cf662bfdf4fba776218df43e7279a42e8c7cde50c4af0af067fff46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53ea5e78cf662bfdf4fba776218df43e7279a42e8c7cde50c4af0af067fff46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e53ea5e78cf662bfdf4fba776218df43e7279a42e8c7cde50c4af0af067fff46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:09 np0005465604 podman[398476]: 2025-10-02 08:58:09.067770641 +0000 UTC m=+0.156648977 container init 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 04:58:09 np0005465604 podman[398476]: 2025-10-02 08:58:09.082402915 +0000 UTC m=+0.171281211 container start 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:58:09 np0005465604 podman[398476]: 2025-10-02 08:58:09.08603811 +0000 UTC m=+0.174916406 container attach 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.270 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.271 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.272 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.272 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.273 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.274 2 INFO nova.compute.manager [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Terminating instance#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.276 2 DEBUG nova.compute.manager [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 04:58:09 np0005465604 kernel: tapd9fd4c9a-6e (unregistering): left promiscuous mode
Oct  2 04:58:09 np0005465604 NetworkManager[45129]: <info>  [1759395489.3470] device (tapd9fd4c9a-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 04:58:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:09Z|01385|binding|INFO|Releasing lport d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 from this chassis (sb_readonly=0)
Oct  2 04:58:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:09Z|01386|binding|INFO|Setting lport d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 down in Southbound
Oct  2 04:58:09 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:09Z|01387|binding|INFO|Removing iface tapd9fd4c9a-6e ovn-installed in OVS
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.376 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:a0:55 10.100.0.9'], port_security=['fa:16:3e:1a:a0:55 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '44f2da71-ac35-415b-bb4b-6fbc3afe6cf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50f973db-7622-438d-89a4-98949bc018c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38d1b2fd-ca30-472b-9abc-fb39d0f98549', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6dd5c4a2-6437-4d79-b5d7-6dfc11d7b30a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.378 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 in datapath 50f973db-7622-438d-89a4-98949bc018c7 unbound from our chassis#033[00m
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.380 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50f973db-7622-438d-89a4-98949bc018c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.382 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[728e16b3-0906-4f82-a6a2-abc386f93cbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.384 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50f973db-7622-438d-89a4-98949bc018c7 namespace which is not needed anymore#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:09 np0005465604 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Oct  2 04:58:09 np0005465604 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d0000007f.scope: Consumed 15.658s CPU time.
Oct  2 04:58:09 np0005465604 systemd-machined[214636]: Machine qemu-161-instance-0000007f terminated.
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.530 2 INFO nova.virt.libvirt.driver [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Instance destroyed successfully.#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.531 2 DEBUG nova.objects.instance [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.546 2 DEBUG nova.virt.libvirt.vif [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:56:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1392343358',display_name='tempest-TestNetworkBasicOps-server-1392343358',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1392343358',id=127,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM7vAmstY/1wj8zilQusjnWxCH9HDy5ckxC0RSN69DfVAe97aVFxd/BU6NsgAvBbRW+DKoyLRAwETsr2HsjzpTSAXv0ZjguOo1+U6bZINpkB4yql6sk6skb6iUdvJViCMg==',key_name='tempest-TestNetworkBasicOps-399164218',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:57:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-k0j5xbkd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:57:05Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=44f2da71-ac35-415b-bb4b-6fbc3afe6cf9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.546 2 DEBUG nova.network.os_vif_util [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "address": "fa:16:3e:1a:a0:55", "network": {"id": "50f973db-7622-438d-89a4-98949bc018c7", "bridge": "br-int", "label": "tempest-network-smoke--1918613075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9fd4c9a-6e", "ovs_interfaceid": "d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.547 2 DEBUG nova.network.os_vif_util [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.548 2 DEBUG os_vif [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.551 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9fd4c9a-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.559 2 INFO os_vif [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a0:55,bridge_name='br-int',has_traffic_filtering=True,id=d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424,network=Network(50f973db-7622-438d-89a4-98949bc018c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9fd4c9a-6e')#033[00m
Oct  2 04:58:09 np0005465604 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [NOTICE]   (397252) : haproxy version is 2.8.14-c23fe91
Oct  2 04:58:09 np0005465604 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [NOTICE]   (397252) : path to executable is /usr/sbin/haproxy
Oct  2 04:58:09 np0005465604 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [WARNING]  (397252) : Exiting Master process...
Oct  2 04:58:09 np0005465604 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [WARNING]  (397252) : Exiting Master process...
Oct  2 04:58:09 np0005465604 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [ALERT]    (397252) : Current worker (397254) exited with code 143 (Terminated)
Oct  2 04:58:09 np0005465604 neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7[397248]: [WARNING]  (397252) : All workers exited. Exiting... (0)
Oct  2 04:58:09 np0005465604 systemd[1]: libpod-ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f.scope: Deactivated successfully.
Oct  2 04:58:09 np0005465604 podman[398524]: 2025-10-02 08:58:09.60711356 +0000 UTC m=+0.080377959 container died ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 04:58:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f-userdata-shm.mount: Deactivated successfully.
Oct  2 04:58:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0c715720152d167f38b377cf95f213b66460140411dee007ac5a683ebd4decaf-merged.mount: Deactivated successfully.
Oct  2 04:58:09 np0005465604 podman[398524]: 2025-10-02 08:58:09.6869237 +0000 UTC m=+0.160188089 container cleanup ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:58:09 np0005465604 systemd[1]: libpod-conmon-ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f.scope: Deactivated successfully.
Oct  2 04:58:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 121 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 511 KiB/s wr, 65 op/s
Oct  2 04:58:09 np0005465604 podman[398577]: 2025-10-02 08:58:09.800703058 +0000 UTC m=+0.078279013 container remove ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.813 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a4b3ff-e960-4e0c-a4b9-cd4cdd2c7adf]: (4, ('Thu Oct  2 08:58:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7 (ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f)\nddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f\nThu Oct  2 08:58:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50f973db-7622-438d-89a4-98949bc018c7 (ddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f)\nddde8bec33be7e8fbe50d929452a92bd586e89ae8b5b52ec2028cb41a0097d4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.817 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[276188e3-c187-483d-a398-b6442c277b1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.819 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f973db-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:09 np0005465604 kernel: tap50f973db-70: left promiscuous mode
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.832 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4a167cb0-620d-442b-a97d-ae5e2d59110c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.867 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7de3092f-c878-43be-8050-02cf9300d0b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.869 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8eb0bb-2017-40c2-93b9-d17af03dba31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.869 2 DEBUG nova.compute.manager [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-unplugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.869 2 DEBUG oslo_concurrency.lockutils [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.869 2 DEBUG oslo_concurrency.lockutils [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.870 2 DEBUG oslo_concurrency.lockutils [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.870 2 DEBUG nova.compute.manager [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] No waiting events found dispatching network-vif-unplugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:58:09 np0005465604 nova_compute[260603]: 2025-10-02 08:58:09.870 2 DEBUG nova.compute.manager [req-5e6c9ba8-03a8-4ff3-a3da-b77ff2e7683e req-e2e8ed40-932a-45b8-9399-46552cb56b63 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-unplugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 04:58:09 np0005465604 nice_colden[398492]: {
Oct  2 04:58:09 np0005465604 nice_colden[398492]:    "0": [
Oct  2 04:58:09 np0005465604 nice_colden[398492]:        {
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "devices": [
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "/dev/loop3"
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            ],
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_name": "ceph_lv0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_size": "21470642176",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "name": "ceph_lv0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "tags": {
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.cluster_name": "ceph",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.crush_device_class": "",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.encrypted": "0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.osd_id": "0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.type": "block",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.vdo": "0"
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            },
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "type": "block",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "vg_name": "ceph_vg0"
Oct  2 04:58:09 np0005465604 nice_colden[398492]:        }
Oct  2 04:58:09 np0005465604 nice_colden[398492]:    ],
Oct  2 04:58:09 np0005465604 nice_colden[398492]:    "1": [
Oct  2 04:58:09 np0005465604 nice_colden[398492]:        {
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "devices": [
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "/dev/loop4"
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            ],
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_name": "ceph_lv1",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_size": "21470642176",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "name": "ceph_lv1",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "tags": {
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.cluster_name": "ceph",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.crush_device_class": "",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.encrypted": "0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.osd_id": "1",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.type": "block",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.vdo": "0"
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            },
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "type": "block",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "vg_name": "ceph_vg1"
Oct  2 04:58:09 np0005465604 nice_colden[398492]:        }
Oct  2 04:58:09 np0005465604 nice_colden[398492]:    ],
Oct  2 04:58:09 np0005465604 nice_colden[398492]:    "2": [
Oct  2 04:58:09 np0005465604 nice_colden[398492]:        {
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "devices": [
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "/dev/loop5"
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            ],
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_name": "ceph_lv2",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_size": "21470642176",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "name": "ceph_lv2",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "tags": {
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.cluster_name": "ceph",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.crush_device_class": "",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.encrypted": "0",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.osd_id": "2",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.type": "block",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:                "ceph.vdo": "0"
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            },
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "type": "block",
Oct  2 04:58:09 np0005465604 nice_colden[398492]:            "vg_name": "ceph_vg2"
Oct  2 04:58:09 np0005465604 nice_colden[398492]:        }
Oct  2 04:58:09 np0005465604 nice_colden[398492]:    ]
Oct  2 04:58:09 np0005465604 nice_colden[398492]: }
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.906 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7ca739-2413-4de5-8305-761cd402a062]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630253, 'reachable_time': 37397, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398596, 'error': None, 'target': 'ovnmeta-50f973db-7622-438d-89a4-98949bc018c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:09 np0005465604 systemd[1]: run-netns-ovnmeta\x2d50f973db\x2d7622\x2d438d\x2d89a4\x2d98949bc018c7.mount: Deactivated successfully.
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.913 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50f973db-7622-438d-89a4-98949bc018c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 04:58:09 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:09.913 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[40d29d55-6e34-4c79-88b7-8c50a46af58f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:09 np0005465604 systemd[1]: libpod-0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22.scope: Deactivated successfully.
Oct  2 04:58:09 np0005465604 conmon[398492]: conmon 0ed26643a50b36de853d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22.scope/container/memory.events
Oct  2 04:58:09 np0005465604 podman[398476]: 2025-10-02 08:58:09.950968581 +0000 UTC m=+1.039846927 container died 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:58:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e53ea5e78cf662bfdf4fba776218df43e7279a42e8c7cde50c4af0af067fff46-merged.mount: Deactivated successfully.
Oct  2 04:58:10 np0005465604 podman[398476]: 2025-10-02 08:58:10.035276195 +0000 UTC m=+1.124154481 container remove 0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_colden, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 04:58:10 np0005465604 systemd[1]: libpod-conmon-0ed26643a50b36de853dbe130e736917e1620fd63ddf608ae9e7f0781c233d22.scope: Deactivated successfully.
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.077 2 INFO nova.virt.libvirt.driver [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Deleting instance files /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_del#033[00m
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.078 2 INFO nova.virt.libvirt.driver [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Deletion of /var/lib/nova/instances/44f2da71-ac35-415b-bb4b-6fbc3afe6cf9_del complete#033[00m
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.137 2 INFO nova.compute.manager [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.138 2 DEBUG oslo.service.loopingcall [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.138 2 DEBUG nova.compute.manager [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.139 2 DEBUG nova.network.neutron [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.627 2 DEBUG nova.network.neutron [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.642 2 INFO nova.compute.manager [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Took 0.50 seconds to deallocate network for instance.#033[00m
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.682 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.683 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:10 np0005465604 nova_compute[260603]: 2025-10-02 08:58:10.723 2 DEBUG oslo_concurrency.processutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:58:10 np0005465604 podman[398770]: 2025-10-02 08:58:10.982194595 +0000 UTC m=+0.062776641 container create ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct  2 04:58:11 np0005465604 podman[398770]: 2025-10-02 08:58:10.948477296 +0000 UTC m=+0.029059322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:58:11 np0005465604 systemd[1]: Started libpod-conmon-ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e.scope.
Oct  2 04:58:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:58:11 np0005465604 podman[398770]: 2025-10-02 08:58:11.110455221 +0000 UTC m=+0.191037287 container init ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 04:58:11 np0005465604 podman[398770]: 2025-10-02 08:58:11.119853119 +0000 UTC m=+0.200435175 container start ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:58:11 np0005465604 podman[398770]: 2025-10-02 08:58:11.124240398 +0000 UTC m=+0.204822504 container attach ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 04:58:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:58:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/156203065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:58:11 np0005465604 musing_euler[398786]: 167 167
Oct  2 04:58:11 np0005465604 systemd[1]: libpod-ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e.scope: Deactivated successfully.
Oct  2 04:58:11 np0005465604 podman[398770]: 2025-10-02 08:58:11.129538426 +0000 UTC m=+0.210120482 container died ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.145 2 DEBUG oslo_concurrency.processutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.156 2 DEBUG nova.compute.provider_tree [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:58:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay-196be8544f85941d8104e1783719c8cb81c0439dbd958aa604d4d5c142bbc250-merged.mount: Deactivated successfully.
Oct  2 04:58:11 np0005465604 podman[398770]: 2025-10-02 08:58:11.180842952 +0000 UTC m=+0.261424968 container remove ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_euler, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.181 2 DEBUG nova.scheduler.client.report [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:58:11 np0005465604 systemd[1]: libpod-conmon-ad3f54d7821e8cf3136de66de2197c70acbd453f131a43589197ad162267877e.scope: Deactivated successfully.
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.204 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.238 2 INFO nova.scheduler.client.report [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9#033[00m
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.302 2 DEBUG oslo_concurrency.lockutils [None req-b1bc54f7-7fa5-45a8-8e91-0dae3e6b6ac3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:11 np0005465604 podman[398811]: 2025-10-02 08:58:11.423299919 +0000 UTC m=+0.058934379 container create 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:58:11 np0005465604 systemd[1]: Started libpod-conmon-0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a.scope.
Oct  2 04:58:11 np0005465604 podman[398811]: 2025-10-02 08:58:11.392738311 +0000 UTC m=+0.028372801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:58:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:58:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2486e289a7ca8e7ce6ab7d83fdc4f458c8720cd8ab204eb438ab514fbee958d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2486e289a7ca8e7ce6ab7d83fdc4f458c8720cd8ab204eb438ab514fbee958d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2486e289a7ca8e7ce6ab7d83fdc4f458c8720cd8ab204eb438ab514fbee958d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2486e289a7ca8e7ce6ab7d83fdc4f458c8720cd8ab204eb438ab514fbee958d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:11 np0005465604 podman[398811]: 2025-10-02 08:58:11.52047412 +0000 UTC m=+0.156108650 container init 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct  2 04:58:11 np0005465604 podman[398811]: 2025-10-02 08:58:11.533991179 +0000 UTC m=+0.169625649 container start 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:58:11 np0005465604 podman[398811]: 2025-10-02 08:58:11.537634235 +0000 UTC m=+0.173268785 container attach 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:58:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 121 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.985 2 DEBUG nova.compute.manager [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.987 2 DEBUG oslo_concurrency.lockutils [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.987 2 DEBUG oslo_concurrency.lockutils [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.988 2 DEBUG oslo_concurrency.lockutils [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "44f2da71-ac35-415b-bb4b-6fbc3afe6cf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.988 2 DEBUG nova.compute.manager [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] No waiting events found dispatching network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.988 2 WARNING nova.compute.manager [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received unexpected event network-vif-plugged-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 04:58:11 np0005465604 nova_compute[260603]: 2025-10-02 08:58:11.989 2 DEBUG nova.compute.manager [req-c7e36086-d371-43d8-8a65-ca7e61e3a8cc req-5173667e-e43e-4b77-ad61-b64e17056f0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Received event network-vif-deleted-d9fd4c9a-6e5f-4c8a-b17c-28f6fe795424 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:58:12 np0005465604 trusting_newton[398828]: {
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "osd_id": 2,
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "type": "bluestore"
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:    },
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "osd_id": 1,
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "type": "bluestore"
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:    },
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "osd_id": 0,
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:        "type": "bluestore"
Oct  2 04:58:12 np0005465604 trusting_newton[398828]:    }
Oct  2 04:58:12 np0005465604 trusting_newton[398828]: }
Oct  2 04:58:12 np0005465604 systemd[1]: libpod-0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a.scope: Deactivated successfully.
Oct  2 04:58:12 np0005465604 podman[398811]: 2025-10-02 08:58:12.645195648 +0000 UTC m=+1.280830148 container died 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 04:58:12 np0005465604 systemd[1]: libpod-0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a.scope: Consumed 1.115s CPU time.
Oct  2 04:58:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d2486e289a7ca8e7ce6ab7d83fdc4f458c8720cd8ab204eb438ab514fbee958d-merged.mount: Deactivated successfully.
Oct  2 04:58:12 np0005465604 podman[398811]: 2025-10-02 08:58:12.718032427 +0000 UTC m=+1.353666917 container remove 0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_newton, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:58:12 np0005465604 systemd[1]: libpod-conmon-0ae61b100c642c0675452904d5d0ea4c71c144dae3f909a959e7abfcd660503a.scope: Deactivated successfully.
Oct  2 04:58:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:58:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:58:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:58:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:58:12 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c4a9fae0-60ef-40d9-ae38-8eb64b17c316 does not exist
Oct  2 04:58:12 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 7d6fcf71-c06b-4188-a1da-9ddc9f5e46db does not exist
Oct  2 04:58:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:58:12 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:58:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct  2 04:58:14 np0005465604 nova_compute[260603]: 2025-10-02 08:58:14.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:14 np0005465604 nova_compute[260603]: 2025-10-02 08:58:14.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:14 np0005465604 nova_compute[260603]: 2025-10-02 08:58:14.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:15 np0005465604 nova_compute[260603]: 2025-10-02 08:58:15.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.0 KiB/s wr, 38 op/s
Oct  2 04:58:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.0 KiB/s wr, 38 op/s
Oct  2 04:58:17 np0005465604 nova_compute[260603]: 2025-10-02 08:58:17.973 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395482.9719012, 69862866-8141-4e02-bd49-4278bfdb7857 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:58:17 np0005465604 nova_compute[260603]: 2025-10-02 08:58:17.974 2 INFO nova.compute.manager [-] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:58:17 np0005465604 nova_compute[260603]: 2025-10-02 08:58:17.995 2 DEBUG nova.compute.manager [None req-b53f4112-be05-4e07-ae55-77eb47dcd3df - - - - - -] [instance: 69862866-8141-4e02-bd49-4278bfdb7857] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:58:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:19 np0005465604 nova_compute[260603]: 2025-10-02 08:58:19.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Oct  2 04:58:20 np0005465604 nova_compute[260603]: 2025-10-02 08:58:20.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 04:58:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:58:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903598391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:58:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:58:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1903598391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:58:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:58:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 39K writes, 154K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s#012Cumulative WAL: 39K writes, 14K syncs, 2.75 writes per sync, written: 0.15 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4589 writes, 18K keys, 4589 commit groups, 1.0 writes per commit group, ingest: 22.57 MB, 0.04 MB/s#012Interval WAL: 4589 writes, 1752 syncs, 2.62 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:58:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 04:58:24 np0005465604 nova_compute[260603]: 2025-10-02 08:58:24.529 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395489.5268977, 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:58:24 np0005465604 nova_compute[260603]: 2025-10-02 08:58:24.529 2 INFO nova.compute.manager [-] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] VM Stopped (Lifecycle Event)#033[00m
Oct  2 04:58:24 np0005465604 nova_compute[260603]: 2025-10-02 08:58:24.553 2 DEBUG nova.compute.manager [None req-39a443d8-17a8-48bd-8d0b-eb28af3d785f - - - - - -] [instance: 44f2da71-ac35-415b-bb4b-6fbc3afe6cf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:58:24 np0005465604 nova_compute[260603]: 2025-10-02 08:58:24.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:25 np0005465604 nova_compute[260603]: 2025-10-02 08:58:25.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:58:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:58:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 41K writes, 160K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s#012Cumulative WAL: 41K writes, 15K syncs, 2.73 writes per sync, written: 0.15 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4414 writes, 18K keys, 4414 commit groups, 1.0 writes per commit group, ingest: 19.80 MB, 0.03 MB/s#012Interval WAL: 4414 writes, 1688 syncs, 2.61 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:58:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:58:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:58:28 np0005465604 podman[398925]: 2025-10-02 08:58:28.036155905 +0000 UTC m=+0.082262399 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:58:28
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.mgr', 'default.rgw.meta', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.meta']
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:58:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:28 np0005465604 podman[398924]: 2025-10-02 08:58:28.136994601 +0000 UTC m=+0.182209627 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:58:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:58:28 np0005465604 nova_compute[260603]: 2025-10-02 08:58:28.688 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:28 np0005465604 nova_compute[260603]: 2025-10-02 08:58:28.689 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:28 np0005465604 nova_compute[260603]: 2025-10-02 08:58:28.706 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:58:28 np0005465604 nova_compute[260603]: 2025-10-02 08:58:28.785 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:28 np0005465604 nova_compute[260603]: 2025-10-02 08:58:28.786 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:28 np0005465604 nova_compute[260603]: 2025-10-02 08:58:28.797 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:58:28 np0005465604 nova_compute[260603]: 2025-10-02 08:58:28.798 2 INFO nova.compute.claims [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:58:28 np0005465604 nova_compute[260603]: 2025-10-02 08:58:28.914 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:58:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:58:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306506091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.385 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.395 2 DEBUG nova.compute.provider_tree [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.418 2 DEBUG nova.scheduler.client.report [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.454 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.455 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.511 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.513 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.542 2 INFO nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.564 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.704 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.706 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.707 2 INFO nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Creating image(s)#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.740 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:58:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.777 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.813 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.818 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.880 2 DEBUG nova.policy [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.926 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.927 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.928 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.929 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.961 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:58:29 np0005465604 nova_compute[260603]: 2025-10-02 08:58:29.968 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:58:30 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct  2 04:58:30 np0005465604 nova_compute[260603]: 2025-10-02 08:58:30.335 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:58:30 np0005465604 nova_compute[260603]: 2025-10-02 08:58:30.410 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:58:30 np0005465604 nova_compute[260603]: 2025-10-02 08:58:30.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:30 np0005465604 nova_compute[260603]: 2025-10-02 08:58:30.506 2 DEBUG nova.objects.instance [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:58:30 np0005465604 nova_compute[260603]: 2025-10-02 08:58:30.526 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:58:30 np0005465604 nova_compute[260603]: 2025-10-02 08:58:30.526 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Ensure instance console log exists: /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:58:30 np0005465604 nova_compute[260603]: 2025-10-02 08:58:30.527 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:30 np0005465604 nova_compute[260603]: 2025-10-02 08:58:30.527 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:30 np0005465604 nova_compute[260603]: 2025-10-02 08:58:30.527 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:30 np0005465604 nova_compute[260603]: 2025-10-02 08:58:30.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:30.856 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:58:30 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:30.859 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:58:30 np0005465604 nova_compute[260603]: 2025-10-02 08:58:30.943 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Successfully created port: 5ea70a9c-8299-4593-b2b2-5c3315870d73 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:58:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 04:58:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 32K writes, 124K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 32K writes, 11K syncs, 2.73 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3436 writes, 13K keys, 3436 commit groups, 1.0 writes per commit group, ingest: 14.09 MB, 0.02 MB/s#012Interval WAL: 3436 writes, 1397 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 04:58:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 41 MiB data, 854 MiB used, 59 GiB / 60 GiB avail
Oct  2 04:58:31 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:31.862 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:31 np0005465604 nova_compute[260603]: 2025-10-02 08:58:31.874 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Successfully updated port: 5ea70a9c-8299-4593-b2b2-5c3315870d73 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:58:31 np0005465604 nova_compute[260603]: 2025-10-02 08:58:31.894 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:58:31 np0005465604 nova_compute[260603]: 2025-10-02 08:58:31.894 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:58:31 np0005465604 nova_compute[260603]: 2025-10-02 08:58:31.894 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:58:31 np0005465604 nova_compute[260603]: 2025-10-02 08:58:31.948 2 DEBUG nova.compute.manager [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:58:31 np0005465604 nova_compute[260603]: 2025-10-02 08:58:31.949 2 DEBUG nova.compute.manager [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing instance network info cache due to event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:58:31 np0005465604 nova_compute[260603]: 2025-10-02 08:58:31.949 2 DEBUG oslo_concurrency.lockutils [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.020 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 04:58:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.899 2 DEBUG nova.network.neutron [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.925 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.925 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Instance network_info: |[{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.926 2 DEBUG oslo_concurrency.lockutils [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.927 2 DEBUG nova.network.neutron [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.932 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Start _get_guest_xml network_info=[{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.939 2 WARNING nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.946 2 DEBUG nova.virt.libvirt.host [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.947 2 DEBUG nova.virt.libvirt.host [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.952 2 DEBUG nova.virt.libvirt.host [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.953 2 DEBUG nova.virt.libvirt.host [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.954 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.954 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.955 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.956 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.957 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.957 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.958 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.958 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.959 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.960 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.960 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.961 2 DEBUG nova.virt.hardware [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:58:32 np0005465604 nova_compute[260603]: 2025-10-02 08:58:32.967 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:58:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:58:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145870380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.396 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.428 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.433 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:58:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:58:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:58:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719254403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.861 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.863 2 DEBUG nova.virt.libvirt.vif [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:58:29Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.863 2 DEBUG nova.network.os_vif_util [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.864 2 DEBUG nova.network.os_vif_util [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.866 2 DEBUG nova.objects.instance [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.881 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  <uuid>b4eacfa3-8b31-492a-b3c5-829a890a4aae</uuid>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  <name>instance-00000081</name>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:58:32</nova:creationTime>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <entry name="serial">b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <entry name="uuid">b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:42:41:53"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <target dev="tap5ea70a9c-82"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log" append="off"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:58:33 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:58:33 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:58:33 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:58:33 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.882 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Preparing to wait for external event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.883 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.883 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.883 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.884 2 DEBUG nova.virt.libvirt.vif [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:58:29Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.884 2 DEBUG nova.network.os_vif_util [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.885 2 DEBUG nova.network.os_vif_util [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.885 2 DEBUG os_vif [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.887 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.897 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ea70a9c-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ea70a9c-82, col_values=(('external_ids', {'iface-id': '5ea70a9c-8299-4593-b2b2-5c3315870d73', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:41:53', 'vm-uuid': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:33 np0005465604 NetworkManager[45129]: <info>  [1759395513.9351] manager: (tap5ea70a9c-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:33 np0005465604 nova_compute[260603]: 2025-10-02 08:58:33.947 2 INFO os_vif [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82')#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.034 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.035 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.035 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:42:41:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.035 2 INFO nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Using config drive#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.058 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:58:34 np0005465604 podman[399218]: 2025-10-02 08:58:34.066017862 +0000 UTC m=+0.091402478 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 04:58:34 np0005465604 podman[399221]: 2025-10-02 08:58:34.091328795 +0000 UTC m=+0.113432748 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.327 2 DEBUG nova.network.neutron [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated VIF entry in instance network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.329 2 DEBUG nova.network.neutron [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.352 2 DEBUG oslo_concurrency.lockutils [req-3cfa995c-4510-4450-baeb-c99d3475b869 req-dca2c99a-ee8a-473d-aa4a-4582fffb3ae1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.381 2 INFO nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Creating config drive at /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.391 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpztyre93w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.560 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpztyre93w" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.598 2 DEBUG nova.storage.rbd_utils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.606 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.838 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.838 2 DEBUG oslo_concurrency.processutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.840 2 INFO nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Deleting local config drive /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/disk.config because it was imported into RBD.#033[00m
Oct  2 04:58:34 np0005465604 kernel: tap5ea70a9c-82: entered promiscuous mode
Oct  2 04:58:34 np0005465604 NetworkManager[45129]: <info>  [1759395514.9426] manager: (tap5ea70a9c-82): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Oct  2 04:58:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:34Z|01388|binding|INFO|Claiming lport 5ea70a9c-8299-4593-b2b2-5c3315870d73 for this chassis.
Oct  2 04:58:34 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:34Z|01389|binding|INFO|5ea70a9c-8299-4593-b2b2-5c3315870d73: Claiming fa:16:3e:42:41:53 10.100.0.12
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:34 np0005465604 nova_compute[260603]: 2025-10-02 08:58:34.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.986 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:41:53 10.100.0.12'], port_security=['fa:16:3e:42:41:53 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-531b0560-b279-49fe-a565-b902507e886d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41a1668d-b8be-4a47-8e43-ab11db6fabeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1a7ccfd-5e72-4548-90da-40016d961198, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5ea70a9c-8299-4593-b2b2-5c3315870d73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.988 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5ea70a9c-8299-4593-b2b2-5c3315870d73 in datapath 531b0560-b279-49fe-a565-b902507e886d bound to our chassis#033[00m
Oct  2 04:58:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:34.990 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 531b0560-b279-49fe-a565-b902507e886d#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.010 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5477b588-0da9-4c13-a071-0e699b2fe999]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.012 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap531b0560-b1 in ovnmeta-531b0560-b279-49fe-a565-b902507e886d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.014 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap531b0560-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.014 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b80d51f2-9822-4e53-a40d-5cecbc319ce2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.016 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d85c61b4-f87b-4527-8147-2d30937e5468]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 systemd-machined[214636]: New machine qemu-163-instance-00000081.
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.035 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[b9eff316-a108-4cda-a1dd-52820fc78533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:35 np0005465604 systemd[1]: Started Virtual Machine qemu-163-instance-00000081.
Oct  2 04:58:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:35Z|01390|binding|INFO|Setting lport 5ea70a9c-8299-4593-b2b2-5c3315870d73 ovn-installed in OVS
Oct  2 04:58:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:35Z|01391|binding|INFO|Setting lport 5ea70a9c-8299-4593-b2b2-5c3315870d73 up in Southbound
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.054 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[569caf3b-8968-4903-b056-7e245fdadb91]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 systemd-udevd[399333]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:58:35 np0005465604 NetworkManager[45129]: <info>  [1759395515.0845] device (tap5ea70a9c-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:58:35 np0005465604 NetworkManager[45129]: <info>  [1759395515.0852] device (tap5ea70a9c-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.088 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5044c39e-2a53-4386-a6d0-008c13039c78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 NetworkManager[45129]: <info>  [1759395515.0950] manager: (tap531b0560-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/553)
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.094 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0a3c1d-2c29-4dfb-8259-249ec5d060d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.129 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d4036b8b-8ac7-4931-869f-3da57726f7ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.133 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e331c4f6-c629-4778-b18d-f579b756f479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 NetworkManager[45129]: <info>  [1759395515.1521] device (tap531b0560-b0): carrier: link connected
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.155 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc22a7f-e784-4fda-bc43-1a295c2b0005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.178 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa86a756-7f3b-4b6e-84b1-083ac8f0c874]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap531b0560-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:97:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639375, 'reachable_time': 19284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399362, 'error': None, 'target': 'ovnmeta-531b0560-b279-49fe-a565-b902507e886d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.200 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7552b4-b15c-4e5e-b54d-151e67c301f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:972e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639375, 'tstamp': 639375}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399363, 'error': None, 'target': 'ovnmeta-531b0560-b279-49fe-a565-b902507e886d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.227 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[54783a43-7475-4c8f-873a-ca162ea4d459]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap531b0560-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:97:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 394], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639375, 'reachable_time': 19284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399364, 'error': None, 'target': 'ovnmeta-531b0560-b279-49fe-a565-b902507e886d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.272 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c031f4-1511-41e3-b4c5-f21f78ade978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.357 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f3908d-fa46-481f-8a0c-b2928c7c619a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.359 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap531b0560-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.359 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.360 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap531b0560-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:35 np0005465604 NetworkManager[45129]: <info>  [1759395515.3629] manager: (tap531b0560-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/554)
Oct  2 04:58:35 np0005465604 kernel: tap531b0560-b0: entered promiscuous mode
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.366 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap531b0560-b0, col_values=(('external_ids', {'iface-id': '34739963-aa72-473b-8b1d-5d0d09f0b1de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:35Z|01392|binding|INFO|Releasing lport 34739963-aa72-473b-8b1d-5d0d09f0b1de from this chassis (sb_readonly=0)
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.393 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/531b0560-b279-49fe-a565-b902507e886d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/531b0560-b279-49fe-a565-b902507e886d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.395 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4adadb-e839-4247-8de4-800cb52cc422]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.396 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-531b0560-b279-49fe-a565-b902507e886d
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/531b0560-b279-49fe-a565-b902507e886d.pid.haproxy
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 531b0560-b279-49fe-a565-b902507e886d
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:58:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:58:35.398 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-531b0560-b279-49fe-a565-b902507e886d', 'env', 'PROCESS_TAG=haproxy-531b0560-b279-49fe-a565-b902507e886d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/531b0560-b279-49fe-a565-b902507e886d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.508 2 DEBUG nova.compute.manager [req-8c14ad5a-c8dd-4824-b921-35d1fe9521d4 req-d4b30c26-23f2-4df5-9342-122659b7cb29 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.509 2 DEBUG oslo_concurrency.lockutils [req-8c14ad5a-c8dd-4824-b921-35d1fe9521d4 req-d4b30c26-23f2-4df5-9342-122659b7cb29 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.510 2 DEBUG oslo_concurrency.lockutils [req-8c14ad5a-c8dd-4824-b921-35d1fe9521d4 req-d4b30c26-23f2-4df5-9342-122659b7cb29 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.514 2 DEBUG oslo_concurrency.lockutils [req-8c14ad5a-c8dd-4824-b921-35d1fe9521d4 req-d4b30c26-23f2-4df5-9342-122659b7cb29 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:35 np0005465604 nova_compute[260603]: 2025-10-02 08:58:35.516 2 DEBUG nova.compute.manager [req-8c14ad5a-c8dd-4824-b921-35d1fe9521d4 req-d4b30c26-23f2-4df5-9342-122659b7cb29 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Processing event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:58:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:58:35 np0005465604 podman[399438]: 2025-10-02 08:58:35.80991716 +0000 UTC m=+0.057093811 container create 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:58:35 np0005465604 systemd[1]: Started libpod-conmon-42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd.scope.
Oct  2 04:58:35 np0005465604 podman[399438]: 2025-10-02 08:58:35.781715906 +0000 UTC m=+0.028892567 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:58:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:58:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2789c6dcd510ee067d39b52a38d6115997f3323341899d6c11680c2dc54061e9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:58:35 np0005465604 podman[399438]: 2025-10-02 08:58:35.913268946 +0000 UTC m=+0.160445607 container init 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 04:58:35 np0005465604 podman[399438]: 2025-10-02 08:58:35.921574209 +0000 UTC m=+0.168750850 container start 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:58:35 np0005465604 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [NOTICE]   (399457) : New worker (399459) forked
Oct  2 04:58:35 np0005465604 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [NOTICE]   (399457) : Loading success.
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.099 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395516.099375, b4eacfa3-8b31-492a-b3c5-829a890a4aae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.100 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] VM Started (Lifecycle Event)#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.101 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.105 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.108 2 INFO nova.virt.libvirt.driver [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Instance spawned successfully.#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.108 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.126 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.132 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.138 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.139 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.139 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.140 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.141 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.141 2 DEBUG nova.virt.libvirt.driver [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.167 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.167 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395516.0995774, b4eacfa3-8b31-492a-b3c5-829a890a4aae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.168 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.193 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.197 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395516.104985, b4eacfa3-8b31-492a-b3c5-829a890a4aae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.197 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.380 2 INFO nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Took 6.67 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.380 2 DEBUG nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.382 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.396 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.427 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.467 2 INFO nova.compute.manager [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Took 7.72 seconds to build instance.#033[00m
Oct  2 04:58:36 np0005465604 nova_compute[260603]: 2025-10-02 08:58:36.529 2 DEBUG oslo_concurrency.lockutils [None req-a24b6084-7eb9-4819-8f56-558d3e06617c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:37 np0005465604 nova_compute[260603]: 2025-10-02 08:58:37.590 2 DEBUG nova.compute.manager [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:58:37 np0005465604 nova_compute[260603]: 2025-10-02 08:58:37.591 2 DEBUG oslo_concurrency.lockutils [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:37 np0005465604 nova_compute[260603]: 2025-10-02 08:58:37.592 2 DEBUG oslo_concurrency.lockutils [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:37 np0005465604 nova_compute[260603]: 2025-10-02 08:58:37.592 2 DEBUG oslo_concurrency.lockutils [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:37 np0005465604 nova_compute[260603]: 2025-10-02 08:58:37.592 2 DEBUG nova.compute.manager [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:58:37 np0005465604 nova_compute[260603]: 2025-10-02 08:58:37.593 2 WARNING nova.compute.manager [req-a73fd1b9-bfd0-422d-92b6-2071e5c49bd9 req-eeb37efc-c542-4b5d-ac86-68d0e79c85c6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:58:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 984 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct  2 04:58:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:38 np0005465604 nova_compute[260603]: 2025-10-02 08:58:38.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034834989744090644 of space, bias 1.0, pg target 0.10450496923227193 quantized to 32 (current 32)
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:58:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:58:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct  2 04:58:40 np0005465604 nova_compute[260603]: 2025-10-02 08:58:40.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct  2 04:58:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:41Z|01393|binding|INFO|Releasing lport 34739963-aa72-473b-8b1d-5d0d09f0b1de from this chassis (sb_readonly=0)
Oct  2 04:58:41 np0005465604 NetworkManager[45129]: <info>  [1759395521.8830] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Oct  2 04:58:41 np0005465604 NetworkManager[45129]: <info>  [1759395521.8843] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Oct  2 04:58:41 np0005465604 nova_compute[260603]: 2025-10-02 08:58:41.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:41 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:41Z|01394|binding|INFO|Releasing lport 34739963-aa72-473b-8b1d-5d0d09f0b1de from this chassis (sb_readonly=0)
Oct  2 04:58:41 np0005465604 nova_compute[260603]: 2025-10-02 08:58:41.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:42 np0005465604 nova_compute[260603]: 2025-10-02 08:58:42.163 2 DEBUG nova.compute.manager [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:58:42 np0005465604 nova_compute[260603]: 2025-10-02 08:58:42.165 2 DEBUG nova.compute.manager [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing instance network info cache due to event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:58:42 np0005465604 nova_compute[260603]: 2025-10-02 08:58:42.165 2 DEBUG oslo_concurrency.lockutils [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:58:42 np0005465604 nova_compute[260603]: 2025-10-02 08:58:42.165 2 DEBUG oslo_concurrency.lockutils [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:58:42 np0005465604 nova_compute[260603]: 2025-10-02 08:58:42.166 2 DEBUG nova.network.neutron [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:58:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 04:58:43 np0005465604 nova_compute[260603]: 2025-10-02 08:58:43.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:44 np0005465604 nova_compute[260603]: 2025-10-02 08:58:44.096 2 DEBUG nova.network.neutron [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated VIF entry in instance network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:58:44 np0005465604 nova_compute[260603]: 2025-10-02 08:58:44.096 2 DEBUG nova.network.neutron [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:58:44 np0005465604 nova_compute[260603]: 2025-10-02 08:58:44.124 2 DEBUG oslo_concurrency.lockutils [req-6dfa8e5d-67f8-4e78-9fcb-47a3c160b8a9 req-dabeb488-a397-4b61-8551-9ecbdb013a83 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:58:45 np0005465604 nova_compute[260603]: 2025-10-02 08:58:45.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:45 np0005465604 nova_compute[260603]: 2025-10-02 08:58:45.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:58:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 88 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 04:58:47 np0005465604 nova_compute[260603]: 2025-10-02 08:58:47.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:58:47 np0005465604 nova_compute[260603]: 2025-10-02 08:58:47.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:58:47 np0005465604 nova_compute[260603]: 2025-10-02 08:58:47.566 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 04:58:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 105 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 107 op/s
Oct  2 04:58:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:48Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:41:53 10.100.0.12
Oct  2 04:58:48 np0005465604 ovn_controller[152344]: 2025-10-02T08:58:48Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:41:53 10.100.0.12
Oct  2 04:58:48 np0005465604 nova_compute[260603]: 2025-10-02 08:58:48.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 114 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct  2 04:58:50 np0005465604 nova_compute[260603]: 2025-10-02 08:58:50.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:50 np0005465604 nova_compute[260603]: 2025-10-02 08:58:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:58:51 np0005465604 nova_compute[260603]: 2025-10-02 08:58:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:58:51 np0005465604 nova_compute[260603]: 2025-10-02 08:58:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:58:51 np0005465604 nova_compute[260603]: 2025-10-02 08:58:51.560 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:51 np0005465604 nova_compute[260603]: 2025-10-02 08:58:51.560 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:51 np0005465604 nova_compute[260603]: 2025-10-02 08:58:51.560 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:51 np0005465604 nova_compute[260603]: 2025-10-02 08:58:51.560 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:58:51 np0005465604 nova_compute[260603]: 2025-10-02 08:58:51.561 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:58:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 114 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 719 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 04:58:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:58:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031318051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:58:51 np0005465604 nova_compute[260603]: 2025-10-02 08:58:51.978 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.047 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.047 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.227 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.228 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3453MB free_disk=59.94355773925781GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.228 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.229 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.421 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance b4eacfa3-8b31-492a-b3c5-829a890a4aae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.421 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.422 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.562 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:58:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:58:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/549809416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.991 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:58:52 np0005465604 nova_compute[260603]: 2025-10-02 08:58:52.997 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:58:53 np0005465604 nova_compute[260603]: 2025-10-02 08:58:53.025 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:58:53 np0005465604 nova_compute[260603]: 2025-10-02 08:58:53.050 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:58:53 np0005465604 nova_compute[260603]: 2025-10-02 08:58:53.050 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:58:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:53 np0005465604 nova_compute[260603]: 2025-10-02 08:58:53.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:58:53 np0005465604 nova_compute[260603]: 2025-10-02 08:58:53.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:58:53 np0005465604 nova_compute[260603]: 2025-10-02 08:58:53.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 04:58:53 np0005465604 nova_compute[260603]: 2025-10-02 08:58:53.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 04:58:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 778 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct  2 04:58:53 np0005465604 nova_compute[260603]: 2025-10-02 08:58:53.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:55 np0005465604 nova_compute[260603]: 2025-10-02 08:58:55.280 2 INFO nova.compute.manager [None req-e4604733-4f94-4e89-9ed4-72bbb11abc21 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Get console output#033[00m
Oct  2 04:58:55 np0005465604 nova_compute[260603]: 2025-10-02 08:58:55.286 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 04:58:55 np0005465604 nova_compute[260603]: 2025-10-02 08:58:55.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:55 np0005465604 nova_compute[260603]: 2025-10-02 08:58:55.531 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:58:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct  2 04:58:57 np0005465604 nova_compute[260603]: 2025-10-02 08:58:57.740 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:58:57 np0005465604 nova_compute[260603]: 2025-10-02 08:58:57.740 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:58:57 np0005465604 nova_compute[260603]: 2025-10-02 08:58:57.741 2 DEBUG nova.objects.instance [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'flavor' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:58:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  2 04:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:58:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:58:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:58:58 np0005465604 nova_compute[260603]: 2025-10-02 08:58:58.837 2 DEBUG nova.objects.instance [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_requests' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:58:58 np0005465604 nova_compute[260603]: 2025-10-02 08:58:58.851 2 DEBUG nova.network.neutron [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:58:58 np0005465604 nova_compute[260603]: 2025-10-02 08:58:58.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:58:58 np0005465604 podman[399517]: 2025-10-02 08:58:58.992190297 +0000 UTC m=+0.060645123 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 04:58:59 np0005465604 podman[399516]: 2025-10-02 08:58:59.015391462 +0000 UTC m=+0.083885680 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 04:58:59 np0005465604 nova_compute[260603]: 2025-10-02 08:58:59.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:58:59 np0005465604 nova_compute[260603]: 2025-10-02 08:58:59.651 2 DEBUG nova.policy [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:58:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 131 KiB/s rd, 861 KiB/s wr, 27 op/s
Oct  2 04:59:00 np0005465604 nova_compute[260603]: 2025-10-02 08:59:00.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:00 np0005465604 nova_compute[260603]: 2025-10-02 08:59:00.806 2 DEBUG nova.network.neutron [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Successfully created port: 4e4cec89-b01e-4202-bc3e-a65ce8864017 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 04:59:01 np0005465604 nova_compute[260603]: 2025-10-02 08:59:01.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:59:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 74 KiB/s wr, 12 op/s
Oct  2 04:59:01 np0005465604 nova_compute[260603]: 2025-10-02 08:59:01.896 2 DEBUG nova.network.neutron [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Successfully updated port: 4e4cec89-b01e-4202-bc3e-a65ce8864017 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 04:59:01 np0005465604 nova_compute[260603]: 2025-10-02 08:59:01.914 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:59:01 np0005465604 nova_compute[260603]: 2025-10-02 08:59:01.915 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:59:01 np0005465604 nova_compute[260603]: 2025-10-02 08:59:01.915 2 DEBUG nova.network.neutron [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 04:59:02 np0005465604 nova_compute[260603]: 2025-10-02 08:59:02.008 2 DEBUG nova.compute.manager [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-changed-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:59:02 np0005465604 nova_compute[260603]: 2025-10-02 08:59:02.009 2 DEBUG nova.compute.manager [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing instance network info cache due to event network-changed-4e4cec89-b01e-4202-bc3e-a65ce8864017. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:59:02 np0005465604 nova_compute[260603]: 2025-10-02 08:59:02.009 2 DEBUG oslo_concurrency.lockutils [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:59:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.628 2 DEBUG nova.network.neutron [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.766 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.767 2 DEBUG oslo_concurrency.lockutils [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.767 2 DEBUG nova.network.neutron [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing network info cache for port 4e4cec89-b01e-4202-bc3e-a65ce8864017 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.770 2 DEBUG nova.virt.libvirt.vif [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.771 2 DEBUG nova.network.os_vif_util [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.771 2 DEBUG nova.network.os_vif_util [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.772 2 DEBUG os_vif [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.772 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 78 KiB/s wr, 12 op/s
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.773 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.776 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e4cec89-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.776 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4e4cec89-b0, col_values=(('external_ids', {'iface-id': '4e4cec89-b01e-4202-bc3e-a65ce8864017', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:88:e1', 'vm-uuid': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:03 np0005465604 NetworkManager[45129]: <info>  [1759395543.7802] manager: (tap4e4cec89-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/557)
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.788 2 INFO os_vif [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0')#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.788 2 DEBUG nova.virt.libvirt.vif [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.789 2 DEBUG nova.network.os_vif_util [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.789 2 DEBUG nova.network.os_vif_util [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.792 2 DEBUG nova.virt.libvirt.guest [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] attach device xml: <interface type="ethernet">
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:45:88:e1"/>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <target dev="tap4e4cec89-b0"/>
Oct  2 04:59:03 np0005465604 nova_compute[260603]: </interface>
Oct  2 04:59:03 np0005465604 nova_compute[260603]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 04:59:03 np0005465604 kernel: tap4e4cec89-b0: entered promiscuous mode
Oct  2 04:59:03 np0005465604 NetworkManager[45129]: <info>  [1759395543.8041] manager: (tap4e4cec89-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/558)
Oct  2 04:59:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:03Z|01395|binding|INFO|Claiming lport 4e4cec89-b01e-4202-bc3e-a65ce8864017 for this chassis.
Oct  2 04:59:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:03Z|01396|binding|INFO|4e4cec89-b01e-4202-bc3e-a65ce8864017: Claiming fa:16:3e:45:88:e1 10.100.0.28
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:03 np0005465604 systemd-udevd[399565]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.850 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:88:e1 10.100.0.28'], port_security=['fa:16:3e:45:88:e1 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf29bdfe-eec4-4bc1-887c-39f99d9387e9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4e4cec89-b01e-4202-bc3e-a65ce8864017) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:59:03 np0005465604 NetworkManager[45129]: <info>  [1759395543.8514] device (tap4e4cec89-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:03 np0005465604 NetworkManager[45129]: <info>  [1759395543.8521] device (tap4e4cec89-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.852 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4e4cec89-b01e-4202-bc3e-a65ce8864017 in datapath 5d0f6b84-ebf5-436d-83fe-b7739dc629d9 bound to our chassis#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.853 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d0f6b84-ebf5-436d-83fe-b7739dc629d9#033[00m
Oct  2 04:59:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:03Z|01397|binding|INFO|Setting lport 4e4cec89-b01e-4202-bc3e-a65ce8864017 ovn-installed in OVS
Oct  2 04:59:03 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:03Z|01398|binding|INFO|Setting lport 4e4cec89-b01e-4202-bc3e-a65ce8864017 up in Southbound
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.863 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e8f9e0-4c0b-4e68-af1e-7d07b62c23f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.864 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d0f6b84-e1 in ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.866 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d0f6b84-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.866 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83bf3796-b19d-4ecd-b085-ec69c2549a35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.867 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[baaf6604-8ffd-4df2-afe0-268f53f8f836]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.876 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f98bde0f-3b42-4c76-84c2-4ab5b156f2f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.898 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbfa6b2-6d88-4de0-ba52-c5704f4002f5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.925 2 DEBUG nova.virt.libvirt.driver [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.926 2 DEBUG nova.virt.libvirt.driver [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.926 2 DEBUG nova.virt.libvirt.driver [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:42:41:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.926 2 DEBUG nova.virt.libvirt.driver [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:45:88:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.926 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d3276dd1-8e33-4f40-83dc-bed4d76ae83d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.931 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a8238dd6-9f3c-4439-a364-4e522f9e9fb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:03 np0005465604 NetworkManager[45129]: <info>  [1759395543.9327] manager: (tap5d0f6b84-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/559)
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.957 2 DEBUG nova.virt.libvirt.guest [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:59:03</nova:creationTime>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 04:59:03 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:    <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct  2 04:59:03 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:    <nova:port uuid="4e4cec89-b01e-4202-bc3e-a65ce8864017">
Oct  2 04:59:03 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 04:59:03 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 04:59:03 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 04:59:03 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.967 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd5373d-e862-404b-8f1e-a66da4c3a909]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:03.970 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e11230ff-aee5-49fa-bc4c-771d76677433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:03 np0005465604 nova_compute[260603]: 2025-10-02 08:59:03.988 2 DEBUG oslo_concurrency.lockutils [None req-1c894a6c-3add-48cb-87a6-29746ababae7 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:03 np0005465604 NetworkManager[45129]: <info>  [1759395543.9957] device (tap5d0f6b84-e0): carrier: link connected
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.002 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[551ce90e-975a-445b-9b4f-24f5120834b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.018 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1179fd6e-b2a7-4cd6-96e5-b2c3286bf3a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d0f6b84-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:31:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642259, 'reachable_time': 20336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399592, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.033 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c41f4d79-667a-4140-a63f-4049ac7c15d4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:3180'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642259, 'tstamp': 642259}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399593, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.048 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e849f778-d822-425e-83fe-1ffb636152e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d0f6b84-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:31:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642259, 'reachable_time': 20336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399594, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.077 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8cef4d82-cc5e-48da-960a-56b674f661c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:04 np0005465604 nova_compute[260603]: 2025-10-02 08:59:04.097 2 DEBUG nova.compute.manager [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:59:04 np0005465604 nova_compute[260603]: 2025-10-02 08:59:04.098 2 DEBUG oslo_concurrency.lockutils [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:04 np0005465604 nova_compute[260603]: 2025-10-02 08:59:04.098 2 DEBUG oslo_concurrency.lockutils [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:04 np0005465604 nova_compute[260603]: 2025-10-02 08:59:04.099 2 DEBUG oslo_concurrency.lockutils [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:04 np0005465604 nova_compute[260603]: 2025-10-02 08:59:04.099 2 DEBUG nova.compute.manager [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:59:04 np0005465604 nova_compute[260603]: 2025-10-02 08:59:04.099 2 WARNING nova.compute.manager [req-ba450b3c-276e-4f8e-a363-9643bc151bdb req-a9a532c1-7952-4bd3-870b-27a44ca7d88d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.141 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f30eae9-a7cf-4445-8676-45e853898dd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.142 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d0f6b84-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.143 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.143 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d0f6b84-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:04 np0005465604 nova_compute[260603]: 2025-10-02 08:59:04.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:04 np0005465604 NetworkManager[45129]: <info>  [1759395544.1456] manager: (tap5d0f6b84-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Oct  2 04:59:04 np0005465604 kernel: tap5d0f6b84-e0: entered promiscuous mode
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.147 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d0f6b84-e0, col_values=(('external_ids', {'iface-id': 'f202c767-dd88-4dcf-bf75-a2c0dfdb6c1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:04 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:04Z|01399|binding|INFO|Releasing lport f202c767-dd88-4dcf-bf75-a2c0dfdb6c1d from this chassis (sb_readonly=0)
Oct  2 04:59:04 np0005465604 nova_compute[260603]: 2025-10-02 08:59:04.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:04 np0005465604 nova_compute[260603]: 2025-10-02 08:59:04.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.162 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d0f6b84-ebf5-436d-83fe-b7739dc629d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d0f6b84-ebf5-436d-83fe-b7739dc629d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.163 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9a464b8a-18f8-4d28-aceb-20d59b06e0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.164 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-5d0f6b84-ebf5-436d-83fe-b7739dc629d9
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/5d0f6b84-ebf5-436d-83fe-b7739dc629d9.pid.haproxy
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 5d0f6b84-ebf5-436d-83fe-b7739dc629d9
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 04:59:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:04.165 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'env', 'PROCESS_TAG=haproxy-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d0f6b84-ebf5-436d-83fe-b7739dc629d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 04:59:04 np0005465604 podman[399626]: 2025-10-02 08:59:04.579926116 +0000 UTC m=+0.057530695 container create 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 04:59:04 np0005465604 systemd[1]: Started libpod-conmon-9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb.scope.
Oct  2 04:59:04 np0005465604 podman[399626]: 2025-10-02 08:59:04.547335102 +0000 UTC m=+0.024939691 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 04:59:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:59:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/168d60d32efd9370edb94bf9730fe02355e22ea7cfe7215ac4f9b026d57c9878/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:04 np0005465604 podman[399626]: 2025-10-02 08:59:04.68571584 +0000 UTC m=+0.163320409 container init 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 04:59:04 np0005465604 podman[399626]: 2025-10-02 08:59:04.692841456 +0000 UTC m=+0.170445985 container start 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 04:59:04 np0005465604 podman[399643]: 2025-10-02 08:59:04.704273798 +0000 UTC m=+0.057170683 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 04:59:04 np0005465604 podman[399639]: 2025-10-02 08:59:04.704807125 +0000 UTC m=+0.066430717 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct  2 04:59:04 np0005465604 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [NOTICE]   (399684) : New worker (399688) forked
Oct  2 04:59:04 np0005465604 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [NOTICE]   (399684) : Loading success.
Oct  2 04:59:05 np0005465604 nova_compute[260603]: 2025-10-02 08:59:05.051 2 DEBUG nova.network.neutron [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated VIF entry in instance network info cache for port 4e4cec89-b01e-4202-bc3e-a65ce8864017. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:59:05 np0005465604 nova_compute[260603]: 2025-10-02 08:59:05.053 2 DEBUG nova.network.neutron [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:59:05 np0005465604 nova_compute[260603]: 2025-10-02 08:59:05.070 2 DEBUG oslo_concurrency.lockutils [req-8ce5fc3b-781f-42ca-9648-548085e98312 req-46c93f5e-e7c0-4d79-a157-d34fee054bc7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:59:05 np0005465604 nova_compute[260603]: 2025-10-02 08:59:05.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:05 np0005465604 nova_compute[260603]: 2025-10-02 08:59:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:59:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s wr, 0 op/s
Oct  2 04:59:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:06Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:88:e1 10.100.0.28
Oct  2 04:59:06 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:06Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:88:e1 10.100.0.28
Oct  2 04:59:06 np0005465604 nova_compute[260603]: 2025-10-02 08:59:06.231 2 DEBUG nova.compute.manager [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:59:06 np0005465604 nova_compute[260603]: 2025-10-02 08:59:06.231 2 DEBUG oslo_concurrency.lockutils [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:06 np0005465604 nova_compute[260603]: 2025-10-02 08:59:06.231 2 DEBUG oslo_concurrency.lockutils [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:06 np0005465604 nova_compute[260603]: 2025-10-02 08:59:06.232 2 DEBUG oslo_concurrency.lockutils [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:06 np0005465604 nova_compute[260603]: 2025-10-02 08:59:06.232 2 DEBUG nova.compute.manager [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:59:06 np0005465604 nova_compute[260603]: 2025-10-02 08:59:06.232 2 WARNING nova.compute.manager [req-9e8545b5-7c32-4e89-8ddd-c7c58454b458 req-961e7f52-f4af-4301-8129-4e3118cefc19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:59:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 121 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 19 KiB/s wr, 1 op/s
Oct  2 04:59:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:08 np0005465604 nova_compute[260603]: 2025-10-02 08:59:08.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 121 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 0 op/s
Oct  2 04:59:10 np0005465604 nova_compute[260603]: 2025-10-02 08:59:10.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:10 np0005465604 nova_compute[260603]: 2025-10-02 08:59:10.531 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:59:10 np0005465604 nova_compute[260603]: 2025-10-02 08:59:10.532 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 04:59:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 121 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 8.3 KiB/s wr, 0 op/s
Oct  2 04:59:11 np0005465604 nova_compute[260603]: 2025-10-02 08:59:11.915 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:11 np0005465604 nova_compute[260603]: 2025-10-02 08:59:11.916 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:11 np0005465604 nova_compute[260603]: 2025-10-02 08:59:11.935 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.002 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.002 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.010 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.011 2 INFO nova.compute.claims [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.116 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:59:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:59:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051964863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.566 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.576 2 DEBUG nova.compute.provider_tree [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.593 2 DEBUG nova.scheduler.client.report [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.613 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.614 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.658 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.658 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.679 2 INFO nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.701 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.788 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.790 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.790 2 INFO nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Creating image(s)#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.816 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.843 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.871 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.875 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.974 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.975 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.976 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:12 np0005465604 nova_compute[260603]: 2025-10-02 08:59:12.976 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.010 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.017 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f141f189-a224-4ac7-88b5-c28f198944e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.315 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 f141f189-a224-4ac7-88b5-c28f198944e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.409 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.503 2 DEBUG nova.objects.instance [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid f141f189-a224-4ac7-88b5-c28f198944e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.528 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.529 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Ensure instance console log exists: /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.530 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.530 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.531 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.689 2 DEBUG nova.policy [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 04:59:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 131 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 481 KiB/s wr, 12 op/s
Oct  2 04:59:13 np0005465604 nova_compute[260603]: 2025-10-02 08:59:13.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:59:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c0b6fb01-0bed-4b4b-a56a-27deff671a21 does not exist
Oct  2 04:59:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2b68cb83-a7f4-40b9-936c-9f3b8f9b44b5 does not exist
Oct  2 04:59:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b74cc849-e8b7-4bd8-a642-003e15510055 does not exist
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 04:59:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 04:59:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 04:59:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:59:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 04:59:14 np0005465604 podman[400156]: 2025-10-02 08:59:14.447210775 +0000 UTC m=+0.047332403 container create 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 04:59:14 np0005465604 systemd[1]: Started libpod-conmon-5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5.scope.
Oct  2 04:59:14 np0005465604 podman[400156]: 2025-10-02 08:59:14.425330831 +0000 UTC m=+0.025452499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:59:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:59:14 np0005465604 podman[400156]: 2025-10-02 08:59:14.542367011 +0000 UTC m=+0.142488669 container init 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:59:14 np0005465604 podman[400156]: 2025-10-02 08:59:14.553027519 +0000 UTC m=+0.153149167 container start 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 04:59:14 np0005465604 podman[400156]: 2025-10-02 08:59:14.556999985 +0000 UTC m=+0.157121633 container attach 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:59:14 np0005465604 hungry_cartwright[400172]: 167 167
Oct  2 04:59:14 np0005465604 systemd[1]: libpod-5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5.scope: Deactivated successfully.
Oct  2 04:59:14 np0005465604 podman[400156]: 2025-10-02 08:59:14.560849967 +0000 UTC m=+0.160971605 container died 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct  2 04:59:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d7023d175c066b291013a3c6702d6740aa066d4fdbc3c2f34ada44fa37e46df8-merged.mount: Deactivated successfully.
Oct  2 04:59:14 np0005465604 podman[400156]: 2025-10-02 08:59:14.601125064 +0000 UTC m=+0.201246682 container remove 5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_cartwright, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:59:14 np0005465604 systemd[1]: libpod-conmon-5c4c021f1c90570c95dd9634ea99885c425760c46217d77ba9ca21412447efd5.scope: Deactivated successfully.
Oct  2 04:59:14 np0005465604 podman[400196]: 2025-10-02 08:59:14.811194344 +0000 UTC m=+0.049603434 container create a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 04:59:14 np0005465604 systemd[1]: Started libpod-conmon-a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59.scope.
Oct  2 04:59:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:59:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:14 np0005465604 podman[400196]: 2025-10-02 08:59:14.785130838 +0000 UTC m=+0.023539958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:59:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:14 np0005465604 podman[400196]: 2025-10-02 08:59:14.888391281 +0000 UTC m=+0.126800381 container init a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 04:59:14 np0005465604 podman[400196]: 2025-10-02 08:59:14.897009394 +0000 UTC m=+0.135418484 container start a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 04:59:14 np0005465604 podman[400196]: 2025-10-02 08:59:14.900514106 +0000 UTC m=+0.138923206 container attach a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 04:59:15 np0005465604 nova_compute[260603]: 2025-10-02 08:59:15.265 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Successfully created port: 48cfc997-bd30-40b4-a387-f40a75731793 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 04:59:15 np0005465604 nova_compute[260603]: 2025-10-02 08:59:15.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 04:59:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 131 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 476 KiB/s wr, 12 op/s
Oct  2 04:59:16 np0005465604 modest_kare[400212]: --> passed data devices: 0 physical, 3 LVM
Oct  2 04:59:16 np0005465604 modest_kare[400212]: --> relative data size: 1.0
Oct  2 04:59:16 np0005465604 modest_kare[400212]: --> All data devices are unavailable
Oct  2 04:59:16 np0005465604 systemd[1]: libpod-a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59.scope: Deactivated successfully.
Oct  2 04:59:16 np0005465604 podman[400196]: 2025-10-02 08:59:16.050840835 +0000 UTC m=+1.289249915 container died a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:59:16 np0005465604 systemd[1]: libpod-a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59.scope: Consumed 1.052s CPU time.
Oct  2 04:59:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f412558a045f7a2f0390bb7e7ae505eef113770507bfd71bfc4e324402daa33d-merged.mount: Deactivated successfully.
Oct  2 04:59:16 np0005465604 podman[400196]: 2025-10-02 08:59:16.216355542 +0000 UTC m=+1.454764622 container remove a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 04:59:16 np0005465604 systemd[1]: libpod-conmon-a6dd3b11c15b7803c30de31c3826ffa115f5b3453ed2be52155b893e1a3f0f59.scope: Deactivated successfully.
Oct  2 04:59:16 np0005465604 nova_compute[260603]: 2025-10-02 08:59:16.650 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Successfully updated port: 48cfc997-bd30-40b4-a387-f40a75731793 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 04:59:16 np0005465604 nova_compute[260603]: 2025-10-02 08:59:16.669 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:59:16 np0005465604 nova_compute[260603]: 2025-10-02 08:59:16.670 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 04:59:16 np0005465604 nova_compute[260603]: 2025-10-02 08:59:16.670 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 04:59:16 np0005465604 nova_compute[260603]: 2025-10-02 08:59:16.775 2 DEBUG nova.compute.manager [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-changed-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 04:59:16 np0005465604 nova_compute[260603]: 2025-10-02 08:59:16.775 2 DEBUG nova.compute.manager [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Refreshing instance network info cache due to event network-changed-48cfc997-bd30-40b4-a387-f40a75731793. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 04:59:16 np0005465604 nova_compute[260603]: 2025-10-02 08:59:16.776 2 DEBUG oslo_concurrency.lockutils [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 04:59:16 np0005465604 nova_compute[260603]: 2025-10-02 08:59:16.878 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 04:59:16 np0005465604 podman[400398]: 2025-10-02 08:59:16.956672883 +0000 UTC m=+0.077156537 container create c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:59:17 np0005465604 systemd[1]: Started libpod-conmon-c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2.scope.
Oct  2 04:59:17 np0005465604 podman[400398]: 2025-10-02 08:59:16.92659719 +0000 UTC m=+0.047080894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:59:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:59:17 np0005465604 podman[400398]: 2025-10-02 08:59:17.061656571 +0000 UTC m=+0.182140285 container init c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:59:17 np0005465604 podman[400398]: 2025-10-02 08:59:17.072432263 +0000 UTC m=+0.192915887 container start c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:59:17 np0005465604 podman[400398]: 2025-10-02 08:59:17.076424299 +0000 UTC m=+0.196908013 container attach c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:59:17 np0005465604 hardcore_chaplygin[400414]: 167 167
Oct  2 04:59:17 np0005465604 systemd[1]: libpod-c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2.scope: Deactivated successfully.
Oct  2 04:59:17 np0005465604 podman[400398]: 2025-10-02 08:59:17.080209259 +0000 UTC m=+0.200692963 container died c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 04:59:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-204697e4a0f8d029d99359bef7c99e743ce1fc32c0fbd68b2aaf4038d4fc00a2-merged.mount: Deactivated successfully.
Oct  2 04:59:17 np0005465604 podman[400398]: 2025-10-02 08:59:17.129105149 +0000 UTC m=+0.249588773 container remove c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chaplygin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 04:59:17 np0005465604 systemd[1]: libpod-conmon-c9ca832f57d9493e6e709f5b5ac1edcc62b9297e3a1e5f0739999d52cf23c3a2.scope: Deactivated successfully.
Oct  2 04:59:17 np0005465604 podman[400439]: 2025-10-02 08:59:17.406287227 +0000 UTC m=+0.067239763 container create 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:59:17 np0005465604 systemd[1]: Started libpod-conmon-943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93.scope.
Oct  2 04:59:17 np0005465604 podman[400439]: 2025-10-02 08:59:17.382641567 +0000 UTC m=+0.043594173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:59:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:59:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e48faacbb162b4916f6e59b5bdedeaed48843fe5c841ef1b3f5fbcab817442/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e48faacbb162b4916f6e59b5bdedeaed48843fe5c841ef1b3f5fbcab817442/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e48faacbb162b4916f6e59b5bdedeaed48843fe5c841ef1b3f5fbcab817442/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e48faacbb162b4916f6e59b5bdedeaed48843fe5c841ef1b3f5fbcab817442/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:17 np0005465604 podman[400439]: 2025-10-02 08:59:17.512020679 +0000 UTC m=+0.172973285 container init 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:59:17 np0005465604 podman[400439]: 2025-10-02 08:59:17.533138799 +0000 UTC m=+0.194091325 container start 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 04:59:17 np0005465604 podman[400439]: 2025-10-02 08:59:17.536763983 +0000 UTC m=+0.197716539 container attach 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:59:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 167 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct  2 04:59:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]: {
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:    "0": [
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:        {
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "devices": [
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "/dev/loop3"
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            ],
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_name": "ceph_lv0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_size": "21470642176",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "name": "ceph_lv0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "tags": {
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.cluster_name": "ceph",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.crush_device_class": "",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.encrypted": "0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.osd_id": "0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.type": "block",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.vdo": "0"
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            },
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "type": "block",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "vg_name": "ceph_vg0"
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:        }
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:    ],
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:    "1": [
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:        {
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "devices": [
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "/dev/loop4"
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            ],
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_name": "ceph_lv1",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_size": "21470642176",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "name": "ceph_lv1",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "tags": {
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.cluster_name": "ceph",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.crush_device_class": "",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.encrypted": "0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.osd_id": "1",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.type": "block",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.vdo": "0"
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            },
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "type": "block",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "vg_name": "ceph_vg1"
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:        }
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:    ],
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:    "2": [
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:        {
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "devices": [
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "/dev/loop5"
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            ],
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_name": "ceph_lv2",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_size": "21470642176",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "name": "ceph_lv2",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "tags": {
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.cephx_lockbox_secret": "",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.cluster_name": "ceph",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.crush_device_class": "",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.encrypted": "0",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.osd_id": "2",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.type": "block",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:                "ceph.vdo": "0"
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            },
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "type": "block",
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:            "vg_name": "ceph_vg2"
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:        }
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]:    ]
Oct  2 04:59:18 np0005465604 frosty_dijkstra[400455]: }
Oct  2 04:59:18 np0005465604 systemd[1]: libpod-943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93.scope: Deactivated successfully.
Oct  2 04:59:18 np0005465604 podman[400439]: 2025-10-02 08:59:18.307329033 +0000 UTC m=+0.968281559 container died 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 04:59:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-12e48faacbb162b4916f6e59b5bdedeaed48843fe5c841ef1b3f5fbcab817442-merged.mount: Deactivated successfully.
Oct  2 04:59:18 np0005465604 podman[400439]: 2025-10-02 08:59:18.387382341 +0000 UTC m=+1.048334867 container remove 943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_dijkstra, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 04:59:18 np0005465604 systemd[1]: libpod-conmon-943c791431135a89f161c6144e2ad9977521c37b559909c165501908d0ce8a93.scope: Deactivated successfully.
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.851 2 DEBUG nova.network.neutron [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Updating instance_info_cache with network_info: [{"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.879 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.880 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Instance network_info: |[{"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.881 2 DEBUG oslo_concurrency.lockutils [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.881 2 DEBUG nova.network.neutron [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Refreshing network info cache for port 48cfc997-bd30-40b4-a387-f40a75731793 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.886 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Start _get_guest_xml network_info=[{"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.895 2 WARNING nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.914 2 DEBUG nova.virt.libvirt.host [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.915 2 DEBUG nova.virt.libvirt.host [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.920 2 DEBUG nova.virt.libvirt.host [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.921 2 DEBUG nova.virt.libvirt.host [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.922 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.922 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.923 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.923 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.924 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.924 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.924 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.924 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.925 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.925 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.925 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.926 2 DEBUG nova.virt.hardware [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 04:59:18 np0005465604 nova_compute[260603]: 2025-10-02 08:59:18.931 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:59:19 np0005465604 podman[400637]: 2025-10-02 08:59:19.31722694 +0000 UTC m=+0.059486687 container create 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 04:59:19 np0005465604 systemd[1]: Started libpod-conmon-1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee.scope.
Oct  2 04:59:19 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:59:19 np0005465604 podman[400637]: 2025-10-02 08:59:19.286871208 +0000 UTC m=+0.029130985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:59:19 np0005465604 podman[400637]: 2025-10-02 08:59:19.389686287 +0000 UTC m=+0.131946054 container init 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:59:19 np0005465604 podman[400637]: 2025-10-02 08:59:19.399297352 +0000 UTC m=+0.141557099 container start 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 04:59:19 np0005465604 podman[400637]: 2025-10-02 08:59:19.40239092 +0000 UTC m=+0.144650697 container attach 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 04:59:19 np0005465604 intelligent_rhodes[400654]: 167 167
Oct  2 04:59:19 np0005465604 systemd[1]: libpod-1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee.scope: Deactivated successfully.
Oct  2 04:59:19 np0005465604 podman[400637]: 2025-10-02 08:59:19.408943158 +0000 UTC m=+0.151202915 container died 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 04:59:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:59:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/882428886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:59:19 np0005465604 systemd[1]: var-lib-containers-storage-overlay-29e8f6feb5bb5784d851269a6099e8b279d475a28e1ea7494c98b408192c895c-merged.mount: Deactivated successfully.
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.444 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:59:19 np0005465604 podman[400637]: 2025-10-02 08:59:19.44748163 +0000 UTC m=+0.189741387 container remove 1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:59:19 np0005465604 systemd[1]: libpod-conmon-1334960ef05dffce44e0c7def0913fd842477815ce11080fc6c322cf0e6843ee.scope: Deactivated successfully.
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.477 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.483 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:59:19 np0005465604 podman[400697]: 2025-10-02 08:59:19.643580267 +0000 UTC m=+0.041755744 container create b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 04:59:19 np0005465604 systemd[1]: Started libpod-conmon-b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a.scope.
Oct  2 04:59:19 np0005465604 systemd[1]: Started libcrun container.
Oct  2 04:59:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d90a6c527c8f6d473918ee6fd7f0fad7f8db4350821b61cdb03038154400bc5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d90a6c527c8f6d473918ee6fd7f0fad7f8db4350821b61cdb03038154400bc5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d90a6c527c8f6d473918ee6fd7f0fad7f8db4350821b61cdb03038154400bc5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d90a6c527c8f6d473918ee6fd7f0fad7f8db4350821b61cdb03038154400bc5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 04:59:19 np0005465604 podman[400697]: 2025-10-02 08:59:19.626164005 +0000 UTC m=+0.024339502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 04:59:19 np0005465604 podman[400697]: 2025-10-02 08:59:19.74145274 +0000 UTC m=+0.139628267 container init b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 04:59:19 np0005465604 podman[400697]: 2025-10-02 08:59:19.748792223 +0000 UTC m=+0.146967700 container start b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 04:59:19 np0005465604 podman[400697]: 2025-10-02 08:59:19.7524821 +0000 UTC m=+0.150657617 container attach b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 04:59:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:59:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 04:59:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766387724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.938 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.941 2 DEBUG nova.virt.libvirt.vif [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:59:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-300987939',display_name='tempest-TestNetworkBasicOps-server-300987939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-300987939',id=130,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCJjWfAj56mQeyeIvU6XGvvjANhAOW4ymkjaU7nDK7hdgdQPn23X31SFk66pyTpQCxaiWSbcIfBie1wBVKGMKO2QIcNLprOGSD1fN9DoZ9nw1hLigEh1SZTI3GQ/zevNFg==',key_name='tempest-TestNetworkBasicOps-1976267539',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-3sfliqo6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:59:12Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=f141f189-a224-4ac7-88b5-c28f198944e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.941 2 DEBUG nova.network.os_vif_util [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.942 2 DEBUG nova.network.os_vif_util [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.944 2 DEBUG nova.objects.instance [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid f141f189-a224-4ac7-88b5-c28f198944e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.969 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  <uuid>f141f189-a224-4ac7-88b5-c28f198944e4</uuid>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  <name>instance-00000082</name>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-300987939</nova:name>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 08:59:18</nova:creationTime>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <nova:port uuid="48cfc997-bd30-40b4-a387-f40a75731793">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <system>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <entry name="serial">f141f189-a224-4ac7-88b5-c28f198944e4</entry>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <entry name="uuid">f141f189-a224-4ac7-88b5-c28f198944e4</entry>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    </system>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  <os>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  </os>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  <features>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  </features>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  </clock>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  <devices>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f141f189-a224-4ac7-88b5-c28f198944e4_disk">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/f141f189-a224-4ac7-88b5-c28f198944e4_disk.config">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      </source>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      </auth>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    </disk>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:51:dd:9b"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <target dev="tap48cfc997-bd"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    </interface>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/console.log" append="off"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    </serial>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <video>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    </video>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    </rng>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 04:59:19 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 04:59:19 np0005465604 nova_compute[260603]:  </devices>
Oct  2 04:59:19 np0005465604 nova_compute[260603]: </domain>
Oct  2 04:59:19 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.970 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Preparing to wait for external event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.971 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.971 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.971 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.972 2 DEBUG nova.virt.libvirt.vif [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T08:59:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-300987939',display_name='tempest-TestNetworkBasicOps-server-300987939',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-300987939',id=130,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCJjWfAj56mQeyeIvU6XGvvjANhAOW4ymkjaU7nDK7hdgdQPn23X31SFk66pyTpQCxaiWSbcIfBie1wBVKGMKO2QIcNLprOGSD1fN9DoZ9nw1hLigEh1SZTI3GQ/zevNFg==',key_name='tempest-TestNetworkBasicOps-1976267539',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-3sfliqo6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T08:59:12Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=f141f189-a224-4ac7-88b5-c28f198944e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.972 2 DEBUG nova.network.os_vif_util [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.972 2 DEBUG nova.network.os_vif_util [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.973 2 DEBUG os_vif [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.974 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.974 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48cfc997-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:19 np0005465604 nova_compute[260603]: 2025-10-02 08:59:19.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap48cfc997-bd, col_values=(('external_ids', {'iface-id': '48cfc997-bd30-40b4-a387-f40a75731793', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:dd:9b', 'vm-uuid': 'f141f189-a224-4ac7-88b5-c28f198944e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:20 np0005465604 NetworkManager[45129]: <info>  [1759395560.0055] manager: (tap48cfc997-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/561)
Oct  2 04:59:20 np0005465604 nova_compute[260603]: 2025-10-02 08:59:20.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:20 np0005465604 nova_compute[260603]: 2025-10-02 08:59:20.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 04:59:20 np0005465604 nova_compute[260603]: 2025-10-02 08:59:20.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:20 np0005465604 nova_compute[260603]: 2025-10-02 08:59:20.017 2 INFO os_vif [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd')#033[00m
Oct  2 04:59:20 np0005465604 nova_compute[260603]: 2025-10-02 08:59:20.099 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:59:20 np0005465604 nova_compute[260603]: 2025-10-02 08:59:20.100 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 04:59:20 np0005465604 nova_compute[260603]: 2025-10-02 08:59:20.100 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:51:dd:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 04:59:20 np0005465604 nova_compute[260603]: 2025-10-02 08:59:20.101 2 INFO nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Using config drive#033[00m
Oct  2 04:59:20 np0005465604 nova_compute[260603]: 2025-10-02 08:59:20.132 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:59:20 np0005465604 nova_compute[260603]: 2025-10-02 08:59:20.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]: {
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "osd_id": 2,
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "type": "bluestore"
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:    },
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "osd_id": 1,
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "type": "bluestore"
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:    },
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "osd_id": 0,
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:        "type": "bluestore"
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]:    }
Oct  2 04:59:20 np0005465604 youthful_neumann[400732]: }
Oct  2 04:59:20 np0005465604 systemd[1]: libpod-b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a.scope: Deactivated successfully.
Oct  2 04:59:20 np0005465604 podman[400787]: 2025-10-02 08:59:20.790999824 +0000 UTC m=+0.030032923 container died b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 04:59:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d90a6c527c8f6d473918ee6fd7f0fad7f8db4350821b61cdb03038154400bc5d-merged.mount: Deactivated successfully.
Oct  2 04:59:20 np0005465604 podman[400787]: 2025-10-02 08:59:20.859436003 +0000 UTC m=+0.098469022 container remove b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 04:59:20 np0005465604 systemd[1]: libpod-conmon-b90f19f2db1aafca793fcdaa0f05faabb82d793665d9ab9c26569c3408a5257a.scope: Deactivated successfully.
Oct  2 04:59:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 04:59:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:59:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 04:59:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:59:20 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 89fdd375-cd66-4865-8678-8614ec26bc1f does not exist
Oct  2 04:59:20 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev eb3b960d-0c60-424a-8e9f-d40ad4e5a9c3 does not exist
Oct  2 04:59:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:59:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 04:59:21 np0005465604 nova_compute[260603]: 2025-10-02 08:59:21.618 2 INFO nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Creating config drive at /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config#033[00m
Oct  2 04:59:21 np0005465604 nova_compute[260603]: 2025-10-02 08:59:21.623 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9izmn284 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:59:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 04:59:21 np0005465604 nova_compute[260603]: 2025-10-02 08:59:21.784 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9izmn284" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:59:21 np0005465604 nova_compute[260603]: 2025-10-02 08:59:21.814 2 DEBUG nova.storage.rbd_utils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image f141f189-a224-4ac7-88b5-c28f198944e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 04:59:21 np0005465604 nova_compute[260603]: 2025-10-02 08:59:21.818 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config f141f189-a224-4ac7-88b5-c28f198944e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:59:21 np0005465604 nova_compute[260603]: 2025-10-02 08:59:21.994 2 DEBUG oslo_concurrency.processutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config f141f189-a224-4ac7-88b5-c28f198944e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:59:21 np0005465604 nova_compute[260603]: 2025-10-02 08:59:21.996 2 INFO nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Deleting local config drive /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4/disk.config because it was imported into RBD.#033[00m
Oct  2 04:59:22 np0005465604 kernel: tap48cfc997-bd: entered promiscuous mode
Oct  2 04:59:22 np0005465604 NetworkManager[45129]: <info>  [1759395562.0621] manager: (tap48cfc997-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/562)
Oct  2 04:59:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:22Z|01400|binding|INFO|Claiming lport 48cfc997-bd30-40b4-a387-f40a75731793 for this chassis.
Oct  2 04:59:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:22Z|01401|binding|INFO|48cfc997-bd30-40b4-a387-f40a75731793: Claiming fa:16:3e:51:dd:9b 10.100.0.27
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.090 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:dd:9b 10.100.0.27'], port_security=['fa:16:3e:51:dd:9b 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'f141f189-a224-4ac7-88b5-c28f198944e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3569eb1-74db-4df0-93c4-3e65c3d95428', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf29bdfe-eec4-4bc1-887c-39f99d9387e9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=48cfc997-bd30-40b4-a387-f40a75731793) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.091 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 48cfc997-bd30-40b4-a387-f40a75731793 in datapath 5d0f6b84-ebf5-436d-83fe-b7739dc629d9 bound to our chassis#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.093 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d0f6b84-ebf5-436d-83fe-b7739dc629d9#033[00m
Oct  2 04:59:22 np0005465604 systemd-udevd[400904]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 04:59:22 np0005465604 systemd-machined[214636]: New machine qemu-164-instance-00000082.
Oct  2 04:59:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:22Z|01402|binding|INFO|Setting lport 48cfc997-bd30-40b4-a387-f40a75731793 ovn-installed in OVS
Oct  2 04:59:22 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:22Z|01403|binding|INFO|Setting lport 48cfc997-bd30-40b4-a387-f40a75731793 up in Southbound
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.115 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9be95f4-179e-4c35-bad1-ed68e1d038a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:22 np0005465604 NetworkManager[45129]: <info>  [1759395562.1191] device (tap48cfc997-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 04:59:22 np0005465604 NetworkManager[45129]: <info>  [1759395562.1201] device (tap48cfc997-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 04:59:22 np0005465604 systemd[1]: Started Virtual Machine qemu-164-instance-00000082.
Oct  2 04:59:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 04:59:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1851268524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 04:59:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 04:59:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1851268524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.158 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9d930683-1a5b-434c-b85b-c34f8b62a9a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.164 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[39c277a7-7b48-47a6-bd56-26754fda75ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.207 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ff688b8c-bd89-4e26-8e27-263bb16f6f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.225 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa23c17f-eee6-4ca0-9a89-a8ffdf2f8034]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d0f6b84-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:31:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642259, 'reachable_time': 20336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400919, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.245 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7637380f-50a0-4732-b412-c8e9948749d9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d0f6b84-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642270, 'tstamp': 642270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400920, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap5d0f6b84-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642273, 'tstamp': 642273}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400920, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.247 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d0f6b84-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.255 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d0f6b84-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.255 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.256 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d0f6b84-e0, col_values=(('external_ids', {'iface-id': 'f202c767-dd88-4dcf-bf75-a2c0dfdb6c1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:22.257 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.600 2 DEBUG nova.network.neutron [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Updated VIF entry in instance network info cache for port 48cfc997-bd30-40b4-a387-f40a75731793. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.601 2 DEBUG nova.network.neutron [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Updating instance_info_cache with network_info: [{"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.617 2 DEBUG oslo_concurrency.lockutils [req-8b6eb120-fc57-4e4d-b8f4-71ef53115b49 req-c2914373-9eda-4605-926d-78cdd527006d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-f141f189-a224-4ac7-88b5-c28f198944e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.754 2 DEBUG nova.compute.manager [req-67d37b99-0ce9-4221-97a7-12cf413888c8 req-bb15543f-1fb4-4735-a904-ff149923bf74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.755 2 DEBUG oslo_concurrency.lockutils [req-67d37b99-0ce9-4221-97a7-12cf413888c8 req-bb15543f-1fb4-4735-a904-ff149923bf74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.755 2 DEBUG oslo_concurrency.lockutils [req-67d37b99-0ce9-4221-97a7-12cf413888c8 req-bb15543f-1fb4-4735-a904-ff149923bf74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.756 2 DEBUG oslo_concurrency.lockutils [req-67d37b99-0ce9-4221-97a7-12cf413888c8 req-bb15543f-1fb4-4735-a904-ff149923bf74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:22 np0005465604 nova_compute[260603]: 2025-10-02 08:59:22.756 2 DEBUG nova.compute.manager [req-67d37b99-0ce9-4221-97a7-12cf413888c8 req-bb15543f-1fb4-4735-a904-ff149923bf74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Processing event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 04:59:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.124 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395564.12463, f141f189-a224-4ac7-88b5-c28f198944e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.126 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] VM Started (Lifecycle Event)#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.127 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.133 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.137 2 INFO nova.virt.libvirt.driver [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Instance spawned successfully.#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.138 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.145 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.150 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.160 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.161 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.162 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.163 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.163 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.164 2 DEBUG nova.virt.libvirt.driver [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.177 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.177 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395564.124697, f141f189-a224-4ac7-88b5-c28f198944e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.178 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] VM Paused (Lifecycle Event)#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.207 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.212 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395564.1313515, f141f189-a224-4ac7-88b5-c28f198944e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.212 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.237 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.241 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.248 2 INFO nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Took 11.46 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.249 2 DEBUG nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.263 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.314 2 INFO nova.compute.manager [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Took 12.34 seconds to build instance.#033[00m
Oct  2 04:59:24 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.331 2 DEBUG oslo_concurrency.lockutils [None req-fff7bc98-a6f9-4924-b38d-21deb46bc5c5 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.865 2 DEBUG nova.compute.manager [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.865 2 DEBUG oslo_concurrency.lockutils [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.865 2 DEBUG oslo_concurrency.lockutils [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.865 2 DEBUG oslo_concurrency.lockutils [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.866 2 DEBUG nova.compute.manager [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] No waiting events found dispatching network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 04:59:24 np0005465604 nova_compute[260603]: 2025-10-02 08:59:24.866 2 WARNING nova.compute.manager [req-7e6bf174-037b-45e7-ba68-0b86e7172d2d req-36cfbfb4-83aa-4c07-8c53-b3df6134454b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received unexpected event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 for instance with vm_state active and task_state None.#033[00m
Oct  2 04:59:25 np0005465604 nova_compute[260603]: 2025-10-02 08:59:25.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:25 np0005465604 nova_compute[260603]: 2025-10-02 08:59:25.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 21 op/s
Oct  2 04:59:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 108 op/s
Oct  2 04:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:59:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_08:59:28
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', 'volumes', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 04:59:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:59:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 04:59:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 136 op/s
Oct  2 04:59:30 np0005465604 nova_compute[260603]: 2025-10-02 08:59:30.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:30 np0005465604 podman[400965]: 2025-10-02 08:59:30.002345705 +0000 UTC m=+0.068640098 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 04:59:30 np0005465604 podman[400964]: 2025-10-02 08:59:30.056625095 +0000 UTC m=+0.122675781 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 04:59:30 np0005465604 nova_compute[260603]: 2025-10-02 08:59:30.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 134 op/s
Oct  2 04:59:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 134 op/s
Oct  2 04:59:34 np0005465604 nova_compute[260603]: 2025-10-02 08:59:34.538 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:59:34 np0005465604 nova_compute[260603]: 2025-10-02 08:59:34.538 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 04:59:34 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct  2 04:59:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:34.840 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:35 np0005465604 nova_compute[260603]: 2025-10-02 08:59:35.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:35 np0005465604 podman[401007]: 2025-10-02 08:59:35.048876345 +0000 UTC m=+0.096548531 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 04:59:35 np0005465604 podman[401008]: 2025-10-02 08:59:35.057904522 +0000 UTC m=+0.101627153 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 04:59:35 np0005465604 nova_compute[260603]: 2025-10-02 08:59:35.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 167 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 KiB/s wr, 129 op/s
Oct  2 04:59:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:35Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:51:dd:9b 10.100.0.27
Oct  2 04:59:35 np0005465604 ovn_controller[152344]: 2025-10-02T08:59:35Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:51:dd:9b 10.100.0.27
Oct  2 04:59:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 182 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 163 op/s
Oct  2 04:59:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013817624902324672 of space, bias 1.0, pg target 0.41452874706974013 quantized to 32 (current 32)
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 04:59:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 04:59:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 200 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 898 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct  2 04:59:40 np0005465604 nova_compute[260603]: 2025-10-02 08:59:40.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:40 np0005465604 nova_compute[260603]: 2025-10-02 08:59:40.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 200 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 04:59:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 04:59:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:44.729 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 04:59:44 np0005465604 nova_compute[260603]: 2025-10-02 08:59:44.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:44.731 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 04:59:44 np0005465604 nova_compute[260603]: 2025-10-02 08:59:44.877 2 DEBUG nova.compute.manager [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-changed-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 04:59:44 np0005465604 nova_compute[260603]: 2025-10-02 08:59:44.878 2 DEBUG nova.compute.manager [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing instance network info cache due to event network-changed-4e4cec89-b01e-4202-bc3e-a65ce8864017. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 04:59:44 np0005465604 nova_compute[260603]: 2025-10-02 08:59:44.878 2 DEBUG oslo_concurrency.lockutils [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:59:44 np0005465604 nova_compute[260603]: 2025-10-02 08:59:44.879 2 DEBUG oslo_concurrency.lockutils [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:59:44 np0005465604 nova_compute[260603]: 2025-10-02 08:59:44.879 2 DEBUG nova.network.neutron [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing network info cache for port 4e4cec89-b01e-4202-bc3e-a65ce8864017 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 04:59:45 np0005465604 nova_compute[260603]: 2025-10-02 08:59:45.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:45 np0005465604 nova_compute[260603]: 2025-10-02 08:59:45.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 08:59:45.732 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 04:59:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 04:59:46 np0005465604 nova_compute[260603]: 2025-10-02 08:59:46.431 2 DEBUG nova.network.neutron [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated VIF entry in instance network info cache for port 4e4cec89-b01e-4202-bc3e-a65ce8864017. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 04:59:46 np0005465604 nova_compute[260603]: 2025-10-02 08:59:46.432 2 DEBUG nova.network.neutron [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:59:46 np0005465604 nova_compute[260603]: 2025-10-02 08:59:46.453 2 DEBUG oslo_concurrency.lockutils [req-1d095903-1f8d-4d16-9356-1cc8c1a24678 req-5f4143c2-32fa-4a5c-a998-b1f4708d85d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:59:46 np0005465604 nova_compute[260603]: 2025-10-02 08:59:46.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:59:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 04:59:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:48 np0005465604 nova_compute[260603]: 2025-10-02 08:59:48.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:59:48 np0005465604 nova_compute[260603]: 2025-10-02 08:59:48.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 04:59:48 np0005465604 nova_compute[260603]: 2025-10-02 08:59:48.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 04:59:49 np0005465604 nova_compute[260603]: 2025-10-02 08:59:49.621 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 04:59:49 np0005465604 nova_compute[260603]: 2025-10-02 08:59:49.622 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 04:59:49 np0005465604 nova_compute[260603]: 2025-10-02 08:59:49.622 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 04:59:49 np0005465604 nova_compute[260603]: 2025-10-02 08:59:49.623 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 04:59:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 767 KiB/s wr, 28 op/s
Oct  2 04:59:50 np0005465604 nova_compute[260603]: 2025-10-02 08:59:50.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:50 np0005465604 nova_compute[260603]: 2025-10-02 08:59:50.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Oct  2 04:59:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s wr, 1 op/s
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.536 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.559 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.559 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.560 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.560 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.561 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.561 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.591 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.592 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.592 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.592 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 04:59:54 np0005465604 nova_compute[260603]: 2025-10-02 08:59:54.593 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:59:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:59:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736591728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.064 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.154 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.155 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.160 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.161 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.406 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.407 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3261MB free_disk=59.89701843261719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.407 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.407 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.486 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance b4eacfa3-8b31-492a-b3c5-829a890a4aae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.486 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance f141f189-a224-4ac7-88b5-c28f198944e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.486 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.487 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.503 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.522 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.522 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.535 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.561 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 04:59:55 np0005465604 nova_compute[260603]: 2025-10-02 08:59:55.614 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 04:59:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 1 op/s
Oct  2 04:59:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 04:59:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894883514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 04:59:56 np0005465604 nova_compute[260603]: 2025-10-02 08:59:56.029 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 04:59:56 np0005465604 nova_compute[260603]: 2025-10-02 08:59:56.038 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 04:59:56 np0005465604 nova_compute[260603]: 2025-10-02 08:59:56.070 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 04:59:56 np0005465604 nova_compute[260603]: 2025-10-02 08:59:56.097 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 04:59:56 np0005465604 nova_compute[260603]: 2025-10-02 08:59:56.097 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 04:59:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Oct  2 04:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 04:59:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 04:59:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 04:59:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Oct  2 05:00:00 np0005465604 nova_compute[260603]: 2025-10-02 09:00:00.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:00 np0005465604 nova_compute[260603]: 2025-10-02 09:00:00.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:01 np0005465604 podman[401095]: 2025-10-02 09:00:01.035074663 +0000 UTC m=+0.084859612 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 05:00:01 np0005465604 podman[401094]: 2025-10-02 09:00:01.084253702 +0000 UTC m=+0.138755580 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct  2 05:00:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Oct  2 05:00:03 np0005465604 nova_compute[260603]: 2025-10-02 09:00:03.055 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:00:03 np0005465604 nova_compute[260603]: 2025-10-02 09:00:03.055 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:00:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s wr, 2 op/s
Oct  2 05:00:05 np0005465604 nova_compute[260603]: 2025-10-02 09:00:05.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:05 np0005465604 nova_compute[260603]: 2025-10-02 09:00:05.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct  2 05:00:06 np0005465604 podman[401136]: 2025-10-02 09:00:06.005778663 +0000 UTC m=+0.075777543 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 05:00:06 np0005465604 podman[401137]: 2025-10-02 09:00:06.043293332 +0000 UTC m=+0.098978689 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 05:00:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Oct  2 05:00:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Oct  2 05:00:10 np0005465604 nova_compute[260603]: 2025-10-02 09:00:10.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:10 np0005465604 nova_compute[260603]: 2025-10-02 09:00:10.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s wr, 0 op/s
Oct  2 05:00:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Oct  2 05:00:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:00:14Z|01404|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Oct  2 05:00:15 np0005465604 nova_compute[260603]: 2025-10-02 09:00:15.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:15 np0005465604 nova_compute[260603]: 2025-10-02 09:00:15.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct  2 05:00:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s wr, 1 op/s
Oct  2 05:00:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:18 np0005465604 nova_compute[260603]: 2025-10-02 09:00:18.868 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:18 np0005465604 nova_compute[260603]: 2025-10-02 09:00:18.869 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:18 np0005465604 nova_compute[260603]: 2025-10-02 09:00:18.869 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:18 np0005465604 nova_compute[260603]: 2025-10-02 09:00:18.869 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:18 np0005465604 nova_compute[260603]: 2025-10-02 09:00:18.870 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:18 np0005465604 nova_compute[260603]: 2025-10-02 09:00:18.871 2 INFO nova.compute.manager [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Terminating instance#033[00m
Oct  2 05:00:18 np0005465604 nova_compute[260603]: 2025-10-02 09:00:18.872 2 DEBUG nova.compute.manager [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:00:18 np0005465604 kernel: tap48cfc997-bd (unregistering): left promiscuous mode
Oct  2 05:00:18 np0005465604 NetworkManager[45129]: <info>  [1759395618.9215] device (tap48cfc997-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:00:18 np0005465604 ovn_controller[152344]: 2025-10-02T09:00:18Z|01405|binding|INFO|Releasing lport 48cfc997-bd30-40b4-a387-f40a75731793 from this chassis (sb_readonly=0)
Oct  2 05:00:18 np0005465604 ovn_controller[152344]: 2025-10-02T09:00:18Z|01406|binding|INFO|Setting lport 48cfc997-bd30-40b4-a387-f40a75731793 down in Southbound
Oct  2 05:00:18 np0005465604 ovn_controller[152344]: 2025-10-02T09:00:18Z|01407|binding|INFO|Removing iface tap48cfc997-bd ovn-installed in OVS
Oct  2 05:00:18 np0005465604 nova_compute[260603]: 2025-10-02 09:00:18.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:18 np0005465604 nova_compute[260603]: 2025-10-02 09:00:18.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.943 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:dd:9b 10.100.0.27'], port_security=['fa:16:3e:51:dd:9b 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'f141f189-a224-4ac7-88b5-c28f198944e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e3569eb1-74db-4df0-93c4-3e65c3d95428', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf29bdfe-eec4-4bc1-887c-39f99d9387e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=48cfc997-bd30-40b4-a387-f40a75731793) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:00:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.944 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 48cfc997-bd30-40b4-a387-f40a75731793 in datapath 5d0f6b84-ebf5-436d-83fe-b7739dc629d9 unbound from our chassis#033[00m
Oct  2 05:00:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.945 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d0f6b84-ebf5-436d-83fe-b7739dc629d9#033[00m
Oct  2 05:00:18 np0005465604 nova_compute[260603]: 2025-10-02 09:00:18.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.961 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1084c1c2-f3d1-419f-b72b-c30cc9ee5945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:18 np0005465604 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000082.scope: Deactivated successfully.
Oct  2 05:00:18 np0005465604 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000082.scope: Consumed 15.627s CPU time.
Oct  2 05:00:18 np0005465604 systemd-machined[214636]: Machine qemu-164-instance-00000082 terminated.
Oct  2 05:00:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.989 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f20e80-e6e2-4321-87af-a43e7f60f7b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:18.992 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d97fd89f-2a98-47f3-a8e7-f7afc479ee06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.019 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8df78092-76f2-426e-bcf1-cab1ee101a1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.040 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f952da-c781-4c5f-9448-d0a7a95e0a64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d0f6b84-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:31:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 7, 'rx_bytes': 1222, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 7, 'rx_bytes': 1222, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642259, 'reachable_time': 20336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 10, 'inoctets': 872, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 10, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 872, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 10, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401187, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.066 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[19bf9fe9-fa51-4f22-a0d4-04ed01686bd4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d0f6b84-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642270, 'tstamp': 642270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401188, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap5d0f6b84-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642273, 'tstamp': 642273}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401188, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.067 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d0f6b84-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.075 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d0f6b84-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:00:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.076 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:00:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.076 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d0f6b84-e0, col_values=(('external_ids', {'iface-id': 'f202c767-dd88-4dcf-bf75-a2c0dfdb6c1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:00:19 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:19.077 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.114 2 INFO nova.virt.libvirt.driver [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Instance destroyed successfully.#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.114 2 DEBUG nova.objects.instance [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid f141f189-a224-4ac7-88b5-c28f198944e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.132 2 DEBUG nova.virt.libvirt.vif [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:59:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-300987939',display_name='tempest-TestNetworkBasicOps-server-300987939',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-300987939',id=130,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCJjWfAj56mQeyeIvU6XGvvjANhAOW4ymkjaU7nDK7hdgdQPn23X31SFk66pyTpQCxaiWSbcIfBie1wBVKGMKO2QIcNLprOGSD1fN9DoZ9nw1hLigEh1SZTI3GQ/zevNFg==',key_name='tempest-TestNetworkBasicOps-1976267539',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:59:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-3sfliqo6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:59:24Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=f141f189-a224-4ac7-88b5-c28f198944e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.132 2 DEBUG nova.network.os_vif_util [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "48cfc997-bd30-40b4-a387-f40a75731793", "address": "fa:16:3e:51:dd:9b", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48cfc997-bd", "ovs_interfaceid": "48cfc997-bd30-40b4-a387-f40a75731793", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.133 2 DEBUG nova.network.os_vif_util [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.134 2 DEBUG os_vif [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.138242) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619138311, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2051, "num_deletes": 251, "total_data_size": 3390672, "memory_usage": 3460240, "flush_reason": "Manual Compaction"}
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48cfc997-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.142 2 INFO os_vif [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:dd:9b,bridge_name='br-int',has_traffic_filtering=True,id=48cfc997-bd30-40b4-a387-f40a75731793,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48cfc997-bd')#033[00m
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619156356, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3335379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49775, "largest_seqno": 51825, "table_properties": {"data_size": 3325992, "index_size": 5945, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18802, "raw_average_key_size": 20, "raw_value_size": 3307475, "raw_average_value_size": 3541, "num_data_blocks": 263, "num_entries": 934, "num_filter_entries": 934, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395390, "oldest_key_time": 1759395390, "file_creation_time": 1759395619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 18148 microseconds, and 9269 cpu microseconds.
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.156399) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3335379 bytes OK
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.156418) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.157939) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.157957) EVENT_LOG_v1 {"time_micros": 1759395619157950, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.157974) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3382088, prev total WAL file size 3382088, number of live WAL files 2.
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.158925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3257KB)], [116(8268KB)]
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619158958, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11802611, "oldest_snapshot_seqno": -1}
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7366 keys, 10085821 bytes, temperature: kUnknown
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619204165, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10085821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10036578, "index_size": 29739, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 190632, "raw_average_key_size": 25, "raw_value_size": 9904985, "raw_average_value_size": 1344, "num_data_blocks": 1165, "num_entries": 7366, "num_filter_entries": 7366, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.204443) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10085821 bytes
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.205627) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 260.6 rd, 222.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.1 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 7880, records dropped: 514 output_compression: NoCompression
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.205649) EVENT_LOG_v1 {"time_micros": 1759395619205639, "job": 70, "event": "compaction_finished", "compaction_time_micros": 45290, "compaction_time_cpu_micros": 22873, "output_level": 6, "num_output_files": 1, "total_output_size": 10085821, "num_input_records": 7880, "num_output_records": 7366, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619206744, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395619209035, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.158877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.209159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.209164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.209165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.209167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:19 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:19.209168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.553 2 INFO nova.virt.libvirt.driver [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Deleting instance files /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4_del#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.555 2 INFO nova.virt.libvirt.driver [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Deletion of /var/lib/nova/instances/f141f189-a224-4ac7-88b5-c28f198944e4_del complete#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.622 2 INFO nova.compute.manager [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.623 2 DEBUG oslo.service.loopingcall [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.623 2 DEBUG nova.compute.manager [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.623 2 DEBUG nova.network.neutron [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:00:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.957 2 DEBUG nova.compute.manager [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-unplugged-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.957 2 DEBUG oslo_concurrency.lockutils [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.957 2 DEBUG oslo_concurrency.lockutils [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.958 2 DEBUG oslo_concurrency.lockutils [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.958 2 DEBUG nova.compute.manager [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] No waiting events found dispatching network-vif-unplugged-48cfc997-bd30-40b4-a387-f40a75731793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.958 2 DEBUG nova.compute.manager [req-f80a4b11-3945-4796-beb0-6c77794f8492 req-bd782c2a-ee07-4e3f-b3ab-0747522751c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-unplugged-48cfc997-bd30-40b4-a387-f40a75731793 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.963 2 WARNING nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.963 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.964 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid f141f189-a224-4ac7-88b5-c28f198944e4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.965 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.965 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.965 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:19 np0005465604 nova_compute[260603]: 2025-10-02 09:00:19.998 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:20 np0005465604 nova_compute[260603]: 2025-10-02 09:00:20.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:21 np0005465604 nova_compute[260603]: 2025-10-02 09:00:21.690 2 DEBUG nova.network.neutron [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:00:21 np0005465604 nova_compute[260603]: 2025-10-02 09:00:21.728 2 INFO nova.compute.manager [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Took 2.10 seconds to deallocate network for instance.#033[00m
Oct  2 05:00:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:00:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:00:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:00:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:00:21 np0005465604 nova_compute[260603]: 2025-10-02 09:00:21.773 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:21 np0005465604 nova_compute[260603]: 2025-10-02 09:00:21.774 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 200 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Oct  2 05:00:21 np0005465604 nova_compute[260603]: 2025-10-02 09:00:21.868 2 DEBUG oslo_concurrency.processutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.079 2 DEBUG nova.compute.manager [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.080 2 DEBUG oslo_concurrency.lockutils [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.081 2 DEBUG oslo_concurrency.lockutils [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.081 2 DEBUG oslo_concurrency.lockutils [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.081 2 DEBUG nova.compute.manager [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] No waiting events found dispatching network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.082 2 WARNING nova.compute.manager [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received unexpected event network-vif-plugged-48cfc997-bd30-40b4-a387-f40a75731793 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.082 2 DEBUG nova.compute.manager [req-6e0ab980-eac6-4582-b38c-0bafba4ac654 req-acd6f157-77db-4be9-9bde-4f8cc0332ad2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Received event network-vif-deleted-48cfc997-bd30-40b4-a387-f40a75731793 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2717669239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2717669239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454614570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.331 2 DEBUG oslo_concurrency.processutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.340 2 DEBUG nova.compute.provider_tree [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.361 2 DEBUG nova.scheduler.client.report [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.406 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.440 2 INFO nova.scheduler.client.report [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance f141f189-a224-4ac7-88b5-c28f198944e4#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.519 2 DEBUG oslo_concurrency.lockutils [None req-74ec843a-8a2b-4b4e-bfac-35630165c0d4 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.520 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.521 2 INFO nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Oct  2 05:00:22 np0005465604 nova_compute[260603]: 2025-10-02 09:00:22.521 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "f141f189-a224-4ac7-88b5-c28f198944e4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:00:22 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 957ad525-df82-4530-ab4f-ba561d5d9e45 does not exist
Oct  2 05:00:22 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f612839f-e552-447d-af1d-4612eb0e1e97 does not exist
Oct  2 05:00:22 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0a2c0a03-d564-4049-b5b4-add4f911d7fc does not exist
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:00:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:00:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:23 np0005465604 podman[401637]: 2025-10-02 09:00:23.333472528 +0000 UTC m=+0.047693043 container create c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 05:00:23 np0005465604 systemd[1]: Started libpod-conmon-c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372.scope.
Oct  2 05:00:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:00:23 np0005465604 podman[401637]: 2025-10-02 09:00:23.306687785 +0000 UTC m=+0.020908310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:00:23 np0005465604 podman[401637]: 2025-10-02 09:00:23.418293263 +0000 UTC m=+0.132513758 container init c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:00:23 np0005465604 podman[401637]: 2025-10-02 09:00:23.433063312 +0000 UTC m=+0.147283817 container start c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:00:23 np0005465604 podman[401637]: 2025-10-02 09:00:23.436740217 +0000 UTC m=+0.150960712 container attach c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 05:00:23 np0005465604 blissful_cray[401654]: 167 167
Oct  2 05:00:23 np0005465604 systemd[1]: libpod-c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372.scope: Deactivated successfully.
Oct  2 05:00:23 np0005465604 conmon[401654]: conmon c61008c9e6ff5a7c935b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372.scope/container/memory.events
Oct  2 05:00:23 np0005465604 podman[401637]: 2025-10-02 09:00:23.439981798 +0000 UTC m=+0.154202273 container died c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:00:23 np0005465604 systemd[1]: var-lib-containers-storage-overlay-050580085cd34342ca23409f75bd5880f758e0624022ab8d48111ae4b4de85e3-merged.mount: Deactivated successfully.
Oct  2 05:00:23 np0005465604 podman[401637]: 2025-10-02 09:00:23.475548663 +0000 UTC m=+0.189769138 container remove c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:00:23 np0005465604 systemd[1]: libpod-conmon-c61008c9e6ff5a7c935b95ea83c273a38fc3fccf6951bb75a993238a57e13372.scope: Deactivated successfully.
Oct  2 05:00:23 np0005465604 podman[401678]: 2025-10-02 09:00:23.708051188 +0000 UTC m=+0.039524899 container create 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:00:23 np0005465604 systemd[1]: Started libpod-conmon-16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84.scope.
Oct  2 05:00:23 np0005465604 podman[401678]: 2025-10-02 09:00:23.690428321 +0000 UTC m=+0.021902042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:00:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:00:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 121 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct  2 05:00:23 np0005465604 podman[401678]: 2025-10-02 09:00:23.808705236 +0000 UTC m=+0.140178957 container init 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 05:00:23 np0005465604 podman[401678]: 2025-10-02 09:00:23.818843972 +0000 UTC m=+0.150317683 container start 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 05:00:23 np0005465604 podman[401678]: 2025-10-02 09:00:23.822281798 +0000 UTC m=+0.153755539 container attach 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.122 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-4e4cec89-b01e-4202-bc3e-a65ce8864017" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.123 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-4e4cec89-b01e-4202-bc3e-a65ce8864017" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.140 2 DEBUG nova.objects.instance [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'flavor' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.166 2 DEBUG nova.virt.libvirt.vif [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.167 2 DEBUG nova.network.os_vif_util [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.168 2 DEBUG nova.network.os_vif_util [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.173 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.178 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.182 2 DEBUG nova.virt.libvirt.driver [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Attempting to detach device tap4e4cec89-b0 from instance b4eacfa3-8b31-492a-b3c5-829a890a4aae from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.182 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] detach device xml: <interface type="ethernet">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:45:88:e1"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <target dev="tap4e4cec89-b0"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: </interface>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.193 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.197 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface>not found in domain: <domain type='kvm' id='163'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <name>instance-00000081</name>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <uuid>b4eacfa3-8b31-492a-b3c5-829a890a4aae</uuid>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:59:03</nova:creationTime>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:port uuid="4e4cec89-b01e-4202-bc3e-a65ce8864017">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <resource>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </resource>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='serial'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='uuid'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk' index='2'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config' index='1'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:42:41:53'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target dev='tap5ea70a9c-82'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:45:88:e1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target dev='tap4e4cec89-b0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='net1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      </target>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </console>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c363,c569</label>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c363,c569</imagelabel>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.198 2 INFO nova.virt.libvirt.driver [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully detached device tap4e4cec89-b0 from instance b4eacfa3-8b31-492a-b3c5-829a890a4aae from the persistent domain config.
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.198 2 DEBUG nova.virt.libvirt.driver [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] (1/8): Attempting to detach device tap4e4cec89-b0 with device alias net1 from instance b4eacfa3-8b31-492a-b3c5-829a890a4aae from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.199 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] detach device xml: <interface type="ethernet">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <mac address="fa:16:3e:45:88:e1"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <model type="virtio"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <mtu size="1442"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <target dev="tap4e4cec89-b0"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: </interface>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  2 05:00:24 np0005465604 kernel: tap4e4cec89-b0 (unregistering): left promiscuous mode
Oct  2 05:00:24 np0005465604 NetworkManager[45129]: <info>  [1759395624.3202] device (tap4e4cec89-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:00:24 np0005465604 ovn_controller[152344]: 2025-10-02T09:00:24Z|01408|binding|INFO|Releasing lport 4e4cec89-b01e-4202-bc3e-a65ce8864017 from this chassis (sb_readonly=0)
Oct  2 05:00:24 np0005465604 ovn_controller[152344]: 2025-10-02T09:00:24Z|01409|binding|INFO|Setting lport 4e4cec89-b01e-4202-bc3e-a65ce8864017 down in Southbound
Oct  2 05:00:24 np0005465604 ovn_controller[152344]: 2025-10-02T09:00:24Z|01410|binding|INFO|Removing iface tap4e4cec89-b0 ovn-installed in OVS
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.333 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:88:e1 10.100.0.28', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf29bdfe-eec4-4bc1-887c-39f99d9387e9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=4e4cec89-b01e-4202-bc3e-a65ce8864017) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.336 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 4e4cec89-b01e-4202-bc3e-a65ce8864017 in datapath 5d0f6b84-ebf5-436d-83fe-b7739dc629d9 unbound from our chassis
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.338 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d0f6b84-ebf5-436d-83fe-b7739dc629d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.340 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a31415e0-f8f5-437b-a354-3f78f93d7c61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.342 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9 namespace which is not needed anymore
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.363 2 DEBUG nova.virt.libvirt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Received event <DeviceRemovedEvent: 1759395624.360665, b4eacfa3-8b31-492a-b3c5-829a890a4aae => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.363 2 DEBUG nova.virt.libvirt.driver [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Start waiting for the detach event from libvirt for device tap4e4cec89-b0 with device alias net1 for instance b4eacfa3-8b31-492a-b3c5-829a890a4aae _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.364 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.376 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface>not found in domain: <domain type='kvm' id='163'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <name>instance-00000081</name>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <uuid>b4eacfa3-8b31-492a-b3c5-829a890a4aae</uuid>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 08:59:03</nova:creationTime>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:port uuid="4e4cec89-b01e-4202-bc3e-a65ce8864017">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <resource>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </resource>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='serial'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='uuid'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk' index='2'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config' index='1'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:42:41:53'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target dev='tap5ea70a9c-82'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      </target>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </console>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c363,c569</label>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c363,c569</imagelabel>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.376 2 INFO nova.virt.libvirt.driver [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully detached device tap4e4cec89-b0 from instance b4eacfa3-8b31-492a-b3c5-829a890a4aae from the live domain config.#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.378 2 DEBUG nova.virt.libvirt.vif [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.378 2 DEBUG nova.network.os_vif_util [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.379 2 DEBUG nova.network.os_vif_util [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.380 2 DEBUG os_vif [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e4cec89-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.394 2 INFO os_vif [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0')#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.395 2 DEBUG nova.virt.libvirt.guest [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 09:00:24</nova:creationTime>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct  2 05:00:24 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 05:00:24 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 05:00:24 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 05:00:24 np0005465604 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [NOTICE]   (399684) : haproxy version is 2.8.14-c23fe91
Oct  2 05:00:24 np0005465604 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [NOTICE]   (399684) : path to executable is /usr/sbin/haproxy
Oct  2 05:00:24 np0005465604 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [WARNING]  (399684) : Exiting Master process...
Oct  2 05:00:24 np0005465604 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [WARNING]  (399684) : Exiting Master process...
Oct  2 05:00:24 np0005465604 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [ALERT]    (399684) : Current worker (399688) exited with code 143 (Terminated)
Oct  2 05:00:24 np0005465604 neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9[399649]: [WARNING]  (399684) : All workers exited. Exiting... (0)
Oct  2 05:00:24 np0005465604 systemd[1]: libpod-9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb.scope: Deactivated successfully.
Oct  2 05:00:24 np0005465604 conmon[399649]: conmon 9849527b8e31c0013aa9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb.scope/container/memory.events
Oct  2 05:00:24 np0005465604 podman[401722]: 2025-10-02 09:00:24.555291757 +0000 UTC m=+0.057160817 container died 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 05:00:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb-userdata-shm.mount: Deactivated successfully.
Oct  2 05:00:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-168d60d32efd9370edb94bf9730fe02355e22ea7cfe7215ac4f9b026d57c9878-merged.mount: Deactivated successfully.
Oct  2 05:00:24 np0005465604 podman[401722]: 2025-10-02 09:00:24.620299497 +0000 UTC m=+0.122168477 container cleanup 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:00:24 np0005465604 systemd[1]: libpod-conmon-9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb.scope: Deactivated successfully.
Oct  2 05:00:24 np0005465604 podman[401759]: 2025-10-02 09:00:24.691373595 +0000 UTC m=+0.044536864 container remove 9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.700 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78ed4d84-d9d0-4682-b285-6b8814062ecd]: (4, ('Thu Oct  2 09:00:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9 (9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb)\n9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb\nThu Oct  2 09:00:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9 (9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb)\n9849527b8e31c0013aa92e185f0da80557b68b613f4e0e358fd0b57bb6b7d7cb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.702 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[39ddc4ca-be14-44c5-a9cc-e9de5b43a030]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.703 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d0f6b84-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:24 np0005465604 kernel: tap5d0f6b84-e0: left promiscuous mode
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.717 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3d777e9b-2a8d-4781-9191-5076df536782]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.750 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d32944da-ba8b-4545-bde9-f2b6b1174ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.751 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba0caf1-03dc-40d2-8f8d-1b218f63b95b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.776 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb38a1f-21cb-4acc-96f5-2e11a9382526]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642252, 'reachable_time': 16942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401781, 'error': None, 'target': 'ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.781 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d0f6b84-ebf5-436d-83fe-b7739dc629d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:00:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:24.782 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[96d20189-6b67-49ab-99db-a598f5e5cc8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:24 np0005465604 systemd[1]: run-netns-ovnmeta\x2d5d0f6b84\x2debf5\x2d436d\x2d83fe\x2db7739dc629d9.mount: Deactivated successfully.
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.798 2 DEBUG nova.compute.manager [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-unplugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.799 2 DEBUG oslo_concurrency.lockutils [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.799 2 DEBUG oslo_concurrency.lockutils [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.800 2 DEBUG oslo_concurrency.lockutils [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.800 2 DEBUG nova.compute.manager [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-unplugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:00:24 np0005465604 nova_compute[260603]: 2025-10-02 09:00:24.800 2 WARNING nova.compute.manager [req-e02ddb9b-3a1b-43a8-bd05-13701fe29551 req-bc6c3a33-d655-4da6-a7b7-1c38adaac74a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-unplugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:00:24 np0005465604 quizzical_cartwright[401694]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:00:24 np0005465604 quizzical_cartwright[401694]: --> relative data size: 1.0
Oct  2 05:00:24 np0005465604 quizzical_cartwright[401694]: --> All data devices are unavailable
Oct  2 05:00:24 np0005465604 systemd[1]: libpod-16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84.scope: Deactivated successfully.
Oct  2 05:00:24 np0005465604 systemd[1]: libpod-16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84.scope: Consumed 1.038s CPU time.
Oct  2 05:00:24 np0005465604 podman[401678]: 2025-10-02 09:00:24.981067939 +0000 UTC m=+1.312541690 container died 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 05:00:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-628fc67570b9ea406254d43c126ef6728d15d420492e71082e230094c00c8ad0-merged.mount: Deactivated successfully.
Oct  2 05:00:25 np0005465604 podman[401678]: 2025-10-02 09:00:25.045029666 +0000 UTC m=+1.376503367 container remove 16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 05:00:25 np0005465604 systemd[1]: libpod-conmon-16a60aafc777584ff388f90c9004a97bff0fd4ec7e199408130c9e8248045a84.scope: Deactivated successfully.
Oct  2 05:00:25 np0005465604 nova_compute[260603]: 2025-10-02 09:00:25.094 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:00:25 np0005465604 nova_compute[260603]: 2025-10-02 09:00:25.095 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:00:25 np0005465604 nova_compute[260603]: 2025-10-02 09:00:25.095 2 DEBUG nova.network.neutron [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:00:25 np0005465604 nova_compute[260603]: 2025-10-02 09:00:25.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:25 np0005465604 podman[401947]: 2025-10-02 09:00:25.755200886 +0000 UTC m=+0.052510314 container create 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:00:25 np0005465604 systemd[1]: Started libpod-conmon-528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb.scope.
Oct  2 05:00:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 121 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct  2 05:00:25 np0005465604 podman[401947]: 2025-10-02 09:00:25.730246679 +0000 UTC m=+0.027556127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:00:25 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:00:25 np0005465604 podman[401947]: 2025-10-02 09:00:25.857652079 +0000 UTC m=+0.154961577 container init 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct  2 05:00:25 np0005465604 podman[401947]: 2025-10-02 09:00:25.874214874 +0000 UTC m=+0.171524272 container start 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:00:25 np0005465604 podman[401947]: 2025-10-02 09:00:25.877462905 +0000 UTC m=+0.174772393 container attach 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 05:00:25 np0005465604 laughing_moore[401963]: 167 167
Oct  2 05:00:25 np0005465604 systemd[1]: libpod-528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb.scope: Deactivated successfully.
Oct  2 05:00:25 np0005465604 podman[401947]: 2025-10-02 09:00:25.881975555 +0000 UTC m=+0.179284963 container died 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:00:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9ca703e9529d711bdc86a830faa6f1f1a1fa44b256921dc0bed78563f2f4f10c-merged.mount: Deactivated successfully.
Oct  2 05:00:25 np0005465604 podman[401947]: 2025-10-02 09:00:25.938063477 +0000 UTC m=+0.235372915 container remove 528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 05:00:25 np0005465604 systemd[1]: libpod-conmon-528b8cc0ea0bfbf1e1e25bfc8e2dfd0818e6192815690c8ca09575b2bb3dfbeb.scope: Deactivated successfully.
Oct  2 05:00:26 np0005465604 podman[401987]: 2025-10-02 09:00:26.172172862 +0000 UTC m=+0.070873833 container create 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 05:00:26 np0005465604 systemd[1]: Started libpod-conmon-63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178.scope.
Oct  2 05:00:26 np0005465604 podman[401987]: 2025-10-02 09:00:26.147220567 +0000 UTC m=+0.045921628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:00:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:00:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab38271c20e3b812a759a83d9576d27be055902ccab734541d362ff3abeb19f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab38271c20e3b812a759a83d9576d27be055902ccab734541d362ff3abeb19f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab38271c20e3b812a759a83d9576d27be055902ccab734541d362ff3abeb19f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab38271c20e3b812a759a83d9576d27be055902ccab734541d362ff3abeb19f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:26 np0005465604 podman[401987]: 2025-10-02 09:00:26.303908187 +0000 UTC m=+0.202609228 container init 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:00:26 np0005465604 podman[401987]: 2025-10-02 09:00:26.315504097 +0000 UTC m=+0.214205088 container start 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:00:26 np0005465604 podman[401987]: 2025-10-02 09:00:26.318979455 +0000 UTC m=+0.217680456 container attach 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]: {
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:    "0": [
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:        {
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "devices": [
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "/dev/loop3"
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            ],
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_name": "ceph_lv0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_size": "21470642176",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "name": "ceph_lv0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "tags": {
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.cluster_name": "ceph",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.crush_device_class": "",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.encrypted": "0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.osd_id": "0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.type": "block",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.vdo": "0"
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            },
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "type": "block",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "vg_name": "ceph_vg0"
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:        }
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:    ],
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:    "1": [
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:        {
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "devices": [
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "/dev/loop4"
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            ],
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_name": "ceph_lv1",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_size": "21470642176",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "name": "ceph_lv1",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "tags": {
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.cluster_name": "ceph",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.crush_device_class": "",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.encrypted": "0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.osd_id": "1",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.type": "block",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.vdo": "0"
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            },
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "type": "block",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "vg_name": "ceph_vg1"
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:        }
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:    ],
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:    "2": [
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:        {
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "devices": [
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "/dev/loop5"
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            ],
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_name": "ceph_lv2",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_size": "21470642176",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "name": "ceph_lv2",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "tags": {
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.cluster_name": "ceph",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.crush_device_class": "",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.encrypted": "0",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.osd_id": "2",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.type": "block",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:                "ceph.vdo": "0"
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            },
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "type": "block",
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:            "vg_name": "ceph_vg2"
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:        }
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]:    ]
Oct  2 05:00:27 np0005465604 strange_chaplygin[402004]: }
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.054 2 DEBUG nova.compute.manager [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.055 2 DEBUG oslo_concurrency.lockutils [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.058 2 DEBUG oslo_concurrency.lockutils [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.058 2 DEBUG oslo_concurrency.lockutils [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.058 2 DEBUG nova.compute.manager [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.058 2 WARNING nova.compute.manager [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-plugged-4e4cec89-b01e-4202-bc3e-a65ce8864017 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.058 2 DEBUG nova.compute.manager [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-deleted-4e4cec89-b01e-4202-bc3e-a65ce8864017 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.059 2 INFO nova.compute.manager [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Neutron deleted interface 4e4cec89-b01e-4202-bc3e-a65ce8864017; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.059 2 DEBUG nova.network.neutron [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:00:27 np0005465604 systemd[1]: libpod-63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178.scope: Deactivated successfully.
Oct  2 05:00:27 np0005465604 podman[401987]: 2025-10-02 09:00:27.074428741 +0000 UTC m=+0.973129702 container died 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 05:00:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ab38271c20e3b812a759a83d9576d27be055902ccab734541d362ff3abeb19f4-merged.mount: Deactivated successfully.
Oct  2 05:00:27 np0005465604 podman[401987]: 2025-10-02 09:00:27.142112184 +0000 UTC m=+1.040813155 container remove 63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaplygin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:00:27 np0005465604 systemd[1]: libpod-conmon-63d4c4a58d81ce3ad94a134a4db9f222f6376035bb1819b9a5770b3889dbd178.scope: Deactivated successfully.
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.225 2 DEBUG nova.objects.instance [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'system_metadata' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.299 2 DEBUG nova.objects.instance [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lazy-loading 'flavor' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.348 2 DEBUG nova.virt.libvirt.vif [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.349 2 DEBUG nova.network.os_vif_util [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.350 2 DEBUG nova.network.os_vif_util [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.356 2 DEBUG nova.virt.libvirt.guest [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.362 2 DEBUG nova.virt.libvirt.guest [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface>not found in domain: <domain type='kvm' id='163'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <name>instance-00000081</name>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <uuid>b4eacfa3-8b31-492a-b3c5-829a890a4aae</uuid>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 09:00:24</nova:creationTime>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 05:00:27 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <resource>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </resource>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='serial'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='uuid'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk' index='2'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config' index='1'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:42:41:53'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target dev='tap5ea70a9c-82'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      </target>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </console>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c363,c569</label>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c363,c569</imagelabel>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 05:00:27 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:00:27 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.363 2 DEBUG nova.virt.libvirt.guest [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.373 2 DEBUG nova.virt.libvirt.guest [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:45:88:e1"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4e4cec89-b0"/></interface>not found in domain: <domain type='kvm' id='163'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <name>instance-00000081</name>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <uuid>b4eacfa3-8b31-492a-b3c5-829a890a4aae</uuid>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 09:00:24</nova:creationTime>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 05:00:27 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <memory unit='KiB'>131072</memory>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <vcpu placement='static'>1</vcpu>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <resource>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <partition>/machine</partition>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </resource>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <sysinfo type='smbios'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='manufacturer'>RDO</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='serial'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='uuid'>b4eacfa3-8b31-492a-b3c5-829a890a4aae</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <entry name='family'>Virtual Machine</entry>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <boot dev='hd'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <smbios mode='sysinfo'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <vmcoreinfo state='on'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <vendor>AMD</vendor>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='x2apic'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc-deadline'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='hypervisor'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='tsc_adjust'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='spec-ctrl'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='stibp'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='arch-capabilities'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='ssbd'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='cmp_legacy'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='overflow-recov'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='succor'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='ibrs'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='amd-ssbd'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='virt-ssbd'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='lbrv'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='tsc-scale'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='vmcb-clean'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='flushbyasid'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pause-filter'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='pfthreshold'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='rdctl-no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='mds-no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='pschange-mc-no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='gds-no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='rfds-no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='xsaves'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='svm'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='require' name='topoext'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='npt'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <feature policy='disable' name='nrip-save'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <clock offset='utc'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <timer name='hpet' present='no'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <on_poweroff>destroy</on_poweroff>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <on_reboot>restart</on_reboot>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <on_crash>destroy</on_crash>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <disk type='network' device='disk'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk' index='2'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target dev='vda' bus='virtio'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='virtio-disk0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <disk type='network' device='cdrom'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <auth username='openstack'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:        <secret type='ceph' uuid='a52e644f-f702-594c-a648-813e3e0df2b1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <source protocol='rbd' name='vms/b4eacfa3-8b31-492a-b3c5-829a890a4aae_disk.config' index='1'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:        <host name='192.168.122.100' port='6789'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target dev='sda' bus='sata'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <readonly/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='sata0-0-0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pcie.0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='1' port='0x10'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='2' port='0x11'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='3' port='0x12'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.3'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='4' port='0x13'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.4'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='5' port='0x14'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.5'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='6' port='0x15'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.6'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='7' port='0x16'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.7'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='8' port='0x17'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.8'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='9' port='0x18'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.9'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='10' port='0x19'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.10'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='11' port='0x1a'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.11'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='12' port='0x1b'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.12'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='13' port='0x1c'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.13'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='14' port='0x1d'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.14'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='15' port='0x1e'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.15'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='16' port='0x1f'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.16'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='17' port='0x20'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.17'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='18' port='0x21'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.18'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='19' port='0x22'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.19'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='20' port='0x23'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.20'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='21' port='0x24'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.21'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='22' port='0x25'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.22'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='23' port='0x26'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.23'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='24' port='0x27'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.24'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-root-port'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target chassis='25' port='0x28'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.25'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model name='pcie-pci-bridge'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='pci.26'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='usb'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <controller type='sata' index='0'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='ide'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </controller>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <interface type='ethernet'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <mac address='fa:16:3e:42:41:53'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target dev='tap5ea70a9c-82'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model type='virtio'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <mtu size='1442'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='net0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <serial type='pty'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target type='isa-serial' port='0'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:        <model name='isa-serial'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      </target>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <console type='pty' tty='/dev/pts/0'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <source path='/dev/pts/0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <log file='/var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae/console.log' append='off'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <target type='serial' port='0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='serial0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </console>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <input type='tablet' bus='usb'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='input0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='usb' bus='0' port='1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <input type='mouse' bus='ps2'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='input1'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <input type='keyboard' bus='ps2'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='input2'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </input>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <listen type='address' address='::0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </graphics>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <audio id='1' type='none'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='video0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <watchdog model='itco' action='reset'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='watchdog0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </watchdog>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <memballoon model='virtio'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <stats period='10'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='balloon0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <rng model='virtio'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <backend model='random'>/dev/urandom</backend>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <alias name='rng0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <label>system_u:system_r:svirt_t:s0:c363,c569</label>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c363,c569</imagelabel>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <label>+107:+107</label>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <imagelabel>+107:+107</imagelabel>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </seclabel>
Oct  2 05:00:27 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:00:27 np0005465604 nova_compute[260603]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.373 2 WARNING nova.virt.libvirt.driver [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Detaching interface fa:16:3e:45:88:e1 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap4e4cec89-b0' not found.#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.375 2 DEBUG nova.virt.libvirt.vif [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.376 2 DEBUG nova.network.os_vif_util [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converting VIF {"id": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "address": "fa:16:3e:45:88:e1", "network": {"id": "5d0f6b84-ebf5-436d-83fe-b7739dc629d9", "bridge": "br-int", "label": "tempest-network-smoke--122327414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e4cec89-b0", "ovs_interfaceid": "4e4cec89-b01e-4202-bc3e-a65ce8864017", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.377 2 DEBUG nova.network.os_vif_util [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.378 2 DEBUG os_vif [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e4cec89-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.385 2 INFO os_vif [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:88:e1,bridge_name='br-int',has_traffic_filtering=True,id=4e4cec89-b01e-4202-bc3e-a65ce8864017,network=Network(5d0f6b84-ebf5-436d-83fe-b7739dc629d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e4cec89-b0')#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.386 2 DEBUG nova.virt.libvirt.guest [req-f8762a72-8168-4958-8dea-ca3b069b5af5 req-832355a6-503c-46f8-8072-63524f04e6cb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:name>tempest-TestNetworkBasicOps-server-1449971772</nova:name>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:creationTime>2025-10-02 09:00:27</nova:creationTime>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:flavor name="m1.nano">
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:memory>128</nova:memory>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:disk>1</nova:disk>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:swap>0</nova:swap>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:vcpus>1</nova:vcpus>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </nova:flavor>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:owner>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </nova:owner>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  <nova:ports>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    <nova:port uuid="5ea70a9c-8299-4593-b2b2-5c3315870d73">
Oct  2 05:00:27 np0005465604 nova_compute[260603]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:    </nova:port>
Oct  2 05:00:27 np0005465604 nova_compute[260603]:  </nova:ports>
Oct  2 05:00:27 np0005465604 nova_compute[260603]: </nova:instance>
Oct  2 05:00:27 np0005465604 nova_compute[260603]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.656 2 INFO nova.network.neutron [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Port 4e4cec89-b01e-4202-bc3e-a65ce8864017 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.656 2 DEBUG nova.network.neutron [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.681 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:00:27 np0005465604 nova_compute[260603]: 2025-10-02 09:00:27.758 2 DEBUG oslo_concurrency.lockutils [None req-d3153a57-58c1-466a-ad5b-a3e9a163046a ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "interface-b4eacfa3-8b31-492a-b3c5-829a890a4aae-4e4cec89-b01e-4202-bc3e-a65ce8864017" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 121 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct  2 05:00:27 np0005465604 podman[402170]: 2025-10-02 09:00:27.931369911 +0000 UTC m=+0.053687099 container create dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 05:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:00:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:00:27 np0005465604 systemd[1]: Started libpod-conmon-dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a.scope.
Oct  2 05:00:27 np0005465604 podman[402170]: 2025-10-02 09:00:27.901830384 +0000 UTC m=+0.024147632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:00:27 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:00:28 np0005465604 ovn_controller[152344]: 2025-10-02T09:00:28Z|01411|binding|INFO|Releasing lport 34739963-aa72-473b-8b1d-5d0d09f0b1de from this chassis (sb_readonly=0)
Oct  2 05:00:28 np0005465604 podman[402170]: 2025-10-02 09:00:28.021209373 +0000 UTC m=+0.143526551 container init dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 05:00:28 np0005465604 podman[402170]: 2025-10-02 09:00:28.036022303 +0000 UTC m=+0.158339461 container start dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:00:28 np0005465604 podman[402170]: 2025-10-02 09:00:28.039874113 +0000 UTC m=+0.162191311 container attach dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:00:28 np0005465604 festive_brattain[402186]: 167 167
Oct  2 05:00:28 np0005465604 systemd[1]: libpod-dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a.scope: Deactivated successfully.
Oct  2 05:00:28 np0005465604 conmon[402186]: conmon dee73864900d5101de5d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a.scope/container/memory.events
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:00:28
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:00:28 np0005465604 podman[402170]: 2025-10-02 09:00:28.047605974 +0000 UTC m=+0.169923162 container died dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'volumes', '.rgw.root', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms']
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:00:28 np0005465604 nova_compute[260603]: 2025-10-02 09:00:28.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5bd4b2766eb78863dd305654839772bef4d759280a31b1332f89d46ed0b3b563-merged.mount: Deactivated successfully.
Oct  2 05:00:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:28 np0005465604 podman[402170]: 2025-10-02 09:00:28.114928996 +0000 UTC m=+0.237246154 container remove dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 05:00:28 np0005465604 systemd[1]: libpod-conmon-dee73864900d5101de5d5a9e14fd18bc205c373b927350f7db0b2f1e2023904a.scope: Deactivated successfully.
Oct  2 05:00:28 np0005465604 podman[402212]: 2025-10-02 09:00:28.305431466 +0000 UTC m=+0.039933663 container create 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  2 05:00:28 np0005465604 systemd[1]: Started libpod-conmon-830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4.scope.
Oct  2 05:00:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:00:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce96af2c51c21de8648950c3c312618d5e57369241f4d35f3929d633954306/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce96af2c51c21de8648950c3c312618d5e57369241f4d35f3929d633954306/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce96af2c51c21de8648950c3c312618d5e57369241f4d35f3929d633954306/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce96af2c51c21de8648950c3c312618d5e57369241f4d35f3929d633954306/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:00:28 np0005465604 podman[402212]: 2025-10-02 09:00:28.288230421 +0000 UTC m=+0.022732628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:00:28 np0005465604 podman[402212]: 2025-10-02 09:00:28.411598134 +0000 UTC m=+0.146100331 container init 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:00:28 np0005465604 podman[402212]: 2025-10-02 09:00:28.419550271 +0000 UTC m=+0.154052458 container start 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:00:28 np0005465604 podman[402212]: 2025-10-02 09:00:28.423631808 +0000 UTC m=+0.158134025 container attach 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:00:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.019 2 DEBUG nova.compute.manager [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.021 2 DEBUG nova.compute.manager [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing instance network info cache due to event network-changed-5ea70a9c-8299-4593-b2b2-5c3315870d73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.021 2 DEBUG oslo_concurrency.lockutils [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.022 2 DEBUG oslo_concurrency.lockutils [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.022 2 DEBUG nova.network.neutron [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Refreshing network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.299 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.300 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.300 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.301 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.301 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.304 2 INFO nova.compute.manager [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Terminating instance#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.306 2 DEBUG nova.compute.manager [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:00:29 np0005465604 kernel: tap5ea70a9c-82 (unregistering): left promiscuous mode
Oct  2 05:00:29 np0005465604 NetworkManager[45129]: <info>  [1759395629.3755] device (tap5ea70a9c-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:29 np0005465604 ovn_controller[152344]: 2025-10-02T09:00:29Z|01412|binding|INFO|Releasing lport 5ea70a9c-8299-4593-b2b2-5c3315870d73 from this chassis (sb_readonly=0)
Oct  2 05:00:29 np0005465604 ovn_controller[152344]: 2025-10-02T09:00:29Z|01413|binding|INFO|Setting lport 5ea70a9c-8299-4593-b2b2-5c3315870d73 down in Southbound
Oct  2 05:00:29 np0005465604 ovn_controller[152344]: 2025-10-02T09:00:29Z|01414|binding|INFO|Removing iface tap5ea70a9c-82 ovn-installed in OVS
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.432 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:41:53 10.100.0.12'], port_security=['fa:16:3e:42:41:53 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b4eacfa3-8b31-492a-b3c5-829a890a4aae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-531b0560-b279-49fe-a565-b902507e886d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41a1668d-b8be-4a47-8e43-ab11db6fabeb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1a7ccfd-5e72-4548-90da-40016d961198, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5ea70a9c-8299-4593-b2b2-5c3315870d73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.434 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5ea70a9c-8299-4593-b2b2-5c3315870d73 in datapath 531b0560-b279-49fe-a565-b902507e886d unbound from our chassis#033[00m
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.435 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 531b0560-b279-49fe-a565-b902507e886d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3298e7-4e43-4feb-ba09-ce38a5b35d2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.437 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-531b0560-b279-49fe-a565-b902507e886d namespace which is not needed anymore#033[00m
Oct  2 05:00:29 np0005465604 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000081.scope: Deactivated successfully.
Oct  2 05:00:29 np0005465604 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000081.scope: Consumed 18.246s CPU time.
Oct  2 05:00:29 np0005465604 systemd-machined[214636]: Machine qemu-163-instance-00000081 terminated.
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]: {
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "osd_id": 2,
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "type": "bluestore"
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:    },
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "osd_id": 1,
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "type": "bluestore"
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:    },
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "osd_id": 0,
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:        "type": "bluestore"
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]:    }
Oct  2 05:00:29 np0005465604 naughty_franklin[402229]: }
Oct  2 05:00:29 np0005465604 systemd[1]: libpod-830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4.scope: Deactivated successfully.
Oct  2 05:00:29 np0005465604 systemd[1]: libpod-830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4.scope: Consumed 1.097s CPU time.
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.550 2 INFO nova.virt.libvirt.driver [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Instance destroyed successfully.#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.551 2 DEBUG nova.objects.instance [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid b4eacfa3-8b31-492a-b3c5-829a890a4aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.581 2 DEBUG nova.virt.libvirt.vif [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T08:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1449971772',display_name='tempest-TestNetworkBasicOps-server-1449971772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1449971772',id=129,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLEDwYWjGLjzCnpGdjCiDGb4BhMkfSctBSp05Os75j+QqkDCA7DCGjUId9rj9ZbOBdRWfSegWfbBERKUoMgNp9TPgIkeea2IxYlIkGspXgk0R0VbovWgCgpmsyfTWHw3bA==',key_name='tempest-TestNetworkBasicOps-1897225661',keypairs=<?>,launch_index=0,launched_at=2025-10-02T08:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-s05ipuxq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T08:58:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=b4eacfa3-8b31-492a-b3c5-829a890a4aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.581 2 DEBUG nova.network.os_vif_util [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.582 2 DEBUG nova.network.os_vif_util [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.582 2 DEBUG os_vif [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.583 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ea70a9c-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.588 2 INFO os_vif [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:41:53,bridge_name='br-int',has_traffic_filtering=True,id=5ea70a9c-8299-4593-b2b2-5c3315870d73,network=Network(531b0560-b279-49fe-a565-b902507e886d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ea70a9c-82')#033[00m
Oct  2 05:00:29 np0005465604 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [NOTICE]   (399457) : haproxy version is 2.8.14-c23fe91
Oct  2 05:00:29 np0005465604 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [NOTICE]   (399457) : path to executable is /usr/sbin/haproxy
Oct  2 05:00:29 np0005465604 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [WARNING]  (399457) : Exiting Master process...
Oct  2 05:00:29 np0005465604 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [ALERT]    (399457) : Current worker (399459) exited with code 143 (Terminated)
Oct  2 05:00:29 np0005465604 neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d[399453]: [WARNING]  (399457) : All workers exited. Exiting... (0)
Oct  2 05:00:29 np0005465604 systemd[1]: libpod-42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd.scope: Deactivated successfully.
Oct  2 05:00:29 np0005465604 conmon[399453]: conmon 42856d274aa3e208f62e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd.scope/container/memory.events
Oct  2 05:00:29 np0005465604 podman[402290]: 2025-10-02 09:00:29.610335996 +0000 UTC m=+0.060695876 container died 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 05:00:29 np0005465604 podman[402291]: 2025-10-02 09:00:29.622572027 +0000 UTC m=+0.071318157 container died 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 05:00:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2789c6dcd510ee067d39b52a38d6115997f3323341899d6c11680c2dc54061e9-merged.mount: Deactivated successfully.
Oct  2 05:00:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd-userdata-shm.mount: Deactivated successfully.
Oct  2 05:00:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-51ce96af2c51c21de8648950c3c312618d5e57369241f4d35f3929d633954306-merged.mount: Deactivated successfully.
Oct  2 05:00:29 np0005465604 podman[402290]: 2025-10-02 09:00:29.670464995 +0000 UTC m=+0.120824865 container cleanup 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:00:29 np0005465604 systemd[1]: libpod-conmon-42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd.scope: Deactivated successfully.
Oct  2 05:00:29 np0005465604 podman[402291]: 2025-10-02 09:00:29.701045315 +0000 UTC m=+0.149791435 container remove 830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_franklin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:00:29 np0005465604 systemd[1]: libpod-conmon-830891b2d0bcb9ad0cd6f5a98820226d91cb01f66f2d2a9e1afdadc8d039dda4.scope: Deactivated successfully.
Oct  2 05:00:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:00:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:00:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:00:29 np0005465604 podman[402358]: 2025-10-02 09:00:29.771168694 +0000 UTC m=+0.061282745 container remove 42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:00:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:00:29 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 1984a701-eaa2-4397-90d6-351fba2e6e76 does not exist
Oct  2 05:00:29 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f14c438c-ea04-4619-bf18-525265cfdb4e does not exist
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.779 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bd75fef4-8176-4fee-9b90-1d53bda0a304]: (4, ('Thu Oct  2 09:00:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d (42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd)\n42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd\nThu Oct  2 09:00:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-531b0560-b279-49fe-a565-b902507e886d (42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd)\n42856d274aa3e208f62ec26ac0e74b0d32acd9fccc755feafc5ef9f0b3efc3dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.782 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[31f74bf6-780b-41a0-ae47-e01c827a86b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.783 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap531b0560-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:29 np0005465604 kernel: tap531b0560-b0: left promiscuous mode
Oct  2 05:00:29 np0005465604 nova_compute[260603]: 2025-10-02 09:00:29.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.805 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[337843cd-d2ec-481a-bfec-20b8e6e0a727]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.834 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[691ceab7-45e2-4b04-bbdf-2ef74ad7c281]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.836 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[95337864-72e5-45e7-9fdc-03f48e3ac8da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.858 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[298fb04e-df6c-4006-a26c-c02f1f71b720]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639368, 'reachable_time': 36024, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402396, 'error': None, 'target': 'ovnmeta-531b0560-b279-49fe-a565-b902507e886d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:29 np0005465604 systemd[1]: run-netns-ovnmeta\x2d531b0560\x2db279\x2d49fe\x2da565\x2db902507e886d.mount: Deactivated successfully.
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.864 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-531b0560-b279-49fe-a565-b902507e886d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:00:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:29.865 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4d04ee-8f69-4b8e-871a-d348746d805f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.004 2 DEBUG nova.compute.manager [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-unplugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.005 2 DEBUG oslo_concurrency.lockutils [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.005 2 DEBUG oslo_concurrency.lockutils [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.005 2 DEBUG oslo_concurrency.lockutils [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.006 2 DEBUG nova.compute.manager [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-unplugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.006 2 DEBUG nova.compute.manager [req-149e034a-7258-444f-83c6-3f7b8ba7ffba req-2b6ee9a7-1c74-4ca2-89e6-b7b861ebac71 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-unplugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.086 2 INFO nova.virt.libvirt.driver [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Deleting instance files /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae_del#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.087 2 INFO nova.virt.libvirt.driver [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Deletion of /var/lib/nova/instances/b4eacfa3-8b31-492a-b3c5-829a890a4aae_del complete#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.465 2 INFO nova.compute.manager [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.466 2 DEBUG oslo.service.loopingcall [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.466 2 DEBUG nova.compute.manager [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.466 2 DEBUG nova.network.neutron [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:00:30 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.873 2 DEBUG nova.network.neutron [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updated VIF entry in instance network info cache for port 5ea70a9c-8299-4593-b2b2-5c3315870d73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:00:30 np0005465604 nova_compute[260603]: 2025-10-02 09:00:30.874 2 DEBUG nova.network.neutron [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [{"id": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "address": "fa:16:3e:42:41:53", "network": {"id": "531b0560-b279-49fe-a565-b902507e886d", "bridge": "br-int", "label": "tempest-network-smoke--1604229664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ea70a9c-82", "ovs_interfaceid": "5ea70a9c-8299-4593-b2b2-5c3315870d73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.100 2 DEBUG oslo_concurrency.lockutils [req-b6f13097-90d3-475e-9377-7386cf78a1a2 req-e70cd049-d5cb-4a4b-b927-886bef07babb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-b4eacfa3-8b31-492a-b3c5-829a890a4aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.258 2 DEBUG nova.network.neutron [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.300 2 INFO nova.compute.manager [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Took 0.83 seconds to deallocate network for instance.#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.310 2 DEBUG nova.compute.manager [req-276629d8-4a55-4825-a47d-c6b6d9d4e91c req-4837ab2c-3b7f-415a-b5ed-565352f699ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-deleted-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.310 2 INFO nova.compute.manager [req-276629d8-4a55-4825-a47d-c6b6d9d4e91c req-4837ab2c-3b7f-415a-b5ed-565352f699ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Neutron deleted interface 5ea70a9c-8299-4593-b2b2-5c3315870d73; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.311 2 DEBUG nova.network.neutron [req-276629d8-4a55-4825-a47d-c6b6d9d4e91c req-4837ab2c-3b7f-415a-b5ed-565352f699ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.345 2 DEBUG nova.compute.manager [req-276629d8-4a55-4825-a47d-c6b6d9d4e91c req-4837ab2c-3b7f-415a-b5ed-565352f699ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Detach interface failed, port_id=5ea70a9c-8299-4593-b2b2-5c3315870d73, reason: Instance b4eacfa3-8b31-492a-b3c5-829a890a4aae could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.363 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.364 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.426 2 DEBUG oslo_concurrency.processutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:00:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 121 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct  2 05:00:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:00:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321827995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.893 2 DEBUG oslo_concurrency.processutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.900 2 DEBUG nova.compute.provider_tree [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.946 2 DEBUG nova.scheduler.client.report [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:00:31 np0005465604 nova_compute[260603]: 2025-10-02 09:00:31.993 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:32 np0005465604 podman[402448]: 2025-10-02 09:00:32.001415531 +0000 UTC m=+0.061229734 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 05:00:32 np0005465604 nova_compute[260603]: 2025-10-02 09:00:32.024 2 INFO nova.scheduler.client.report [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance b4eacfa3-8b31-492a-b3c5-829a890a4aae#033[00m
Oct  2 05:00:32 np0005465604 podman[402447]: 2025-10-02 09:00:32.02937886 +0000 UTC m=+0.095770597 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:00:32 np0005465604 nova_compute[260603]: 2025-10-02 09:00:32.132 2 DEBUG oslo_concurrency.lockutils [None req-b9acf4a9-8c9e-4ccb-93da-719f9fdbf818 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:32 np0005465604 nova_compute[260603]: 2025-10-02 09:00:32.146 2 DEBUG nova.compute.manager [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:00:32 np0005465604 nova_compute[260603]: 2025-10-02 09:00:32.146 2 DEBUG oslo_concurrency.lockutils [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:32 np0005465604 nova_compute[260603]: 2025-10-02 09:00:32.147 2 DEBUG oslo_concurrency.lockutils [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:32 np0005465604 nova_compute[260603]: 2025-10-02 09:00:32.147 2 DEBUG oslo_concurrency.lockutils [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "b4eacfa3-8b31-492a-b3c5-829a890a4aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:32 np0005465604 nova_compute[260603]: 2025-10-02 09:00:32.147 2 DEBUG nova.compute.manager [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] No waiting events found dispatching network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:00:32 np0005465604 nova_compute[260603]: 2025-10-02 09:00:32.147 2 WARNING nova.compute.manager [req-2f4b8cfa-874e-4e0c-bd04-666fdef0ba6c req-e128ff6f-e804-463b-aed3-95148b431459 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Received unexpected event network-vif-plugged-5ea70a9c-8299-4593-b2b2-5c3315870d73 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:00:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Oct  2 05:00:34 np0005465604 nova_compute[260603]: 2025-10-02 09:00:34.112 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395619.1108572, f141f189-a224-4ac7-88b5-c28f198944e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:00:34 np0005465604 nova_compute[260603]: 2025-10-02 09:00:34.113 2 INFO nova.compute.manager [-] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:00:34 np0005465604 nova_compute[260603]: 2025-10-02 09:00:34.142 2 DEBUG nova.compute.manager [None req-751b0ccc-3709-4626-8576-d57c4339be47 - - - - - -] [instance: f141f189-a224-4ac7-88b5-c28f198944e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:00:34 np0005465604 nova_compute[260603]: 2025-10-02 09:00:34.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:00:34 np0005465604 nova_compute[260603]: 2025-10-02 09:00:34.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:00:34 np0005465604 nova_compute[260603]: 2025-10-02 09:00:34.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:34.839 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:34.840 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:34.840 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:35 np0005465604 nova_compute[260603]: 2025-10-02 09:00:35.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct  2 05:00:37 np0005465604 podman[402493]: 2025-10-02 09:00:37.033489566 +0000 UTC m=+0.093449285 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:00:37 np0005465604 podman[402494]: 2025-10-02 09:00:37.038222963 +0000 UTC m=+0.101008740 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:00:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct  2 05:00:37 np0005465604 nova_compute[260603]: 2025-10-02 09:00:37.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:38 np0005465604 nova_compute[260603]: 2025-10-02 09:00:38.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:38 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:00:39 np0005465604 nova_compute[260603]: 2025-10-02 09:00:39.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct  2 05:00:40 np0005465604 nova_compute[260603]: 2025-10-02 09:00:40.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.115782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643115814, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 481, "num_deletes": 257, "total_data_size": 455397, "memory_usage": 465896, "flush_reason": "Manual Compaction"}
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643120161, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 440919, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51826, "largest_seqno": 52306, "table_properties": {"data_size": 438152, "index_size": 803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6586, "raw_average_key_size": 18, "raw_value_size": 432528, "raw_average_value_size": 1218, "num_data_blocks": 35, "num_entries": 355, "num_filter_entries": 355, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395620, "oldest_key_time": 1759395620, "file_creation_time": 1759395643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 4415 microseconds, and 1878 cpu microseconds.
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.120200) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 440919 bytes OK
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.120214) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.121417) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.121430) EVENT_LOG_v1 {"time_micros": 1759395643121425, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.121444) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 452493, prev total WAL file size 452493, number of live WAL files 2.
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.121922) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303033' seq:72057594037927935, type:22 .. '6C6F676D0032323536' seq:0, type:0; will stop at (end)
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(430KB)], [119(9849KB)]
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643122012, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10526740, "oldest_snapshot_seqno": -1}
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7195 keys, 10406469 bytes, temperature: kUnknown
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643188992, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10406469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10357309, "index_size": 30040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 188040, "raw_average_key_size": 26, "raw_value_size": 10227702, "raw_average_value_size": 1421, "num_data_blocks": 1175, "num_entries": 7195, "num_filter_entries": 7195, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395643, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.189195) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10406469 bytes
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.190476) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.0 rd, 155.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(47.5) write-amplify(23.6) OK, records in: 7721, records dropped: 526 output_compression: NoCompression
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.190490) EVENT_LOG_v1 {"time_micros": 1759395643190483, "job": 72, "event": "compaction_finished", "compaction_time_micros": 67030, "compaction_time_cpu_micros": 49369, "output_level": 6, "num_output_files": 1, "total_output_size": 10406469, "num_input_records": 7721, "num_output_records": 7195, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643190671, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395643192361, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.121815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.192445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.192451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.192453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.192455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:00:43.192456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:00:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:00:44 np0005465604 nova_compute[260603]: 2025-10-02 09:00:44.546 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395629.5459328, b4eacfa3-8b31-492a-b3c5-829a890a4aae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:00:44 np0005465604 nova_compute[260603]: 2025-10-02 09:00:44.547 2 INFO nova.compute.manager [-] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:00:44 np0005465604 nova_compute[260603]: 2025-10-02 09:00:44.565 2 DEBUG nova.compute.manager [None req-553c7103-6137-4268-9bd2-2b598cd38dda - - - - - -] [instance: b4eacfa3-8b31-492a-b3c5-829a890a4aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:00:44 np0005465604 nova_compute[260603]: 2025-10-02 09:00:44.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:44 np0005465604 nova_compute[260603]: 2025-10-02 09:00:44.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:44.835 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:00:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:44.837 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:00:45 np0005465604 nova_compute[260603]: 2025-10-02 09:00:45.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:00:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:00:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:48 np0005465604 nova_compute[260603]: 2025-10-02 09:00:48.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:00:49 np0005465604 nova_compute[260603]: 2025-10-02 09:00:49.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:00:50 np0005465604 nova_compute[260603]: 2025-10-02 09:00:50.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:50 np0005465604 nova_compute[260603]: 2025-10-02 09:00:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:00:50 np0005465604 nova_compute[260603]: 2025-10-02 09:00:50.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:00:50 np0005465604 nova_compute[260603]: 2025-10-02 09:00:50.538 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:00:51 np0005465604 nova_compute[260603]: 2025-10-02 09:00:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:00:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:00:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:53 np0005465604 nova_compute[260603]: 2025-10-02 09:00:53.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:00:53 np0005465604 nova_compute[260603]: 2025-10-02 09:00:53.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:00:53 np0005465604 nova_compute[260603]: 2025-10-02 09:00:53.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:53 np0005465604 nova_compute[260603]: 2025-10-02 09:00:53.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:53 np0005465604 nova_compute[260603]: 2025-10-02 09:00:53.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:53 np0005465604 nova_compute[260603]: 2025-10-02 09:00:53.554 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:00:53 np0005465604 nova_compute[260603]: 2025-10-02 09:00:53.555 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:00:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:00:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:00:53.839 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:00:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:00:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691722777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.011 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.229 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.233 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3678MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.233 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.234 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.337 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.337 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.354 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:00:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2928471322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.798 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.804 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.822 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.841 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:00:54 np0005465604 nova_compute[260603]: 2025-10-02 09:00:54.841 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:55 np0005465604 nova_compute[260603]: 2025-10-02 09:00:55.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:00:55 np0005465604 nova_compute[260603]: 2025-10-02 09:00:55.840 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:00:56 np0005465604 nova_compute[260603]: 2025-10-02 09:00:56.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:00:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:00:57 np0005465604 nova_compute[260603]: 2025-10-02 09:00:57.924 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:57 np0005465604 nova_compute[260603]: 2025-10-02 09:00:57.925 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:57 np0005465604 nova_compute[260603]: 2025-10-02 09:00:57.942 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:00:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:00:57 np0005465604 nova_compute[260603]: 2025-10-02 09:00:57.999 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:57.999 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.009 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.009 2 INFO nova.compute.claims [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.109 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:00:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:00:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:00:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1855295427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.569 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.576 2 DEBUG nova.compute.provider_tree [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.598 2 DEBUG nova.scheduler.client.report [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.625 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.627 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.672 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.673 2 DEBUG nova.network.neutron [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.695 2 INFO nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.716 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.818 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.820 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.821 2 INFO nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Creating image(s)#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.854 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.889 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.925 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:00:58 np0005465604 nova_compute[260603]: 2025-10-02 09:00:58.930 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.029 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.030 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.031 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.031 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.058 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.061 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 05300b1b-c030-499f-af29-cc94f4bf9e11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.136 2 DEBUG nova.policy [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.355 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 05300b1b-c030-499f-af29-cc94f4bf9e11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.435 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.531 2 DEBUG nova.objects.instance [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 05300b1b-c030-499f-af29-cc94f4bf9e11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.548 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.549 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Ensure instance console log exists: /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.550 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.551 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.551 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:00:59 np0005465604 nova_compute[260603]: 2025-10-02 09:00:59.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:00:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:01:00 np0005465604 nova_compute[260603]: 2025-10-02 09:01:00.211 2 DEBUG nova.network.neutron [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Successfully updated port: e14f3e85-84ff-49b3-8817-d926cb709f80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:01:00 np0005465604 nova_compute[260603]: 2025-10-02 09:01:00.229 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:01:00 np0005465604 nova_compute[260603]: 2025-10-02 09:01:00.230 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:01:00 np0005465604 nova_compute[260603]: 2025-10-02 09:01:00.230 2 DEBUG nova.network.neutron [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:01:00 np0005465604 nova_compute[260603]: 2025-10-02 09:01:00.315 2 DEBUG nova.compute.manager [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:01:00 np0005465604 nova_compute[260603]: 2025-10-02 09:01:00.315 2 DEBUG nova.compute.manager [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Refreshing instance network info cache due to event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:01:00 np0005465604 nova_compute[260603]: 2025-10-02 09:01:00.316 2 DEBUG oslo_concurrency.lockutils [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:01:00 np0005465604 nova_compute[260603]: 2025-10-02 09:01:00.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:00 np0005465604 nova_compute[260603]: 2025-10-02 09:01:00.667 2 DEBUG nova.network.neutron [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.665 2 DEBUG nova.network.neutron [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updating instance_info_cache with network_info: [{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.687 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.688 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Instance network_info: |[{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.688 2 DEBUG oslo_concurrency.lockutils [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.688 2 DEBUG nova.network.neutron [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Refreshing network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.691 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Start _get_guest_xml network_info=[{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.698 2 WARNING nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.705 2 DEBUG nova.virt.libvirt.host [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.706 2 DEBUG nova.virt.libvirt.host [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.716 2 DEBUG nova.virt.libvirt.host [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.717 2 DEBUG nova.virt.libvirt.host [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.717 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.718 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.719 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.719 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.720 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.720 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.721 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.721 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.722 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.722 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.723 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.724 2 DEBUG nova.virt.hardware [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:01:01 np0005465604 nova_compute[260603]: 2025-10-02 09:01:01.729 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:01:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 41 MiB data, 892 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:01:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:01:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3873591614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.161 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.195 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.203 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:01:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:01:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/321709715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.700 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.702 2 DEBUG nova.virt.libvirt.vif [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-755603083',display_name='tempest-TestNetworkBasicOps-server-755603083',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-755603083',id=131,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBmg0ihNhYo/3Tzh1Pt7lcbXy6uOqrL/08u+LngrhuZOWILyEHTHWbsg89FOrPiGPon4wJrkMN6ZCefE/Caz8hqQjrlNm3r99qN4W1mTg5pj+yc4yq/l4Zq52/JNOcbaw==',key_name='tempest-TestNetworkBasicOps-1951820191',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-uj66bhc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:00:58Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=05300b1b-c030-499f-af29-cc94f4bf9e11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.703 2 DEBUG nova.network.os_vif_util [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.705 2 DEBUG nova.network.os_vif_util [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.707 2 DEBUG nova.objects.instance [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 05300b1b-c030-499f-af29-cc94f4bf9e11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.725 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  <uuid>05300b1b-c030-499f-af29-cc94f4bf9e11</uuid>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  <name>instance-00000083</name>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-755603083</nova:name>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:01:01</nova:creationTime>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <nova:port uuid="e14f3e85-84ff-49b3-8817-d926cb709f80">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <entry name="serial">05300b1b-c030-499f-af29-cc94f4bf9e11</entry>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <entry name="uuid">05300b1b-c030-499f-af29-cc94f4bf9e11</entry>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/05300b1b-c030-499f-af29-cc94f4bf9e11_disk">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:56:61:55"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <target dev="tape14f3e85-84"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/console.log" append="off"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:01:02 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:01:02 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:01:02 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:01:02 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.727 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Preparing to wait for external event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.728 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.728 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.729 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.730 2 DEBUG nova.virt.libvirt.vif [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-755603083',display_name='tempest-TestNetworkBasicOps-server-755603083',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-755603083',id=131,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBmg0ihNhYo/3Tzh1Pt7lcbXy6uOqrL/08u+LngrhuZOWILyEHTHWbsg89FOrPiGPon4wJrkMN6ZCefE/Caz8hqQjrlNm3r99qN4W1mTg5pj+yc4yq/l4Zq52/JNOcbaw==',key_name='tempest-TestNetworkBasicOps-1951820191',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-uj66bhc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:00:58Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=05300b1b-c030-499f-af29-cc94f4bf9e11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.731 2 DEBUG nova.network.os_vif_util [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.732 2 DEBUG nova.network.os_vif_util [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.733 2 DEBUG os_vif [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape14f3e85-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape14f3e85-84, col_values=(('external_ids', {'iface-id': 'e14f3e85-84ff-49b3-8817-d926cb709f80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:61:55', 'vm-uuid': '05300b1b-c030-499f-af29-cc94f4bf9e11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:02 np0005465604 NetworkManager[45129]: <info>  [1759395662.7601] manager: (tape14f3e85-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.772 2 INFO os_vif [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84')#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.837 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.838 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.838 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:56:61:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.839 2 INFO nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Using config drive#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.871 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.958 2 DEBUG nova.network.neutron [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updated VIF entry in instance network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.959 2 DEBUG nova.network.neutron [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updating instance_info_cache with network_info: [{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:01:02 np0005465604 nova_compute[260603]: 2025-10-02 09:01:02.976 2 DEBUG oslo_concurrency.lockutils [req-3f956bf1-046e-43c8-90b2-9fedccb1274e req-ea49bba0-4118-4ea2-82d5-c78e6259721f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:01:03 np0005465604 podman[402858]: 2025-10-02 09:01:03.021162174 +0000 UTC m=+0.080416331 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:01:03 np0005465604 podman[402857]: 2025-10-02 09:01:03.064901583 +0000 UTC m=+0.124368696 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:01:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:03 np0005465604 nova_compute[260603]: 2025-10-02 09:01:03.233 2 INFO nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Creating config drive at /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config#033[00m
Oct  2 05:01:03 np0005465604 nova_compute[260603]: 2025-10-02 09:01:03.243 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1sb28og execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:01:03 np0005465604 nova_compute[260603]: 2025-10-02 09:01:03.417 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1sb28og" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:01:03 np0005465604 nova_compute[260603]: 2025-10-02 09:01:03.450 2 DEBUG nova.storage.rbd_utils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:01:03 np0005465604 nova_compute[260603]: 2025-10-02 09:01:03.455 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config 05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:01:03 np0005465604 nova_compute[260603]: 2025-10-02 09:01:03.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:01:03 np0005465604 nova_compute[260603]: 2025-10-02 09:01:03.653 2 DEBUG oslo_concurrency.processutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config 05300b1b-c030-499f-af29-cc94f4bf9e11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:01:03 np0005465604 nova_compute[260603]: 2025-10-02 09:01:03.654 2 INFO nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Deleting local config drive /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11/disk.config because it was imported into RBD.#033[00m
Oct  2 05:01:03 np0005465604 NetworkManager[45129]: <info>  [1759395663.7078] manager: (tape14f3e85-84): new Tun device (/org/freedesktop/NetworkManager/Devices/564)
Oct  2 05:01:03 np0005465604 kernel: tape14f3e85-84: entered promiscuous mode
Oct  2 05:01:03 np0005465604 nova_compute[260603]: 2025-10-02 09:01:03.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:03 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:03Z|01415|binding|INFO|Claiming lport e14f3e85-84ff-49b3-8817-d926cb709f80 for this chassis.
Oct  2 05:01:03 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:03Z|01416|binding|INFO|e14f3e85-84ff-49b3-8817-d926cb709f80: Claiming fa:16:3e:56:61:55 10.100.0.6
Oct  2 05:01:03 np0005465604 nova_compute[260603]: 2025-10-02 09:01:03.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.733 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:61:55 10.100.0.6'], port_security=['fa:16:3e:56:61:55 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '05300b1b-c030-499f-af29-cc94f4bf9e11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=393b5666-d876-4099-9af5-9247fa9f1ed9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e14f3e85-84ff-49b3-8817-d926cb709f80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.734 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e14f3e85-84ff-49b3-8817-d926cb709f80 in datapath 577c4506-1f5a-48bf-b267-75fd69fe5e1c bound to our chassis#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.735 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 577c4506-1f5a-48bf-b267-75fd69fe5e1c#033[00m
Oct  2 05:01:03 np0005465604 systemd-udevd[402953]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:01:03 np0005465604 NetworkManager[45129]: <info>  [1759395663.7461] device (tape14f3e85-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:01:03 np0005465604 NetworkManager[45129]: <info>  [1759395663.7470] device (tape14f3e85-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.751 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[702d1e6a-1ecc-41f4-b16f-958f9053a14a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.752 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap577c4506-11 in ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.754 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap577c4506-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.754 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd26fa3-c1dd-4473-9b06-bc445cad4bc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.755 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e81f1a8e-c88d-4c16-b5cd-8f532e203858]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 systemd-machined[214636]: New machine qemu-165-instance-00000083.
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.773 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[24b11793-7f0c-493c-bb83-2af2ab578c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 systemd[1]: Started Virtual Machine qemu-165-instance-00000083.
Oct  2 05:01:03 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:03Z|01417|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 ovn-installed in OVS
Oct  2 05:01:03 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:03Z|01418|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 up in Southbound
Oct  2 05:01:03 np0005465604 nova_compute[260603]: 2025-10-02 09:01:03.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.810 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c0251abe-6d82-4cc1-8ab9-23ac7f00e44c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.844 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[29ec9ba8-be1b-46ca-a33a-0319ed879302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 NetworkManager[45129]: <info>  [1759395663.8504] manager: (tap577c4506-10): new Veth device (/org/freedesktop/NetworkManager/Devices/565)
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.849 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d6e47a-5c06-4f03-8699-6a5b52e4f568]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 systemd-udevd[402957]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.890 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb19562f-aee2-4587-b755-493dd5ca9de3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.893 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[14c90486-e92b-4671-8a8f-88220a512962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 NetworkManager[45129]: <info>  [1759395663.9229] device (tap577c4506-10): carrier: link connected
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.927 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[77b974ec-e0b3-45f8-a3cf-8bc1823104af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.948 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[decf1ad3-ad27-411f-97a8-7d60432f9e7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap577c4506-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:0e:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654252, 'reachable_time': 15701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402987, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.961 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f28868-622d-4a51-98d7-cf32899fc4fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:e80'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654252, 'tstamp': 654252}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402988, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:03.981 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7757ec-aed7-4b51-83a1-eecb36550e11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap577c4506-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:0e:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654252, 'reachable_time': 15701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402989, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.005 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7cbc6a-040b-49b0-b575-78f08954abf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.059 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[adc0f373-ea87-4064-8694-0b62565df3b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.060 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap577c4506-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.061 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.061 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap577c4506-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:04 np0005465604 NetworkManager[45129]: <info>  [1759395664.0639] manager: (tap577c4506-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/566)
Oct  2 05:01:04 np0005465604 kernel: tap577c4506-10: entered promiscuous mode
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.066 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap577c4506-10, col_values=(('external_ids', {'iface-id': '8acd815f-105a-41fb-b7bf-32e4f419ccc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.068 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:01:04 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:04Z|01419|binding|INFO|Releasing lport 8acd815f-105a-41fb-b7bf-32e4f419ccc2 from this chassis (sb_readonly=0)
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.069 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62af19a0-8dca-4d4b-80c3-06c865310cdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.070 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-577c4506-1f5a-48bf-b267-75fd69fe5e1c
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 577c4506-1f5a-48bf-b267-75fd69fe5e1c
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:01:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:04.071 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'env', 'PROCESS_TAG=haproxy-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/577c4506-1f5a-48bf-b267-75fd69fe5e1c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:04 np0005465604 podman[403063]: 2025-10-02 09:01:04.416947169 +0000 UTC m=+0.051266634 container create d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:01:04 np0005465604 systemd[1]: Started libpod-conmon-d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52.scope.
Oct  2 05:01:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:01:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8827b62d4c301f68ef38c01deef5bc5779710bd69d47a8e0e2f3379393e462c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:04 np0005465604 podman[403063]: 2025-10-02 09:01:04.393714436 +0000 UTC m=+0.028033921 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:01:04 np0005465604 podman[403063]: 2025-10-02 09:01:04.486466399 +0000 UTC m=+0.120785884 container init d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 05:01:04 np0005465604 podman[403063]: 2025-10-02 09:01:04.491510056 +0000 UTC m=+0.125829521 container start d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 05:01:04 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [NOTICE]   (403082) : New worker (403084) forked
Oct  2 05:01:04 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [NOTICE]   (403082) : Loading success.
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.566 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395664.5659397, 05300b1b-c030-499f-af29-cc94f4bf9e11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.567 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] VM Started (Lifecycle Event)#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.589 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.592 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395664.5663855, 05300b1b-c030-499f-af29-cc94f4bf9e11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.593 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.610 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.612 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.633 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.693 2 DEBUG nova.compute.manager [req-f1f81a71-d5c0-4f3b-b390-cacacc39109a req-e759d033-b609-49fa-a5e6-eeb8eae716a9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.693 2 DEBUG oslo_concurrency.lockutils [req-f1f81a71-d5c0-4f3b-b390-cacacc39109a req-e759d033-b609-49fa-a5e6-eeb8eae716a9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.694 2 DEBUG oslo_concurrency.lockutils [req-f1f81a71-d5c0-4f3b-b390-cacacc39109a req-e759d033-b609-49fa-a5e6-eeb8eae716a9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.694 2 DEBUG oslo_concurrency.lockutils [req-f1f81a71-d5c0-4f3b-b390-cacacc39109a req-e759d033-b609-49fa-a5e6-eeb8eae716a9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.694 2 DEBUG nova.compute.manager [req-f1f81a71-d5c0-4f3b-b390-cacacc39109a req-e759d033-b609-49fa-a5e6-eeb8eae716a9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Processing event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.695 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.699 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395664.6988919, 05300b1b-c030-499f-af29-cc94f4bf9e11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.699 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.701 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.704 2 INFO nova.virt.libvirt.driver [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Instance spawned successfully.#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.704 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.717 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.724 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.728 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.728 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.729 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.729 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.729 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.730 2 DEBUG nova.virt.libvirt.driver [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.754 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.782 2 INFO nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Took 5.96 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.782 2 DEBUG nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.839 2 INFO nova.compute.manager [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Took 6.86 seconds to build instance.#033[00m
Oct  2 05:01:04 np0005465604 nova_compute[260603]: 2025-10-02 09:01:04.858 2 DEBUG oslo_concurrency.lockutils [None req-c28cda28-b7b3-43f2-b1c8-cba59c27f11c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:05 np0005465604 nova_compute[260603]: 2025-10-02 09:01:05.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:01:06 np0005465604 nova_compute[260603]: 2025-10-02 09:01:06.799 2 DEBUG nova.compute.manager [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:01:06 np0005465604 nova_compute[260603]: 2025-10-02 09:01:06.800 2 DEBUG oslo_concurrency.lockutils [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:06 np0005465604 nova_compute[260603]: 2025-10-02 09:01:06.800 2 DEBUG oslo_concurrency.lockutils [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:06 np0005465604 nova_compute[260603]: 2025-10-02 09:01:06.800 2 DEBUG oslo_concurrency.lockutils [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:06 np0005465604 nova_compute[260603]: 2025-10-02 09:01:06.801 2 DEBUG nova.compute.manager [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] No waiting events found dispatching network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:01:06 np0005465604 nova_compute[260603]: 2025-10-02 09:01:06.801 2 WARNING nova.compute.manager [req-904a0fa2-ef73-4465-9ee6-d99851b2b942 req-3e67d1d4-6467-4814-8dee-4cb90e91d809 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received unexpected event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:01:07 np0005465604 nova_compute[260603]: 2025-10-02 09:01:07.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 809 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct  2 05:01:08 np0005465604 podman[403093]: 2025-10-02 09:01:08.007005533 +0000 UTC m=+0.066036733 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:01:08 np0005465604 podman[403094]: 2025-10-02 09:01:08.018869781 +0000 UTC m=+0.069986686 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct  2 05:01:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 05:01:10 np0005465604 nova_compute[260603]: 2025-10-02 09:01:10.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:10 np0005465604 nova_compute[260603]: 2025-10-02 09:01:10.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:10 np0005465604 NetworkManager[45129]: <info>  [1759395670.8582] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Oct  2 05:01:10 np0005465604 NetworkManager[45129]: <info>  [1759395670.8605] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Oct  2 05:01:10 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:10Z|01420|binding|INFO|Releasing lport 8acd815f-105a-41fb-b7bf-32e4f419ccc2 from this chassis (sb_readonly=0)
Oct  2 05:01:10 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:10Z|01421|binding|INFO|Releasing lport 8acd815f-105a-41fb-b7bf-32e4f419ccc2 from this chassis (sb_readonly=0)
Oct  2 05:01:10 np0005465604 nova_compute[260603]: 2025-10-02 09:01:10.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:10 np0005465604 nova_compute[260603]: 2025-10-02 09:01:10.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.255 2 DEBUG nova.compute.manager [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.256 2 DEBUG nova.compute.manager [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Refreshing instance network info cache due to event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.256 2 DEBUG oslo_concurrency.lockutils [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.256 2 DEBUG oslo_concurrency.lockutils [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.256 2 DEBUG nova.network.neutron [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Refreshing network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.471 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.472 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.472 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.473 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.473 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.475 2 INFO nova.compute.manager [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Terminating instance#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.476 2 DEBUG nova.compute.manager [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:01:12 np0005465604 kernel: tape14f3e85-84 (unregistering): left promiscuous mode
Oct  2 05:01:12 np0005465604 NetworkManager[45129]: <info>  [1759395672.5269] device (tape14f3e85-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:12 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:12Z|01422|binding|INFO|Releasing lport e14f3e85-84ff-49b3-8817-d926cb709f80 from this chassis (sb_readonly=0)
Oct  2 05:01:12 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:12Z|01423|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 down in Southbound
Oct  2 05:01:12 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:12Z|01424|binding|INFO|Removing iface tape14f3e85-84 ovn-installed in OVS
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.546 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:61:55 10.100.0.6'], port_security=['fa:16:3e:56:61:55 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '05300b1b-c030-499f-af29-cc94f4bf9e11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=393b5666-d876-4099-9af5-9247fa9f1ed9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e14f3e85-84ff-49b3-8817-d926cb709f80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.548 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e14f3e85-84ff-49b3-8817-d926cb709f80 in datapath 577c4506-1f5a-48bf-b267-75fd69fe5e1c unbound from our chassis#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.550 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 577c4506-1f5a-48bf-b267-75fd69fe5e1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.551 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44fa93b0-9a6f-44b1-b847-20619f8a6bfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.552 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c namespace which is not needed anymore#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:12 np0005465604 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct  2 05:01:12 np0005465604 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000083.scope: Consumed 8.591s CPU time.
Oct  2 05:01:12 np0005465604 systemd-machined[214636]: Machine qemu-165-instance-00000083 terminated.
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.715 2 INFO nova.virt.libvirt.driver [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Instance destroyed successfully.#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.716 2 DEBUG nova.objects.instance [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 05300b1b-c030-499f-af29-cc94f4bf9e11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:01:12 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [NOTICE]   (403082) : haproxy version is 2.8.14-c23fe91
Oct  2 05:01:12 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [NOTICE]   (403082) : path to executable is /usr/sbin/haproxy
Oct  2 05:01:12 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [WARNING]  (403082) : Exiting Master process...
Oct  2 05:01:12 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [ALERT]    (403082) : Current worker (403084) exited with code 143 (Terminated)
Oct  2 05:01:12 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[403078]: [WARNING]  (403082) : All workers exited. Exiting... (0)
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.731 2 DEBUG nova.virt.libvirt.vif [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:00:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-755603083',display_name='tempest-TestNetworkBasicOps-server-755603083',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-755603083',id=131,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJBmg0ihNhYo/3Tzh1Pt7lcbXy6uOqrL/08u+LngrhuZOWILyEHTHWbsg89FOrPiGPon4wJrkMN6ZCefE/Caz8hqQjrlNm3r99qN4W1mTg5pj+yc4yq/l4Zq52/JNOcbaw==',key_name='tempest-TestNetworkBasicOps-1951820191',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:01:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-uj66bhc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:01:04Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=05300b1b-c030-499f-af29-cc94f4bf9e11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:01:12 np0005465604 systemd[1]: libpod-d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52.scope: Deactivated successfully.
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.732 2 DEBUG nova.network.os_vif_util [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.733 2 DEBUG nova.network.os_vif_util [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.734 2 DEBUG os_vif [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:01:12 np0005465604 podman[403158]: 2025-10-02 09:01:12.735114802 +0000 UTC m=+0.072741022 container died d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14f3e85-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.741 2 INFO os_vif [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84')#033[00m
Oct  2 05:01:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52-userdata-shm.mount: Deactivated successfully.
Oct  2 05:01:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8827b62d4c301f68ef38c01deef5bc5779710bd69d47a8e0e2f3379393e462c6-merged.mount: Deactivated successfully.
Oct  2 05:01:12 np0005465604 podman[403158]: 2025-10-02 09:01:12.779681078 +0000 UTC m=+0.117307308 container cleanup d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.793 2 DEBUG nova.compute.manager [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.794 2 DEBUG oslo_concurrency.lockutils [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.794 2 DEBUG oslo_concurrency.lockutils [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.797 2 DEBUG oslo_concurrency.lockutils [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.797 2 DEBUG nova.compute.manager [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] No waiting events found dispatching network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.797 2 DEBUG nova.compute.manager [req-9bd606f5-6b42-4b11-9425-0c6e659558c4 req-579957c4-ca28-4a5c-b422-8a3d61a65696 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:01:12 np0005465604 systemd[1]: libpod-conmon-d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52.scope: Deactivated successfully.
Oct  2 05:01:12 np0005465604 podman[403210]: 2025-10-02 09:01:12.856263477 +0000 UTC m=+0.049575551 container remove d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.861 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa3a54b-f084-44bf-ad02-ca86bb2b6604]: (4, ('Thu Oct  2 09:01:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c (d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52)\nd25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52\nThu Oct  2 09:01:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c (d25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52)\nd25ac23ca98290d3fda31766980454538cd647ae32d0b29f4b77af208a98ad52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.863 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[69d6cc48-0cb5-43a8-bc8f-fa51579ba5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.865 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap577c4506-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:12 np0005465604 kernel: tap577c4506-10: left promiscuous mode
Oct  2 05:01:12 np0005465604 nova_compute[260603]: 2025-10-02 09:01:12.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.883 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78aebcda-7969-4e6b-bfe2-d600c16d87f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.906 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[13c1dca8-951d-4020-8186-b90186fee4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.907 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83233edd-86a0-480c-a73d-6c295a9c2250]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.933 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[235a096c-dd2a-4655-9fb4-57d2f867b52d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654243, 'reachable_time': 38871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403231, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:12 np0005465604 systemd[1]: run-netns-ovnmeta\x2d577c4506\x2d1f5a\x2d48bf\x2db267\x2d75fd69fe5e1c.mount: Deactivated successfully.
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.936 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:01:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:12.936 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[2567aa57-3b2e-4dda-b02a-e23b27a5ee4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:13 np0005465604 nova_compute[260603]: 2025-10-02 09:01:13.095 2 INFO nova.virt.libvirt.driver [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Deleting instance files /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11_del#033[00m
Oct  2 05:01:13 np0005465604 nova_compute[260603]: 2025-10-02 09:01:13.096 2 INFO nova.virt.libvirt.driver [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Deletion of /var/lib/nova/instances/05300b1b-c030-499f-af29-cc94f4bf9e11_del complete#033[00m
Oct  2 05:01:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:13 np0005465604 nova_compute[260603]: 2025-10-02 09:01:13.173 2 INFO nova.compute.manager [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:01:13 np0005465604 nova_compute[260603]: 2025-10-02 09:01:13.173 2 DEBUG oslo.service.loopingcall [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:01:13 np0005465604 nova_compute[260603]: 2025-10-02 09:01:13.173 2 DEBUG nova.compute.manager [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:01:13 np0005465604 nova_compute[260603]: 2025-10-02 09:01:13.174 2 DEBUG nova.network.neutron [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:01:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 66 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct  2 05:01:14 np0005465604 nova_compute[260603]: 2025-10-02 09:01:14.583 2 DEBUG nova.network.neutron [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updated VIF entry in instance network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:01:14 np0005465604 nova_compute[260603]: 2025-10-02 09:01:14.583 2 DEBUG nova.network.neutron [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updating instance_info_cache with network_info: [{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:01:14 np0005465604 nova_compute[260603]: 2025-10-02 09:01:14.609 2 DEBUG oslo_concurrency.lockutils [req-c5df678f-efc0-416d-b2b1-2311a6dc98cc req-b0d98d94-cd8c-418b-ba92-bfc9d08ec7b6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-05300b1b-c030-499f-af29-cc94f4bf9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:01:14 np0005465604 nova_compute[260603]: 2025-10-02 09:01:14.928 2 DEBUG nova.compute.manager [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:01:14 np0005465604 nova_compute[260603]: 2025-10-02 09:01:14.929 2 DEBUG oslo_concurrency.lockutils [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:01:14 np0005465604 nova_compute[260603]: 2025-10-02 09:01:14.929 2 DEBUG oslo_concurrency.lockutils [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:01:14 np0005465604 nova_compute[260603]: 2025-10-02 09:01:14.930 2 DEBUG oslo_concurrency.lockutils [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:01:14 np0005465604 nova_compute[260603]: 2025-10-02 09:01:14.930 2 DEBUG nova.compute.manager [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] No waiting events found dispatching network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 05:01:14 np0005465604 nova_compute[260603]: 2025-10-02 09:01:14.931 2 WARNING nova.compute.manager [req-28984e71-e71e-4001-9bb7-cb0e7d618c80 req-232d0e06-6da6-4195-b15b-8394980963bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Received unexpected event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with vm_state active and task_state deleting.
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.111 2 DEBUG nova.network.neutron [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.134 2 INFO nova.compute.manager [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Took 1.96 seconds to deallocate network for instance.
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.209 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.211 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.282 2 DEBUG oslo_concurrency.processutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:01:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:01:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3785030230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.752 2 DEBUG oslo_concurrency.processutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.762 2 DEBUG nova.compute.provider_tree [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.788 2 DEBUG nova.scheduler.client.report [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.821 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:01:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 66 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.854 2 INFO nova.scheduler.client.report [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 05300b1b-c030-499f-af29-cc94f4bf9e11
Oct  2 05:01:15 np0005465604 nova_compute[260603]: 2025-10-02 09:01:15.949 2 DEBUG oslo_concurrency.lockutils [None req-d49f8626-e9b3-4ffb-ba5b-d2c2b54fb135 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "05300b1b-c030-499f-af29-cc94f4bf9e11" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:01:17 np0005465604 nova_compute[260603]: 2025-10-02 09:01:17.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:01:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 41 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct  2 05:01:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.5 KiB/s wr, 65 op/s
Oct  2 05:01:20 np0005465604 nova_compute[260603]: 2025-10-02 09:01:20.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:01:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct  2 05:01:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:01:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2074779171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:01:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:01:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2074779171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:01:22 np0005465604 nova_compute[260603]: 2025-10-02 09:01:22.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:01:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct  2 05:01:25 np0005465604 nova_compute[260603]: 2025-10-02 09:01:25.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:01:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 21 op/s
Oct  2 05:01:26 np0005465604 nova_compute[260603]: 2025-10-02 09:01:26.985 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:01:26 np0005465604 nova_compute[260603]: 2025-10-02 09:01:26.986 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.018 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.104 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.105 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.115 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.116 2 INFO nova.compute.claims [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Claim successful on node compute-0.ctlplane.example.com
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.221 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:01:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:01:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3182663924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.659 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.666 2 DEBUG nova.compute.provider_tree [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.687 2 DEBUG nova.scheduler.client.report [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.714 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395672.713664, 05300b1b-c030-499f-af29-cc94f4bf9e11 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.714 2 INFO nova.compute.manager [-] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] VM Stopped (Lifecycle Event)
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.716 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.717 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.761 2 DEBUG nova.compute.manager [None req-db783599-46f3-4398-ae7b-2f75fd51270c - - - - - -] [instance: 05300b1b-c030-499f-af29-cc94f4bf9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.777 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.777 2 DEBUG nova.network.neutron [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.797 2 INFO nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.817 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 05:01:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 21 op/s
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.906 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.907 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.907 2 INFO nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Creating image(s)
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.928 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:01:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.950 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.973 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:01:27 np0005465604 nova_compute[260603]: 2025-10-02 09:01:27.977 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:01:28
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control']
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.067 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.067 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.068 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.068 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.088 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.091 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:01:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.342 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.426 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:01:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.527 2 DEBUG nova.objects.instance [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 2f4391e7-f233-4092-b54f-89a4c7840cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.546 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.547 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Ensure instance console log exists: /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.547 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.548 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.548 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:01:28 np0005465604 nova_compute[260603]: 2025-10-02 09:01:28.888 2 DEBUG nova.policy [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 05:01:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 67 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.0 MiB/s wr, 4 op/s
Oct  2 05:01:30 np0005465604 nova_compute[260603]: 2025-10-02 09:01:30.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:01:30 np0005465604 nova_compute[260603]: 2025-10-02 09:01:30.971 2 DEBUG nova.network.neutron [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Successfully updated port: e14f3e85-84ff-49b3-8817-d926cb709f80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 05:01:30 np0005465604 nova_compute[260603]: 2025-10-02 09:01:30.989 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 05:01:30 np0005465604 nova_compute[260603]: 2025-10-02 09:01:30.989 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 05:01:30 np0005465604 nova_compute[260603]: 2025-10-02 09:01:30.989 2 DEBUG nova.network.neutron [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 05:01:31 np0005465604 podman[403618]: 2025-10-02 09:01:31.014916805 +0000 UTC m=+0.079236423 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  2 05:01:31 np0005465604 nova_compute[260603]: 2025-10-02 09:01:31.064 2 DEBUG nova.compute.manager [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:01:31 np0005465604 nova_compute[260603]: 2025-10-02 09:01:31.064 2 DEBUG nova.compute.manager [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Refreshing instance network info cache due to event network-changed-e14f3e85-84ff-49b3-8817-d926cb709f80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:01:31 np0005465604 nova_compute[260603]: 2025-10-02 09:01:31.064 2 DEBUG oslo_concurrency.lockutils [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:01:31 np0005465604 podman[403618]: 2025-10-02 09:01:31.121319941 +0000 UTC m=+0.185639550 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 05:01:31 np0005465604 nova_compute[260603]: 2025-10-02 09:01:31.694 2 DEBUG nova.network.neutron [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:01:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 67 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 1.0 MiB/s wr, 4 op/s
Oct  2 05:01:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:01:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:01:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:01:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.745 2 DEBUG nova.network.neutron [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Updating instance_info_cache with network_info: [{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.767 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.767 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Instance network_info: |[{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.768 2 DEBUG oslo_concurrency.lockutils [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.768 2 DEBUG nova.network.neutron [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Refreshing network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.771 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Start _get_guest_xml network_info=[{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.777 2 WARNING nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.786 2 DEBUG nova.virt.libvirt.host [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.786 2 DEBUG nova.virt.libvirt.host [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.790 2 DEBUG nova.virt.libvirt.host [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.790 2 DEBUG nova.virt.libvirt.host [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.791 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.791 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.792 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.792 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.792 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.793 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.793 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.793 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.793 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.794 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.794 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.794 2 DEBUG nova.virt.hardware [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.797 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:01:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:01:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:01:32 np0005465604 nova_compute[260603]: 2025-10-02 09:01:32.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  2 05:01:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 05:01:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:01:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:01:33 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2fb9b33e-3176-4f4f-87e6-487b915c4415 does not exist
Oct  2 05:01:33 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev fe764605-ff52-4471-9f99-aa9508867e49 does not exist
Oct  2 05:01:33 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 482b979a-d50b-4a64-bc21-d53d1df72a78 does not exist
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:33 np0005465604 podman[403950]: 2025-10-02 09:01:33.225230164 +0000 UTC m=+0.077302343 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 05:01:33 np0005465604 podman[403949]: 2025-10-02 09:01:33.271403528 +0000 UTC m=+0.133929592 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569992552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.373 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.400 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.404 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:01:33 np0005465604 podman[404149]: 2025-10-02 09:01:33.822152694 +0000 UTC m=+0.060807061 container create 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/85280588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:01:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.853 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.857 2 DEBUG nova.virt.libvirt.vif [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:01:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-442472618',display_name='tempest-TestNetworkBasicOps-server-442472618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-442472618',id=132,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMhdM2B/LrbDEwL4IycEJOfFLzMu6CpL77K1ym75rDBPo/cEgtvAALjq8eovnQtT7xFChCXYYkzy7ApfLKPYhw2LEullF9QdZuX88UmbA5oS4LeC35DxK9kYFS0r2dt6vg==',key_name='tempest-TestNetworkBasicOps-18474127',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-b1cmahn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:01:27Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=2f4391e7-f233-4092-b54f-89a4c7840cd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.857 2 DEBUG nova.network.os_vif_util [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.859 2 DEBUG nova.network.os_vif_util [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.862 2 DEBUG nova.objects.instance [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f4391e7-f233-4092-b54f-89a4c7840cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:01:33 np0005465604 systemd[1]: Started libpod-conmon-573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd.scope.
Oct  2 05:01:33 np0005465604 podman[404149]: 2025-10-02 09:01:33.793870254 +0000 UTC m=+0.032524661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.890 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  <uuid>2f4391e7-f233-4092-b54f-89a4c7840cd8</uuid>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  <name>instance-00000084</name>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-442472618</nova:name>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:01:32</nova:creationTime>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <nova:port uuid="e14f3e85-84ff-49b3-8817-d926cb709f80">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <entry name="serial">2f4391e7-f233-4092-b54f-89a4c7840cd8</entry>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <entry name="uuid">2f4391e7-f233-4092-b54f-89a4c7840cd8</entry>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/2f4391e7-f233-4092-b54f-89a4c7840cd8_disk">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:56:61:55"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <target dev="tape14f3e85-84"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/console.log" append="off"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:01:33 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:01:33 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:01:33 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:01:33 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.893 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Preparing to wait for external event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.894 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.894 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.895 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.896 2 DEBUG nova.virt.libvirt.vif [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:01:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-442472618',display_name='tempest-TestNetworkBasicOps-server-442472618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-442472618',id=132,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMhdM2B/LrbDEwL4IycEJOfFLzMu6CpL77K1ym75rDBPo/cEgtvAALjq8eovnQtT7xFChCXYYkzy7ApfLKPYhw2LEullF9QdZuX88UmbA5oS4LeC35DxK9kYFS0r2dt6vg==',key_name='tempest-TestNetworkBasicOps-18474127',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-b1cmahn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:01:27Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=2f4391e7-f233-4092-b54f-89a4c7840cd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.897 2 DEBUG nova.network.os_vif_util [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.899 2 DEBUG nova.network.os_vif_util [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.900 2 DEBUG os_vif [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.903 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape14f3e85-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape14f3e85-84, col_values=(('external_ids', {'iface-id': 'e14f3e85-84ff-49b3-8817-d926cb709f80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:61:55', 'vm-uuid': '2f4391e7-f233-4092-b54f-89a4c7840cd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:01:33 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:01:33 np0005465604 NetworkManager[45129]: <info>  [1759395693.9548] manager: (tape14f3e85-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:33 np0005465604 nova_compute[260603]: 2025-10-02 09:01:33.966 2 INFO os_vif [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84')#033[00m
Oct  2 05:01:33 np0005465604 podman[404149]: 2025-10-02 09:01:33.979601866 +0000 UTC m=+0.218256313 container init 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:01:33 np0005465604 podman[404149]: 2025-10-02 09:01:33.988014577 +0000 UTC m=+0.226668924 container start 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 05:01:33 np0005465604 podman[404149]: 2025-10-02 09:01:33.991404583 +0000 UTC m=+0.230058960 container attach 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:01:33 np0005465604 strange_allen[404167]: 167 167
Oct  2 05:01:33 np0005465604 systemd[1]: libpod-573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd.scope: Deactivated successfully.
Oct  2 05:01:33 np0005465604 podman[404149]: 2025-10-02 09:01:33.998672599 +0000 UTC m=+0.237326946 container died 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:01:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-03d45cd62560ee12ac27f1bf28c7db46914abca9a0f6c9b9612b1348d820d19f-merged.mount: Deactivated successfully.
Oct  2 05:01:34 np0005465604 podman[404149]: 2025-10-02 09:01:34.041962915 +0000 UTC m=+0.280617262 container remove 573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_allen, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.056 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.056 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.056 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:56:61:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.057 2 INFO nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Using config drive#033[00m
Oct  2 05:01:34 np0005465604 systemd[1]: libpod-conmon-573ca29104ad6aecb4e527307c7ef97dd6ae23b8a36bc099c1f474797878abcd.scope: Deactivated successfully.
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.081 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:01:34 np0005465604 podman[404211]: 2025-10-02 09:01:34.249694729 +0000 UTC m=+0.049509589 container create d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:01:34 np0005465604 systemd[1]: Started libpod-conmon-d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea.scope.
Oct  2 05:01:34 np0005465604 podman[404211]: 2025-10-02 09:01:34.226907491 +0000 UTC m=+0.026722351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:01:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:01:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:34 np0005465604 podman[404211]: 2025-10-02 09:01:34.373846847 +0000 UTC m=+0.173661678 container init d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:01:34 np0005465604 podman[404211]: 2025-10-02 09:01:34.390774464 +0000 UTC m=+0.190589324 container start d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:01:34 np0005465604 podman[404211]: 2025-10-02 09:01:34.39449552 +0000 UTC m=+0.194310380 container attach d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.572 2 DEBUG nova.network.neutron [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Updated VIF entry in instance network info cache for port e14f3e85-84ff-49b3-8817-d926cb709f80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.574 2 DEBUG nova.network.neutron [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Updating instance_info_cache with network_info: [{"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.606 2 DEBUG oslo_concurrency.lockutils [req-c1649834-55a5-48a5-9f4b-e7a0fb216e7b req-d0a71596-0e58-41e7-a89a-d1e879b37ad7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-2f4391e7-f233-4092-b54f-89a4c7840cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.670 2 INFO nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Creating config drive at /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config#033[00m
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.679 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv_qhdrbe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:01:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:34.840 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:34.841 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:34.841 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.850 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv_qhdrbe" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.892 2 DEBUG nova.storage.rbd_utils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:01:34 np0005465604 nova_compute[260603]: 2025-10-02 09:01:34.900 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.113 2 DEBUG oslo_concurrency.processutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config 2f4391e7-f233-4092-b54f-89a4c7840cd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.116 2 INFO nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Deleting local config drive /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8/disk.config because it was imported into RBD.#033[00m
Oct  2 05:01:35 np0005465604 kernel: tape14f3e85-84: entered promiscuous mode
Oct  2 05:01:35 np0005465604 NetworkManager[45129]: <info>  [1759395695.1909] manager: (tape14f3e85-84): new Tun device (/org/freedesktop/NetworkManager/Devices/570)
Oct  2 05:01:35 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:35Z|01425|binding|INFO|Claiming lport e14f3e85-84ff-49b3-8817-d926cb709f80 for this chassis.
Oct  2 05:01:35 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:35Z|01426|binding|INFO|e14f3e85-84ff-49b3-8817-d926cb709f80: Claiming fa:16:3e:56:61:55 10.100.0.6
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:35 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:35Z|01427|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 ovn-installed in OVS
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:35 np0005465604 systemd-udevd[404295]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:01:35 np0005465604 NetworkManager[45129]: <info>  [1759395695.2459] device (tape14f3e85-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:01:35 np0005465604 NetworkManager[45129]: <info>  [1759395695.2477] device (tape14f3e85-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:01:35 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:35Z|01428|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 up in Southbound
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.248 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:61:55 10.100.0.6'], port_security=['fa:16:3e:56:61:55 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2f4391e7-f233-4092-b54f-89a4c7840cd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=393b5666-d876-4099-9af5-9247fa9f1ed9, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e14f3e85-84ff-49b3-8817-d926cb709f80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.249 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e14f3e85-84ff-49b3-8817-d926cb709f80 in datapath 577c4506-1f5a-48bf-b267-75fd69fe5e1c bound to our chassis#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.251 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 577c4506-1f5a-48bf-b267-75fd69fe5e1c#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.268 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ac11afe4-08be-4883-b929-5bfc8a64fbf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.269 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap577c4506-11 in ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.271 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap577c4506-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.272 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83cce7a1-fc69-440a-a2cc-6cfe54646e74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.273 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f32c605c-9f04-4033-9e91-ae293b16dc00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.287 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[531e94aa-0155-4f8a-b925-5c740e491406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 systemd-machined[214636]: New machine qemu-166-instance-00000084.
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.304 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[224caecb-ac33-45c4-bd6f-00a296411579]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 systemd[1]: Started Virtual Machine qemu-166-instance-00000084.
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.353 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbf93d1-6854-43e9-b9ce-0376ece9e9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.360 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[88d832af-3767-4de1-950d-79d2696561dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 NetworkManager[45129]: <info>  [1759395695.3629] manager: (tap577c4506-10): new Veth device (/org/freedesktop/NetworkManager/Devices/571)
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.412 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[70ab1354-915d-42eb-bd56-21e9a7fa33ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.417 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d630763d-ebaf-40ad-b94c-bde9fc8744ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 NetworkManager[45129]: <info>  [1759395695.4527] device (tap577c4506-10): carrier: link connected
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.469 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a4be2566-0117-4249-885f-bdde2f6a190e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.472 2 DEBUG nova.compute.manager [req-66609f5a-395d-48c9-9fb7-aab465fc2919 req-0d37f5d8-81fe-4c9a-9d83-6f42aa28fedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.473 2 DEBUG oslo_concurrency.lockutils [req-66609f5a-395d-48c9-9fb7-aab465fc2919 req-0d37f5d8-81fe-4c9a-9d83-6f42aa28fedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.473 2 DEBUG oslo_concurrency.lockutils [req-66609f5a-395d-48c9-9fb7-aab465fc2919 req-0d37f5d8-81fe-4c9a-9d83-6f42aa28fedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.473 2 DEBUG oslo_concurrency.lockutils [req-66609f5a-395d-48c9-9fb7-aab465fc2919 req-0d37f5d8-81fe-4c9a-9d83-6f42aa28fedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.473 2 DEBUG nova.compute.manager [req-66609f5a-395d-48c9-9fb7-aab465fc2919 req-0d37f5d8-81fe-4c9a-9d83-6f42aa28fedd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Processing event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.493 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4531fd-5002-4941-aaf6-8cf615bbb67b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap577c4506-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:0e:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657405, 'reachable_time': 26949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404342, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 romantic_galois[404228]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:01:35 np0005465604 romantic_galois[404228]: --> relative data size: 1.0
Oct  2 05:01:35 np0005465604 romantic_galois[404228]: --> All data devices are unavailable
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.521 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d785a274-168d-4492-a5f3-787e42d5c471]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:e80'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657405, 'tstamp': 657405}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404344, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:35 np0005465604 systemd[1]: libpod-d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea.scope: Deactivated successfully.
Oct  2 05:01:35 np0005465604 systemd[1]: libpod-d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea.scope: Consumed 1.058s CPU time.
Oct  2 05:01:35 np0005465604 podman[404211]: 2025-10-02 09:01:35.540485813 +0000 UTC m=+1.340300633 container died d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.550 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab831c67-6170-433d-a105-12dee78ce919]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap577c4506-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:0e:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657405, 'reachable_time': 26949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404345, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8adf27e840e3e4d392d94ffd8cae900ed55da66019836d2c3369829914de73d1-merged.mount: Deactivated successfully.
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.596 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d12f2961-0f97-43af-96ec-ba4834f03998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 podman[404211]: 2025-10-02 09:01:35.603344486 +0000 UTC m=+1.403159306 container remove d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:01:35 np0005465604 systemd[1]: libpod-conmon-d9d6582c1570b87acf6712521b43128761515c55a3a8f29d4ac370a1d8a61fea.scope: Deactivated successfully.
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.665 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[efd31fdc-e593-4460-9514-f217e7503b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.667 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap577c4506-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.668 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.668 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap577c4506-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:35 np0005465604 kernel: tap577c4506-10: entered promiscuous mode
Oct  2 05:01:35 np0005465604 NetworkManager[45129]: <info>  [1759395695.6719] manager: (tap577c4506-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.674 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap577c4506-10, col_values=(('external_ids', {'iface-id': '8acd815f-105a-41fb-b7bf-32e4f419ccc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:35 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:35Z|01429|binding|INFO|Releasing lport 8acd815f-105a-41fb-b7bf-32e4f419ccc2 from this chassis (sb_readonly=0)
Oct  2 05:01:35 np0005465604 nova_compute[260603]: 2025-10-02 09:01:35.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.691 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.692 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[74895d1a-3d13-4b99-ab09-a498ccba9456]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.694 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-577c4506-1f5a-48bf-b267-75fd69fe5e1c
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/577c4506-1f5a-48bf-b267-75fd69fe5e1c.pid.haproxy
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 577c4506-1f5a-48bf-b267-75fd69fe5e1c
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:01:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:35.697 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'env', 'PROCESS_TAG=haproxy-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/577c4506-1f5a-48bf-b267-75fd69fe5e1c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:01:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:01:36 np0005465604 podman[404527]: 2025-10-02 09:01:36.094683055 +0000 UTC m=+0.055247818 container create d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:01:36 np0005465604 systemd[1]: Started libpod-conmon-d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26.scope.
Oct  2 05:01:36 np0005465604 podman[404527]: 2025-10-02 09:01:36.067884372 +0000 UTC m=+0.028449165 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:01:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:01:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82b48f74f74c0fe00f86049b4511e73bea7d27c0713f68a9f2381a5dc573159/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:36 np0005465604 podman[404527]: 2025-10-02 09:01:36.187080257 +0000 UTC m=+0.147645020 container init d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:01:36 np0005465604 podman[404527]: 2025-10-02 09:01:36.194032152 +0000 UTC m=+0.154596905 container start d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 05:01:36 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [NOTICE]   (404569) : New worker (404571) forked
Oct  2 05:01:36 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [NOTICE]   (404569) : Loading success.
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.293 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395696.2929516, 2f4391e7-f233-4092-b54f-89a4c7840cd8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.293 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] VM Started (Lifecycle Event)#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.295 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.298 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.301 2 INFO nova.virt.libvirt.driver [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Instance spawned successfully.#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.302 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.321 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.329 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.334 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.334 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.335 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.335 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.335 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.336 2 DEBUG nova.virt.libvirt.driver [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.360 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.360 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395696.2936819, 2f4391e7-f233-4092-b54f-89a4c7840cd8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.361 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:01:36 np0005465604 podman[404594]: 2025-10-02 09:01:36.362731505 +0000 UTC m=+0.048875940 container create f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.383 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.387 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395696.297624, 2f4391e7-f233-4092-b54f-89a4c7840cd8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.387 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.394 2 INFO nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Took 8.49 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.395 2 DEBUG nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:01:36 np0005465604 systemd[1]: Started libpod-conmon-f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50.scope.
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.403 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.407 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.422 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:01:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:01:36 np0005465604 podman[404594]: 2025-10-02 09:01:36.343109626 +0000 UTC m=+0.029254061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:01:36 np0005465604 podman[404594]: 2025-10-02 09:01:36.445678433 +0000 UTC m=+0.131822878 container init f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.448 2 INFO nova.compute.manager [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Took 9.38 seconds to build instance.#033[00m
Oct  2 05:01:36 np0005465604 podman[404594]: 2025-10-02 09:01:36.452499505 +0000 UTC m=+0.138643920 container start f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:01:36 np0005465604 podman[404594]: 2025-10-02 09:01:36.45523998 +0000 UTC m=+0.141384425 container attach f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:01:36 np0005465604 vibrant_carver[404610]: 167 167
Oct  2 05:01:36 np0005465604 systemd[1]: libpod-f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50.scope: Deactivated successfully.
Oct  2 05:01:36 np0005465604 podman[404594]: 2025-10-02 09:01:36.458099299 +0000 UTC m=+0.144243714 container died f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:01:36 np0005465604 nova_compute[260603]: 2025-10-02 09:01:36.462 2 DEBUG oslo_concurrency.lockutils [None req-cbb27da9-f7f9-4303-a743-3d7d4801dd94 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:01:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fe6bd4eb3b65bc4a2000ce87d585c95cee4497ee8f840b7830995826ec3f9395-merged.mount: Deactivated successfully.
Oct  2 05:01:36 np0005465604 podman[404594]: 2025-10-02 09:01:36.499197106 +0000 UTC m=+0.185341521 container remove f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_carver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:01:36 np0005465604 systemd[1]: libpod-conmon-f0c78bd330232a0fc2a87fe0df9946e244bf5846a89434c70626582dca806d50.scope: Deactivated successfully.
Oct  2 05:01:36 np0005465604 podman[404632]: 2025-10-02 09:01:36.71235816 +0000 UTC m=+0.056941941 container create 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 05:01:36 np0005465604 systemd[1]: Started libpod-conmon-99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149.scope.
Oct  2 05:01:36 np0005465604 podman[404632]: 2025-10-02 09:01:36.693721681 +0000 UTC m=+0.038305432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:01:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:01:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d5d7b749fa4d5b0415fcef2a9ff510686ba8f2f3756d1533be5882f9a7813e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d5d7b749fa4d5b0415fcef2a9ff510686ba8f2f3756d1533be5882f9a7813e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d5d7b749fa4d5b0415fcef2a9ff510686ba8f2f3756d1533be5882f9a7813e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d5d7b749fa4d5b0415fcef2a9ff510686ba8f2f3756d1533be5882f9a7813e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:36 np0005465604 podman[404632]: 2025-10-02 09:01:36.825827377 +0000 UTC m=+0.170411138 container init 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:01:36 np0005465604 podman[404632]: 2025-10-02 09:01:36.849869704 +0000 UTC m=+0.194453465 container start 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 05:01:36 np0005465604 podman[404632]: 2025-10-02 09:01:36.853415914 +0000 UTC m=+0.197999845 container attach 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:01:37 np0005465604 nova_compute[260603]: 2025-10-02 09:01:37.630 2 DEBUG nova.compute.manager [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:01:37 np0005465604 nova_compute[260603]: 2025-10-02 09:01:37.630 2 DEBUG oslo_concurrency.lockutils [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:01:37 np0005465604 nova_compute[260603]: 2025-10-02 09:01:37.631 2 DEBUG oslo_concurrency.lockutils [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:01:37 np0005465604 nova_compute[260603]: 2025-10-02 09:01:37.631 2 DEBUG oslo_concurrency.lockutils [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:01:37 np0005465604 nova_compute[260603]: 2025-10-02 09:01:37.636 2 DEBUG nova.compute.manager [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] No waiting events found dispatching network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 05:01:37 np0005465604 nova_compute[260603]: 2025-10-02 09:01:37.637 2 WARNING nova.compute.manager [req-7d392555-804f-4811-9c9a-5b9c55cf03aa req-4c4829cd-997e-4622-a579-01758180eb31 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received unexpected event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with vm_state active and task_state None.
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]: {
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:    "0": [
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:        {
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "devices": [
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "/dev/loop3"
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            ],
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_name": "ceph_lv0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_size": "21470642176",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "name": "ceph_lv0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "tags": {
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.cluster_name": "ceph",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.crush_device_class": "",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.encrypted": "0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.osd_id": "0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.type": "block",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.vdo": "0"
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            },
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "type": "block",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "vg_name": "ceph_vg0"
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:        }
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:    ],
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:    "1": [
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:        {
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "devices": [
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "/dev/loop4"
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            ],
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_name": "ceph_lv1",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_size": "21470642176",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "name": "ceph_lv1",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "tags": {
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.cluster_name": "ceph",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.crush_device_class": "",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.encrypted": "0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.osd_id": "1",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.type": "block",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.vdo": "0"
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            },
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "type": "block",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "vg_name": "ceph_vg1"
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:        }
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:    ],
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:    "2": [
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:        {
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "devices": [
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "/dev/loop5"
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            ],
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_name": "ceph_lv2",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_size": "21470642176",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "name": "ceph_lv2",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "tags": {
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.cluster_name": "ceph",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.crush_device_class": "",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.encrypted": "0",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.osd_id": "2",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.type": "block",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:                "ceph.vdo": "0"
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            },
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "type": "block",
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:            "vg_name": "ceph_vg2"
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:        }
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]:    ]
Oct  2 05:01:37 np0005465604 adoring_wescoff[404649]: }
Oct  2 05:01:37 np0005465604 systemd[1]: libpod-99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149.scope: Deactivated successfully.
Oct  2 05:01:37 np0005465604 podman[404632]: 2025-10-02 09:01:37.749310195 +0000 UTC m=+1.093893986 container died 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 05:01:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay-98d5d7b749fa4d5b0415fcef2a9ff510686ba8f2f3756d1533be5882f9a7813e-merged.mount: Deactivated successfully.
Oct  2 05:01:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Oct  2 05:01:37 np0005465604 podman[404632]: 2025-10-02 09:01:37.85439286 +0000 UTC m=+1.198976621 container remove 99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:01:37 np0005465604 systemd[1]: libpod-conmon-99a9f9a8d2a6395c81504d83ef27ffdbba6bb5d9012ab8e490c87415c32f7149.scope: Deactivated successfully.
Oct  2 05:01:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:38 np0005465604 podman[404722]: 2025-10-02 09:01:38.224674617 +0000 UTC m=+0.080521753 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 05:01:38 np0005465604 podman[404721]: 2025-10-02 09:01:38.228710982 +0000 UTC m=+0.085872729 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:01:38 np0005465604 podman[404850]: 2025-10-02 09:01:38.836601793 +0000 UTC m=+0.110604368 container create c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 05:01:38 np0005465604 podman[404850]: 2025-10-02 09:01:38.770371885 +0000 UTC m=+0.044374540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:01:38 np0005465604 systemd[1]: Started libpod-conmon-c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41.scope.
Oct  2 05:01:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:01:38 np0005465604 nova_compute[260603]: 2025-10-02 09:01:38.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:01:38 np0005465604 podman[404850]: 2025-10-02 09:01:38.957961525 +0000 UTC m=+0.231964120 container init c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 05:01:38 np0005465604 podman[404850]: 2025-10-02 09:01:38.970841224 +0000 UTC m=+0.244843909 container start c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 05:01:38 np0005465604 objective_tu[404866]: 167 167
Oct  2 05:01:38 np0005465604 systemd[1]: libpod-c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41.scope: Deactivated successfully.
Oct  2 05:01:38 np0005465604 podman[404850]: 2025-10-02 09:01:38.98838748 +0000 UTC m=+0.262390155 container attach c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:01:38 np0005465604 podman[404850]: 2025-10-02 09:01:38.989236516 +0000 UTC m=+0.263239131 container died c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:01:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e86c1f62c105ae2a20ceddb2c7c23a43056760b1298122941114b3e37c6a7a55-merged.mount: Deactivated successfully.
Oct  2 05:01:39 np0005465604 podman[404850]: 2025-10-02 09:01:39.091874106 +0000 UTC m=+0.365876701 container remove c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:01:39 np0005465604 systemd[1]: libpod-conmon-c79d96ed5f309d8105a1117f7f596fe5adda775b04903f4eedef467fc6043e41.scope: Deactivated successfully.
Oct  2 05:01:39 np0005465604 podman[404894]: 2025-10-02 09:01:39.384771978 +0000 UTC m=+0.072670899 container create 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 05:01:39 np0005465604 systemd[1]: Started libpod-conmon-040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8.scope.
Oct  2 05:01:39 np0005465604 podman[404894]: 2025-10-02 09:01:39.357453409 +0000 UTC m=+0.045352360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:01:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:01:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dbbe85e09d11099e0ef94e78d9f511abef2888a5913392f460602df123924a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dbbe85e09d11099e0ef94e78d9f511abef2888a5913392f460602df123924a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dbbe85e09d11099e0ef94e78d9f511abef2888a5913392f460602df123924a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:39 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6dbbe85e09d11099e0ef94e78d9f511abef2888a5913392f460602df123924a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:01:39 np0005465604 podman[404894]: 2025-10-02 09:01:39.498020527 +0000 UTC m=+0.185919528 container init 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct  2 05:01:39 np0005465604 podman[404894]: 2025-10-02 09:01:39.512789236 +0000 UTC m=+0.200688157 container start 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:01:39 np0005465604 podman[404894]: 2025-10-02 09:01:39.516634925 +0000 UTC m=+0.204533876 container attach 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:01:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.208 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.212 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.213 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.215 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.216 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.217 2 INFO nova.compute.manager [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Terminating instance#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.219 2 DEBUG nova.compute.manager [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:01:40 np0005465604 kernel: tape14f3e85-84 (unregistering): left promiscuous mode
Oct  2 05:01:40 np0005465604 NetworkManager[45129]: <info>  [1759395700.2668] device (tape14f3e85-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:01:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:40Z|01430|binding|INFO|Releasing lport e14f3e85-84ff-49b3-8817-d926cb709f80 from this chassis (sb_readonly=0)
Oct  2 05:01:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:40Z|01431|binding|INFO|Setting lport e14f3e85-84ff-49b3-8817-d926cb709f80 down in Southbound
Oct  2 05:01:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:01:40Z|01432|binding|INFO|Removing iface tape14f3e85-84 ovn-installed in OVS
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.286 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:61:55 10.100.0.6'], port_security=['fa:16:3e:56:61:55 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2f4391e7-f233-4092-b54f-89a4c7840cd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1319178476', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'cc3a0c93-fc04-4c05-88f3-b624ca1ad1bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=393b5666-d876-4099-9af5-9247fa9f1ed9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=e14f3e85-84ff-49b3-8817-d926cb709f80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.288 162357 INFO neutron.agent.ovn.metadata.agent [-] Port e14f3e85-84ff-49b3-8817-d926cb709f80 in datapath 577c4506-1f5a-48bf-b267-75fd69fe5e1c unbound from our chassis#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.289 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 577c4506-1f5a-48bf-b267-75fd69fe5e1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.292 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6645510c-85e3-4fb3-a25d-a642955965d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.292 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c namespace which is not needed anymore#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:40 np0005465604 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct  2 05:01:40 np0005465604 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000084.scope: Consumed 4.893s CPU time.
Oct  2 05:01:40 np0005465604 systemd-machined[214636]: Machine qemu-166-instance-00000084 terminated.
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.464 2 INFO nova.virt.libvirt.driver [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Instance destroyed successfully.#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.466 2 DEBUG nova.objects.instance [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 2f4391e7-f233-4092-b54f-89a4c7840cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:01:40 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [NOTICE]   (404569) : haproxy version is 2.8.14-c23fe91
Oct  2 05:01:40 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [NOTICE]   (404569) : path to executable is /usr/sbin/haproxy
Oct  2 05:01:40 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [WARNING]  (404569) : Exiting Master process...
Oct  2 05:01:40 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [WARNING]  (404569) : Exiting Master process...
Oct  2 05:01:40 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [ALERT]    (404569) : Current worker (404571) exited with code 143 (Terminated)
Oct  2 05:01:40 np0005465604 neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c[404565]: [WARNING]  (404569) : All workers exited. Exiting... (0)
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.484 2 DEBUG nova.virt.libvirt.vif [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:01:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-442472618',display_name='tempest-TestNetworkBasicOps-server-442472618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-442472618',id=132,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMhdM2B/LrbDEwL4IycEJOfFLzMu6CpL77K1ym75rDBPo/cEgtvAALjq8eovnQtT7xFChCXYYkzy7ApfLKPYhw2LEullF9QdZuX88UmbA5oS4LeC35DxK9kYFS0r2dt6vg==',key_name='tempest-TestNetworkBasicOps-18474127',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:01:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-b1cmahn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:01:36Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=2f4391e7-f233-4092-b54f-89a4c7840cd8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.484 2 DEBUG nova.network.os_vif_util [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "e14f3e85-84ff-49b3-8817-d926cb709f80", "address": "fa:16:3e:56:61:55", "network": {"id": "577c4506-1f5a-48bf-b267-75fd69fe5e1c", "bridge": "br-int", "label": "tempest-network-smoke--1975765374", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape14f3e85-84", "ovs_interfaceid": "e14f3e85-84ff-49b3-8817-d926cb709f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.485 2 DEBUG nova.network.os_vif_util [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:01:40 np0005465604 systemd[1]: libpod-d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26.scope: Deactivated successfully.
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.486 2 DEBUG os_vif [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.489 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14f3e85-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:40 np0005465604 podman[404960]: 2025-10-02 09:01:40.493887644 +0000 UTC m=+0.067448657 container died d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.496 2 INFO os_vif [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:55,bridge_name='br-int',has_traffic_filtering=True,id=e14f3e85-84ff-49b3-8817-d926cb709f80,network=Network(577c4506-1f5a-48bf-b267-75fd69fe5e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape14f3e85-84')#033[00m
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]: {
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "osd_id": 2,
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "type": "bluestore"
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:    },
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "osd_id": 1,
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "type": "bluestore"
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:    },
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "osd_id": 0,
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:        "type": "bluestore"
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]:    }
Oct  2 05:01:40 np0005465604 admiring_cannon[404910]: }
Oct  2 05:01:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26-userdata-shm.mount: Deactivated successfully.
Oct  2 05:01:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d82b48f74f74c0fe00f86049b4511e73bea7d27c0713f68a9f2381a5dc573159-merged.mount: Deactivated successfully.
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:40 np0005465604 systemd[1]: libpod-040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8.scope: Deactivated successfully.
Oct  2 05:01:40 np0005465604 systemd[1]: libpod-040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8.scope: Consumed 1.028s CPU time.
Oct  2 05:01:40 np0005465604 conmon[404910]: conmon 040aebf735a84c5f97c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8.scope/container/memory.events
Oct  2 05:01:40 np0005465604 podman[404894]: 2025-10-02 09:01:40.550462813 +0000 UTC m=+1.238361734 container died 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct  2 05:01:40 np0005465604 podman[404960]: 2025-10-02 09:01:40.581003691 +0000 UTC m=+0.154564694 container cleanup d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 05:01:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a6dbbe85e09d11099e0ef94e78d9f511abef2888a5913392f460602df123924a-merged.mount: Deactivated successfully.
Oct  2 05:01:40 np0005465604 systemd[1]: libpod-conmon-d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26.scope: Deactivated successfully.
Oct  2 05:01:40 np0005465604 podman[404894]: 2025-10-02 09:01:40.617592279 +0000 UTC m=+1.305491200 container remove 040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:01:40 np0005465604 systemd[1]: libpod-conmon-040aebf735a84c5f97c6d90982cea571e5ef9aaf3ca4b217c54256dc3d1563d8.scope: Deactivated successfully.
Oct  2 05:01:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:01:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:01:40 np0005465604 podman[405040]: 2025-10-02 09:01:40.661471173 +0000 UTC m=+0.052694539 container remove d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 05:01:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:01:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:01:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d01b29a7-e352-43b8-8a06-edd278ac8d6c does not exist
Oct  2 05:01:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 879cecf4-5919-447a-aa60-39b3ce169e04 does not exist
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.671 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e08d5c-9309-4e67-9a62-080881f9fe2b]: (4, ('Thu Oct  2 09:01:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c (d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26)\nd028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26\nThu Oct  2 09:01:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c (d028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26)\nd028af4ac83a4b0eec0fc3bca259fdcf242edf6425a9abc8e101a528b9b40c26\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.675 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3e25632e-68d4-4def-94f2-1af5739f3d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.676 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap577c4506-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:40 np0005465604 kernel: tap577c4506-10: left promiscuous mode
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.698 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[05d6bc7c-feee-43a6-b1d9-56f6b806ff41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.721 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ba08c17b-657f-4c50-a5e8-c384d6e70e14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.722 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5e245973-fde8-46df-aeba-03bfb97042cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.739 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[46390a86-e9d3-4003-a739-be3ef04bf9d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657395, 'reachable_time': 24370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405078, 'error': None, 'target': 'ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:40 np0005465604 systemd[1]: run-netns-ovnmeta\x2d577c4506\x2d1f5a\x2d48bf\x2db267\x2d75fd69fe5e1c.mount: Deactivated successfully.
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.745 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-577c4506-1f5a-48bf-b267-75fd69fe5e1c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:01:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:40.745 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[98c471e1-764e-417a-bbe6-1d22abea8b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.870 2 DEBUG nova.compute.manager [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.871 2 DEBUG oslo_concurrency.lockutils [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.871 2 DEBUG oslo_concurrency.lockutils [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.872 2 DEBUG oslo_concurrency.lockutils [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.872 2 DEBUG nova.compute.manager [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] No waiting events found dispatching network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.872 2 DEBUG nova.compute.manager [req-ef344852-779b-49ad-82e8-341840f957c1 req-da5cde94-7adf-4cb0-b484-339df275f8b2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-vif-unplugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.917 2 INFO nova.virt.libvirt.driver [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Deleting instance files /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8_del#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.918 2 INFO nova.virt.libvirt.driver [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Deletion of /var/lib/nova/instances/2f4391e7-f233-4092-b54f-89a4c7840cd8_del complete#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.966 2 INFO nova.compute.manager [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.967 2 DEBUG oslo.service.loopingcall [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.968 2 DEBUG nova.compute.manager [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:01:40 np0005465604 nova_compute[260603]: 2025-10-02 09:01:40.968 2 DEBUG nova.network.neutron [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:01:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:01:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:01:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 773 KiB/s wr, 82 op/s
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.224 2 DEBUG nova.network.neutron [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.240 2 INFO nova.compute.manager [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Took 1.27 seconds to deallocate network for instance.#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.282 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.283 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.351 2 DEBUG oslo_concurrency.processutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:01:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:01:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3000227654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.832 2 DEBUG oslo_concurrency.processutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.839 2 DEBUG nova.compute.provider_tree [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.857 2 DEBUG nova.scheduler.client.report [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.885 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.920 2 INFO nova.scheduler.client.report [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 2f4391e7-f233-4092-b54f-89a4c7840cd8#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.964 2 DEBUG nova.compute.manager [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.965 2 DEBUG oslo_concurrency.lockutils [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.965 2 DEBUG oslo_concurrency.lockutils [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.965 2 DEBUG oslo_concurrency.lockutils [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.966 2 DEBUG nova.compute.manager [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] No waiting events found dispatching network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.966 2 WARNING nova.compute.manager [req-d4daa0c5-78ac-4179-a7e2-248ef70287d4 req-077ad738-dfd0-498a-8193-7b6dcf52e55a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Received unexpected event network-vif-plugged-e14f3e85-84ff-49b3-8817-d926cb709f80 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:01:42 np0005465604 nova_compute[260603]: 2025-10-02 09:01:42.981 2 DEBUG oslo_concurrency.lockutils [None req-6caad9ac-a9c1-4086-a7e8-bef95578a28b ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "2f4391e7-f233-4092-b54f-89a4c7840cd8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 775 KiB/s wr, 122 op/s
Oct  2 05:01:45 np0005465604 nova_compute[260603]: 2025-10-02 09:01:45.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:45.032 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:01:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:45.034 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:01:45 np0005465604 nova_compute[260603]: 2025-10-02 09:01:45.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:45 np0005465604 nova_compute[260603]: 2025-10-02 09:01:45.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Oct  2 05:01:47 np0005465604 nova_compute[260603]: 2025-10-02 09:01:47.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:47 np0005465604 nova_compute[260603]: 2025-10-02 09:01:47.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Oct  2 05:01:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:48 np0005465604 nova_compute[260603]: 2025-10-02 09:01:48.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:01:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 KiB/s wr, 78 op/s
Oct  2 05:01:50 np0005465604 nova_compute[260603]: 2025-10-02 09:01:50.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:50 np0005465604 nova_compute[260603]: 2025-10-02 09:01:50.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:01:50 np0005465604 nova_compute[260603]: 2025-10-02 09:01:50.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:01:50 np0005465604 nova_compute[260603]: 2025-10-02 09:01:50.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:01:50 np0005465604 nova_compute[260603]: 2025-10-02 09:01:50.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:01:50 np0005465604 nova_compute[260603]: 2025-10-02 09:01:50.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:51 np0005465604 nova_compute[260603]: 2025-10-02 09:01:51.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:01:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Oct  2 05:01:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:01:53.037 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:01:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Oct  2 05:01:54 np0005465604 nova_compute[260603]: 2025-10-02 09:01:54.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:01:54 np0005465604 nova_compute[260603]: 2025-10-02 09:01:54.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:01:54 np0005465604 nova_compute[260603]: 2025-10-02 09:01:54.627 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:54 np0005465604 nova_compute[260603]: 2025-10-02 09:01:54.628 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:54 np0005465604 nova_compute[260603]: 2025-10-02 09:01:54.628 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:54 np0005465604 nova_compute[260603]: 2025-10-02 09:01:54.628 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:01:54 np0005465604 nova_compute[260603]: 2025-10-02 09:01:54.629 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:01:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:01:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/75075170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.091 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.254 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.255 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3687MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.255 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.255 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.350 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.350 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.366 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.462 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395700.4615595, 2f4391e7-f233-4092-b54f-89a4c7840cd8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.463 2 INFO nova.compute.manager [-] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.492 2 DEBUG nova.compute.manager [None req-dd906b1b-348e-4e40-b27d-03906e0ca8a9 - - - - - -] [instance: 2f4391e7-f233-4092-b54f-89a4c7840cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:01:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:01:55 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/591262340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.783 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.790 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.803 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.833 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:01:55 np0005465604 nova_compute[260603]: 2025-10-02 09:01:55.833 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:01:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:01:56 np0005465604 nova_compute[260603]: 2025-10-02 09:01:56.834 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:01:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:01:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:01:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:01:58 np0005465604 nova_compute[260603]: 2025-10-02 09:01:58.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:01:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:02:00 np0005465604 nova_compute[260603]: 2025-10-02 09:02:00.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:00 np0005465604 nova_compute[260603]: 2025-10-02 09:02:00.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:02:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:02:04 np0005465604 podman[405176]: 2025-10-02 09:02:04.059260075 +0000 UTC m=+0.108141782 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 05:02:04 np0005465604 podman[405175]: 2025-10-02 09:02:04.091179097 +0000 UTC m=+0.141888870 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 05:02:04 np0005465604 nova_compute[260603]: 2025-10-02 09:02:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:02:05 np0005465604 nova_compute[260603]: 2025-10-02 09:02:05.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:05 np0005465604 nova_compute[260603]: 2025-10-02 09:02:05.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:02:06 np0005465604 nova_compute[260603]: 2025-10-02 09:02:06.997 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:06 np0005465604 nova_compute[260603]: 2025-10-02 09:02:06.998 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.018 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.100 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.101 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.111 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.111 2 INFO nova.compute.claims [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.215 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:02:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:02:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1225500844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.699 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.709 2 DEBUG nova.compute.provider_tree [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.726 2 DEBUG nova.scheduler.client.report [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.758 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.759 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.806 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.807 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.828 2 INFO nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.847 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:02:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 41 MiB data, 891 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.939 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.941 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.941 2 INFO nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Creating image(s)#033[00m
Oct  2 05:02:07 np0005465604 nova_compute[260603]: 2025-10-02 09:02:07.979 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.016 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.048 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.052 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.123 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.124 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.125 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.125 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.144 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.147 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3e3c7092-c580-477b-8596-4fd3b719e700_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.420 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3e3c7092-c580-477b-8596-4fd3b719e700_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.492 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.595 2 DEBUG nova.objects.instance [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 3e3c7092-c580-477b-8596-4fd3b719e700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.613 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.614 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Ensure instance console log exists: /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.614 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.615 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.615 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:08 np0005465604 nova_compute[260603]: 2025-10-02 09:02:08.761 2 DEBUG nova.policy [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:02:09 np0005465604 podman[405410]: 2025-10-02 09:02:09.03533267 +0000 UTC m=+0.093243779 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct  2 05:02:09 np0005465604 podman[405409]: 2025-10-02 09:02:09.03565219 +0000 UTC m=+0.103169746 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 05:02:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 43 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 92 KiB/s wr, 3 op/s
Oct  2 05:02:10 np0005465604 nova_compute[260603]: 2025-10-02 09:02:10.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:10 np0005465604 nova_compute[260603]: 2025-10-02 09:02:10.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:10 np0005465604 nova_compute[260603]: 2025-10-02 09:02:10.800 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Successfully created port: 43ff7902-a749-4e8e-9c64-efd972acf1a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:02:11 np0005465604 nova_compute[260603]: 2025-10-02 09:02:11.716 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Successfully updated port: 43ff7902-a749-4e8e-9c64-efd972acf1a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:02:11 np0005465604 nova_compute[260603]: 2025-10-02 09:02:11.730 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:02:11 np0005465604 nova_compute[260603]: 2025-10-02 09:02:11.730 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:02:11 np0005465604 nova_compute[260603]: 2025-10-02 09:02:11.730 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:02:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 43 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 92 KiB/s wr, 3 op/s
Oct  2 05:02:11 np0005465604 nova_compute[260603]: 2025-10-02 09:02:11.968 2 DEBUG nova.compute.manager [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:02:11 np0005465604 nova_compute[260603]: 2025-10-02 09:02:11.969 2 DEBUG nova.compute.manager [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing instance network info cache due to event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:02:11 np0005465604 nova_compute[260603]: 2025-10-02 09:02:11.969 2 DEBUG oslo_concurrency.lockutils [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.042 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.909 2 DEBUG nova.network.neutron [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updating instance_info_cache with network_info: [{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.927 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.927 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Instance network_info: |[{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.928 2 DEBUG oslo_concurrency.lockutils [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.928 2 DEBUG nova.network.neutron [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.931 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Start _get_guest_xml network_info=[{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.935 2 WARNING nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.939 2 DEBUG nova.virt.libvirt.host [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.940 2 DEBUG nova.virt.libvirt.host [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.945 2 DEBUG nova.virt.libvirt.host [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.945 2 DEBUG nova.virt.libvirt.host [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.946 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.946 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.946 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.946 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.947 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.947 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.947 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.947 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.947 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.948 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.948 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.948 2 DEBUG nova.virt.hardware [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:02:12 np0005465604 nova_compute[260603]: 2025-10-02 09:02:12.951 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:02:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:02:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203845190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:02:13 np0005465604 nova_compute[260603]: 2025-10-02 09:02:13.426 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:02:13 np0005465604 nova_compute[260603]: 2025-10-02 09:02:13.445 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:02:13 np0005465604 nova_compute[260603]: 2025-10-02 09:02:13.451 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:02:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:02:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:02:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/714042361' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:02:13 np0005465604 nova_compute[260603]: 2025-10-02 09:02:13.983 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:02:13 np0005465604 nova_compute[260603]: 2025-10-02 09:02:13.985 2 DEBUG nova.virt.libvirt.vif [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:02:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141391792',display_name='tempest-TestNetworkBasicOps-server-2141391792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141391792',id=133,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKDp11F2lbrxxMl62/j7fHXkV6+flkvleRV3SlQtVIKr0IAxEI8xda+NGodZ4HaJd7EmZSJjX6G2CVi6Mz+byyGzwwm3n9HV8PIJ995e7gObIrbEd/QY1wwigSXqVllt8g==',key_name='tempest-TestNetworkBasicOps-1602850552',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-2y5fgdwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:02:07Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=3e3c7092-c580-477b-8596-4fd3b719e700,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:02:13 np0005465604 nova_compute[260603]: 2025-10-02 09:02:13.985 2 DEBUG nova.network.os_vif_util [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:02:13 np0005465604 nova_compute[260603]: 2025-10-02 09:02:13.986 2 DEBUG nova.network.os_vif_util [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:02:13 np0005465604 nova_compute[260603]: 2025-10-02 09:02:13.987 2 DEBUG nova.objects.instance [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e3c7092-c580-477b-8596-4fd3b719e700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.003 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  <uuid>3e3c7092-c580-477b-8596-4fd3b719e700</uuid>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  <name>instance-00000085</name>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-2141391792</nova:name>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:02:12</nova:creationTime>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <nova:port uuid="43ff7902-a749-4e8e-9c64-efd972acf1a7">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <entry name="serial">3e3c7092-c580-477b-8596-4fd3b719e700</entry>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <entry name="uuid">3e3c7092-c580-477b-8596-4fd3b719e700</entry>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/3e3c7092-c580-477b-8596-4fd3b719e700_disk">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/3e3c7092-c580-477b-8596-4fd3b719e700_disk.config">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:ad:49:2a"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <target dev="tap43ff7902-a7"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/console.log" append="off"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:02:14 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:02:14 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:02:14 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:02:14 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.004 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Preparing to wait for external event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.004 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.004 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.005 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.005 2 DEBUG nova.virt.libvirt.vif [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:02:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141391792',display_name='tempest-TestNetworkBasicOps-server-2141391792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141391792',id=133,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKDp11F2lbrxxMl62/j7fHXkV6+flkvleRV3SlQtVIKr0IAxEI8xda+NGodZ4HaJd7EmZSJjX6G2CVi6Mz+byyGzwwm3n9HV8PIJ995e7gObIrbEd/QY1wwigSXqVllt8g==',key_name='tempest-TestNetworkBasicOps-1602850552',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-2y5fgdwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:02:07Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=3e3c7092-c580-477b-8596-4fd3b719e700,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.005 2 DEBUG nova.network.os_vif_util [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.006 2 DEBUG nova.network.os_vif_util [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.006 2 DEBUG os_vif [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.007 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.008 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.012 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ff7902-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.012 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43ff7902-a7, col_values=(('external_ids', {'iface-id': '43ff7902-a749-4e8e-9c64-efd972acf1a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:49:2a', 'vm-uuid': '3e3c7092-c580-477b-8596-4fd3b719e700'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:14 np0005465604 NetworkManager[45129]: <info>  [1759395734.0157] manager: (tap43ff7902-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.022 2 INFO os_vif [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7')#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.072 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.072 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.072 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:ad:49:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.073 2 INFO nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Using config drive#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.094 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.563 2 INFO nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Creating config drive at /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.573 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkfusyop0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.638 2 DEBUG nova.network.neutron [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updated VIF entry in instance network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.640 2 DEBUG nova.network.neutron [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updating instance_info_cache with network_info: [{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.660 2 DEBUG oslo_concurrency.lockutils [req-927e9ec9-69a4-40b2-b42d-b1ff7f3a4db9 req-ced20d4e-f3d5-4bd4-84b0-dbf98e3c9442 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.750 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkfusyop0" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.792 2 DEBUG nova.storage.rbd_utils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 3e3c7092-c580-477b-8596-4fd3b719e700_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:02:14 np0005465604 nova_compute[260603]: 2025-10-02 09:02:14.798 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config 3e3c7092-c580-477b-8596-4fd3b719e700_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.001 2 DEBUG oslo_concurrency.processutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config 3e3c7092-c580-477b-8596-4fd3b719e700_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.003 2 INFO nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Deleting local config drive /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700/disk.config because it was imported into RBD.#033[00m
Oct  2 05:02:15 np0005465604 kernel: tap43ff7902-a7: entered promiscuous mode
Oct  2 05:02:15 np0005465604 NetworkManager[45129]: <info>  [1759395735.1037] manager: (tap43ff7902-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/574)
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:15 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:15Z|01433|binding|INFO|Claiming lport 43ff7902-a749-4e8e-9c64-efd972acf1a7 for this chassis.
Oct  2 05:02:15 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:15Z|01434|binding|INFO|43ff7902-a749-4e8e-9c64-efd972acf1a7: Claiming fa:16:3e:ad:49:2a 10.100.0.6
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.125 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:49:2a 10.100.0.6'], port_security=['fa:16:3e:ad:49:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e3c7092-c580-477b-8596-4fd3b719e700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87157236-5092-4eb4-a9f0-535aee31f502', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96e8905a-c47c-4bff-936a-cf4b9c9d02f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dca513d-9223-4241-809a-b7da8c65d0fb, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=43ff7902-a749-4e8e-9c64-efd972acf1a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.126 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 43ff7902-a749-4e8e-9c64-efd972acf1a7 in datapath 87157236-5092-4eb4-a9f0-535aee31f502 bound to our chassis#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.127 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87157236-5092-4eb4-a9f0-535aee31f502#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.143 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[31a3f820-a873-4118-b5b5-17013bb1fc76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.144 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87157236-51 in ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.148 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87157236-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.148 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3f1dc8-0729-43ab-ae7b-a2f155391d8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.149 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[64ff5bad-b3b0-4efe-8e47-fb47acfb3664]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.165 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1ca1b4-3d20-44b6-8fc8-8e419b6fdc48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 systemd-machined[214636]: New machine qemu-167-instance-00000085.
Oct  2 05:02:15 np0005465604 systemd[1]: Started Virtual Machine qemu-167-instance-00000085.
Oct  2 05:02:15 np0005465604 systemd-udevd[405587]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.206 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ff7801-2a7e-414b-a821-ed2f664a0abb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:15 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:15Z|01435|binding|INFO|Setting lport 43ff7902-a749-4e8e-9c64-efd972acf1a7 ovn-installed in OVS
Oct  2 05:02:15 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:15Z|01436|binding|INFO|Setting lport 43ff7902-a749-4e8e-9c64-efd972acf1a7 up in Southbound
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:15 np0005465604 NetworkManager[45129]: <info>  [1759395735.2318] device (tap43ff7902-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:02:15 np0005465604 NetworkManager[45129]: <info>  [1759395735.2355] device (tap43ff7902-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.254 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5dffd124-ec9f-42b5-8c52-74e559babd78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 NetworkManager[45129]: <info>  [1759395735.2640] manager: (tap87157236-50): new Veth device (/org/freedesktop/NetworkManager/Devices/575)
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.263 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a6473ce0-5a02-47d0-8767-5322f3c61297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.298 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[337c7674-2b0d-4af1-b4b7-a680e38cfd79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.300 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d05542e-48f6-474a-9d1d-15091760c7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 NetworkManager[45129]: <info>  [1759395735.3191] device (tap87157236-50): carrier: link connected
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.324 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[aafbb723-f388-43a9-a136-c54c1d32db23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.339 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[93b667a1-c4dc-473b-9efa-c18561a47bda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87157236-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:3c:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661392, 'reachable_time': 26033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405617, 'error': None, 'target': 'ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.353 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4cb069-fe4f-44e2-a045-2e781445e653]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:3c41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661392, 'tstamp': 661392}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405618, 'error': None, 'target': 'ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.368 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a47492-984a-45cc-b7ff-b787d1d7b049]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87157236-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:3c:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661392, 'reachable_time': 26033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405619, 'error': None, 'target': 'ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.393 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eff6b800-e905-4ea9-b30e-6aa4aac41359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.447 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc4add9-d8c7-43b9-979c-28e7d4ca9f1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.449 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87157236-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.450 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.451 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87157236-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:15 np0005465604 NetworkManager[45129]: <info>  [1759395735.4546] manager: (tap87157236-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/576)
Oct  2 05:02:15 np0005465604 kernel: tap87157236-50: entered promiscuous mode
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.456 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87157236-50, col_values=(('external_ids', {'iface-id': '9693941d-7c4e-431b-8552-b9f105de9d56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:02:15 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:15Z|01437|binding|INFO|Releasing lport 9693941d-7c4e-431b-8552-b9f105de9d56 from this chassis (sb_readonly=0)
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.465 2 DEBUG nova.compute.manager [req-8c95d0a7-3d91-4722-8d97-19468a05844c req-97d4487e-6286-4384-bb00-552618938ff1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.466 2 DEBUG oslo_concurrency.lockutils [req-8c95d0a7-3d91-4722-8d97-19468a05844c req-97d4487e-6286-4384-bb00-552618938ff1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.467 2 DEBUG oslo_concurrency.lockutils [req-8c95d0a7-3d91-4722-8d97-19468a05844c req-97d4487e-6286-4384-bb00-552618938ff1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.467 2 DEBUG oslo_concurrency.lockutils [req-8c95d0a7-3d91-4722-8d97-19468a05844c req-97d4487e-6286-4384-bb00-552618938ff1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.467 2 DEBUG nova.compute.manager [req-8c95d0a7-3d91-4722-8d97-19468a05844c req-97d4487e-6286-4384-bb00-552618938ff1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Processing event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.471 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87157236-5092-4eb4-a9f0-535aee31f502.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87157236-5092-4eb4-a9f0-535aee31f502.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.472 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[547a39e0-197a-440a-8084-2a91df011ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.473 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-87157236-5092-4eb4-a9f0-535aee31f502
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/87157236-5092-4eb4-a9f0-535aee31f502.pid.haproxy
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 87157236-5092-4eb4-a9f0-535aee31f502
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:02:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:15.473 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502', 'env', 'PROCESS_TAG=haproxy-87157236-5092-4eb4-a9f0-535aee31f502', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87157236-5092-4eb4-a9f0-535aee31f502.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:02:15 np0005465604 nova_compute[260603]: 2025-10-02 09:02:15.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:15 np0005465604 podman[405692]: 2025-10-02 09:02:15.838431763 +0000 UTC m=+0.063942458 container create ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:02:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:02:15 np0005465604 systemd[1]: Started libpod-conmon-ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e.scope.
Oct  2 05:02:15 np0005465604 podman[405692]: 2025-10-02 09:02:15.79974458 +0000 UTC m=+0.025255295 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:02:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:02:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e266c57055b81fe307404eb21b2013f29455cb3246668f7032d1fc218411294e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:15 np0005465604 podman[405692]: 2025-10-02 09:02:15.937271954 +0000 UTC m=+0.162782710 container init ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 05:02:15 np0005465604 podman[405692]: 2025-10-02 09:02:15.942073464 +0000 UTC m=+0.167584179 container start ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:02:15 np0005465604 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [NOTICE]   (405712) : New worker (405714) forked
Oct  2 05:02:15 np0005465604 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [NOTICE]   (405712) : Loading success.
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.205 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395736.204791, 3e3c7092-c580-477b-8596-4fd3b719e700 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.206 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] VM Started (Lifecycle Event)#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.208 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.213 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.216 2 INFO nova.virt.libvirt.driver [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Instance spawned successfully.#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.217 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.261 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.267 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.268 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.269 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.269 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.270 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.270 2 DEBUG nova.virt.libvirt.driver [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.276 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.316 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.317 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395736.2055516, 3e3c7092-c580-477b-8596-4fd3b719e700 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.317 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.347 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.350 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395736.210921, 3e3c7092-c580-477b-8596-4fd3b719e700 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.351 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.355 2 INFO nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Took 8.42 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.356 2 DEBUG nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.366 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.369 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.397 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.417 2 INFO nova.compute.manager [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Took 9.35 seconds to build instance.#033[00m
Oct  2 05:02:16 np0005465604 nova_compute[260603]: 2025-10-02 09:02:16.439 2 DEBUG oslo_concurrency.lockutils [None req-48f88603-e7a7-4222-a207-acd17adee6cc ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:17 np0005465604 nova_compute[260603]: 2025-10-02 09:02:17.591 2 DEBUG nova.compute.manager [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:02:17 np0005465604 nova_compute[260603]: 2025-10-02 09:02:17.592 2 DEBUG oslo_concurrency.lockutils [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:17 np0005465604 nova_compute[260603]: 2025-10-02 09:02:17.593 2 DEBUG oslo_concurrency.lockutils [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:17 np0005465604 nova_compute[260603]: 2025-10-02 09:02:17.593 2 DEBUG oslo_concurrency.lockutils [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:17 np0005465604 nova_compute[260603]: 2025-10-02 09:02:17.593 2 DEBUG nova.compute.manager [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] No waiting events found dispatching network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:02:17 np0005465604 nova_compute[260603]: 2025-10-02 09:02:17.594 2 WARNING nova.compute.manager [req-11ce8d95-b716-4c0c-88cd-29760224f27a req-d79fbb30-52e1-406c-b186-b354e2afe713 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received unexpected event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:02:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct  2 05:02:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:19 np0005465604 nova_compute[260603]: 2025-10-02 09:02:19.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 05:02:19 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:19Z|01438|binding|INFO|Releasing lport 9693941d-7c4e-431b-8552-b9f105de9d56 from this chassis (sb_readonly=0)
Oct  2 05:02:19 np0005465604 nova_compute[260603]: 2025-10-02 09:02:19.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:20 np0005465604 NetworkManager[45129]: <info>  [1759395739.9991] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/577)
Oct  2 05:02:20 np0005465604 NetworkManager[45129]: <info>  [1759395739.9999] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/578)
Oct  2 05:02:20 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:20Z|01439|binding|INFO|Releasing lport 9693941d-7c4e-431b-8552-b9f105de9d56 from this chassis (sb_readonly=0)
Oct  2 05:02:20 np0005465604 nova_compute[260603]: 2025-10-02 09:02:20.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:20 np0005465604 nova_compute[260603]: 2025-10-02 09:02:20.201 2 DEBUG nova.compute.manager [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:02:20 np0005465604 nova_compute[260603]: 2025-10-02 09:02:20.201 2 DEBUG nova.compute.manager [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing instance network info cache due to event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:02:20 np0005465604 nova_compute[260603]: 2025-10-02 09:02:20.202 2 DEBUG oslo_concurrency.lockutils [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:02:20 np0005465604 nova_compute[260603]: 2025-10-02 09:02:20.202 2 DEBUG oslo_concurrency.lockutils [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:02:20 np0005465604 nova_compute[260603]: 2025-10-02 09:02:20.202 2 DEBUG nova.network.neutron [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:02:20 np0005465604 nova_compute[260603]: 2025-10-02 09:02:20.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:21 np0005465604 nova_compute[260603]: 2025-10-02 09:02:21.469 2 DEBUG nova.network.neutron [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updated VIF entry in instance network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:02:21 np0005465604 nova_compute[260603]: 2025-10-02 09:02:21.469 2 DEBUG nova.network.neutron [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updating instance_info_cache with network_info: [{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:02:21 np0005465604 nova_compute[260603]: 2025-10-02 09:02:21.493 2 DEBUG oslo_concurrency.lockutils [req-959340f7-2621-420c-a9c8-8b586aa24aa4 req-8d9363e1-c975-4857-8379-447c20912b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:02:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct  2 05:02:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:02:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/130922234' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:02:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:02:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/130922234' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:02:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct  2 05:02:24 np0005465604 nova_compute[260603]: 2025-10-02 09:02:24.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:25 np0005465604 nova_compute[260603]: 2025-10-02 09:02:25.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 88 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:02:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 106 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 114 op/s
Oct  2 05:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:02:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:02:28
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['images', 'backups', 'vms', 'volumes', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log']
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:02:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:28 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:28Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ad:49:2a 10.100.0.6
Oct  2 05:02:28 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:28Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:49:2a 10.100.0.6
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:02:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:02:29 np0005465604 nova_compute[260603]: 2025-10-02 09:02:29.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 118 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 05:02:30 np0005465604 nova_compute[260603]: 2025-10-02 09:02:30.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 118 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct  2 05:02:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 05:02:34 np0005465604 nova_compute[260603]: 2025-10-02 09:02:34.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:34.841 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:34.843 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:34.843 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:35 np0005465604 podman[405725]: 2025-10-02 09:02:35.01081942 +0000 UTC m=+0.053421271 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 05:02:35 np0005465604 podman[405724]: 2025-10-02 09:02:35.041773732 +0000 UTC m=+0.097121229 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:02:35 np0005465604 nova_compute[260603]: 2025-10-02 09:02:35.207 2 INFO nova.compute.manager [None req-4f446e10-c0be-4a7b-839f-ea5ebc58a922 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Get console output#033[00m
Oct  2 05:02:35 np0005465604 nova_compute[260603]: 2025-10-02 09:02:35.212 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 05:02:35 np0005465604 nova_compute[260603]: 2025-10-02 09:02:35.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:02:35 np0005465604 nova_compute[260603]: 2025-10-02 09:02:35.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:02:35 np0005465604 nova_compute[260603]: 2025-10-02 09:02:35.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 05:02:37 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:37Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:49:2a 10.100.0.6
Oct  2 05:02:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 05:02:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007587643257146578 of space, bias 1.0, pg target 0.22762929771439736 quantized to 32 (current 32)
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:02:39 np0005465604 nova_compute[260603]: 2025-10-02 09:02:39.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 480 KiB/s wr, 22 op/s
Oct  2 05:02:40 np0005465604 podman[405769]: 2025-10-02 09:02:40.006626599 +0000 UTC m=+0.059237011 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 05:02:40 np0005465604 podman[405768]: 2025-10-02 09:02:40.039717738 +0000 UTC m=+0.088232313 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  2 05:02:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:40Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:49:2a 10.100.0.6
Oct  2 05:02:40 np0005465604 nova_compute[260603]: 2025-10-02 09:02:40.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:40.706 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:02:40 np0005465604 nova_compute[260603]: 2025-10-02 09:02:40.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:40.708 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.026 2 DEBUG nova.compute.manager [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.027 2 DEBUG nova.compute.manager [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing instance network info cache due to event network-changed-43ff7902-a749-4e8e-9c64-efd972acf1a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.027 2 DEBUG oslo_concurrency.lockutils [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.027 2 DEBUG oslo_concurrency.lockutils [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.027 2 DEBUG nova.network.neutron [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Refreshing network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.115 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.116 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.116 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.116 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.116 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.117 2 INFO nova.compute.manager [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Terminating instance#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.118 2 DEBUG nova.compute.manager [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:02:41 np0005465604 kernel: tap43ff7902-a7 (unregistering): left promiscuous mode
Oct  2 05:02:41 np0005465604 NetworkManager[45129]: <info>  [1759395761.1676] device (tap43ff7902-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:02:41 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:41Z|01440|binding|INFO|Releasing lport 43ff7902-a749-4e8e-9c64-efd972acf1a7 from this chassis (sb_readonly=0)
Oct  2 05:02:41 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:41Z|01441|binding|INFO|Setting lport 43ff7902-a749-4e8e-9c64-efd972acf1a7 down in Southbound
Oct  2 05:02:41 np0005465604 ovn_controller[152344]: 2025-10-02T09:02:41Z|01442|binding|INFO|Removing iface tap43ff7902-a7 ovn-installed in OVS
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.196 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:49:2a 10.100.0.6'], port_security=['fa:16:3e:ad:49:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e3c7092-c580-477b-8596-4fd3b719e700', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87157236-5092-4eb4-a9f0-535aee31f502', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96e8905a-c47c-4bff-936a-cf4b9c9d02f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5dca513d-9223-4241-809a-b7da8c65d0fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=43ff7902-a749-4e8e-9c64-efd972acf1a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.197 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 43ff7902-a749-4e8e-9c64-efd972acf1a7 in datapath 87157236-5092-4eb4-a9f0-535aee31f502 unbound from our chassis#033[00m
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.198 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87157236-5092-4eb4-a9f0-535aee31f502, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.199 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5a1688-393d-46b4-ac72-9c9055a544e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.199 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502 namespace which is not needed anymore#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:41 np0005465604 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Deactivated successfully.
Oct  2 05:02:41 np0005465604 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Consumed 12.599s CPU time.
Oct  2 05:02:41 np0005465604 systemd-machined[214636]: Machine qemu-167-instance-00000085 terminated.
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:41 np0005465604 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [NOTICE]   (405712) : haproxy version is 2.8.14-c23fe91
Oct  2 05:02:41 np0005465604 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [NOTICE]   (405712) : path to executable is /usr/sbin/haproxy
Oct  2 05:02:41 np0005465604 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [WARNING]  (405712) : Exiting Master process...
Oct  2 05:02:41 np0005465604 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [WARNING]  (405712) : Exiting Master process...
Oct  2 05:02:41 np0005465604 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [ALERT]    (405712) : Current worker (405714) exited with code 143 (Terminated)
Oct  2 05:02:41 np0005465604 neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502[405708]: [WARNING]  (405712) : All workers exited. Exiting... (0)
Oct  2 05:02:41 np0005465604 systemd[1]: libpod-ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e.scope: Deactivated successfully.
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.359 2 INFO nova.virt.libvirt.driver [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Instance destroyed successfully.#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.360 2 DEBUG nova.objects.instance [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 3e3c7092-c580-477b-8596-4fd3b719e700 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:02:41 np0005465604 podman[405949]: 2025-10-02 09:02:41.365226939 +0000 UTC m=+0.061766441 container died ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.375 2 DEBUG nova.virt.libvirt.vif [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:02:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2141391792',display_name='tempest-TestNetworkBasicOps-server-2141391792',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2141391792',id=133,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKDp11F2lbrxxMl62/j7fHXkV6+flkvleRV3SlQtVIKr0IAxEI8xda+NGodZ4HaJd7EmZSJjX6G2CVi6Mz+byyGzwwm3n9HV8PIJ995e7gObIrbEd/QY1wwigSXqVllt8g==',key_name='tempest-TestNetworkBasicOps-1602850552',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:02:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-2y5fgdwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:02:16Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=3e3c7092-c580-477b-8596-4fd3b719e700,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.375 2 DEBUG nova.network.os_vif_util [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.376 2 DEBUG nova.network.os_vif_util [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.376 2 DEBUG os_vif [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.378 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ff7902-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.383 2 INFO os_vif [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ad:49:2a,bridge_name='br-int',has_traffic_filtering=True,id=43ff7902-a749-4e8e-9c64-efd972acf1a7,network=Network(87157236-5092-4eb4-a9f0-535aee31f502),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ff7902-a7')#033[00m
Oct  2 05:02:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e-userdata-shm.mount: Deactivated successfully.
Oct  2 05:02:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e266c57055b81fe307404eb21b2013f29455cb3246668f7032d1fc218411294e-merged.mount: Deactivated successfully.
Oct  2 05:02:41 np0005465604 podman[405949]: 2025-10-02 09:02:41.438430323 +0000 UTC m=+0.134969825 container cleanup ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 05:02:41 np0005465604 systemd[1]: libpod-conmon-ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e.scope: Deactivated successfully.
Oct  2 05:02:41 np0005465604 podman[406010]: 2025-10-02 09:02:41.509901644 +0000 UTC m=+0.040514910 container remove ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.529 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bab0c7ba-1290-4965-a4ae-ed0ee0bf4a65]: (4, ('Thu Oct  2 09:02:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502 (ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e)\nea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e\nThu Oct  2 09:02:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502 (ea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e)\nea9d86c7c06a0c01418ce3f746c627f99ace99c2973d4741a3e05fbf14fe111e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.534 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ae72a6-56da-428d-b775-96d6bf809cb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.537 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87157236-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:41 np0005465604 kernel: tap87157236-50: left promiscuous mode
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.562 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62975d89-a539-4c0c-8bc6-97ef34400421]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.589 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[33ab6a3f-948e-4d74-96f9-d53de015e3f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.592 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29fef401-8dc6-476b-a242-3317b4776925]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:02:41 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8771f0df-d64d-46e3-8839-72e177e77685 does not exist
Oct  2 05:02:41 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9eb2f582-8ac8-4d90-9148-9ead670ad731 does not exist
Oct  2 05:02:41 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 983ff226-60d0-485f-a12d-8af2b57c3e5c does not exist
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.614 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[598cec1b-faaf-4086-98d9-b23edef69119]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661384, 'reachable_time': 33485, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406035, 'error': None, 'target': 'ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.616 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87157236-5092-4eb4-a9f0-535aee31f502 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.616 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6f23d3-65a2-4788-8ce4-dee1df818590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:02:41 np0005465604 systemd[1]: run-netns-ovnmeta\x2d87157236\x2d5092\x2d4eb4\x2da9f0\x2d535aee31f502.mount: Deactivated successfully.
Oct  2 05:02:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:02:41.709 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.730 2 INFO nova.virt.libvirt.driver [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Deleting instance files /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700_del#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.730 2 INFO nova.virt.libvirt.driver [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Deletion of /var/lib/nova/instances/3e3c7092-c580-477b-8596-4fd3b719e700_del complete#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.784 2 INFO nova.compute.manager [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.784 2 DEBUG oslo.service.loopingcall [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.785 2 DEBUG nova.compute.manager [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:02:41 np0005465604 nova_compute[260603]: 2025-10-02 09:02:41.785 2 DEBUG nova.network.neutron [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:02:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 121 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 25 KiB/s wr, 9 op/s
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:02:41 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:02:42 np0005465604 podman[406171]: 2025-10-02 09:02:42.222133418 +0000 UTC m=+0.060791310 container create 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 05:02:42 np0005465604 systemd[1]: Started libpod-conmon-1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca.scope.
Oct  2 05:02:42 np0005465604 podman[406171]: 2025-10-02 09:02:42.190542306 +0000 UTC m=+0.029200288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:02:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:02:42 np0005465604 podman[406171]: 2025-10-02 09:02:42.313167667 +0000 UTC m=+0.151825589 container init 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 05:02:42 np0005465604 podman[406171]: 2025-10-02 09:02:42.318996618 +0000 UTC m=+0.157654510 container start 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 05:02:42 np0005465604 podman[406171]: 2025-10-02 09:02:42.321953089 +0000 UTC m=+0.160611011 container attach 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct  2 05:02:42 np0005465604 jovial_khorana[406187]: 167 167
Oct  2 05:02:42 np0005465604 systemd[1]: libpod-1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca.scope: Deactivated successfully.
Oct  2 05:02:42 np0005465604 podman[406171]: 2025-10-02 09:02:42.327877503 +0000 UTC m=+0.166535395 container died 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:02:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b1edc8c5b8c9e3be7f7a230f507eef2fcce2f1286c12005e51807c6a39ac3b06-merged.mount: Deactivated successfully.
Oct  2 05:02:42 np0005465604 nova_compute[260603]: 2025-10-02 09:02:42.364 2 DEBUG nova.network.neutron [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updated VIF entry in instance network info cache for port 43ff7902-a749-4e8e-9c64-efd972acf1a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:02:42 np0005465604 nova_compute[260603]: 2025-10-02 09:02:42.365 2 DEBUG nova.network.neutron [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updating instance_info_cache with network_info: [{"id": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "address": "fa:16:3e:ad:49:2a", "network": {"id": "87157236-5092-4eb4-a9f0-535aee31f502", "bridge": "br-int", "label": "tempest-network-smoke--505999966", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ff7902-a7", "ovs_interfaceid": "43ff7902-a749-4e8e-9c64-efd972acf1a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:02:42 np0005465604 podman[406171]: 2025-10-02 09:02:42.372796329 +0000 UTC m=+0.211454281 container remove 1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:02:42 np0005465604 nova_compute[260603]: 2025-10-02 09:02:42.384 2 DEBUG oslo_concurrency.lockutils [req-59aec11f-5153-4bb8-bf82-f9ade412cfd3 req-174a49aa-da6b-4164-ba4e-d059946f9065 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3e3c7092-c580-477b-8596-4fd3b719e700" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:02:42 np0005465604 systemd[1]: libpod-conmon-1776fef362dd29b546fed267d9a145f2dee10adc5cf30f049cd5b90546d3f6ca.scope: Deactivated successfully.
Oct  2 05:02:42 np0005465604 nova_compute[260603]: 2025-10-02 09:02:42.522 2 DEBUG nova.network.neutron [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:02:42 np0005465604 nova_compute[260603]: 2025-10-02 09:02:42.543 2 INFO nova.compute.manager [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Took 0.76 seconds to deallocate network for instance.#033[00m
Oct  2 05:02:42 np0005465604 podman[406211]: 2025-10-02 09:02:42.556053114 +0000 UTC m=+0.050574982 container create 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:02:42 np0005465604 systemd[1]: Started libpod-conmon-56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17.scope.
Oct  2 05:02:42 np0005465604 nova_compute[260603]: 2025-10-02 09:02:42.591 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:42 np0005465604 nova_compute[260603]: 2025-10-02 09:02:42.592 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:02:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:42 np0005465604 nova_compute[260603]: 2025-10-02 09:02:42.610 2 DEBUG nova.compute.manager [req-60f82552-0747-4868-8545-a2318ecb20ff req-7a8da71b-d40f-4e24-ad68-ee8ea978b0c2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-vif-deleted-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:02:42 np0005465604 podman[406211]: 2025-10-02 09:02:42.623623564 +0000 UTC m=+0.118145462 container init 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 05:02:42 np0005465604 podman[406211]: 2025-10-02 09:02:42.528537159 +0000 UTC m=+0.023059087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:02:42 np0005465604 podman[406211]: 2025-10-02 09:02:42.632721427 +0000 UTC m=+0.127243315 container start 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:02:42 np0005465604 podman[406211]: 2025-10-02 09:02:42.63764226 +0000 UTC m=+0.132164138 container attach 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:02:42 np0005465604 nova_compute[260603]: 2025-10-02 09:02:42.671 2 DEBUG oslo_concurrency.processutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:02:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:02:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552772987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.100 2 DEBUG oslo_concurrency.processutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.107 2 DEBUG nova.compute.provider_tree [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.116 2 DEBUG nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-vif-unplugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.117 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.117 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.117 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.117 2 DEBUG nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] No waiting events found dispatching network-vif-unplugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.117 2 WARNING nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received unexpected event network-vif-unplugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.117 2 DEBUG nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.118 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.118 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.118 2 DEBUG oslo_concurrency.lockutils [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.118 2 DEBUG nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] No waiting events found dispatching network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.118 2 WARNING nova.compute.manager [req-75861f07-6989-4451-82e5-17aadaf0c6f7 req-b5375deb-c92a-4067-a76e-45bb29d92f74 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Received unexpected event network-vif-plugged-43ff7902-a749-4e8e-9c64-efd972acf1a7 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.124 2 DEBUG nova.scheduler.client.report [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:02:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.150 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.174 2 INFO nova.scheduler.client.report [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 3e3c7092-c580-477b-8596-4fd3b719e700#033[00m
Oct  2 05:02:43 np0005465604 nova_compute[260603]: 2025-10-02 09:02:43.240 2 DEBUG oslo_concurrency.lockutils [None req-01e9b732-9b12-49d9-9e77-def04bfb2b8f ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "3e3c7092-c580-477b-8596-4fd3b719e700" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:43 np0005465604 trusting_mendeleev[406228]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:02:43 np0005465604 trusting_mendeleev[406228]: --> relative data size: 1.0
Oct  2 05:02:43 np0005465604 trusting_mendeleev[406228]: --> All data devices are unavailable
Oct  2 05:02:43 np0005465604 systemd[1]: libpod-56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17.scope: Deactivated successfully.
Oct  2 05:02:43 np0005465604 systemd[1]: libpod-56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17.scope: Consumed 1.006s CPU time.
Oct  2 05:02:43 np0005465604 podman[406211]: 2025-10-02 09:02:43.694897665 +0000 UTC m=+1.189419573 container died 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 05:02:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-26f759079d289600ffffef1376c8720362708cc1d819aa84049580b59b737f21-merged.mount: Deactivated successfully.
Oct  2 05:02:43 np0005465604 podman[406211]: 2025-10-02 09:02:43.766264733 +0000 UTC m=+1.260786621 container remove 56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Oct  2 05:02:43 np0005465604 systemd[1]: libpod-conmon-56cfec41b9770d1300ffcb3d7c66f72ef60a5de5b1ac2f483486a87b5741bf17.scope: Deactivated successfully.
Oct  2 05:02:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 32 KiB/s wr, 37 op/s
Oct  2 05:02:44 np0005465604 podman[406433]: 2025-10-02 09:02:44.636481845 +0000 UTC m=+0.084531938 container create 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 05:02:44 np0005465604 systemd[1]: Started libpod-conmon-33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69.scope.
Oct  2 05:02:44 np0005465604 podman[406433]: 2025-10-02 09:02:44.598868986 +0000 UTC m=+0.046919139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:02:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:02:44 np0005465604 podman[406433]: 2025-10-02 09:02:44.753988997 +0000 UTC m=+0.202039160 container init 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 05:02:44 np0005465604 podman[406433]: 2025-10-02 09:02:44.766119874 +0000 UTC m=+0.214169927 container start 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:02:44 np0005465604 podman[406433]: 2025-10-02 09:02:44.773688349 +0000 UTC m=+0.221738502 container attach 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:02:44 np0005465604 cool_mcnulty[406449]: 167 167
Oct  2 05:02:44 np0005465604 systemd[1]: libpod-33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69.scope: Deactivated successfully.
Oct  2 05:02:44 np0005465604 podman[406433]: 2025-10-02 09:02:44.777102335 +0000 UTC m=+0.225152398 container died 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:02:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-538ac7d8497c896b51df05aaf530a38102c7a60acc8b34691cce52b6e5b90a5f-merged.mount: Deactivated successfully.
Oct  2 05:02:44 np0005465604 podman[406433]: 2025-10-02 09:02:44.840818025 +0000 UTC m=+0.288868118 container remove 33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mcnulty, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 05:02:44 np0005465604 systemd[1]: libpod-conmon-33483a852eb93adefa1ea1401f957fbd7d2c41f75e31e39acae9c6b758463a69.scope: Deactivated successfully.
Oct  2 05:02:45 np0005465604 podman[406473]: 2025-10-02 09:02:45.11451469 +0000 UTC m=+0.076675143 container create 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 05:02:45 np0005465604 systemd[1]: Started libpod-conmon-6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559.scope.
Oct  2 05:02:45 np0005465604 podman[406473]: 2025-10-02 09:02:45.083867539 +0000 UTC m=+0.046028022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:02:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:02:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffe4ae17ec74ccddfd1a9433f3e49931b806b2d05d3a515dcf9c832b290bc95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffe4ae17ec74ccddfd1a9433f3e49931b806b2d05d3a515dcf9c832b290bc95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffe4ae17ec74ccddfd1a9433f3e49931b806b2d05d3a515dcf9c832b290bc95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffe4ae17ec74ccddfd1a9433f3e49931b806b2d05d3a515dcf9c832b290bc95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:45 np0005465604 podman[406473]: 2025-10-02 09:02:45.212465914 +0000 UTC m=+0.174626387 container init 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct  2 05:02:45 np0005465604 podman[406473]: 2025-10-02 09:02:45.220117312 +0000 UTC m=+0.182277755 container start 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:02:45 np0005465604 podman[406473]: 2025-10-02 09:02:45.230149904 +0000 UTC m=+0.192310367 container attach 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 05:02:45 np0005465604 nova_compute[260603]: 2025-10-02 09:02:45.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 18 KiB/s wr, 28 op/s
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]: {
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:    "0": [
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:        {
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "devices": [
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "/dev/loop3"
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            ],
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_name": "ceph_lv0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_size": "21470642176",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "name": "ceph_lv0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "tags": {
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.cluster_name": "ceph",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.crush_device_class": "",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.encrypted": "0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.osd_id": "0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.type": "block",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.vdo": "0"
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            },
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "type": "block",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "vg_name": "ceph_vg0"
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:        }
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:    ],
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:    "1": [
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:        {
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "devices": [
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "/dev/loop4"
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            ],
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_name": "ceph_lv1",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_size": "21470642176",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "name": "ceph_lv1",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "tags": {
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.cluster_name": "ceph",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.crush_device_class": "",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.encrypted": "0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.osd_id": "1",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.type": "block",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.vdo": "0"
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            },
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "type": "block",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "vg_name": "ceph_vg1"
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:        }
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:    ],
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:    "2": [
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:        {
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "devices": [
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "/dev/loop5"
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            ],
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_name": "ceph_lv2",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_size": "21470642176",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "name": "ceph_lv2",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "tags": {
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.cluster_name": "ceph",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.crush_device_class": "",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.encrypted": "0",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.osd_id": "2",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.type": "block",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:                "ceph.vdo": "0"
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            },
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "type": "block",
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:            "vg_name": "ceph_vg2"
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:        }
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]:    ]
Oct  2 05:02:45 np0005465604 suspicious_brattain[406490]: }
Oct  2 05:02:45 np0005465604 systemd[1]: libpod-6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559.scope: Deactivated successfully.
Oct  2 05:02:45 np0005465604 podman[406473]: 2025-10-02 09:02:45.960353175 +0000 UTC m=+0.922513618 container died 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:02:45 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5ffe4ae17ec74ccddfd1a9433f3e49931b806b2d05d3a515dcf9c832b290bc95-merged.mount: Deactivated successfully.
Oct  2 05:02:46 np0005465604 podman[406473]: 2025-10-02 09:02:46.039696151 +0000 UTC m=+1.001856604 container remove 6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 05:02:46 np0005465604 systemd[1]: libpod-conmon-6bf8e41581511dfdc7f31fb9da811fe931861b519171d7070392092168430559.scope: Deactivated successfully.
Oct  2 05:02:46 np0005465604 nova_compute[260603]: 2025-10-02 09:02:46.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:02:46 np0005465604 nova_compute[260603]: 2025-10-02 09:02:46.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:02:46 np0005465604 nova_compute[260603]: 2025-10-02 09:02:46.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:02:46 np0005465604 podman[406653]: 2025-10-02 09:02:46.769780419 +0000 UTC m=+0.053660718 container create 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 05:02:46 np0005465604 systemd[1]: Started libpod-conmon-1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438.scope.
Oct  2 05:02:46 np0005465604 podman[406653]: 2025-10-02 09:02:46.743335227 +0000 UTC m=+0.027215576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:02:46 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:02:46 np0005465604 podman[406653]: 2025-10-02 09:02:46.856987899 +0000 UTC m=+0.140868218 container init 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct  2 05:02:46 np0005465604 podman[406653]: 2025-10-02 09:02:46.86665755 +0000 UTC m=+0.150537849 container start 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 05:02:46 np0005465604 podman[406653]: 2025-10-02 09:02:46.869856439 +0000 UTC m=+0.153736758 container attach 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 05:02:46 np0005465604 epic_brown[406669]: 167 167
Oct  2 05:02:46 np0005465604 systemd[1]: libpod-1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438.scope: Deactivated successfully.
Oct  2 05:02:46 np0005465604 conmon[406669]: conmon 1198e98236c38bccc9fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438.scope/container/memory.events
Oct  2 05:02:46 np0005465604 podman[406653]: 2025-10-02 09:02:46.874233805 +0000 UTC m=+0.158114074 container died 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 05:02:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-64edf0f366bfef4201bada809d6f03329668547195fa5e940555f50a232c0e7a-merged.mount: Deactivated successfully.
Oct  2 05:02:46 np0005465604 podman[406653]: 2025-10-02 09:02:46.920238074 +0000 UTC m=+0.204118373 container remove 1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct  2 05:02:46 np0005465604 systemd[1]: libpod-conmon-1198e98236c38bccc9fe03df4825c87ce49cb4d4bd805af59a8751b21a55b438.scope: Deactivated successfully.
Oct  2 05:02:47 np0005465604 podman[406694]: 2025-10-02 09:02:47.148659073 +0000 UTC m=+0.063238986 container create 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 05:02:47 np0005465604 systemd[1]: Started libpod-conmon-74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460.scope.
Oct  2 05:02:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:02:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1e5f676dda206acfa672be8c59e25b13c14c98378202839698830982d3c61f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:47 np0005465604 podman[406694]: 2025-10-02 09:02:47.123673016 +0000 UTC m=+0.038253059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:02:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1e5f676dda206acfa672be8c59e25b13c14c98378202839698830982d3c61f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1e5f676dda206acfa672be8c59e25b13c14c98378202839698830982d3c61f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1e5f676dda206acfa672be8c59e25b13c14c98378202839698830982d3c61f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:02:47 np0005465604 podman[406694]: 2025-10-02 09:02:47.230722304 +0000 UTC m=+0.145302217 container init 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:02:47 np0005465604 podman[406694]: 2025-10-02 09:02:47.254960847 +0000 UTC m=+0.169540730 container start 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:02:47 np0005465604 podman[406694]: 2025-10-02 09:02:47.258550658 +0000 UTC m=+0.173130621 container attach 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 05:02:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 18 KiB/s wr, 28 op/s
Oct  2 05:02:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]: {
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "osd_id": 2,
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "type": "bluestore"
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:    },
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "osd_id": 1,
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "type": "bluestore"
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:    },
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "osd_id": 0,
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:        "type": "bluestore"
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]:    }
Oct  2 05:02:48 np0005465604 reverent_jackson[406712]: }
Oct  2 05:02:48 np0005465604 systemd[1]: libpod-74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460.scope: Deactivated successfully.
Oct  2 05:02:48 np0005465604 conmon[406712]: conmon 74f76364917a7f523133 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460.scope/container/memory.events
Oct  2 05:02:48 np0005465604 podman[406694]: 2025-10-02 09:02:48.225074984 +0000 UTC m=+1.139654867 container died 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:02:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3b1e5f676dda206acfa672be8c59e25b13c14c98378202839698830982d3c61f-merged.mount: Deactivated successfully.
Oct  2 05:02:48 np0005465604 podman[406694]: 2025-10-02 09:02:48.279837895 +0000 UTC m=+1.194417768 container remove 74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:02:48 np0005465604 systemd[1]: libpod-conmon-74f76364917a7f5231336b46e6cd0efd6fc484b562dcc8261af3f29a23e04460.scope: Deactivated successfully.
Oct  2 05:02:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:02:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:02:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:02:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:02:48 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 637675f4-f21b-4e1e-be77-380d6b7ffa31 does not exist
Oct  2 05:02:48 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ee515401-6d49-4c0c-bd52-ac81da1fbcf7 does not exist
Oct  2 05:02:48 np0005465604 nova_compute[260603]: 2025-10-02 09:02:48.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:02:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:02:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:02:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 27 op/s
Oct  2 05:02:50 np0005465604 nova_compute[260603]: 2025-10-02 09:02:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:02:50 np0005465604 nova_compute[260603]: 2025-10-02 09:02:50.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 05:02:50 np0005465604 nova_compute[260603]: 2025-10-02 09:02:50.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 05:02:50 np0005465604 nova_compute[260603]: 2025-10-02 09:02:50.546 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 05:02:50 np0005465604 nova_compute[260603]: 2025-10-02 09:02:50.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:02:51 np0005465604 nova_compute[260603]: 2025-10-02 09:02:51.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:02:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 27 op/s
Oct  2 05:02:52 np0005465604 nova_compute[260603]: 2025-10-02 09:02:52.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:02:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 27 op/s
Oct  2 05:02:55 np0005465604 nova_compute[260603]: 2025-10-02 09:02:55.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.354 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395761.353559, 3e3c7092-c580-477b-8596-4fd3b719e700 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.355 2 INFO nova.compute.manager [-] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.378 2 DEBUG nova.compute.manager [None req-6f52d5c5-aedd-483e-ade1-22bc85ef95ff - - - - - -] [instance: 3e3c7092-c580-477b-8596-4fd3b719e700] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.555 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.555 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:02:56 np0005465604 nova_compute[260603]: 2025-10-02 09:02:56.556 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:02:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:02:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292687566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.025 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.208 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.209 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3633MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.209 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.209 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.314 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.315 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.344 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:02:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:02:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1718814926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.851 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.858 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.878 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:02:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.909 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:02:57 np0005465604 nova_compute[260603]: 2025-10-02 09:02:57.910 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:02:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:02:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:02:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:02:59 np0005465604 nova_compute[260603]: 2025-10-02 09:02:59.905 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:03:00 np0005465604 nova_compute[260603]: 2025-10-02 09:03:00.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:01 np0005465604 nova_compute[260603]: 2025-10-02 09:03:01.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.153666) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783153714, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1405, "num_deletes": 250, "total_data_size": 2146852, "memory_usage": 2186512, "flush_reason": "Manual Compaction"}
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783165441, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1262814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52307, "largest_seqno": 53711, "table_properties": {"data_size": 1257915, "index_size": 2231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13047, "raw_average_key_size": 20, "raw_value_size": 1247115, "raw_average_value_size": 1979, "num_data_blocks": 102, "num_entries": 630, "num_filter_entries": 630, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395644, "oldest_key_time": 1759395644, "file_creation_time": 1759395783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 11844 microseconds, and 6953 cpu microseconds.
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.165507) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1262814 bytes OK
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.165533) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.167434) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.167451) EVENT_LOG_v1 {"time_micros": 1759395783167445, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.167468) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2140631, prev total WAL file size 2140631, number of live WAL files 2.
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.168537) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303032' seq:72057594037927935, type:22 .. '6D6772737461740032323533' seq:0, type:0; will stop at (end)
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1233KB)], [122(10162KB)]
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783168639, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11669283, "oldest_snapshot_seqno": -1}
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7377 keys, 9210485 bytes, temperature: kUnknown
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783236449, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9210485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9163109, "index_size": 27808, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18501, "raw_key_size": 191974, "raw_average_key_size": 26, "raw_value_size": 9033229, "raw_average_value_size": 1224, "num_data_blocks": 1088, "num_entries": 7377, "num_filter_entries": 7377, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.236879) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9210485 bytes
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.238566) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.8 rd, 135.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(16.5) write-amplify(7.3) OK, records in: 7825, records dropped: 448 output_compression: NoCompression
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.238597) EVENT_LOG_v1 {"time_micros": 1759395783238583, "job": 74, "event": "compaction_finished", "compaction_time_micros": 67914, "compaction_time_cpu_micros": 47230, "output_level": 6, "num_output_files": 1, "total_output_size": 9210485, "num_input_records": 7825, "num_output_records": 7377, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783239327, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395783243500, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.168368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.243701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.243710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.243713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.243716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:03.243719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:03:05 np0005465604 nova_compute[260603]: 2025-10-02 09:03:05.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:03:06 np0005465604 podman[406853]: 2025-10-02 09:03:06.049425541 +0000 UTC m=+0.105378496 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 05:03:06 np0005465604 podman[406852]: 2025-10-02 09:03:06.087634849 +0000 UTC m=+0.152900143 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 05:03:06 np0005465604 nova_compute[260603]: 2025-10-02 09:03:06.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:06 np0005465604 nova_compute[260603]: 2025-10-02 09:03:06.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:03:07 np0005465604 nova_compute[260603]: 2025-10-02 09:03:07.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:03:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:03:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.007 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.007 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.023 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.104 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.104 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.113 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.114 2 INFO nova.compute.claims [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.215 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:03:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286293280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.655 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.664 2 DEBUG nova.compute.provider_tree [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.689 2 DEBUG nova.scheduler.client.report [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.716 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.717 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.775 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.775 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.802 2 INFO nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.830 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.947 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.948 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.949 2 INFO nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Creating image(s)#033[00m
Oct  2 05:03:10 np0005465604 nova_compute[260603]: 2025-10-02 09:03:10.974 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:11 np0005465604 podman[406918]: 2025-10-02 09:03:11.000704696 +0000 UTC m=+0.071649558 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:03:11 np0005465604 podman[406919]: 2025-10-02 09:03:11.018936832 +0000 UTC m=+0.078790608 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.031 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.062 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.065 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.134 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.135 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.136 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.136 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.157 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.160 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.402 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.477 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.566 2 DEBUG nova.objects.instance [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.588 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.588 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Ensure instance console log exists: /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.589 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.590 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.590 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:11 np0005465604 nova_compute[260603]: 2025-10-02 09:03:11.757 2 DEBUG nova.policy [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:03:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 41 MiB data, 909 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:03:12 np0005465604 nova_compute[260603]: 2025-10-02 09:03:12.853 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Successfully created port: 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:03:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:13 np0005465604 nova_compute[260603]: 2025-10-02 09:03:13.516 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Successfully updated port: 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:03:13 np0005465604 nova_compute[260603]: 2025-10-02 09:03:13.531 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:03:13 np0005465604 nova_compute[260603]: 2025-10-02 09:03:13.532 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:03:13 np0005465604 nova_compute[260603]: 2025-10-02 09:03:13.532 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:03:13 np0005465604 nova_compute[260603]: 2025-10-02 09:03:13.598 2 DEBUG nova.compute.manager [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:13 np0005465604 nova_compute[260603]: 2025-10-02 09:03:13.599 2 DEBUG nova.compute.manager [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing instance network info cache due to event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:03:13 np0005465604 nova_compute[260603]: 2025-10-02 09:03:13.599 2 DEBUG oslo_concurrency.lockutils [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:03:13 np0005465604 nova_compute[260603]: 2025-10-02 09:03:13.729 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:03:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.698 2 DEBUG nova.network.neutron [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.721 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.722 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Instance network_info: |[{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.724 2 DEBUG oslo_concurrency.lockutils [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.724 2 DEBUG nova.network.neutron [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.730 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Start _get_guest_xml network_info=[{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.737 2 WARNING nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.748 2 DEBUG nova.virt.libvirt.host [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.749 2 DEBUG nova.virt.libvirt.host [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.754 2 DEBUG nova.virt.libvirt.host [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.755 2 DEBUG nova.virt.libvirt.host [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.755 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.756 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.757 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.758 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.758 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.759 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.759 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.760 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.761 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.761 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.762 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.762 2 DEBUG nova.virt.hardware [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:03:14 np0005465604 nova_compute[260603]: 2025-10-02 09:03:14.767 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:03:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030734456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.275 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.308 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.314 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:03:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3617887475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.778 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.779 2 DEBUG nova.virt.libvirt.vif [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:03:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2044507795',display_name='tempest-TestNetworkBasicOps-server-2044507795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2044507795',id=134,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJyVGuJYWkz5cUrKkan7kCe4MbiRPaa7v1Q6d5xWchgY/tX4JduFQB6JZ0q369VSitON6EJsRLVHtNMGsTz7PTKCbeKQtDY6sbIs7RX5gGPDqTs/0LJrpZ68VxyA10mrYQ==',key_name='tempest-TestNetworkBasicOps-2118371598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-oalw9fqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:03:10Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=64577e3d-aa56-4fa9-a1b5-dc76a7a80754,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.780 2 DEBUG nova.network.os_vif_util [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.781 2 DEBUG nova.network.os_vif_util [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.782 2 DEBUG nova.objects.instance [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.799 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  <uuid>64577e3d-aa56-4fa9-a1b5-dc76a7a80754</uuid>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  <name>instance-00000086</name>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-2044507795</nova:name>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:03:14</nova:creationTime>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <nova:port uuid="3df4c898-fd96-4b0f-90ee-add24ca56aa2">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <entry name="serial">64577e3d-aa56-4fa9-a1b5-dc76a7a80754</entry>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <entry name="uuid">64577e3d-aa56-4fa9-a1b5-dc76a7a80754</entry>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:b0:95:1a"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <target dev="tap3df4c898-fd"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/console.log" append="off"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:03:15 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:03:15 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:03:15 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:03:15 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.800 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Preparing to wait for external event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.801 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.801 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.801 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.802 2 DEBUG nova.virt.libvirt.vif [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:03:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2044507795',display_name='tempest-TestNetworkBasicOps-server-2044507795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2044507795',id=134,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJyVGuJYWkz5cUrKkan7kCe4MbiRPaa7v1Q6d5xWchgY/tX4JduFQB6JZ0q369VSitON6EJsRLVHtNMGsTz7PTKCbeKQtDY6sbIs7RX5gGPDqTs/0LJrpZ68VxyA10mrYQ==',key_name='tempest-TestNetworkBasicOps-2118371598',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-oalw9fqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:03:10Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=64577e3d-aa56-4fa9-a1b5-dc76a7a80754,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.802 2 DEBUG nova.network.os_vif_util [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.803 2 DEBUG nova.network.os_vif_util [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.803 2 DEBUG os_vif [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.804 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3df4c898-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3df4c898-fd, col_values=(('external_ids', {'iface-id': '3df4c898-fd96-4b0f-90ee-add24ca56aa2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:95:1a', 'vm-uuid': '64577e3d-aa56-4fa9-a1b5-dc76a7a80754'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:15 np0005465604 NetworkManager[45129]: <info>  [1759395795.8115] manager: (tap3df4c898-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/579)
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.821 2 INFO os_vif [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd')#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.864 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.864 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.865 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:b0:95:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.865 2 INFO nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Using config drive#033[00m
Oct  2 05:03:15 np0005465604 nova_compute[260603]: 2025-10-02 09:03:15.885 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:03:16 np0005465604 nova_compute[260603]: 2025-10-02 09:03:16.249 2 DEBUG nova.network.neutron [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated VIF entry in instance network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:03:16 np0005465604 nova_compute[260603]: 2025-10-02 09:03:16.249 2 DEBUG nova.network.neutron [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:03:16 np0005465604 nova_compute[260603]: 2025-10-02 09:03:16.270 2 DEBUG oslo_concurrency.lockutils [req-685c825e-4ace-46c3-bc2b-ac0138fc2bab req-5996de85-07c2-4441-8291-025fa7a5b2b0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:03:16 np0005465604 nova_compute[260603]: 2025-10-02 09:03:16.704 2 INFO nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Creating config drive at /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config#033[00m
Oct  2 05:03:16 np0005465604 nova_compute[260603]: 2025-10-02 09:03:16.708 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzpt5vlk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:16 np0005465604 nova_compute[260603]: 2025-10-02 09:03:16.873 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzpt5vlk" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:16 np0005465604 nova_compute[260603]: 2025-10-02 09:03:16.896 2 DEBUG nova.storage.rbd_utils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:16 np0005465604 nova_compute[260603]: 2025-10-02 09:03:16.900 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.076 2 DEBUG oslo_concurrency.processutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config 64577e3d-aa56-4fa9-a1b5-dc76a7a80754_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.077 2 INFO nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Deleting local config drive /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754/disk.config because it was imported into RBD.#033[00m
Oct  2 05:03:17 np0005465604 kernel: tap3df4c898-fd: entered promiscuous mode
Oct  2 05:03:17 np0005465604 NetworkManager[45129]: <info>  [1759395797.1556] manager: (tap3df4c898-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/580)
Oct  2 05:03:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:17Z|01443|binding|INFO|Claiming lport 3df4c898-fd96-4b0f-90ee-add24ca56aa2 for this chassis.
Oct  2 05:03:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:17Z|01444|binding|INFO|3df4c898-fd96-4b0f-90ee-add24ca56aa2: Claiming fa:16:3e:b0:95:1a 10.100.0.12
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.179 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:95:1a 10.100.0.12'], port_security=['fa:16:3e:b0:95:1a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '64577e3d-aa56-4fa9-a1b5-dc76a7a80754', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3f414f43-ca38-4bce-aa83-1fdd3cd738fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3df4c898-fd96-4b0f-90ee-add24ca56aa2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.181 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 in datapath 2c605495-2750-431a-94c8-fc1511dea80b bound to our chassis#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.182 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c605495-2750-431a-94c8-fc1511dea80b#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.204 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad13dbb-3039-442e-8ce0-2aedc2d1c8e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.205 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2c605495-21 in ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.213 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2c605495-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.213 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dfab6cb9-609f-4ad0-908c-7ec3b36d0484]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.214 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5337f8a9-8756-4d64-b32c-d30a5d55ab7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 systemd-udevd[407260]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:03:17 np0005465604 systemd-machined[214636]: New machine qemu-168-instance-00000086.
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.233 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0a68b7c8-7d4e-4374-8c1c-54d39f209855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 NetworkManager[45129]: <info>  [1759395797.2401] device (tap3df4c898-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:03:17 np0005465604 NetworkManager[45129]: <info>  [1759395797.2411] device (tap3df4c898-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.257 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48aef28c-f20e-46cd-b7fa-d592cfbe8c05]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 systemd[1]: Started Virtual Machine qemu-168-instance-00000086.
Oct  2 05:03:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:17Z|01445|binding|INFO|Setting lport 3df4c898-fd96-4b0f-90ee-add24ca56aa2 ovn-installed in OVS
Oct  2 05:03:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:17Z|01446|binding|INFO|Setting lport 3df4c898-fd96-4b0f-90ee-add24ca56aa2 up in Southbound
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.289 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[003705d8-ce02-4487-9ae4-279335ff522c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.294 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[42f8b8a0-402c-4366-922d-b8324c776e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 NetworkManager[45129]: <info>  [1759395797.2958] manager: (tap2c605495-20): new Veth device (/org/freedesktop/NetworkManager/Devices/581)
Oct  2 05:03:17 np0005465604 systemd-udevd[407263]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.333 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4eeda1-03d2-496a-bb92-8b4300abe5c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.336 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d51ea629-9763-4ab0-9cd6-00b0cf519b5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 NetworkManager[45129]: <info>  [1759395797.3615] device (tap2c605495-20): carrier: link connected
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.365 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[573fb78f-363b-4232-891e-f652907d76cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffa9ef8-e663-436c-afd1-c26cba4f4ace]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 18629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407292, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.400 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e231521-cf72-4d90-928a-b2baa7db1ffc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedf:5b4e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667596, 'tstamp': 667596}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407293, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.418 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d1520d7a-a6d1-4733-be87-2ae868747fd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 18629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407294, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.457 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8ab912-a9a3-4f28-8799-f7beb4c34cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.517 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3e589147-ce1f-4901-9667-6fde94942c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.518 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.519 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.519 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c605495-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:17 np0005465604 NetworkManager[45129]: <info>  [1759395797.5222] manager: (tap2c605495-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Oct  2 05:03:17 np0005465604 kernel: tap2c605495-20: entered promiscuous mode
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.525 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c605495-20, col_values=(('external_ids', {'iface-id': '88f0a719-bef7-4fa7-ad0c-3658148f5bdf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:17Z|01447|binding|INFO|Releasing lport 88f0a719-bef7-4fa7-ad0c-3658148f5bdf from this chassis (sb_readonly=0)
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.539 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2c605495-2750-431a-94c8-fc1511dea80b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2c605495-2750-431a-94c8-fc1511dea80b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.540 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0db28c-322e-4386-8e3e-29a470c71bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.541 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-2c605495-2750-431a-94c8-fc1511dea80b
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/2c605495-2750-431a-94c8-fc1511dea80b.pid.haproxy
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 2c605495-2750-431a-94c8-fc1511dea80b
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:03:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:17.541 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'env', 'PROCESS_TAG=haproxy-2c605495-2750-431a-94c8-fc1511dea80b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2c605495-2750-431a-94c8-fc1511dea80b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.847 2 DEBUG nova.compute.manager [req-24b5164e-c7fc-45d0-81bc-d24bbcf9afc4 req-d7fa9c5c-8506-4eb5-95cc-03abcadb9a0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.849 2 DEBUG oslo_concurrency.lockutils [req-24b5164e-c7fc-45d0-81bc-d24bbcf9afc4 req-d7fa9c5c-8506-4eb5-95cc-03abcadb9a0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.849 2 DEBUG oslo_concurrency.lockutils [req-24b5164e-c7fc-45d0-81bc-d24bbcf9afc4 req-d7fa9c5c-8506-4eb5-95cc-03abcadb9a0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.850 2 DEBUG oslo_concurrency.lockutils [req-24b5164e-c7fc-45d0-81bc-d24bbcf9afc4 req-d7fa9c5c-8506-4eb5-95cc-03abcadb9a0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:17 np0005465604 nova_compute[260603]: 2025-10-02 09:03:17.850 2 DEBUG nova.compute.manager [req-24b5164e-c7fc-45d0-81bc-d24bbcf9afc4 req-d7fa9c5c-8506-4eb5-95cc-03abcadb9a0c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Processing event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:03:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  2 05:03:17 np0005465604 podman[407367]: 2025-10-02 09:03:17.931077183 +0000 UTC m=+0.048828478 container create 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 05:03:17 np0005465604 systemd[1]: Started libpod-conmon-620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43.scope.
Oct  2 05:03:18 np0005465604 podman[407367]: 2025-10-02 09:03:17.906576072 +0000 UTC m=+0.024327407 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:03:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:03:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/addf8c19743aa1800a38e7a2c27551ad430b760bd20734d7536da7cb23fa4ebc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.033 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395798.0318964, 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.033 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] VM Started (Lifecycle Event)#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.035 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.038 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.041 2 INFO nova.virt.libvirt.driver [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Instance spawned successfully.#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.042 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:03:18 np0005465604 podman[407367]: 2025-10-02 09:03:18.048379388 +0000 UTC m=+0.166130693 container init 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:03:18 np0005465604 podman[407367]: 2025-10-02 09:03:18.053862669 +0000 UTC m=+0.171613974 container start 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.057 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.061 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.062 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.062 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.063 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.063 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.064 2 DEBUG nova.virt.libvirt.driver [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.068 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:03:18 np0005465604 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [NOTICE]   (407387) : New worker (407389) forked
Oct  2 05:03:18 np0005465604 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [NOTICE]   (407387) : Loading success.
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.109 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.109 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395798.031998, 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.109 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.146 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.150 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395798.037696, 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.151 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:03:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.155 2 INFO nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Took 7.21 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.156 2 DEBUG nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.168 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.171 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.209 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.227 2 INFO nova.compute.manager [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Took 8.16 seconds to build instance.#033[00m
Oct  2 05:03:18 np0005465604 nova_compute[260603]: 2025-10-02 09:03:18.254 2 DEBUG oslo_concurrency.lockutils [None req-c1b673b5-d418-4097-bf08-ae14b4b13940 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  2 05:03:19 np0005465604 nova_compute[260603]: 2025-10-02 09:03:19.990 2 DEBUG nova.compute.manager [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:19 np0005465604 nova_compute[260603]: 2025-10-02 09:03:19.991 2 DEBUG oslo_concurrency.lockutils [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:19 np0005465604 nova_compute[260603]: 2025-10-02 09:03:19.991 2 DEBUG oslo_concurrency.lockutils [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:19 np0005465604 nova_compute[260603]: 2025-10-02 09:03:19.991 2 DEBUG oslo_concurrency.lockutils [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:19 np0005465604 nova_compute[260603]: 2025-10-02 09:03:19.991 2 DEBUG nova.compute.manager [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:03:19 np0005465604 nova_compute[260603]: 2025-10-02 09:03:19.992 2 WARNING nova.compute.manager [req-4ed44c85-4404-42f6-a140-4a0dc6ad5553 req-b6dcd1eb-5e62-4ebb-baa0-9ef708dfb5d5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:03:20 np0005465604 nova_compute[260603]: 2025-10-02 09:03:20.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:20 np0005465604 nova_compute[260603]: 2025-10-02 09:03:20.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  2 05:03:22 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:22Z|01448|binding|INFO|Releasing lport 88f0a719-bef7-4fa7-ad0c-3658148f5bdf from this chassis (sb_readonly=0)
Oct  2 05:03:22 np0005465604 nova_compute[260603]: 2025-10-02 09:03:22.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:22 np0005465604 NetworkManager[45129]: <info>  [1759395802.0437] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/583)
Oct  2 05:03:22 np0005465604 NetworkManager[45129]: <info>  [1759395802.0449] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/584)
Oct  2 05:03:22 np0005465604 nova_compute[260603]: 2025-10-02 09:03:22.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:22 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:22Z|01449|binding|INFO|Releasing lport 88f0a719-bef7-4fa7-ad0c-3658148f5bdf from this chassis (sb_readonly=0)
Oct  2 05:03:22 np0005465604 nova_compute[260603]: 2025-10-02 09:03:22.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:03:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482455865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:03:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:03:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482455865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:03:22 np0005465604 nova_compute[260603]: 2025-10-02 09:03:22.306 2 DEBUG nova.compute.manager [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:22 np0005465604 nova_compute[260603]: 2025-10-02 09:03:22.306 2 DEBUG nova.compute.manager [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing instance network info cache due to event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:03:22 np0005465604 nova_compute[260603]: 2025-10-02 09:03:22.307 2 DEBUG oslo_concurrency.lockutils [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:03:22 np0005465604 nova_compute[260603]: 2025-10-02 09:03:22.307 2 DEBUG oslo_concurrency.lockutils [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:03:22 np0005465604 nova_compute[260603]: 2025-10-02 09:03:22.308 2 DEBUG nova.network.neutron [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:03:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 05:03:23 np0005465604 nova_compute[260603]: 2025-10-02 09:03:23.979 2 DEBUG nova.network.neutron [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated VIF entry in instance network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:03:23 np0005465604 nova_compute[260603]: 2025-10-02 09:03:23.980 2 DEBUG nova.network.neutron [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:03:24 np0005465604 nova_compute[260603]: 2025-10-02 09:03:24.014 2 DEBUG oslo_concurrency.lockutils [req-a6cf05b2-c532-49fe-b6aa-bfb57298032e req-3d7c821e-bafe-45f8-b7e2-5b147bf348c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:03:25 np0005465604 nova_compute[260603]: 2025-10-02 09:03:25.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:25 np0005465604 nova_compute[260603]: 2025-10-02 09:03:25.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:03:27 np0005465604 nova_compute[260603]: 2025-10-02 09:03:27.470 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:27 np0005465604 nova_compute[260603]: 2025-10-02 09:03:27.470 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:27 np0005465604 nova_compute[260603]: 2025-10-02 09:03:27.496 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:03:27 np0005465604 nova_compute[260603]: 2025-10-02 09:03:27.591 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:27 np0005465604 nova_compute[260603]: 2025-10-02 09:03:27.591 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:27 np0005465604 nova_compute[260603]: 2025-10-02 09:03:27.603 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:03:27 np0005465604 nova_compute[260603]: 2025-10-02 09:03:27.603 2 INFO nova.compute.claims [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:03:27 np0005465604 nova_compute[260603]: 2025-10-02 09:03:27.800 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:03:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:03:28
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'vms', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data']
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:03:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:03:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2377320350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.328 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.336 2 DEBUG nova.compute.provider_tree [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.351 2 DEBUG nova.scheduler.client.report [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.376 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.377 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.426 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.427 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.446 2 INFO nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.470 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:03:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.561 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.563 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.563 2 INFO nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Creating image(s)#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.594 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.624 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.654 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.658 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.730 2 DEBUG nova.policy [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.750 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.751 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.751 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.752 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.774 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:28 np0005465604 nova_compute[260603]: 2025-10-02 09:03:28.777 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:29 np0005465604 nova_compute[260603]: 2025-10-02 09:03:29.051 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:29 np0005465604 nova_compute[260603]: 2025-10-02 09:03:29.118 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:03:29 np0005465604 nova_compute[260603]: 2025-10-02 09:03:29.220 2 DEBUG nova.objects.instance [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid def7636a-ab83-489d-ba8d-6f3dd1ccc841 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:03:29 np0005465604 nova_compute[260603]: 2025-10-02 09:03:29.235 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:03:29 np0005465604 nova_compute[260603]: 2025-10-02 09:03:29.236 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Ensure instance console log exists: /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:03:29 np0005465604 nova_compute[260603]: 2025-10-02 09:03:29.236 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:29 np0005465604 nova_compute[260603]: 2025-10-02 09:03:29.237 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:29 np0005465604 nova_compute[260603]: 2025-10-02 09:03:29.237 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:29 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:29Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:95:1a 10.100.0.12
Oct  2 05:03:29 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:29Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:95:1a 10.100.0.12
Oct  2 05:03:29 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct  2 05:03:29 np0005465604 nova_compute[260603]: 2025-10-02 09:03:29.808 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Successfully created port: 0d6d454c-ed95-44d0-8bd1-e20589c708d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:03:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:03:30 np0005465604 nova_compute[260603]: 2025-10-02 09:03:30.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:30 np0005465604 nova_compute[260603]: 2025-10-02 09:03:30.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:30 np0005465604 nova_compute[260603]: 2025-10-02 09:03:30.979 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Successfully updated port: 0d6d454c-ed95-44d0-8bd1-e20589c708d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:03:31 np0005465604 nova_compute[260603]: 2025-10-02 09:03:31.003 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:03:31 np0005465604 nova_compute[260603]: 2025-10-02 09:03:31.004 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:03:31 np0005465604 nova_compute[260603]: 2025-10-02 09:03:31.004 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:03:31 np0005465604 nova_compute[260603]: 2025-10-02 09:03:31.100 2 DEBUG nova.compute.manager [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:31 np0005465604 nova_compute[260603]: 2025-10-02 09:03:31.101 2 DEBUG nova.compute.manager [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing instance network info cache due to event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:03:31 np0005465604 nova_compute[260603]: 2025-10-02 09:03:31.101 2 DEBUG oslo_concurrency.lockutils [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:03:31 np0005465604 nova_compute[260603]: 2025-10-02 09:03:31.187 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:03:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 88 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 67 op/s
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.064 2 DEBUG nova.network.neutron [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updating instance_info_cache with network_info: [{"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.088 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.089 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance network_info: |[{"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.089 2 DEBUG oslo_concurrency.lockutils [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.090 2 DEBUG nova.network.neutron [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing network info cache for port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.098 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Start _get_guest_xml network_info=[{"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.103 2 WARNING nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.110 2 DEBUG nova.virt.libvirt.host [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.111 2 DEBUG nova.virt.libvirt.host [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.121 2 DEBUG nova.virt.libvirt.host [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.121 2 DEBUG nova.virt.libvirt.host [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.122 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.123 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.123 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.124 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.125 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.125 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.125 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.126 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.126 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.127 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.127 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.128 2 DEBUG nova.virt.hardware [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.133 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:03:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364713645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.651 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.679 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:32 np0005465604 nova_compute[260603]: 2025-10-02 09:03:32.684 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:03:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813651229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:03:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.169 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.172 2 DEBUG nova.virt.libvirt.vif [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-556721783',display_name='tempest-TestNetworkBasicOps-server-556721783',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-556721783',id=135,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBWd49oFUTm6XgyrgFAJAvAD9R5S9h35IghxF+WDuWxqO67NuyfvmqcjpF3R4Okql0uPjy7xGWqOKFWo5bhMt5wCOH87LjC+Dpu6giEiY38iIQYyWXpiLlgRndqhZX6/w==',key_name='tempest-TestNetworkBasicOps-141517795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-kkoo640r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:03:28Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=def7636a-ab83-489d-ba8d-6f3dd1ccc841,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.172 2 DEBUG nova.network.os_vif_util [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.174 2 DEBUG nova.network.os_vif_util [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.175 2 DEBUG nova.objects.instance [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid def7636a-ab83-489d-ba8d-6f3dd1ccc841 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.192 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  <uuid>def7636a-ab83-489d-ba8d-6f3dd1ccc841</uuid>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  <name>instance-00000087</name>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-556721783</nova:name>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:03:32</nova:creationTime>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <nova:port uuid="0d6d454c-ed95-44d0-8bd1-e20589c708d1">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <entry name="serial">def7636a-ab83-489d-ba8d-6f3dd1ccc841</entry>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <entry name="uuid">def7636a-ab83-489d-ba8d-6f3dd1ccc841</entry>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:ac:f1:a7"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <target dev="tap0d6d454c-ed"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/console.log" append="off"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:03:33 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:03:33 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:03:33 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:03:33 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.192 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Preparing to wait for external event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.193 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.193 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.194 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.195 2 DEBUG nova.virt.libvirt.vif [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-556721783',display_name='tempest-TestNetworkBasicOps-server-556721783',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-556721783',id=135,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBWd49oFUTm6XgyrgFAJAvAD9R5S9h35IghxF+WDuWxqO67NuyfvmqcjpF3R4Okql0uPjy7xGWqOKFWo5bhMt5wCOH87LjC+Dpu6giEiY38iIQYyWXpiLlgRndqhZX6/w==',key_name='tempest-TestNetworkBasicOps-141517795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-kkoo640r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:03:28Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=def7636a-ab83-489d-ba8d-6f3dd1ccc841,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.195 2 DEBUG nova.network.os_vif_util [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.196 2 DEBUG nova.network.os_vif_util [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.197 2 DEBUG os_vif [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.198 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d6d454c-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d6d454c-ed, col_values=(('external_ids', {'iface-id': '0d6d454c-ed95-44d0-8bd1-e20589c708d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:f1:a7', 'vm-uuid': 'def7636a-ab83-489d-ba8d-6f3dd1ccc841'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:33 np0005465604 NetworkManager[45129]: <info>  [1759395813.2111] manager: (tap0d6d454c-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/585)
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.221 2 INFO os_vif [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed')#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.289 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.290 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.290 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:ac:f1:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.291 2 INFO nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Using config drive#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.320 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.347 2 DEBUG nova.network.neutron [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updated VIF entry in instance network info cache for port 0d6d454c-ed95-44d0-8bd1-e20589c708d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.348 2 DEBUG nova.network.neutron [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updating instance_info_cache with network_info: [{"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.388 2 DEBUG oslo_concurrency.lockutils [req-1594c895-ce80-4ae9-bfbf-67ce34d4150a req-9ced631c-44f7-4820-bfc4-3be06fcc94e3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.742 2 INFO nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Creating config drive at /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.748 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3lrv261 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.916 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl3lrv261" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.956 2 DEBUG nova.storage.rbd_utils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:03:33 np0005465604 nova_compute[260603]: 2025-10-02 09:03:33.961 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:34 np0005465604 nova_compute[260603]: 2025-10-02 09:03:34.167 2 DEBUG oslo_concurrency.processutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config def7636a-ab83-489d-ba8d-6f3dd1ccc841_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:34 np0005465604 nova_compute[260603]: 2025-10-02 09:03:34.169 2 INFO nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Deleting local config drive /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841/disk.config because it was imported into RBD.#033[00m
Oct  2 05:03:34 np0005465604 kernel: tap0d6d454c-ed: entered promiscuous mode
Oct  2 05:03:34 np0005465604 NetworkManager[45129]: <info>  [1759395814.2414] manager: (tap0d6d454c-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/586)
Oct  2 05:03:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:34Z|01450|binding|INFO|Claiming lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 for this chassis.
Oct  2 05:03:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:34Z|01451|binding|INFO|0d6d454c-ed95-44d0-8bd1-e20589c708d1: Claiming fa:16:3e:ac:f1:a7 10.100.0.5
Oct  2 05:03:34 np0005465604 nova_compute[260603]: 2025-10-02 09:03:34.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.253 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:f1:a7 10.100.0.5'], port_security=['fa:16:3e:ac:f1:a7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'def7636a-ab83-489d-ba8d-6f3dd1ccc841', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b93b7300-a114-4509-a5d5-f258c80e5fdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0d6d454c-ed95-44d0-8bd1-e20589c708d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.254 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 in datapath 2c605495-2750-431a-94c8-fc1511dea80b bound to our chassis#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.255 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c605495-2750-431a-94c8-fc1511dea80b#033[00m
Oct  2 05:03:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:34Z|01452|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 ovn-installed in OVS
Oct  2 05:03:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:34Z|01453|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 up in Southbound
Oct  2 05:03:34 np0005465604 nova_compute[260603]: 2025-10-02 09:03:34.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.279 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d11bce-fac2-450a-bde5-ca62f07f803c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:34 np0005465604 systemd-machined[214636]: New machine qemu-169-instance-00000087.
Oct  2 05:03:34 np0005465604 systemd[1]: Started Virtual Machine qemu-169-instance-00000087.
Oct  2 05:03:34 np0005465604 systemd-udevd[407729]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.315 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c943c5e2-d2e2-4aec-bc37-a1d50d416877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.320 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1b12ba88-1313-4b08-8db8-454caeda75b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:34 np0005465604 NetworkManager[45129]: <info>  [1759395814.3319] device (tap0d6d454c-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:03:34 np0005465604 NetworkManager[45129]: <info>  [1759395814.3337] device (tap0d6d454c-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.357 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[706a1e86-d3a4-4f84-955f-a5b3cda135e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.385 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3c48e5c4-4490-4d80-8a6e-fd06530a5a8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 36735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407738, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.409 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d94b364e-12e9-4ace-b014-320cd499ff93]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667608, 'tstamp': 667608}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407740, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667611, 'tstamp': 667611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407740, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.411 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:34 np0005465604 nova_compute[260603]: 2025-10-02 09:03:34.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:34 np0005465604 nova_compute[260603]: 2025-10-02 09:03:34.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.416 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c605495-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.417 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.418 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c605495-20, col_values=(('external_ids', {'iface-id': '88f0a719-bef7-4fa7-ad0c-3658148f5bdf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.419 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:03:34 np0005465604 nova_compute[260603]: 2025-10-02 09:03:34.497 2 DEBUG nova.compute.manager [req-8686b3c7-085c-4650-80ec-1545c163ec94 req-9a597719-145a-47b2-a12b-212673dbed20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:34 np0005465604 nova_compute[260603]: 2025-10-02 09:03:34.497 2 DEBUG oslo_concurrency.lockutils [req-8686b3c7-085c-4650-80ec-1545c163ec94 req-9a597719-145a-47b2-a12b-212673dbed20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:34 np0005465604 nova_compute[260603]: 2025-10-02 09:03:34.498 2 DEBUG oslo_concurrency.lockutils [req-8686b3c7-085c-4650-80ec-1545c163ec94 req-9a597719-145a-47b2-a12b-212673dbed20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:34 np0005465604 nova_compute[260603]: 2025-10-02 09:03:34.498 2 DEBUG oslo_concurrency.lockutils [req-8686b3c7-085c-4650-80ec-1545c163ec94 req-9a597719-145a-47b2-a12b-212673dbed20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:34 np0005465604 nova_compute[260603]: 2025-10-02 09:03:34.499 2 DEBUG nova.compute.manager [req-8686b3c7-085c-4650-80ec-1545c163ec94 req-9a597719-145a-47b2-a12b-212673dbed20 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Processing event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.842 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.843 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:34.843 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.156 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395815.155862, def7636a-ab83-489d-ba8d-6f3dd1ccc841 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.156 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] VM Started (Lifecycle Event)#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.158 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.161 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.164 2 INFO nova.virt.libvirt.driver [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance spawned successfully.#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.164 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.181 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.186 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.189 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.189 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.190 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.190 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.190 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.191 2 DEBUG nova.virt.libvirt.driver [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.220 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.220 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395815.1560867, def7636a-ab83-489d-ba8d-6f3dd1ccc841 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.220 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.249 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.251 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395815.160564, def7636a-ab83-489d-ba8d-6f3dd1ccc841 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.251 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.262 2 INFO nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Took 6.70 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.262 2 DEBUG nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.274 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.276 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.303 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.324 2 INFO nova.compute.manager [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Took 7.77 seconds to build instance.#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.339 2 DEBUG oslo_concurrency.lockutils [None req-8fe6e7aa-619a-4ae7-ad53-885a057aca34 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:35 np0005465604 nova_compute[260603]: 2025-10-02 09:03:35.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct  2 05:03:36 np0005465604 nova_compute[260603]: 2025-10-02 09:03:36.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:03:36 np0005465604 nova_compute[260603]: 2025-10-02 09:03:36.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:03:36 np0005465604 nova_compute[260603]: 2025-10-02 09:03:36.558 2 DEBUG nova.compute.manager [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:36 np0005465604 nova_compute[260603]: 2025-10-02 09:03:36.558 2 DEBUG oslo_concurrency.lockutils [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:36 np0005465604 nova_compute[260603]: 2025-10-02 09:03:36.559 2 DEBUG oslo_concurrency.lockutils [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:36 np0005465604 nova_compute[260603]: 2025-10-02 09:03:36.559 2 DEBUG oslo_concurrency.lockutils [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:36 np0005465604 nova_compute[260603]: 2025-10-02 09:03:36.559 2 DEBUG nova.compute.manager [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:03:36 np0005465604 nova_compute[260603]: 2025-10-02 09:03:36.559 2 WARNING nova.compute.manager [req-374cb3bf-0180-44f1-b8ba-c55313505fdb req-2f8a4613-ae74-4f39-b904-fe903cd37230 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:03:37 np0005465604 podman[407785]: 2025-10-02 09:03:37.028457191 +0000 UTC m=+0.078894563 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  2 05:03:37 np0005465604 podman[407784]: 2025-10-02 09:03:37.104185954 +0000 UTC m=+0.151258031 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:03:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 968 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Oct  2 05:03:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:38 np0005465604 nova_compute[260603]: 2025-10-02 09:03:38.199 2 DEBUG nova.compute.manager [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:38 np0005465604 nova_compute[260603]: 2025-10-02 09:03:38.199 2 DEBUG nova.compute.manager [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing instance network info cache due to event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:03:38 np0005465604 nova_compute[260603]: 2025-10-02 09:03:38.199 2 DEBUG oslo_concurrency.lockutils [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:03:38 np0005465604 nova_compute[260603]: 2025-10-02 09:03:38.200 2 DEBUG oslo_concurrency.lockutils [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:03:38 np0005465604 nova_compute[260603]: 2025-10-02 09:03:38.200 2 DEBUG nova.network.neutron [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing network info cache for port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:03:38 np0005465604 nova_compute[260603]: 2025-10-02 09:03:38.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011072414045712052 of space, bias 1.0, pg target 0.33217242137136155 quantized to 32 (current 32)
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:03:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct  2 05:03:40 np0005465604 nova_compute[260603]: 2025-10-02 09:03:40.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:41 np0005465604 nova_compute[260603]: 2025-10-02 09:03:41.741 2 DEBUG nova.network.neutron [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updated VIF entry in instance network info cache for port 0d6d454c-ed95-44d0-8bd1-e20589c708d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 05:03:41 np0005465604 nova_compute[260603]: 2025-10-02 09:03:41.742 2 DEBUG nova.network.neutron [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updating instance_info_cache with network_info: [{"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 05:03:41 np0005465604 nova_compute[260603]: 2025-10-02 09:03:41.770 2 DEBUG oslo_concurrency.lockutils [req-50893202-08a7-4da1-a116-cde23c84aeab req-ead4b8b4-31be-469b-a154-5d101b4dc9b5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 05:03:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct  2 05:03:42 np0005465604 podman[407829]: 2025-10-02 09:03:42.019729489 +0000 UTC m=+0.086587813 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:03:42 np0005465604 podman[407830]: 2025-10-02 09:03:42.049105001 +0000 UTC m=+0.102960520 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:03:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:43 np0005465604 nova_compute[260603]: 2025-10-02 09:03:43.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:03:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct  2 05:03:45 np0005465604 nova_compute[260603]: 2025-10-02 09:03:45.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:03:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Oct  2 05:03:47 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:47Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:f1:a7 10.100.0.5
Oct  2 05:03:47 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:47Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:f1:a7 10.100.0.5
Oct  2 05:03:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 183 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 124 op/s
Oct  2 05:03:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:48 np0005465604 nova_compute[260603]: 2025-10-02 09:03:48.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:03:49 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d335dfbe-1458-4455-b080-b4a14a577106 does not exist
Oct  2 05:03:49 np0005465604 nova_compute[260603]: 2025-10-02 09:03:49.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:03:49 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4298d917-b6ba-410f-bc8a-4e8aa2d6356b does not exist
Oct  2 05:03:49 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8dabad9f-79d7-45e4-a3d1-84eb324876b9 does not exist
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:03:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:03:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 197 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Oct  2 05:03:50 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:03:50 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:03:50 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:03:50 np0005465604 podman[408145]: 2025-10-02 09:03:50.255922285 +0000 UTC m=+0.073037081 container create 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:03:50 np0005465604 systemd[1]: Started libpod-conmon-9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0.scope.
Oct  2 05:03:50 np0005465604 podman[408145]: 2025-10-02 09:03:50.225421617 +0000 UTC m=+0.042536493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:03:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:03:50 np0005465604 podman[408145]: 2025-10-02 09:03:50.362082943 +0000 UTC m=+0.179197819 container init 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 05:03:50 np0005465604 podman[408145]: 2025-10-02 09:03:50.369935837 +0000 UTC m=+0.187050663 container start 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:03:50 np0005465604 podman[408145]: 2025-10-02 09:03:50.373351624 +0000 UTC m=+0.190466510 container attach 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 05:03:50 np0005465604 agitated_varahamihira[408161]: 167 167
Oct  2 05:03:50 np0005465604 systemd[1]: libpod-9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0.scope: Deactivated successfully.
Oct  2 05:03:50 np0005465604 conmon[408161]: conmon 9782357870df519fe52d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0.scope/container/memory.events
Oct  2 05:03:50 np0005465604 podman[408145]: 2025-10-02 09:03:50.379740223 +0000 UTC m=+0.196855019 container died 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 05:03:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e0140082edceed8d6cc8881be0a910587a627da7119c12a3173fbb3c63e22012-merged.mount: Deactivated successfully.
Oct  2 05:03:50 np0005465604 podman[408145]: 2025-10-02 09:03:50.43052047 +0000 UTC m=+0.247635296 container remove 9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_varahamihira, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:03:50 np0005465604 systemd[1]: libpod-conmon-9782357870df519fe52d291ec618d8db718a9d3389cceee42ff3b8c782dbf9b0.scope: Deactivated successfully.
Oct  2 05:03:50 np0005465604 nova_compute[260603]: 2025-10-02 09:03:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:03:50 np0005465604 nova_compute[260603]: 2025-10-02 09:03:50.525 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 05:03:50 np0005465604 nova_compute[260603]: 2025-10-02 09:03:50.525 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 05:03:50 np0005465604 podman[408185]: 2025-10-02 09:03:50.689954922 +0000 UTC m=+0.055779964 container create db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 05:03:50 np0005465604 nova_compute[260603]: 2025-10-02 09:03:50.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:03:50 np0005465604 systemd[1]: Started libpod-conmon-db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d.scope.
Oct  2 05:03:50 np0005465604 podman[408185]: 2025-10-02 09:03:50.663325514 +0000 UTC m=+0.029150636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:03:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:03:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:50 np0005465604 podman[408185]: 2025-10-02 09:03:50.818298921 +0000 UTC m=+0.184123983 container init db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:03:50 np0005465604 podman[408185]: 2025-10-02 09:03:50.82470112 +0000 UTC m=+0.190526142 container start db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:03:50 np0005465604 podman[408185]: 2025-10-02 09:03:50.82828034 +0000 UTC m=+0.194105382 container attach db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:03:51 np0005465604 nova_compute[260603]: 2025-10-02 09:03:51.730 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 05:03:51 np0005465604 nova_compute[260603]: 2025-10-02 09:03:51.731 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 05:03:51 np0005465604 nova_compute[260603]: 2025-10-02 09:03:51.731 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 05:03:51 np0005465604 nova_compute[260603]: 2025-10-02 09:03:51.731 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 05:03:51 np0005465604 loving_sanderson[408202]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:03:51 np0005465604 loving_sanderson[408202]: --> relative data size: 1.0
Oct  2 05:03:51 np0005465604 loving_sanderson[408202]: --> All data devices are unavailable
Oct  2 05:03:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 197 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct  2 05:03:51 np0005465604 systemd[1]: libpod-db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d.scope: Deactivated successfully.
Oct  2 05:03:51 np0005465604 systemd[1]: libpod-db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d.scope: Consumed 1.051s CPU time.
Oct  2 05:03:51 np0005465604 conmon[408202]: conmon db8ffdf647b7b7b66029 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d.scope/container/memory.events
Oct  2 05:03:51 np0005465604 podman[408185]: 2025-10-02 09:03:51.943325592 +0000 UTC m=+1.309150604 container died db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 05:03:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9f13eca15de58962a32371f1fefa70679da297cc4ba488918308335549f2fcda-merged.mount: Deactivated successfully.
Oct  2 05:03:52 np0005465604 podman[408185]: 2025-10-02 09:03:52.005878506 +0000 UTC m=+1.371703528 container remove db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:03:52 np0005465604 systemd[1]: libpod-conmon-db8ffdf647b7b7b66029db340562338a9c45353ef7f1308c14d0f16b55c8b26d.scope: Deactivated successfully.
Oct  2 05:03:52 np0005465604 podman[408386]: 2025-10-02 09:03:52.746503521 +0000 UTC m=+0.051154571 container create a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:03:52 np0005465604 systemd[1]: Started libpod-conmon-a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d.scope.
Oct  2 05:03:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:03:52 np0005465604 podman[408386]: 2025-10-02 09:03:52.821282234 +0000 UTC m=+0.125933334 container init a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:03:52 np0005465604 podman[408386]: 2025-10-02 09:03:52.731513686 +0000 UTC m=+0.036164756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:03:52 np0005465604 podman[408386]: 2025-10-02 09:03:52.827685334 +0000 UTC m=+0.132336394 container start a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 05:03:52 np0005465604 podman[408386]: 2025-10-02 09:03:52.831049639 +0000 UTC m=+0.135700699 container attach a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 05:03:52 np0005465604 cranky_agnesi[408402]: 167 167
Oct  2 05:03:52 np0005465604 systemd[1]: libpod-a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d.scope: Deactivated successfully.
Oct  2 05:03:52 np0005465604 podman[408386]: 2025-10-02 09:03:52.832862565 +0000 UTC m=+0.137513605 container died a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 05:03:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ca265ae51332b021f1effa7d92e4798d73a78a27c8f03ab0ae3cf94580c8d917-merged.mount: Deactivated successfully.
Oct  2 05:03:52 np0005465604 podman[408386]: 2025-10-02 09:03:52.870966209 +0000 UTC m=+0.175617259 container remove a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct  2 05:03:52 np0005465604 systemd[1]: libpod-conmon-a79149a4a533baea8a71efe94acd3bf379cb345063a319d5770893a9d336b98d.scope: Deactivated successfully.
Oct  2 05:03:53 np0005465604 podman[408426]: 2025-10-02 09:03:53.121928078 +0000 UTC m=+0.055807636 container create 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 05:03:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:53 np0005465604 systemd[1]: Started libpod-conmon-45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726.scope.
Oct  2 05:03:53 np0005465604 podman[408426]: 2025-10-02 09:03:53.10397297 +0000 UTC m=+0.037852548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:03:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:03:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f4d2f2a140b18bf5f1c7c00efc974ab0219bdb28ea38d9238010bf4c822c93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f4d2f2a140b18bf5f1c7c00efc974ab0219bdb28ea38d9238010bf4c822c93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f4d2f2a140b18bf5f1c7c00efc974ab0219bdb28ea38d9238010bf4c822c93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f4d2f2a140b18bf5f1c7c00efc974ab0219bdb28ea38d9238010bf4c822c93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:53 np0005465604 podman[408426]: 2025-10-02 09:03:53.245112636 +0000 UTC m=+0.178992244 container init 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:03:53 np0005465604 nova_compute[260603]: 2025-10-02 09:03:53.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:53 np0005465604 podman[408426]: 2025-10-02 09:03:53.309673052 +0000 UTC m=+0.243552610 container start 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:03:53 np0005465604 podman[408426]: 2025-10-02 09:03:53.313385737 +0000 UTC m=+0.247265395 container attach 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:03:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:03:53 np0005465604 nova_compute[260603]: 2025-10-02 09:03:53.998 2 INFO nova.compute.manager [None req-76439550-94e3-4153-b92a-8620b5da26f3 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Get console output#033[00m
Oct  2 05:03:54 np0005465604 nova_compute[260603]: 2025-10-02 09:03:54.007 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 05:03:54 np0005465604 magical_feynman[408442]: {
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:    "0": [
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:        {
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "devices": [
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "/dev/loop3"
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            ],
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_name": "ceph_lv0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_size": "21470642176",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "name": "ceph_lv0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "tags": {
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.cluster_name": "ceph",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.crush_device_class": "",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.encrypted": "0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.osd_id": "0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.type": "block",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.vdo": "0"
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            },
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "type": "block",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "vg_name": "ceph_vg0"
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:        }
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:    ],
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:    "1": [
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:        {
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "devices": [
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "/dev/loop4"
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            ],
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_name": "ceph_lv1",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_size": "21470642176",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "name": "ceph_lv1",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "tags": {
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.cluster_name": "ceph",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.crush_device_class": "",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.encrypted": "0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.osd_id": "1",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.type": "block",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.vdo": "0"
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            },
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "type": "block",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "vg_name": "ceph_vg1"
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:        }
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:    ],
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:    "2": [
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:        {
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "devices": [
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "/dev/loop5"
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            ],
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_name": "ceph_lv2",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_size": "21470642176",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "name": "ceph_lv2",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "tags": {
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.cluster_name": "ceph",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.crush_device_class": "",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.encrypted": "0",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.osd_id": "2",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.type": "block",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:                "ceph.vdo": "0"
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            },
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "type": "block",
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:            "vg_name": "ceph_vg2"
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:        }
Oct  2 05:03:54 np0005465604 magical_feynman[408442]:    ]
Oct  2 05:03:54 np0005465604 magical_feynman[408442]: }
Oct  2 05:03:54 np0005465604 systemd[1]: libpod-45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726.scope: Deactivated successfully.
Oct  2 05:03:54 np0005465604 podman[408426]: 2025-10-02 09:03:54.087963168 +0000 UTC m=+1.021842766 container died 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 05:03:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-83f4d2f2a140b18bf5f1c7c00efc974ab0219bdb28ea38d9238010bf4c822c93-merged.mount: Deactivated successfully.
Oct  2 05:03:54 np0005465604 podman[408426]: 2025-10-02 09:03:54.150606494 +0000 UTC m=+1.084486062 container remove 45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 05:03:54 np0005465604 systemd[1]: libpod-conmon-45f9862a3d0432d4a811c8a68744274a4a73d55c49fa8cc4531d2a0f967a0726.scope: Deactivated successfully.
Oct  2 05:03:54 np0005465604 nova_compute[260603]: 2025-10-02 09:03:54.651 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:03:54 np0005465604 nova_compute[260603]: 2025-10-02 09:03:54.673 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:03:54 np0005465604 nova_compute[260603]: 2025-10-02 09:03:54.673 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 05:03:54 np0005465604 nova_compute[260603]: 2025-10-02 09:03:54.673 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:03:54 np0005465604 podman[408605]: 2025-10-02 09:03:54.799442858 +0000 UTC m=+0.045608938 container create 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 05:03:54 np0005465604 systemd[1]: Started libpod-conmon-66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d.scope.
Oct  2 05:03:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:03:54 np0005465604 podman[408605]: 2025-10-02 09:03:54.781324144 +0000 UTC m=+0.027490284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:03:54 np0005465604 podman[408605]: 2025-10-02 09:03:54.884119349 +0000 UTC m=+0.130285459 container init 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:03:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:54.888 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:03:54 np0005465604 nova_compute[260603]: 2025-10-02 09:03:54.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:54.890 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:03:54 np0005465604 podman[408605]: 2025-10-02 09:03:54.892359135 +0000 UTC m=+0.138525215 container start 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:03:54 np0005465604 podman[408605]: 2025-10-02 09:03:54.896308008 +0000 UTC m=+0.142474128 container attach 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:03:54 np0005465604 stoic_blackwell[408621]: 167 167
Oct  2 05:03:54 np0005465604 systemd[1]: libpod-66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d.scope: Deactivated successfully.
Oct  2 05:03:54 np0005465604 conmon[408621]: conmon 66a7d05d08837afd4642 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d.scope/container/memory.events
Oct  2 05:03:54 np0005465604 podman[408605]: 2025-10-02 09:03:54.900386184 +0000 UTC m=+0.146552294 container died 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 05:03:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c4c462ff857141b74263b060707af1c6f8a9462bfae72cf8d6d3ee0c53c5babf-merged.mount: Deactivated successfully.
Oct  2 05:03:54 np0005465604 podman[408605]: 2025-10-02 09:03:54.946439466 +0000 UTC m=+0.192605576 container remove 66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:03:54 np0005465604 systemd[1]: libpod-conmon-66a7d05d08837afd464279cd8750399a825d7dd7466fce589a19548f390a2d5d.scope: Deactivated successfully.
Oct  2 05:03:55 np0005465604 nova_compute[260603]: 2025-10-02 09:03:55.062 2 DEBUG nova.compute.manager [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:55 np0005465604 nova_compute[260603]: 2025-10-02 09:03:55.064 2 DEBUG nova.compute.manager [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing instance network info cache due to event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:03:55 np0005465604 nova_compute[260603]: 2025-10-02 09:03:55.065 2 DEBUG oslo_concurrency.lockutils [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:03:55 np0005465604 nova_compute[260603]: 2025-10-02 09:03:55.065 2 DEBUG oslo_concurrency.lockutils [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:03:55 np0005465604 nova_compute[260603]: 2025-10-02 09:03:55.067 2 DEBUG nova.network.neutron [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:03:55 np0005465604 podman[408645]: 2025-10-02 09:03:55.175110201 +0000 UTC m=+0.049394255 container create b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:03:55 np0005465604 systemd[1]: Started libpod-conmon-b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324.scope.
Oct  2 05:03:55 np0005465604 podman[408645]: 2025-10-02 09:03:55.151063035 +0000 UTC m=+0.025347099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:03:55 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:03:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb3a24fdc58c3d17bffb670d9aebd3f8f4ac880148f544000961ed8c7660607/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb3a24fdc58c3d17bffb670d9aebd3f8f4ac880148f544000961ed8c7660607/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb3a24fdc58c3d17bffb670d9aebd3f8f4ac880148f544000961ed8c7660607/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb3a24fdc58c3d17bffb670d9aebd3f8f4ac880148f544000961ed8c7660607/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:03:55 np0005465604 podman[408645]: 2025-10-02 09:03:55.279150685 +0000 UTC m=+0.153434709 container init b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 05:03:55 np0005465604 podman[408645]: 2025-10-02 09:03:55.299824508 +0000 UTC m=+0.174108532 container start b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:03:55 np0005465604 podman[408645]: 2025-10-02 09:03:55.302865792 +0000 UTC m=+0.177149816 container attach b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 05:03:55 np0005465604 nova_compute[260603]: 2025-10-02 09:03:55.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:03:56 np0005465604 nova_compute[260603]: 2025-10-02 09:03:56.135 2 INFO nova.compute.manager [None req-37e81269-4f54-40c4-a885-c1bf2a5d477d ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Get console output#033[00m
Oct  2 05:03:56 np0005465604 nova_compute[260603]: 2025-10-02 09:03:56.144 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 05:03:56 np0005465604 focused_volhard[408662]: {
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "osd_id": 2,
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "type": "bluestore"
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:    },
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "osd_id": 1,
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "type": "bluestore"
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:    },
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "osd_id": 0,
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:        "type": "bluestore"
Oct  2 05:03:56 np0005465604 focused_volhard[408662]:    }
Oct  2 05:03:56 np0005465604 focused_volhard[408662]: }
Oct  2 05:03:56 np0005465604 systemd[1]: libpod-b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324.scope: Deactivated successfully.
Oct  2 05:03:56 np0005465604 systemd[1]: libpod-b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324.scope: Consumed 1.039s CPU time.
Oct  2 05:03:56 np0005465604 podman[408645]: 2025-10-02 09:03:56.335951396 +0000 UTC m=+1.210235460 container died b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:03:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bfb3a24fdc58c3d17bffb670d9aebd3f8f4ac880148f544000961ed8c7660607-merged.mount: Deactivated successfully.
Oct  2 05:03:56 np0005465604 podman[408645]: 2025-10-02 09:03:56.399914743 +0000 UTC m=+1.274198787 container remove b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 05:03:56 np0005465604 systemd[1]: libpod-conmon-b7c4a6bb624d862651d520d71e446d47fe990787bb2c378474f1d24184758324.scope: Deactivated successfully.
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.461669) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836461847, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 682, "num_deletes": 251, "total_data_size": 791559, "memory_usage": 803736, "flush_reason": "Manual Compaction"}
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836468427, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 783850, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53712, "largest_seqno": 54393, "table_properties": {"data_size": 780282, "index_size": 1412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8201, "raw_average_key_size": 19, "raw_value_size": 773124, "raw_average_value_size": 1827, "num_data_blocks": 64, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395783, "oldest_key_time": 1759395783, "file_creation_time": 1759395836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 6687 microseconds, and 3100 cpu microseconds.
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.468494) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 783850 bytes OK
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.468519) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.470622) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.470637) EVENT_LOG_v1 {"time_micros": 1759395836470632, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.470654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 787974, prev total WAL file size 808561, number of live WAL files 2.
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.473206) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(765KB)], [125(8994KB)]
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836473238, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 9994335, "oldest_snapshot_seqno": -1}
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:03:56 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a02c48ef-4d18-4d3d-9710-958a51fb301a does not exist
Oct  2 05:03:56 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 925549ba-77b0-4f5c-bb79-6a4989a7c5a5 does not exist
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7287 keys, 8318868 bytes, temperature: kUnknown
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836517889, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8318868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8272997, "index_size": 26558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 190797, "raw_average_key_size": 26, "raw_value_size": 8145586, "raw_average_value_size": 1117, "num_data_blocks": 1028, "num_entries": 7287, "num_filter_entries": 7287, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759395836, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:03:56 np0005465604 nova_compute[260603]: 2025-10-02 09:03:56.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.518179) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8318868 bytes
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.520552) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.2 rd, 185.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.8 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(23.4) write-amplify(10.6) OK, records in: 7800, records dropped: 513 output_compression: NoCompression
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.520573) EVENT_LOG_v1 {"time_micros": 1759395836520564, "job": 76, "event": "compaction_finished", "compaction_time_micros": 44772, "compaction_time_cpu_micros": 24613, "output_level": 6, "num_output_files": 1, "total_output_size": 8318868, "num_input_records": 7800, "num_output_records": 7287, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836520849, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759395836522844, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.473095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.522928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.522940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.522943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.522945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:03:56.522947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:03:56 np0005465604 nova_compute[260603]: 2025-10-02 09:03:56.735 2 DEBUG nova.network.neutron [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated VIF entry in instance network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:03:56 np0005465604 nova_compute[260603]: 2025-10-02 09:03:56.736 2 DEBUG nova.network.neutron [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:03:56 np0005465604 nova_compute[260603]: 2025-10-02 09:03:56.756 2 DEBUG oslo_concurrency.lockutils [req-04545728-154c-4b61-b222-c3543041c79e req-bf614fc5-a5c1-430e-b228-16685f90c954 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.150 2 DEBUG nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.150 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.151 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.151 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.152 2 DEBUG nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.152 2 WARNING nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.153 2 DEBUG nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.153 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.154 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.154 2 DEBUG oslo_concurrency.lockutils [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.155 2 DEBUG nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:03:57 np0005465604 nova_compute[260603]: 2025-10-02 09:03:57.155 2 WARNING nova.compute.manager [req-6b4d43bc-a82c-474e-8761-e51f40d00fbc req-9aeff4fd-0c12-478d-9fce-06343de32c1c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:03:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:03:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:03:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 05:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:03:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.064 2 DEBUG nova.compute.manager [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.065 2 DEBUG nova.compute.manager [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing instance network info cache due to event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.065 2 DEBUG oslo_concurrency.lockutils [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.066 2 DEBUG oslo_concurrency.lockutils [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.066 2 DEBUG nova.network.neutron [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:03:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.279 2 INFO nova.compute.manager [None req-0f53f6e5-4200-43b5-a718-32d2783e3aa1 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Get console output#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.287 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:03:58 np0005465604 nova_compute[260603]: 2025-10-02 09:03:58.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:03:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2051790108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.029 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.131 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.132 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.135 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.135 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.256 2 DEBUG nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.257 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.257 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.257 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.258 2 DEBUG nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.258 2 WARNING nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.258 2 DEBUG nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.258 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.259 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.259 2 DEBUG oslo_concurrency.lockutils [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.259 2 DEBUG nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.259 2 WARNING nova.compute.manager [req-89bd7cf0-8ee4-4fd8-a8c1-1a8ec535d64e req-b9c678a1-eccb-4915-b614-4e6a85f33012 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.337 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.338 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3170MB free_disk=59.897216796875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.338 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.339 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.439 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.440 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.440 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.440 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.441 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.442 2 INFO nova.compute.manager [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Terminating instance#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.443 2 DEBUG nova.compute.manager [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.454 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.455 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance def7636a-ab83-489d-ba8d-6f3dd1ccc841 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.455 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.455 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:03:59 np0005465604 kernel: tap0d6d454c-ed (unregistering): left promiscuous mode
Oct  2 05:03:59 np0005465604 NetworkManager[45129]: <info>  [1759395839.5046] device (tap0d6d454c-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01454|binding|INFO|Releasing lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 from this chassis (sb_readonly=0)
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01455|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 down in Southbound
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01456|binding|INFO|Removing iface tap0d6d454c-ed ovn-installed in OVS
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.531 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:f1:a7 10.100.0.5'], port_security=['fa:16:3e:ac:f1:a7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'def7636a-ab83-489d-ba8d-6f3dd1ccc841', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b93b7300-a114-4509-a5d5-f258c80e5fdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0d6d454c-ed95-44d0-8bd1-e20589c708d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.533 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 in datapath 2c605495-2750-431a-94c8-fc1511dea80b unbound from our chassis#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.535 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c605495-2750-431a-94c8-fc1511dea80b#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:59 np0005465604 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct  2 05:03:59 np0005465604 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Consumed 13.109s CPU time.
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.560 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[246ab13d-fbbb-4270-a2c6-339cde2335cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 systemd-machined[214636]: Machine qemu-169-instance-00000087 terminated.
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.597 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.603 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[937b64d9-14e9-4f34-941c-3a2136c4ec5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.608 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5216203e-3247-419e-94cf-e5e271a2b890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.646 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a0054107-ac9e-44a8-834f-35c6eb72e3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 kernel: tap0d6d454c-ed: entered promiscuous mode
Oct  2 05:03:59 np0005465604 NetworkManager[45129]: <info>  [1759395839.6650] manager: (tap0d6d454c-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/587)
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01457|binding|INFO|Claiming lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 for this chassis.
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01458|binding|INFO|0d6d454c-ed95-44d0-8bd1-e20589c708d1: Claiming fa:16:3e:ac:f1:a7 10.100.0.5
Oct  2 05:03:59 np0005465604 kernel: tap0d6d454c-ed (unregistering): left promiscuous mode
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.674 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:f1:a7 10.100.0.5'], port_security=['fa:16:3e:ac:f1:a7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'def7636a-ab83-489d-ba8d-6f3dd1ccc841', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b93b7300-a114-4509-a5d5-f258c80e5fdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0d6d454c-ed95-44d0-8bd1-e20589c708d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.675 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[83eb457f-abf8-4e98-8a1e-a237ceb908da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 36735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408796, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.693 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5cdf3f-47e1-4890-ba51-a7675fa41617]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667608, 'tstamp': 667608}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408799, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667611, 'tstamp': 667611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408799, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.695 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01459|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 ovn-installed in OVS
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01460|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 up in Southbound
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01461|binding|INFO|Releasing lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 from this chassis (sb_readonly=1)
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01462|if_status|INFO|Dropped 2 log messages in last 576 seconds (most recently, 576 seconds ago) due to excessive rate
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01463|if_status|INFO|Not setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 down as sb is readonly
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01464|binding|INFO|Removing iface tap0d6d454c-ed ovn-installed in OVS
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01465|binding|INFO|Releasing lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 from this chassis (sb_readonly=0)
Oct  2 05:03:59 np0005465604 ovn_controller[152344]: 2025-10-02T09:03:59Z|01466|binding|INFO|Setting lport 0d6d454c-ed95-44d0-8bd1-e20589c708d1 down in Southbound
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.713 2 INFO nova.virt.libvirt.driver [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance destroyed successfully.#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.714 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:f1:a7 10.100.0.5'], port_security=['fa:16:3e:ac:f1:a7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'def7636a-ab83-489d-ba8d-6f3dd1ccc841', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b93b7300-a114-4509-a5d5-f258c80e5fdc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=0d6d454c-ed95-44d0-8bd1-e20589c708d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.714 2 DEBUG nova.objects.instance [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid def7636a-ab83-489d-ba8d-6f3dd1ccc841 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.725 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c605495-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.726 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.726 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c605495-20, col_values=(('external_ids', {'iface-id': '88f0a719-bef7-4fa7-ad0c-3658148f5bdf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.727 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.728 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 in datapath 2c605495-2750-431a-94c8-fc1511dea80b unbound from our chassis#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.729 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c605495-2750-431a-94c8-fc1511dea80b#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.731 2 DEBUG nova.virt.libvirt.vif [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:03:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-556721783',display_name='tempest-TestNetworkBasicOps-server-556721783',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-556721783',id=135,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBWd49oFUTm6XgyrgFAJAvAD9R5S9h35IghxF+WDuWxqO67NuyfvmqcjpF3R4Okql0uPjy7xGWqOKFWo5bhMt5wCOH87LjC+Dpu6giEiY38iIQYyWXpiLlgRndqhZX6/w==',key_name='tempest-TestNetworkBasicOps-141517795',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:03:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-kkoo640r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:03:35Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=def7636a-ab83-489d-ba8d-6f3dd1ccc841,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.732 2 DEBUG nova.network.os_vif_util [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "address": "fa:16:3e:ac:f1:a7", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d6d454c-ed", "ovs_interfaceid": "0d6d454c-ed95-44d0-8bd1-e20589c708d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.733 2 DEBUG nova.network.os_vif_util [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.733 2 DEBUG os_vif [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d6d454c-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.747 2 INFO os_vif [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:f1:a7,bridge_name='br-int',has_traffic_filtering=True,id=0d6d454c-ed95-44d0-8bd1-e20589c708d1,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d6d454c-ed')#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.748 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b2caa98c-64be-440d-b541-e234bb0796fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.786 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[49405ad4-f7cd-4856-b42d-2c8982f01946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.790 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3aefb1bb-a76b-401e-9115-e43b70fd720e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.820 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ca944b13-aa55-4fc2-a85a-4bb4640a5af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.848 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc3cb0e-fc27-4359-af5a-47f2e306e147]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 36735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408844, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.868 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[721b79e8-1666-4d5d-9e0d-2dc53dc42575]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667608, 'tstamp': 667608}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408845, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667611, 'tstamp': 667611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408845, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.870 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.872 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c605495-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.873 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.873 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c605495-20, col_values=(('external_ids', {'iface-id': '88f0a719-bef7-4fa7-ad0c-3658148f5bdf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.873 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.874 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 in datapath 2c605495-2750-431a-94c8-fc1511dea80b unbound from our chassis#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.875 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c605495-2750-431a-94c8-fc1511dea80b#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.890 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b97d7350-03c2-4205-bfb1-6bb0c2c99999]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 730 KiB/s wr, 14 op/s
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.928 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4513f1-ff45-4a57-b083-cf04ec11c7e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.931 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e3688d-c619-4f45-8109-78b7dcd22721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.959 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[484c1158-5b9b-4611-8d89-377463f75e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.975 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c36adf33-7a71-404c-a54c-5fdcc914239a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c605495-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:5b:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667596, 'reachable_time': 36735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408852, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.990 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4ebb8c-a1af-4d12-b30c-e7c8c79c373d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667608, 'tstamp': 667608}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408853, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2c605495-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667611, 'tstamp': 667611}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408853, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.991 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c605495-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:59 np0005465604 nova_compute[260603]: 2025-10-02 09:03:59.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.994 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.995 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c605495-20, col_values=(('external_ids', {'iface-id': '88f0a719-bef7-4fa7-ad0c-3658148f5bdf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:03:59 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:03:59.995 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:04:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:04:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3658447094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.059 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.065 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.093 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.109 2 INFO nova.virt.libvirt.driver [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Deleting instance files /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841_del#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.110 2 INFO nova.virt.libvirt.driver [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Deletion of /var/lib/nova/instances/def7636a-ab83-489d-ba8d-6f3dd1ccc841_del complete#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.158 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.159 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.180 2 INFO nova.compute.manager [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.181 2 DEBUG oslo.service.loopingcall [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.181 2 DEBUG nova.compute.manager [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.181 2 DEBUG nova.network.neutron [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.202 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.203 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.203 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.204 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.204 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.204 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.205 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.205 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.205 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.205 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.206 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.206 2 WARNING nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.206 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.207 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.207 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.207 2 DEBUG oslo_concurrency.lockutils [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.208 2 DEBUG nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.208 2 WARNING nova.compute.manager [req-dcc14717-1f3b-43ff-a130-be94e91da071 req-59779ba1-6db4-410c-b591-9264b4fb1e3f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.303 2 DEBUG nova.network.neutron [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated VIF entry in instance network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.304 2 DEBUG nova.network.neutron [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.320 2 DEBUG oslo_concurrency.lockutils [req-817259e0-1fc5-4924-b18f-981c8b5aa44c req-79e27690-da55-4f8c-bdc3-12823eb49cdc 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.950 2 DEBUG nova.network.neutron [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:04:00 np0005465604 nova_compute[260603]: 2025-10-02 09:04:00.969 2 INFO nova.compute.manager [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Took 0.79 seconds to deallocate network for instance.#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.014 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.015 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.084 2 DEBUG oslo_concurrency.processutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.359 2 DEBUG nova.compute.manager [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.359 2 DEBUG nova.compute.manager [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing instance network info cache due to event network-changed-0d6d454c-ed95-44d0-8bd1-e20589c708d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.360 2 DEBUG oslo_concurrency.lockutils [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.360 2 DEBUG oslo_concurrency.lockutils [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.360 2 DEBUG nova.network.neutron [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Refreshing network info cache for port 0d6d454c-ed95-44d0-8bd1-e20589c708d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:04:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:04:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2114879543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.560 2 DEBUG oslo_concurrency.processutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.565 2 DEBUG nova.compute.provider_tree [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.633 2 DEBUG nova.scheduler.client.report [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.708 2 DEBUG nova.network.neutron [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.805 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 200 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 17 KiB/s wr, 5 op/s
Oct  2 05:04:01 np0005465604 nova_compute[260603]: 2025-10-02 09:04:01.942 2 INFO nova.scheduler.client.report [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance def7636a-ab83-489d-ba8d-6f3dd1ccc841#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.123 2 DEBUG oslo_concurrency.lockutils [None req-e8b49bf2-a803-4da4-8b2e-d63494b428f2 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.154 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.180 2 DEBUG nova.network.neutron [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.274 2 DEBUG oslo_concurrency.lockutils [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-def7636a-ab83-489d-ba8d-6f3dd1ccc841" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.275 2 DEBUG nova.compute.manager [req-fb48278e-350a-4b05-a00c-f0931238cc2f req-c46e1250-a680-4787-96ad-89a9090cce2d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-deleted-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.356 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.356 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.357 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.357 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.358 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.358 2 WARNING nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.358 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.359 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.359 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.360 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.360 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.361 2 WARNING nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-unplugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.361 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.362 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.362 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.363 2 DEBUG oslo_concurrency.lockutils [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "def7636a-ab83-489d-ba8d-6f3dd1ccc841-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.363 2 DEBUG nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] No waiting events found dispatching network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:04:02 np0005465604 nova_compute[260603]: 2025-10-02 09:04:02.364 2 WARNING nova.compute.manager [req-68fd1c96-a00a-4a02-bc47-3dc6c59d2cb2 req-b2f69402-f755-4614-92a3-a9274fd2cfa5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Received unexpected event network-vif-plugged-0d6d454c-ed95-44d0-8bd1-e20589c708d1 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:04:02 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:02.892 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:04:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 121 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 25 KiB/s wr, 34 op/s
Oct  2 05:04:04 np0005465604 nova_compute[260603]: 2025-10-02 09:04:04.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.116 2 DEBUG nova.compute.manager [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.117 2 DEBUG nova.compute.manager [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing instance network info cache due to event network-changed-3df4c898-fd96-4b0f-90ee-add24ca56aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.118 2 DEBUG oslo_concurrency.lockutils [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.118 2 DEBUG oslo_concurrency.lockutils [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.118 2 DEBUG nova.network.neutron [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Refreshing network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.167 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.168 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.168 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.169 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.169 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.170 2 INFO nova.compute.manager [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Terminating instance#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.172 2 DEBUG nova.compute.manager [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:04:05 np0005465604 kernel: tap3df4c898-fd (unregistering): left promiscuous mode
Oct  2 05:04:05 np0005465604 NetworkManager[45129]: <info>  [1759395845.2455] device (tap3df4c898-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:04:05 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:05Z|01467|binding|INFO|Releasing lport 3df4c898-fd96-4b0f-90ee-add24ca56aa2 from this chassis (sb_readonly=0)
Oct  2 05:04:05 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:05Z|01468|binding|INFO|Setting lport 3df4c898-fd96-4b0f-90ee-add24ca56aa2 down in Southbound
Oct  2 05:04:05 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:05Z|01469|binding|INFO|Removing iface tap3df4c898-fd ovn-installed in OVS
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.264 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:95:1a 10.100.0.12'], port_security=['fa:16:3e:b0:95:1a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '64577e3d-aa56-4fa9-a1b5-dc76a7a80754', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c605495-2750-431a-94c8-fc1511dea80b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3f414f43-ca38-4bce-aa83-1fdd3cd738fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df881951-0cfd-4c8c-9854-241ef8244cff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3df4c898-fd96-4b0f-90ee-add24ca56aa2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.266 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3df4c898-fd96-4b0f-90ee-add24ca56aa2 in datapath 2c605495-2750-431a-94c8-fc1511dea80b unbound from our chassis#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.268 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2c605495-2750-431a-94c8-fc1511dea80b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.269 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[76dbd92f-1420-48a4-894d-e758ac560bcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.269 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b namespace which is not needed anymore#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:05 np0005465604 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Deactivated successfully.
Oct  2 05:04:05 np0005465604 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Consumed 13.586s CPU time.
Oct  2 05:04:05 np0005465604 systemd-machined[214636]: Machine qemu-168-instance-00000086 terminated.
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.417 2 INFO nova.virt.libvirt.driver [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Instance destroyed successfully.#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.417 2 DEBUG nova.objects.instance [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.431 2 DEBUG nova.virt.libvirt.vif [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:03:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2044507795',display_name='tempest-TestNetworkBasicOps-server-2044507795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2044507795',id=134,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJyVGuJYWkz5cUrKkan7kCe4MbiRPaa7v1Q6d5xWchgY/tX4JduFQB6JZ0q369VSitON6EJsRLVHtNMGsTz7PTKCbeKQtDY6sbIs7RX5gGPDqTs/0LJrpZ68VxyA10mrYQ==',key_name='tempest-TestNetworkBasicOps-2118371598',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:03:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-oalw9fqv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:03:18Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=64577e3d-aa56-4fa9-a1b5-dc76a7a80754,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.432 2 DEBUG nova.network.os_vif_util [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.432 2 DEBUG nova.network.os_vif_util [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.433 2 DEBUG os_vif [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.435 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3df4c898-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.445 2 INFO os_vif [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:95:1a,bridge_name='br-int',has_traffic_filtering=True,id=3df4c898-fd96-4b0f-90ee-add24ca56aa2,network=Network(2c605495-2750-431a-94c8-fc1511dea80b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3df4c898-fd')#033[00m
Oct  2 05:04:05 np0005465604 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [NOTICE]   (407387) : haproxy version is 2.8.14-c23fe91
Oct  2 05:04:05 np0005465604 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [NOTICE]   (407387) : path to executable is /usr/sbin/haproxy
Oct  2 05:04:05 np0005465604 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [WARNING]  (407387) : Exiting Master process...
Oct  2 05:04:05 np0005465604 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [WARNING]  (407387) : Exiting Master process...
Oct  2 05:04:05 np0005465604 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [ALERT]    (407387) : Current worker (407389) exited with code 143 (Terminated)
Oct  2 05:04:05 np0005465604 neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b[407383]: [WARNING]  (407387) : All workers exited. Exiting... (0)
Oct  2 05:04:05 np0005465604 systemd[1]: libpod-620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43.scope: Deactivated successfully.
Oct  2 05:04:05 np0005465604 podman[408902]: 2025-10-02 09:04:05.458235719 +0000 UTC m=+0.065293050 container died 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 05:04:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43-userdata-shm.mount: Deactivated successfully.
Oct  2 05:04:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-addf8c19743aa1800a38e7a2c27551ad430b760bd20734d7536da7cb23fa4ebc-merged.mount: Deactivated successfully.
Oct  2 05:04:05 np0005465604 podman[408902]: 2025-10-02 09:04:05.497019894 +0000 UTC m=+0.104077225 container cleanup 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 05:04:05 np0005465604 systemd[1]: libpod-conmon-620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43.scope: Deactivated successfully.
Oct  2 05:04:05 np0005465604 podman[408962]: 2025-10-02 09:04:05.575442841 +0000 UTC m=+0.046532727 container remove 620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.585 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cbda5f44-b7e1-46b0-850a-c192134244f8]: (4, ('Thu Oct  2 09:04:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b (620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43)\n620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43\nThu Oct  2 09:04:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b (620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43)\n620d6b087a4dd0c6c11328cddcb24911bf65ce248ffbe24f85b38dbf4c58ef43\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.588 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eeef59f1-0037-4e6d-8af5-350a5f859d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.590 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c605495-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:05 np0005465604 kernel: tap2c605495-20: left promiscuous mode
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.624 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fd9248-6804-4a8a-bc17-fa9dbea7fb38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.651 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ed13a5b0-77d9-47e2-a01f-171e98085474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.652 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[de9253f7-6ffe-44ed-8c11-deee87ae088d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.668 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d12e57e1-bb59-4d5d-9121-9302f4a07b15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667588, 'reachable_time': 41444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408979, 'error': None, 'target': 'ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:05 np0005465604 systemd[1]: run-netns-ovnmeta\x2d2c605495\x2d2750\x2d431a\x2d94c8\x2dfc1511dea80b.mount: Deactivated successfully.
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.672 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2c605495-2750-431a-94c8-fc1511dea80b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:04:05 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:05.672 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf03fda-1ca5-4b7f-a5fb-d21e6de36bc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.792 2 INFO nova.virt.libvirt.driver [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Deleting instance files /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754_del#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.793 2 INFO nova.virt.libvirt.driver [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Deletion of /var/lib/nova/instances/64577e3d-aa56-4fa9-a1b5-dc76a7a80754_del complete#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.866 2 INFO nova.compute.manager [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.866 2 DEBUG oslo.service.loopingcall [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.867 2 DEBUG nova.compute.manager [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:04:05 np0005465604 nova_compute[260603]: 2025-10-02 09:04:05.867 2 DEBUG nova.network.neutron [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:04:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 121 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 29 op/s
Oct  2 05:04:06 np0005465604 nova_compute[260603]: 2025-10-02 09:04:06.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:04:06 np0005465604 nova_compute[260603]: 2025-10-02 09:04:06.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 05:04:06 np0005465604 nova_compute[260603]: 2025-10-02 09:04:06.543 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.108 2 DEBUG nova.network.neutron [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.129 2 INFO nova.compute.manager [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Took 1.26 seconds to deallocate network for instance.#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.165 2 DEBUG nova.network.neutron [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updated VIF entry in instance network info cache for port 3df4c898-fd96-4b0f-90ee-add24ca56aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.166 2 DEBUG nova.network.neutron [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [{"id": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "address": "fa:16:3e:b0:95:1a", "network": {"id": "2c605495-2750-431a-94c8-fc1511dea80b", "bridge": "br-int", "label": "tempest-network-smoke--1102896332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3df4c898-fd", "ovs_interfaceid": "3df4c898-fd96-4b0f-90ee-add24ca56aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.193 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.194 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.197 2 DEBUG oslo_concurrency.lockutils [req-947a2ca1-8c4d-408b-8e63-e4512ffdfb9c req-a4480b6d-829b-4355-a6c9-58acffcc3b89 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-64577e3d-aa56-4fa9-a1b5-dc76a7a80754" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.215 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.216 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.216 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.217 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.217 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.218 2 WARNING nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-unplugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.218 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.219 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.219 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.219 2 DEBUG oslo_concurrency.lockutils [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.220 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] No waiting events found dispatching network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.220 2 WARNING nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received unexpected event network-vif-plugged-3df4c898-fd96-4b0f-90ee-add24ca56aa2 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.221 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Received event network-vif-deleted-3df4c898-fd96-4b0f-90ee-add24ca56aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.221 2 INFO nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Neutron deleted interface 3df4c898-fd96-4b0f-90ee-add24ca56aa2; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.221 2 DEBUG nova.network.neutron [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.261 2 DEBUG nova.compute.manager [req-eee60b6f-75b7-4b6b-8543-0a49d85c1455 req-cbae889d-7101-42f5-946f-71bc4caba161 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Detach interface failed, port_id=3df4c898-fd96-4b0f-90ee-add24ca56aa2, reason: Instance 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.272 2 DEBUG oslo_concurrency.processutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.544 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:04:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:04:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322879297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.743 2 DEBUG oslo_concurrency.processutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.752 2 DEBUG nova.compute.provider_tree [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.779 2 DEBUG nova.scheduler.client.report [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.816 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.853 2 INFO nova.scheduler.client.report [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 64577e3d-aa56-4fa9-a1b5-dc76a7a80754#033[00m
Oct  2 05:04:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 64 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 21 KiB/s wr, 44 op/s
Oct  2 05:04:07 np0005465604 nova_compute[260603]: 2025-10-02 09:04:07.934 2 DEBUG oslo_concurrency.lockutils [None req-120a5bbf-b6cd-4ba0-978d-f9887c3f093c ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "64577e3d-aa56-4fa9-a1b5-dc76a7a80754" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:08 np0005465604 podman[409003]: 2025-10-02 09:04:08.045637094 +0000 UTC m=+0.090016057 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:04:08 np0005465604 podman[409002]: 2025-10-02 09:04:08.080654133 +0000 UTC m=+0.133555981 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 05:04:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.3 KiB/s wr, 56 op/s
Oct  2 05:04:10 np0005465604 nova_compute[260603]: 2025-10-02 09:04:10.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:10 np0005465604 nova_compute[260603]: 2025-10-02 09:04:10.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:11 np0005465604 nova_compute[260603]: 2025-10-02 09:04:11.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:11 np0005465604 nova_compute[260603]: 2025-10-02 09:04:11.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct  2 05:04:13 np0005465604 podman[409048]: 2025-10-02 09:04:13.016629122 +0000 UTC m=+0.080494582 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:04:13 np0005465604 podman[409047]: 2025-10-02 09:04:13.030300128 +0000 UTC m=+0.092880888 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 05:04:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Oct  2 05:04:14 np0005465604 nova_compute[260603]: 2025-10-02 09:04:14.707 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395839.702803, def7636a-ab83-489d-ba8d-6f3dd1ccc841 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:04:14 np0005465604 nova_compute[260603]: 2025-10-02 09:04:14.708 2 INFO nova.compute.manager [-] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] VM Stopped (Lifecycle Event)
Oct  2 05:04:14 np0005465604 nova_compute[260603]: 2025-10-02 09:04:14.730 2 DEBUG nova.compute.manager [None req-21e9fb51-1d39-4308-a62e-30a6086d94e8 - - - - - -] [instance: def7636a-ab83-489d-ba8d-6f3dd1ccc841] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:04:15 np0005465604 nova_compute[260603]: 2025-10-02 09:04:15.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:15 np0005465604 nova_compute[260603]: 2025-10-02 09:04:15.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:04:16 np0005465604 nova_compute[260603]: 2025-10-02 09:04:16.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:04:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:04:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 341 B/s wr, 12 op/s
Oct  2 05:04:20 np0005465604 nova_compute[260603]: 2025-10-02 09:04:20.415 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395845.4146574, 64577e3d-aa56-4fa9-a1b5-dc76a7a80754 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:04:20 np0005465604 nova_compute[260603]: 2025-10-02 09:04:20.416 2 INFO nova.compute.manager [-] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] VM Stopped (Lifecycle Event)
Oct  2 05:04:20 np0005465604 nova_compute[260603]: 2025-10-02 09:04:20.441 2 DEBUG nova.compute.manager [None req-1f32a34e-c674-4f41-9705-c61ab9c62821 - - - - - -] [instance: 64577e3d-aa56-4fa9-a1b5-dc76a7a80754] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:04:20 np0005465604 nova_compute[260603]: 2025-10-02 09:04:20.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:20 np0005465604 nova_compute[260603]: 2025-10-02 09:04:20.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:04:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:04:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1884056596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:04:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:04:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1884056596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:04:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:04:24 np0005465604 nova_compute[260603]: 2025-10-02 09:04:24.540 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:04:24 np0005465604 nova_compute[260603]: 2025-10-02 09:04:24.540 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  2 05:04:25 np0005465604 nova_compute[260603]: 2025-10-02 09:04:25.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:25 np0005465604 nova_compute[260603]: 2025-10-02 09:04:25.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:04:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:04:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:04:28
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.mgr', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:04:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:04:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:04:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:04:30 np0005465604 nova_compute[260603]: 2025-10-02 09:04:30.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:30 np0005465604 nova_compute[260603]: 2025-10-02 09:04:30.555 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:04:30 np0005465604 nova_compute[260603]: 2025-10-02 09:04:30.556 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:04:30 np0005465604 nova_compute[260603]: 2025-10-02 09:04:30.572 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 05:04:30 np0005465604 nova_compute[260603]: 2025-10-02 09:04:30.648 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:04:30 np0005465604 nova_compute[260603]: 2025-10-02 09:04:30.649 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:04:30 np0005465604 nova_compute[260603]: 2025-10-02 09:04:30.658 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 05:04:30 np0005465604 nova_compute[260603]: 2025-10-02 09:04:30.659 2 INFO nova.compute.claims [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Claim successful on node compute-0.ctlplane.example.com
Oct  2 05:04:30 np0005465604 nova_compute[260603]: 2025-10-02 09:04:30.776 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:04:30 np0005465604 nova_compute[260603]: 2025-10-02 09:04:30.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:04:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:04:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/13869599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.230 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.237 2 DEBUG nova.compute.provider_tree [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.253 2 DEBUG nova.scheduler.client.report [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.277 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.278 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.325 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.326 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.346 2 INFO nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.365 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.460 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.462 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.462 2 INFO nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Creating image(s)
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.494 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.525 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.549 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.553 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.593 2 DEBUG nova.policy [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed58c0dbe2eb44a6969a40202da07416', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.641 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.641 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.642 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.642 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.665 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.669 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 4e056573-a7f9-40a3-b57a-8415af28183c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:04:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:04:31 np0005465604 nova_compute[260603]: 2025-10-02 09:04:31.981 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 4e056573-a7f9-40a3-b57a-8415af28183c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:04:32 np0005465604 nova_compute[260603]: 2025-10-02 09:04:32.067 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] resizing rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 05:04:32 np0005465604 nova_compute[260603]: 2025-10-02 09:04:32.178 2 DEBUG nova.objects.instance [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e056573-a7f9-40a3-b57a-8415af28183c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 05:04:32 np0005465604 nova_compute[260603]: 2025-10-02 09:04:32.196 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 05:04:32 np0005465604 nova_compute[260603]: 2025-10-02 09:04:32.197 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Ensure instance console log exists: /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 05:04:32 np0005465604 nova_compute[260603]: 2025-10-02 09:04:32.198 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:04:32 np0005465604 nova_compute[260603]: 2025-10-02 09:04:32.198 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:04:32 np0005465604 nova_compute[260603]: 2025-10-02 09:04:32.199 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:04:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct  2 05:04:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:34.843 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:34.844 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:34.844 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:35 np0005465604 nova_compute[260603]: 2025-10-02 09:04:35.529 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Successfully created port: 3426f15c-bff7-478f-a7d7-2fd7499af1c4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:04:35 np0005465604 nova_compute[260603]: 2025-10-02 09:04:35.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:35 np0005465604 nova_compute[260603]: 2025-10-02 09:04:35.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct  2 05:04:36 np0005465604 nova_compute[260603]: 2025-10-02 09:04:36.536 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:04:36 np0005465604 nova_compute[260603]: 2025-10-02 09:04:36.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:04:37 np0005465604 nova_compute[260603]: 2025-10-02 09:04:37.809 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Successfully updated port: 3426f15c-bff7-478f-a7d7-2fd7499af1c4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:04:37 np0005465604 nova_compute[260603]: 2025-10-02 09:04:37.825 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:04:37 np0005465604 nova_compute[260603]: 2025-10-02 09:04:37.825 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquired lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:04:37 np0005465604 nova_compute[260603]: 2025-10-02 09:04:37.825 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:04:37 np0005465604 nova_compute[260603]: 2025-10-02 09:04:37.913 2 DEBUG nova.compute.manager [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:37 np0005465604 nova_compute[260603]: 2025-10-02 09:04:37.914 2 DEBUG nova.compute.manager [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing instance network info cache due to event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:04:37 np0005465604 nova_compute[260603]: 2025-10-02 09:04:37.914 2 DEBUG oslo_concurrency.lockutils [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:04:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:04:37 np0005465604 nova_compute[260603]: 2025-10-02 09:04:37.963 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:04:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:38 np0005465604 podman[409276]: 2025-10-02 09:04:38.996297074 +0000 UTC m=+0.057006752 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:04:39 np0005465604 podman[409275]: 2025-10-02 09:04:39.075543978 +0000 UTC m=+0.134869553 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 05:04:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:04:39 np0005465604 nova_compute[260603]: 2025-10-02 09:04:39.960 2 DEBUG nova.network.neutron [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updating instance_info_cache with network_info: [{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:04:39 np0005465604 nova_compute[260603]: 2025-10-02 09:04:39.985 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Releasing lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:04:39 np0005465604 nova_compute[260603]: 2025-10-02 09:04:39.985 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Instance network_info: |[{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:04:39 np0005465604 nova_compute[260603]: 2025-10-02 09:04:39.985 2 DEBUG oslo_concurrency.lockutils [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:04:39 np0005465604 nova_compute[260603]: 2025-10-02 09:04:39.986 2 DEBUG nova.network.neutron [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:04:39 np0005465604 nova_compute[260603]: 2025-10-02 09:04:39.988 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Start _get_guest_xml network_info=[{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:04:39 np0005465604 nova_compute[260603]: 2025-10-02 09:04:39.992 2 WARNING nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:39.998 2 DEBUG nova.virt.libvirt.host [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.000 2 DEBUG nova.virt.libvirt.host [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.017 2 DEBUG nova.virt.libvirt.host [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.018 2 DEBUG nova.virt.libvirt.host [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.018 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.018 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.019 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.019 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.019 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.019 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.020 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.020 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.020 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.020 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.020 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.021 2 DEBUG nova.virt.hardware [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.023 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:04:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:04:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207960740' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.548 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.571 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.575 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:04:40 np0005465604 nova_compute[260603]: 2025-10-02 09:04:40.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:04:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2930881053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.011 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.014 2 DEBUG nova.virt.libvirt.vif [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:04:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-702449656',display_name='tempest-TestNetworkBasicOps-server-702449656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-702449656',id=136,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG4yVL3kWMKssiMMTxBkYCYMGOyIrMgKWW6mSphQLUGjYPZbZn8kAoyGfeCty+GAJO7M0ajY8H+P8bZ7xOYgSuU5Lwh2F7EvM3DqGuRUQe2gvgkzvhd/Nxhr2daBHDjEw==',key_name='tempest-TestNetworkBasicOps-1401796894',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-aolvfudx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:04:31Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=4e056573-a7f9-40a3-b57a-8415af28183c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.015 2 DEBUG nova.network.os_vif_util [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.017 2 DEBUG nova.network.os_vif_util [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.020 2 DEBUG nova.objects.instance [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e056573-a7f9-40a3-b57a-8415af28183c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.045 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  <uuid>4e056573-a7f9-40a3-b57a-8415af28183c</uuid>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  <name>instance-00000088</name>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestNetworkBasicOps-server-702449656</nova:name>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:04:39</nova:creationTime>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <nova:user uuid="ed58c0dbe2eb44a6969a40202da07416">tempest-TestNetworkBasicOps-67113886-project-member</nova:user>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <nova:project uuid="5f3ce144e8c54c29bd54d3b61166b175">tempest-TestNetworkBasicOps-67113886</nova:project>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <nova:port uuid="3426f15c-bff7-478f-a7d7-2fd7499af1c4">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <entry name="serial">4e056573-a7f9-40a3-b57a-8415af28183c</entry>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <entry name="uuid">4e056573-a7f9-40a3-b57a-8415af28183c</entry>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/4e056573-a7f9-40a3-b57a-8415af28183c_disk">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/4e056573-a7f9-40a3-b57a-8415af28183c_disk.config">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:bf:3b:0c"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <target dev="tap3426f15c-bf"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/console.log" append="off"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:04:41 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:04:41 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:04:41 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:04:41 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.047 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Preparing to wait for external event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.048 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.048 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.049 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.050 2 DEBUG nova.virt.libvirt.vif [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:04:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-702449656',display_name='tempest-TestNetworkBasicOps-server-702449656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-702449656',id=136,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG4yVL3kWMKssiMMTxBkYCYMGOyIrMgKWW6mSphQLUGjYPZbZn8kAoyGfeCty+GAJO7M0ajY8H+P8bZ7xOYgSuU5Lwh2F7EvM3DqGuRUQe2gvgkzvhd/Nxhr2daBHDjEw==',key_name='tempest-TestNetworkBasicOps-1401796894',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-aolvfudx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:04:31Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=4e056573-a7f9-40a3-b57a-8415af28183c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.051 2 DEBUG nova.network.os_vif_util [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.052 2 DEBUG nova.network.os_vif_util [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.053 2 DEBUG os_vif [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.054 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.064 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3426f15c-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.065 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3426f15c-bf, col_values=(('external_ids', {'iface-id': '3426f15c-bff7-478f-a7d7-2fd7499af1c4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:3b:0c', 'vm-uuid': '4e056573-a7f9-40a3-b57a-8415af28183c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:41 np0005465604 NetworkManager[45129]: <info>  [1759395881.0691] manager: (tap3426f15c-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/588)
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.075 2 INFO os_vif [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf')#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.140 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.141 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.142 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] No VIF found with MAC fa:16:3e:bf:3b:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.143 2 INFO nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Using config drive#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.177 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.913 2 INFO nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Creating config drive at /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.918 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphzinsskj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:04:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.958 2 DEBUG nova.network.neutron [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updated VIF entry in instance network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.959 2 DEBUG nova.network.neutron [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updating instance_info_cache with network_info: [{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:04:41 np0005465604 nova_compute[260603]: 2025-10-02 09:04:41.975 2 DEBUG oslo_concurrency.lockutils [req-55129212-a150-48e3-b693-3f073a8e2702 req-db1826ff-defb-4e3b-9244-896040796c97 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.068 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphzinsskj" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.106 2 DEBUG nova.storage.rbd_utils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] rbd image 4e056573-a7f9-40a3-b57a-8415af28183c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.112 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config 4e056573-a7f9-40a3-b57a-8415af28183c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.311 2 DEBUG oslo_concurrency.processutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config 4e056573-a7f9-40a3-b57a-8415af28183c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.313 2 INFO nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Deleting local config drive /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c/disk.config because it was imported into RBD.#033[00m
Oct  2 05:04:42 np0005465604 kernel: tap3426f15c-bf: entered promiscuous mode
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:42Z|01470|binding|INFO|Claiming lport 3426f15c-bff7-478f-a7d7-2fd7499af1c4 for this chassis.
Oct  2 05:04:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:42Z|01471|binding|INFO|3426f15c-bff7-478f-a7d7-2fd7499af1c4: Claiming fa:16:3e:bf:3b:0c 10.100.0.6
Oct  2 05:04:42 np0005465604 NetworkManager[45129]: <info>  [1759395882.3850] manager: (tap3426f15c-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/589)
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.415 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:3b:0c 10.100.0.6'], port_security=['fa:16:3e:bf:3b:0c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e056573-a7f9-40a3-b57a-8415af28183c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b56304ae-559d-4697-b965-787fd568f6ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56d2bc0b-a112-4501-852b-a30e94c83df4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8bddaa3-c9e7-4b8d-b560-6df44a76261b, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3426f15c-bff7-478f-a7d7-2fd7499af1c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.416 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3426f15c-bff7-478f-a7d7-2fd7499af1c4 in datapath b56304ae-559d-4697-b965-787fd568f6ea bound to our chassis#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.417 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b56304ae-559d-4697-b965-787fd568f6ea#033[00m
Oct  2 05:04:42 np0005465604 systemd-machined[214636]: New machine qemu-170-instance-00000088.
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.430 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8800f1-f50f-469c-a3a8-4fea82bbac95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.431 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb56304ae-51 in ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.433 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb56304ae-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b04bb4ad-353e-4075-9ced-322458b04666]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.434 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18e7784f-c061-421b-9792-92127825e797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.445 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe6637d-082e-4400-8c0e-2864fbd66c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.476 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b9c5cf-f01a-4652-abc1-850544eabaef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 systemd[1]: Started Virtual Machine qemu-170-instance-00000088.
Oct  2 05:04:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:42Z|01472|binding|INFO|Setting lport 3426f15c-bff7-478f-a7d7-2fd7499af1c4 ovn-installed in OVS
Oct  2 05:04:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:42Z|01473|binding|INFO|Setting lport 3426f15c-bff7-478f-a7d7-2fd7499af1c4 up in Southbound
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:42 np0005465604 systemd-udevd[409461]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:04:42 np0005465604 NetworkManager[45129]: <info>  [1759395882.5009] device (tap3426f15c-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:04:42 np0005465604 NetworkManager[45129]: <info>  [1759395882.5019] device (tap3426f15c-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.505 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[815553ef-d3d3-4a4b-8d4b-2dafffd56354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 systemd-udevd[409465]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.509 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d9aa6c-81af-4610-b890-3b70a128ada6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 NetworkManager[45129]: <info>  [1759395882.5107] manager: (tapb56304ae-50): new Veth device (/org/freedesktop/NetworkManager/Devices/590)
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.534 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e16344-0fad-401c-b6fa-0af492ede544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.538 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3861cc3c-ea70-45a1-9937-a542d5c9cf03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 NetworkManager[45129]: <info>  [1759395882.5586] device (tapb56304ae-50): carrier: link connected
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.564 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4c89c72b-8fa5-4e00-9797-f85b594602c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.581 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[514b9cfb-05b4-49bc-ab08-a772f56661b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb56304ae-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:db:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676116, 'reachable_time': 42425, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409490, 'error': None, 'target': 'ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.595 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fd42bf83-820e-4ca7-b7be-87c4f85a71d3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:db41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676116, 'tstamp': 676116}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409491, 'error': None, 'target': 'ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.613 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[133e4f50-f855-4c35-bd87-c13ce5f25764]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb56304ae-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:db:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 415], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676116, 'reachable_time': 42425, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 409492, 'error': None, 'target': 'ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.649 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac5d415-3d25-402b-bed4-5488f698de61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.738 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d25625-53e1-4472-9eee-6f1c73c4b4a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.740 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb56304ae-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.740 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.741 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb56304ae-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:42 np0005465604 NetworkManager[45129]: <info>  [1759395882.7449] manager: (tapb56304ae-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Oct  2 05:04:42 np0005465604 kernel: tapb56304ae-50: entered promiscuous mode
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.749 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb56304ae-50, col_values=(('external_ids', {'iface-id': '4b059aed-66ee-4478-ba1c-eaee8b0f3c46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:42Z|01474|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.779 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b56304ae-559d-4697-b965-787fd568f6ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b56304ae-559d-4697-b965-787fd568f6ea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.780 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[22962bf9-70ec-45cb-ae12-cfffde1f4fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.781 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-b56304ae-559d-4697-b965-787fd568f6ea
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/b56304ae-559d-4697-b965-787fd568f6ea.pid.haproxy
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID b56304ae-559d-4697-b965-787fd568f6ea
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:04:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:04:42.782 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea', 'env', 'PROCESS_TAG=haproxy-b56304ae-559d-4697-b965-787fd568f6ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b56304ae-559d-4697-b965-787fd568f6ea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.907 2 DEBUG nova.compute.manager [req-1032a2e2-1753-4876-a709-c40cb02a304d req-3a4ab219-81cf-427f-8e41-6a0d04cb805d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.907 2 DEBUG oslo_concurrency.lockutils [req-1032a2e2-1753-4876-a709-c40cb02a304d req-3a4ab219-81cf-427f-8e41-6a0d04cb805d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.908 2 DEBUG oslo_concurrency.lockutils [req-1032a2e2-1753-4876-a709-c40cb02a304d req-3a4ab219-81cf-427f-8e41-6a0d04cb805d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.908 2 DEBUG oslo_concurrency.lockutils [req-1032a2e2-1753-4876-a709-c40cb02a304d req-3a4ab219-81cf-427f-8e41-6a0d04cb805d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:42 np0005465604 nova_compute[260603]: 2025-10-02 09:04:42.908 2 DEBUG nova.compute.manager [req-1032a2e2-1753-4876-a709-c40cb02a304d req-3a4ab219-81cf-427f-8e41-6a0d04cb805d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Processing event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:04:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:43 np0005465604 podman[409566]: 2025-10-02 09:04:43.181661849 +0000 UTC m=+0.071109071 container create 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:04:43 np0005465604 podman[409566]: 2025-10-02 09:04:43.148020314 +0000 UTC m=+0.037467556 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:04:43 np0005465604 systemd[1]: Started libpod-conmon-9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4.scope.
Oct  2 05:04:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:04:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b747b7aa890dff406452ac20375c7d0d240a41a3a839599776ae767a20c998/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:04:43 np0005465604 podman[409566]: 2025-10-02 09:04:43.307186079 +0000 UTC m=+0.196633321 container init 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 05:04:43 np0005465604 podman[409582]: 2025-10-02 09:04:43.309013517 +0000 UTC m=+0.075304101 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 05:04:43 np0005465604 podman[409566]: 2025-10-02 09:04:43.312628559 +0000 UTC m=+0.202075781 container start 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 05:04:43 np0005465604 podman[409583]: 2025-10-02 09:04:43.327692287 +0000 UTC m=+0.083858298 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid)
Oct  2 05:04:43 np0005465604 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [NOTICE]   (409625) : New worker (409628) forked
Oct  2 05:04:43 np0005465604 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [NOTICE]   (409625) : Loading success.
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.510 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.510 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395883.5092514, 4e056573-a7f9-40a3-b57a-8415af28183c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.510 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] VM Started (Lifecycle Event)#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.519 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.522 2 INFO nova.virt.libvirt.driver [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Instance spawned successfully.#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.523 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.540 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.546 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.550 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.550 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.550 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.551 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.551 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.552 2 DEBUG nova.virt.libvirt.driver [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.582 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.583 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395883.509438, 4e056573-a7f9-40a3-b57a-8415af28183c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.583 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.628 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.631 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395883.515296, 4e056573-a7f9-40a3-b57a-8415af28183c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.631 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.655 2 INFO nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Took 12.19 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.655 2 DEBUG nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.686 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.689 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.717 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.736 2 INFO nova.compute.manager [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Took 13.12 seconds to build instance.#033[00m
Oct  2 05:04:43 np0005465604 nova_compute[260603]: 2025-10-02 09:04:43.756 2 DEBUG oslo_concurrency.lockutils [None req-45512ef9-2de3-4132-be1a-e0072bface76 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:04:45 np0005465604 nova_compute[260603]: 2025-10-02 09:04:45.014 2 DEBUG nova.compute.manager [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:45 np0005465604 nova_compute[260603]: 2025-10-02 09:04:45.015 2 DEBUG oslo_concurrency.lockutils [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:04:45 np0005465604 nova_compute[260603]: 2025-10-02 09:04:45.015 2 DEBUG oslo_concurrency.lockutils [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:04:45 np0005465604 nova_compute[260603]: 2025-10-02 09:04:45.015 2 DEBUG oslo_concurrency.lockutils [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:04:45 np0005465604 nova_compute[260603]: 2025-10-02 09:04:45.015 2 DEBUG nova.compute.manager [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] No waiting events found dispatching network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:04:45 np0005465604 nova_compute[260603]: 2025-10-02 09:04:45.016 2 WARNING nova.compute.manager [req-183edad4-59c9-41e1-b7f8-bb0277570873 req-679c4893-9824-46b7-9e04-f3edd249ce09 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received unexpected event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:04:45 np0005465604 nova_compute[260603]: 2025-10-02 09:04:45.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 170 B/s wr, 3 op/s
Oct  2 05:04:46 np0005465604 nova_compute[260603]: 2025-10-02 09:04:46.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:47 np0005465604 nova_compute[260603]: 2025-10-02 09:04:47.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:47 np0005465604 NetworkManager[45129]: <info>  [1759395887.0434] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Oct  2 05:04:47 np0005465604 NetworkManager[45129]: <info>  [1759395887.0443] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Oct  2 05:04:47 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:47Z|01475|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct  2 05:04:47 np0005465604 nova_compute[260603]: 2025-10-02 09:04:47.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:47 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:47Z|01476|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct  2 05:04:47 np0005465604 nova_compute[260603]: 2025-10-02 09:04:47.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:47 np0005465604 nova_compute[260603]: 2025-10-02 09:04:47.888 2 DEBUG nova.compute.manager [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:04:47 np0005465604 nova_compute[260603]: 2025-10-02 09:04:47.889 2 DEBUG nova.compute.manager [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing instance network info cache due to event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:04:47 np0005465604 nova_compute[260603]: 2025-10-02 09:04:47.890 2 DEBUG oslo_concurrency.lockutils [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:04:47 np0005465604 nova_compute[260603]: 2025-10-02 09:04:47.890 2 DEBUG oslo_concurrency.lockutils [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:04:47 np0005465604 nova_compute[260603]: 2025-10-02 09:04:47.890 2 DEBUG nova.network.neutron [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:04:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 59 op/s
Oct  2 05:04:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:49 np0005465604 nova_compute[260603]: 2025-10-02 09:04:49.265 2 DEBUG nova.network.neutron [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updated VIF entry in instance network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:04:49 np0005465604 nova_compute[260603]: 2025-10-02 09:04:49.265 2 DEBUG nova.network.neutron [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updating instance_info_cache with network_info: [{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:04:49 np0005465604 nova_compute[260603]: 2025-10-02 09:04:49.282 2 DEBUG oslo_concurrency.lockutils [req-13fab4da-6870-4d76-8e1b-f58a9a816320 req-90ee6514-4c6c-462a-bba2-6a06f6b267bf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:04:49 np0005465604 nova_compute[260603]: 2025-10-02 09:04:49.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:04:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:04:50 np0005465604 nova_compute[260603]: 2025-10-02 09:04:50.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:51 np0005465604 nova_compute[260603]: 2025-10-02 09:04:51.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:04:52 np0005465604 nova_compute[260603]: 2025-10-02 09:04:52.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:04:52 np0005465604 nova_compute[260603]: 2025-10-02 09:04:52.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:04:52 np0005465604 nova_compute[260603]: 2025-10-02 09:04:52.544 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:04:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:04:54 np0005465604 nova_compute[260603]: 2025-10-02 09:04:54.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:04:55 np0005465604 nova_compute[260603]: 2025-10-02 09:04:55.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:55 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:55Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:3b:0c 10.100.0.6
Oct  2 05:04:55 np0005465604 ovn_controller[152344]: 2025-10-02T09:04:55Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:3b:0c 10.100.0.6
Oct  2 05:04:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 88 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:04:56 np0005465604 nova_compute[260603]: 2025-10-02 09:04:56.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:04:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 818c2aa8-767e-4aa7-b08b-31675c452b17 does not exist
Oct  2 05:04:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8c016e4c-6202-4f54-84a4-a5e186ef02f5 does not exist
Oct  2 05:04:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ff51f9e2-63ae-4324-b1bc-ce4ea160b49e does not exist
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:04:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:04:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 117 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Oct  2 05:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:04:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:04:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:04:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:04:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:04:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:04:58 np0005465604 podman[409911]: 2025-10-02 09:04:58.310461469 +0000 UTC m=+0.037087283 container create 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:04:58 np0005465604 systemd[1]: Started libpod-conmon-83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6.scope.
Oct  2 05:04:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:04:58 np0005465604 podman[409911]: 2025-10-02 09:04:58.295536835 +0000 UTC m=+0.022162669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:04:58 np0005465604 podman[409911]: 2025-10-02 09:04:58.398599048 +0000 UTC m=+0.125224872 container init 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 05:04:58 np0005465604 podman[409911]: 2025-10-02 09:04:58.404503591 +0000 UTC m=+0.131129405 container start 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:04:58 np0005465604 podman[409911]: 2025-10-02 09:04:58.407960599 +0000 UTC m=+0.134586423 container attach 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:04:58 np0005465604 flamboyant_swanson[409928]: 167 167
Oct  2 05:04:58 np0005465604 systemd[1]: libpod-83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6.scope: Deactivated successfully.
Oct  2 05:04:58 np0005465604 podman[409911]: 2025-10-02 09:04:58.411412816 +0000 UTC m=+0.138038630 container died 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:04:58 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ab80f4e03f87f3eeee57bed41bda5149c58eb507322234902e476c77c7b94b5a-merged.mount: Deactivated successfully.
Oct  2 05:04:58 np0005465604 podman[409911]: 2025-10-02 09:04:58.448352264 +0000 UTC m=+0.174978078 container remove 83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 05:04:58 np0005465604 systemd[1]: libpod-conmon-83bc694299d5b8e213522368642726889d6191ba8221f449ecd162cc74b98ff6.scope: Deactivated successfully.
Oct  2 05:04:58 np0005465604 nova_compute[260603]: 2025-10-02 09:04:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:04:58 np0005465604 nova_compute[260603]: 2025-10-02 09:04:58.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:04:58 np0005465604 podman[409952]: 2025-10-02 09:04:58.628676138 +0000 UTC m=+0.047022053 container create 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:04:58 np0005465604 systemd[1]: Started libpod-conmon-2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204.scope.
Oct  2 05:04:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:04:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:04:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:04:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:04:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:04:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:04:58 np0005465604 podman[409952]: 2025-10-02 09:04:58.607129609 +0000 UTC m=+0.025475554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:04:58 np0005465604 podman[409952]: 2025-10-02 09:04:58.718089977 +0000 UTC m=+0.136435892 container init 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:04:58 np0005465604 podman[409952]: 2025-10-02 09:04:58.728916012 +0000 UTC m=+0.147261917 container start 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 05:04:58 np0005465604 podman[409952]: 2025-10-02 09:04:58.739927695 +0000 UTC m=+0.158273590 container attach 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:04:59 np0005465604 wizardly_jepsen[409968]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:04:59 np0005465604 wizardly_jepsen[409968]: --> relative data size: 1.0
Oct  2 05:04:59 np0005465604 wizardly_jepsen[409968]: --> All data devices are unavailable
Oct  2 05:04:59 np0005465604 systemd[1]: libpod-2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204.scope: Deactivated successfully.
Oct  2 05:04:59 np0005465604 systemd[1]: libpod-2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204.scope: Consumed 1.025s CPU time.
Oct  2 05:04:59 np0005465604 conmon[409968]: conmon 2bd00fba6e4f636cf4c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204.scope/container/memory.events
Oct  2 05:04:59 np0005465604 podman[409952]: 2025-10-02 09:04:59.810288408 +0000 UTC m=+1.228634323 container died 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:04:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct  2 05:04:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0dc1730308bcbc97db7f8799b89d8cc7312dfae721b41ccfa11e4573b5099ff2-merged.mount: Deactivated successfully.
Oct  2 05:05:00 np0005465604 podman[409952]: 2025-10-02 09:05:00.010492958 +0000 UTC m=+1.428838863 container remove 2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 05:05:00 np0005465604 systemd[1]: libpod-conmon-2bd00fba6e4f636cf4c7cab75027a541c21b5b793c253ef3c6b330b127a80204.scope: Deactivated successfully.
Oct  2 05:05:00 np0005465604 nova_compute[260603]: 2025-10-02 09:05:00.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:05:00 np0005465604 nova_compute[260603]: 2025-10-02 09:05:00.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:00 np0005465604 nova_compute[260603]: 2025-10-02 09:05:00.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:00 np0005465604 nova_compute[260603]: 2025-10-02 09:05:00.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:00 np0005465604 nova_compute[260603]: 2025-10-02 09:05:00.549 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:05:00 np0005465604 nova_compute[260603]: 2025-10-02 09:05:00.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:05:00 np0005465604 podman[410152]: 2025-10-02 09:05:00.557495958 +0000 UTC m=+0.032817392 container create 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 05:05:00 np0005465604 systemd[1]: Started libpod-conmon-978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5.scope.
Oct  2 05:05:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:05:00 np0005465604 podman[410152]: 2025-10-02 09:05:00.543944026 +0000 UTC m=+0.019265480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:05:00 np0005465604 podman[410152]: 2025-10-02 09:05:00.640740734 +0000 UTC m=+0.116062188 container init 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:05:00 np0005465604 podman[410152]: 2025-10-02 09:05:00.646307168 +0000 UTC m=+0.121628602 container start 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 05:05:00 np0005465604 crazy_zhukovsky[410169]: 167 167
Oct  2 05:05:00 np0005465604 systemd[1]: libpod-978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5.scope: Deactivated successfully.
Oct  2 05:05:00 np0005465604 podman[410152]: 2025-10-02 09:05:00.651459568 +0000 UTC m=+0.126781022 container attach 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:05:00 np0005465604 podman[410152]: 2025-10-02 09:05:00.652114768 +0000 UTC m=+0.127436202 container died 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:05:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4040110297a0a830c907d0b46d0890b5ebf2470712133555eede93812bfb1bed-merged.mount: Deactivated successfully.
Oct  2 05:05:00 np0005465604 podman[410152]: 2025-10-02 09:05:00.70688965 +0000 UTC m=+0.182211084 container remove 978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 05:05:00 np0005465604 systemd[1]: libpod-conmon-978fed70d8fc4d6b0649aa6f4cf64444921a09bbe495d5f980f28c8d8a626ed5.scope: Deactivated successfully.
Oct  2 05:05:00 np0005465604 nova_compute[260603]: 2025-10-02 09:05:00.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:00 np0005465604 podman[410211]: 2025-10-02 09:05:00.880893687 +0000 UTC m=+0.052404389 container create 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 05:05:00 np0005465604 systemd[1]: Started libpod-conmon-93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446.scope.
Oct  2 05:05:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:05:00 np0005465604 podman[410211]: 2025-10-02 09:05:00.856408037 +0000 UTC m=+0.027918769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:05:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17215ff42829cb82d782a0f12ee468b463c32d05163120d226fc54170e7783c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:05:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17215ff42829cb82d782a0f12ee468b463c32d05163120d226fc54170e7783c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:05:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17215ff42829cb82d782a0f12ee468b463c32d05163120d226fc54170e7783c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:05:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17215ff42829cb82d782a0f12ee468b463c32d05163120d226fc54170e7783c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:05:00 np0005465604 podman[410211]: 2025-10-02 09:05:00.967914592 +0000 UTC m=+0.139425304 container init 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:05:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:05:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3038396416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:05:00 np0005465604 podman[410211]: 2025-10-02 09:05:00.977442588 +0000 UTC m=+0.148953290 container start 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 05:05:00 np0005465604 podman[410211]: 2025-10-02 09:05:00.982205556 +0000 UTC m=+0.153716278 container attach 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:05:00 np0005465604 nova_compute[260603]: 2025-10-02 09:05:00.990 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.065 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.065 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.202 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.203 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3432MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.203 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.203 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.269 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 4e056573-a7f9-40a3-b57a-8415af28183c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.269 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.270 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.289 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.304 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.304 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.317 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.335 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.362 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]: {
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:    "0": [
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:        {
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "devices": [
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "/dev/loop3"
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            ],
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_name": "ceph_lv0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_size": "21470642176",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "name": "ceph_lv0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "tags": {
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.cluster_name": "ceph",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.crush_device_class": "",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.encrypted": "0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.osd_id": "0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.type": "block",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.vdo": "0"
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            },
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "type": "block",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "vg_name": "ceph_vg0"
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:        }
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:    ],
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:    "1": [
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:        {
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "devices": [
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "/dev/loop4"
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            ],
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_name": "ceph_lv1",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_size": "21470642176",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "name": "ceph_lv1",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "tags": {
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.cluster_name": "ceph",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.crush_device_class": "",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.encrypted": "0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.osd_id": "1",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.type": "block",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.vdo": "0"
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            },
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "type": "block",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "vg_name": "ceph_vg1"
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:        }
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:    ],
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:    "2": [
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:        {
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "devices": [
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "/dev/loop5"
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            ],
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_name": "ceph_lv2",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_size": "21470642176",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "name": "ceph_lv2",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "tags": {
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.cluster_name": "ceph",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.crush_device_class": "",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.encrypted": "0",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.osd_id": "2",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.type": "block",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:                "ceph.vdo": "0"
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            },
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "type": "block",
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:            "vg_name": "ceph_vg2"
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:        }
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]:    ]
Oct  2 05:05:01 np0005465604 vibrant_wu[410227]: }
Oct  2 05:05:01 np0005465604 systemd[1]: libpod-93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446.scope: Deactivated successfully.
Oct  2 05:05:01 np0005465604 podman[410211]: 2025-10-02 09:05:01.762203066 +0000 UTC m=+0.933713778 container died 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:05:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:05:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476907137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:05:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay-17215ff42829cb82d782a0f12ee468b463c32d05163120d226fc54170e7783c9-merged.mount: Deactivated successfully.
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.792 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.800 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:05:01 np0005465604 podman[410211]: 2025-10-02 09:05:01.815058538 +0000 UTC m=+0.986569240 container remove 93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.819 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:05:01 np0005465604 systemd[1]: libpod-conmon-93650300060c129161198dc7bb762de7445d97da756bed0c27a4aaa68ee08446.scope: Deactivated successfully.
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.845 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:05:01 np0005465604 nova_compute[260603]: 2025-10-02 09:05:01.845 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct  2 05:05:02 np0005465604 podman[410413]: 2025-10-02 09:05:02.461317022 +0000 UTC m=+0.043529394 container create c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 05:05:02 np0005465604 systemd[1]: Started libpod-conmon-c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c.scope.
Oct  2 05:05:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:05:02 np0005465604 podman[410413]: 2025-10-02 09:05:02.445072827 +0000 UTC m=+0.027285209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:05:02 np0005465604 podman[410413]: 2025-10-02 09:05:02.545808998 +0000 UTC m=+0.128021460 container init c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 05:05:02 np0005465604 podman[410413]: 2025-10-02 09:05:02.553650701 +0000 UTC m=+0.135863063 container start c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 05:05:02 np0005465604 podman[410413]: 2025-10-02 09:05:02.556247012 +0000 UTC m=+0.138459404 container attach c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:05:02 np0005465604 reverent_fermi[410429]: 167 167
Oct  2 05:05:02 np0005465604 systemd[1]: libpod-c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c.scope: Deactivated successfully.
Oct  2 05:05:02 np0005465604 podman[410413]: 2025-10-02 09:05:02.56070947 +0000 UTC m=+0.142921832 container died c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:05:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f9618d3f43999ea77ad0fb3b2d4022b7f9fc7cf7fdd42e69a4905256c93173ac-merged.mount: Deactivated successfully.
Oct  2 05:05:02 np0005465604 podman[410413]: 2025-10-02 09:05:02.601957933 +0000 UTC m=+0.184170305 container remove c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:05:02 np0005465604 systemd[1]: libpod-conmon-c254a1eb765db42c5a78bca93a81639cac83a7d834fe0e61b6e57909cee5bd4c.scope: Deactivated successfully.
Oct  2 05:05:02 np0005465604 nova_compute[260603]: 2025-10-02 09:05:02.729 2 INFO nova.compute.manager [None req-01c3bff4-d70b-4745-b768-42beb3af0ca8 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Get console output#033[00m
Oct  2 05:05:02 np0005465604 nova_compute[260603]: 2025-10-02 09:05:02.737 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 05:05:02 np0005465604 podman[410455]: 2025-10-02 09:05:02.757868818 +0000 UTC m=+0.038326603 container create d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:05:02 np0005465604 systemd[1]: Started libpod-conmon-d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81.scope.
Oct  2 05:05:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:05:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b7d8e0d4d87e3095393c79c2dd5d618708c56e631b3b6c3261471170683072/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:05:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b7d8e0d4d87e3095393c79c2dd5d618708c56e631b3b6c3261471170683072/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:05:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b7d8e0d4d87e3095393c79c2dd5d618708c56e631b3b6c3261471170683072/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:05:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b7d8e0d4d87e3095393c79c2dd5d618708c56e631b3b6c3261471170683072/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:05:02 np0005465604 podman[410455]: 2025-10-02 09:05:02.743854532 +0000 UTC m=+0.024312327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:05:02 np0005465604 podman[410455]: 2025-10-02 09:05:02.843773247 +0000 UTC m=+0.124231062 container init d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 05:05:02 np0005465604 podman[410455]: 2025-10-02 09:05:02.850256498 +0000 UTC m=+0.130714283 container start d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:05:02 np0005465604 podman[410455]: 2025-10-02 09:05:02.871686124 +0000 UTC m=+0.152143939 container attach d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 05:05:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:03 np0005465604 boring_curie[410471]: {
Oct  2 05:05:03 np0005465604 boring_curie[410471]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "osd_id": 2,
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "type": "bluestore"
Oct  2 05:05:03 np0005465604 boring_curie[410471]:    },
Oct  2 05:05:03 np0005465604 boring_curie[410471]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "osd_id": 1,
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "type": "bluestore"
Oct  2 05:05:03 np0005465604 boring_curie[410471]:    },
Oct  2 05:05:03 np0005465604 boring_curie[410471]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "osd_id": 0,
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:05:03 np0005465604 boring_curie[410471]:        "type": "bluestore"
Oct  2 05:05:03 np0005465604 boring_curie[410471]:    }
Oct  2 05:05:03 np0005465604 boring_curie[410471]: }
Oct  2 05:05:03 np0005465604 systemd[1]: libpod-d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81.scope: Deactivated successfully.
Oct  2 05:05:03 np0005465604 podman[410455]: 2025-10-02 09:05:03.832439901 +0000 UTC m=+1.112897696 container died d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:05:03 np0005465604 nova_compute[260603]: 2025-10-02 09:05:03.840 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:05:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-09b7d8e0d4d87e3095393c79c2dd5d618708c56e631b3b6c3261471170683072-merged.mount: Deactivated successfully.
Oct  2 05:05:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct  2 05:05:04 np0005465604 podman[410455]: 2025-10-02 09:05:04.164371805 +0000 UTC m=+1.444829580 container remove d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curie, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 05:05:04 np0005465604 systemd[1]: libpod-conmon-d56525d5af424c1b85f427c74e680ff49fd229a35e9a652ff73096a5f9b10d81.scope: Deactivated successfully.
Oct  2 05:05:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:05:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:05:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:05:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:05:04 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 56502a37-5e4b-486c-bd1f-99ea82239a15 does not exist
Oct  2 05:05:04 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 60c556e1-f2c4-4408-9d3d-f57d5f8f9a15 does not exist
Oct  2 05:05:04 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:04Z|01477|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct  2 05:05:04 np0005465604 nova_compute[260603]: 2025-10-02 09:05:04.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:04 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:04Z|01478|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct  2 05:05:04 np0005465604 nova_compute[260603]: 2025-10-02 09:05:04.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:05:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:05:05 np0005465604 nova_compute[260603]: 2025-10-02 09:05:05.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct  2 05:05:05 np0005465604 nova_compute[260603]: 2025-10-02 09:05:05.984 2 INFO nova.compute.manager [None req-bc712684-999b-4858-8cd5-0b9ac2599d88 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Get console output#033[00m
Oct  2 05:05:05 np0005465604 nova_compute[260603]: 2025-10-02 09:05:05.989 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 05:05:06 np0005465604 nova_compute[260603]: 2025-10-02 09:05:06.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:07 np0005465604 nova_compute[260603]: 2025-10-02 09:05:07.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:07 np0005465604 NetworkManager[45129]: <info>  [1759395907.0860] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Oct  2 05:05:07 np0005465604 NetworkManager[45129]: <info>  [1759395907.0871] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Oct  2 05:05:07 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:07Z|01479|binding|INFO|Releasing lport 4b059aed-66ee-4478-ba1c-eaee8b0f3c46 from this chassis (sb_readonly=0)
Oct  2 05:05:07 np0005465604 nova_compute[260603]: 2025-10-02 09:05:07.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:07 np0005465604 nova_compute[260603]: 2025-10-02 09:05:07.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:07 np0005465604 nova_compute[260603]: 2025-10-02 09:05:07.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:07.326 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:05:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:07.327 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:05:07 np0005465604 nova_compute[260603]: 2025-10-02 09:05:07.421 2 INFO nova.compute.manager [None req-f3225b62-afa3-48a8-aa36-b14dd795f3e6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Get console output#033[00m
Oct  2 05:05:07 np0005465604 nova_compute[260603]: 2025-10-02 09:05:07.425 29746 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 05:05:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.012 2 DEBUG nova.compute.manager [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.012 2 DEBUG nova.compute.manager [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing instance network info cache due to event network-changed-3426f15c-bff7-478f-a7d7-2fd7499af1c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.013 2 DEBUG oslo_concurrency.lockutils [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.013 2 DEBUG oslo_concurrency.lockutils [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.014 2 DEBUG nova.network.neutron [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Refreshing network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.076 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.077 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.078 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.078 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.080 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.083 2 INFO nova.compute.manager [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Terminating instance#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.085 2 DEBUG nova.compute.manager [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:05:08 np0005465604 kernel: tap3426f15c-bf (unregistering): left promiscuous mode
Oct  2 05:05:08 np0005465604 NetworkManager[45129]: <info>  [1759395908.1634] device (tap3426f15c-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:05:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:08 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:08Z|01480|binding|INFO|Releasing lport 3426f15c-bff7-478f-a7d7-2fd7499af1c4 from this chassis (sb_readonly=0)
Oct  2 05:05:08 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:08Z|01481|binding|INFO|Setting lport 3426f15c-bff7-478f-a7d7-2fd7499af1c4 down in Southbound
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:08 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:08Z|01482|binding|INFO|Removing iface tap3426f15c-bf ovn-installed in OVS
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.187 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:3b:0c 10.100.0.6'], port_security=['fa:16:3e:bf:3b:0c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e056573-a7f9-40a3-b57a-8415af28183c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b56304ae-559d-4697-b965-787fd568f6ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f3ce144e8c54c29bd54d3b61166b175', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56d2bc0b-a112-4501-852b-a30e94c83df4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8bddaa3-c9e7-4b8d-b560-6df44a76261b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3426f15c-bff7-478f-a7d7-2fd7499af1c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.189 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3426f15c-bff7-478f-a7d7-2fd7499af1c4 in datapath b56304ae-559d-4697-b965-787fd568f6ea unbound from our chassis#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.190 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b56304ae-559d-4697-b965-787fd568f6ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.192 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6ccfd1-d0e1-4f6e-b55e-a3e24d8d70da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.193 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea namespace which is not needed anymore#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:08 np0005465604 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct  2 05:05:08 np0005465604 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Consumed 12.513s CPU time.
Oct  2 05:05:08 np0005465604 systemd-machined[214636]: Machine qemu-170-instance-00000088 terminated.
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.320 2 INFO nova.virt.libvirt.driver [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Instance destroyed successfully.#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.321 2 DEBUG nova.objects.instance [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lazy-loading 'resources' on Instance uuid 4e056573-a7f9-40a3-b57a-8415af28183c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:05:08 np0005465604 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [NOTICE]   (409625) : haproxy version is 2.8.14-c23fe91
Oct  2 05:05:08 np0005465604 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [NOTICE]   (409625) : path to executable is /usr/sbin/haproxy
Oct  2 05:05:08 np0005465604 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [WARNING]  (409625) : Exiting Master process...
Oct  2 05:05:08 np0005465604 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [ALERT]    (409625) : Current worker (409628) exited with code 143 (Terminated)
Oct  2 05:05:08 np0005465604 neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea[409599]: [WARNING]  (409625) : All workers exited. Exiting... (0)
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.335 2 DEBUG nova.virt.libvirt.vif [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:04:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-702449656',display_name='tempest-TestNetworkBasicOps-server-702449656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-702449656',id=136,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG4yVL3kWMKssiMMTxBkYCYMGOyIrMgKWW6mSphQLUGjYPZbZn8kAoyGfeCty+GAJO7M0ajY8H+P8bZ7xOYgSuU5Lwh2F7EvM3DqGuRUQe2gvgkzvhd/Nxhr2daBHDjEw==',key_name='tempest-TestNetworkBasicOps-1401796894',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:04:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5f3ce144e8c54c29bd54d3b61166b175',ramdisk_id='',reservation_id='r-aolvfudx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-67113886',owner_user_name='tempest-TestNetworkBasicOps-67113886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:04:43Z,user_data=None,user_id='ed58c0dbe2eb44a6969a40202da07416',uuid=4e056573-a7f9-40a3-b57a-8415af28183c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.336 2 DEBUG nova.network.os_vif_util [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converting VIF {"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:05:08 np0005465604 systemd[1]: libpod-9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4.scope: Deactivated successfully.
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.340 2 DEBUG nova.network.os_vif_util [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.341 2 DEBUG os_vif [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3426f15c-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:08 np0005465604 podman[410594]: 2025-10-02 09:05:08.345775655 +0000 UTC m=+0.059281392 container died 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.349 2 INFO os_vif [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:3b:0c,bridge_name='br-int',has_traffic_filtering=True,id=3426f15c-bff7-478f-a7d7-2fd7499af1c4,network=Network(b56304ae-559d-4697-b965-787fd568f6ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3426f15c-bf')#033[00m
Oct  2 05:05:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4-userdata-shm.mount: Deactivated successfully.
Oct  2 05:05:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b9b747b7aa890dff406452ac20375c7d0d240a41a3a839599776ae767a20c998-merged.mount: Deactivated successfully.
Oct  2 05:05:08 np0005465604 podman[410594]: 2025-10-02 09:05:08.400720464 +0000 UTC m=+0.114226201 container cleanup 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:05:08 np0005465604 systemd[1]: libpod-conmon-9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4.scope: Deactivated successfully.
Oct  2 05:05:08 np0005465604 podman[410652]: 2025-10-02 09:05:08.464596779 +0000 UTC m=+0.044328779 container remove 9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.470 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29012875-bcce-4301-9a57-7d31a8e77d99]: (4, ('Thu Oct  2 09:05:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea (9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4)\n9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4\nThu Oct  2 09:05:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea (9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4)\n9828c62da27f652e5230975c64ec1c6b7426f3c484e090b13314df6d6560b2d4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.472 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[39f80d3d-0f93-4e96-9d1b-857d38a2d75f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.473 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb56304ae-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:08 np0005465604 kernel: tapb56304ae-50: left promiscuous mode
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.493 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c29291c2-65fb-469e-bfb2-126aba3b0c6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.528 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cbda0e84-1cfe-40d3-b34d-58694bd615e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.529 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1f47c573-5554-4ac6-b9a3-9c1910f45d9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.544 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f4eaa2-eb18-48ef-b411-4d13ff3e0d67]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676110, 'reachable_time': 31686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410667, 'error': None, 'target': 'ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:08 np0005465604 systemd[1]: run-netns-ovnmeta\x2db56304ae\x2d559d\x2d4697\x2db965\x2d787fd568f6ea.mount: Deactivated successfully.
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.548 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b56304ae-559d-4697-b965-787fd568f6ea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:05:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:08.548 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccd5481-5f10-4291-b8b8-a3cfa17d5f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.772 2 INFO nova.virt.libvirt.driver [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Deleting instance files /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c_del#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.773 2 INFO nova.virt.libvirt.driver [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Deletion of /var/lib/nova/instances/4e056573-a7f9-40a3-b57a-8415af28183c_del complete#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.854 2 INFO nova.compute.manager [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.855 2 DEBUG oslo.service.loopingcall [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.856 2 DEBUG nova.compute.manager [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:05:08 np0005465604 nova_compute[260603]: 2025-10-02 09:05:08.856 2 DEBUG nova.network.neutron [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:05:09 np0005465604 nova_compute[260603]: 2025-10-02 09:05:09.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:05:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 314 KiB/s wr, 15 op/s
Oct  2 05:05:09 np0005465604 nova_compute[260603]: 2025-10-02 09:05:09.951 2 DEBUG nova.network.neutron [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:05:09 np0005465604 nova_compute[260603]: 2025-10-02 09:05:09.968 2 INFO nova.compute.manager [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Took 1.11 seconds to deallocate network for instance.#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.009 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.010 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:10 np0005465604 podman[410669]: 2025-10-02 09:05:10.049917184 +0000 UTC m=+0.121336561 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 05:05:10 np0005465604 podman[410670]: 2025-10-02 09:05:10.051967088 +0000 UTC m=+0.112057644 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.069 2 DEBUG oslo_concurrency.processutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.132 2 DEBUG nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-vif-unplugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.133 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.134 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.135 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.136 2 DEBUG nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] No waiting events found dispatching network-vif-unplugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.136 2 WARNING nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received unexpected event network-vif-unplugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.137 2 DEBUG nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.137 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.138 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.138 2 DEBUG oslo_concurrency.lockutils [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.139 2 DEBUG nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] No waiting events found dispatching network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.139 2 WARNING nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received unexpected event network-vif-plugged-3426f15c-bff7-478f-a7d7-2fd7499af1c4 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.140 2 DEBUG nova.compute.manager [req-4b39bb6f-40bb-4a39-88aa-b37efd7f4d25 req-eac1e470-3b9f-4a7b-a3ab-2b58b0168e4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Received event network-vif-deleted-3426f15c-bff7-478f-a7d7-2fd7499af1c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:05:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:05:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3266333853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.537 2 DEBUG oslo_concurrency.processutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.544 2 DEBUG nova.compute.provider_tree [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.559 2 DEBUG nova.scheduler.client.report [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.584 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.611 2 INFO nova.scheduler.client.report [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Deleted allocations for instance 4e056573-a7f9-40a3-b57a-8415af28183c#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.674 2 DEBUG oslo_concurrency.lockutils [None req-81fa43d8-b99f-4d77-9309-560f0fba16d6 ed58c0dbe2eb44a6969a40202da07416 5f3ce144e8c54c29bd54d3b61166b175 - - default default] Lock "4e056573-a7f9-40a3-b57a-8415af28183c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.676 2 DEBUG nova.network.neutron [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updated VIF entry in instance network info cache for port 3426f15c-bff7-478f-a7d7-2fd7499af1c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.676 2 DEBUG nova.network.neutron [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Updating instance_info_cache with network_info: [{"id": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "address": "fa:16:3e:bf:3b:0c", "network": {"id": "b56304ae-559d-4697-b965-787fd568f6ea", "bridge": "br-int", "label": "tempest-network-smoke--289186754", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5f3ce144e8c54c29bd54d3b61166b175", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3426f15c-bf", "ovs_interfaceid": "3426f15c-bff7-478f-a7d7-2fd7499af1c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.695 2 DEBUG oslo_concurrency.lockutils [req-6780386a-0bd3-451b-8f4c-8446f0908a3c req-a8220b52-45ca-4467-8ade-fa7829b21168 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-4e056573-a7f9-40a3-b57a-8415af28183c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:05:10 np0005465604 nova_compute[260603]: 2025-10-02 09:05:10.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 121 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Oct  2 05:05:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:13 np0005465604 nova_compute[260603]: 2025-10-02 09:05:13.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Oct  2 05:05:13 np0005465604 podman[410736]: 2025-10-02 09:05:13.983935268 +0000 UTC m=+0.050912934 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 05:05:14 np0005465604 podman[410735]: 2025-10-02 09:05:14.018935025 +0000 UTC m=+0.090989319 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:05:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:14.331 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:05:15 np0005465604 nova_compute[260603]: 2025-10-02 09:05:15.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct  2 05:05:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct  2 05:05:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:18 np0005465604 nova_compute[260603]: 2025-10-02 09:05:18.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:19 np0005465604 nova_compute[260603]: 2025-10-02 09:05:19.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:19 np0005465604 nova_compute[260603]: 2025-10-02 09:05:19.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct  2 05:05:20 np0005465604 nova_compute[260603]: 2025-10-02 09:05:20.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct  2 05:05:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:05:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1588677404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:05:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:05:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1588677404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:05:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:23 np0005465604 nova_compute[260603]: 2025-10-02 09:05:23.319 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759395908.3182437, 4e056573-a7f9-40a3-b57a-8415af28183c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:05:23 np0005465604 nova_compute[260603]: 2025-10-02 09:05:23.320 2 INFO nova.compute.manager [-] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:05:23 np0005465604 nova_compute[260603]: 2025-10-02 09:05:23.342 2 DEBUG nova.compute.manager [None req-f9e99111-bc52-418f-8c72-a1d122eb949b - - - - - -] [instance: 4e056573-a7f9-40a3-b57a-8415af28183c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:05:23 np0005465604 nova_compute[260603]: 2025-10-02 09:05:23.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Oct  2 05:05:25 np0005465604 nova_compute[260603]: 2025-10-02 09:05:25.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:05:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:05:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:05:28
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms', 'images', 'default.rgw.control', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta']
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:05:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:28 np0005465604 nova_compute[260603]: 2025-10-02 09:05:28.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:05:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:05:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:05:30 np0005465604 nova_compute[260603]: 2025-10-02 09:05:30.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:05:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:33 np0005465604 nova_compute[260603]: 2025-10-02 09:05:33.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:05:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:34.844 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:34.844 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:34.844 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:35 np0005465604 nova_compute[260603]: 2025-10-02 09:05:35.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:05:36 np0005465604 nova_compute[260603]: 2025-10-02 09:05:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:05:36 np0005465604 nova_compute[260603]: 2025-10-02 09:05:36.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:05:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:05:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:38 np0005465604 nova_compute[260603]: 2025-10-02 09:05:38.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:38.447 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:67:c2 10.100.0.2 2001:db8::f816:3eff:fee2:67c2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee2:67c2/64', 'neutron:device_id': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9915992-ac5f-4a55-8b96-3511c2ec67d2, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1fb74237-5dc8-49ee-a35b-4801dc5960b2) old=Port_Binding(mac=['fa:16:3e:e2:67:c2 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:05:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:38.448 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1fb74237-5dc8-49ee-a35b-4801dc5960b2 in datapath 4d710978-7032-4293-a883-5a767163ed11 updated#033[00m
Oct  2 05:05:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:38.449 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d710978-7032-4293-a883-5a767163ed11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:05:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:38.451 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2b552220-86b8-49fb-84c6-42a2615ae1cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:05:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:05:40 np0005465604 nova_compute[260603]: 2025-10-02 09:05:40.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:40 np0005465604 podman[410777]: 2025-10-02 09:05:40.98580244 +0000 UTC m=+0.046582579 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct  2 05:05:41 np0005465604 podman[410776]: 2025-10-02 09:05:41.022552911 +0000 UTC m=+0.085608071 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 05:05:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:05:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:43 np0005465604 nova_compute[260603]: 2025-10-02 09:05:43.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.070 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.071 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.086 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.168 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.169 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.181 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.181 2 INFO nova.compute.claims [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.302 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:05:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:05:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/251730679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.763 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.769 2 DEBUG nova.compute.provider_tree [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.786 2 DEBUG nova.scheduler.client.report [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.814 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.815 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.861 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.861 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.884 2 INFO nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:05:44 np0005465604 nova_compute[260603]: 2025-10-02 09:05:44.905 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.007 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.008 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.009 2 INFO nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Creating image(s)#033[00m
Oct  2 05:05:45 np0005465604 podman[410841]: 2025-10-02 09:05:45.014280897 +0000 UTC m=+0.079001275 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 05:05:45 np0005465604 podman[410840]: 2025-10-02 09:05:45.015564438 +0000 UTC m=+0.082757733 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.034 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.060 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.092 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.097 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.150 2 DEBUG nova.policy [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.196 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.196 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.197 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.198 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.221 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.225 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 88de4189-44e3-48fe-aa38-c33334b314b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.491 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 88de4189-44e3-48fe-aa38-c33334b314b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.536 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.615 2 DEBUG nova.objects.instance [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 88de4189-44e3-48fe-aa38-c33334b314b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.636 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.636 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Ensure instance console log exists: /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.637 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.637 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.637 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.748 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Successfully created port: 8d7734cf-6636-4070-a868-e4d1d2cfab65 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:05:45 np0005465604 nova_compute[260603]: 2025-10-02 09:05:45.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 928 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:05:46 np0005465604 nova_compute[260603]: 2025-10-02 09:05:46.895 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Successfully updated port: 8d7734cf-6636-4070-a868-e4d1d2cfab65 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:05:46 np0005465604 nova_compute[260603]: 2025-10-02 09:05:46.944 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:05:46 np0005465604 nova_compute[260603]: 2025-10-02 09:05:46.945 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:05:46 np0005465604 nova_compute[260603]: 2025-10-02 09:05:46.945 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:05:46 np0005465604 nova_compute[260603]: 2025-10-02 09:05:46.998 2 DEBUG nova.compute.manager [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:05:46 np0005465604 nova_compute[260603]: 2025-10-02 09:05:46.999 2 DEBUG nova.compute.manager [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing instance network info cache due to event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:05:47 np0005465604 nova_compute[260603]: 2025-10-02 09:05:46.999 2 DEBUG oslo_concurrency.lockutils [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:05:47 np0005465604 nova_compute[260603]: 2025-10-02 09:05:47.078 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:05:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 72 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.046 2 DEBUG nova.network.neutron [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.069 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.070 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Instance network_info: |[{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.072 2 DEBUG oslo_concurrency.lockutils [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.072 2 DEBUG nova.network.neutron [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.079 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Start _get_guest_xml network_info=[{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.085 2 WARNING nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.091 2 DEBUG nova.virt.libvirt.host [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.092 2 DEBUG nova.virt.libvirt.host [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.096 2 DEBUG nova.virt.libvirt.host [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.097 2 DEBUG nova.virt.libvirt.host [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.097 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.097 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.098 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.098 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.098 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.098 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.098 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.099 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.099 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.099 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.099 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.099 2 DEBUG nova.virt.hardware [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.102 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:05:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:05:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3374293255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.578 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.601 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:05:48 np0005465604 nova_compute[260603]: 2025-10-02 09:05:48.607 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:05:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:05:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2965104233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.060 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.062 2 DEBUG nova.virt.libvirt.vif [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:05:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1987487733',display_name='tempest-TestGettingAddress-server-1987487733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1987487733',id=137,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-qbro3vsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:05:44Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=88de4189-44e3-48fe-aa38-c33334b314b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.063 2 DEBUG nova.network.os_vif_util [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.064 2 DEBUG nova.network.os_vif_util [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.066 2 DEBUG nova.objects.instance [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88de4189-44e3-48fe-aa38-c33334b314b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.085 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  <uuid>88de4189-44e3-48fe-aa38-c33334b314b5</uuid>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  <name>instance-00000089</name>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-1987487733</nova:name>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:05:48</nova:creationTime>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <nova:port uuid="8d7734cf-6636-4070-a868-e4d1d2cfab65">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fed3:2da6" ipVersion="6"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <entry name="serial">88de4189-44e3-48fe-aa38-c33334b314b5</entry>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <entry name="uuid">88de4189-44e3-48fe-aa38-c33334b314b5</entry>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/88de4189-44e3-48fe-aa38-c33334b314b5_disk">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/88de4189-44e3-48fe-aa38-c33334b314b5_disk.config">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:d3:2d:a6"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <target dev="tap8d7734cf-66"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/console.log" append="off"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:05:49 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:05:49 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:05:49 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:05:49 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.086 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Preparing to wait for external event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.086 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.087 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.087 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.087 2 DEBUG nova.virt.libvirt.vif [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:05:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1987487733',display_name='tempest-TestGettingAddress-server-1987487733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1987487733',id=137,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-qbro3vsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:05:44Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=88de4189-44e3-48fe-aa38-c33334b314b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.088 2 DEBUG nova.network.os_vif_util [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.088 2 DEBUG nova.network.os_vif_util [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.089 2 DEBUG os_vif [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.092 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d7734cf-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d7734cf-66, col_values=(('external_ids', {'iface-id': '8d7734cf-6636-4070-a868-e4d1d2cfab65', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:2d:a6', 'vm-uuid': '88de4189-44e3-48fe-aa38-c33334b314b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:49 np0005465604 NetworkManager[45129]: <info>  [1759395949.0951] manager: (tap8d7734cf-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/596)
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.100 2 INFO os_vif [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66')#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.154 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.154 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.155 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:d3:2d:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.155 2 INFO nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Using config drive#033[00m
Oct  2 05:05:49 np0005465604 nova_compute[260603]: 2025-10-02 09:05:49.179 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:05:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:05:50 np0005465604 nova_compute[260603]: 2025-10-02 09:05:50.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:05:50 np0005465604 nova_compute[260603]: 2025-10-02 09:05:50.735 2 INFO nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Creating config drive at /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config#033[00m
Oct  2 05:05:50 np0005465604 nova_compute[260603]: 2025-10-02 09:05:50.743 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwtoqqs7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:05:50 np0005465604 nova_compute[260603]: 2025-10-02 09:05:50.895 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwtoqqs7" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:05:50 np0005465604 nova_compute[260603]: 2025-10-02 09:05:50.930 2 DEBUG nova.storage.rbd_utils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 88de4189-44e3-48fe-aa38-c33334b314b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:05:50 np0005465604 nova_compute[260603]: 2025-10-02 09:05:50.934 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config 88de4189-44e3-48fe-aa38-c33334b314b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:05:50 np0005465604 nova_compute[260603]: 2025-10-02 09:05:50.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.072 2 DEBUG oslo_concurrency.processutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config 88de4189-44e3-48fe-aa38-c33334b314b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.073 2 INFO nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Deleting local config drive /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5/disk.config because it was imported into RBD.#033[00m
Oct  2 05:05:51 np0005465604 kernel: tap8d7734cf-66: entered promiscuous mode
Oct  2 05:05:51 np0005465604 NetworkManager[45129]: <info>  [1759395951.1252] manager: (tap8d7734cf-66): new Tun device (/org/freedesktop/NetworkManager/Devices/597)
Oct  2 05:05:51 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:51Z|01483|binding|INFO|Claiming lport 8d7734cf-6636-4070-a868-e4d1d2cfab65 for this chassis.
Oct  2 05:05:51 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:51Z|01484|binding|INFO|8d7734cf-6636-4070-a868-e4d1d2cfab65: Claiming fa:16:3e:d3:2d:a6 10.100.0.11 2001:db8::f816:3eff:fed3:2da6
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:51 np0005465604 systemd-udevd[411182]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.158 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:2d:a6 10.100.0.11 2001:db8::f816:3eff:fed3:2da6'], port_security=['fa:16:3e:d3:2d:a6 10.100.0.11 2001:db8::f816:3eff:fed3:2da6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fed3:2da6/64', 'neutron:device_id': '88de4189-44e3-48fe-aa38-c33334b314b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f998d1a-9f03-4830-9263-e8f19e5bb79e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9915992-ac5f-4a55-8b96-3511c2ec67d2, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8d7734cf-6636-4070-a868-e4d1d2cfab65) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.158 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8d7734cf-6636-4070-a868-e4d1d2cfab65 in datapath 4d710978-7032-4293-a883-5a767163ed11 bound to our chassis#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.159 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d710978-7032-4293-a883-5a767163ed11#033[00m
Oct  2 05:05:51 np0005465604 NetworkManager[45129]: <info>  [1759395951.1631] device (tap8d7734cf-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:05:51 np0005465604 NetworkManager[45129]: <info>  [1759395951.1639] device (tap8d7734cf-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:05:51 np0005465604 systemd-machined[214636]: New machine qemu-171-instance-00000089.
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.170 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7987df72-eeee-4100-9113-3b59ae2df46e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.171 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d710978-71 in ovnmeta-4d710978-7032-4293-a883-5a767163ed11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.173 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d710978-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.173 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e1f9e3-8140-412d-a198-dfd6b8c1f891]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.173 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa33417f-9249-4cc0-ae51-ad7fb9d323bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.186 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8176f6-e0f8-4967-ade0-bd0aa5ba1dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:51Z|01485|binding|INFO|Setting lport 8d7734cf-6636-4070-a868-e4d1d2cfab65 ovn-installed in OVS
Oct  2 05:05:51 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:51Z|01486|binding|INFO|Setting lport 8d7734cf-6636-4070-a868-e4d1d2cfab65 up in Southbound
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:51 np0005465604 systemd[1]: Started Virtual Machine qemu-171-instance-00000089.
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.210 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b6b6ee-6645-4943-8fb8-327861a178fc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.238 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7f6612-b6bf-45ff-a8ce-6b4937f8a919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.242 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0137a932-5a85-4e9d-849e-3192c0039526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 NetworkManager[45129]: <info>  [1759395951.2439] manager: (tap4d710978-70): new Veth device (/org/freedesktop/NetworkManager/Devices/598)
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.280 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[038be9f2-e9e0-4957-be88-fd2f42bcb473]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.282 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c54d0341-65fc-46d0-808b-2461932c0fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 NetworkManager[45129]: <info>  [1759395951.3052] device (tap4d710978-70): carrier: link connected
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.312 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0d8635-83e4-475c-8f33-0d381b9828bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.330 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[693897b2-925a-467a-bb8c-1f91aa3e4ed0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d710978-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:67:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682990, 'reachable_time': 19213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411216, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.347 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2715b6-38b6-42f2-a117-482b2599d4d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:67c2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 682990, 'tstamp': 682990}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411217, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.365 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c541e48d-b400-4d9d-b76a-a710bd422157]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d710978-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:67:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682990, 'reachable_time': 19213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 411218, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.396 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc4982f-cf80-4066-8d4d-db64a8c3d97c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.461 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[451d690c-138e-41d4-8697-a1d58e932148]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.462 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d710978-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.462 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.463 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d710978-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:05:51 np0005465604 kernel: tap4d710978-70: entered promiscuous mode
Oct  2 05:05:51 np0005465604 NetworkManager[45129]: <info>  [1759395951.4657] manager: (tap4d710978-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.467 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d710978-70, col_values=(('external_ids', {'iface-id': '1fb74237-5dc8-49ee-a35b-4801dc5960b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:51 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:51Z|01487|binding|INFO|Releasing lport 1fb74237-5dc8-49ee-a35b-4801dc5960b2 from this chassis (sb_readonly=0)
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.470 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d710978-7032-4293-a883-5a767163ed11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d710978-7032-4293-a883-5a767163ed11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.471 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1457a177-ce6b-4af7-9657-0fa8e0174a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.472 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-4d710978-7032-4293-a883-5a767163ed11
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/4d710978-7032-4293-a883-5a767163ed11.pid.haproxy
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 4d710978-7032-4293-a883-5a767163ed11
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:05:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:05:51.474 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'env', 'PROCESS_TAG=haproxy-4d710978-7032-4293-a883-5a767163ed11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d710978-7032-4293-a883-5a767163ed11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:51 np0005465604 podman[411292]: 2025-10-02 09:05:51.827970569 +0000 UTC m=+0.054873087 container create 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:05:51 np0005465604 systemd[1]: Started libpod-conmon-217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661.scope.
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.881 2 DEBUG nova.compute.manager [req-78964cce-ca95-4c34-9d62-d5aa4170c3a8 req-1b93e170-839c-483f-95c2-0098c803fa23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.883 2 DEBUG oslo_concurrency.lockutils [req-78964cce-ca95-4c34-9d62-d5aa4170c3a8 req-1b93e170-839c-483f-95c2-0098c803fa23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.884 2 DEBUG oslo_concurrency.lockutils [req-78964cce-ca95-4c34-9d62-d5aa4170c3a8 req-1b93e170-839c-483f-95c2-0098c803fa23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.884 2 DEBUG oslo_concurrency.lockutils [req-78964cce-ca95-4c34-9d62-d5aa4170c3a8 req-1b93e170-839c-483f-95c2-0098c803fa23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:51 np0005465604 nova_compute[260603]: 2025-10-02 09:05:51.885 2 DEBUG nova.compute.manager [req-78964cce-ca95-4c34-9d62-d5aa4170c3a8 req-1b93e170-839c-483f-95c2-0098c803fa23 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Processing event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:05:51 np0005465604 podman[411292]: 2025-10-02 09:05:51.796430989 +0000 UTC m=+0.023333597 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:05:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:05:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40017d254e3a7ef326f21d5b61e3393056686d0000a01e658c3264ac29a6b4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:05:51 np0005465604 podman[411292]: 2025-10-02 09:05:51.927625425 +0000 UTC m=+0.154527953 container init 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 05:05:51 np0005465604 podman[411292]: 2025-10-02 09:05:51.935847351 +0000 UTC m=+0.162749869 container start 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 05:05:51 np0005465604 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [NOTICE]   (411311) : New worker (411313) forked
Oct  2 05:05:51 np0005465604 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [NOTICE]   (411311) : Loading success.
Oct  2 05:05:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.096 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.097 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395952.095616, 88de4189-44e3-48fe-aa38-c33334b314b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.097 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] VM Started (Lifecycle Event)#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.102 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.106 2 INFO nova.virt.libvirt.driver [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Instance spawned successfully.#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.106 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.120 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.124 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.135 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.135 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.136 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.136 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.137 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.137 2 DEBUG nova.virt.libvirt.driver [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.140 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.140 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395952.0965545, 88de4189-44e3-48fe-aa38-c33334b314b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.141 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.167 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.170 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395952.101426, 88de4189-44e3-48fe-aa38-c33334b314b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.171 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.189 2 INFO nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Took 7.18 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.189 2 DEBUG nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.190 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.196 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.232 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.248 2 INFO nova.compute.manager [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Took 8.11 seconds to build instance.#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.262 2 DEBUG oslo_concurrency.lockutils [None req-0f2e8a4e-1eec-46cc-ab9a-01e5489eeda0 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.791 2 DEBUG nova.network.neutron [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updated VIF entry in instance network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.791 2 DEBUG nova.network.neutron [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:05:52 np0005465604 nova_compute[260603]: 2025-10-02 09:05:52.806 2 DEBUG oslo_concurrency.lockutils [req-7479ad44-537d-4131-9dc8-c6757f7cd37a req-c585b27d-dc74-4ebb-9ab2-2beaae28a21d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:05:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.737 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.737 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.738 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.738 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 88de4189-44e3-48fe-aa38-c33334b314b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:05:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.973 2 DEBUG nova.compute.manager [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.973 2 DEBUG oslo_concurrency.lockutils [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.974 2 DEBUG oslo_concurrency.lockutils [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.974 2 DEBUG oslo_concurrency.lockutils [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.974 2 DEBUG nova.compute.manager [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] No waiting events found dispatching network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:05:53 np0005465604 nova_compute[260603]: 2025-10-02 09:05:53.974 2 WARNING nova.compute.manager [req-f5de253e-2f31-4bed-b49b-9487c10d53ec req-c365f300-62eb-4455-98a7-6fb520c23598 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received unexpected event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:05:54 np0005465604 nova_compute[260603]: 2025-10-02 09:05:54.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:55 np0005465604 nova_compute[260603]: 2025-10-02 09:05:55.087 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:05:55 np0005465604 nova_compute[260603]: 2025-10-02 09:05:55.111 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:05:55 np0005465604 nova_compute[260603]: 2025-10-02 09:05:55.112 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 05:05:55 np0005465604 NetworkManager[45129]: <info>  [1759395955.6941] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Oct  2 05:05:55 np0005465604 NetworkManager[45129]: <info>  [1759395955.6949] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/601)
Oct  2 05:05:55 np0005465604 nova_compute[260603]: 2025-10-02 09:05:55.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:55 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:55Z|01488|binding|INFO|Releasing lport 1fb74237-5dc8-49ee-a35b-4801dc5960b2 from this chassis (sb_readonly=0)
Oct  2 05:05:55 np0005465604 nova_compute[260603]: 2025-10-02 09:05:55.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:55 np0005465604 ovn_controller[152344]: 2025-10-02T09:05:55Z|01489|binding|INFO|Releasing lport 1fb74237-5dc8-49ee-a35b-4801dc5960b2 from this chassis (sb_readonly=0)
Oct  2 05:05:55 np0005465604 nova_compute[260603]: 2025-10-02 09:05:55.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct  2 05:05:56 np0005465604 nova_compute[260603]: 2025-10-02 09:05:56.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:05:56 np0005465604 nova_compute[260603]: 2025-10-02 09:05:56.895 2 DEBUG nova.compute.manager [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:05:56 np0005465604 nova_compute[260603]: 2025-10-02 09:05:56.895 2 DEBUG nova.compute.manager [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing instance network info cache due to event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:05:56 np0005465604 nova_compute[260603]: 2025-10-02 09:05:56.896 2 DEBUG oslo_concurrency.lockutils [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:05:56 np0005465604 nova_compute[260603]: 2025-10-02 09:05:56.896 2 DEBUG oslo_concurrency.lockutils [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:05:56 np0005465604 nova_compute[260603]: 2025-10-02 09:05:56.896 2 DEBUG nova.network.neutron [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:05:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:05:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 05:05:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:05:59 np0005465604 nova_compute[260603]: 2025-10-02 09:05:59.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:05:59 np0005465604 nova_compute[260603]: 2025-10-02 09:05:59.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:05:59 np0005465604 nova_compute[260603]: 2025-10-02 09:05:59.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:05:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 351 KiB/s wr, 87 op/s
Oct  2 05:06:00 np0005465604 nova_compute[260603]: 2025-10-02 09:06:00.754 2 DEBUG nova.network.neutron [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updated VIF entry in instance network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:06:00 np0005465604 nova_compute[260603]: 2025-10-02 09:06:00.755 2 DEBUG nova.network.neutron [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:06:00 np0005465604 nova_compute[260603]: 2025-10-02 09:06:00.775 2 DEBUG oslo_concurrency.lockutils [req-d69b6121-2b41-4c22-97f8-6c455d8d806f req-8a5c244a-e2f3-485d-aa79-3f02d899f6e6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:06:00 np0005465604 nova_compute[260603]: 2025-10-02 09:06:00.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 88 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:06:02 np0005465604 nova_compute[260603]: 2025-10-02 09:06:02.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:06:02 np0005465604 nova_compute[260603]: 2025-10-02 09:06:02.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:02 np0005465604 nova_compute[260603]: 2025-10-02 09:06:02.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:02 np0005465604 nova_compute[260603]: 2025-10-02 09:06:02.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:02 np0005465604 nova_compute[260603]: 2025-10-02 09:06:02.546 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:06:02 np0005465604 nova_compute[260603]: 2025-10-02 09:06:02.546 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:06:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:06:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1939563937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:06:03 np0005465604 nova_compute[260603]: 2025-10-02 09:06:03.020 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:06:03 np0005465604 nova_compute[260603]: 2025-10-02 09:06:03.097 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:06:03 np0005465604 nova_compute[260603]: 2025-10-02 09:06:03.098 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:06:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:03 np0005465604 nova_compute[260603]: 2025-10-02 09:06:03.254 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:06:03 np0005465604 nova_compute[260603]: 2025-10-02 09:06:03.256 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3447MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:06:03 np0005465604 nova_compute[260603]: 2025-10-02 09:06:03.257 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:03 np0005465604 nova_compute[260603]: 2025-10-02 09:06:03.257 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:03 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:03Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:2d:a6 10.100.0.11
Oct  2 05:06:03 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:03Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:2d:a6 10.100.0.11
Oct  2 05:06:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 94 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 677 KiB/s wr, 96 op/s
Oct  2 05:06:04 np0005465604 nova_compute[260603]: 2025-10-02 09:06:04.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:04 np0005465604 nova_compute[260603]: 2025-10-02 09:06:04.185 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 88de4189-44e3-48fe-aa38-c33334b314b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:06:04 np0005465604 nova_compute[260603]: 2025-10-02 09:06:04.186 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:06:04 np0005465604 nova_compute[260603]: 2025-10-02 09:06:04.186 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:06:04 np0005465604 nova_compute[260603]: 2025-10-02 09:06:04.985 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:06:05 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 875d9811-5edb-40fa-aa87-bec749d34134 does not exist
Oct  2 05:06:05 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a6e7a63e-3254-4ae5-9405-4a34a8318539 does not exist
Oct  2 05:06:05 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev eb17e1f1-9105-4496-8e43-33a8db5780ab does not exist
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:06:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4183541134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:06:05 np0005465604 nova_compute[260603]: 2025-10-02 09:06:05.425 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:06:05 np0005465604 nova_compute[260603]: 2025-10-02 09:06:05.432 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:06:05 np0005465604 nova_compute[260603]: 2025-10-02 09:06:05.454 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:06:05 np0005465604 nova_compute[260603]: 2025-10-02 09:06:05.481 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:06:05 np0005465604 nova_compute[260603]: 2025-10-02 09:06:05.482 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:05 np0005465604 podman[411637]: 2025-10-02 09:06:05.806807602 +0000 UTC m=+0.036302319 container create fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:06:05 np0005465604 systemd[1]: Started libpod-conmon-fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665.scope.
Oct  2 05:06:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:06:05 np0005465604 podman[411637]: 2025-10-02 09:06:05.883765424 +0000 UTC m=+0.113260161 container init fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:06:05 np0005465604 podman[411637]: 2025-10-02 09:06:05.789296398 +0000 UTC m=+0.018791135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:06:05 np0005465604 podman[411637]: 2025-10-02 09:06:05.890169942 +0000 UTC m=+0.119664649 container start fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:06:05 np0005465604 podman[411637]: 2025-10-02 09:06:05.892786834 +0000 UTC m=+0.122281551 container attach fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct  2 05:06:05 np0005465604 modest_zhukovsky[411653]: 167 167
Oct  2 05:06:05 np0005465604 systemd[1]: libpod-fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665.scope: Deactivated successfully.
Oct  2 05:06:05 np0005465604 conmon[411653]: conmon fede4b5732310de836a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665.scope/container/memory.events
Oct  2 05:06:05 np0005465604 podman[411637]: 2025-10-02 09:06:05.895936422 +0000 UTC m=+0.125431139 container died fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 05:06:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c2fd0139ab503f896729f91775f4266da5f50286da552416622bf91ce3e09dbf-merged.mount: Deactivated successfully.
Oct  2 05:06:05 np0005465604 podman[411637]: 2025-10-02 09:06:05.931317071 +0000 UTC m=+0.160811788 container remove fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 05:06:05 np0005465604 systemd[1]: libpod-conmon-fede4b5732310de836a57752329660d91e951b29d059de9cfb88ceb335504665.scope: Deactivated successfully.
Oct  2 05:06:05 np0005465604 nova_compute[260603]: 2025-10-02 09:06:05.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 94 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 665 KiB/s wr, 71 op/s
Oct  2 05:06:06 np0005465604 podman[411678]: 2025-10-02 09:06:06.12276195 +0000 UTC m=+0.045578317 container create de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 05:06:06 np0005465604 systemd[1]: Started libpod-conmon-de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09.scope.
Oct  2 05:06:06 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:06:06 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:06:06 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:06:06 np0005465604 podman[411678]: 2025-10-02 09:06:06.106289038 +0000 UTC m=+0.029105425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:06:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:06:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:06 np0005465604 podman[411678]: 2025-10-02 09:06:06.249942893 +0000 UTC m=+0.172759350 container init de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:06:06 np0005465604 podman[411678]: 2025-10-02 09:06:06.257940211 +0000 UTC m=+0.180756608 container start de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:06:06 np0005465604 podman[411678]: 2025-10-02 09:06:06.263099922 +0000 UTC m=+0.185916319 container attach de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:06:07 np0005465604 fervent_payne[411694]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:06:07 np0005465604 fervent_payne[411694]: --> relative data size: 1.0
Oct  2 05:06:07 np0005465604 fervent_payne[411694]: --> All data devices are unavailable
Oct  2 05:06:07 np0005465604 systemd[1]: libpod-de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09.scope: Deactivated successfully.
Oct  2 05:06:07 np0005465604 podman[411678]: 2025-10-02 09:06:07.423069109 +0000 UTC m=+1.345885486 container died de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 05:06:07 np0005465604 systemd[1]: libpod-de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09.scope: Consumed 1.108s CPU time.
Oct  2 05:06:07 np0005465604 nova_compute[260603]: 2025-10-02 09:06:07.478 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:06:07 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f8beb36660f452f24b66a8dd653087d150696091e3db853d0862f6c6119986be-merged.mount: Deactivated successfully.
Oct  2 05:06:07 np0005465604 podman[411678]: 2025-10-02 09:06:07.718660894 +0000 UTC m=+1.641477261 container remove de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_payne, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 05:06:07 np0005465604 systemd[1]: libpod-conmon-de1d2b5810ddfab18e2bc7154b9de3c9dbbfbfc3fecf3d03879e0f115db59a09.scope: Deactivated successfully.
Oct  2 05:06:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 116 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Oct  2 05:06:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:08 np0005465604 podman[411878]: 2025-10-02 09:06:08.613126 +0000 UTC m=+0.044907136 container create 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 05:06:08 np0005465604 systemd[1]: Started libpod-conmon-9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88.scope.
Oct  2 05:06:08 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:06:08 np0005465604 podman[411878]: 2025-10-02 09:06:08.593683175 +0000 UTC m=+0.025464341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:06:08 np0005465604 podman[411878]: 2025-10-02 09:06:08.693941732 +0000 UTC m=+0.125722898 container init 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:06:08 np0005465604 podman[411878]: 2025-10-02 09:06:08.701461235 +0000 UTC m=+0.133242371 container start 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:06:08 np0005465604 podman[411878]: 2025-10-02 09:06:08.704968875 +0000 UTC m=+0.136750011 container attach 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 05:06:08 np0005465604 agitated_brahmagupta[411894]: 167 167
Oct  2 05:06:08 np0005465604 systemd[1]: libpod-9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88.scope: Deactivated successfully.
Oct  2 05:06:08 np0005465604 podman[411878]: 2025-10-02 09:06:08.707102531 +0000 UTC m=+0.138883667 container died 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 05:06:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-586769ccf336b029e89699e437000c9f63b92eddeba544d7e0fb09a2c4243c70-merged.mount: Deactivated successfully.
Oct  2 05:06:08 np0005465604 podman[411878]: 2025-10-02 09:06:08.74861452 +0000 UTC m=+0.180395686 container remove 9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:06:08 np0005465604 systemd[1]: libpod-conmon-9108f8dcbce84485e417647bd4c864914cc8c31ae1f6b95c8fff25d9d3befd88.scope: Deactivated successfully.
Oct  2 05:06:08 np0005465604 podman[411918]: 2025-10-02 09:06:08.927738477 +0000 UTC m=+0.040925993 container create e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct  2 05:06:08 np0005465604 systemd[1]: Started libpod-conmon-e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3.scope.
Oct  2 05:06:08 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:06:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a307a2a74f7358e42027e4608657ee44bcfdf44397614f6ef4c0c0122665d20f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a307a2a74f7358e42027e4608657ee44bcfdf44397614f6ef4c0c0122665d20f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a307a2a74f7358e42027e4608657ee44bcfdf44397614f6ef4c0c0122665d20f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a307a2a74f7358e42027e4608657ee44bcfdf44397614f6ef4c0c0122665d20f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:08 np0005465604 podman[411918]: 2025-10-02 09:06:08.993562862 +0000 UTC m=+0.106750408 container init e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:06:09 np0005465604 podman[411918]: 2025-10-02 09:06:09.000500028 +0000 UTC m=+0.113687564 container start e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:06:09 np0005465604 podman[411918]: 2025-10-02 09:06:08.908391455 +0000 UTC m=+0.021578981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:06:09 np0005465604 podman[411918]: 2025-10-02 09:06:09.003742718 +0000 UTC m=+0.116930234 container attach e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct  2 05:06:09 np0005465604 nova_compute[260603]: 2025-10-02 09:06:09.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:09 np0005465604 nova_compute[260603]: 2025-10-02 09:06:09.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]: {
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:    "0": [
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:        {
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "devices": [
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "/dev/loop3"
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            ],
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_name": "ceph_lv0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_size": "21470642176",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "name": "ceph_lv0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "tags": {
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.cluster_name": "ceph",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.crush_device_class": "",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.encrypted": "0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.osd_id": "0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.type": "block",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.vdo": "0"
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            },
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "type": "block",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "vg_name": "ceph_vg0"
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:        }
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:    ],
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:    "1": [
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:        {
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "devices": [
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "/dev/loop4"
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            ],
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_name": "ceph_lv1",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_size": "21470642176",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "name": "ceph_lv1",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "tags": {
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.cluster_name": "ceph",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.crush_device_class": "",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.encrypted": "0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.osd_id": "1",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.type": "block",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.vdo": "0"
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            },
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "type": "block",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "vg_name": "ceph_vg1"
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:        }
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:    ],
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:    "2": [
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:        {
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "devices": [
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "/dev/loop5"
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            ],
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_name": "ceph_lv2",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_size": "21470642176",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "name": "ceph_lv2",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "tags": {
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.cluster_name": "ceph",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.crush_device_class": "",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.encrypted": "0",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.osd_id": "2",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.type": "block",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:                "ceph.vdo": "0"
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            },
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "type": "block",
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:            "vg_name": "ceph_vg2"
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:        }
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]:    ]
Oct  2 05:06:09 np0005465604 unruffled_hofstadter[411935]: }
Oct  2 05:06:09 np0005465604 systemd[1]: libpod-e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3.scope: Deactivated successfully.
Oct  2 05:06:09 np0005465604 conmon[411935]: conmon e3f807d88903815013bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3.scope/container/memory.events
Oct  2 05:06:09 np0005465604 podman[411918]: 2025-10-02 09:06:09.787474424 +0000 UTC m=+0.900661970 container died e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 05:06:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a307a2a74f7358e42027e4608657ee44bcfdf44397614f6ef4c0c0122665d20f-merged.mount: Deactivated successfully.
Oct  2 05:06:09 np0005465604 podman[411918]: 2025-10-02 09:06:09.865878991 +0000 UTC m=+0.979066517 container remove e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  2 05:06:09 np0005465604 systemd[1]: libpod-conmon-e3f807d88903815013bfc33e4a001dcf04a79ab38f698e8334483e5df18c54d3.scope: Deactivated successfully.
Oct  2 05:06:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:06:10 np0005465604 podman[412103]: 2025-10-02 09:06:10.449085704 +0000 UTC m=+0.035252076 container create a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:06:10 np0005465604 systemd[1]: Started libpod-conmon-a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f.scope.
Oct  2 05:06:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:06:10 np0005465604 podman[412103]: 2025-10-02 09:06:10.529342418 +0000 UTC m=+0.115508810 container init a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:06:10 np0005465604 podman[412103]: 2025-10-02 09:06:10.434007246 +0000 UTC m=+0.020173628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:06:10 np0005465604 podman[412103]: 2025-10-02 09:06:10.537206443 +0000 UTC m=+0.123372855 container start a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 05:06:10 np0005465604 dreamy_banach[412119]: 167 167
Oct  2 05:06:10 np0005465604 systemd[1]: libpod-a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f.scope: Deactivated successfully.
Oct  2 05:06:10 np0005465604 podman[412103]: 2025-10-02 09:06:10.557442131 +0000 UTC m=+0.143608523 container attach a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 05:06:10 np0005465604 podman[412103]: 2025-10-02 09:06:10.557888985 +0000 UTC m=+0.144055357 container died a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:06:10 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e64017814e4b0002ce2a1836426459a3b7d938e525caa37f6d21310ae91af749-merged.mount: Deactivated successfully.
Oct  2 05:06:10 np0005465604 podman[412103]: 2025-10-02 09:06:10.630860052 +0000 UTC m=+0.217026424 container remove a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct  2 05:06:10 np0005465604 systemd[1]: libpod-conmon-a907ea1bc178386162eaa0f285d27ff773db06547b18ec1549c4bc7498908b2f.scope: Deactivated successfully.
Oct  2 05:06:10 np0005465604 podman[412142]: 2025-10-02 09:06:10.887923882 +0000 UTC m=+0.089979968 container create b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:06:10 np0005465604 podman[412142]: 2025-10-02 09:06:10.845452832 +0000 UTC m=+0.047508988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:06:10 np0005465604 systemd[1]: Started libpod-conmon-b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258.scope.
Oct  2 05:06:10 np0005465604 nova_compute[260603]: 2025-10-02 09:06:10.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:06:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860eee515520521de262e7f9aa70bc534e314a4152dd42b9ec47beb7ef40f620/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860eee515520521de262e7f9aa70bc534e314a4152dd42b9ec47beb7ef40f620/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860eee515520521de262e7f9aa70bc534e314a4152dd42b9ec47beb7ef40f620/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/860eee515520521de262e7f9aa70bc534e314a4152dd42b9ec47beb7ef40f620/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:06:11 np0005465604 podman[412142]: 2025-10-02 09:06:11.005849686 +0000 UTC m=+0.207905852 container init b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:06:11 np0005465604 podman[412142]: 2025-10-02 09:06:11.020045217 +0000 UTC m=+0.222101293 container start b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:06:11 np0005465604 podman[412142]: 2025-10-02 09:06:11.024597668 +0000 UTC m=+0.226653734 container attach b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:06:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:06:12 np0005465604 podman[412183]: 2025-10-02 09:06:12.029683402 +0000 UTC m=+0.077255862 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 05:06:12 np0005465604 busy_banzai[412158]: {
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "osd_id": 2,
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "type": "bluestore"
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:    },
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "osd_id": 1,
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "type": "bluestore"
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:    },
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "osd_id": 0,
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:        "type": "bluestore"
Oct  2 05:06:12 np0005465604 busy_banzai[412158]:    }
Oct  2 05:06:12 np0005465604 busy_banzai[412158]: }
Oct  2 05:06:12 np0005465604 systemd[1]: libpod-b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258.scope: Deactivated successfully.
Oct  2 05:06:12 np0005465604 podman[412142]: 2025-10-02 09:06:12.071145971 +0000 UTC m=+1.273202037 container died b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:06:12 np0005465604 systemd[1]: libpod-b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258.scope: Consumed 1.054s CPU time.
Oct  2 05:06:12 np0005465604 podman[412180]: 2025-10-02 09:06:12.078179089 +0000 UTC m=+0.136337067 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  2 05:06:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-860eee515520521de262e7f9aa70bc534e314a4152dd42b9ec47beb7ef40f620-merged.mount: Deactivated successfully.
Oct  2 05:06:12 np0005465604 podman[412142]: 2025-10-02 09:06:12.144351436 +0000 UTC m=+1.346407492 container remove b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:06:12 np0005465604 systemd[1]: libpod-conmon-b639240b9e0fb82d885fec724b564441b2efc652615e6404dc276ce0a5400258.scope: Deactivated successfully.
Oct  2 05:06:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:06:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:06:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:06:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:06:12 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 1dfd4a3a-6e8f-4f23-bcf2-9e2dbd36ac43 does not exist
Oct  2 05:06:12 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6972d3a0-e438-46d5-aa56-0c153c8901c6 does not exist
Oct  2 05:06:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:06:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:06:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 05:06:14 np0005465604 nova_compute[260603]: 2025-10-02 09:06:14.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:15 np0005465604 nova_compute[260603]: 2025-10-02 09:06:15.594 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:15 np0005465604 nova_compute[260603]: 2025-10-02 09:06:15.594 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:15 np0005465604 nova_compute[260603]: 2025-10-02 09:06:15.619 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:06:15 np0005465604 nova_compute[260603]: 2025-10-02 09:06:15.738 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:15 np0005465604 nova_compute[260603]: 2025-10-02 09:06:15.739 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:15 np0005465604 nova_compute[260603]: 2025-10-02 09:06:15.752 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:06:15 np0005465604 nova_compute[260603]: 2025-10-02 09:06:15.752 2 INFO nova.compute.claims [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:06:15 np0005465604 nova_compute[260603]: 2025-10-02 09:06:15.906 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:06:15 np0005465604 nova_compute[260603]: 2025-10-02 09:06:15.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Oct  2 05:06:16 np0005465604 podman[412297]: 2025-10-02 09:06:16.009636062 +0000 UTC m=+0.068842750 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:06:16 np0005465604 podman[412299]: 2025-10-02 09:06:16.020366326 +0000 UTC m=+0.064787365 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:06:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:06:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514338891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.380 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.390 2 DEBUG nova.compute.provider_tree [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.422 2 DEBUG nova.scheduler.client.report [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.447 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.448 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.491 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.492 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.509 2 INFO nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.531 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.660 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.661 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.662 2 INFO nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Creating image(s)#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.690 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.712 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.730 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.733 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.773 2 DEBUG nova.policy [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.807 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.808 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.808 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.809 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.829 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:06:16 np0005465604 nova_compute[260603]: 2025-10-02 09:06:16.833 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:06:17 np0005465604 nova_compute[260603]: 2025-10-02 09:06:17.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:17.490 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:06:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:17.492 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:06:17 np0005465604 nova_compute[260603]: 2025-10-02 09:06:17.842 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Successfully created port: 84190bf6-548d-4a11-83b3-0e6be88619c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:06:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 121 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Oct  2 05:06:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:18 np0005465604 nova_compute[260603]: 2025-10-02 09:06:18.314 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:06:18 np0005465604 nova_compute[260603]: 2025-10-02 09:06:18.397 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:06:18 np0005465604 nova_compute[260603]: 2025-10-02 09:06:18.697 2 DEBUG nova.objects.instance [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid a021c5bb-f6b0-4434-bf26-81f294f0fe00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:06:18 np0005465604 nova_compute[260603]: 2025-10-02 09:06:18.717 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:06:18 np0005465604 nova_compute[260603]: 2025-10-02 09:06:18.718 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Ensure instance console log exists: /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:06:18 np0005465604 nova_compute[260603]: 2025-10-02 09:06:18.718 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:18 np0005465604 nova_compute[260603]: 2025-10-02 09:06:18.719 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:18 np0005465604 nova_compute[260603]: 2025-10-02 09:06:18.719 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:19 np0005465604 nova_compute[260603]: 2025-10-02 09:06:19.041 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Successfully updated port: 84190bf6-548d-4a11-83b3-0e6be88619c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:06:19 np0005465604 nova_compute[260603]: 2025-10-02 09:06:19.062 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:06:19 np0005465604 nova_compute[260603]: 2025-10-02 09:06:19.062 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:06:19 np0005465604 nova_compute[260603]: 2025-10-02 09:06:19.062 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:06:19 np0005465604 nova_compute[260603]: 2025-10-02 09:06:19.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:19 np0005465604 nova_compute[260603]: 2025-10-02 09:06:19.154 2 DEBUG nova.compute.manager [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:19 np0005465604 nova_compute[260603]: 2025-10-02 09:06:19.155 2 DEBUG nova.compute.manager [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing instance network info cache due to event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:06:19 np0005465604 nova_compute[260603]: 2025-10-02 09:06:19.155 2 DEBUG oslo_concurrency.lockutils [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:06:19 np0005465604 nova_compute[260603]: 2025-10-02 09:06:19.247 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:06:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 144 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1011 KiB/s wr, 16 op/s
Oct  2 05:06:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:20.496 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.539 2 DEBUG nova.network.neutron [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.569 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.570 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Instance network_info: |[{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.570 2 DEBUG oslo_concurrency.lockutils [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.570 2 DEBUG nova.network.neutron [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.574 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Start _get_guest_xml network_info=[{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.579 2 WARNING nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.585 2 DEBUG nova.virt.libvirt.host [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.586 2 DEBUG nova.virt.libvirt.host [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.590 2 DEBUG nova.virt.libvirt.host [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.590 2 DEBUG nova.virt.libvirt.host [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.590 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.591 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.591 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.591 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.592 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.592 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.592 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.593 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.593 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.593 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.593 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.594 2 DEBUG nova.virt.hardware [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.597 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:06:20 np0005465604 nova_compute[260603]: 2025-10-02 09:06:20.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:06:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630912268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.025 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.065 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.070 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:06:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:06:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470849595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.500 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.502 2 DEBUG nova.virt.libvirt.vif [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:06:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-420309713',display_name='tempest-TestGettingAddress-server-420309713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-420309713',id=138,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-sen5rcgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:06:16Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=a021c5bb-f6b0-4434-bf26-81f294f0fe00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.503 2 DEBUG nova.network.os_vif_util [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.504 2 DEBUG nova.network.os_vif_util [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.505 2 DEBUG nova.objects.instance [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid a021c5bb-f6b0-4434-bf26-81f294f0fe00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.523 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  <uuid>a021c5bb-f6b0-4434-bf26-81f294f0fe00</uuid>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  <name>instance-0000008a</name>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-420309713</nova:name>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:06:20</nova:creationTime>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <nova:port uuid="84190bf6-548d-4a11-83b3-0e6be88619c6">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe32:cb0b" ipVersion="6"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <entry name="serial">a021c5bb-f6b0-4434-bf26-81f294f0fe00</entry>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <entry name="uuid">a021c5bb-f6b0-4434-bf26-81f294f0fe00</entry>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:32:cb:0b"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <target dev="tap84190bf6-54"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/console.log" append="off"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:06:21 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:06:21 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:06:21 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:06:21 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.524 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Preparing to wait for external event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.525 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.525 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.525 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.526 2 DEBUG nova.virt.libvirt.vif [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:06:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-420309713',display_name='tempest-TestGettingAddress-server-420309713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-420309713',id=138,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-sen5rcgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:06:16Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=a021c5bb-f6b0-4434-bf26-81f294f0fe00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.526 2 DEBUG nova.network.os_vif_util [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.528 2 DEBUG nova.network.os_vif_util [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.528 2 DEBUG os_vif [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.529 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84190bf6-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84190bf6-54, col_values=(('external_ids', {'iface-id': '84190bf6-548d-4a11-83b3-0e6be88619c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:cb:0b', 'vm-uuid': 'a021c5bb-f6b0-4434-bf26-81f294f0fe00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:21 np0005465604 NetworkManager[45129]: <info>  [1759395981.5366] manager: (tap84190bf6-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.543 2 INFO os_vif [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54')#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.600 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.602 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.602 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:32:cb:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.603 2 INFO nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Using config drive#033[00m
Oct  2 05:06:21 np0005465604 nova_compute[260603]: 2025-10-02 09:06:21.627 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:06:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 144 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 979 KiB/s wr, 5 op/s
Oct  2 05:06:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:06:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2177048171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:06:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:06:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2177048171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:06:22 np0005465604 nova_compute[260603]: 2025-10-02 09:06:22.717 2 INFO nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Creating config drive at /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config#033[00m
Oct  2 05:06:22 np0005465604 nova_compute[260603]: 2025-10-02 09:06:22.726 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gn16f8l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:06:22 np0005465604 nova_compute[260603]: 2025-10-02 09:06:22.895 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gn16f8l" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:06:22 np0005465604 nova_compute[260603]: 2025-10-02 09:06:22.928 2 DEBUG nova.storage.rbd_utils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:06:22 np0005465604 nova_compute[260603]: 2025-10-02 09:06:22.931 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.126 2 DEBUG oslo_concurrency.processutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config a021c5bb-f6b0-4434-bf26-81f294f0fe00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.127 2 INFO nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Deleting local config drive /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00/disk.config because it was imported into RBD.#033[00m
Oct  2 05:06:23 np0005465604 kernel: tap84190bf6-54: entered promiscuous mode
Oct  2 05:06:23 np0005465604 NetworkManager[45129]: <info>  [1759395983.1814] manager: (tap84190bf6-54): new Tun device (/org/freedesktop/NetworkManager/Devices/603)
Oct  2 05:06:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:23 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:23Z|01490|binding|INFO|Claiming lport 84190bf6-548d-4a11-83b3-0e6be88619c6 for this chassis.
Oct  2 05:06:23 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:23Z|01491|binding|INFO|84190bf6-548d-4a11-83b3-0e6be88619c6: Claiming fa:16:3e:32:cb:0b 10.100.0.4 2001:db8::f816:3eff:fe32:cb0b
Oct  2 05:06:23 np0005465604 systemd-udevd[412659]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.233 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:cb:0b 10.100.0.4 2001:db8::f816:3eff:fe32:cb0b'], port_security=['fa:16:3e:32:cb:0b 10.100.0.4 2001:db8::f816:3eff:fe32:cb0b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe32:cb0b/64', 'neutron:device_id': 'a021c5bb-f6b0-4434-bf26-81f294f0fe00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f998d1a-9f03-4830-9263-e8f19e5bb79e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9915992-ac5f-4a55-8b96-3511c2ec67d2, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=84190bf6-548d-4a11-83b3-0e6be88619c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.234 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 84190bf6-548d-4a11-83b3-0e6be88619c6 in datapath 4d710978-7032-4293-a883-5a767163ed11 bound to our chassis#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.235 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d710978-7032-4293-a883-5a767163ed11#033[00m
Oct  2 05:06:23 np0005465604 NetworkManager[45129]: <info>  [1759395983.2390] device (tap84190bf6-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:06:23 np0005465604 NetworkManager[45129]: <info>  [1759395983.2402] device (tap84190bf6-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:06:23 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:23Z|01492|binding|INFO|Setting lport 84190bf6-548d-4a11-83b3-0e6be88619c6 ovn-installed in OVS
Oct  2 05:06:23 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:23Z|01493|binding|INFO|Setting lport 84190bf6-548d-4a11-83b3-0e6be88619c6 up in Southbound
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.250 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1e97e7-337a-4e4a-88e6-0061977c72db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:23 np0005465604 systemd-machined[214636]: New machine qemu-172-instance-0000008a.
Oct  2 05:06:23 np0005465604 systemd[1]: Started Virtual Machine qemu-172-instance-0000008a.
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.280 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1594b3-36f6-4502-999c-52263307ee0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.283 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b69ffb27-ea46-44aa-a735-d5fa8d58f6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.313 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c6671382-65ca-4030-901b-15b0620e2490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.329 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd06330-c9aa-4940-af7a-8d5db0594459]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d710978-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:67:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682990, 'reachable_time': 19213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412676, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.342 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3d9765-bc9e-4246-b87b-ee866792577e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d710978-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683002, 'tstamp': 683002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412677, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d710978-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683005, 'tstamp': 683005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412677, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.344 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d710978-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.347 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d710978-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.347 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.348 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d710978-70, col_values=(('external_ids', {'iface-id': '1fb74237-5dc8-49ee-a35b-4801dc5960b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:23.348 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:06:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.981 2 DEBUG nova.compute.manager [req-49c73821-0a89-43b7-85bf-1366d8146677 req-425d9c5b-ed41-40da-add5-e8ab8ede9147 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.981 2 DEBUG oslo_concurrency.lockutils [req-49c73821-0a89-43b7-85bf-1366d8146677 req-425d9c5b-ed41-40da-add5-e8ab8ede9147 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.982 2 DEBUG oslo_concurrency.lockutils [req-49c73821-0a89-43b7-85bf-1366d8146677 req-425d9c5b-ed41-40da-add5-e8ab8ede9147 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.982 2 DEBUG oslo_concurrency.lockutils [req-49c73821-0a89-43b7-85bf-1366d8146677 req-425d9c5b-ed41-40da-add5-e8ab8ede9147 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:23 np0005465604 nova_compute[260603]: 2025-10-02 09:06:23.982 2 DEBUG nova.compute.manager [req-49c73821-0a89-43b7-85bf-1366d8146677 req-425d9c5b-ed41-40da-add5-e8ab8ede9147 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Processing event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.094 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395984.0933611, a021c5bb-f6b0-4434-bf26-81f294f0fe00 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.094 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] VM Started (Lifecycle Event)#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.100 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.109 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.119 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.121 2 INFO nova.virt.libvirt.driver [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Instance spawned successfully.#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.122 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.132 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.150 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.152 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.153 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.153 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.154 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.155 2 DEBUG nova.virt.libvirt.driver [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.161 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.162 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395984.0935256, a021c5bb-f6b0-4434-bf26-81f294f0fe00 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.162 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.198 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.203 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759395984.1035948, a021c5bb-f6b0-4434-bf26-81f294f0fe00 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.203 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.227 2 INFO nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Took 7.57 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.227 2 DEBUG nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.236 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.247 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.285 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.317 2 INFO nova.compute.manager [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Took 8.63 seconds to build instance.#033[00m
Oct  2 05:06:24 np0005465604 nova_compute[260603]: 2025-10-02 09:06:24.333 2 DEBUG oslo_concurrency.lockutils [None req-09ab8a1a-9d2e-4fb8-8bed-4ce9059333c8 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:25 np0005465604 nova_compute[260603]: 2025-10-02 09:06:25.357 2 DEBUG nova.network.neutron [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updated VIF entry in instance network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:06:25 np0005465604 nova_compute[260603]: 2025-10-02 09:06:25.357 2 DEBUG nova.network.neutron [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:06:25 np0005465604 nova_compute[260603]: 2025-10-02 09:06:25.383 2 DEBUG oslo_concurrency.lockutils [req-73145097-bff0-45b0-aae7-e6146d743b25 req-dabe2228-6804-4d22-921e-97b9f205b086 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:06:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:06:25 np0005465604 nova_compute[260603]: 2025-10-02 09:06:25.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:26 np0005465604 nova_compute[260603]: 2025-10-02 09:06:26.063 2 DEBUG nova.compute.manager [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:26 np0005465604 nova_compute[260603]: 2025-10-02 09:06:26.064 2 DEBUG oslo_concurrency.lockutils [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:26 np0005465604 nova_compute[260603]: 2025-10-02 09:06:26.065 2 DEBUG oslo_concurrency.lockutils [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:26 np0005465604 nova_compute[260603]: 2025-10-02 09:06:26.066 2 DEBUG oslo_concurrency.lockutils [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:26 np0005465604 nova_compute[260603]: 2025-10-02 09:06:26.066 2 DEBUG nova.compute.manager [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] No waiting events found dispatching network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:06:26 np0005465604 nova_compute[260603]: 2025-10-02 09:06:26.067 2 WARNING nova.compute.manager [req-d0fc7a51-59c3-463e-a3b9-487b10d77c55 req-8e915cdb-8822-4cfb-8535-eb348db71166 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received unexpected event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:06:26 np0005465604 nova_compute[260603]: 2025-10-02 09:06:26.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:06:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:06:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 167 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 824 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:06:28
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', 'volumes']
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:06:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:06:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:06:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 05:06:30 np0005465604 nova_compute[260603]: 2025-10-02 09:06:30.015 2 DEBUG nova.compute.manager [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:30 np0005465604 nova_compute[260603]: 2025-10-02 09:06:30.017 2 DEBUG nova.compute.manager [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing instance network info cache due to event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:06:30 np0005465604 nova_compute[260603]: 2025-10-02 09:06:30.017 2 DEBUG oslo_concurrency.lockutils [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:06:30 np0005465604 nova_compute[260603]: 2025-10-02 09:06:30.018 2 DEBUG oslo_concurrency.lockutils [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:06:30 np0005465604 nova_compute[260603]: 2025-10-02 09:06:30.019 2 DEBUG nova.network.neutron [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:06:30 np0005465604 nova_compute[260603]: 2025-10-02 09:06:30.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:31 np0005465604 nova_compute[260603]: 2025-10-02 09:06:31.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 863 KiB/s wr, 96 op/s
Oct  2 05:06:32 np0005465604 nova_compute[260603]: 2025-10-02 09:06:32.140 2 DEBUG nova.network.neutron [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updated VIF entry in instance network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:06:32 np0005465604 nova_compute[260603]: 2025-10-02 09:06:32.141 2 DEBUG nova.network.neutron [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:06:32 np0005465604 nova_compute[260603]: 2025-10-02 09:06:32.160 2 DEBUG oslo_concurrency.lockutils [req-4f0fa7e1-692f-4dd4-abda-5af29d2d8dd4 req-bbdd1e74-886d-47a4-969a-71e41d0b5d43 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:06:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 863 KiB/s wr, 96 op/s
Oct  2 05:06:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:34.845 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:34.845 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:34.846 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:35 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:35Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:cb:0b 10.100.0.4
Oct  2 05:06:35 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:35Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:cb:0b 10.100.0.4
Oct  2 05:06:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 167 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:06:35 np0005465604 nova_compute[260603]: 2025-10-02 09:06:35.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:36 np0005465604 nova_compute[260603]: 2025-10-02 09:06:36.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 187 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 875 KiB/s wr, 113 op/s
Oct  2 05:06:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:38 np0005465604 nova_compute[260603]: 2025-10-02 09:06:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:06:38 np0005465604 nova_compute[260603]: 2025-10-02 09:06:38.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012710510679169108 of space, bias 1.0, pg target 0.38131532037507326 quantized to 32 (current 32)
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:06:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct  2 05:06:40 np0005465604 nova_compute[260603]: 2025-10-02 09:06:40.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:06:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1344 writes, 5845 keys, 1344 commit groups, 1.0 writes per commit group, ingest: 8.68 MB, 0.01 MB/s#012Interval WAL: 1344 writes, 1344 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    111.5      0.59              0.25        38    0.016       0      0       0.0       0.0#012  L6      1/0    7.93 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6    168.0    141.2      2.16              1.08        37    0.058    224K    20K       0.0       0.0#012 Sum      1/0    7.93 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6    132.0    134.9      2.75              1.33        75    0.037    224K    20K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.5    157.7    157.1      0.27              0.17         8    0.033     31K   2001       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    168.0    141.2      2.16              1.08        37    0.058    224K    20K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    112.4      0.59              0.25        37    0.016       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.064, interval 0.005#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.36 GB write, 0.08 MB/s write, 0.36 GB read, 0.08 MB/s read, 2.8 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 40.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000338 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2654,39.32 MB,12.9333%) FilterBlock(76,635.67 KB,0.204202%) IndexBlock(76,1.04 MB,0.341832%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 05:06:41 np0005465604 nova_compute[260603]: 2025-10-02 09:06:41.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  2 05:06:43 np0005465604 podman[412722]: 2025-10-02 09:06:43.004899992 +0000 UTC m=+0.056983422 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 05:06:43 np0005465604 podman[412721]: 2025-10-02 09:06:43.052142891 +0000 UTC m=+0.106219392 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:06:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  2 05:06:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  2 05:06:45 np0005465604 nova_compute[260603]: 2025-10-02 09:06:45.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:46 np0005465604 nova_compute[260603]: 2025-10-02 09:06:46.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:46 np0005465604 podman[412766]: 2025-10-02 09:06:46.985727869 +0000 UTC m=+0.056576138 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 05:06:47 np0005465604 podman[412767]: 2025-10-02 09:06:47.01181503 +0000 UTC m=+0.068375015 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.548 2 DEBUG nova.compute.manager [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.548 2 DEBUG nova.compute.manager [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing instance network info cache due to event network-changed-84190bf6-548d-4a11-83b3-0e6be88619c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.549 2 DEBUG oslo_concurrency.lockutils [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.549 2 DEBUG oslo_concurrency.lockutils [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.549 2 DEBUG nova.network.neutron [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Refreshing network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.630 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.631 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.631 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.632 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.632 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.633 2 INFO nova.compute.manager [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Terminating instance#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.634 2 DEBUG nova.compute.manager [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:06:47 np0005465604 kernel: tap84190bf6-54 (unregistering): left promiscuous mode
Oct  2 05:06:47 np0005465604 NetworkManager[45129]: <info>  [1759396007.6928] device (tap84190bf6-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:06:47 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:47Z|01494|binding|INFO|Releasing lport 84190bf6-548d-4a11-83b3-0e6be88619c6 from this chassis (sb_readonly=0)
Oct  2 05:06:47 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:47Z|01495|binding|INFO|Setting lport 84190bf6-548d-4a11-83b3-0e6be88619c6 down in Southbound
Oct  2 05:06:47 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:47Z|01496|binding|INFO|Removing iface tap84190bf6-54 ovn-installed in OVS
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.709 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:cb:0b 10.100.0.4 2001:db8::f816:3eff:fe32:cb0b'], port_security=['fa:16:3e:32:cb:0b 10.100.0.4 2001:db8::f816:3eff:fe32:cb0b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe32:cb0b/64', 'neutron:device_id': 'a021c5bb-f6b0-4434-bf26-81f294f0fe00', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f998d1a-9f03-4830-9263-e8f19e5bb79e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9915992-ac5f-4a55-8b96-3511c2ec67d2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=84190bf6-548d-4a11-83b3-0e6be88619c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.710 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 84190bf6-548d-4a11-83b3-0e6be88619c6 in datapath 4d710978-7032-4293-a883-5a767163ed11 unbound from our chassis#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.711 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d710978-7032-4293-a883-5a767163ed11#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.729 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e9386a98-629b-42a1-8eb6-eb4818b6f56a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:47 np0005465604 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct  2 05:06:47 np0005465604 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Consumed 12.989s CPU time.
Oct  2 05:06:47 np0005465604 systemd-machined[214636]: Machine qemu-172-instance-0000008a terminated.
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.761 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5f195c8d-fe45-442a-bf2c-366c1bf75a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.764 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ecc20b-38ef-4d82-8e0c-33770291f211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.797 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5db08a93-a412-4b02-a845-3feccc6f4d02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.819 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8872b7dd-da8e-4771-b679-623c4273f543]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d710978-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:67:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682990, 'reachable_time': 19213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412816, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.839 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[01544148-17e2-4e9e-b70d-2279e96ac3c7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d710978-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683002, 'tstamp': 683002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412817, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d710978-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683005, 'tstamp': 683005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412817, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.841 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d710978-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.849 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d710978-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.849 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.850 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d710978-70, col_values=(('external_ids', {'iface-id': '1fb74237-5dc8-49ee-a35b-4801dc5960b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:47.850 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.875 2 INFO nova.virt.libvirt.driver [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Instance destroyed successfully.#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.876 2 DEBUG nova.objects.instance [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid a021c5bb-f6b0-4434-bf26-81f294f0fe00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.889 2 DEBUG nova.virt.libvirt.vif [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:06:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-420309713',display_name='tempest-TestGettingAddress-server-420309713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-420309713',id=138,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:06:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-sen5rcgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:06:24Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=a021c5bb-f6b0-4434-bf26-81f294f0fe00,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.890 2 DEBUG nova.network.os_vif_util [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.891 2 DEBUG nova.network.os_vif_util [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.891 2 DEBUG os_vif [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84190bf6-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:06:47 np0005465604 nova_compute[260603]: 2025-10-02 09:06:47.898 2 INFO os_vif [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:cb:0b,bridge_name='br-int',has_traffic_filtering=True,id=84190bf6-548d-4a11-83b3-0e6be88619c6,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84190bf6-54')#033[00m
Oct  2 05:06:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 05:06:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:48 np0005465604 nova_compute[260603]: 2025-10-02 09:06:48.211 2 INFO nova.virt.libvirt.driver [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Deleting instance files /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00_del#033[00m
Oct  2 05:06:48 np0005465604 nova_compute[260603]: 2025-10-02 09:06:48.212 2 INFO nova.virt.libvirt.driver [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Deletion of /var/lib/nova/instances/a021c5bb-f6b0-4434-bf26-81f294f0fe00_del complete#033[00m
Oct  2 05:06:48 np0005465604 nova_compute[260603]: 2025-10-02 09:06:48.257 2 INFO nova.compute.manager [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:06:48 np0005465604 nova_compute[260603]: 2025-10-02 09:06:48.257 2 DEBUG oslo.service.loopingcall [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:06:48 np0005465604 nova_compute[260603]: 2025-10-02 09:06:48.257 2 DEBUG nova.compute.manager [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:06:48 np0005465604 nova_compute[260603]: 2025-10-02 09:06:48.258 2 DEBUG nova.network.neutron [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.067 2 DEBUG nova.network.neutron [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.092 2 INFO nova.compute.manager [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Took 0.83 seconds to deallocate network for instance.#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.142 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.143 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.203 2 DEBUG nova.network.neutron [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updated VIF entry in instance network info cache for port 84190bf6-548d-4a11-83b3-0e6be88619c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.204 2 DEBUG nova.network.neutron [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [{"id": "84190bf6-548d-4a11-83b3-0e6be88619c6", "address": "fa:16:3e:32:cb:0b", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:cb0b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84190bf6-54", "ovs_interfaceid": "84190bf6-548d-4a11-83b3-0e6be88619c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.218 2 DEBUG oslo_concurrency.processutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.252 2 DEBUG oslo_concurrency.lockutils [req-4d52cc9f-e4e3-41ac-818b-e17534b0a111 req-0af80b5f-ddff-46d3-adf4-d881bb9716db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-a021c5bb-f6b0-4434-bf26-81f294f0fe00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:06:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:06:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1950101900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.629 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-vif-unplugged-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.629 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.630 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.630 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.630 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] No waiting events found dispatching network-vif-unplugged-84190bf6-548d-4a11-83b3-0e6be88619c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.631 2 WARNING nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received unexpected event network-vif-unplugged-84190bf6-548d-4a11-83b3-0e6be88619c6 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.631 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.631 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.631 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.632 2 DEBUG oslo_concurrency.lockutils [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.632 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] No waiting events found dispatching network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.632 2 WARNING nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received unexpected event network-vif-plugged-84190bf6-548d-4a11-83b3-0e6be88619c6 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.632 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Received event network-vif-deleted-84190bf6-548d-4a11-83b3-0e6be88619c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.633 2 INFO nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Neutron deleted interface 84190bf6-548d-4a11-83b3-0e6be88619c6; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.633 2 DEBUG nova.network.neutron [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.640 2 DEBUG oslo_concurrency.processutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.647 2 DEBUG nova.compute.provider_tree [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.659 2 DEBUG nova.scheduler.client.report [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.664 2 DEBUG nova.compute.manager [req-208ce312-139b-43f9-9a9a-f89e71ecdf60 req-626ee5e9-a9bb-4570-828b-133dc5420a3a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Detach interface failed, port_id=84190bf6-548d-4a11-83b3-0e6be88619c6, reason: Instance a021c5bb-f6b0-4434-bf26-81f294f0fe00 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.684 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.703 2 INFO nova.scheduler.client.report [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance a021c5bb-f6b0-4434-bf26-81f294f0fe00#033[00m
Oct  2 05:06:49 np0005465604 nova_compute[260603]: 2025-10-02 09:06:49.757 2 DEBUG oslo_concurrency.lockutils [None req-886a110c-09e9-42b9-8f77-5beeb68df9c4 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "a021c5bb-f6b0-4434-bf26-81f294f0fe00" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 202 KiB/s rd, 1.3 MiB/s wr, 32 op/s
Oct  2 05:06:50 np0005465604 nova_compute[260603]: 2025-10-02 09:06:50.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.409 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.409 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.410 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.410 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.410 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.411 2 INFO nova.compute.manager [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Terminating instance#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.412 2 DEBUG nova.compute.manager [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:06:51 np0005465604 kernel: tap8d7734cf-66 (unregistering): left promiscuous mode
Oct  2 05:06:51 np0005465604 NetworkManager[45129]: <info>  [1759396011.6825] device (tap8d7734cf-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:51 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:51Z|01497|binding|INFO|Releasing lport 8d7734cf-6636-4070-a868-e4d1d2cfab65 from this chassis (sb_readonly=0)
Oct  2 05:06:51 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:51Z|01498|binding|INFO|Setting lport 8d7734cf-6636-4070-a868-e4d1d2cfab65 down in Southbound
Oct  2 05:06:51 np0005465604 ovn_controller[152344]: 2025-10-02T09:06:51Z|01499|binding|INFO|Removing iface tap8d7734cf-66 ovn-installed in OVS
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:51.701 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:2d:a6 10.100.0.11 2001:db8::f816:3eff:fed3:2da6'], port_security=['fa:16:3e:d3:2d:a6 10.100.0.11 2001:db8::f816:3eff:fed3:2da6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fed3:2da6/64', 'neutron:device_id': '88de4189-44e3-48fe-aa38-c33334b314b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d710978-7032-4293-a883-5a767163ed11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f998d1a-9f03-4830-9263-e8f19e5bb79e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9915992-ac5f-4a55-8b96-3511c2ec67d2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8d7734cf-6636-4070-a868-e4d1d2cfab65) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:06:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:51.703 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8d7734cf-6636-4070-a868-e4d1d2cfab65 in datapath 4d710978-7032-4293-a883-5a767163ed11 unbound from our chassis#033[00m
Oct  2 05:06:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:51.703 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d710978-7032-4293-a883-5a767163ed11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:06:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:51.705 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5fadcc7a-98d3-4b55-9d5e-ee004452b021]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:51.705 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d710978-7032-4293-a883-5a767163ed11 namespace which is not needed anymore#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.713 2 DEBUG nova.compute.manager [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.714 2 DEBUG nova.compute.manager [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing instance network info cache due to event network-changed-8d7734cf-6636-4070-a868-e4d1d2cfab65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.714 2 DEBUG oslo_concurrency.lockutils [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.714 2 DEBUG oslo_concurrency.lockutils [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.714 2 DEBUG nova.network.neutron [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Refreshing network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:06:51 np0005465604 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Deactivated successfully.
Oct  2 05:06:51 np0005465604 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Consumed 14.701s CPU time.
Oct  2 05:06:51 np0005465604 systemd-machined[214636]: Machine qemu-171-instance-00000089 terminated.
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.856 2 INFO nova.virt.libvirt.driver [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Instance destroyed successfully.#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.856 2 DEBUG nova.objects.instance [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 88de4189-44e3-48fe-aa38-c33334b314b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.874 2 DEBUG nova.virt.libvirt.vif [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:05:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1987487733',display_name='tempest-TestGettingAddress-server-1987487733',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1987487733',id=137,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOqsV+mRqA94BGsKqn96a/KPTTGiBWD+95ZJ/Yh7ODb2zqPMdXbtdzNYLEW6fE5OS4mYGF0KIkuvDnPSxXUjDfpHSgx5rD0Ef4PCofSlDC/ZVRctKKrWVNvfvA+fGJmQQ==',key_name='tempest-TestGettingAddress-1976047243',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:05:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-qbro3vsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:05:52Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=88de4189-44e3-48fe-aa38-c33334b314b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.875 2 DEBUG nova.network.os_vif_util [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.876 2 DEBUG nova.network.os_vif_util [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.876 2 DEBUG os_vif [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.878 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d7734cf-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:51 np0005465604 nova_compute[260603]: 2025-10-02 09:06:51.885 2 INFO os_vif [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:2d:a6,bridge_name='br-int',has_traffic_filtering=True,id=8d7734cf-6636-4070-a868-e4d1d2cfab65,network=Network(4d710978-7032-4293-a883-5a767163ed11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d7734cf-66')#033[00m
Oct  2 05:06:51 np0005465604 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [NOTICE]   (411311) : haproxy version is 2.8.14-c23fe91
Oct  2 05:06:51 np0005465604 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [NOTICE]   (411311) : path to executable is /usr/sbin/haproxy
Oct  2 05:06:51 np0005465604 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [WARNING]  (411311) : Exiting Master process...
Oct  2 05:06:51 np0005465604 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [ALERT]    (411311) : Current worker (411313) exited with code 143 (Terminated)
Oct  2 05:06:51 np0005465604 neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11[411307]: [WARNING]  (411311) : All workers exited. Exiting... (0)
Oct  2 05:06:51 np0005465604 systemd[1]: libpod-217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661.scope: Deactivated successfully.
Oct  2 05:06:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct  2 05:06:51 np0005465604 podman[412895]: 2025-10-02 09:06:51.987364092 +0000 UTC m=+0.175384253 container died 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:06:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661-userdata-shm.mount: Deactivated successfully.
Oct  2 05:06:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f40017d254e3a7ef326f21d5b61e3393056686d0000a01e658c3264ac29a6b4b-merged.mount: Deactivated successfully.
Oct  2 05:06:52 np0005465604 podman[412895]: 2025-10-02 09:06:52.498828845 +0000 UTC m=+0.686849036 container cleanup 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 05:06:52 np0005465604 systemd[1]: libpod-conmon-217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661.scope: Deactivated successfully.
Oct  2 05:06:52 np0005465604 nova_compute[260603]: 2025-10-02 09:06:52.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:06:52 np0005465604 podman[412953]: 2025-10-02 09:06:52.738956028 +0000 UTC m=+0.211425202 container remove 217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 05:06:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.749 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[faab7c0d-5d33-44bc-9d98-ed4b484043ad]: (4, ('Thu Oct  2 09:06:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11 (217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661)\n217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661\nThu Oct  2 09:06:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d710978-7032-4293-a883-5a767163ed11 (217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661)\n217b1e6c32302a39911981b29b5adc3dad9af5b34c7087a648d328dc935bd661\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.751 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3150beaa-e291-4bea-8a57-9d95c6c488e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.752 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d710978-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:06:52 np0005465604 nova_compute[260603]: 2025-10-02 09:06:52.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:52 np0005465604 kernel: tap4d710978-70: left promiscuous mode
Oct  2 05:06:52 np0005465604 nova_compute[260603]: 2025-10-02 09:06:52.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.776 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c18ae29-2347-40d1-a6f8-de9786955977]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.798 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8a097d80-2867-47fa-918e-e610b8ec546f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.799 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8e8ba1-e8a5-4b05-91c1-031cc9f878d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.826 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8bb3e6-c898-441d-a11b-61c59460281e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682983, 'reachable_time': 42999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412970, 'error': None, 'target': 'ovnmeta-4d710978-7032-4293-a883-5a767163ed11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:52 np0005465604 systemd[1]: run-netns-ovnmeta\x2d4d710978\x2d7032\x2d4293\x2da883\x2d5a767163ed11.mount: Deactivated successfully.
Oct  2 05:06:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.831 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d710978-7032-4293-a883-5a767163ed11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:06:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:06:52.832 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f77991-598d-40ec-a4a7-c17e90b870f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.056 2 DEBUG nova.network.neutron [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updated VIF entry in instance network info cache for port 8d7734cf-6636-4070-a868-e4d1d2cfab65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.056 2 DEBUG nova.network.neutron [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [{"id": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "address": "fa:16:3e:d3:2d:a6", "network": {"id": "4d710978-7032-4293-a883-5a767163ed11", "bridge": "br-int", "label": "tempest-network-smoke--1419891041", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed3:2da6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d7734cf-66", "ovs_interfaceid": "8d7734cf-6636-4070-a868-e4d1d2cfab65", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.077 2 DEBUG oslo_concurrency.lockutils [req-28751017-b9ac-4d4e-a510-99054266974d req-df51cd0c-96da-4a6c-a869-ac8f7bd45a5e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-88de4189-44e3-48fe-aa38-c33334b314b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:06:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.805 2 DEBUG nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-unplugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.805 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.805 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.805 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] No waiting events found dispatching network-vif-unplugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-unplugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.806 2 DEBUG oslo_concurrency.lockutils [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.807 2 DEBUG nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] No waiting events found dispatching network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:06:53 np0005465604 nova_compute[260603]: 2025-10-02 09:06:53.807 2 WARNING nova.compute.manager [req-81a5d93c-c83a-412d-a826-75a903ead6e3 req-f73bfa83-08c3-461e-899a-74d67f7ce4d4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received unexpected event network-vif-plugged-8d7734cf-6636-4070-a868-e4d1d2cfab65 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:06:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 107 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 21 KiB/s wr, 36 op/s
Oct  2 05:06:54 np0005465604 nova_compute[260603]: 2025-10-02 09:06:54.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:06:54 np0005465604 nova_compute[260603]: 2025-10-02 09:06:54.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:06:54 np0005465604 nova_compute[260603]: 2025-10-02 09:06:54.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:06:54 np0005465604 nova_compute[260603]: 2025-10-02 09:06:54.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 05:06:54 np0005465604 nova_compute[260603]: 2025-10-02 09:06:54.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:06:55 np0005465604 nova_compute[260603]: 2025-10-02 09:06:55.271 2 INFO nova.virt.libvirt.driver [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Deleting instance files /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5_del#033[00m
Oct  2 05:06:55 np0005465604 nova_compute[260603]: 2025-10-02 09:06:55.272 2 INFO nova.virt.libvirt.driver [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Deletion of /var/lib/nova/instances/88de4189-44e3-48fe-aa38-c33334b314b5_del complete#033[00m
Oct  2 05:06:55 np0005465604 nova_compute[260603]: 2025-10-02 09:06:55.395 2 INFO nova.compute.manager [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Took 3.98 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:06:55 np0005465604 nova_compute[260603]: 2025-10-02 09:06:55.396 2 DEBUG oslo.service.loopingcall [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:06:55 np0005465604 nova_compute[260603]: 2025-10-02 09:06:55.396 2 DEBUG nova.compute.manager [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:06:55 np0005465604 nova_compute[260603]: 2025-10-02 09:06:55.396 2 DEBUG nova.network.neutron [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:06:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 107 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 9.2 KiB/s wr, 36 op/s
Oct  2 05:06:55 np0005465604 nova_compute[260603]: 2025-10-02 09:06:55.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.102 2 DEBUG nova.network.neutron [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.148 2 INFO nova.compute.manager [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Took 0.75 seconds to deallocate network for instance.#033[00m
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.185 2 DEBUG nova.compute.manager [req-81ec2e55-c926-476e-adf6-878702e49dd4 req-37550955-00e2-4146-8fed-fe86acfd7612 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Received event network-vif-deleted-8d7734cf-6636-4070-a868-e4d1d2cfab65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.211 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.212 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.293 2 DEBUG oslo_concurrency.processutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:06:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:06:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4140111897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.772 2 DEBUG oslo_concurrency.processutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.783 2 DEBUG nova.compute.provider_tree [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.823 2 DEBUG nova.scheduler.client.report [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.878 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:06:56 np0005465604 nova_compute[260603]: 2025-10-02 09:06:56.965 2 INFO nova.scheduler.client.report [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 88de4189-44e3-48fe-aa38-c33334b314b5#033[00m
Oct  2 05:06:57 np0005465604 nova_compute[260603]: 2025-10-02 09:06:57.134 2 DEBUG oslo_concurrency.lockutils [None req-b9eba49a-e0ca-4133-becd-3d71bfc1c8f9 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "88de4189-44e3-48fe-aa38-c33334b314b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:06:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:06:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 52 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 10 KiB/s wr, 55 op/s
Oct  2 05:06:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:06:58 np0005465604 nova_compute[260603]: 2025-10-02 09:06:58.657 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:06:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 10 KiB/s wr, 57 op/s
Oct  2 05:07:00 np0005465604 nova_compute[260603]: 2025-10-02 09:07:00.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:07:00 np0005465604 nova_compute[260603]: 2025-10-02 09:07:00.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:07:00 np0005465604 nova_compute[260603]: 2025-10-02 09:07:00.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:01 np0005465604 nova_compute[260603]: 2025-10-02 09:07:01.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 50 op/s
Oct  2 05:07:02 np0005465604 nova_compute[260603]: 2025-10-02 09:07:02.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:07:02 np0005465604 nova_compute[260603]: 2025-10-02 09:07:02.578 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:02 np0005465604 nova_compute[260603]: 2025-10-02 09:07:02.578 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:02 np0005465604 nova_compute[260603]: 2025-10-02 09:07:02.578 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:02 np0005465604 nova_compute[260603]: 2025-10-02 09:07:02.579 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:07:02 np0005465604 nova_compute[260603]: 2025-10-02 09:07:02.579 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:07:02 np0005465604 nova_compute[260603]: 2025-10-02 09:07:02.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:02 np0005465604 nova_compute[260603]: 2025-10-02 09:07:02.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:02 np0005465604 nova_compute[260603]: 2025-10-02 09:07:02.873 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396007.8730283, a021c5bb-f6b0-4434-bf26-81f294f0fe00 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:07:02 np0005465604 nova_compute[260603]: 2025-10-02 09:07:02.874 2 INFO nova.compute.manager [-] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:07:02 np0005465604 nova_compute[260603]: 2025-10-02 09:07:02.920 2 DEBUG nova.compute.manager [None req-438e03ff-6f83-4b4e-82ec-a404a3f7a9cf - - - - - -] [instance: a021c5bb-f6b0-4434-bf26-81f294f0fe00] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:07:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:07:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/108144706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.031 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.186 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.187 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3610MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.187 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.188 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.254 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.254 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.270 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:07:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:07:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2619791510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.710 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.716 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.745 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.811 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 05:07:03 np0005465604 nova_compute[260603]: 2025-10-02 09:07:03.812 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:07:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 50 op/s
Oct  2 05:07:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1023 B/s wr, 21 op/s
Oct  2 05:07:05 np0005465604 nova_compute[260603]: 2025-10-02 09:07:05.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:07:06 np0005465604 nova_compute[260603]: 2025-10-02 09:07:06.808 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:07:06 np0005465604 nova_compute[260603]: 2025-10-02 09:07:06.854 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396011.8534527, 88de4189-44e3-48fe-aa38-c33334b314b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:07:06 np0005465604 nova_compute[260603]: 2025-10-02 09:07:06.854 2 INFO nova.compute.manager [-] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] VM Stopped (Lifecycle Event)
Oct  2 05:07:06 np0005465604 nova_compute[260603]: 2025-10-02 09:07:06.872 2 DEBUG nova.compute.manager [None req-e7036673-c503-469f-8cfa-f067de1ce14f - - - - - -] [instance: 88de4189-44e3-48fe-aa38-c33334b314b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:07:06 np0005465604 nova_compute[260603]: 2025-10-02 09:07:06.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:07:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1023 B/s wr, 21 op/s
Oct  2 05:07:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 938 B/s rd, 170 B/s wr, 1 op/s
Oct  2 05:07:10 np0005465604 nova_compute[260603]: 2025-10-02 09:07:10.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:07:11 np0005465604 nova_compute[260603]: 2025-10-02 09:07:11.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:07:11 np0005465604 nova_compute[260603]: 2025-10-02 09:07:11.550 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:07:11 np0005465604 nova_compute[260603]: 2025-10-02 09:07:11.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:07:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:07:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c9505a8d-364c-4984-a9c9-7aa22f5062ef does not exist
Oct  2 05:07:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8ea607fe-7849-4884-8621-035754a6bfd6 does not exist
Oct  2 05:07:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 33b07dea-7539-484c-9d12-c18a1e7f66be does not exist
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:07:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:07:13 np0005465604 podman[413196]: 2025-10-02 09:07:13.520338196 +0000 UTC m=+0.082327410 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 05:07:13 np0005465604 podman[413195]: 2025-10-02 09:07:13.556863591 +0000 UTC m=+0.107858593 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:07:13 np0005465604 podman[413355]: 2025-10-02 09:07:13.964701114 +0000 UTC m=+0.055692511 container create 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:07:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:07:14 np0005465604 systemd[1]: Started libpod-conmon-22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71.scope.
Oct  2 05:07:14 np0005465604 podman[413355]: 2025-10-02 09:07:13.935453836 +0000 UTC m=+0.026445283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:07:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:07:14 np0005465604 podman[413355]: 2025-10-02 09:07:14.071500183 +0000 UTC m=+0.162491560 container init 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 05:07:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:07:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:07:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:07:14 np0005465604 podman[413355]: 2025-10-02 09:07:14.084986262 +0000 UTC m=+0.175977619 container start 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 05:07:14 np0005465604 podman[413355]: 2025-10-02 09:07:14.091216696 +0000 UTC m=+0.182208103 container attach 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:07:14 np0005465604 nice_brown[413371]: 167 167
Oct  2 05:07:14 np0005465604 systemd[1]: libpod-22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71.scope: Deactivated successfully.
Oct  2 05:07:14 np0005465604 conmon[413371]: conmon 22d14a50149e947d89c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71.scope/container/memory.events
Oct  2 05:07:14 np0005465604 podman[413355]: 2025-10-02 09:07:14.094927512 +0000 UTC m=+0.185918909 container died 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 05:07:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4df60a9b8fc40f1b30df43c4d86ad93d84cf5e67ae61cdb44ae992d20a2cdbce-merged.mount: Deactivated successfully.
Oct  2 05:07:14 np0005465604 podman[413355]: 2025-10-02 09:07:14.148368582 +0000 UTC m=+0.239359969 container remove 22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brown, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:07:14 np0005465604 systemd[1]: libpod-conmon-22d14a50149e947d89c3989c15c05cd370c782ce956cfab30a74b15637e33d71.scope: Deactivated successfully.
Oct  2 05:07:14 np0005465604 podman[413395]: 2025-10-02 09:07:14.355061645 +0000 UTC m=+0.043688808 container create e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:07:14 np0005465604 systemd[1]: Started libpod-conmon-e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1.scope.
Oct  2 05:07:14 np0005465604 podman[413395]: 2025-10-02 09:07:14.338500971 +0000 UTC m=+0.027128114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:07:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:07:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:14 np0005465604 podman[413395]: 2025-10-02 09:07:14.461008598 +0000 UTC m=+0.149635821 container init e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 05:07:14 np0005465604 podman[413395]: 2025-10-02 09:07:14.475091275 +0000 UTC m=+0.163718408 container start e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:07:14 np0005465604 podman[413395]: 2025-10-02 09:07:14.478102079 +0000 UTC m=+0.166729232 container attach e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:07:15 np0005465604 thirsty_fermat[413411]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:07:15 np0005465604 thirsty_fermat[413411]: --> relative data size: 1.0
Oct  2 05:07:15 np0005465604 thirsty_fermat[413411]: --> All data devices are unavailable
Oct  2 05:07:15 np0005465604 systemd[1]: libpod-e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1.scope: Deactivated successfully.
Oct  2 05:07:15 np0005465604 podman[413440]: 2025-10-02 09:07:15.535848849 +0000 UTC m=+0.028383193 container died e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:07:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay-092ca119c11d8923d6c7c85457ef2a64dc3bfdcefac2768f55fa06a03175f1e6-merged.mount: Deactivated successfully.
Oct  2 05:07:15 np0005465604 podman[413440]: 2025-10-02 09:07:15.600600581 +0000 UTC m=+0.093134915 container remove e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:07:15 np0005465604 systemd[1]: libpod-conmon-e3ab63f9c1f5b66cc8f0c1bf8a3948a5c8f2e6437657b28917db11438c50f6e1.scope: Deactivated successfully.
Oct  2 05:07:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:07:16 np0005465604 nova_compute[260603]: 2025-10-02 09:07:15.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:07:16 np0005465604 podman[413596]: 2025-10-02 09:07:16.277378992 +0000 UTC m=+0.062979197 container create a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:07:16 np0005465604 systemd[1]: Started libpod-conmon-a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa.scope.
Oct  2 05:07:16 np0005465604 podman[413596]: 2025-10-02 09:07:16.255619306 +0000 UTC m=+0.041219491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:07:16 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:07:16 np0005465604 podman[413596]: 2025-10-02 09:07:16.395373999 +0000 UTC m=+0.180974214 container init a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 05:07:16 np0005465604 podman[413596]: 2025-10-02 09:07:16.405944608 +0000 UTC m=+0.191544813 container start a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:07:16 np0005465604 fervent_brattain[413612]: 167 167
Oct  2 05:07:16 np0005465604 systemd[1]: libpod-a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa.scope: Deactivated successfully.
Oct  2 05:07:16 np0005465604 conmon[413612]: conmon a944f4081f9d4d00b7c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa.scope/container/memory.events
Oct  2 05:07:16 np0005465604 podman[413596]: 2025-10-02 09:07:16.415259318 +0000 UTC m=+0.200859533 container attach a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:07:16 np0005465604 podman[413596]: 2025-10-02 09:07:16.416599939 +0000 UTC m=+0.202200104 container died a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 05:07:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-602fa73cd6703b527709bff6586a02c71414f09837ff88f3e982299f23304e4d-merged.mount: Deactivated successfully.
Oct  2 05:07:16 np0005465604 podman[413596]: 2025-10-02 09:07:16.461293488 +0000 UTC m=+0.246893653 container remove a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:07:16 np0005465604 systemd[1]: libpod-conmon-a944f4081f9d4d00b7c90672089e2ee05df881aec1fa4e338760ea7a89dd77fa.scope: Deactivated successfully.
Oct  2 05:07:16 np0005465604 podman[413634]: 2025-10-02 09:07:16.628989729 +0000 UTC m=+0.053935227 container create 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 05:07:16 np0005465604 systemd[1]: Started libpod-conmon-519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d.scope.
Oct  2 05:07:16 np0005465604 podman[413634]: 2025-10-02 09:07:16.605644014 +0000 UTC m=+0.030589492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:07:16 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:07:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecb1fb191b9a6f40f2e76f36d2fab5e00b24d4db0079b16e28717891dc51436b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecb1fb191b9a6f40f2e76f36d2fab5e00b24d4db0079b16e28717891dc51436b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecb1fb191b9a6f40f2e76f36d2fab5e00b24d4db0079b16e28717891dc51436b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecb1fb191b9a6f40f2e76f36d2fab5e00b24d4db0079b16e28717891dc51436b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:16 np0005465604 podman[413634]: 2025-10-02 09:07:16.739615187 +0000 UTC m=+0.164560655 container init 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:07:16 np0005465604 podman[413634]: 2025-10-02 09:07:16.748050379 +0000 UTC m=+0.172995837 container start 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:07:16 np0005465604 podman[413634]: 2025-10-02 09:07:16.752865819 +0000 UTC m=+0.177811307 container attach 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:07:16 np0005465604 nova_compute[260603]: 2025-10-02 09:07:16.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]: {
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:    "0": [
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:        {
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "devices": [
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "/dev/loop3"
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            ],
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_name": "ceph_lv0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_size": "21470642176",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "name": "ceph_lv0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "tags": {
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.cluster_name": "ceph",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.crush_device_class": "",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.encrypted": "0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.osd_id": "0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.type": "block",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.vdo": "0"
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            },
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "type": "block",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "vg_name": "ceph_vg0"
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:        }
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:    ],
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:    "1": [
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:        {
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "devices": [
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "/dev/loop4"
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            ],
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_name": "ceph_lv1",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_size": "21470642176",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "name": "ceph_lv1",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "tags": {
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.cluster_name": "ceph",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.crush_device_class": "",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.encrypted": "0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.osd_id": "1",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.type": "block",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.vdo": "0"
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            },
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "type": "block",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "vg_name": "ceph_vg1"
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:        }
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:    ],
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:    "2": [
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:        {
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "devices": [
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "/dev/loop5"
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            ],
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_name": "ceph_lv2",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_size": "21470642176",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "name": "ceph_lv2",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "tags": {
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.cluster_name": "ceph",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.crush_device_class": "",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.encrypted": "0",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.osd_id": "2",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.type": "block",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:                "ceph.vdo": "0"
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            },
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "type": "block",
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:            "vg_name": "ceph_vg2"
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:        }
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]:    ]
Oct  2 05:07:17 np0005465604 quizzical_hoover[413651]: }
Oct  2 05:07:17 np0005465604 systemd[1]: libpod-519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d.scope: Deactivated successfully.
Oct  2 05:07:17 np0005465604 podman[413634]: 2025-10-02 09:07:17.60629686 +0000 UTC m=+1.031242328 container died 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:07:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ecb1fb191b9a6f40f2e76f36d2fab5e00b24d4db0079b16e28717891dc51436b-merged.mount: Deactivated successfully.
Oct  2 05:07:17 np0005465604 podman[413634]: 2025-10-02 09:07:17.678108872 +0000 UTC m=+1.103054330 container remove 519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 05:07:17 np0005465604 systemd[1]: libpod-conmon-519363fa540d6c8f05b23c5c8cd9c002b1ddbaa4744f2ecaffa535402928ad4d.scope: Deactivated successfully.
Oct  2 05:07:17 np0005465604 podman[413672]: 2025-10-02 09:07:17.757706575 +0000 UTC m=+0.099858484 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 05:07:17 np0005465604 podman[413661]: 2025-10-02 09:07:17.758070216 +0000 UTC m=+0.096914892 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:07:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:07:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:18 np0005465604 podman[413852]: 2025-10-02 09:07:18.487683849 +0000 UTC m=+0.121639831 container create 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:07:18 np0005465604 podman[413852]: 2025-10-02 09:07:18.403470323 +0000 UTC m=+0.037426315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:07:18 np0005465604 systemd[1]: Started libpod-conmon-8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab.scope.
Oct  2 05:07:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:07:18 np0005465604 podman[413852]: 2025-10-02 09:07:18.584024393 +0000 UTC m=+0.217980445 container init 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct  2 05:07:18 np0005465604 podman[413852]: 2025-10-02 09:07:18.593320092 +0000 UTC m=+0.227276104 container start 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:07:18 np0005465604 pedantic_tu[413869]: 167 167
Oct  2 05:07:18 np0005465604 systemd[1]: libpod-8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab.scope: Deactivated successfully.
Oct  2 05:07:18 np0005465604 podman[413852]: 2025-10-02 09:07:18.602709274 +0000 UTC m=+0.236665256 container attach 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:07:18 np0005465604 podman[413852]: 2025-10-02 09:07:18.603979714 +0000 UTC m=+0.237935726 container died 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 05:07:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ecf1f9c1fb4efea777fe93973e501d5e4bfec1cd7aa1274c5284f87ea024a309-merged.mount: Deactivated successfully.
Oct  2 05:07:18 np0005465604 podman[413852]: 2025-10-02 09:07:18.709203044 +0000 UTC m=+0.343159026 container remove 8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:07:18 np0005465604 systemd[1]: libpod-conmon-8a8cf8904817c8b44874ca39f92c0e9e39107a39f0465435dd2ab034f79be1ab.scope: Deactivated successfully.
Oct  2 05:07:18 np0005465604 podman[413893]: 2025-10-02 09:07:18.938163208 +0000 UTC m=+0.075560118 container create 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 05:07:18 np0005465604 podman[413893]: 2025-10-02 09:07:18.899548429 +0000 UTC m=+0.036945419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:07:19 np0005465604 systemd[1]: Started libpod-conmon-078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d.scope.
Oct  2 05:07:19 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:07:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678e085ce6bc85f623ac5167ea4941398005a9bed9e6404d725ff9ab01c9be41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678e085ce6bc85f623ac5167ea4941398005a9bed9e6404d725ff9ab01c9be41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678e085ce6bc85f623ac5167ea4941398005a9bed9e6404d725ff9ab01c9be41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678e085ce6bc85f623ac5167ea4941398005a9bed9e6404d725ff9ab01c9be41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:19 np0005465604 podman[413893]: 2025-10-02 09:07:19.073791184 +0000 UTC m=+0.211188124 container init 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:07:19 np0005465604 podman[413893]: 2025-10-02 09:07:19.08172094 +0000 UTC m=+0.219117840 container start 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 05:07:19 np0005465604 podman[413893]: 2025-10-02 09:07:19.08494667 +0000 UTC m=+0.222343570 container attach 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:07:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]: {
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "osd_id": 2,
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "type": "bluestore"
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:    },
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "osd_id": 1,
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "type": "bluestore"
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:    },
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "osd_id": 0,
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:        "type": "bluestore"
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]:    }
Oct  2 05:07:20 np0005465604 nervous_lederberg[413909]: }
Oct  2 05:07:20 np0005465604 systemd[1]: libpod-078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d.scope: Deactivated successfully.
Oct  2 05:07:20 np0005465604 systemd[1]: libpod-078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d.scope: Consumed 1.017s CPU time.
Oct  2 05:07:20 np0005465604 podman[413893]: 2025-10-02 09:07:20.093459131 +0000 UTC m=+1.230856071 container died 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:07:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:20.116 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:07:20 np0005465604 nova_compute[260603]: 2025-10-02 09:07:20.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:20 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:20.119 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:07:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-678e085ce6bc85f623ac5167ea4941398005a9bed9e6404d725ff9ab01c9be41-merged.mount: Deactivated successfully.
Oct  2 05:07:20 np0005465604 podman[413893]: 2025-10-02 09:07:20.152002199 +0000 UTC m=+1.289399099 container remove 078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:07:20 np0005465604 systemd[1]: libpod-conmon-078c4f5880bd30a4c48d8060c4db9f66e98a074ba211c9370eba9d202ee8aa5d.scope: Deactivated successfully.
Oct  2 05:07:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:07:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:07:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:07:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:07:20 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e3250fec-268f-4232-bffe-17f296eec8eb does not exist
Oct  2 05:07:20 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ba9cae1f-efd4-483e-9416-3ef110f745bc does not exist
Oct  2 05:07:21 np0005465604 nova_compute[260603]: 2025-10-02 09:07:20.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:07:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:07:21 np0005465604 nova_compute[260603]: 2025-10-02 09:07:21.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:07:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:07:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3198808913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:07:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:07:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3198808913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:07:22 np0005465604 nova_compute[260603]: 2025-10-02 09:07:22.363 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:22 np0005465604 nova_compute[260603]: 2025-10-02 09:07:22.363 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:22 np0005465604 nova_compute[260603]: 2025-10-02 09:07:22.398 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:07:22 np0005465604 nova_compute[260603]: 2025-10-02 09:07:22.466 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:22 np0005465604 nova_compute[260603]: 2025-10-02 09:07:22.466 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:22 np0005465604 nova_compute[260603]: 2025-10-02 09:07:22.473 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:07:22 np0005465604 nova_compute[260603]: 2025-10-02 09:07:22.474 2 INFO nova.compute.claims [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:07:22 np0005465604 nova_compute[260603]: 2025-10-02 09:07:22.715 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:07:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:07:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1190906338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.132 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.137 2 DEBUG nova.compute.provider_tree [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.154 2 DEBUG nova.scheduler.client.report [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.173 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.174 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:07:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.218 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.219 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.239 2 INFO nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.257 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.359 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.361 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.362 2 INFO nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Creating image(s)#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.393 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.427 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.457 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.461 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.503 2 DEBUG nova.policy [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.536 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.537 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.538 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.538 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.571 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.576 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 086604c0-28d5-41d4-995c-17db322b3ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:07:23 np0005465604 nova_compute[260603]: 2025-10-02 09:07:23.929 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 086604c0-28d5-41d4-995c-17db322b3ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:07:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:07:24 np0005465604 nova_compute[260603]: 2025-10-02 09:07:24.018 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:07:24 np0005465604 nova_compute[260603]: 2025-10-02 09:07:24.145 2 DEBUG nova.objects.instance [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 086604c0-28d5-41d4-995c-17db322b3ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:07:24 np0005465604 nova_compute[260603]: 2025-10-02 09:07:24.168 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:07:24 np0005465604 nova_compute[260603]: 2025-10-02 09:07:24.168 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Ensure instance console log exists: /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:07:24 np0005465604 nova_compute[260603]: 2025-10-02 09:07:24.169 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:24 np0005465604 nova_compute[260603]: 2025-10-02 09:07:24.169 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:24 np0005465604 nova_compute[260603]: 2025-10-02 09:07:24.169 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:24 np0005465604 nova_compute[260603]: 2025-10-02 09:07:24.887 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Successfully created port: 1bf113e0-562b-45fb-9b97-aa76d5dac283 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:07:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:25.121 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:25 np0005465604 nova_compute[260603]: 2025-10-02 09:07:25.399 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Successfully created port: 97679643-cd71-4857-a615-c21d643d15c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:07:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 41 MiB data, 919 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:07:26 np0005465604 nova_compute[260603]: 2025-10-02 09:07:26.021 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Successfully updated port: 1bf113e0-562b-45fb-9b97-aa76d5dac283 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:07:26 np0005465604 nova_compute[260603]: 2025-10-02 09:07:26.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:26 np0005465604 nova_compute[260603]: 2025-10-02 09:07:26.146 2 DEBUG nova.compute.manager [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:07:26 np0005465604 nova_compute[260603]: 2025-10-02 09:07:26.147 2 DEBUG nova.compute.manager [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing instance network info cache due to event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:07:26 np0005465604 nova_compute[260603]: 2025-10-02 09:07:26.147 2 DEBUG oslo_concurrency.lockutils [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:07:26 np0005465604 nova_compute[260603]: 2025-10-02 09:07:26.147 2 DEBUG oslo_concurrency.lockutils [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:07:26 np0005465604 nova_compute[260603]: 2025-10-02 09:07:26.147 2 DEBUG nova.network.neutron [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing network info cache for port 1bf113e0-562b-45fb-9b97-aa76d5dac283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:07:26 np0005465604 nova_compute[260603]: 2025-10-02 09:07:26.370 2 DEBUG nova.network.neutron [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:07:26 np0005465604 nova_compute[260603]: 2025-10-02 09:07:26.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:26 np0005465604 nova_compute[260603]: 2025-10-02 09:07:26.987 2 DEBUG nova.network.neutron [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:07:27 np0005465604 nova_compute[260603]: 2025-10-02 09:07:27.001 2 DEBUG oslo_concurrency.lockutils [req-6bf67a94-27c8-42e5-8b3e-43fbe83cfd50 req-048eb441-a4b8-46db-b086-b3fcee969888 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:07:27 np0005465604 nova_compute[260603]: 2025-10-02 09:07:27.074 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Successfully updated port: 97679643-cd71-4857-a615-c21d643d15c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:07:27 np0005465604 nova_compute[260603]: 2025-10-02 09:07:27.122 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:07:27 np0005465604 nova_compute[260603]: 2025-10-02 09:07:27.122 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:07:27 np0005465604 nova_compute[260603]: 2025-10-02 09:07:27.122 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:07:27 np0005465604 nova_compute[260603]: 2025-10-02 09:07:27.242 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:07:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 71 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:07:28
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data']
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:07:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:28 np0005465604 nova_compute[260603]: 2025-10-02 09:07:28.217 2 DEBUG nova.compute.manager [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-changed-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:07:28 np0005465604 nova_compute[260603]: 2025-10-02 09:07:28.218 2 DEBUG nova.compute.manager [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing instance network info cache due to event network-changed-97679643-cd71-4857-a615-c21d643d15c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:07:28 np0005465604 nova_compute[260603]: 2025-10-02 09:07:28.218 2 DEBUG oslo_concurrency.lockutils [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:07:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:07:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:07:31 np0005465604 nova_compute[260603]: 2025-10-02 09:07:31.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:31 np0005465604 nova_compute[260603]: 2025-10-02 09:07:31.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.509 2 DEBUG nova.network.neutron [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.550 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.551 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance network_info: |[{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.551 2 DEBUG oslo_concurrency.lockutils [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.552 2 DEBUG nova.network.neutron [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing network info cache for port 97679643-cd71-4857-a615-c21d643d15c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.558 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Start _get_guest_xml network_info=[{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.564 2 WARNING nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.571 2 DEBUG nova.virt.libvirt.host [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.572 2 DEBUG nova.virt.libvirt.host [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.576 2 DEBUG nova.virt.libvirt.host [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.577 2 DEBUG nova.virt.libvirt.host [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.577 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.578 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.579 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.579 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.580 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.580 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.581 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.581 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.581 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.582 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.582 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.583 2 DEBUG nova.virt.hardware [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:07:32 np0005465604 nova_compute[260603]: 2025-10-02 09:07:32.588 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:07:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:07:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100699432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.011 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.044 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.050 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:07:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:07:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1284669158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.514 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.516 2 DEBUG nova.virt.libvirt.vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:07:23Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.516 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.517 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.518 2 DEBUG nova.virt.libvirt.vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:07:23Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.518 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.519 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.520 2 DEBUG nova.objects.instance [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 086604c0-28d5-41d4-995c-17db322b3ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.536 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  <uuid>086604c0-28d5-41d4-995c-17db322b3ded</uuid>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  <name>instance-0000008b</name>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-1892205724</nova:name>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:07:32</nova:creationTime>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <nova:port uuid="1bf113e0-562b-45fb-9b97-aa76d5dac283">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <nova:port uuid="97679643-cd71-4857-a615-c21d643d15c2">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe9d:b136" ipVersion="6"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <entry name="serial">086604c0-28d5-41d4-995c-17db322b3ded</entry>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <entry name="uuid">086604c0-28d5-41d4-995c-17db322b3ded</entry>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/086604c0-28d5-41d4-995c-17db322b3ded_disk">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/086604c0-28d5-41d4-995c-17db322b3ded_disk.config">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:fc:e1:b8"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <target dev="tap1bf113e0-56"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:9d:b1:36"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <target dev="tap97679643-cd"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/console.log" append="off"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:07:33 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:07:33 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:07:33 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:07:33 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.538 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Preparing to wait for external event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.538 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.539 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.539 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.539 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Preparing to wait for external event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.539 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.540 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.540 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.541 2 DEBUG nova.virt.libvirt.vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:07:23Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.541 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.542 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.543 2 DEBUG os_vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.544 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bf113e0-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1bf113e0-56, col_values=(('external_ids', {'iface-id': '1bf113e0-562b-45fb-9b97-aa76d5dac283', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:e1:b8', 'vm-uuid': '086604c0-28d5-41d4-995c-17db322b3ded'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:33 np0005465604 NetworkManager[45129]: <info>  [1759396053.5514] manager: (tap1bf113e0-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/604)
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.561 2 INFO os_vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56')#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.562 2 DEBUG nova.virt.libvirt.vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:07:23Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.562 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.563 2 DEBUG nova.network.os_vif_util [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.563 2 DEBUG os_vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.564 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.564 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97679643-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97679643-cd, col_values=(('external_ids', {'iface-id': '97679643-cd71-4857-a615-c21d643d15c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:b1:36', 'vm-uuid': '086604c0-28d5-41d4-995c-17db322b3ded'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:33 np0005465604 NetworkManager[45129]: <info>  [1759396053.5675] manager: (tap97679643-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.575 2 INFO os_vif [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd')#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.637 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.637 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.638 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:fc:e1:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.638 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:9d:b1:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.639 2 INFO nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Using config drive#033[00m
Oct  2 05:07:33 np0005465604 nova_compute[260603]: 2025-10-02 09:07:33.668 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.001 2 INFO nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Creating config drive at /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.006 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8s4gwmh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:07:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.169 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8s4gwmh" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.192 2 DEBUG nova.storage.rbd_utils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 086604c0-28d5-41d4-995c-17db322b3ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.195 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config 086604c0-28d5-41d4-995c-17db322b3ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.339 2 DEBUG oslo_concurrency.processutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config 086604c0-28d5-41d4-995c-17db322b3ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.341 2 INFO nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Deleting local config drive /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded/disk.config because it was imported into RBD.#033[00m
Oct  2 05:07:34 np0005465604 kernel: tap1bf113e0-56: entered promiscuous mode
Oct  2 05:07:34 np0005465604 NetworkManager[45129]: <info>  [1759396054.3933] manager: (tap1bf113e0-56): new Tun device (/org/freedesktop/NetworkManager/Devices/606)
Oct  2 05:07:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:34Z|01500|binding|INFO|Claiming lport 1bf113e0-562b-45fb-9b97-aa76d5dac283 for this chassis.
Oct  2 05:07:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:34Z|01501|binding|INFO|1bf113e0-562b-45fb-9b97-aa76d5dac283: Claiming fa:16:3e:fc:e1:b8 10.100.0.7
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:34 np0005465604 NetworkManager[45129]: <info>  [1759396054.4078] manager: (tap97679643-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Oct  2 05:07:34 np0005465604 kernel: tap97679643-cd: entered promiscuous mode
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.417 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:e1:b8 10.100.0.7'], port_security=['fa:16:3e:fc:e1:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '086604c0-28d5-41d4-995c-17db322b3ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8531ee7b-e9fa-4aeb-a901-a54a8597544d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=1bf113e0-562b-45fb-9b97-aa76d5dac283) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.419 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 1bf113e0-562b-45fb-9b97-aa76d5dac283 in datapath 0cd32ebe-6aa8-4400-8c00-a3546d677f2c bound to our chassis#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.420 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cd32ebe-6aa8-4400-8c00-a3546d677f2c#033[00m
Oct  2 05:07:34 np0005465604 systemd-udevd[414333]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:07:34 np0005465604 systemd-udevd[414332]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.433 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f9b1cf-addd-462b-86af-b69f8c336fda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.434 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0cd32ebe-61 in ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.436 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0cd32ebe-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.436 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[264435ea-06fa-41df-93cd-fca80705e84d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.437 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[78f0cea0-7e3f-49ca-8b43-f0ccf517bc2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 NetworkManager[45129]: <info>  [1759396054.4403] device (tap97679643-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:07:34 np0005465604 NetworkManager[45129]: <info>  [1759396054.4418] device (tap97679643-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:07:34 np0005465604 NetworkManager[45129]: <info>  [1759396054.4476] device (tap1bf113e0-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:07:34 np0005465604 NetworkManager[45129]: <info>  [1759396054.4489] device (tap1bf113e0-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:07:34 np0005465604 systemd-machined[214636]: New machine qemu-173-instance-0000008b.
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.454 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[da0af809-938a-43b1-841f-3be5f0662cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:34Z|01502|binding|INFO|Claiming lport 97679643-cd71-4857-a615-c21d643d15c2 for this chassis.
Oct  2 05:07:34 np0005465604 systemd[1]: Started Virtual Machine qemu-173-instance-0000008b.
Oct  2 05:07:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:34Z|01503|binding|INFO|97679643-cd71-4857-a615-c21d643d15c2: Claiming fa:16:3e:9d:b1:36 2001:db8::f816:3eff:fe9d:b136
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.487 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe34197-7d3a-40ab-88bd-b049876db650]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:34Z|01504|binding|INFO|Setting lport 1bf113e0-562b-45fb-9b97-aa76d5dac283 ovn-installed in OVS
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:34Z|01505|binding|INFO|Setting lport 97679643-cd71-4857-a615-c21d643d15c2 ovn-installed in OVS
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:34Z|01506|binding|INFO|Setting lport 97679643-cd71-4857-a615-c21d643d15c2 up in Southbound
Oct  2 05:07:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:34Z|01507|binding|INFO|Setting lport 1bf113e0-562b-45fb-9b97-aa76d5dac283 up in Southbound
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.511 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:b1:36 2001:db8::f816:3eff:fe9d:b136'], port_security=['fa:16:3e:9d:b1:36 2001:db8::f816:3eff:fe9d:b136'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:b136/64', 'neutron:device_id': '086604c0-28d5-41d4-995c-17db322b3ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d94def-f0a5-4beb-85d6-bc3ad333488d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=97679643-cd71-4857-a615-c21d643d15c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.522 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3600b49d-d5b6-4862-a4f4-1274b1744284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.527 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[23614e40-88cc-41cc-9b41-09de7d4fff13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 NetworkManager[45129]: <info>  [1759396054.5297] manager: (tap0cd32ebe-60): new Veth device (/org/freedesktop/NetworkManager/Devices/608)
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.566 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c55bf3-e585-4e92-bd1d-78598aeb3a89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.569 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3d8f61-5b9b-45cf-8ceb-2f89effcb703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 NetworkManager[45129]: <info>  [1759396054.5921] device (tap0cd32ebe-60): carrier: link connected
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.601 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1d40cefd-9b80-4389-a1bc-16106104c1b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.614 2 DEBUG nova.network.neutron [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updated VIF entry in instance network info cache for port 97679643-cd71-4857-a615-c21d643d15c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.614 2 DEBUG nova.network.neutron [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.631 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b7cb7584-374a-4a8f-b84a-b4bc1aae2e82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cd32ebe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:64:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693319, 'reachable_time': 33516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414369, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.646 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e90b06c-a38e-4b6e-ae79-ab09348069b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:64c4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693319, 'tstamp': 693319}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414370, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.670 2 DEBUG oslo_concurrency.lockutils [req-c6d928d3-4943-49a6-a4d3-91ee7c7b3325 req-8e6f6180-75de-4694-8ff7-e50f283eef70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.668 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5bf68b-08de-4353-a820-3bb165362e20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cd32ebe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:64:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693319, 'reachable_time': 33516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 414371, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.709 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[683dbdba-3dd8-42ac-b84d-782f2d7a719f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.785 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09196310-f5a3-4166-a118-c04ea09e0529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.788 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cd32ebe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.788 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.789 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cd32ebe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:34 np0005465604 kernel: tap0cd32ebe-60: entered promiscuous mode
Oct  2 05:07:34 np0005465604 NetworkManager[45129]: <info>  [1759396054.7928] manager: (tap0cd32ebe-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.796 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cd32ebe-60, col_values=(('external_ids', {'iface-id': '251c2ade-6e56-457d-a6d2-c79238c2f10d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:34Z|01508|binding|INFO|Releasing lport 251c2ade-6e56-457d-a6d2-c79238c2f10d from this chassis (sb_readonly=0)
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.798 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0cd32ebe-6aa8-4400-8c00-a3546d677f2c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0cd32ebe-6aa8-4400-8c00-a3546d677f2c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.799 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fe74a79c-cffc-4429-8ef2-480ea911288a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.800 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-0cd32ebe-6aa8-4400-8c00-a3546d677f2c
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/0cd32ebe-6aa8-4400-8c00-a3546d677f2c.pid.haproxy
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 0cd32ebe-6aa8-4400-8c00-a3546d677f2c
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.802 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'env', 'PROCESS_TAG=haproxy-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0cd32ebe-6aa8-4400-8c00-a3546d677f2c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.813 2 DEBUG nova.compute.manager [req-98e512eb-915d-4e0b-8fea-2384ec0e0a2c req-368f0a49-db0e-46e2-9328-82e5b5d1a86c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.813 2 DEBUG oslo_concurrency.lockutils [req-98e512eb-915d-4e0b-8fea-2384ec0e0a2c req-368f0a49-db0e-46e2-9328-82e5b5d1a86c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.814 2 DEBUG oslo_concurrency.lockutils [req-98e512eb-915d-4e0b-8fea-2384ec0e0a2c req-368f0a49-db0e-46e2-9328-82e5b5d1a86c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.814 2 DEBUG oslo_concurrency.lockutils [req-98e512eb-915d-4e0b-8fea-2384ec0e0a2c req-368f0a49-db0e-46e2-9328-82e5b5d1a86c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.814 2 DEBUG nova.compute.manager [req-98e512eb-915d-4e0b-8fea-2384ec0e0a2c req-368f0a49-db0e-46e2-9328-82e5b5d1a86c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Processing event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.819 2 DEBUG nova.compute.manager [req-9352d819-3b57-4b80-8c70-a02aaa90db5e req-3a2d5397-f4ba-46c1-b40a-e267df74ac76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.820 2 DEBUG oslo_concurrency.lockutils [req-9352d819-3b57-4b80-8c70-a02aaa90db5e req-3a2d5397-f4ba-46c1-b40a-e267df74ac76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.820 2 DEBUG oslo_concurrency.lockutils [req-9352d819-3b57-4b80-8c70-a02aaa90db5e req-3a2d5397-f4ba-46c1-b40a-e267df74ac76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.820 2 DEBUG oslo_concurrency.lockutils [req-9352d819-3b57-4b80-8c70-a02aaa90db5e req-3a2d5397-f4ba-46c1-b40a-e267df74ac76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:34 np0005465604 nova_compute[260603]: 2025-10-02 09:07:34.821 2 DEBUG nova.compute.manager [req-9352d819-3b57-4b80-8c70-a02aaa90db5e req-3a2d5397-f4ba-46c1-b40a-e267df74ac76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Processing event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.845 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.846 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:34.847 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:35 np0005465604 podman[414446]: 2025-10-02 09:07:35.229962498 +0000 UTC m=+0.064553257 container create fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 05:07:35 np0005465604 systemd[1]: Started libpod-conmon-fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df.scope.
Oct  2 05:07:35 np0005465604 podman[414446]: 2025-10-02 09:07:35.19428811 +0000 UTC m=+0.028878859 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:07:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:07:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df163648e30c438ee7956b6dd528661349b15b516a55e59776780490c120fbcc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:35 np0005465604 podman[414446]: 2025-10-02 09:07:35.33362561 +0000 UTC m=+0.168216359 container init fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 05:07:35 np0005465604 podman[414446]: 2025-10-02 09:07:35.3449148 +0000 UTC m=+0.179505529 container start fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.364 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396055.363428, 086604c0-28d5-41d4-995c-17db322b3ded => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.365 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] VM Started (Lifecycle Event)#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.367 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:07:35 np0005465604 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [NOTICE]   (414465) : New worker (414467) forked
Oct  2 05:07:35 np0005465604 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [NOTICE]   (414465) : Loading success.
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.373 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.378 2 INFO nova.virt.libvirt.driver [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance spawned successfully.#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.378 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.397 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.401 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 97679643-cd71-4857-a615-c21d643d15c2 in datapath 3b4c5c4f-7410-4ce4-9e83-46e3156b929c unbound from our chassis#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.401 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.403 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b4c5c4f-7410-4ce4-9e83-46e3156b929c#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.416 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[de872a1f-f0d2-4072-92a6-99ef2dc31900]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.417 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3b4c5c4f-71 in ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.420 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3b4c5c4f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.420 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[23cff393-80d1-4b88-922b-7fecae9c071a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.421 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[44b81c19-f69e-43d1-a68a-e6224d3c67f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.422 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.422 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.423 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.424 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.425 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.425 2 DEBUG nova.virt.libvirt.driver [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.429 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.429 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396055.3637102, 086604c0-28d5-41d4-995c-17db322b3ded => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.429 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.437 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[38bbfc6a-eb61-4164-9d06-db59a4702a3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.469 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86c9befa-9c73-4c67-86dc-7715042429f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.494 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.503 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf36a40-7e57-4545-9df3-499793e94a14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.505 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396055.3720748, 086604c0-28d5-41d4-995c-17db322b3ded => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.505 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.515 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c15d01-f212-4e6d-97ba-a6882ead5de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 systemd-udevd[414354]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:07:35 np0005465604 NetworkManager[45129]: <info>  [1759396055.5226] manager: (tap3b4c5c4f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/610)
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.560 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.563 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.574 2 INFO nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Took 12.21 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.574 2 DEBUG nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.579 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f34fe1-922e-4038-b6f0-3c415ec6347a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.583 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[96e43c85-ba2d-4b9a-97bb-ac3f6de48ad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.587 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:07:35 np0005465604 NetworkManager[45129]: <info>  [1759396055.6205] device (tap3b4c5c4f-70): carrier: link connected
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.630 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[78a9a571-aefe-4d7a-a66d-61caa3d0077b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.644 2 INFO nova.compute.manager [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Took 13.20 seconds to build instance.#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.658 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[88f43043-2cab-44ad-9d6f-914295cdad33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b4c5c4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:e1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693422, 'reachable_time': 34478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414486, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.672 2 DEBUG oslo_concurrency.lockutils [None req-06e3e036-70f7-4aa9-b63f-9cb70cc09278 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.677 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[179dbf13-75bd-4646-ab35-21eb25c26afc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:e118'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693422, 'tstamp': 693422}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414487, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.696 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c63354-924c-4f2e-905f-0c8847246ec9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b4c5c4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:e1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693422, 'reachable_time': 34478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 414488, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.727 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c7363c-87c5-4347-a3af-8d2ffbc1c08d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.768 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bed96f02-033e-4293-876b-1f5d3d8dbae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.769 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b4c5c4f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.770 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.770 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b4c5c4f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:35 np0005465604 kernel: tap3b4c5c4f-70: entered promiscuous mode
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:35 np0005465604 NetworkManager[45129]: <info>  [1759396055.7772] manager: (tap3b4c5c4f-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/611)
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.778 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b4c5c4f-70, col_values=(('external_ids', {'iface-id': 'f16ced3a-20be-47b1-aeb4-8904dff1366f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:35 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:35Z|01509|binding|INFO|Releasing lport f16ced3a-20be-47b1-aeb4-8904dff1366f from this chassis (sb_readonly=0)
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.783 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3b4c5c4f-7410-4ce4-9e83-46e3156b929c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3b4c5c4f-7410-4ce4-9e83-46e3156b929c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.784 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[75a10915-ec6f-414a-86d2-041909eceb04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.784 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-3b4c5c4f-7410-4ce4-9e83-46e3156b929c
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/3b4c5c4f-7410-4ce4-9e83-46e3156b929c.pid.haproxy
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 3b4c5c4f-7410-4ce4-9e83-46e3156b929c
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:07:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:07:35.785 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'env', 'PROCESS_TAG=haproxy-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3b4c5c4f-7410-4ce4-9e83-46e3156b929c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:07:35 np0005465604 nova_compute[260603]: 2025-10-02 09:07:35.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 88 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:07:36 np0005465604 nova_compute[260603]: 2025-10-02 09:07:36.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:36 np0005465604 podman[414518]: 2025-10-02 09:07:36.235647661 +0000 UTC m=+0.051913154 container create e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 05:07:36 np0005465604 systemd[1]: Started libpod-conmon-e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea.scope.
Oct  2 05:07:36 np0005465604 podman[414518]: 2025-10-02 09:07:36.208339872 +0000 UTC m=+0.024605385 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:07:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:07:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da359b613ffa5d68b88a20f5d2354085bb5af6508d378872b99a7f638a446a90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:07:36 np0005465604 podman[414518]: 2025-10-02 09:07:36.322316395 +0000 UTC m=+0.138581918 container init e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:07:36 np0005465604 podman[414518]: 2025-10-02 09:07:36.329471016 +0000 UTC m=+0.145736509 container start e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 05:07:36 np0005465604 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [NOTICE]   (414538) : New worker (414540) forked
Oct  2 05:07:36 np0005465604 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [NOTICE]   (414538) : Loading success.
Oct  2 05:07:36 np0005465604 nova_compute[260603]: 2025-10-02 09:07:36.933 2 DEBUG nova.compute.manager [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:07:36 np0005465604 nova_compute[260603]: 2025-10-02 09:07:36.935 2 DEBUG oslo_concurrency.lockutils [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:36 np0005465604 nova_compute[260603]: 2025-10-02 09:07:36.936 2 DEBUG oslo_concurrency.lockutils [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:36 np0005465604 nova_compute[260603]: 2025-10-02 09:07:36.937 2 DEBUG oslo_concurrency.lockutils [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:36 np0005465604 nova_compute[260603]: 2025-10-02 09:07:36.937 2 DEBUG nova.compute.manager [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:07:36 np0005465604 nova_compute[260603]: 2025-10-02 09:07:36.938 2 WARNING nova.compute.manager [req-4a563e24-8137-4149-ab6b-a6560905f11d req-af2158d1-7d47-455d-b60b-4c1bbc91701f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received unexpected event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:07:37 np0005465604 nova_compute[260603]: 2025-10-02 09:07:37.017 2 DEBUG nova.compute.manager [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:07:37 np0005465604 nova_compute[260603]: 2025-10-02 09:07:37.017 2 DEBUG oslo_concurrency.lockutils [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:07:37 np0005465604 nova_compute[260603]: 2025-10-02 09:07:37.018 2 DEBUG oslo_concurrency.lockutils [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:07:37 np0005465604 nova_compute[260603]: 2025-10-02 09:07:37.018 2 DEBUG oslo_concurrency.lockutils [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:07:37 np0005465604 nova_compute[260603]: 2025-10-02 09:07:37.018 2 DEBUG nova.compute.manager [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:07:37 np0005465604 nova_compute[260603]: 2025-10-02 09:07:37.019 2 WARNING nova.compute.manager [req-2ba9b71f-1bf7-4cad-97bd-0e405da60e5d req-5c65ca03-0b7c-4493-b55d-027fcebebef2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received unexpected event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:07:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct  2 05:07:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:38 np0005465604 nova_compute[260603]: 2025-10-02 09:07:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:07:38 np0005465604 nova_compute[260603]: 2025-10-02 09:07:38.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:07:38 np0005465604 nova_compute[260603]: 2025-10-02 09:07:38.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003482863067330859 of space, bias 1.0, pg target 0.10448589201992577 quantized to 32 (current 32)
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:07:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:07:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 628 KiB/s wr, 74 op/s
Oct  2 05:07:40 np0005465604 NetworkManager[45129]: <info>  [1759396060.8730] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Oct  2 05:07:40 np0005465604 NetworkManager[45129]: <info>  [1759396060.8746] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/613)
Oct  2 05:07:40 np0005465604 nova_compute[260603]: 2025-10-02 09:07:40.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:40Z|01510|binding|INFO|Releasing lport 251c2ade-6e56-457d-a6d2-c79238c2f10d from this chassis (sb_readonly=0)
Oct  2 05:07:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:40Z|01511|binding|INFO|Releasing lport f16ced3a-20be-47b1-aeb4-8904dff1366f from this chassis (sb_readonly=0)
Oct  2 05:07:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:40Z|01512|binding|INFO|Releasing lport 251c2ade-6e56-457d-a6d2-c79238c2f10d from this chassis (sb_readonly=0)
Oct  2 05:07:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:40Z|01513|binding|INFO|Releasing lport f16ced3a-20be-47b1-aeb4-8904dff1366f from this chassis (sb_readonly=0)
Oct  2 05:07:40 np0005465604 nova_compute[260603]: 2025-10-02 09:07:40.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:40 np0005465604 nova_compute[260603]: 2025-10-02 09:07:40.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:41 np0005465604 nova_compute[260603]: 2025-10-02 09:07:41.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.127940) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061128007, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2054, "num_deletes": 251, "total_data_size": 3393546, "memory_usage": 3449888, "flush_reason": "Manual Compaction"}
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061147649, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3328059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54394, "largest_seqno": 56447, "table_properties": {"data_size": 3318701, "index_size": 5915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18897, "raw_average_key_size": 20, "raw_value_size": 3300113, "raw_average_value_size": 3518, "num_data_blocks": 262, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759395836, "oldest_key_time": 1759395836, "file_creation_time": 1759396061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 19758 microseconds, and 12529 cpu microseconds.
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.147707) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3328059 bytes OK
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.147729) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.149539) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.149553) EVENT_LOG_v1 {"time_micros": 1759396061149549, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.149570) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3384941, prev total WAL file size 3384941, number of live WAL files 2.
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.150453) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3250KB)], [128(8123KB)]
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061150495, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11646927, "oldest_snapshot_seqno": -1}
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7711 keys, 9932464 bytes, temperature: kUnknown
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061225619, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 9932464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9882203, "index_size": 29860, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19333, "raw_key_size": 200267, "raw_average_key_size": 25, "raw_value_size": 9745786, "raw_average_value_size": 1263, "num_data_blocks": 1165, "num_entries": 7711, "num_filter_entries": 7711, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.226253) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 9932464 bytes
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.228440) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.1 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 8225, records dropped: 514 output_compression: NoCompression
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.228469) EVENT_LOG_v1 {"time_micros": 1759396061228455, "job": 78, "event": "compaction_finished", "compaction_time_micros": 75562, "compaction_time_cpu_micros": 44258, "output_level": 6, "num_output_files": 1, "total_output_size": 9932464, "num_input_records": 8225, "num_output_records": 7711, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061230284, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396061233510, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.150274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.233691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.233695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.233697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.233699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:07:41 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:07:41.233700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:07:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:07:42 np0005465604 nova_compute[260603]: 2025-10-02 09:07:42.052 2 DEBUG nova.compute.manager [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:07:42 np0005465604 nova_compute[260603]: 2025-10-02 09:07:42.052 2 DEBUG nova.compute.manager [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing instance network info cache due to event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:07:42 np0005465604 nova_compute[260603]: 2025-10-02 09:07:42.052 2 DEBUG oslo_concurrency.lockutils [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:07:42 np0005465604 nova_compute[260603]: 2025-10-02 09:07:42.053 2 DEBUG oslo_concurrency.lockutils [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:07:42 np0005465604 nova_compute[260603]: 2025-10-02 09:07:42.053 2 DEBUG nova.network.neutron [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing network info cache for port 1bf113e0-562b-45fb-9b97-aa76d5dac283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:07:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:43 np0005465604 nova_compute[260603]: 2025-10-02 09:07:43.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:43 np0005465604 podman[414554]: 2025-10-02 09:07:43.985523125 +0000 UTC m=+0.049360696 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 05:07:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:07:44 np0005465604 nova_compute[260603]: 2025-10-02 09:07:44.016 2 DEBUG nova.network.neutron [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updated VIF entry in instance network info cache for port 1bf113e0-562b-45fb-9b97-aa76d5dac283. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:07:44 np0005465604 nova_compute[260603]: 2025-10-02 09:07:44.017 2 DEBUG nova.network.neutron [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:07:44 np0005465604 podman[414553]: 2025-10-02 09:07:44.027908691 +0000 UTC m=+0.091593537 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible)
Oct  2 05:07:44 np0005465604 nova_compute[260603]: 2025-10-02 09:07:44.042 2 DEBUG oslo_concurrency.lockutils [req-22ecf68a-23de-47f4-8b5a-112179d1b975 req-ad340304-ab37-4178-8e80-6faaad07d154 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:07:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 88 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:07:46 np0005465604 nova_compute[260603]: 2025-10-02 09:07:46.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:46 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:46Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:e1:b8 10.100.0.7
Oct  2 05:07:46 np0005465604 ovn_controller[152344]: 2025-10-02T09:07:46Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:e1:b8 10.100.0.7
Oct  2 05:07:47 np0005465604 podman[414602]: 2025-10-02 09:07:47.990350937 +0000 UTC m=+0.053641658 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct  2 05:07:47 np0005465604 podman[414601]: 2025-10-02 09:07:47.992904996 +0000 UTC m=+0.060873362 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible)
Oct  2 05:07:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 117 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 120 op/s
Oct  2 05:07:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:48 np0005465604 nova_compute[260603]: 2025-10-02 09:07:48.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct  2 05:07:51 np0005465604 nova_compute[260603]: 2025-10-02 09:07:51.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  2 05:07:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:53 np0005465604 nova_compute[260603]: 2025-10-02 09:07:53.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:07:53 np0005465604 nova_compute[260603]: 2025-10-02 09:07:53.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:07:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:07:56 np0005465604 nova_compute[260603]: 2025-10-02 09:07:56.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:56 np0005465604 nova_compute[260603]: 2025-10-02 09:07:56.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:07:56 np0005465604 nova_compute[260603]: 2025-10-02 09:07:56.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:07:56 np0005465604 nova_compute[260603]: 2025-10-02 09:07:56.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:07:56 np0005465604 nova_compute[260603]: 2025-10-02 09:07:56.842 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:07:56 np0005465604 nova_compute[260603]: 2025-10-02 09:07:56.843 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:07:56 np0005465604 nova_compute[260603]: 2025-10-02 09:07:56.843 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 05:07:56 np0005465604 nova_compute[260603]: 2025-10-02 09:07:56.844 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 086604c0-28d5-41d4-995c-17db322b3ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:07:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:07:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:07:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:07:58 np0005465604 nova_compute[260603]: 2025-10-02 09:07:58.545 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:07:58 np0005465604 nova_compute[260603]: 2025-10-02 09:07:58.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:07:58 np0005465604 nova_compute[260603]: 2025-10-02 09:07:58.717 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:07:58 np0005465604 nova_compute[260603]: 2025-10-02 09:07:58.718 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 05:07:58 np0005465604 nova_compute[260603]: 2025-10-02 09:07:58.718 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:08:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 290 KiB/s wr, 17 op/s
Oct  2 05:08:00 np0005465604 nova_compute[260603]: 2025-10-02 09:08:00.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:08:00 np0005465604 nova_compute[260603]: 2025-10-02 09:08:00.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:08:01 np0005465604 nova_compute[260603]: 2025-10-02 09:08:01.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:01 np0005465604 nova_compute[260603]: 2025-10-02 09:08:01.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:08:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 12 KiB/s wr, 2 op/s
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.143 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.144 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.183 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.247 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.248 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.257 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.258 2 INFO nova.compute.claims [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.396 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:08:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552982764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.893 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.899 2 DEBUG nova.compute.provider_tree [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.920 2 DEBUG nova.scheduler.client.report [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.948 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.948 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.951 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.952 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.952 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:08:02 np0005465604 nova_compute[260603]: 2025-10-02 09:08:02.952 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.035 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.036 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.053 2 INFO nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.070 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.152 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.154 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.154 2 INFO nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Creating image(s)#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.174 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.193 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.215 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.220 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.223183) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083223244, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 407, "num_deletes": 255, "total_data_size": 318079, "memory_usage": 327064, "flush_reason": "Manual Compaction"}
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083227237, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 315738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56448, "largest_seqno": 56854, "table_properties": {"data_size": 313263, "index_size": 574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5618, "raw_average_key_size": 17, "raw_value_size": 308500, "raw_average_value_size": 979, "num_data_blocks": 26, "num_entries": 315, "num_filter_entries": 315, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396062, "oldest_key_time": 1759396062, "file_creation_time": 1759396083, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 4085 microseconds, and 1445 cpu microseconds.
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.227277) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 315738 bytes OK
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.227292) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.229237) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.229255) EVENT_LOG_v1 {"time_micros": 1759396083229249, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.229274) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 315503, prev total WAL file size 315503, number of live WAL files 2.
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.229697) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323535' seq:72057594037927935, type:22 .. '6C6F676D0032353036' seq:0, type:0; will stop at (end)
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(308KB)], [131(9699KB)]
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083229831, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10248202, "oldest_snapshot_seqno": -1}
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.296 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.297 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.298 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.298 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7509 keys, 10140944 bytes, temperature: kUnknown
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083304935, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10140944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10091164, "index_size": 29891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 196983, "raw_average_key_size": 26, "raw_value_size": 9957447, "raw_average_value_size": 1326, "num_data_blocks": 1164, "num_entries": 7509, "num_filter_entries": 7509, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396083, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.305293) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10140944 bytes
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.307009) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.3 rd, 134.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(64.6) write-amplify(32.1) OK, records in: 8026, records dropped: 517 output_compression: NoCompression
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.307182) EVENT_LOG_v1 {"time_micros": 1759396083307019, "job": 80, "event": "compaction_finished", "compaction_time_micros": 75212, "compaction_time_cpu_micros": 28687, "output_level": 6, "num_output_files": 1, "total_output_size": 10140944, "num_input_records": 8026, "num_output_records": 7509, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083307383, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396083309721, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.229567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.309791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.309797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.309800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.309802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:08:03.309804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.318 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.321 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:08:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554511929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.431 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.520 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.522 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.648 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.707 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.793 2 DEBUG nova.objects.instance [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 5a11349c-a726-40c7-83f0-95f708b3f5d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.809 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.809 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Ensure instance console log exists: /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.810 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.810 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.810 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.827 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.828 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3439MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.828 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.828 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.896 2 DEBUG nova.policy [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.909 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 086604c0-28d5-41d4-995c-17db322b3ded actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.910 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 5a11349c-a726-40c7-83f0-95f708b3f5d2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.910 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.910 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:08:03 np0005465604 nova_compute[260603]: 2025-10-02 09:08:03.961 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:08:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct  2 05:08:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:08:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650775426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:08:04 np0005465604 nova_compute[260603]: 2025-10-02 09:08:04.372 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:08:04 np0005465604 nova_compute[260603]: 2025-10-02 09:08:04.380 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:08:04 np0005465604 nova_compute[260603]: 2025-10-02 09:08:04.404 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:08:04 np0005465604 nova_compute[260603]: 2025-10-02 09:08:04.434 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:08:04 np0005465604 nova_compute[260603]: 2025-10-02 09:08:04.434 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:04 np0005465604 nova_compute[260603]: 2025-10-02 09:08:04.988 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Successfully created port: da8c962b-f6a3-4056-a774-9f03b36f62d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:08:05 np0005465604 nova_compute[260603]: 2025-10-02 09:08:05.997 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Successfully created port: 95e5f1fa-72f5-4111-a730-1d1fb3203c6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:08:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct  2 05:08:06 np0005465604 nova_compute[260603]: 2025-10-02 09:08:06.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:07 np0005465604 nova_compute[260603]: 2025-10-02 09:08:07.137 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Successfully updated port: da8c962b-f6a3-4056-a774-9f03b36f62d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:08:07 np0005465604 nova_compute[260603]: 2025-10-02 09:08:07.291 2 DEBUG nova.compute.manager [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:07 np0005465604 nova_compute[260603]: 2025-10-02 09:08:07.292 2 DEBUG nova.compute.manager [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing instance network info cache due to event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:08:07 np0005465604 nova_compute[260603]: 2025-10-02 09:08:07.292 2 DEBUG oslo_concurrency.lockutils [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:08:07 np0005465604 nova_compute[260603]: 2025-10-02 09:08:07.292 2 DEBUG oslo_concurrency.lockutils [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:08:07 np0005465604 nova_compute[260603]: 2025-10-02 09:08:07.293 2 DEBUG nova.network.neutron [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing network info cache for port da8c962b-f6a3-4056-a774-9f03b36f62d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:08:07 np0005465604 nova_compute[260603]: 2025-10-02 09:08:07.504 2 DEBUG nova.network.neutron [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:08:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 144 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 727 KiB/s wr, 26 op/s
Oct  2 05:08:08 np0005465604 nova_compute[260603]: 2025-10-02 09:08:08.065 2 DEBUG nova.network.neutron [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:08:08 np0005465604 nova_compute[260603]: 2025-10-02 09:08:08.080 2 DEBUG oslo_concurrency.lockutils [req-aa1c8f2a-996d-4338-bbf4-a96421e0e1c7 req-86f4c1aa-f1d9-43a0-bd46-9d27c61c535e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:08:08 np0005465604 nova_compute[260603]: 2025-10-02 09:08:08.101 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Successfully updated port: 95e5f1fa-72f5-4111-a730-1d1fb3203c6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:08:08 np0005465604 nova_compute[260603]: 2025-10-02 09:08:08.161 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:08:08 np0005465604 nova_compute[260603]: 2025-10-02 09:08:08.162 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:08:08 np0005465604 nova_compute[260603]: 2025-10-02 09:08:08.162 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:08:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:08:08 np0005465604 nova_compute[260603]: 2025-10-02 09:08:08.306 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:08:08 np0005465604 nova_compute[260603]: 2025-10-02 09:08:08.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:09 np0005465604 nova_compute[260603]: 2025-10-02 09:08:09.391 2 DEBUG nova.compute.manager [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-changed-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:09 np0005465604 nova_compute[260603]: 2025-10-02 09:08:09.391 2 DEBUG nova.compute.manager [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing instance network info cache due to event network-changed-95e5f1fa-72f5-4111-a730-1d1fb3203c6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:08:09 np0005465604 nova_compute[260603]: 2025-10-02 09:08:09.392 2 DEBUG oslo_concurrency.lockutils [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:08:09 np0005465604 nova_compute[260603]: 2025-10-02 09:08:09.428 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:08:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.071 2 DEBUG nova.network.neutron [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.095 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.096 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance network_info: |[{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.096 2 DEBUG oslo_concurrency.lockutils [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.097 2 DEBUG nova.network.neutron [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing network info cache for port 95e5f1fa-72f5-4111-a730-1d1fb3203c6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.099 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Start _get_guest_xml network_info=[{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.103 2 WARNING nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.108 2 DEBUG nova.virt.libvirt.host [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.108 2 DEBUG nova.virt.libvirt.host [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.113 2 DEBUG nova.virt.libvirt.host [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.114 2 DEBUG nova.virt.libvirt.host [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.114 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.115 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.115 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.115 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.115 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.116 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.116 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.116 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.116 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.116 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.117 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.117 2 DEBUG nova.virt.hardware [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.119 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:08:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:08:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4241435548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.550 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.570 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.573 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:08:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:08:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1286380769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.994 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.995 2 DEBUG nova.virt.libvirt.vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:08:03Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.996 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.996 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.997 2 DEBUG nova.virt.libvirt.vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:08:03Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.997 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.998 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:08:10 np0005465604 nova_compute[260603]: 2025-10-02 09:08:10.999 2 DEBUG nova.objects.instance [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a11349c-a726-40c7-83f0-95f708b3f5d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.016 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  <uuid>5a11349c-a726-40c7-83f0-95f708b3f5d2</uuid>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  <name>instance-0000008c</name>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-539987703</nova:name>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:08:10</nova:creationTime>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <nova:port uuid="da8c962b-f6a3-4056-a774-9f03b36f62d5">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <nova:port uuid="95e5f1fa-72f5-4111-a730-1d1fb3203c6f">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2e:fd19" ipVersion="6"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <entry name="serial">5a11349c-a726-40c7-83f0-95f708b3f5d2</entry>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <entry name="uuid">5a11349c-a726-40c7-83f0-95f708b3f5d2</entry>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/5a11349c-a726-40c7-83f0-95f708b3f5d2_disk">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:2d:04:b3"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <target dev="tapda8c962b-f6"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:2e:fd:19"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <target dev="tap95e5f1fa-72"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/console.log" append="off"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:08:11 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:08:11 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:08:11 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:08:11 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.018 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Preparing to wait for external event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.019 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.019 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.019 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.019 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Preparing to wait for external event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.020 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.020 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.020 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.021 2 DEBUG nova.virt.libvirt.vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:08:03Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.021 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.022 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.022 2 DEBUG os_vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.024 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda8c962b-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.027 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda8c962b-f6, col_values=(('external_ids', {'iface-id': 'da8c962b-f6a3-4056-a774-9f03b36f62d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:04:b3', 'vm-uuid': '5a11349c-a726-40c7-83f0-95f708b3f5d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 NetworkManager[45129]: <info>  [1759396091.0303] manager: (tapda8c962b-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.038 2 INFO os_vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6')#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.039 2 DEBUG nova.virt.libvirt.vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:08:03Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.039 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.040 2 DEBUG nova.network.os_vif_util [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.040 2 DEBUG os_vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.040 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.041 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95e5f1fa-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95e5f1fa-72, col_values=(('external_ids', {'iface-id': '95e5f1fa-72f5-4111-a730-1d1fb3203c6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:fd:19', 'vm-uuid': '5a11349c-a726-40c7-83f0-95f708b3f5d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 NetworkManager[45129]: <info>  [1759396091.0456] manager: (tap95e5f1fa-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.053 2 INFO os_vif [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72')#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.105 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.105 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.105 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:2d:04:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.105 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:2e:fd:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.106 2 INFO nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Using config drive#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.123 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.128 2 DEBUG nova.network.neutron [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updated VIF entry in instance network info cache for port 95e5f1fa-72f5-4111-a730-1d1fb3203c6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.128 2 DEBUG nova.network.neutron [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.152 2 DEBUG oslo_concurrency.lockutils [req-87af3532-24ff-482a-a5d8-1b0803b00160 req-5f3a0f30-478a-4444-86e1-c562011e3c9a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.527 2 INFO nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Creating config drive at /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.532 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5w4hr_5y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.674 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5w4hr_5y" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.700 2 DEBUG nova.storage.rbd_utils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.703 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.856 2 DEBUG oslo_concurrency.processutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config 5a11349c-a726-40c7-83f0-95f708b3f5d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.857 2 INFO nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Deleting local config drive /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2/disk.config because it was imported into RBD.#033[00m
Oct  2 05:08:11 np0005465604 NetworkManager[45129]: <info>  [1759396091.9008] manager: (tapda8c962b-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/616)
Oct  2 05:08:11 np0005465604 kernel: tapda8c962b-f6: entered promiscuous mode
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:11Z|01514|binding|INFO|Claiming lport da8c962b-f6a3-4056-a774-9f03b36f62d5 for this chassis.
Oct  2 05:08:11 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:11Z|01515|binding|INFO|da8c962b-f6a3-4056-a774-9f03b36f62d5: Claiming fa:16:3e:2d:04:b3 10.100.0.9
Oct  2 05:08:11 np0005465604 systemd-udevd[415010]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:08:11 np0005465604 NetworkManager[45129]: <info>  [1759396091.9440] manager: (tap95e5f1fa-72): new Tun device (/org/freedesktop/NetworkManager/Devices/617)
Oct  2 05:08:11 np0005465604 systemd-udevd[415014]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:08:11 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:11Z|01516|binding|INFO|Setting lport da8c962b-f6a3-4056-a774-9f03b36f62d5 ovn-installed in OVS
Oct  2 05:08:11 np0005465604 kernel: tap95e5f1fa-72: entered promiscuous mode
Oct  2 05:08:11 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:11Z|01517|binding|INFO|Setting lport da8c962b-f6a3-4056-a774-9f03b36f62d5 up in Southbound
Oct  2 05:08:11 np0005465604 NetworkManager[45129]: <info>  [1759396091.9528] device (tapda8c962b-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 NetworkManager[45129]: <info>  [1759396091.9552] device (tapda8c962b-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:08:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:11.955 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:04:b3 10.100.0.9'], port_security=['fa:16:3e:2d:04:b3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5a11349c-a726-40c7-83f0-95f708b3f5d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8531ee7b-e9fa-4aeb-a901-a54a8597544d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=da8c962b-f6a3-4056-a774-9f03b36f62d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:08:11 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:11Z|01518|if_status|INFO|Not updating pb chassis for 95e5f1fa-72f5-4111-a730-1d1fb3203c6f now as sb is readonly
Oct  2 05:08:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:11.956 162357 INFO neutron.agent.ovn.metadata.agent [-] Port da8c962b-f6a3-4056-a774-9f03b36f62d5 in datapath 0cd32ebe-6aa8-4400-8c00-a3546d677f2c bound to our chassis#033[00m
Oct  2 05:08:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:11.958 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cd32ebe-6aa8-4400-8c00-a3546d677f2c#033[00m
Oct  2 05:08:11 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:11Z|01519|binding|INFO|Claiming lport 95e5f1fa-72f5-4111-a730-1d1fb3203c6f for this chassis.
Oct  2 05:08:11 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:11Z|01520|binding|INFO|95e5f1fa-72f5-4111-a730-1d1fb3203c6f: Claiming fa:16:3e:2e:fd:19 2001:db8::f816:3eff:fe2e:fd19
Oct  2 05:08:11 np0005465604 NetworkManager[45129]: <info>  [1759396091.9640] device (tap95e5f1fa-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:08:11 np0005465604 NetworkManager[45129]: <info>  [1759396091.9649] device (tap95e5f1fa-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:08:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:11.968 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:fd:19 2001:db8::f816:3eff:fe2e:fd19'], port_security=['fa:16:3e:2e:fd:19 2001:db8::f816:3eff:fe2e:fd19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2e:fd19/64', 'neutron:device_id': '5a11349c-a726-40c7-83f0-95f708b3f5d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d94def-f0a5-4beb-85d6-bc3ad333488d, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=95e5f1fa-72f5-4111-a730-1d1fb3203c6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:08:11 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:11Z|01521|binding|INFO|Setting lport 95e5f1fa-72f5-4111-a730-1d1fb3203c6f ovn-installed in OVS
Oct  2 05:08:11 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:11Z|01522|binding|INFO|Setting lport 95e5f1fa-72f5-4111-a730-1d1fb3203c6f up in Southbound
Oct  2 05:08:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:11.974 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[15417d41-de67-4814-a792-7179e0af2a91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:11 np0005465604 nova_compute[260603]: 2025-10-02 09:08:11.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:11 np0005465604 systemd-machined[214636]: New machine qemu-174-instance-0000008c.
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.004 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a876ad58-6df4-4dbd-aa97-2acf874e119b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:12 np0005465604 systemd[1]: Started Virtual Machine qemu-174-instance-0000008c.
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.007 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[82f59c5d-b27d-43f9-b408-72c402bcf64c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.039 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[31425227-0edb-47dd-824b-6d0d7f2e2d6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.055 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a341fe75-61d8-490a-aa50-1a1668d6fe61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cd32ebe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:64:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693319, 'reachable_time': 33516, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415031, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.069 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f62868-38cf-4b55-8114-68a99a2d89d7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0cd32ebe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693334, 'tstamp': 693334}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415033, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0cd32ebe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693338, 'tstamp': 693338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415033, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.070 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cd32ebe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.073 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cd32ebe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.073 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.073 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cd32ebe-60, col_values=(('external_ids', {'iface-id': '251c2ade-6e56-457d-a6d2-c79238c2f10d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.073 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.074 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 95e5f1fa-72f5-4111-a730-1d1fb3203c6f in datapath 3b4c5c4f-7410-4ce4-9e83-46e3156b929c unbound from our chassis#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.075 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b4c5c4f-7410-4ce4-9e83-46e3156b929c#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.091 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1198bb9c-e929-4663-964c-c9de54936432]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.126 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d4beea-37c8-4f38-af8f-8d5a766682cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.129 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e707ab0f-0d60-4d16-83b1-9b06eb737509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.167 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4a9267-e5c4-41db-b980-6796b8a402d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.187 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[415fcba8-d685-4942-b4c5-7e242168da80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b4c5c4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:e1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1802, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1802, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693422, 'reachable_time': 34478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415039, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.205 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a99e6a-fc37-4ab8-a719-8e5f7c1515de]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3b4c5c4f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693436, 'tstamp': 693436}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415040, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.207 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b4c5c4f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.210 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b4c5c4f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.210 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.210 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b4c5c4f-70, col_values=(('external_ids', {'iface-id': 'f16ced3a-20be-47b1-aeb4-8904dff1366f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:12 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:12.211 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.641 2 DEBUG nova.compute.manager [req-520a57f4-c9b0-4533-84a3-2341cf73d3d1 req-d5c5889e-b806-4a5c-817f-c0a418105709 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.642 2 DEBUG oslo_concurrency.lockutils [req-520a57f4-c9b0-4533-84a3-2341cf73d3d1 req-d5c5889e-b806-4a5c-817f-c0a418105709 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.642 2 DEBUG oslo_concurrency.lockutils [req-520a57f4-c9b0-4533-84a3-2341cf73d3d1 req-d5c5889e-b806-4a5c-817f-c0a418105709 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.642 2 DEBUG oslo_concurrency.lockutils [req-520a57f4-c9b0-4533-84a3-2341cf73d3d1 req-d5c5889e-b806-4a5c-817f-c0a418105709 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.642 2 DEBUG nova.compute.manager [req-520a57f4-c9b0-4533-84a3-2341cf73d3d1 req-d5c5889e-b806-4a5c-817f-c0a418105709 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Processing event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.717 2 DEBUG nova.compute.manager [req-d7bd375c-f5b8-4745-a850-fe8aa3374c4c req-3a21d8b0-4eed-4a09-b999-dab2a49e86b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.718 2 DEBUG oslo_concurrency.lockutils [req-d7bd375c-f5b8-4745-a850-fe8aa3374c4c req-3a21d8b0-4eed-4a09-b999-dab2a49e86b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.718 2 DEBUG oslo_concurrency.lockutils [req-d7bd375c-f5b8-4745-a850-fe8aa3374c4c req-3a21d8b0-4eed-4a09-b999-dab2a49e86b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.718 2 DEBUG oslo_concurrency.lockutils [req-d7bd375c-f5b8-4745-a850-fe8aa3374c4c req-3a21d8b0-4eed-4a09-b999-dab2a49e86b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.719 2 DEBUG nova.compute.manager [req-d7bd375c-f5b8-4745-a850-fe8aa3374c4c req-3a21d8b0-4eed-4a09-b999-dab2a49e86b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Processing event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.904 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396092.904153, 5a11349c-a726-40c7-83f0-95f708b3f5d2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.905 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] VM Started (Lifecycle Event)#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.907 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.909 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.913 2 INFO nova.virt.libvirt.driver [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance spawned successfully.#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.913 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.933 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.936 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.946 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.947 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.947 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.948 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.948 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.948 2 DEBUG nova.virt.libvirt.driver [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.960 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.960 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396092.9043505, 5a11349c-a726-40c7-83f0-95f708b3f5d2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.960 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.981 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.984 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396092.9091687, 5a11349c-a726-40c7-83f0-95f708b3f5d2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:08:12 np0005465604 nova_compute[260603]: 2025-10-02 09:08:12.984 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:08:13 np0005465604 nova_compute[260603]: 2025-10-02 09:08:13.008 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:08:13 np0005465604 nova_compute[260603]: 2025-10-02 09:08:13.012 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:08:13 np0005465604 nova_compute[260603]: 2025-10-02 09:08:13.022 2 INFO nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Took 9.87 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:08:13 np0005465604 nova_compute[260603]: 2025-10-02 09:08:13.023 2 DEBUG nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:08:13 np0005465604 nova_compute[260603]: 2025-10-02 09:08:13.033 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:08:13 np0005465604 nova_compute[260603]: 2025-10-02 09:08:13.081 2 INFO nova.compute.manager [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Took 10.86 seconds to build instance.#033[00m
Oct  2 05:08:13 np0005465604 nova_compute[260603]: 2025-10-02 09:08:13.119 2 DEBUG oslo_concurrency.lockutils [None req-24a86cb8-9a71-43ac-91bb-b8d1f204584e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:08:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.733 2 DEBUG nova.compute.manager [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.733 2 DEBUG oslo_concurrency.lockutils [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.734 2 DEBUG oslo_concurrency.lockutils [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.734 2 DEBUG oslo_concurrency.lockutils [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.734 2 DEBUG nova.compute.manager [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.734 2 WARNING nova.compute.manager [req-70137433-5b0b-4db7-98f4-adc27ade3b16 req-c96e2544-5b66-41eb-aa26-ebc75d0735db 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received unexpected event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.795 2 DEBUG nova.compute.manager [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.796 2 DEBUG oslo_concurrency.lockutils [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.796 2 DEBUG oslo_concurrency.lockutils [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.797 2 DEBUG oslo_concurrency.lockutils [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.797 2 DEBUG nova.compute.manager [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:08:14 np0005465604 nova_compute[260603]: 2025-10-02 09:08:14.797 2 WARNING nova.compute.manager [req-ba7ee148-625a-4dc4-9bc5-3142c955da11 req-2b88e0c3-c49a-482d-8931-37e89218c794 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received unexpected event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f for instance with vm_state active and task_state None.#033[00m
Oct  2 05:08:15 np0005465604 podman[415085]: 2025-10-02 09:08:14.998639213 +0000 UTC m=+0.061702918 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:08:15 np0005465604 podman[415084]: 2025-10-02 09:08:15.047912145 +0000 UTC m=+0.113209809 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 05:08:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  2 05:08:16 np0005465604 nova_compute[260603]: 2025-10-02 09:08:16.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:16 np0005465604 nova_compute[260603]: 2025-10-02 09:08:16.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 826 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct  2 05:08:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:08:19 np0005465604 podman[415128]: 2025-10-02 09:08:19.006194331 +0000 UTC m=+0.065503396 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct  2 05:08:19 np0005465604 podman[415129]: 2025-10-02 09:08:19.026271365 +0000 UTC m=+0.085883820 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  2 05:08:19 np0005465604 nova_compute[260603]: 2025-10-02 09:08:19.951 2 DEBUG nova.compute.manager [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:19 np0005465604 nova_compute[260603]: 2025-10-02 09:08:19.952 2 DEBUG nova.compute.manager [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing instance network info cache due to event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:08:19 np0005465604 nova_compute[260603]: 2025-10-02 09:08:19.953 2 DEBUG oslo_concurrency.lockutils [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:08:19 np0005465604 nova_compute[260603]: 2025-10-02 09:08:19.953 2 DEBUG oslo_concurrency.lockutils [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:08:19 np0005465604 nova_compute[260603]: 2025-10-02 09:08:19.953 2 DEBUG nova.network.neutron [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing network info cache for port da8c962b-f6a3-4056-a774-9f03b36f62d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:08:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 75 op/s
Oct  2 05:08:21 np0005465604 nova_compute[260603]: 2025-10-02 09:08:21.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:21 np0005465604 nova_compute[260603]: 2025-10-02 09:08:21.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:08:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0777dd04-7237-40e8-b590-1c6960028b07 does not exist
Oct  2 05:08:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e20fe698-86e7-44bd-8d3b-136ad302a918 does not exist
Oct  2 05:08:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 46974bf2-9ffb-464d-926a-a239363d3523 does not exist
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:08:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:08:21 np0005465604 podman[415439]: 2025-10-02 09:08:21.971181081 +0000 UTC m=+0.056520657 container create 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:08:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct  2 05:08:22 np0005465604 podman[415439]: 2025-10-02 09:08:21.93832664 +0000 UTC m=+0.023666266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:08:22 np0005465604 systemd[1]: Started libpod-conmon-88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16.scope.
Oct  2 05:08:22 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:08:22 np0005465604 podman[415439]: 2025-10-02 09:08:22.101115509 +0000 UTC m=+0.186455065 container init 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:08:22 np0005465604 podman[415439]: 2025-10-02 09:08:22.114563656 +0000 UTC m=+0.199903192 container start 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 05:08:22 np0005465604 podman[415439]: 2025-10-02 09:08:22.118012283 +0000 UTC m=+0.203351839 container attach 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 05:08:22 np0005465604 wizardly_rhodes[415455]: 167 167
Oct  2 05:08:22 np0005465604 systemd[1]: libpod-88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16.scope: Deactivated successfully.
Oct  2 05:08:22 np0005465604 conmon[415455]: conmon 88211743a6f8fc3f6d72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16.scope/container/memory.events
Oct  2 05:08:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:08:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 41K writes, 164K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s#012Cumulative WAL: 41K writes, 15K syncs, 2.73 writes per sync, written: 0.16 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2386 writes, 9534 keys, 2386 commit groups, 1.0 writes per commit group, ingest: 11.32 MB, 0.02 MB/s#012Interval WAL: 2386 writes, 937 syncs, 2.55 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:08:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:08:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/116952037' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:08:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:08:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/116952037' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:08:22 np0005465604 podman[415460]: 2025-10-02 09:08:22.170260737 +0000 UTC m=+0.033068348 container died 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 05:08:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5d5cf87a7c357a7be41f53ed1e7ca64791d0a68f3e20db315b894ace7b4078d7-merged.mount: Deactivated successfully.
Oct  2 05:08:22 np0005465604 podman[415460]: 2025-10-02 09:08:22.208406333 +0000 UTC m=+0.071213944 container remove 88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rhodes, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 05:08:22 np0005465604 systemd[1]: libpod-conmon-88211743a6f8fc3f6d72c5137b31c1d0d1d9cd1153fea682d7b2e1c65615bb16.scope: Deactivated successfully.
Oct  2 05:08:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:08:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:08:22 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:08:22 np0005465604 podman[415482]: 2025-10-02 09:08:22.521563545 +0000 UTC m=+0.077088637 container create 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 05:08:22 np0005465604 systemd[1]: Started libpod-conmon-92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3.scope.
Oct  2 05:08:22 np0005465604 podman[415482]: 2025-10-02 09:08:22.489728985 +0000 UTC m=+0.045254147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:08:22 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:08:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:22 np0005465604 podman[415482]: 2025-10-02 09:08:22.621284103 +0000 UTC m=+0.176809165 container init 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 05:08:22 np0005465604 podman[415482]: 2025-10-02 09:08:22.628815887 +0000 UTC m=+0.184340949 container start 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 05:08:22 np0005465604 podman[415482]: 2025-10-02 09:08:22.631413528 +0000 UTC m=+0.186938590 container attach 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 05:08:22 np0005465604 nova_compute[260603]: 2025-10-02 09:08:22.669 2 DEBUG nova.network.neutron [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updated VIF entry in instance network info cache for port da8c962b-f6a3-4056-a774-9f03b36f62d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:08:22 np0005465604 nova_compute[260603]: 2025-10-02 09:08:22.670 2 DEBUG nova.network.neutron [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:08:22 np0005465604 nova_compute[260603]: 2025-10-02 09:08:22.692 2 DEBUG oslo_concurrency.lockutils [req-b377bd6b-3b0e-48d6-ab41-871d842505f8 req-dbc4e4c2-9b5f-47cb-82d0-aedaf480d7a4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:08:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:08:23 np0005465604 amazing_mclaren[415499]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:08:23 np0005465604 amazing_mclaren[415499]: --> relative data size: 1.0
Oct  2 05:08:23 np0005465604 amazing_mclaren[415499]: --> All data devices are unavailable
Oct  2 05:08:23 np0005465604 systemd[1]: libpod-92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3.scope: Deactivated successfully.
Oct  2 05:08:23 np0005465604 systemd[1]: libpod-92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3.scope: Consumed 1.047s CPU time.
Oct  2 05:08:23 np0005465604 podman[415482]: 2025-10-02 09:08:23.761263289 +0000 UTC m=+1.316788351 container died 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:08:23 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1b4fc053baf88006f82b6849155a4b29e8263f182a917dc1b88928cc82d427ee-merged.mount: Deactivated successfully.
Oct  2 05:08:23 np0005465604 podman[415482]: 2025-10-02 09:08:23.826461745 +0000 UTC m=+1.381986797 container remove 92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct  2 05:08:23 np0005465604 systemd[1]: libpod-conmon-92e6362545f53ee23a3ac65360006e177fc4bf7869fe330e2b5b7f4e04e87ea3.scope: Deactivated successfully.
Oct  2 05:08:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 79 op/s
Oct  2 05:08:24 np0005465604 podman[415683]: 2025-10-02 09:08:24.473930376 +0000 UTC m=+0.043203184 container create 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 05:08:24 np0005465604 systemd[1]: Started libpod-conmon-4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2.scope.
Oct  2 05:08:24 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:08:24 np0005465604 podman[415683]: 2025-10-02 09:08:24.533898759 +0000 UTC m=+0.103171557 container init 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:08:24 np0005465604 podman[415683]: 2025-10-02 09:08:24.540507925 +0000 UTC m=+0.109780723 container start 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:08:24 np0005465604 elastic_hugle[415699]: 167 167
Oct  2 05:08:24 np0005465604 systemd[1]: libpod-4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2.scope: Deactivated successfully.
Oct  2 05:08:24 np0005465604 podman[415683]: 2025-10-02 09:08:24.547165441 +0000 UTC m=+0.116438239 container attach 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:08:24 np0005465604 podman[415683]: 2025-10-02 09:08:24.547537553 +0000 UTC m=+0.116810361 container died 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 05:08:24 np0005465604 podman[415683]: 2025-10-02 09:08:24.454371927 +0000 UTC m=+0.023644745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:08:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4e8cd68ee7f48e7405a9c5b2f7287c9f6490771039d7010ef4ac7df5762bf0e0-merged.mount: Deactivated successfully.
Oct  2 05:08:24 np0005465604 podman[415683]: 2025-10-02 09:08:24.584525452 +0000 UTC m=+0.153798250 container remove 4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:08:24 np0005465604 systemd[1]: libpod-conmon-4a40509317a5f485e3b739cee7c2768119d7fb856db641db9500aa38eff0a6c2.scope: Deactivated successfully.
Oct  2 05:08:24 np0005465604 podman[415721]: 2025-10-02 09:08:24.784330212 +0000 UTC m=+0.047694553 container create 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:08:24 np0005465604 systemd[1]: Started libpod-conmon-472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb.scope.
Oct  2 05:08:24 np0005465604 podman[415721]: 2025-10-02 09:08:24.761701798 +0000 UTC m=+0.025066139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:08:24 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:08:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b88b07abb261d572d01d6a6604bfdb0aa78142a49c61a6b37e70ad26f1cd5a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b88b07abb261d572d01d6a6604bfdb0aa78142a49c61a6b37e70ad26f1cd5a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b88b07abb261d572d01d6a6604bfdb0aa78142a49c61a6b37e70ad26f1cd5a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b88b07abb261d572d01d6a6604bfdb0aa78142a49c61a6b37e70ad26f1cd5a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:24 np0005465604 podman[415721]: 2025-10-02 09:08:24.883829393 +0000 UTC m=+0.147193684 container init 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:08:24 np0005465604 podman[415721]: 2025-10-02 09:08:24.891942826 +0000 UTC m=+0.155307137 container start 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 05:08:24 np0005465604 podman[415721]: 2025-10-02 09:08:24.895420564 +0000 UTC m=+0.158784885 container attach 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 05:08:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:25Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:04:b3 10.100.0.9
Oct  2 05:08:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:25Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:04:b3 10.100.0.9
Oct  2 05:08:25 np0005465604 funny_perlman[415737]: {
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:    "0": [
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:        {
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "devices": [
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "/dev/loop3"
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            ],
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_name": "ceph_lv0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_size": "21470642176",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "name": "ceph_lv0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "tags": {
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.cluster_name": "ceph",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.crush_device_class": "",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.encrypted": "0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.osd_id": "0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.type": "block",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.vdo": "0"
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            },
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "type": "block",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "vg_name": "ceph_vg0"
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:        }
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:    ],
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:    "1": [
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:        {
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "devices": [
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "/dev/loop4"
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            ],
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_name": "ceph_lv1",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_size": "21470642176",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "name": "ceph_lv1",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "tags": {
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.cluster_name": "ceph",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.crush_device_class": "",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.encrypted": "0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.osd_id": "1",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.type": "block",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.vdo": "0"
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            },
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "type": "block",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "vg_name": "ceph_vg1"
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:        }
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:    ],
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:    "2": [
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:        {
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "devices": [
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "/dev/loop5"
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            ],
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_name": "ceph_lv2",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_size": "21470642176",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "name": "ceph_lv2",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "tags": {
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.cluster_name": "ceph",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.crush_device_class": "",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.encrypted": "0",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.osd_id": "2",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.type": "block",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:                "ceph.vdo": "0"
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            },
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "type": "block",
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:            "vg_name": "ceph_vg2"
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:        }
Oct  2 05:08:25 np0005465604 funny_perlman[415737]:    ]
Oct  2 05:08:25 np0005465604 funny_perlman[415737]: }
Oct  2 05:08:25 np0005465604 systemd[1]: libpod-472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb.scope: Deactivated successfully.
Oct  2 05:08:25 np0005465604 podman[415721]: 2025-10-02 09:08:25.673774742 +0000 UTC m=+0.937139073 container died 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:08:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8b88b07abb261d572d01d6a6604bfdb0aa78142a49c61a6b37e70ad26f1cd5a7-merged.mount: Deactivated successfully.
Oct  2 05:08:25 np0005465604 podman[415721]: 2025-10-02 09:08:25.774884154 +0000 UTC m=+1.038248455 container remove 472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_perlman, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 05:08:25 np0005465604 systemd[1]: libpod-conmon-472edebff1e3191b649bf049f38725b7f1441d4672214f670ec0cd32523cf8bb.scope: Deactivated successfully.
Oct  2 05:08:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 74 op/s
Oct  2 05:08:26 np0005465604 nova_compute[260603]: 2025-10-02 09:08:26.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:26 np0005465604 nova_compute[260603]: 2025-10-02 09:08:26.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:26 np0005465604 podman[415899]: 2025-10-02 09:08:26.398684469 +0000 UTC m=+0.048585021 container create cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:08:26 np0005465604 systemd[1]: Started libpod-conmon-cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3.scope.
Oct  2 05:08:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:08:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 44K writes, 172K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 44K writes, 16K syncs, 2.72 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2934 writes, 11K keys, 2934 commit groups, 1.0 writes per commit group, ingest: 14.99 MB, 0.02 MB/s#012Interval WAL: 2934 writes, 1145 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:08:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:08:26 np0005465604 podman[415899]: 2025-10-02 09:08:26.377023565 +0000 UTC m=+0.026924147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:08:26 np0005465604 podman[415899]: 2025-10-02 09:08:26.480228953 +0000 UTC m=+0.130129535 container init cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:08:26 np0005465604 podman[415899]: 2025-10-02 09:08:26.487379285 +0000 UTC m=+0.137279817 container start cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:08:26 np0005465604 podman[415899]: 2025-10-02 09:08:26.490957126 +0000 UTC m=+0.140857678 container attach cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 05:08:26 np0005465604 lucid_haibt[415915]: 167 167
Oct  2 05:08:26 np0005465604 systemd[1]: libpod-cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3.scope: Deactivated successfully.
Oct  2 05:08:26 np0005465604 podman[415899]: 2025-10-02 09:08:26.493702221 +0000 UTC m=+0.143602753 container died cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 05:08:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay-68c479875b13517180f87a5d6e6aa19fc157f812ddc4369c7d40f1b57932911c-merged.mount: Deactivated successfully.
Oct  2 05:08:26 np0005465604 podman[415899]: 2025-10-02 09:08:26.54254958 +0000 UTC m=+0.192450122 container remove cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:08:26 np0005465604 systemd[1]: libpod-conmon-cba840af9df5bb563d49fa717ab6835eb0f33d8fe4d2ea5e81e901ee6501ebf3.scope: Deactivated successfully.
Oct  2 05:08:26 np0005465604 podman[415941]: 2025-10-02 09:08:26.767914063 +0000 UTC m=+0.049474238 container create 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:08:26 np0005465604 systemd[1]: Started libpod-conmon-536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69.scope.
Oct  2 05:08:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:08:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd5757af9281af490ac9828d5ba7b35ad0f3d4d0e941ebe8ce460e34315b45a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd5757af9281af490ac9828d5ba7b35ad0f3d4d0e941ebe8ce460e34315b45a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd5757af9281af490ac9828d5ba7b35ad0f3d4d0e941ebe8ce460e34315b45a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:26 np0005465604 podman[415941]: 2025-10-02 09:08:26.752879336 +0000 UTC m=+0.034439531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:08:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdd5757af9281af490ac9828d5ba7b35ad0f3d4d0e941ebe8ce460e34315b45a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:08:26 np0005465604 podman[415941]: 2025-10-02 09:08:26.855370031 +0000 UTC m=+0.136930226 container init 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:08:26 np0005465604 podman[415941]: 2025-10-02 09:08:26.861392678 +0000 UTC m=+0.142952853 container start 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 05:08:26 np0005465604 podman[415941]: 2025-10-02 09:08:26.864567106 +0000 UTC m=+0.146127301 container attach 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]: {
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "osd_id": 2,
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "type": "bluestore"
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:    },
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "osd_id": 1,
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "type": "bluestore"
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:    },
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "osd_id": 0,
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:        "type": "bluestore"
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]:    }
Oct  2 05:08:27 np0005465604 admiring_chandrasekhar[415958]: }
Oct  2 05:08:27 np0005465604 systemd[1]: libpod-536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69.scope: Deactivated successfully.
Oct  2 05:08:27 np0005465604 systemd[1]: libpod-536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69.scope: Consumed 1.065s CPU time.
Oct  2 05:08:27 np0005465604 podman[415941]: 2025-10-02 09:08:27.923039499 +0000 UTC m=+1.204599694 container died 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:08:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fdd5757af9281af490ac9828d5ba7b35ad0f3d4d0e941ebe8ce460e34315b45a-merged.mount: Deactivated successfully.
Oct  2 05:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:08:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:08:27 np0005465604 podman[415941]: 2025-10-02 09:08:27.978182213 +0000 UTC m=+1.259742388 container remove 536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  2 05:08:27 np0005465604 systemd[1]: libpod-conmon-536a6a38f143476f331bfce8d4b0d0ce23e61db8a2130f357171cfb5b4284c69.scope: Deactivated successfully.
Oct  2 05:08:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:08:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:08:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct  2 05:08:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e0ac35d5-47b1-40b9-bdce-61bd31a6cbbc does not exist
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b7868b03-5672-4794-b38d-d5f645187536 does not exist
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:08:28
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'images', 'volumes']
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:08:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:08:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:08:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:08:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:08:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct  2 05:08:31 np0005465604 nova_compute[260603]: 2025-10-02 09:08:31.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:31 np0005465604 nova_compute[260603]: 2025-10-02 09:08:31.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:08:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 34K writes, 133K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 34K writes, 12K syncs, 2.71 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2358 writes, 9236 keys, 2358 commit groups, 1.0 writes per commit group, ingest: 9.65 MB, 0.02 MB/s#012Interval WAL: 2358 writes, 983 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:08:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:08:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 05:08:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:08:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 05:08:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:34.846 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:34.846 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:34.847 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct  2 05:08:36 np0005465604 nova_compute[260603]: 2025-10-02 09:08:36.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:36 np0005465604 nova_compute[260603]: 2025-10-02 09:08:36.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct  2 05:08:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:08:38 np0005465604 nova_compute[260603]: 2025-10-02 09:08:38.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:08:38 np0005465604 nova_compute[260603]: 2025-10-02 09:08:38.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:08:38 np0005465604 nova_compute[260603]: 2025-10-02 09:08:38.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:38.881 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:08:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:38.882 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015185461027544442 of space, bias 1.0, pg target 0.4555638308263333 quantized to 32 (current 32)
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:08:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.136 2 DEBUG nova.compute.manager [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.137 2 DEBUG nova.compute.manager [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing instance network info cache due to event network-changed-da8c962b-f6a3-4056-a774-9f03b36f62d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.138 2 DEBUG oslo_concurrency.lockutils [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.138 2 DEBUG oslo_concurrency.lockutils [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.138 2 DEBUG nova.network.neutron [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Refreshing network info cache for port da8c962b-f6a3-4056-a774-9f03b36f62d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.250 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.251 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.251 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.251 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.251 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.252 2 INFO nova.compute.manager [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Terminating instance#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.253 2 DEBUG nova.compute.manager [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:08:39 np0005465604 kernel: tapda8c962b-f6 (unregistering): left promiscuous mode
Oct  2 05:08:39 np0005465604 NetworkManager[45129]: <info>  [1759396119.3068] device (tapda8c962b-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:08:39 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:39Z|01523|binding|INFO|Releasing lport da8c962b-f6a3-4056-a774-9f03b36f62d5 from this chassis (sb_readonly=0)
Oct  2 05:08:39 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:39Z|01524|binding|INFO|Setting lport da8c962b-f6a3-4056-a774-9f03b36f62d5 down in Southbound
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:39Z|01525|binding|INFO|Removing iface tapda8c962b-f6 ovn-installed in OVS
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.327 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:04:b3 10.100.0.9'], port_security=['fa:16:3e:2d:04:b3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5a11349c-a726-40c7-83f0-95f708b3f5d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8531ee7b-e9fa-4aeb-a901-a54a8597544d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=da8c962b-f6a3-4056-a774-9f03b36f62d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.329 162357 INFO neutron.agent.ovn.metadata.agent [-] Port da8c962b-f6a3-4056-a774-9f03b36f62d5 in datapath 0cd32ebe-6aa8-4400-8c00-a3546d677f2c unbound from our chassis#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.330 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cd32ebe-6aa8-4400-8c00-a3546d677f2c#033[00m
Oct  2 05:08:39 np0005465604 kernel: tap95e5f1fa-72 (unregistering): left promiscuous mode
Oct  2 05:08:39 np0005465604 NetworkManager[45129]: <info>  [1759396119.3486] device (tap95e5f1fa-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:08:39 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:39Z|01526|binding|INFO|Releasing lport 95e5f1fa-72f5-4111-a730-1d1fb3203c6f from this chassis (sb_readonly=0)
Oct  2 05:08:39 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:39Z|01527|binding|INFO|Setting lport 95e5f1fa-72f5-4111-a730-1d1fb3203c6f down in Southbound
Oct  2 05:08:39 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:39Z|01528|binding|INFO|Removing iface tap95e5f1fa-72 ovn-installed in OVS
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.362 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[14f592f9-aa6b-4c13-ac7c-36472d701d1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.382 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:fd:19 2001:db8::f816:3eff:fe2e:fd19'], port_security=['fa:16:3e:2e:fd:19 2001:db8::f816:3eff:fe2e:fd19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2e:fd19/64', 'neutron:device_id': '5a11349c-a726-40c7-83f0-95f708b3f5d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d94def-f0a5-4beb-85d6-bc3ad333488d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=95e5f1fa-72f5-4111-a730-1d1fb3203c6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.404 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[391662c0-6893-48d2-8092-7f669b99192b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.409 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd2dc6f-50bb-4af9-a1fa-03e22233c79f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Consumed 12.855s CPU time.
Oct  2 05:08:39 np0005465604 systemd-machined[214636]: Machine qemu-174-instance-0000008c terminated.
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.442 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1363a344-bced-4358-a0c7-1ba82612b1d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.463 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c3678575-8765-4a6f-81de-fabf3fcbbbae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cd32ebe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:64:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693319, 'reachable_time': 33650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416071, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 NetworkManager[45129]: <info>  [1759396119.4888] manager: (tap95e5f1fa-72): new Tun device (/org/freedesktop/NetworkManager/Devices/618)
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.490 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[86b8ed8c-84f1-4b3a-83ac-4e62b0e1cf66]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0cd32ebe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693334, 'tstamp': 693334}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416074, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0cd32ebe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693338, 'tstamp': 693338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416074, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.492 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cd32ebe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.501 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cd32ebe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.501 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.501 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cd32ebe-60, col_values=(('external_ids', {'iface-id': '251c2ade-6e56-457d-a6d2-c79238c2f10d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.502 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.503 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 95e5f1fa-72f5-4111-a730-1d1fb3203c6f in datapath 3b4c5c4f-7410-4ce4-9e83-46e3156b929c unbound from our chassis#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.505 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b4c5c4f-7410-4ce4-9e83-46e3156b929c#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.504 2 INFO nova.virt.libvirt.driver [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Instance destroyed successfully.#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.505 2 DEBUG nova.objects.instance [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 5a11349c-a726-40c7-83f0-95f708b3f5d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.528 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df812588-4f94-49fb-9aac-b19324a4fb94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.529 2 DEBUG nova.virt.libvirt.vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:08:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:08:13Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.530 2 DEBUG nova.network.os_vif_util [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.531 2 DEBUG nova.network.os_vif_util [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.531 2 DEBUG os_vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda8c962b-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.559 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[929aedfa-76c8-4e5b-b7e7-ee7d5f3d0972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.562 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[82c6fe64-c056-4b00-9cdf-ded421abdeef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.588 2 INFO os_vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:04:b3,bridge_name='br-int',has_traffic_filtering=True,id=da8c962b-f6a3-4056-a774-9f03b36f62d5,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda8c962b-f6')#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.589 2 DEBUG nova.virt.libvirt.vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:07:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-539987703',display_name='tempest-TestGettingAddress-server-539987703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-539987703',id=140,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:08:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-pggoj88q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:08:13Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=5a11349c-a726-40c7-83f0-95f708b3f5d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.590 2 DEBUG nova.network.os_vif_util [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.591 2 DEBUG nova.network.os_vif_util [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.591 2 DEBUG os_vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95e5f1fa-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.596 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[6baec88f-d2d5-4b43-9b6f-f639f5c531ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.599 2 INFO os_vif [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:fd:19,bridge_name='br-int',has_traffic_filtering=True,id=95e5f1fa-72f5-4111-a730-1d1fb3203c6f,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f1fa-72')#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.618 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[730ddf14-1a2d-495b-a923-29162d3b57fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b4c5c4f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:e1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2772, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 6, 'rx_bytes': 2772, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693422, 'reachable_time': 38076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416104, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.640 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d40caf7c-b967-44a5-b381-1f2f2d96caaa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3b4c5c4f-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693436, 'tstamp': 693436}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416120, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.643 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b4c5c4f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 nova_compute[260603]: 2025-10-02 09:08:39.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.648 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b4c5c4f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.648 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.649 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b4c5c4f-70, col_values=(('external_ids', {'iface-id': 'f16ced3a-20be-47b1-aeb4-8904dff1366f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:39.649 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:08:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.053 2 INFO nova.virt.libvirt.driver [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Deleting instance files /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2_del#033[00m
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.054 2 INFO nova.virt.libvirt.driver [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Deletion of /var/lib/nova/instances/5a11349c-a726-40c7-83f0-95f708b3f5d2_del complete#033[00m
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.086 2 DEBUG nova.compute.manager [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-unplugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.087 2 DEBUG oslo_concurrency.lockutils [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.088 2 DEBUG oslo_concurrency.lockutils [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.088 2 DEBUG oslo_concurrency.lockutils [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.089 2 DEBUG nova.compute.manager [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-unplugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.090 2 DEBUG nova.compute.manager [req-dcf5af10-e048-485b-ab72-2cbadccf85fb req-b2cc3ab6-cb4c-4257-9fcf-85de0cb19ec8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-unplugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.182 2 INFO nova.compute.manager [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.184 2 DEBUG oslo.service.loopingcall [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.185 2 DEBUG nova.compute.manager [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:08:40 np0005465604 nova_compute[260603]: 2025-10-02 09:08:40.185 2 DEBUG nova.network.neutron [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:08:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:40.884 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.244 2 DEBUG nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-unplugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.244 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-unplugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-unplugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.245 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.246 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.246 2 DEBUG oslo_concurrency.lockutils [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.246 2 DEBUG nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.246 2 WARNING nova.compute.manager [req-e6bd26ad-1c81-4756-9afb-0f24f67e31e7 req-cfb517fd-aa4c-41bb-b915-1f3d1247d725 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received unexpected event network-vif-plugged-da8c962b-f6a3-4056-a774-9f03b36f62d5 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.878 2 DEBUG nova.network.neutron [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updated VIF entry in instance network info cache for port da8c962b-f6a3-4056-a774-9f03b36f62d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.879 2 DEBUG nova.network.neutron [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [{"id": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "address": "fa:16:3e:2d:04:b3", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda8c962b-f6", "ovs_interfaceid": "da8c962b-f6a3-4056-a774-9f03b36f62d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:08:41 np0005465604 nova_compute[260603]: 2025-10-02 09:08:41.922 2 DEBUG oslo_concurrency.lockutils [req-ec7c6544-9c66-4c59-858a-ed0479b0b498 req-3ff859b7-1b8d-4207-82da-dbbd61796268 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-5a11349c-a726-40c7-83f0-95f708b3f5d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:08:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 200 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.190 2 DEBUG nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.190 2 DEBUG oslo_concurrency.lockutils [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.191 2 DEBUG oslo_concurrency.lockutils [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.191 2 DEBUG oslo_concurrency.lockutils [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.191 2 DEBUG nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] No waiting events found dispatching network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.191 2 WARNING nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received unexpected event network-vif-plugged-95e5f1fa-72f5-4111-a730-1d1fb3203c6f for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.192 2 DEBUG nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-deleted-da8c962b-f6a3-4056-a774-9f03b36f62d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.192 2 INFO nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Neutron deleted interface da8c962b-f6a3-4056-a774-9f03b36f62d5; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.192 2 DEBUG nova.network.neutron [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [{"id": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "address": "fa:16:3e:2e:fd:19", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:fd19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f1fa-72", "ovs_interfaceid": "95e5f1fa-72f5-4111-a730-1d1fb3203c6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.218 2 DEBUG nova.compute.manager [req-2126fd23-8adc-49b9-bda9-28364d4d37c2 req-4e233be2-4adb-4ec3-97a5-3105551c0d12 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Detach interface failed, port_id=da8c962b-f6a3-4056-a774-9f03b36f62d5, reason: Instance 5a11349c-a726-40c7-83f0-95f708b3f5d2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.377 2 DEBUG nova.network.neutron [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.410 2 INFO nova.compute.manager [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Took 2.22 seconds to deallocate network for instance.#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.471 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.472 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:42 np0005465604 nova_compute[260603]: 2025-10-02 09:08:42.537 2 DEBUG oslo_concurrency.processutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:08:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:08:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534899484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:08:43 np0005465604 nova_compute[260603]: 2025-10-02 09:08:43.017 2 DEBUG oslo_concurrency.processutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:08:43 np0005465604 nova_compute[260603]: 2025-10-02 09:08:43.025 2 DEBUG nova.compute.provider_tree [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:08:43 np0005465604 nova_compute[260603]: 2025-10-02 09:08:43.062 2 DEBUG nova.scheduler.client.report [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:08:43 np0005465604 nova_compute[260603]: 2025-10-02 09:08:43.096 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:43 np0005465604 nova_compute[260603]: 2025-10-02 09:08:43.148 2 INFO nova.scheduler.client.report [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 5a11349c-a726-40c7-83f0-95f708b3f5d2#033[00m
Oct  2 05:08:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:08:43 np0005465604 nova_compute[260603]: 2025-10-02 09:08:43.246 2 DEBUG oslo_concurrency.lockutils [None req-79a74083-77ba-4ddf-b388-50e112863ad5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "5a11349c-a726-40c7-83f0-95f708b3f5d2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 19 KiB/s wr, 30 op/s
Oct  2 05:08:44 np0005465604 nova_compute[260603]: 2025-10-02 09:08:44.308 2 DEBUG nova.compute.manager [req-fc467e5d-2eb6-4a3a-b27c-adf93d31b9db req-a170d89c-eb7d-477b-9987-8036cdbed94d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Received event network-vif-deleted-95e5f1fa-72f5-4111-a730-1d1fb3203c6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:44 np0005465604 nova_compute[260603]: 2025-10-02 09:08:44.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.902 2 DEBUG nova.compute.manager [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.902 2 DEBUG nova.compute.manager [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing instance network info cache due to event network-changed-1bf113e0-562b-45fb-9b97-aa76d5dac283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.903 2 DEBUG oslo_concurrency.lockutils [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.903 2 DEBUG oslo_concurrency.lockutils [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.903 2 DEBUG nova.network.neutron [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Refreshing network info cache for port 1bf113e0-562b-45fb-9b97-aa76d5dac283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.947 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.948 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.948 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.949 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.949 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.951 2 INFO nova.compute.manager [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Terminating instance#033[00m
Oct  2 05:08:45 np0005465604 nova_compute[260603]: 2025-10-02 09:08:45.952 2 DEBUG nova.compute.manager [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:08:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 7.0 KiB/s wr, 30 op/s
Oct  2 05:08:46 np0005465604 kernel: tap1bf113e0-56 (unregistering): left promiscuous mode
Oct  2 05:08:46 np0005465604 podman[416148]: 2025-10-02 09:08:46.067886451 +0000 UTC m=+0.119373350 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:08:46 np0005465604 NetworkManager[45129]: <info>  [1759396126.0691] device (tap1bf113e0-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:08:46 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:46Z|01529|binding|INFO|Releasing lport 1bf113e0-562b-45fb-9b97-aa76d5dac283 from this chassis (sb_readonly=0)
Oct  2 05:08:46 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:46Z|01530|binding|INFO|Setting lport 1bf113e0-562b-45fb-9b97-aa76d5dac283 down in Southbound
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:46Z|01531|binding|INFO|Removing iface tap1bf113e0-56 ovn-installed in OVS
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.088 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:e1:b8 10.100.0.7'], port_security=['fa:16:3e:fc:e1:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '086604c0-28d5-41d4-995c-17db322b3ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8531ee7b-e9fa-4aeb-a901-a54a8597544d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=1bf113e0-562b-45fb-9b97-aa76d5dac283) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.090 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 1bf113e0-562b-45fb-9b97-aa76d5dac283 in datapath 0cd32ebe-6aa8-4400-8c00-a3546d677f2c unbound from our chassis#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.091 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cd32ebe-6aa8-4400-8c00-a3546d677f2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.093 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0651d79a-5c3e-41f6-ba5d-7d71f70e7d4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.094 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c namespace which is not needed anymore#033[00m
Oct  2 05:08:46 np0005465604 kernel: tap97679643-cd (unregistering): left promiscuous mode
Oct  2 05:08:46 np0005465604 NetworkManager[45129]: <info>  [1759396126.1107] device (tap97679643-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:46Z|01532|binding|INFO|Releasing lport 97679643-cd71-4857-a615-c21d643d15c2 from this chassis (sb_readonly=0)
Oct  2 05:08:46 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:46Z|01533|binding|INFO|Setting lport 97679643-cd71-4857-a615-c21d643d15c2 down in Southbound
Oct  2 05:08:46 np0005465604 ovn_controller[152344]: 2025-10-02T09:08:46Z|01534|binding|INFO|Removing iface tap97679643-cd ovn-installed in OVS
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.130 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:b1:36 2001:db8::f816:3eff:fe9d:b136'], port_security=['fa:16:3e:9d:b1:36 2001:db8::f816:3eff:fe9d:b136'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe9d:b136/64', 'neutron:device_id': '086604c0-28d5-41d4-995c-17db322b3ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9015763-3b6e-4935-9c3a-d25c48006f88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d94def-f0a5-4beb-85d6-bc3ad333488d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=97679643-cd71-4857-a615-c21d643d15c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:08:46 np0005465604 podman[416147]: 2025-10-02 09:08:46.13700966 +0000 UTC m=+0.187032483 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Consumed 15.089s CPU time.
Oct  2 05:08:46 np0005465604 systemd-machined[214636]: Machine qemu-173-instance-0000008b terminated.
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [NOTICE]   (414465) : haproxy version is 2.8.14-c23fe91
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [NOTICE]   (414465) : path to executable is /usr/sbin/haproxy
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [WARNING]  (414465) : Exiting Master process...
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [WARNING]  (414465) : Exiting Master process...
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [ALERT]    (414465) : Current worker (414467) exited with code 143 (Terminated)
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c[414461]: [WARNING]  (414465) : All workers exited. Exiting... (0)
Oct  2 05:08:46 np0005465604 systemd[1]: libpod-fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df.scope: Deactivated successfully.
Oct  2 05:08:46 np0005465604 podman[416222]: 2025-10-02 09:08:46.27314906 +0000 UTC m=+0.061685997 container died fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 05:08:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df-userdata-shm.mount: Deactivated successfully.
Oct  2 05:08:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-df163648e30c438ee7956b6dd528661349b15b516a55e59776780490c120fbcc-merged.mount: Deactivated successfully.
Oct  2 05:08:46 np0005465604 podman[416222]: 2025-10-02 09:08:46.331169354 +0000 UTC m=+0.119706261 container cleanup fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:08:46 np0005465604 systemd[1]: libpod-conmon-fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df.scope: Deactivated successfully.
Oct  2 05:08:46 np0005465604 podman[416250]: 2025-10-02 09:08:46.404214473 +0000 UTC m=+0.048900200 container remove fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.416 2 INFO nova.virt.libvirt.driver [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Instance destroyed successfully.#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.417 2 DEBUG nova.objects.instance [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 086604c0-28d5-41d4-995c-17db322b3ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.417 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[630b7cbd-8e03-4944-81d4-6919b9462638]: (4, ('Thu Oct  2 09:08:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c (fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df)\nfc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df\nThu Oct  2 09:08:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c (fc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df)\nfc654c3182d993b94f729b792461a6f6aeed8936e6f8a2bb0e01cf93fb1f38df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.420 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09b1e2d3-f526-41a7-96d0-f0fb66213b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.421 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cd32ebe-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 kernel: tap0cd32ebe-60: left promiscuous mode
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.431 2 DEBUG nova.virt.libvirt.vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:07:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:07:35Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.432 2 DEBUG nova.network.os_vif_util [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.433 2 DEBUG nova.network.os_vif_util [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.433 2 DEBUG os_vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bf113e0-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.451 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09d66405-bc55-4d7f-a77c-c102b0e9be6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.457 2 INFO os_vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:e1:b8,bridge_name='br-int',has_traffic_filtering=True,id=1bf113e0-562b-45fb-9b97-aa76d5dac283,network=Network(0cd32ebe-6aa8-4400-8c00-a3546d677f2c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf113e0-56')#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.459 2 DEBUG nova.virt.libvirt.vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1892205724',display_name='tempest-TestGettingAddress-server-1892205724',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1892205724',id=139,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEd25k5vpe5RF7eE5I3Tvu0k0Vd4N3hWvJllCPRt8J8XGYxBXsiSIRCyegP797c/TRlPHA1fDrxk+Gm2vNvfIE63+X+KMEDY0xgAp7/yX6GkOSr/XekJV56qSP4G3OMAg==',key_name='tempest-TestGettingAddress-173295366',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:07:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-2q1pc96q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:07:35Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=086604c0-28d5-41d4-995c-17db322b3ded,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.459 2 DEBUG nova.network.os_vif_util [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.460 2 DEBUG nova.network.os_vif_util [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.461 2 DEBUG os_vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97679643-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:08:46 np0005465604 nova_compute[260603]: 2025-10-02 09:08:46.471 2 INFO os_vif [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:b1:36,bridge_name='br-int',has_traffic_filtering=True,id=97679643-cd71-4857-a615-c21d643d15c2,network=Network(3b4c5c4f-7410-4ce4-9e83-46e3156b929c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97679643-cd')#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.483 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4c3184-2018-4589-8f14-00dd47a02329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.485 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aafd63bd-bbb9-4d54-a3ce-fb39a79c4db5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.504 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8af5c7f0-3bf1-4e7a-b17d-ddc2c8c7b57a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693311, 'reachable_time': 40542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416298, 'error': None, 'target': 'ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:46 np0005465604 systemd[1]: run-netns-ovnmeta\x2d0cd32ebe\x2d6aa8\x2d4400\x2d8c00\x2da3546d677f2c.mount: Deactivated successfully.
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.512 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0cd32ebe-6aa8-4400-8c00-a3546d677f2c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.512 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e6ba2b-6a47-42ac-89a8-e0e6cd0016d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.513 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 97679643-cd71-4857-a615-c21d643d15c2 in datapath 3b4c5c4f-7410-4ce4-9e83-46e3156b929c unbound from our chassis#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.514 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b4c5c4f-7410-4ce4-9e83-46e3156b929c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.515 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[93819e33-a064-4f38-9a61-e6dc92b1d1e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:46 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:46.515 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c namespace which is not needed anymore#033[00m
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [NOTICE]   (414538) : haproxy version is 2.8.14-c23fe91
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [NOTICE]   (414538) : path to executable is /usr/sbin/haproxy
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [WARNING]  (414538) : Exiting Master process...
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [WARNING]  (414538) : Exiting Master process...
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [ALERT]    (414538) : Current worker (414540) exited with code 143 (Terminated)
Oct  2 05:08:46 np0005465604 neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c[414534]: [WARNING]  (414538) : All workers exited. Exiting... (0)
Oct  2 05:08:46 np0005465604 systemd[1]: libpod-e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea.scope: Deactivated successfully.
Oct  2 05:08:46 np0005465604 podman[416326]: 2025-10-02 09:08:46.720666438 +0000 UTC m=+0.105105468 container died e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:08:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea-userdata-shm.mount: Deactivated successfully.
Oct  2 05:08:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-da359b613ffa5d68b88a20f5d2354085bb5af6508d378872b99a7f638a446a90-merged.mount: Deactivated successfully.
Oct  2 05:08:46 np0005465604 podman[416326]: 2025-10-02 09:08:46.92313654 +0000 UTC m=+0.307575540 container cleanup e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 05:08:46 np0005465604 systemd[1]: libpod-conmon-e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea.scope: Deactivated successfully.
Oct  2 05:08:47 np0005465604 podman[416358]: 2025-10-02 09:08:47.109387827 +0000 UTC m=+0.149975491 container remove e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 05:08:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.118 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3048d5d2-ae45-433d-99fc-283ac4ea9d76]: (4, ('Thu Oct  2 09:08:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c (e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea)\ne7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea\nThu Oct  2 09:08:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c (e7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea)\ne7e6a659b62282cfe27ba7aba70a453804b6b97b21a7aa898fb7d732f17b2eea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.122 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8488c3d6-9169-4533-8132-b42b15557714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.123 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b4c5c4f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:08:47 np0005465604 nova_compute[260603]: 2025-10-02 09:08:47.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:47 np0005465604 kernel: tap3b4c5c4f-70: left promiscuous mode
Oct  2 05:08:47 np0005465604 nova_compute[260603]: 2025-10-02 09:08:47.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.134 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18ccf2fb-9ff3-45c4-be84-242ea90b7e74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:47 np0005465604 nova_compute[260603]: 2025-10-02 09:08:47.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:47 np0005465604 nova_compute[260603]: 2025-10-02 09:08:47.155 2 DEBUG nova.compute.manager [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-unplugged-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:47 np0005465604 nova_compute[260603]: 2025-10-02 09:08:47.156 2 DEBUG oslo_concurrency.lockutils [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.156 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7f099f-6e1f-40ae-804d-6383adb34b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:47 np0005465604 nova_compute[260603]: 2025-10-02 09:08:47.157 2 DEBUG oslo_concurrency.lockutils [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:47 np0005465604 nova_compute[260603]: 2025-10-02 09:08:47.157 2 DEBUG oslo_concurrency.lockutils [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.157 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e3915374-d113-4d6e-a623-0f0deced2d51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:47 np0005465604 nova_compute[260603]: 2025-10-02 09:08:47.158 2 DEBUG nova.compute.manager [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-unplugged-97679643-cd71-4857-a615-c21d643d15c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:08:47 np0005465604 nova_compute[260603]: 2025-10-02 09:08:47.158 2 DEBUG nova.compute.manager [req-d3e335a6-b3b4-4404-a830-61587986e2d3 req-b0234ac3-507d-488c-818e-737b050e9a92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-unplugged-97679643-cd71-4857-a615-c21d643d15c2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:08:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.184 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[57af7869-adbd-4766-920b-c44062cbc2fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693409, 'reachable_time': 25430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416373, 'error': None, 'target': 'ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.187 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3b4c5c4f-7410-4ce4-9e83-46e3156b929c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:08:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:08:47.187 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0583ae78-eac9-41c7-ad2a-0faced1e0a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:08:47 np0005465604 systemd[1]: run-netns-ovnmeta\x2d3b4c5c4f\x2d7410\x2d4ce4\x2d9e83\x2d46e3156b929c.mount: Deactivated successfully.
Oct  2 05:08:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 57 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 7.4 KiB/s wr, 44 op/s
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.133 2 DEBUG nova.compute.manager [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-unplugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.134 2 DEBUG oslo_concurrency.lockutils [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.134 2 DEBUG oslo_concurrency.lockutils [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.135 2 DEBUG oslo_concurrency.lockutils [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.135 2 DEBUG nova.compute.manager [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-unplugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.135 2 DEBUG nova.compute.manager [req-625c5453-5bab-47c2-b5a1-5ea97c0c8c13 req-3f774b30-d968-4b24-9646-dbd00949a581 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-unplugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:08:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.627 2 INFO nova.virt.libvirt.driver [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Deleting instance files /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded_del#033[00m
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.627 2 INFO nova.virt.libvirt.driver [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Deletion of /var/lib/nova/instances/086604c0-28d5-41d4-995c-17db322b3ded_del complete#033[00m
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.683 2 INFO nova.compute.manager [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Took 2.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.684 2 DEBUG oslo.service.loopingcall [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.684 2 DEBUG nova.compute.manager [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:08:48 np0005465604 nova_compute[260603]: 2025-10-02 09:08:48.684 2 DEBUG nova.network.neutron [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:08:49 np0005465604 nova_compute[260603]: 2025-10-02 09:08:49.150 2 DEBUG nova.network.neutron [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updated VIF entry in instance network info cache for port 1bf113e0-562b-45fb-9b97-aa76d5dac283. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:08:49 np0005465604 nova_compute[260603]: 2025-10-02 09:08:49.151 2 DEBUG nova.network.neutron [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [{"id": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "address": "fa:16:3e:fc:e1:b8", "network": {"id": "0cd32ebe-6aa8-4400-8c00-a3546d677f2c", "bridge": "br-int", "label": "tempest-network-smoke--132784297", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf113e0-56", "ovs_interfaceid": "1bf113e0-562b-45fb-9b97-aa76d5dac283", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "97679643-cd71-4857-a615-c21d643d15c2", "address": "fa:16:3e:9d:b1:36", "network": {"id": "3b4c5c4f-7410-4ce4-9e83-46e3156b929c", "bridge": "br-int", "label": "tempest-network-smoke--1595158494", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe9d:b136", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97679643-cd", "ovs_interfaceid": "97679643-cd71-4857-a615-c21d643d15c2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:08:49 np0005465604 nova_compute[260603]: 2025-10-02 09:08:49.175 2 DEBUG oslo_concurrency.lockutils [req-899c1b69-9c64-4ac8-a708-54302fd58b00 req-bff74727-ec96-4a5e-a1ac-27ed8b84143f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-086604c0-28d5-41d4-995c-17db322b3ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:08:49 np0005465604 nova_compute[260603]: 2025-10-02 09:08:49.249 2 DEBUG nova.compute.manager [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:49 np0005465604 nova_compute[260603]: 2025-10-02 09:08:49.250 2 DEBUG oslo_concurrency.lockutils [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:49 np0005465604 nova_compute[260603]: 2025-10-02 09:08:49.250 2 DEBUG oslo_concurrency.lockutils [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:49 np0005465604 nova_compute[260603]: 2025-10-02 09:08:49.251 2 DEBUG oslo_concurrency.lockutils [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:49 np0005465604 nova_compute[260603]: 2025-10-02 09:08:49.251 2 DEBUG nova.compute.manager [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:08:49 np0005465604 nova_compute[260603]: 2025-10-02 09:08:49.251 2 WARNING nova.compute.manager [req-ad222c61-2e89-43cb-8450-2b5e187cbe19 req-386bfdf0-e053-4eed-89ba-4feaaa30461a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received unexpected event network-vif-plugged-97679643-cd71-4857-a615-c21d643d15c2 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:08:50 np0005465604 podman[416376]: 2025-10-02 09:08:50.009909873 +0000 UTC m=+0.062604046 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  2 05:08:50 np0005465604 podman[416375]: 2025-10-02 09:08:50.015175007 +0000 UTC m=+0.071578265 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 05:08:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 57 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.0 KiB/s wr, 42 op/s
Oct  2 05:08:50 np0005465604 nova_compute[260603]: 2025-10-02 09:08:50.220 2 DEBUG nova.compute.manager [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:50 np0005465604 nova_compute[260603]: 2025-10-02 09:08:50.220 2 DEBUG oslo_concurrency.lockutils [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "086604c0-28d5-41d4-995c-17db322b3ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:50 np0005465604 nova_compute[260603]: 2025-10-02 09:08:50.221 2 DEBUG oslo_concurrency.lockutils [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:50 np0005465604 nova_compute[260603]: 2025-10-02 09:08:50.221 2 DEBUG oslo_concurrency.lockutils [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:50 np0005465604 nova_compute[260603]: 2025-10-02 09:08:50.221 2 DEBUG nova.compute.manager [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] No waiting events found dispatching network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:08:50 np0005465604 nova_compute[260603]: 2025-10-02 09:08:50.221 2 WARNING nova.compute.manager [req-7685f995-f85f-4303-9a48-3f0b2e0213ac req-3cea4447-9783-458e-807a-277de863b8af 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received unexpected event network-vif-plugged-1bf113e0-562b-45fb-9b97-aa76d5dac283 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.111 2 DEBUG nova.network.neutron [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.136 2 INFO nova.compute.manager [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Took 2.45 seconds to deallocate network for instance.#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.194 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.194 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.258 2 DEBUG oslo_concurrency.processutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.324 2 DEBUG nova.compute.manager [req-57b43742-2723-4f75-a579-4f566b5218db req-fba99988-2cba-4a60-8a08-e026ac7a66ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-deleted-1bf113e0-562b-45fb-9b97-aa76d5dac283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.325 2 DEBUG nova.compute.manager [req-57b43742-2723-4f75-a579-4f566b5218db req-fba99988-2cba-4a60-8a08-e026ac7a66ce 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Received event network-vif-deleted-97679643-cd71-4857-a615-c21d643d15c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:08:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1841118579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.715 2 DEBUG oslo_concurrency.processutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.722 2 DEBUG nova.compute.provider_tree [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.737 2 DEBUG nova.scheduler.client.report [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.755 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.782 2 INFO nova.scheduler.client.report [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 086604c0-28d5-41d4-995c-17db322b3ded#033[00m
Oct  2 05:08:51 np0005465604 nova_compute[260603]: 2025-10-02 09:08:51.837 2 DEBUG oslo_concurrency.lockutils [None req-ec39db08-3ab7-4f9e-9187-4102a15d981c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "086604c0-28d5-41d4-995c-17db322b3ded" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:08:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 57 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.0 KiB/s wr, 42 op/s
Oct  2 05:08:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:08:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Oct  2 05:08:54 np0005465604 nova_compute[260603]: 2025-10-02 09:08:54.504 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396119.502605, 5a11349c-a726-40c7-83f0-95f708b3f5d2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:08:54 np0005465604 nova_compute[260603]: 2025-10-02 09:08:54.504 2 INFO nova.compute.manager [-] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:08:54 np0005465604 nova_compute[260603]: 2025-10-02 09:08:54.521 2 DEBUG nova.compute.manager [None req-65b687f8-104d-4010-8be3-2a1313b5ac2b - - - - - -] [instance: 5a11349c-a726-40c7-83f0-95f708b3f5d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:08:55 np0005465604 nova_compute[260603]: 2025-10-02 09:08:55.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:08:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:08:56 np0005465604 nova_compute[260603]: 2025-10-02 09:08:56.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:56 np0005465604 nova_compute[260603]: 2025-10-02 09:08:56.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:56 np0005465604 nova_compute[260603]: 2025-10-02 09:08:56.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:56 np0005465604 nova_compute[260603]: 2025-10-02 09:08:56.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:08:57 np0005465604 nova_compute[260603]: 2025-10-02 09:08:57.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:08:57 np0005465604 nova_compute[260603]: 2025-10-02 09:08:57.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:08:57 np0005465604 nova_compute[260603]: 2025-10-02 09:08:57.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:08:57 np0005465604 nova_compute[260603]: 2025-10-02 09:08:57.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:08:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:08:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:08:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 767 B/s wr, 13 op/s
Oct  2 05:09:00 np0005465604 nova_compute[260603]: 2025-10-02 09:09:00.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:01 np0005465604 nova_compute[260603]: 2025-10-02 09:09:01.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:01 np0005465604 nova_compute[260603]: 2025-10-02 09:09:01.415 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396126.413921, 086604c0-28d5-41d4-995c-17db322b3ded => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:09:01 np0005465604 nova_compute[260603]: 2025-10-02 09:09:01.415 2 INFO nova.compute.manager [-] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:09:01 np0005465604 nova_compute[260603]: 2025-10-02 09:09:01.434 2 DEBUG nova.compute.manager [None req-74f1beda-7d5c-413a-95ae-41c1c78bc243 - - - - - -] [instance: 086604c0-28d5-41d4-995c-17db322b3ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:09:01 np0005465604 nova_compute[260603]: 2025-10-02 09:09:01.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:01 np0005465604 nova_compute[260603]: 2025-10-02 09:09:01.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 767 B/s wr, 13 op/s
Oct  2 05:09:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:03 np0005465604 nova_compute[260603]: 2025-10-02 09:09:03.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:03 np0005465604 nova_compute[260603]: 2025-10-02 09:09:03.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:03 np0005465604 nova_compute[260603]: 2025-10-02 09:09:03.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:03 np0005465604 nova_compute[260603]: 2025-10-02 09:09:03.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:03 np0005465604 nova_compute[260603]: 2025-10-02 09:09:03.555 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:03 np0005465604 nova_compute[260603]: 2025-10-02 09:09:03.555 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:09:03 np0005465604 nova_compute[260603]: 2025-10-02 09:09:03.556 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:09:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:09:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3447626246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:09:04 np0005465604 nova_compute[260603]: 2025-10-02 09:09:04.032 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:09:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 767 B/s wr, 13 op/s
Oct  2 05:09:04 np0005465604 nova_compute[260603]: 2025-10-02 09:09:04.183 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:09:04 np0005465604 nova_compute[260603]: 2025-10-02 09:09:04.185 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3625MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:09:04 np0005465604 nova_compute[260603]: 2025-10-02 09:09:04.185 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:04 np0005465604 nova_compute[260603]: 2025-10-02 09:09:04.186 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:04 np0005465604 nova_compute[260603]: 2025-10-02 09:09:04.357 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:09:04 np0005465604 nova_compute[260603]: 2025-10-02 09:09:04.357 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:09:04 np0005465604 nova_compute[260603]: 2025-10-02 09:09:04.499 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:09:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:09:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/566579265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:09:04 np0005465604 nova_compute[260603]: 2025-10-02 09:09:04.974 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:09:04 np0005465604 nova_compute[260603]: 2025-10-02 09:09:04.980 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:09:05 np0005465604 nova_compute[260603]: 2025-10-02 09:09:05.003 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:09:05 np0005465604 nova_compute[260603]: 2025-10-02 09:09:05.063 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:09:05 np0005465604 nova_compute[260603]: 2025-10-02 09:09:05.064 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:09:06 np0005465604 nova_compute[260603]: 2025-10-02 09:09:06.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:06 np0005465604 nova_compute[260603]: 2025-10-02 09:09:06.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:09:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:09 np0005465604 nova_compute[260603]: 2025-10-02 09:09:09.060 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:09:11 np0005465604 nova_compute[260603]: 2025-10-02 09:09:11.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:11 np0005465604 nova_compute[260603]: 2025-10-02 09:09:11.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:11 np0005465604 nova_compute[260603]: 2025-10-02 09:09:11.513 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:11 np0005465604 nova_compute[260603]: 2025-10-02 09:09:11.544 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:11 np0005465604 nova_compute[260603]: 2025-10-02 09:09:11.545 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 05:09:11 np0005465604 nova_compute[260603]: 2025-10-02 09:09:11.574 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 05:09:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:09:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:13 np0005465604 nova_compute[260603]: 2025-10-02 09:09:13.549 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:09:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:14.920 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:38:02 2001:db8:0:1:f816:3eff:fe57:3802 2001:db8::f816:3eff:fe57:3802'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe57:3802/64 2001:db8::f816:3eff:fe57:3802/64', 'neutron:device_id': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a251a259-65e8-4a45-82af-f69bd5f24a08, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=136a7ea2-2365-4779-b31d-41cbfc52a20f) old=Port_Binding(mac=['fa:16:3e:57:38:02 2001:db8::f816:3eff:fe57:3802'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe57:3802/64', 'neutron:device_id': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:09:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:14.921 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 136a7ea2-2365-4779-b31d-41cbfc52a20f in datapath 6d9c157f-cf57-4b44-8fba-d16631e22418 updated#033[00m
Oct  2 05:09:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:14.922 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d9c157f-cf57-4b44-8fba-d16631e22418, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:09:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:14.923 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0abeef-c538-476c-93e2-642cea4e0625]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:09:16 np0005465604 nova_compute[260603]: 2025-10-02 09:09:16.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:16 np0005465604 nova_compute[260603]: 2025-10-02 09:09:16.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:17 np0005465604 podman[416483]: 2025-10-02 09:09:17.056670491 +0000 UTC m=+0.125240342 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 05:09:17 np0005465604 podman[416484]: 2025-10-02 09:09:17.07947929 +0000 UTC m=+0.134585453 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 05:09:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:09:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:09:20 np0005465604 podman[416529]: 2025-10-02 09:09:20.992424647 +0000 UTC m=+0.063174664 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 05:09:21 np0005465604 podman[416530]: 2025-10-02 09:09:21.017411883 +0000 UTC m=+0.073544557 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct  2 05:09:21 np0005465604 nova_compute[260603]: 2025-10-02 09:09:21.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:21 np0005465604 nova_compute[260603]: 2025-10-02 09:09:21.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:09:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:09:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/867271278' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:09:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:09:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/867271278' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:09:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:09:25 np0005465604 nova_compute[260603]: 2025-10-02 09:09:25.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.183 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.183 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.200 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.291 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.292 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.299 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.300 2 INFO nova.compute.claims [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.405 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:09:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019343218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.835 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.841 2 DEBUG nova.compute.provider_tree [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.855 2 DEBUG nova.scheduler.client.report [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.876 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.877 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.929 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.930 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.953 2 INFO nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:09:26 np0005465604 nova_compute[260603]: 2025-10-02 09:09:26.971 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.078 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.079 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.079 2 INFO nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Creating image(s)#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.099 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.120 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.141 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.145 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.205 2 DEBUG nova.policy [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.247 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.248 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.248 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.248 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.269 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:09:27 np0005465604 nova_compute[260603]: 2025-10-02 09:09:27.273 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:09:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 45 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 10 KiB/s wr, 8 op/s
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:09:28
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr']
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:09:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:28 np0005465604 nova_compute[260603]: 2025-10-02 09:09:28.282 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Successfully created port: 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:09:28 np0005465604 nova_compute[260603]: 2025-10-02 09:09:28.376 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:09:28 np0005465604 nova_compute[260603]: 2025-10-02 09:09:28.447 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:09:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:09:28 np0005465604 nova_compute[260603]: 2025-10-02 09:09:28.726 2 DEBUG nova.objects.instance [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:09:28 np0005465604 nova_compute[260603]: 2025-10-02 09:09:28.781 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:09:28 np0005465604 nova_compute[260603]: 2025-10-02 09:09:28.782 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Ensure instance console log exists: /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:09:28 np0005465604 nova_compute[260603]: 2025-10-02 09:09:28.783 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:28 np0005465604 nova_compute[260603]: 2025-10-02 09:09:28.783 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:28 np0005465604 nova_compute[260603]: 2025-10-02 09:09:28.783 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:09:29 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e0c0ff83-7f65-4578-aace-f004b70aa6ec does not exist
Oct  2 05:09:29 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c461ccbb-5e4e-4b9e-a329-52978d316540 does not exist
Oct  2 05:09:29 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev eeffa823-ac44-4c47-ba54-662e2b742b0f does not exist
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:09:29 np0005465604 podman[417030]: 2025-10-02 09:09:29.642006609 +0000 UTC m=+0.023733769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:09:29 np0005465604 podman[417030]: 2025-10-02 09:09:29.74628606 +0000 UTC m=+0.128013210 container create 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:09:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:09:29 np0005465604 systemd[1]: Started libpod-conmon-2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3.scope.
Oct  2 05:09:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:09:29 np0005465604 podman[417030]: 2025-10-02 09:09:29.905541199 +0000 UTC m=+0.287268359 container init 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 05:09:29 np0005465604 podman[417030]: 2025-10-02 09:09:29.913103154 +0000 UTC m=+0.294830324 container start 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 05:09:29 np0005465604 systemd[1]: libpod-2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3.scope: Deactivated successfully.
Oct  2 05:09:29 np0005465604 friendly_pascal[417047]: 167 167
Oct  2 05:09:29 np0005465604 conmon[417047]: conmon 2824eb781c989c5d7f25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3.scope/container/memory.events
Oct  2 05:09:29 np0005465604 podman[417030]: 2025-10-02 09:09:29.924521479 +0000 UTC m=+0.306248649 container attach 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 05:09:29 np0005465604 podman[417030]: 2025-10-02 09:09:29.924944992 +0000 UTC m=+0.306672132 container died 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 05:09:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-fb4bd7bc0ebef1897f5d77ef330d7d7e22982d5fc36106ea935d52d9112cfb8e-merged.mount: Deactivated successfully.
Oct  2 05:09:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 45 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 10 KiB/s wr, 8 op/s
Oct  2 05:09:30 np0005465604 podman[417030]: 2025-10-02 09:09:30.235889135 +0000 UTC m=+0.617616285 container remove 2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:09:30 np0005465604 systemd[1]: libpod-conmon-2824eb781c989c5d7f257d537e3a4a7f99bfce1834ee738a1cde1f6d9e1b8de3.scope: Deactivated successfully.
Oct  2 05:09:30 np0005465604 podman[417070]: 2025-10-02 09:09:30.515954298 +0000 UTC m=+0.102291760 container create 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:09:30 np0005465604 podman[417070]: 2025-10-02 09:09:30.452914999 +0000 UTC m=+0.039252481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:09:30 np0005465604 systemd[1]: Started libpod-conmon-3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590.scope.
Oct  2 05:09:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:09:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:30 np0005465604 podman[417070]: 2025-10-02 09:09:30.649641453 +0000 UTC m=+0.235978935 container init 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:09:30 np0005465604 podman[417070]: 2025-10-02 09:09:30.656671481 +0000 UTC m=+0.243008943 container start 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 05:09:30 np0005465604 podman[417070]: 2025-10-02 09:09:30.663641428 +0000 UTC m=+0.249978910 container attach 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:09:30 np0005465604 nova_compute[260603]: 2025-10-02 09:09:30.950 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Successfully created port: 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:09:31 np0005465604 nova_compute[260603]: 2025-10-02 09:09:31.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:31 np0005465604 nova_compute[260603]: 2025-10-02 09:09:31.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:31 np0005465604 determined_shamir[417086]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:09:31 np0005465604 determined_shamir[417086]: --> relative data size: 1.0
Oct  2 05:09:31 np0005465604 determined_shamir[417086]: --> All data devices are unavailable
Oct  2 05:09:31 np0005465604 systemd[1]: libpod-3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590.scope: Deactivated successfully.
Oct  2 05:09:31 np0005465604 systemd[1]: libpod-3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590.scope: Consumed 1.189s CPU time.
Oct  2 05:09:31 np0005465604 podman[417115]: 2025-10-02 09:09:31.998963074 +0000 UTC m=+0.035729921 container died 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 05:09:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a5fb65108afc223619f66f1c67a3dd132bd60140e93b9e099984d0f0179b0ca7-merged.mount: Deactivated successfully.
Oct  2 05:09:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 45 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 4.8 KiB/s rd, 10 KiB/s wr, 8 op/s
Oct  2 05:09:32 np0005465604 podman[417115]: 2025-10-02 09:09:32.058890526 +0000 UTC m=+0.095657353 container remove 3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:09:32 np0005465604 systemd[1]: libpod-conmon-3dc9b61d1d2d6344e15cf61a7f6c38410d50fcf899614e4e8efd75c70d9fd590.scope: Deactivated successfully.
Oct  2 05:09:32 np0005465604 podman[417269]: 2025-10-02 09:09:32.812702511 +0000 UTC m=+0.048563330 container create 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 05:09:32 np0005465604 systemd[1]: Started libpod-conmon-4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014.scope.
Oct  2 05:09:32 np0005465604 podman[417269]: 2025-10-02 09:09:32.791991518 +0000 UTC m=+0.027852347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:09:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:09:32 np0005465604 podman[417269]: 2025-10-02 09:09:32.914367201 +0000 UTC m=+0.150228110 container init 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:09:32 np0005465604 podman[417269]: 2025-10-02 09:09:32.931954658 +0000 UTC m=+0.167815467 container start 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:09:32 np0005465604 podman[417269]: 2025-10-02 09:09:32.935739594 +0000 UTC m=+0.171600423 container attach 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 05:09:32 np0005465604 magical_haslett[417285]: 167 167
Oct  2 05:09:32 np0005465604 systemd[1]: libpod-4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014.scope: Deactivated successfully.
Oct  2 05:09:32 np0005465604 podman[417269]: 2025-10-02 09:09:32.940497812 +0000 UTC m=+0.176358621 container died 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:09:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7cc60bf02188e57559b284b66a31300fa69a1eed8ecdecaf369f7d2ca28cb974-merged.mount: Deactivated successfully.
Oct  2 05:09:32 np0005465604 podman[417269]: 2025-10-02 09:09:32.982839508 +0000 UTC m=+0.218700347 container remove 4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 05:09:33 np0005465604 systemd[1]: libpod-conmon-4018c3e3b6d5eace518c4478971c3600fcba645f468d9746ae3df22b1c72c014.scope: Deactivated successfully.
Oct  2 05:09:33 np0005465604 nova_compute[260603]: 2025-10-02 09:09:33.003 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Successfully updated port: 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:09:33 np0005465604 nova_compute[260603]: 2025-10-02 09:09:33.093 2 DEBUG nova.compute.manager [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:09:33 np0005465604 nova_compute[260603]: 2025-10-02 09:09:33.094 2 DEBUG nova.compute.manager [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing instance network info cache due to event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:09:33 np0005465604 nova_compute[260603]: 2025-10-02 09:09:33.095 2 DEBUG oslo_concurrency.lockutils [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:09:33 np0005465604 nova_compute[260603]: 2025-10-02 09:09:33.095 2 DEBUG oslo_concurrency.lockutils [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:09:33 np0005465604 nova_compute[260603]: 2025-10-02 09:09:33.095 2 DEBUG nova.network.neutron [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing network info cache for port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:09:33 np0005465604 podman[417311]: 2025-10-02 09:09:33.171938085 +0000 UTC m=+0.053659639 container create d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 05:09:33 np0005465604 systemd[1]: Started libpod-conmon-d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74.scope.
Oct  2 05:09:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:33 np0005465604 podman[417311]: 2025-10-02 09:09:33.149679673 +0000 UTC m=+0.031401207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:09:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:09:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b34e1c3bf90ad2d962482859ee95430fb65404f90eec4d3bb04dab1aa6b523/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b34e1c3bf90ad2d962482859ee95430fb65404f90eec4d3bb04dab1aa6b523/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b34e1c3bf90ad2d962482859ee95430fb65404f90eec4d3bb04dab1aa6b523/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b34e1c3bf90ad2d962482859ee95430fb65404f90eec4d3bb04dab1aa6b523/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:33 np0005465604 podman[417311]: 2025-10-02 09:09:33.289364224 +0000 UTC m=+0.171085748 container init d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:09:33 np0005465604 podman[417311]: 2025-10-02 09:09:33.297667232 +0000 UTC m=+0.179388766 container start d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:09:33 np0005465604 podman[417311]: 2025-10-02 09:09:33.301896993 +0000 UTC m=+0.183618517 container attach d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:09:33 np0005465604 nova_compute[260603]: 2025-10-02 09:09:33.917 2 DEBUG nova.network.neutron [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:09:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]: {
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:    "0": [
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:        {
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "devices": [
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "/dev/loop3"
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            ],
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_name": "ceph_lv0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_size": "21470642176",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "name": "ceph_lv0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "tags": {
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.cluster_name": "ceph",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.crush_device_class": "",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.encrypted": "0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.osd_id": "0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.type": "block",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.vdo": "0"
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            },
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "type": "block",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "vg_name": "ceph_vg0"
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:        }
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:    ],
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:    "1": [
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:        {
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "devices": [
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "/dev/loop4"
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            ],
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_name": "ceph_lv1",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_size": "21470642176",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "name": "ceph_lv1",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "tags": {
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.cluster_name": "ceph",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.crush_device_class": "",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.encrypted": "0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.osd_id": "1",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.type": "block",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.vdo": "0"
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            },
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "type": "block",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "vg_name": "ceph_vg1"
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:        }
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:    ],
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:    "2": [
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:        {
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "devices": [
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "/dev/loop5"
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            ],
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_name": "ceph_lv2",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_size": "21470642176",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "name": "ceph_lv2",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "tags": {
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.cluster_name": "ceph",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.crush_device_class": "",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.encrypted": "0",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.osd_id": "2",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.type": "block",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:                "ceph.vdo": "0"
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            },
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "type": "block",
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:            "vg_name": "ceph_vg2"
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:        }
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]:    ]
Oct  2 05:09:34 np0005465604 affectionate_shirley[417328]: }
Oct  2 05:09:34 np0005465604 systemd[1]: libpod-d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74.scope: Deactivated successfully.
Oct  2 05:09:34 np0005465604 podman[417311]: 2025-10-02 09:09:34.130525914 +0000 UTC m=+1.012247448 container died d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:09:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d3b34e1c3bf90ad2d962482859ee95430fb65404f90eec4d3bb04dab1aa6b523-merged.mount: Deactivated successfully.
Oct  2 05:09:34 np0005465604 podman[417311]: 2025-10-02 09:09:34.183383657 +0000 UTC m=+1.065105161 container remove d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:09:34 np0005465604 systemd[1]: libpod-conmon-d040b8416c8a51c44a02c546d90ab3597a25b9bc114913944447de88825bcc74.scope: Deactivated successfully.
Oct  2 05:09:34 np0005465604 nova_compute[260603]: 2025-10-02 09:09:34.582 2 DEBUG nova.network.neutron [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:09:34 np0005465604 nova_compute[260603]: 2025-10-02 09:09:34.603 2 DEBUG oslo_concurrency.lockutils [req-0e75f6f6-a37b-4075-bf06-e993aa290e6b req-a50135e2-4340-40e8-b19f-7b2c47f69552 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:09:34 np0005465604 nova_compute[260603]: 2025-10-02 09:09:34.720 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:34 np0005465604 nova_compute[260603]: 2025-10-02 09:09:34.720 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 05:09:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:34.847 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:34.847 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:34.848 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:34 np0005465604 nova_compute[260603]: 2025-10-02 09:09:34.863 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Successfully updated port: 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:09:34 np0005465604 nova_compute[260603]: 2025-10-02 09:09:34.882 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:09:34 np0005465604 nova_compute[260603]: 2025-10-02 09:09:34.882 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:09:34 np0005465604 nova_compute[260603]: 2025-10-02 09:09:34.882 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:09:34 np0005465604 podman[417489]: 2025-10-02 09:09:34.905456556 +0000 UTC m=+0.047868539 container create aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:09:34 np0005465604 systemd[1]: Started libpod-conmon-aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963.scope.
Oct  2 05:09:34 np0005465604 podman[417489]: 2025-10-02 09:09:34.886808166 +0000 UTC m=+0.029220169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:09:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:09:35 np0005465604 podman[417489]: 2025-10-02 09:09:35.018338313 +0000 UTC m=+0.160750326 container init aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:09:35 np0005465604 podman[417489]: 2025-10-02 09:09:35.031784841 +0000 UTC m=+0.174196824 container start aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 05:09:35 np0005465604 compassionate_swartz[417505]: 167 167
Oct  2 05:09:35 np0005465604 systemd[1]: libpod-aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963.scope: Deactivated successfully.
Oct  2 05:09:35 np0005465604 podman[417489]: 2025-10-02 09:09:35.039102878 +0000 UTC m=+0.181514871 container attach aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 05:09:35 np0005465604 podman[417489]: 2025-10-02 09:09:35.039690157 +0000 UTC m=+0.182102140 container died aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 05:09:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1392cf9851fe8b3cfbe2231d8997a418ca19abf0076b7dddf53ffd1829863b30-merged.mount: Deactivated successfully.
Oct  2 05:09:35 np0005465604 nova_compute[260603]: 2025-10-02 09:09:35.115 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:09:35 np0005465604 podman[417489]: 2025-10-02 09:09:35.162209364 +0000 UTC m=+0.304621377 container remove aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:09:35 np0005465604 systemd[1]: libpod-conmon-aa003a35f0aa8e54911f19948bbc4e96af17a86e148187a31d036fd081b79963.scope: Deactivated successfully.
Oct  2 05:09:35 np0005465604 nova_compute[260603]: 2025-10-02 09:09:35.205 2 DEBUG nova.compute.manager [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-changed-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:09:35 np0005465604 nova_compute[260603]: 2025-10-02 09:09:35.207 2 DEBUG nova.compute.manager [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing instance network info cache due to event network-changed-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:09:35 np0005465604 nova_compute[260603]: 2025-10-02 09:09:35.208 2 DEBUG oslo_concurrency.lockutils [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:09:35 np0005465604 podman[417529]: 2025-10-02 09:09:35.372009563 +0000 UTC m=+0.057943641 container create cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 05:09:35 np0005465604 systemd[1]: Started libpod-conmon-cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b.scope.
Oct  2 05:09:35 np0005465604 podman[417529]: 2025-10-02 09:09:35.344021534 +0000 UTC m=+0.029955692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:09:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:09:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c43c33331ecc50d597ea729823b989a4cdd08d7264402c7acfb40288b0d826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c43c33331ecc50d597ea729823b989a4cdd08d7264402c7acfb40288b0d826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c43c33331ecc50d597ea729823b989a4cdd08d7264402c7acfb40288b0d826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c43c33331ecc50d597ea729823b989a4cdd08d7264402c7acfb40288b0d826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:35 np0005465604 podman[417529]: 2025-10-02 09:09:35.46873534 +0000 UTC m=+0.154669488 container init cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:09:35 np0005465604 podman[417529]: 2025-10-02 09:09:35.476551302 +0000 UTC m=+0.162485390 container start cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 05:09:35 np0005465604 podman[417529]: 2025-10-02 09:09:35.480954969 +0000 UTC m=+0.166889067 container attach cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:09:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct  2 05:09:36 np0005465604 nova_compute[260603]: 2025-10-02 09:09:36.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:36 np0005465604 nova_compute[260603]: 2025-10-02 09:09:36.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:36 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:36Z|01535|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct  2 05:09:36 np0005465604 admiring_keller[417546]: {
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "osd_id": 2,
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "type": "bluestore"
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:    },
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "osd_id": 1,
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "type": "bluestore"
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:    },
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "osd_id": 0,
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:        "type": "bluestore"
Oct  2 05:09:36 np0005465604 admiring_keller[417546]:    }
Oct  2 05:09:36 np0005465604 admiring_keller[417546]: }
Oct  2 05:09:36 np0005465604 systemd[1]: libpod-cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b.scope: Deactivated successfully.
Oct  2 05:09:36 np0005465604 systemd[1]: libpod-cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b.scope: Consumed 1.083s CPU time.
Oct  2 05:09:36 np0005465604 podman[417529]: 2025-10-02 09:09:36.554025116 +0000 UTC m=+1.239959204 container died cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:09:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-56c43c33331ecc50d597ea729823b989a4cdd08d7264402c7acfb40288b0d826-merged.mount: Deactivated successfully.
Oct  2 05:09:36 np0005465604 podman[417529]: 2025-10-02 09:09:36.648102539 +0000 UTC m=+1.334036607 container remove cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 05:09:36 np0005465604 systemd[1]: libpod-conmon-cba5f92628e5c95153060f1b6298f1003390e68a8dbb890ec293c40dee41422b.scope: Deactivated successfully.
Oct  2 05:09:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:09:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:09:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:09:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:09:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a977273b-f569-4ca5-95a8-c5ceffdf8eb2 does not exist
Oct  2 05:09:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e8972721-075b-4ca8-8fde-fb32eed96006 does not exist
Oct  2 05:09:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:09:37 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:09:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.176 2 DEBUG nova.network.neutron [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:09:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.241 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.241 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance network_info: |[{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.242 2 DEBUG oslo_concurrency.lockutils [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.242 2 DEBUG nova.network.neutron [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing network info cache for port 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.247 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Start _get_guest_xml network_info=[{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.253 2 WARNING nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.260 2 DEBUG nova.virt.libvirt.host [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.261 2 DEBUG nova.virt.libvirt.host [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.268 2 DEBUG nova.virt.libvirt.host [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.269 2 DEBUG nova.virt.libvirt.host [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.269 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.270 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.271 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.272 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.272 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.273 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.273 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.274 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.274 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.275 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.275 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.276 2 DEBUG nova.virt.hardware [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.281 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:09:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:09:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121066416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.767 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.793 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:09:38 np0005465604 nova_compute[260603]: 2025-10-02 09:09:38.797 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:09:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:09:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:09:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1307119254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.284 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.286 2 DEBUG nova.virt.libvirt.vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:09:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.286 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.287 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.288 2 DEBUG nova.virt.libvirt.vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:09:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.288 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.289 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.290 2 DEBUG nova.objects.instance [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.342 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  <uuid>3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b</uuid>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  <name>instance-0000008d</name>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-2095471568</nova:name>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:09:38</nova:creationTime>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <nova:port uuid="2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <nova:port uuid="5bca2e0b-43ae-46f8-b3cf-7be35129b5d8">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe00:94b1" ipVersion="6"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe00:94b1" ipVersion="6"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <entry name="serial">3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b</entry>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <entry name="uuid">3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b</entry>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:93:c0:38"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <target dev="tap2b42cc6e-a1"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:00:94:b1"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <target dev="tap5bca2e0b-43"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/console.log" append="off"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:09:39 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:09:39 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:09:39 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:09:39 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.344 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Preparing to wait for external event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.345 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.345 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.346 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.346 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Preparing to wait for external event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.346 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.347 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.347 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.348 2 DEBUG nova.virt.libvirt.vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:09:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.349 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.350 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.351 2 DEBUG os_vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.358 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b42cc6e-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.359 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b42cc6e-a1, col_values=(('external_ids', {'iface-id': '2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:c0:38', 'vm-uuid': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:39 np0005465604 NetworkManager[45129]: <info>  [1759396179.3633] manager: (tap2b42cc6e-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/619)
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.370 2 INFO os_vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1')#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.371 2 DEBUG nova.virt.libvirt.vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:09:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.372 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.374 2 DEBUG nova.network.os_vif_util [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.375 2 DEBUG os_vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5bca2e0b-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5bca2e0b-43, col_values=(('external_ids', {'iface-id': '5bca2e0b-43ae-46f8-b3cf-7be35129b5d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:94:b1', 'vm-uuid': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:39 np0005465604 NetworkManager[45129]: <info>  [1759396179.3860] manager: (tap5bca2e0b-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.395 2 INFO os_vif [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43')#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.483 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.484 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.484 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:93:c0:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.484 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:00:94:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.485 2 INFO nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Using config drive#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.519 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.532 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:39 np0005465604 nova_compute[260603]: 2025-10-02 09:09:39.533 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:09:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.070 2 INFO nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Creating config drive at /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.076 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9qtaaep4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.230 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9qtaaep4" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.260 2 DEBUG nova.storage.rbd_utils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.265 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.351 2 DEBUG nova.network.neutron [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updated VIF entry in instance network info cache for port 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.353 2 DEBUG nova.network.neutron [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.386 2 DEBUG oslo_concurrency.lockutils [req-54f0d369-1eb9-4dc8-ac09-b690230129f5 req-5292d38d-25d1-436f-b41d-655dfd079da3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.449 2 DEBUG oslo_concurrency.processutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.450 2 INFO nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Deleting local config drive /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b/disk.config because it was imported into RBD.#033[00m
Oct  2 05:09:40 np0005465604 kernel: tap2b42cc6e-a1: entered promiscuous mode
Oct  2 05:09:40 np0005465604 NetworkManager[45129]: <info>  [1759396180.5305] manager: (tap2b42cc6e-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:40Z|01536|binding|INFO|Claiming lport 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b for this chassis.
Oct  2 05:09:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:40Z|01537|binding|INFO|2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b: Claiming fa:16:3e:93:c0:38 10.100.0.11
Oct  2 05:09:40 np0005465604 NetworkManager[45129]: <info>  [1759396180.5509] manager: (tap5bca2e0b-43): new Tun device (/org/freedesktop/NetworkManager/Devices/622)
Oct  2 05:09:40 np0005465604 kernel: tap5bca2e0b-43: entered promiscuous mode
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.562 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:c0:38 10.100.0.11'], port_security=['fa:16:3e:93:c0:38 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68558f02-4047-4331-a47e-cbbee9580ea4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.564 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b in datapath d7dd05b8-70c0-4ef8-a410-57d83c307eaa bound to our chassis#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.566 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7dd05b8-70c0-4ef8-a410-57d83c307eaa#033[00m
Oct  2 05:09:40 np0005465604 systemd-udevd[417788]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:09:40 np0005465604 systemd-udevd[417789]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.584 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a63445b8-4882-4fe7-90eb-2ada7c8d74d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.586 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7dd05b8-71 in ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.588 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7dd05b8-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.588 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3cae3488-5af5-4356-9513-70636629bdf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.590 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e55fec1-f385-4a08-b409-15e98af50de4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 systemd-machined[214636]: New machine qemu-175-instance-0000008d.
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.604 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ee428333-2bbf-4bfc-9214-5331af06416f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 NetworkManager[45129]: <info>  [1759396180.6059] device (tap5bca2e0b-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:09:40 np0005465604 NetworkManager[45129]: <info>  [1759396180.6074] device (tap5bca2e0b-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:09:40 np0005465604 NetworkManager[45129]: <info>  [1759396180.6083] device (tap2b42cc6e-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:09:40 np0005465604 NetworkManager[45129]: <info>  [1759396180.6094] device (tap2b42cc6e-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:09:40 np0005465604 systemd[1]: Started Virtual Machine qemu-175-instance-0000008d.
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.630 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[38edd325-a808-4651-86a7-4fbfceb0c3d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:40Z|01538|binding|INFO|Claiming lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 for this chassis.
Oct  2 05:09:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:40Z|01539|binding|INFO|5bca2e0b-43ae-46f8-b3cf-7be35129b5d8: Claiming fa:16:3e:00:94:b1 2001:db8:0:1:f816:3eff:fe00:94b1 2001:db8::f816:3eff:fe00:94b1
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.638 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:94:b1 2001:db8:0:1:f816:3eff:fe00:94b1 2001:db8::f816:3eff:fe00:94b1'], port_security=['fa:16:3e:00:94:b1 2001:db8:0:1:f816:3eff:fe00:94b1 2001:db8::f816:3eff:fe00:94b1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe00:94b1/64 2001:db8::f816:3eff:fe00:94b1/64', 'neutron:device_id': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a251a259-65e8-4a45-82af-f69bd5f24a08, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:40Z|01540|binding|INFO|Setting lport 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b ovn-installed in OVS
Oct  2 05:09:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:40Z|01541|binding|INFO|Setting lport 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b up in Southbound
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:40Z|01542|binding|INFO|Setting lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 ovn-installed in OVS
Oct  2 05:09:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:40Z|01543|binding|INFO|Setting lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 up in Southbound
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.670 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0c22e1a6-ba94-41d7-bb77-7b13c16e01c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.681 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d356d5d8-9c32-492f-895b-13a4b047e013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 NetworkManager[45129]: <info>  [1759396180.6870] manager: (tapd7dd05b8-70): new Veth device (/org/freedesktop/NetworkManager/Devices/623)
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.721 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ac7382-2956-4f2a-81d9-27e6195fed06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.724 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1897c0d8-e591-4e54-b1b6-3d958fcbb73f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 NetworkManager[45129]: <info>  [1759396180.7490] device (tapd7dd05b8-70): carrier: link connected
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.758 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a2db0f18-06c6-413a-ba48-6a7eaaba2268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.786 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8394eb91-1e22-43e0-80e0-d03eb3e5e49c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7dd05b8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:04:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705935, 'reachable_time': 20541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417822, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.810 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6c54bc-f66b-4209-b8cb-79a1c4e528ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:4ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705935, 'tstamp': 705935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417823, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.834 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a1683b47-b4cf-49c0-9e43-b4f3598a3524]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7dd05b8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:04:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705935, 'reachable_time': 20541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 417824, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.874 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b75669b-7d7d-416f-b244-2b04153ad62a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.956 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccbe9be-cbce-4382-a78a-604f1f88ad73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.958 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7dd05b8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.958 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.959 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7dd05b8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:40 np0005465604 NetworkManager[45129]: <info>  [1759396180.9621] manager: (tapd7dd05b8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Oct  2 05:09:40 np0005465604 kernel: tapd7dd05b8-70: entered promiscuous mode
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.967 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7dd05b8-70, col_values=(('external_ids', {'iface-id': '93ded116-ee2f-4f81-a2c6-257136c86ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:40 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:40Z|01544|binding|INFO|Releasing lport 93ded116-ee2f-4f81-a2c6-257136c86ae4 from this chassis (sb_readonly=0)
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:40 np0005465604 nova_compute[260603]: 2025-10-02 09:09:40.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.996 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7dd05b8-70c0-4ef8-a410-57d83c307eaa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7dd05b8-70c0-4ef8-a410-57d83c307eaa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.997 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[de113ef3-3360-46c7-bc7c-9221a8c50185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.998 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-d7dd05b8-70c0-4ef8-a410-57d83c307eaa
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/d7dd05b8-70c0-4ef8-a410-57d83c307eaa.pid.haproxy
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID d7dd05b8-70c0-4ef8-a410-57d83c307eaa
Oct  2 05:09:40 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:40.998 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'env', 'PROCESS_TAG=haproxy-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7dd05b8-70c0-4ef8-a410-57d83c307eaa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.171 2 DEBUG nova.compute.manager [req-42135d6b-2d30-41e3-a4bb-c7b1bc32eb76 req-a45538be-8119-4acd-abc2-4e77a43a447b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.172 2 DEBUG oslo_concurrency.lockutils [req-42135d6b-2d30-41e3-a4bb-c7b1bc32eb76 req-a45538be-8119-4acd-abc2-4e77a43a447b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.172 2 DEBUG oslo_concurrency.lockutils [req-42135d6b-2d30-41e3-a4bb-c7b1bc32eb76 req-a45538be-8119-4acd-abc2-4e77a43a447b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.172 2 DEBUG oslo_concurrency.lockutils [req-42135d6b-2d30-41e3-a4bb-c7b1bc32eb76 req-a45538be-8119-4acd-abc2-4e77a43a447b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.173 2 DEBUG nova.compute.manager [req-42135d6b-2d30-41e3-a4bb-c7b1bc32eb76 req-a45538be-8119-4acd-abc2-4e77a43a447b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Processing event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.182 2 DEBUG nova.compute.manager [req-2f106664-884d-48ca-a448-63a72597bfbf req-4e2205c8-71f8-4171-9928-19f2b209edf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.183 2 DEBUG oslo_concurrency.lockutils [req-2f106664-884d-48ca-a448-63a72597bfbf req-4e2205c8-71f8-4171-9928-19f2b209edf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.183 2 DEBUG oslo_concurrency.lockutils [req-2f106664-884d-48ca-a448-63a72597bfbf req-4e2205c8-71f8-4171-9928-19f2b209edf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.183 2 DEBUG oslo_concurrency.lockutils [req-2f106664-884d-48ca-a448-63a72597bfbf req-4e2205c8-71f8-4171-9928-19f2b209edf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.183 2 DEBUG nova.compute.manager [req-2f106664-884d-48ca-a448-63a72597bfbf req-4e2205c8-71f8-4171-9928-19f2b209edf8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Processing event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.322 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:41 np0005465604 podman[417898]: 2025-10-02 09:09:41.492138382 +0000 UTC m=+0.059186091 container create f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:09:41 np0005465604 podman[417898]: 2025-10-02 09:09:41.460623132 +0000 UTC m=+0.027670831 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:09:41 np0005465604 systemd[1]: Started libpod-conmon-f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a.scope.
Oct  2 05:09:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:09:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85b02860fc626a52f25bc8a62daedd7cc30adfcd5cfe6c239da051b179e05c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:41 np0005465604 podman[417898]: 2025-10-02 09:09:41.623638498 +0000 UTC m=+0.190686227 container init f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 05:09:41 np0005465604 podman[417898]: 2025-10-02 09:09:41.634511306 +0000 UTC m=+0.201558975 container start f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:09:41 np0005465604 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [NOTICE]   (417918) : New worker (417920) forked
Oct  2 05:09:41 np0005465604 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [NOTICE]   (417918) : Loading success.
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.730 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 in datapath 6d9c157f-cf57-4b44-8fba-d16631e22418 unbound from our chassis#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.731 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d9c157f-cf57-4b44-8fba-d16631e22418#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[77088aa3-9989-40d7-865c-f0a82f9773f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.745 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d9c157f-c1 in ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.749 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d9c157f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.749 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa13a146-2fbb-4c75-a9b9-f685148c836c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.750 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7da28982-b8de-4b17-8646-23297bb46b13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.764 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb32067-8d8e-43df-bc6e-b9bc8fe9c42d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.784 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4fc27e-b005-4d2f-ad79-7568726134a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.825 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[42d7a377-137d-417b-9a03-964da13ae197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.832 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f86fa05-82bc-4820-82ce-0db27f04c109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:41 np0005465604 systemd-udevd[417810]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:09:41 np0005465604 NetworkManager[45129]: <info>  [1759396181.8355] manager: (tap6d9c157f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/625)
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.877 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0c879d44-209c-41f8-945f-a389b2b619ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.881 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[613bd572-7bb0-4c40-b371-390f5bf59b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:41 np0005465604 NetworkManager[45129]: <info>  [1759396181.9164] device (tap6d9c157f-c0): carrier: link connected
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.926 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[11a21b7e-c3d8-4a1f-90f1-274607ea0303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.954 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[042300b9-201b-459a-8c0b-c499168cfcd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d9c157f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:38:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706051, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417939, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.959 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.960 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396181.958515, 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.960 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] VM Started (Lifecycle Event)#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.967 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.971 2 INFO nova.virt.libvirt.driver [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance spawned successfully.#033[00m
Oct  2 05:09:41 np0005465604 nova_compute[260603]: 2025-10-02 09:09:41.972 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:09:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:41.986 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6034eb40-4d52-43ab-bb26-39c37ddf10bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:3802'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706051, 'tstamp': 706051}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417940, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.009 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.010 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2411f0f9-1890-4eb1-b721-ad9680c90b48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d9c157f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:38:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706051, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 417941, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.014 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.015 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.015 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.015 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.016 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.016 2 DEBUG nova.virt.libvirt.driver [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.020 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.053 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dc88e72a-cf0c-4e5d-905b-8b631241843d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.071 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.071 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396181.9588575, 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.072 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.090 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[34efbb30-8e8d-42e8-9e4a-ae571ccdf6f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d9c157f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.092 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d9c157f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:42 np0005465604 kernel: tap6d9c157f-c0: entered promiscuous mode
Oct  2 05:09:42 np0005465604 NetworkManager[45129]: <info>  [1759396182.0966] manager: (tap6d9c157f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.097 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d9c157f-c0, col_values=(('external_ids', {'iface-id': '136a7ea2-2365-4779-b31d-41cbfc52a20f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:42Z|01545|binding|INFO|Releasing lport 136a7ea2-2365-4779-b31d-41cbfc52a20f from this chassis (sb_readonly=0)
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.105 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.108 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396181.965137, 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.108 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.113 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d9c157f-cf57-4b44-8fba-d16631e22418.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d9c157f-cf57-4b44-8fba-d16631e22418.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.114 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d9c9db-df6f-46a9-9929-bdf2bea6cb52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.114 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-6d9c157f-cf57-4b44-8fba-d16631e22418
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/6d9c157f-cf57-4b44-8fba-d16631e22418.pid.haproxy
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 6d9c157f-cf57-4b44-8fba-d16631e22418
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.115 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'env', 'PROCESS_TAG=haproxy-6d9c157f-cf57-4b44-8fba-d16631e22418', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d9c157f-cf57-4b44-8fba-d16631e22418.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.121 2 INFO nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Took 15.04 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.122 2 DEBUG nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.130 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.134 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.159 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.183 2 INFO nova.compute.manager [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Took 15.92 seconds to build instance.#033[00m
Oct  2 05:09:42 np0005465604 nova_compute[260603]: 2025-10-02 09:09:42.210 2 DEBUG oslo_concurrency.lockutils [None req-e0830125-48e7-467f-8da1-8ed955852d00 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:42 np0005465604 podman[417972]: 2025-10-02 09:09:42.601175226 +0000 UTC m=+0.063599117 container create 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 05:09:42 np0005465604 systemd[1]: Started libpod-conmon-5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82.scope.
Oct  2 05:09:42 np0005465604 podman[417972]: 2025-10-02 09:09:42.572470484 +0000 UTC m=+0.034894395 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:09:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:09:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7808816a47e7300fa580de7380ab5c8b3663ac809aa4c663ed97b18b7941cb10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:09:42 np0005465604 podman[417972]: 2025-10-02 09:09:42.687420636 +0000 UTC m=+0.149844617 container init 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:09:42 np0005465604 podman[417972]: 2025-10-02 09:09:42.700597496 +0000 UTC m=+0.163021427 container start 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 05:09:42 np0005465604 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [NOTICE]   (417992) : New worker (417994) forked
Oct  2 05:09:42 np0005465604 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [NOTICE]   (417992) : Loading success.
Oct  2 05:09:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:42.809 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:09:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.270 2 DEBUG nova.compute.manager [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.271 2 DEBUG oslo_concurrency.lockutils [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.271 2 DEBUG oslo_concurrency.lockutils [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.271 2 DEBUG oslo_concurrency.lockutils [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.272 2 DEBUG nova.compute.manager [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.272 2 WARNING nova.compute.manager [req-fcde42ab-70c1-49dd-b0bc-0cd7c9be5eb5 req-71ef4fe1-5992-4dcc-b347-7abc5a303b37 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received unexpected event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b for instance with vm_state active and task_state None.#033[00m
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.337 2 DEBUG nova.compute.manager [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.338 2 DEBUG oslo_concurrency.lockutils [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.338 2 DEBUG oslo_concurrency.lockutils [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.338 2 DEBUG oslo_concurrency.lockutils [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.338 2 DEBUG nova.compute.manager [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:09:43 np0005465604 nova_compute[260603]: 2025-10-02 09:09:43.339 2 WARNING nova.compute.manager [req-85526271-728e-4539-bf04-c0b5d3da5127 req-707b97b0-44b0-40f2-bde3-2e6218a9d4f3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received unexpected event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:09:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 489 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct  2 05:09:44 np0005465604 nova_compute[260603]: 2025-10-02 09:09:44.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:09:44.812 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:09:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 441 KiB/s rd, 12 KiB/s wr, 23 op/s
Oct  2 05:09:46 np0005465604 nova_compute[260603]: 2025-10-02 09:09:46.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:47 np0005465604 nova_compute[260603]: 2025-10-02 09:09:47.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:47 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:47Z|01546|binding|INFO|Releasing lport 93ded116-ee2f-4f81-a2c6-257136c86ae4 from this chassis (sb_readonly=0)
Oct  2 05:09:47 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:47Z|01547|binding|INFO|Releasing lport 136a7ea2-2365-4779-b31d-41cbfc52a20f from this chassis (sb_readonly=0)
Oct  2 05:09:47 np0005465604 NetworkManager[45129]: <info>  [1759396187.1098] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/627)
Oct  2 05:09:47 np0005465604 NetworkManager[45129]: <info>  [1759396187.1102] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Oct  2 05:09:47 np0005465604 nova_compute[260603]: 2025-10-02 09:09:47.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:47 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:47Z|01548|binding|INFO|Releasing lport 93ded116-ee2f-4f81-a2c6-257136c86ae4 from this chassis (sb_readonly=0)
Oct  2 05:09:47 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:47Z|01549|binding|INFO|Releasing lport 136a7ea2-2365-4779-b31d-41cbfc52a20f from this chassis (sb_readonly=0)
Oct  2 05:09:47 np0005465604 nova_compute[260603]: 2025-10-02 09:09:47.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:48 np0005465604 podman[418005]: 2025-10-02 09:09:48.008112911 +0000 UTC m=+0.067423916 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 05:09:48 np0005465604 podman[418004]: 2025-10-02 09:09:48.040696114 +0000 UTC m=+0.106178011 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:09:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:09:48 np0005465604 nova_compute[260603]: 2025-10-02 09:09:48.079 2 DEBUG nova.compute.manager [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:09:48 np0005465604 nova_compute[260603]: 2025-10-02 09:09:48.080 2 DEBUG nova.compute.manager [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing instance network info cache due to event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:09:48 np0005465604 nova_compute[260603]: 2025-10-02 09:09:48.080 2 DEBUG oslo_concurrency.lockutils [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:09:48 np0005465604 nova_compute[260603]: 2025-10-02 09:09:48.081 2 DEBUG oslo_concurrency.lockutils [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:09:48 np0005465604 nova_compute[260603]: 2025-10-02 09:09:48.081 2 DEBUG nova.network.neutron [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing network info cache for port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:09:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:49 np0005465604 nova_compute[260603]: 2025-10-02 09:09:49.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:49 np0005465604 nova_compute[260603]: 2025-10-02 09:09:49.631 2 DEBUG nova.network.neutron [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updated VIF entry in instance network info cache for port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:09:49 np0005465604 nova_compute[260603]: 2025-10-02 09:09:49.631 2 DEBUG nova.network.neutron [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:09:49 np0005465604 nova_compute[260603]: 2025-10-02 09:09:49.692 2 DEBUG oslo_concurrency.lockutils [req-abd5dab8-855a-40fe-95df-bff5eb1bb47a req-ee21dd1d-39c9-4683-9886-811c280f4d03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:09:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:09:50 np0005465604 nova_compute[260603]: 2025-10-02 09:09:50.972 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:51 np0005465604 nova_compute[260603]: 2025-10-02 09:09:51.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:52 np0005465604 podman[418047]: 2025-10-02 09:09:52.035276718 +0000 UTC m=+0.086094767 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct  2 05:09:52 np0005465604 podman[418048]: 2025-10-02 09:09:52.064898798 +0000 UTC m=+0.107371897 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
Oct  2 05:09:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:09:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:53 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct  2 05:09:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct  2 05:09:54 np0005465604 nova_compute[260603]: 2025-10-02 09:09:54.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:55 np0005465604 nova_compute[260603]: 2025-10-02 09:09:55.623 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:55 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:55Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:c0:38 10.100.0.11
Oct  2 05:09:55 np0005465604 ovn_controller[152344]: 2025-10-02T09:09:55Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:c0:38 10.100.0.11
Oct  2 05:09:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 88 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 51 op/s
Oct  2 05:09:56 np0005465604 nova_compute[260603]: 2025-10-02 09:09:56.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:09:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:09:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Oct  2 05:09:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:09:58 np0005465604 nova_compute[260603]: 2025-10-02 09:09:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:09:58 np0005465604 nova_compute[260603]: 2025-10-02 09:09:58.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:09:58 np0005465604 nova_compute[260603]: 2025-10-02 09:09:58.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:09:58 np0005465604 nova_compute[260603]: 2025-10-02 09:09:58.875 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:09:58 np0005465604 nova_compute[260603]: 2025-10-02 09:09:58.876 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:09:58 np0005465604 nova_compute[260603]: 2025-10-02 09:09:58.876 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 05:09:58 np0005465604 nova_compute[260603]: 2025-10-02 09:09:58.877 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:09:59 np0005465604 nova_compute[260603]: 2025-10-02 09:09:59.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 05:10:01 np0005465604 nova_compute[260603]: 2025-10-02 09:10:01.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 05:10:02 np0005465604 nova_compute[260603]: 2025-10-02 09:10:02.442 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:10:02 np0005465604 nova_compute[260603]: 2025-10-02 09:10:02.485 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:10:02 np0005465604 nova_compute[260603]: 2025-10-02 09:10:02.485 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 05:10:02 np0005465604 nova_compute[260603]: 2025-10-02 09:10:02.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:10:02 np0005465604 nova_compute[260603]: 2025-10-02 09:10:02.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:10:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:03 np0005465604 nova_compute[260603]: 2025-10-02 09:10:03.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:10:03 np0005465604 nova_compute[260603]: 2025-10-02 09:10:03.542 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:03 np0005465604 nova_compute[260603]: 2025-10-02 09:10:03.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:03 np0005465604 nova_compute[260603]: 2025-10-02 09:10:03.543 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:03 np0005465604 nova_compute[260603]: 2025-10-02 09:10:03.543 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:10:03 np0005465604 nova_compute[260603]: 2025-10-02 09:10:03.544 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:10:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:10:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57172279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:10:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.085 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.167 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.167 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.395 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.397 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3439MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.397 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.397 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.477 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.478 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.478 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.506 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.525 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.526 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.545 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.570 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 05:10:04 np0005465604 nova_compute[260603]: 2025-10-02 09:10:04.626 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:10:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:10:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3450206284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:10:05 np0005465604 nova_compute[260603]: 2025-10-02 09:10:05.138 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:10:05 np0005465604 nova_compute[260603]: 2025-10-02 09:10:05.143 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:10:05 np0005465604 nova_compute[260603]: 2025-10-02 09:10:05.170 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:10:05 np0005465604 nova_compute[260603]: 2025-10-02 09:10:05.190 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:10:05 np0005465604 nova_compute[260603]: 2025-10-02 09:10:05.191 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  2 05:10:06 np0005465604 nova_compute[260603]: 2025-10-02 09:10:06.193 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:10:06 np0005465604 nova_compute[260603]: 2025-10-02 09:10:06.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:06 np0005465604 nova_compute[260603]: 2025-10-02 09:10:06.875 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:06 np0005465604 nova_compute[260603]: 2025-10-02 09:10:06.875 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:06 np0005465604 nova_compute[260603]: 2025-10-02 09:10:06.898 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:10:06 np0005465604 nova_compute[260603]: 2025-10-02 09:10:06.978 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:06 np0005465604 nova_compute[260603]: 2025-10-02 09:10:06.979 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:06 np0005465604 nova_compute[260603]: 2025-10-02 09:10:06.987 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:10:06 np0005465604 nova_compute[260603]: 2025-10-02 09:10:06.988 2 INFO nova.compute.claims [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.105 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:10:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:10:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/418743664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.536 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.543 2 DEBUG nova.compute.provider_tree [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.566 2 DEBUG nova.scheduler.client.report [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.596 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.597 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.672 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.673 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.692 2 INFO nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.712 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.825 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.827 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.827 2 INFO nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Creating image(s)#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.851 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.881 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.906 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.910 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:10:07 np0005465604 nova_compute[260603]: 2025-10-02 09:10:07.953 2 DEBUG nova.policy [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.000 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.001 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.002 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.002 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.033 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.037 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 770238ca-0d80-443f-943e-236e0cfb3606_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:10:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  2 05:10:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.594 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 770238ca-0d80-443f-943e-236e0cfb3606_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.685 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.888 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Successfully created port: 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.903 2 DEBUG nova.objects.instance [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 770238ca-0d80-443f-943e-236e0cfb3606 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.926 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.926 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Ensure instance console log exists: /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.927 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.928 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:08 np0005465604 nova_compute[260603]: 2025-10-02 09:10:08.929 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:09 np0005465604 nova_compute[260603]: 2025-10-02 09:10:09.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:09 np0005465604 nova_compute[260603]: 2025-10-02 09:10:09.555 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Successfully created port: 9e457fab-8f77-47eb-a2dc-fa212b72ab38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:10:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct  2 05:10:10 np0005465604 nova_compute[260603]: 2025-10-02 09:10:10.445 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Successfully updated port: 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:10:10 np0005465604 nova_compute[260603]: 2025-10-02 09:10:10.566 2 DEBUG nova.compute.manager [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:10 np0005465604 nova_compute[260603]: 2025-10-02 09:10:10.566 2 DEBUG nova.compute.manager [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing instance network info cache due to event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:10:10 np0005465604 nova_compute[260603]: 2025-10-02 09:10:10.567 2 DEBUG oslo_concurrency.lockutils [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:10:10 np0005465604 nova_compute[260603]: 2025-10-02 09:10:10.567 2 DEBUG oslo_concurrency.lockutils [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:10:10 np0005465604 nova_compute[260603]: 2025-10-02 09:10:10.567 2 DEBUG nova.network.neutron [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing network info cache for port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:10:10 np0005465604 nova_compute[260603]: 2025-10-02 09:10:10.913 2 DEBUG nova.network.neutron [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:10:11 np0005465604 nova_compute[260603]: 2025-10-02 09:10:11.302 2 DEBUG nova.network.neutron [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:10:11 np0005465604 nova_compute[260603]: 2025-10-02 09:10:11.337 2 DEBUG oslo_concurrency.lockutils [req-5e300124-a50a-466c-a3db-28cbc543259f req-454a8034-f813-468f-947f-981c72551577 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:10:11 np0005465604 nova_compute[260603]: 2025-10-02 09:10:11.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:11 np0005465604 nova_compute[260603]: 2025-10-02 09:10:11.929 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Successfully updated port: 9e457fab-8f77-47eb-a2dc-fa212b72ab38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:10:11 np0005465604 nova_compute[260603]: 2025-10-02 09:10:11.952 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:10:11 np0005465604 nova_compute[260603]: 2025-10-02 09:10:11.952 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:10:11 np0005465604 nova_compute[260603]: 2025-10-02 09:10:11.952 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:10:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct  2 05:10:12 np0005465604 nova_compute[260603]: 2025-10-02 09:10:12.137 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:10:12 np0005465604 nova_compute[260603]: 2025-10-02 09:10:12.675 2 DEBUG nova.compute.manager [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-changed-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:12 np0005465604 nova_compute[260603]: 2025-10-02 09:10:12.676 2 DEBUG nova.compute.manager [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing instance network info cache due to event network-changed-9e457fab-8f77-47eb-a2dc-fa212b72ab38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:10:12 np0005465604 nova_compute[260603]: 2025-10-02 09:10:12.676 2 DEBUG oslo_concurrency.lockutils [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:10:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.229 2 DEBUG nova.network.neutron [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.274 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.275 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance network_info: |[{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 
6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.275 2 DEBUG oslo_concurrency.lockutils [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.275 2 DEBUG nova.network.neutron [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing network info cache for port 9e457fab-8f77-47eb-a2dc-fa212b72ab38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.278 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Start _get_guest_xml network_info=[{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.283 2 WARNING nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.289 2 DEBUG nova.virt.libvirt.host [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.290 2 DEBUG nova.virt.libvirt.host [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.295 2 DEBUG nova.virt.libvirt.host [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.295 2 DEBUG nova.virt.libvirt.host [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.295 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.296 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.296 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.296 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.296 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.297 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.297 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.297 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.297 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.297 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.298 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.298 2 DEBUG nova.virt.hardware [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.300 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:10:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:10:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326863042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.747 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.774 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:10:14 np0005465604 nova_compute[260603]: 2025-10-02 09:10:14.777 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:10:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:10:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1236462984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.385 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.388 2 DEBUG nova.virt.libvirt.vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:10:07Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.389 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.391 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.393 2 DEBUG nova.virt.libvirt.vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:10:07Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.393 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.394 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.396 2 DEBUG nova.objects.instance [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 770238ca-0d80-443f-943e-236e0cfb3606 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.482 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  <uuid>770238ca-0d80-443f-943e-236e0cfb3606</uuid>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  <name>instance-0000008e</name>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-242796341</nova:name>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:10:14</nova:creationTime>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <nova:port uuid="451aa4f7-0b3e-4a00-b063-7584a6bbc7cc">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <nova:port uuid="9e457fab-8f77-47eb-a2dc-fa212b72ab38">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe3d:e64" ipVersion="6"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe3d:e64" ipVersion="6"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <entry name="serial">770238ca-0d80-443f-943e-236e0cfb3606</entry>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <entry name="uuid">770238ca-0d80-443f-943e-236e0cfb3606</entry>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/770238ca-0d80-443f-943e-236e0cfb3606_disk">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/770238ca-0d80-443f-943e-236e0cfb3606_disk.config">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:a9:4e:93"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <target dev="tap451aa4f7-0b"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:3d:0e:64"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <target dev="tap9e457fab-8f"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/console.log" append="off"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:10:15 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:10:15 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:10:15 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:10:15 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.484 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Preparing to wait for external event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.485 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.486 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.486 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.487 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Preparing to wait for external event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.487 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.487 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.487 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.488 2 DEBUG nova.virt.libvirt.vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:10:07Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.488 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.489 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.489 2 DEBUG os_vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap451aa4f7-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap451aa4f7-0b, col_values=(('external_ids', {'iface-id': '451aa4f7-0b3e-4a00-b063-7584a6bbc7cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:4e:93', 'vm-uuid': '770238ca-0d80-443f-943e-236e0cfb3606'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:15 np0005465604 NetworkManager[45129]: <info>  [1759396215.4986] manager: (tap451aa4f7-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.508 2 INFO os_vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b')
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.509 2 DEBUG nova.virt.libvirt.vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:10:07Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.509 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.510 2 DEBUG nova.network.os_vif_util [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.510 2 DEBUG os_vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e457fab-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e457fab-8f, col_values=(('external_ids', {'iface-id': '9e457fab-8f77-47eb-a2dc-fa212b72ab38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:0e:64', 'vm-uuid': '770238ca-0d80-443f-943e-236e0cfb3606'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:15 np0005465604 NetworkManager[45129]: <info>  [1759396215.5162] manager: (tap9e457fab-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/630)
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.524 2 INFO os_vif [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f')
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.601 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.602 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.602 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:a9:4e:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.602 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:3d:0e:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.603 2 INFO nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Using config drive
Oct  2 05:10:15 np0005465604 nova_compute[260603]: 2025-10-02 09:10:15.639 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:10:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:10:16 np0005465604 nova_compute[260603]: 2025-10-02 09:10:16.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:16 np0005465604 nova_compute[260603]: 2025-10-02 09:10:16.956 2 INFO nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Creating config drive at /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config
Oct  2 05:10:16 np0005465604 nova_compute[260603]: 2025-10-02 09:10:16.966 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp412hh4ka execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:10:17 np0005465604 nova_compute[260603]: 2025-10-02 09:10:17.118 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp412hh4ka" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:10:17 np0005465604 nova_compute[260603]: 2025-10-02 09:10:17.141 2 DEBUG nova.storage.rbd_utils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 770238ca-0d80-443f-943e-236e0cfb3606_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:10:17 np0005465604 nova_compute[260603]: 2025-10-02 09:10:17.145 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config 770238ca-0d80-443f-943e-236e0cfb3606_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:10:17 np0005465604 nova_compute[260603]: 2025-10-02 09:10:17.286 2 DEBUG oslo_concurrency.processutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config 770238ca-0d80-443f-943e-236e0cfb3606_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:10:17 np0005465604 nova_compute[260603]: 2025-10-02 09:10:17.287 2 INFO nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Deleting local config drive /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606/disk.config because it was imported into RBD.
Oct  2 05:10:17 np0005465604 NetworkManager[45129]: <info>  [1759396217.3626] manager: (tap451aa4f7-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/631)
Oct  2 05:10:17 np0005465604 kernel: tap451aa4f7-0b: entered promiscuous mode
Oct  2 05:10:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:17Z|01550|binding|INFO|Claiming lport 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc for this chassis.
Oct  2 05:10:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:17Z|01551|binding|INFO|451aa4f7-0b3e-4a00-b063-7584a6bbc7cc: Claiming fa:16:3e:a9:4e:93 10.100.0.3
Oct  2 05:10:17 np0005465604 nova_compute[260603]: 2025-10-02 09:10:17.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.381 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:4e:93 10.100.0.3'], port_security=['fa:16:3e:a9:4e:93 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '770238ca-0d80-443f-943e-236e0cfb3606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68558f02-4047-4331-a47e-cbbee9580ea4, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.382 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc in datapath d7dd05b8-70c0-4ef8-a410-57d83c307eaa bound to our chassis
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.384 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7dd05b8-70c0-4ef8-a410-57d83c307eaa
Oct  2 05:10:17 np0005465604 NetworkManager[45129]: <info>  [1759396217.3904] manager: (tap9e457fab-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/632)
Oct  2 05:10:17 np0005465604 kernel: tap9e457fab-8f: entered promiscuous mode
Oct  2 05:10:17 np0005465604 systemd-udevd[418457]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:10:17 np0005465604 systemd-udevd[418460]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:10:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:17Z|01552|binding|INFO|Setting lport 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc ovn-installed in OVS
Oct  2 05:10:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:17Z|01553|binding|INFO|Setting lport 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc up in Southbound
Oct  2 05:10:17 np0005465604 nova_compute[260603]: 2025-10-02 09:10:17.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:17Z|01554|if_status|INFO|Dropped 3 log messages in last 126 seconds (most recently, 126 seconds ago) due to excessive rate
Oct  2 05:10:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:17Z|01555|if_status|INFO|Not updating pb chassis for 9e457fab-8f77-47eb-a2dc-fa212b72ab38 now as sb is readonly
Oct  2 05:10:17 np0005465604 NetworkManager[45129]: <info>  [1759396217.4038] device (tap451aa4f7-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:10:17 np0005465604 NetworkManager[45129]: <info>  [1759396217.4050] device (tap451aa4f7-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:10:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:17Z|01556|binding|INFO|Claiming lport 9e457fab-8f77-47eb-a2dc-fa212b72ab38 for this chassis.
Oct  2 05:10:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:17Z|01557|binding|INFO|9e457fab-8f77-47eb-a2dc-fa212b72ab38: Claiming fa:16:3e:3d:0e:64 2001:db8:0:1:f816:3eff:fe3d:e64 2001:db8::f816:3eff:fe3d:e64
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.408 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8c6ac607-4870-462a-80ba-311207e6dc85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 05:10:17 np0005465604 NetworkManager[45129]: <info>  [1759396217.4168] device (tap9e457fab-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:10:17 np0005465604 NetworkManager[45129]: <info>  [1759396217.4186] device (tap9e457fab-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:10:17 np0005465604 nova_compute[260603]: 2025-10-02 09:10:17.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:17Z|01558|binding|INFO|Setting lport 9e457fab-8f77-47eb-a2dc-fa212b72ab38 ovn-installed in OVS
Oct  2 05:10:17 np0005465604 nova_compute[260603]: 2025-10-02 09:10:17.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:17Z|01559|binding|INFO|Setting lport 9e457fab-8f77-47eb-a2dc-fa212b72ab38 up in Southbound
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.431 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:0e:64 2001:db8:0:1:f816:3eff:fe3d:e64 2001:db8::f816:3eff:fe3d:e64'], port_security=['fa:16:3e:3d:0e:64 2001:db8:0:1:f816:3eff:fe3d:e64 2001:db8::f816:3eff:fe3d:e64'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3d:e64/64 2001:db8::f816:3eff:fe3d:e64/64', 'neutron:device_id': '770238ca-0d80-443f-943e-236e0cfb3606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a251a259-65e8-4a45-82af-f69bd5f24a08, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9e457fab-8f77-47eb-a2dc-fa212b72ab38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:10:17 np0005465604 systemd-machined[214636]: New machine qemu-176-instance-0000008e.
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.440 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f10a5f-61be-4e29-84cc-baf49116a044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:17 np0005465604 systemd[1]: Started Virtual Machine qemu-176-instance-0000008e.
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.446 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[33ab7688-24f2-4b42-b551-4e588c4265f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.474 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[241a33a8-9754-422a-94aa-a40b8b59d97f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.492 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[75a1b15e-d2a9-40f6-8855-8c1cb16ad3ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7dd05b8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:04:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705935, 'reachable_time': 20541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418472, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.507 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[be627377-7775-4cf0-8133-e130f6ed8fb3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd7dd05b8-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705950, 'tstamp': 705950}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418477, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7dd05b8-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705955, 'tstamp': 705955}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418477, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.510 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7dd05b8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:17 np0005465604 nova_compute[260603]: 2025-10-02 09:10:17.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.512 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7dd05b8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.513 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.513 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7dd05b8-70, col_values=(('external_ids', {'iface-id': '93ded116-ee2f-4f81-a2c6-257136c86ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.514 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.515 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9e457fab-8f77-47eb-a2dc-fa212b72ab38 in datapath 6d9c157f-cf57-4b44-8fba-d16631e22418 unbound from our chassis#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.517 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d9c157f-cf57-4b44-8fba-d16631e22418#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.534 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4ea472-c419-4a76-a8e8-4a102ba13800]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.568 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[b6368c48-2548-4351-acef-78e845f3139a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.572 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[852c669e-155f-4c1d-a6ed-d98905c438e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.599 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4597e8-5d52-40f5-97b2-9a506fc06291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.615 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d578f17e-fe76-4c18-abf9-0876d88188c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d9c157f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:38:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706051, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418484, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.628 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[963a6034-e5ba-4319-b6e7-a440fdec1eac]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d9c157f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706068, 'tstamp': 706068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418485, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.630 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d9c157f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:17 np0005465604 nova_compute[260603]: 2025-10-02 09:10:17.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.633 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d9c157f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.634 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.634 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d9c157f-c0, col_values=(('external_ids', {'iface-id': '136a7ea2-2365-4779-b31d-41cbfc52a20f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:17 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:17.634 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:10:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.161 2 DEBUG nova.compute.manager [req-630d3f97-f1a3-4230-b1bf-87e2378b35c7 req-3e03cd7a-1013-4265-a916-0a89c19e4ec1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.162 2 DEBUG oslo_concurrency.lockutils [req-630d3f97-f1a3-4230-b1bf-87e2378b35c7 req-3e03cd7a-1013-4265-a916-0a89c19e4ec1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.162 2 DEBUG oslo_concurrency.lockutils [req-630d3f97-f1a3-4230-b1bf-87e2378b35c7 req-3e03cd7a-1013-4265-a916-0a89c19e4ec1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.162 2 DEBUG oslo_concurrency.lockutils [req-630d3f97-f1a3-4230-b1bf-87e2378b35c7 req-3e03cd7a-1013-4265-a916-0a89c19e4ec1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.163 2 DEBUG nova.compute.manager [req-630d3f97-f1a3-4230-b1bf-87e2378b35c7 req-3e03cd7a-1013-4265-a916-0a89c19e4ec1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Processing event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.173 2 DEBUG nova.compute.manager [req-d2ff4de3-c50a-41dd-aca7-759ab2e81d4f req-d6efba99-6fbf-43d8-9ba5-4bb1ae1f3177 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.174 2 DEBUG oslo_concurrency.lockutils [req-d2ff4de3-c50a-41dd-aca7-759ab2e81d4f req-d6efba99-6fbf-43d8-9ba5-4bb1ae1f3177 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.174 2 DEBUG oslo_concurrency.lockutils [req-d2ff4de3-c50a-41dd-aca7-759ab2e81d4f req-d6efba99-6fbf-43d8-9ba5-4bb1ae1f3177 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.174 2 DEBUG oslo_concurrency.lockutils [req-d2ff4de3-c50a-41dd-aca7-759ab2e81d4f req-d6efba99-6fbf-43d8-9ba5-4bb1ae1f3177 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.175 2 DEBUG nova.compute.manager [req-d2ff4de3-c50a-41dd-aca7-759ab2e81d4f req-d6efba99-6fbf-43d8-9ba5-4bb1ae1f3177 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Processing event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:10:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.384 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.385 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396218.3833501, 770238ca-0d80-443f-943e-236e0cfb3606 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.385 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] VM Started (Lifecycle Event)#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.391 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.395 2 INFO nova.virt.libvirt.driver [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance spawned successfully.#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.395 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.412 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.417 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.420 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.421 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.421 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.421 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.422 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.422 2 DEBUG nova.virt.libvirt.driver [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.450 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.450 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396218.38367, 770238ca-0d80-443f-943e-236e0cfb3606 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.451 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.485 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.488 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396218.390792, 770238ca-0d80-443f-943e-236e0cfb3606 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.488 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.496 2 INFO nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Took 10.67 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.497 2 DEBUG nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.509 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.512 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.540 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.561 2 INFO nova.compute.manager [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Took 11.61 seconds to build instance.#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.577 2 DEBUG oslo_concurrency.lockutils [None req-13e7c408-6e7b-4f79-a9c2-43f04e874336 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.940 2 DEBUG nova.network.neutron [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updated VIF entry in instance network info cache for port 9e457fab-8f77-47eb-a2dc-fa212b72ab38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.941 2 DEBUG nova.network.neutron [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:10:18 np0005465604 nova_compute[260603]: 2025-10-02 09:10:18.956 2 DEBUG oslo_concurrency.lockutils [req-fa530f52-64d7-45d4-9fc5-6cdb693a601f req-55d99d38-6145-4090-b1f0-e17c737ac338 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:10:18 np0005465604 podman[418530]: 2025-10-02 09:10:18.994921664 +0000 UTC m=+0.057792908 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:10:19 np0005465604 podman[418529]: 2025-10-02 09:10:19.027772244 +0000 UTC m=+0.090736381 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 05:10:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.254 2 DEBUG nova.compute.manager [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.255 2 DEBUG oslo_concurrency.lockutils [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.255 2 DEBUG oslo_concurrency.lockutils [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.255 2 DEBUG oslo_concurrency.lockutils [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.256 2 DEBUG nova.compute.manager [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.256 2 WARNING nova.compute.manager [req-9bd1593b-d0f6-405c-83a2-4af36e5a378a req-64c7a721-9994-48ea-8f5f-d5f88144b873 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received unexpected event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc for instance with vm_state active and task_state None.#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.326 2 DEBUG nova.compute.manager [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.326 2 DEBUG oslo_concurrency.lockutils [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.327 2 DEBUG oslo_concurrency.lockutils [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.327 2 DEBUG oslo_concurrency.lockutils [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.327 2 DEBUG nova.compute.manager [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.327 2 WARNING nova.compute.manager [req-475d828d-0207-4b66-b589-ebdcc69a1bfb req-f503bdf0-0c42-43f3-8419-037d1865767e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received unexpected event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:10:20 np0005465604 nova_compute[260603]: 2025-10-02 09:10:20.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:21 np0005465604 nova_compute[260603]: 2025-10-02 09:10:21.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct  2 05:10:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:10:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2436508400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:10:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:10:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2436508400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:10:22 np0005465604 podman[418577]: 2025-10-02 09:10:22.990996194 +0000 UTC m=+0.053606256 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:10:22 np0005465604 podman[418576]: 2025-10-02 09:10:22.997707653 +0000 UTC m=+0.061814462 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:10:23 np0005465604 nova_compute[260603]: 2025-10-02 09:10:23.062 2 DEBUG nova.compute.manager [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:23 np0005465604 nova_compute[260603]: 2025-10-02 09:10:23.063 2 DEBUG nova.compute.manager [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing instance network info cache due to event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:10:23 np0005465604 nova_compute[260603]: 2025-10-02 09:10:23.063 2 DEBUG oslo_concurrency.lockutils [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:10:23 np0005465604 nova_compute[260603]: 2025-10-02 09:10:23.063 2 DEBUG oslo_concurrency.lockutils [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:10:23 np0005465604 nova_compute[260603]: 2025-10-02 09:10:23.063 2 DEBUG nova.network.neutron [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing network info cache for port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:10:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 05:10:25 np0005465604 nova_compute[260603]: 2025-10-02 09:10:25.132 2 DEBUG nova.network.neutron [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updated VIF entry in instance network info cache for port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:10:25 np0005465604 nova_compute[260603]: 2025-10-02 09:10:25.134 2 DEBUG nova.network.neutron [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:10:25 np0005465604 nova_compute[260603]: 2025-10-02 09:10:25.174 2 DEBUG oslo_concurrency.lockutils [req-a5d61ed5-b936-4012-bf77-feaa1a453af4 req-821d70aa-6c7b-4f3a-a21d-c1442afdf67d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:10:25 np0005465604 nova_compute[260603]: 2025-10-02 09:10:25.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:10:26 np0005465604 nova_compute[260603]: 2025-10-02 09:10:26.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:10:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:10:28
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', '.rgw.root', 'images', 'default.rgw.log', '.mgr']
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct  2 05:10:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:10:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:10:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct  2 05:10:30 np0005465604 nova_compute[260603]: 2025-10-02 09:10:30.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:31 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:31Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:4e:93 10.100.0.3
Oct  2 05:10:31 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:31Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:4e:93 10.100.0.3
Oct  2 05:10:31 np0005465604 nova_compute[260603]: 2025-10-02 09:10:31.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct  2 05:10:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct  2 05:10:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:34.848 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:35 np0005465604 nova_compute[260603]: 2025-10-02 09:10:35.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 05:10:36 np0005465604 nova_compute[260603]: 2025-10-02 09:10:36.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:36 np0005465604 nova_compute[260603]: 2025-10-02 09:10:36.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:10:36 np0005465604 nova_compute[260603]: 2025-10-02 09:10:36.992 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 05:10:36 np0005465604 nova_compute[260603]: 2025-10-02 09:10:36.992 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid 770238ca-0d80-443f-943e-236e0cfb3606 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 05:10:36 np0005465604 nova_compute[260603]: 2025-10-02 09:10:36.993 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:36 np0005465604 nova_compute[260603]: 2025-10-02 09:10:36.993 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:36 np0005465604 nova_compute[260603]: 2025-10-02 09:10:36.994 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:36 np0005465604 nova_compute[260603]: 2025-10-02 09:10:36.994 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "770238ca-0d80-443f-943e-236e0cfb3606" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:37 np0005465604 nova_compute[260603]: 2025-10-02 09:10:37.046 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:37 np0005465604 nova_compute[260603]: 2025-10-02 09:10:37.047 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "770238ca-0d80-443f-943e-236e0cfb3606" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:37 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 05:10:37 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 05:10:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:10:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:10:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:10:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:10:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:10:38 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2b80ba5e-b043-412a-be65-a6d482c2cfe9 does not exist
Oct  2 05:10:38 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b8ac9fa5-69f3-42c0-b1bb-7565312e3f81 does not exist
Oct  2 05:10:38 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5a1f7398-e1da-4548-8e96-05e109d6d0e7 does not exist
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:10:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015181009677997005 of space, bias 1.0, pg target 0.45543029033991017 quantized to 32 (current 32)
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:10:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:10:39 np0005465604 podman[419013]: 2025-10-02 09:10:39.624888587 +0000 UTC m=+0.068296983 container create 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:10:39 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:10:39 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:10:39 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:10:39 np0005465604 systemd[1]: Started libpod-conmon-2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815.scope.
Oct  2 05:10:39 np0005465604 podman[419013]: 2025-10-02 09:10:39.601573262 +0000 UTC m=+0.044981638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:10:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:10:39 np0005465604 podman[419013]: 2025-10-02 09:10:39.736416573 +0000 UTC m=+0.179824949 container init 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:10:39 np0005465604 podman[419013]: 2025-10-02 09:10:39.745153235 +0000 UTC m=+0.188561611 container start 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:10:39 np0005465604 podman[419013]: 2025-10-02 09:10:39.748693704 +0000 UTC m=+0.192102100 container attach 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:10:39 np0005465604 great_mclaren[419029]: 167 167
Oct  2 05:10:39 np0005465604 systemd[1]: libpod-2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815.scope: Deactivated successfully.
Oct  2 05:10:39 np0005465604 podman[419013]: 2025-10-02 09:10:39.75210378 +0000 UTC m=+0.195512156 container died 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 05:10:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6cf5637a8d00e7882c29ecca008609a0675692d13eca5def109f43db085f1cdb-merged.mount: Deactivated successfully.
Oct  2 05:10:39 np0005465604 podman[419013]: 2025-10-02 09:10:39.800540625 +0000 UTC m=+0.243948991 container remove 2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:10:39 np0005465604 systemd[1]: libpod-conmon-2240cbeb381d6ca006b5b219c3fc43d1dc3bda4e12fc541c981c40b2428a6815.scope: Deactivated successfully.
Oct  2 05:10:40 np0005465604 podman[419052]: 2025-10-02 09:10:40.046931173 +0000 UTC m=+0.061887015 container create c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 05:10:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 05:10:40 np0005465604 systemd[1]: Started libpod-conmon-c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09.scope.
Oct  2 05:10:40 np0005465604 podman[419052]: 2025-10-02 09:10:40.023445143 +0000 UTC m=+0.038400975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:10:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:10:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:40 np0005465604 podman[419052]: 2025-10-02 09:10:40.143376989 +0000 UTC m=+0.158332791 container init c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 05:10:40 np0005465604 podman[419052]: 2025-10-02 09:10:40.160587774 +0000 UTC m=+0.175543576 container start c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 05:10:40 np0005465604 podman[419052]: 2025-10-02 09:10:40.165148235 +0000 UTC m=+0.180104087 container attach c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 05:10:40 np0005465604 nova_compute[260603]: 2025-10-02 09:10:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:10:40 np0005465604 nova_compute[260603]: 2025-10-02 09:10:40.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:10:40 np0005465604 nova_compute[260603]: 2025-10-02 09:10:40.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:41 np0005465604 nice_kalam[419069]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:10:41 np0005465604 nice_kalam[419069]: --> relative data size: 1.0
Oct  2 05:10:41 np0005465604 nice_kalam[419069]: --> All data devices are unavailable
Oct  2 05:10:41 np0005465604 systemd[1]: libpod-c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09.scope: Deactivated successfully.
Oct  2 05:10:41 np0005465604 podman[419052]: 2025-10-02 09:10:41.384947572 +0000 UTC m=+1.399903384 container died c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 05:10:41 np0005465604 systemd[1]: libpod-c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09.scope: Consumed 1.159s CPU time.
Oct  2 05:10:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-96e39ff5aba036e34c6ad6a761f94b9f16d3b89a8a1f4ba6b6b1acee0b085076-merged.mount: Deactivated successfully.
Oct  2 05:10:41 np0005465604 podman[419052]: 2025-10-02 09:10:41.446200495 +0000 UTC m=+1.461156297 container remove c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:10:41 np0005465604 systemd[1]: libpod-conmon-c84d12cb4ee366761e39dd681963a70c34624bba72e5ed326e74fd9a9cc5fc09.scope: Deactivated successfully.
Oct  2 05:10:41 np0005465604 nova_compute[260603]: 2025-10-02 09:10:41.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:41 np0005465604 nova_compute[260603]: 2025-10-02 09:10:41.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:41.903 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:10:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:41.905 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:10:41 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:41.905 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.251 2 DEBUG nova.compute.manager [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.251 2 DEBUG nova.compute.manager [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing instance network info cache due to event network-changed-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.252 2 DEBUG oslo_concurrency.lockutils [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.252 2 DEBUG oslo_concurrency.lockutils [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.252 2 DEBUG nova.network.neutron [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Refreshing network info cache for port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:10:42 np0005465604 podman[419252]: 2025-10-02 09:10:42.267009473 +0000 UTC m=+0.045769743 container create 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 05:10:42 np0005465604 systemd[1]: Started libpod-conmon-33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96.scope.
Oct  2 05:10:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:10:42 np0005465604 podman[419252]: 2025-10-02 09:10:42.24472021 +0000 UTC m=+0.023480530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:10:42 np0005465604 podman[419252]: 2025-10-02 09:10:42.340183927 +0000 UTC m=+0.118944257 container init 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 05:10:42 np0005465604 podman[419252]: 2025-10-02 09:10:42.348355051 +0000 UTC m=+0.127115321 container start 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:10:42 np0005465604 podman[419252]: 2025-10-02 09:10:42.352226501 +0000 UTC m=+0.130986811 container attach 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:10:42 np0005465604 inspiring_boyd[419268]: 167 167
Oct  2 05:10:42 np0005465604 systemd[1]: libpod-33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96.scope: Deactivated successfully.
Oct  2 05:10:42 np0005465604 podman[419252]: 2025-10-02 09:10:42.356509664 +0000 UTC m=+0.135269964 container died 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.361 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.362 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.362 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.362 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.362 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.363 2 INFO nova.compute.manager [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Terminating instance#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.364 2 DEBUG nova.compute.manager [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:10:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-41df3c6436e63e14b3f6f1bc5fb0222c0b9cfd8076995751a3a500c1d19148fc-merged.mount: Deactivated successfully.
Oct  2 05:10:42 np0005465604 podman[419252]: 2025-10-02 09:10:42.412572206 +0000 UTC m=+0.191332476 container remove 33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_boyd, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:10:42 np0005465604 systemd[1]: libpod-conmon-33cabf280a93a289c20f28300be63bafacf32b60fdeb2ffb9319c0a8a2e05b96.scope: Deactivated successfully.
Oct  2 05:10:42 np0005465604 kernel: tap451aa4f7-0b (unregistering): left promiscuous mode
Oct  2 05:10:42 np0005465604 NetworkManager[45129]: <info>  [1759396242.4347] device (tap451aa4f7-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:42Z|01560|binding|INFO|Releasing lport 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc from this chassis (sb_readonly=0)
Oct  2 05:10:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:42Z|01561|binding|INFO|Setting lport 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc down in Southbound
Oct  2 05:10:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:42Z|01562|binding|INFO|Removing iface tap451aa4f7-0b ovn-installed in OVS
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 kernel: tap9e457fab-8f (unregistering): left promiscuous mode
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.472 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:4e:93 10.100.0.3'], port_security=['fa:16:3e:a9:4e:93 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '770238ca-0d80-443f-943e-236e0cfb3606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68558f02-4047-4331-a47e-cbbee9580ea4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:10:42 np0005465604 NetworkManager[45129]: <info>  [1759396242.4740] device (tap9e457fab-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.474 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc in datapath d7dd05b8-70c0-4ef8-a410-57d83c307eaa unbound from our chassis#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.475 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7dd05b8-70c0-4ef8-a410-57d83c307eaa#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:42Z|01563|binding|INFO|Releasing lport 9e457fab-8f77-47eb-a2dc-fa212b72ab38 from this chassis (sb_readonly=0)
Oct  2 05:10:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:42Z|01564|binding|INFO|Setting lport 9e457fab-8f77-47eb-a2dc-fa212b72ab38 down in Southbound
Oct  2 05:10:42 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:42Z|01565|binding|INFO|Removing iface tap9e457fab-8f ovn-installed in OVS
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.495 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9d19acec-d5ec-492b-a8fa-8ebfa89a8737]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.508 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:0e:64 2001:db8:0:1:f816:3eff:fe3d:e64 2001:db8::f816:3eff:fe3d:e64'], port_security=['fa:16:3e:3d:0e:64 2001:db8:0:1:f816:3eff:fe3d:e64 2001:db8::f816:3eff:fe3d:e64'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3d:e64/64 2001:db8::f816:3eff:fe3d:e64/64', 'neutron:device_id': '770238ca-0d80-443f-943e-236e0cfb3606', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a251a259-65e8-4a45-82af-f69bd5f24a08, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9e457fab-8f77-47eb-a2dc-fa212b72ab38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.526 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c683b5dc-8f9c-4e7e-8dd2-7b88f49c76da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.529 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[90638446-a70d-44a3-a345-140e1100f162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Oct  2 05:10:42 np0005465604 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Consumed 13.335s CPU time.
Oct  2 05:10:42 np0005465604 systemd-machined[214636]: Machine qemu-176-instance-0000008e terminated.
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.557 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e55e3a2c-600e-4ce3-8d04-4b2c565d03c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.576 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0e884076-1e1f-4cdd-8369-e85549cd90a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7dd05b8-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:04:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 434], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705935, 'reachable_time': 20541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419310, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 NetworkManager[45129]: <info>  [1759396242.5988] manager: (tap9e457fab-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/633)
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.598 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d34453a2-3d67-43e8-ad79-5bbf7d09bc4f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd7dd05b8-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705950, 'tstamp': 705950}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419318, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7dd05b8-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705955, 'tstamp': 705955}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419318, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.602 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7dd05b8-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.617 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7dd05b8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.617 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.617 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7dd05b8-70, col_values=(('external_ids', {'iface-id': '93ded116-ee2f-4f81-a2c6-257136c86ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.618 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.619 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9e457fab-8f77-47eb-a2dc-fa212b72ab38 in datapath 6d9c157f-cf57-4b44-8fba-d16631e22418 unbound from our chassis#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.619 2 INFO nova.virt.libvirt.driver [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Instance destroyed successfully.#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.620 2 DEBUG nova.objects.instance [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 770238ca-0d80-443f-943e-236e0cfb3606 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.620 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d9c157f-cf57-4b44-8fba-d16631e22418#033[00m
Oct  2 05:10:42 np0005465604 podman[419308]: 2025-10-02 09:10:42.625790092 +0000 UTC m=+0.053004348 container create 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.639 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7499a1-a0f5-4c9f-945f-5763ae90212a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.652 2 DEBUG nova.virt.libvirt.vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:10:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:10:18Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.653 2 DEBUG nova.network.os_vif_util [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.653 2 DEBUG nova.network.os_vif_util [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.653 2 DEBUG os_vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap451aa4f7-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:42 np0005465604 systemd[1]: Started libpod-conmon-7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d.scope.
Oct  2 05:10:42 np0005465604 podman[419308]: 2025-10-02 09:10:42.602808728 +0000 UTC m=+0.030023004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.697 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cb06324c-f573-4242-aea0-df9e89f54bf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.700 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[88cac130-b9e7-4111-9f6e-f16b7ce7cc35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.704 2 INFO os_vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:4e:93,bridge_name='br-int',has_traffic_filtering=True,id=451aa4f7-0b3e-4a00-b063-7584a6bbc7cc,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap451aa4f7-0b')#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.704 2 DEBUG nova.virt.libvirt.vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-242796341',display_name='tempest-TestGettingAddress-server-242796341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-242796341',id=142,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:10:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-r8zzf1o0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:10:18Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=770238ca-0d80-443f-943e-236e0cfb3606,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.705 2 DEBUG nova.network.os_vif_util [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.705 2 DEBUG nova.network.os_vif_util [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.706 2 DEBUG os_vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e457fab-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.710 2 INFO os_vif [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:0e:64,bridge_name='br-int',has_traffic_filtering=True,id=9e457fab-8f77-47eb-a2dc-fa212b72ab38,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e457fab-8f')#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.727 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f004a05e-ef8c-4808-8f18-77f4f0e5ef1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:10:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583afa14d3d2032f73fb6806d0694e8d68450548d3b1b2d0ba9d11f8562586ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583afa14d3d2032f73fb6806d0694e8d68450548d3b1b2d0ba9d11f8562586ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583afa14d3d2032f73fb6806d0694e8d68450548d3b1b2d0ba9d11f8562586ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583afa14d3d2032f73fb6806d0694e8d68450548d3b1b2d0ba9d11f8562586ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:42 np0005465604 podman[419308]: 2025-10-02 09:10:42.74736202 +0000 UTC m=+0.174576306 container init 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ce8f1465-e950-4eba-9a46-57a09088dd9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d9c157f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:38:02'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706051, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419369, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 podman[419308]: 2025-10-02 09:10:42.756637218 +0000 UTC m=+0.183851474 container start 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:10:42 np0005465604 podman[419308]: 2025-10-02 09:10:42.759944561 +0000 UTC m=+0.187158817 container attach 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.759 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e5014af9-772c-4fa2-a7aa-208695d62d07]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d9c157f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706068, 'tstamp': 706068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419371, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.763 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d9c157f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.765 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d9c157f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.766 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.766 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d9c157f-c0, col_values=(('external_ids', {'iface-id': '136a7ea2-2365-4779-b31d-41cbfc52a20f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:42.766 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.795 2 DEBUG nova.compute.manager [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-unplugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.796 2 DEBUG oslo_concurrency.lockutils [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.796 2 DEBUG oslo_concurrency.lockutils [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.796 2 DEBUG oslo_concurrency.lockutils [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.796 2 DEBUG nova.compute.manager [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-unplugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:10:42 np0005465604 nova_compute[260603]: 2025-10-02 09:10:42.796 2 DEBUG nova.compute.manager [req-58630a29-1db2-4701-b4c7-26149d8acfe9 req-0166a2d7-aa25-4c7f-9289-e97b7b6f8afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-unplugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:10:43 np0005465604 nova_compute[260603]: 2025-10-02 09:10:43.020 2 INFO nova.virt.libvirt.driver [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Deleting instance files /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606_del#033[00m
Oct  2 05:10:43 np0005465604 nova_compute[260603]: 2025-10-02 09:10:43.021 2 INFO nova.virt.libvirt.driver [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Deletion of /var/lib/nova/instances/770238ca-0d80-443f-943e-236e0cfb3606_del complete#033[00m
Oct  2 05:10:43 np0005465604 nova_compute[260603]: 2025-10-02 09:10:43.088 2 INFO nova.compute.manager [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:10:43 np0005465604 nova_compute[260603]: 2025-10-02 09:10:43.089 2 DEBUG oslo.service.loopingcall [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:10:43 np0005465604 nova_compute[260603]: 2025-10-02 09:10:43.089 2 DEBUG nova.compute.manager [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:10:43 np0005465604 nova_compute[260603]: 2025-10-02 09:10:43.090 2 DEBUG nova.network.neutron [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:10:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]: {
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:    "0": [
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:        {
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "devices": [
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "/dev/loop3"
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            ],
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_name": "ceph_lv0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_size": "21470642176",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "name": "ceph_lv0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "tags": {
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.cluster_name": "ceph",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.crush_device_class": "",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.encrypted": "0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.osd_id": "0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.type": "block",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.vdo": "0"
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            },
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "type": "block",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "vg_name": "ceph_vg0"
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:        }
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:    ],
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:    "1": [
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:        {
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "devices": [
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "/dev/loop4"
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            ],
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_name": "ceph_lv1",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_size": "21470642176",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "name": "ceph_lv1",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "tags": {
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.cluster_name": "ceph",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.crush_device_class": "",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.encrypted": "0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.osd_id": "1",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.type": "block",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.vdo": "0"
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            },
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "type": "block",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "vg_name": "ceph_vg1"
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:        }
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:    ],
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:    "2": [
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:        {
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "devices": [
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "/dev/loop5"
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            ],
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_name": "ceph_lv2",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_size": "21470642176",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "name": "ceph_lv2",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "tags": {
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.cluster_name": "ceph",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.crush_device_class": "",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.encrypted": "0",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.osd_id": "2",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.type": "block",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:                "ceph.vdo": "0"
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            },
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "type": "block",
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:            "vg_name": "ceph_vg2"
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:        }
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]:    ]
Oct  2 05:10:43 np0005465604 exciting_tesla[419348]: }
Oct  2 05:10:43 np0005465604 systemd[1]: libpod-7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d.scope: Deactivated successfully.
Oct  2 05:10:43 np0005465604 podman[419308]: 2025-10-02 09:10:43.530573099 +0000 UTC m=+0.957787395 container died 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 05:10:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-583afa14d3d2032f73fb6806d0694e8d68450548d3b1b2d0ba9d11f8562586ea-merged.mount: Deactivated successfully.
Oct  2 05:10:43 np0005465604 podman[419308]: 2025-10-02 09:10:43.596716234 +0000 UTC m=+1.023930490 container remove 7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 05:10:43 np0005465604 systemd[1]: libpod-conmon-7b9d1cbd9dfa67194cda1363ac89c3dfb5a6960ee3895ee1969edf2d6d65161d.scope: Deactivated successfully.
Oct  2 05:10:43 np0005465604 nova_compute[260603]: 2025-10-02 09:10:43.650 2 DEBUG nova.network.neutron [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updated VIF entry in instance network info cache for port 451aa4f7-0b3e-4a00-b063-7584a6bbc7cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:10:43 np0005465604 nova_compute[260603]: 2025-10-02 09:10:43.651 2 DEBUG nova.network.neutron [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [{"id": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "address": "fa:16:3e:a9:4e:93", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap451aa4f7-0b", "ovs_interfaceid": "451aa4f7-0b3e-4a00-b063-7584a6bbc7cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "address": "fa:16:3e:3d:0e:64", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3d:e64", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e457fab-8f", "ovs_interfaceid": "9e457fab-8f77-47eb-a2dc-fa212b72ab38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:10:43 np0005465604 nova_compute[260603]: 2025-10-02 09:10:43.696 2 DEBUG oslo_concurrency.lockutils [req-75fcb0b1-ffd9-42f3-92e6-ffe7baf71ba2 req-51beafa5-46af-4205-b0a7-631a1723ef19 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-770238ca-0d80-443f-943e-236e0cfb3606" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:10:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 178 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct  2 05:10:44 np0005465604 podman[419532]: 2025-10-02 09:10:44.333250863 +0000 UTC m=+0.069737188 container create f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 05:10:44 np0005465604 podman[419532]: 2025-10-02 09:10:44.287626575 +0000 UTC m=+0.024112930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:10:44 np0005465604 systemd[1]: Started libpod-conmon-f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697.scope.
Oct  2 05:10:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:10:44 np0005465604 podman[419532]: 2025-10-02 09:10:44.463619864 +0000 UTC m=+0.200106189 container init f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 05:10:44 np0005465604 podman[419532]: 2025-10-02 09:10:44.473033857 +0000 UTC m=+0.209520182 container start f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:10:44 np0005465604 blissful_tesla[419548]: 167 167
Oct  2 05:10:44 np0005465604 podman[419532]: 2025-10-02 09:10:44.480790218 +0000 UTC m=+0.217276573 container attach f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:10:44 np0005465604 systemd[1]: libpod-f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697.scope: Deactivated successfully.
Oct  2 05:10:44 np0005465604 podman[419532]: 2025-10-02 09:10:44.482856022 +0000 UTC m=+0.219342337 container died f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:10:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-120eed7c678a060ac9f6bbec7431a29e50552716ea8f82bc65f97a8a9a5335bb-merged.mount: Deactivated successfully.
Oct  2 05:10:44 np0005465604 podman[419532]: 2025-10-02 09:10:44.554583511 +0000 UTC m=+0.291069816 container remove f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_tesla, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:10:44 np0005465604 systemd[1]: libpod-conmon-f22019bf449732b5c55dc808fb9d8af4eac34e4327462aa8ac5895efac6de697.scope: Deactivated successfully.
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.712 2 DEBUG nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-unplugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.713 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.714 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.714 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.714 2 DEBUG nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-unplugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.714 2 DEBUG nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-unplugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.715 2 DEBUG nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.715 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.715 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.715 2 DEBUG oslo_concurrency.lockutils [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.716 2 DEBUG nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.716 2 WARNING nova.compute.manager [req-e2843d24-5627-40d2-95e9-07ba140685b3 req-f1096e43-3a6b-4241-b2e5-9b2b8b8aa54c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received unexpected event network-vif-plugged-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc for instance with vm_state active and task_state deleting.
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.775 2 DEBUG nova.network.neutron [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.805 2 INFO nova.compute.manager [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Took 1.72 seconds to deallocate network for instance.
Oct  2 05:10:44 np0005465604 podman[419574]: 2025-10-02 09:10:44.72544264 +0000 UTC m=+0.026620008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:10:44 np0005465604 podman[419574]: 2025-10-02 09:10:44.844702106 +0000 UTC m=+0.145879494 container create cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.860 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.861 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.906 2 DEBUG nova.compute.manager [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.906 2 DEBUG oslo_concurrency.lockutils [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "770238ca-0d80-443f-943e-236e0cfb3606-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.907 2 DEBUG oslo_concurrency.lockutils [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.907 2 DEBUG oslo_concurrency.lockutils [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.908 2 DEBUG nova.compute.manager [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] No waiting events found dispatching network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.908 2 WARNING nova.compute.manager [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received unexpected event network-vif-plugged-9e457fab-8f77-47eb-a2dc-fa212b72ab38 for instance with vm_state deleted and task_state None.
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.909 2 DEBUG nova.compute.manager [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-deleted-451aa4f7-0b3e-4a00-b063-7584a6bbc7cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.910 2 DEBUG nova.compute.manager [req-969f356f-cfbf-41a0-96f2-f684a0eeb1e4 req-ba6907aa-4ff6-43a3-9864-1f442a51a414 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Received event network-vif-deleted-9e457fab-8f77-47eb-a2dc-fa212b72ab38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:10:44 np0005465604 systemd[1]: Started libpod-conmon-cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b.scope.
Oct  2 05:10:44 np0005465604 nova_compute[260603]: 2025-10-02 09:10:44.954 2 DEBUG oslo_concurrency.processutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:10:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:10:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af921acf617426c64938863419a6a080358bbc5484b5e6645a0a63e57cf9021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af921acf617426c64938863419a6a080358bbc5484b5e6645a0a63e57cf9021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af921acf617426c64938863419a6a080358bbc5484b5e6645a0a63e57cf9021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:44 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af921acf617426c64938863419a6a080358bbc5484b5e6645a0a63e57cf9021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:10:45 np0005465604 podman[419574]: 2025-10-02 09:10:45.001715256 +0000 UTC m=+0.302892684 container init cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:10:45 np0005465604 podman[419574]: 2025-10-02 09:10:45.013795961 +0000 UTC m=+0.314973339 container start cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:10:45 np0005465604 podman[419574]: 2025-10-02 09:10:45.052500633 +0000 UTC m=+0.353678011 container attach cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:10:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:10:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241820592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:10:45 np0005465604 nova_compute[260603]: 2025-10-02 09:10:45.430 2 DEBUG oslo_concurrency.processutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:10:45 np0005465604 nova_compute[260603]: 2025-10-02 09:10:45.438 2 DEBUG nova.compute.provider_tree [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 05:10:45 np0005465604 nova_compute[260603]: 2025-10-02 09:10:45.457 2 DEBUG nova.scheduler.client.report [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 05:10:45 np0005465604 nova_compute[260603]: 2025-10-02 09:10:45.483 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:10:45 np0005465604 nova_compute[260603]: 2025-10-02 09:10:45.523 2 INFO nova.scheduler.client.report [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 770238ca-0d80-443f-943e-236e0cfb3606
Oct  2 05:10:45 np0005465604 nova_compute[260603]: 2025-10-02 09:10:45.634 2 DEBUG oslo_concurrency.lockutils [None req-fc96eb78-9458-4cad-8ce8-b5642bad721d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "770238ca-0d80-443f-943e-236e0cfb3606" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:10:46 np0005465604 funny_buck[419591]: {
Oct  2 05:10:46 np0005465604 funny_buck[419591]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "osd_id": 2,
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "type": "bluestore"
Oct  2 05:10:46 np0005465604 funny_buck[419591]:    },
Oct  2 05:10:46 np0005465604 funny_buck[419591]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "osd_id": 1,
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "type": "bluestore"
Oct  2 05:10:46 np0005465604 funny_buck[419591]:    },
Oct  2 05:10:46 np0005465604 funny_buck[419591]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "osd_id": 0,
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:10:46 np0005465604 funny_buck[419591]:        "type": "bluestore"
Oct  2 05:10:46 np0005465604 funny_buck[419591]:    }
Oct  2 05:10:46 np0005465604 funny_buck[419591]: }
Oct  2 05:10:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 178 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 11 KiB/s wr, 11 op/s
Oct  2 05:10:46 np0005465604 systemd[1]: libpod-cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b.scope: Deactivated successfully.
Oct  2 05:10:46 np0005465604 podman[419574]: 2025-10-02 09:10:46.121946997 +0000 UTC m=+1.423124345 container died cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 05:10:46 np0005465604 systemd[1]: libpod-cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b.scope: Consumed 1.109s CPU time.
Oct  2 05:10:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4af921acf617426c64938863419a6a080358bbc5484b5e6645a0a63e57cf9021-merged.mount: Deactivated successfully.
Oct  2 05:10:46 np0005465604 podman[419574]: 2025-10-02 09:10:46.195818694 +0000 UTC m=+1.496996042 container remove cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 05:10:46 np0005465604 systemd[1]: libpod-conmon-cc0cc0a87a2e1054963a95b6bedc892604b2d96111aad8475244c4529f87b18b.scope: Deactivated successfully.
Oct  2 05:10:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:10:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:10:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:10:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:10:46 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0c4213cd-9bbc-4461-bc5e-c44d27113d66 does not exist
Oct  2 05:10:46 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 09962792-687d-44c5-851b-828f16556e74 does not exist
Oct  2 05:10:46 np0005465604 nova_compute[260603]: 2025-10-02 09:10:46.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:10:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:10:47 np0005465604 nova_compute[260603]: 2025-10-02 09:10:47.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 24 KiB/s wr, 30 op/s
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.180 2 DEBUG nova.compute.manager [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.180 2 DEBUG nova.compute.manager [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing instance network info cache due to event network-changed-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.180 2 DEBUG oslo_concurrency.lockutils [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.181 2 DEBUG oslo_concurrency.lockutils [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.181 2 DEBUG nova.network.neutron [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Refreshing network info cache for port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 05:10:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.507 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.508 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.508 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.509 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.509 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.510 2 INFO nova.compute.manager [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Terminating instance
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.511 2 DEBUG nova.compute.manager [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 05:10:48 np0005465604 kernel: tap2b42cc6e-a1 (unregistering): left promiscuous mode
Oct  2 05:10:48 np0005465604 NetworkManager[45129]: <info>  [1759396248.5727] device (tap2b42cc6e-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:48Z|01566|binding|INFO|Releasing lport 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b from this chassis (sb_readonly=0)
Oct  2 05:10:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:48Z|01567|binding|INFO|Setting lport 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b down in Southbound
Oct  2 05:10:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:48Z|01568|binding|INFO|Removing iface tap2b42cc6e-a1 ovn-installed in OVS
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:48 np0005465604 kernel: tap5bca2e0b-43 (unregistering): left promiscuous mode
Oct  2 05:10:48 np0005465604 NetworkManager[45129]: <info>  [1759396248.6339] device (tap5bca2e0b-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:48Z|01569|binding|INFO|Releasing lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 from this chassis (sb_readonly=1)
Oct  2 05:10:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:48Z|01570|binding|INFO|Removing iface tap5bca2e0b-43 ovn-installed in OVS
Oct  2 05:10:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:48Z|01571|if_status|INFO|Dropped 2 log messages in last 409 seconds (most recently, 409 seconds ago) due to excessive rate
Oct  2 05:10:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:48Z|01572|if_status|INFO|Not setting lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 down as sb is readonly
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:10:48 np0005465604 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Oct  2 05:10:48 np0005465604 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Consumed 16.658s CPU time.
Oct  2 05:10:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:10:48Z|01573|binding|INFO|Setting lport 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 down in Southbound
Oct  2 05:10:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.696 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:c0:38 10.100.0.11'], port_security=['fa:16:3e:93:c0:38 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68558f02-4047-4331-a47e-cbbee9580ea4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 05:10:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.697 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b in datapath d7dd05b8-70c0-4ef8-a410-57d83c307eaa unbound from our chassis
Oct  2 05:10:48 np0005465604 systemd-machined[214636]: Machine qemu-175-instance-0000008d terminated.
Oct  2 05:10:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.698 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7dd05b8-70c0-4ef8-a410-57d83c307eaa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:10:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.700 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1df2b0a-08ce-4271-925d-f2e783ee9876]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.700 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa namespace which is not needed anymore#033[00m
Oct  2 05:10:48 np0005465604 NetworkManager[45129]: <info>  [1759396248.7341] manager: (tap2b42cc6e-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/634)
Oct  2 05:10:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:48.734 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:94:b1 2001:db8:0:1:f816:3eff:fe00:94b1 2001:db8::f816:3eff:fe00:94b1'], port_security=['fa:16:3e:00:94:b1 2001:db8:0:1:f816:3eff:fe00:94b1 2001:db8::f816:3eff:fe00:94b1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe00:94b1/64 2001:db8::f816:3eff:fe00:94b1/64', 'neutron:device_id': '3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d9c157f-cf57-4b44-8fba-d16631e22418', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '037810a9-88c4-4f08-a476-337b92085e28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a251a259-65e8-4a45-82af-f69bd5f24a08, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:48 np0005465604 NetworkManager[45129]: <info>  [1759396248.7459] manager: (tap5bca2e0b-43): new Tun device (/org/freedesktop/NetworkManager/Devices/635)
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.768 2 INFO nova.virt.libvirt.driver [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Instance destroyed successfully.#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.769 2 DEBUG nova.objects.instance [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:10:48 np0005465604 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [NOTICE]   (417918) : haproxy version is 2.8.14-c23fe91
Oct  2 05:10:48 np0005465604 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [NOTICE]   (417918) : path to executable is /usr/sbin/haproxy
Oct  2 05:10:48 np0005465604 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [WARNING]  (417918) : Exiting Master process...
Oct  2 05:10:48 np0005465604 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [WARNING]  (417918) : Exiting Master process...
Oct  2 05:10:48 np0005465604 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [ALERT]    (417918) : Current worker (417920) exited with code 143 (Terminated)
Oct  2 05:10:48 np0005465604 neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa[417914]: [WARNING]  (417918) : All workers exited. Exiting... (0)
Oct  2 05:10:48 np0005465604 systemd[1]: libpod-f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a.scope: Deactivated successfully.
Oct  2 05:10:48 np0005465604 podman[419758]: 2025-10-02 09:10:48.838237159 +0000 UTC m=+0.046847897 container died f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.907 2 DEBUG nova.virt.libvirt.vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:09:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:09:42Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.908 2 DEBUG nova.network.os_vif_util [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.909 2 DEBUG nova.network.os_vif_util [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.909 2 DEBUG os_vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b42cc6e-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.919 2 INFO os_vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:c0:38,bridge_name='br-int',has_traffic_filtering=True,id=2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b,network=Network(d7dd05b8-70c0-4ef8-a410-57d83c307eaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b42cc6e-a1')#033[00m
Oct  2 05:10:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c85b02860fc626a52f25bc8a62daedd7cc30adfcd5cfe6c239da051b179e05c4-merged.mount: Deactivated successfully.
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.920 2 DEBUG nova.virt.libvirt.vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:09:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2095471568',display_name='tempest-TestGettingAddress-server-2095471568',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2095471568',id=141,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4nMb/KUk55r28jOeljF35WUkclQv1xrXvvFSATzf+0Xk9hatQTpe2/yHjSQn7EAM/joH4+xiSAb4IIBGqqzAc1QLV2Czjn+5come+8+9JVDKF3SRxW+2WaPOOafqdoWA==',key_name='tempest-TestGettingAddress-379320454',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:09:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-jh5zd8cz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:09:42Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:10:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a-userdata-shm.mount: Deactivated successfully.
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.920 2 DEBUG nova.network.os_vif_util [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.921 2 DEBUG nova.network.os_vif_util [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.921 2 DEBUG os_vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.923 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5bca2e0b-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:10:48 np0005465604 nova_compute[260603]: 2025-10-02 09:10:48.931 2 INFO os_vif [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:94:b1,bridge_name='br-int',has_traffic_filtering=True,id=5bca2e0b-43ae-46f8-b3cf-7be35129b5d8,network=Network(6d9c157f-cf57-4b44-8fba-d16631e22418),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5bca2e0b-43')#033[00m
Oct  2 05:10:48 np0005465604 podman[419758]: 2025-10-02 09:10:48.945086269 +0000 UTC m=+0.153696997 container cleanup f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 05:10:48 np0005465604 systemd[1]: libpod-conmon-f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a.scope: Deactivated successfully.
Oct  2 05:10:49 np0005465604 podman[419803]: 2025-10-02 09:10:49.0120231 +0000 UTC m=+0.044085411 container remove f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.020 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d88851-9d6f-4718-ad17-cf0d292bb785]: (4, ('Thu Oct  2 09:10:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa (f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a)\nf7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a\nThu Oct  2 09:10:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa (f7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a)\nf7225a0906d81b0223a1e93c2a028214dd08ab543c6293c93099be1651ad1f5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.022 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8633bd35-b3ca-4a16-8b55-45c05a4d0fd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.023 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7dd05b8-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:49 np0005465604 kernel: tapd7dd05b8-70: left promiscuous mode
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.039 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[07b76bad-6585-4f6b-a7c5-2a3102431ac0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.071 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a6aa3972-dbdb-4ec1-812d-d3b083dde394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.072 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[30335c91-e493-4b59-a397-93471c5a2348]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.088 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ddba37bb-4d1d-4595-bfb4-a3c8f1de3979]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705926, 'reachable_time': 25298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419839, 'error': None, 'target': 'ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.090 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7dd05b8-70c0-4ef8-a410-57d83c307eaa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.090 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[505898d4-2427-4b26-be23-66190386b49b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.091 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 in datapath 6d9c157f-cf57-4b44-8fba-d16631e22418 unbound from our chassis#033[00m
Oct  2 05:10:49 np0005465604 systemd[1]: run-netns-ovnmeta\x2dd7dd05b8\x2d70c0\x2d4ef8\x2da410\x2d57d83c307eaa.mount: Deactivated successfully.
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.092 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d9c157f-cf57-4b44-8fba-d16631e22418, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.093 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebaf70f-5035-4765-bf8b-23dee384c532]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.093 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418 namespace which is not needed anymore#033[00m
Oct  2 05:10:49 np0005465604 podman[419819]: 2025-10-02 09:10:49.127413476 +0000 UTC m=+0.060199043 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 05:10:49 np0005465604 podman[419820]: 2025-10-02 09:10:49.150351888 +0000 UTC m=+0.090504643 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:10:49 np0005465604 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [NOTICE]   (417992) : haproxy version is 2.8.14-c23fe91
Oct  2 05:10:49 np0005465604 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [NOTICE]   (417992) : path to executable is /usr/sbin/haproxy
Oct  2 05:10:49 np0005465604 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [WARNING]  (417992) : Exiting Master process...
Oct  2 05:10:49 np0005465604 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [ALERT]    (417992) : Current worker (417994) exited with code 143 (Terminated)
Oct  2 05:10:49 np0005465604 neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418[417988]: [WARNING]  (417992) : All workers exited. Exiting... (0)
Oct  2 05:10:49 np0005465604 systemd[1]: libpod-5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82.scope: Deactivated successfully.
Oct  2 05:10:49 np0005465604 conmon[417988]: conmon 5f0f697c21e73e116db8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82.scope/container/memory.events
Oct  2 05:10:49 np0005465604 podman[419882]: 2025-10-02 09:10:49.277987714 +0000 UTC m=+0.101082402 container died 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.387 2 DEBUG nova.compute.manager [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-unplugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.388 2 DEBUG oslo_concurrency.lockutils [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.388 2 DEBUG oslo_concurrency.lockutils [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.388 2 DEBUG oslo_concurrency.lockutils [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.388 2 DEBUG nova.compute.manager [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-unplugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.389 2 DEBUG nova.compute.manager [req-36eb94bd-e623-4474-ae00-bb0dc3aa2023 req-0d1e1628-db8e-4715-a925-5bad4d9d42e0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-unplugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:10:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82-userdata-shm.mount: Deactivated successfully.
Oct  2 05:10:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7808816a47e7300fa580de7380ab5c8b3663ac809aa4c663ed97b18b7941cb10-merged.mount: Deactivated successfully.
Oct  2 05:10:49 np0005465604 podman[419882]: 2025-10-02 09:10:49.461302612 +0000 UTC m=+0.284397300 container cleanup 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:10:49 np0005465604 systemd[1]: libpod-conmon-5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82.scope: Deactivated successfully.
Oct  2 05:10:49 np0005465604 podman[419911]: 2025-10-02 09:10:49.654503516 +0000 UTC m=+0.173421211 container remove 5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.660 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe6aaaf-0e1a-4e88-ac0c-00041594fe6e]: (4, ('Thu Oct  2 09:10:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418 (5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82)\n5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82\nThu Oct  2 09:10:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418 (5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82)\n5f0f697c21e73e116db8476133cbd0f2103efcb34c5781bad700ca8aaf3cfd82\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.662 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6348a6a2-9c54-4799-a429-8e3505f6ce62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.662 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d9c157f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:10:49 np0005465604 kernel: tap6d9c157f-c0: left promiscuous mode
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.721 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb53658-9fa2-4fdc-93d3-3a04a2983fb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.753 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[37a7fb59-1718-4225-8008-f65bfe8b5c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.754 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7325b538-1c6f-4c85-934e-f90a3f8ee2b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.772 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[05991e30-107a-400f-89ef-17e86d3b70c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706042, 'reachable_time': 41988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419929, 'error': None, 'target': 'ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.774 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d9c157f-cf57-4b44-8fba-d16631e22418 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:10:49 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:10:49.775 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff7fc31-67b5-4110-b760-4074ee1d7b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.869 2 INFO nova.virt.libvirt.driver [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Deleting instance files /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_del#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.870 2 INFO nova.virt.libvirt.driver [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Deletion of /var/lib/nova/instances/3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b_del complete#033[00m
Oct  2 05:10:49 np0005465604 systemd[1]: run-netns-ovnmeta\x2d6d9c157f\x2dcf57\x2d4b44\x2d8fba\x2dd16631e22418.mount: Deactivated successfully.
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.940 2 INFO nova.compute.manager [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Took 1.43 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.940 2 DEBUG oslo.service.loopingcall [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.941 2 DEBUG nova.compute.manager [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:10:49 np0005465604 nova_compute[260603]: 2025-10-02 09:10:49.941 2 DEBUG nova.network.neutron [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:10:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct  2 05:10:50 np0005465604 nova_compute[260603]: 2025-10-02 09:10:50.387 2 DEBUG nova.network.neutron [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updated VIF entry in instance network info cache for port 2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:10:50 np0005465604 nova_compute[260603]: 2025-10-02 09:10:50.388 2 DEBUG nova.network.neutron [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [{"id": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "address": "fa:16:3e:93:c0:38", "network": {"id": "d7dd05b8-70c0-4ef8-a410-57d83c307eaa", "bridge": "br-int", "label": "tempest-network-smoke--839515267", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b42cc6e-a1", "ovs_interfaceid": "2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "address": "fa:16:3e:00:94:b1", "network": {"id": "6d9c157f-cf57-4b44-8fba-d16631e22418", "bridge": "br-int", "label": "tempest-network-smoke--1711352379", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe00:94b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5bca2e0b-43", "ovs_interfaceid": "5bca2e0b-43ae-46f8-b3cf-7be35129b5d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:10:50 np0005465604 nova_compute[260603]: 2025-10-02 09:10:50.422 2 DEBUG oslo_concurrency.lockutils [req-af906462-8932-4b06-b709-f3a365045cc4 req-2d026a75-2c92-4c05-9b13-0729095ed5c4 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:10:50 np0005465604 nova_compute[260603]: 2025-10-02 09:10:50.459 2 DEBUG nova.compute.manager [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-unplugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:50 np0005465604 nova_compute[260603]: 2025-10-02 09:10:50.460 2 DEBUG oslo_concurrency.lockutils [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:50 np0005465604 nova_compute[260603]: 2025-10-02 09:10:50.460 2 DEBUG oslo_concurrency.lockutils [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:50 np0005465604 nova_compute[260603]: 2025-10-02 09:10:50.461 2 DEBUG oslo_concurrency.lockutils [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:50 np0005465604 nova_compute[260603]: 2025-10-02 09:10:50.461 2 DEBUG nova.compute.manager [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-unplugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:10:50 np0005465604 nova_compute[260603]: 2025-10-02 09:10:50.462 2 DEBUG nova.compute.manager [req-b332be07-16f7-4b06-8762-7ecc2662ea7a req-ad15b017-35bf-4db0-89a4-0a7b8bee89a7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-unplugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:10:50 np0005465604 nova_compute[260603]: 2025-10-02 09:10:50.910 2 DEBUG nova.network.neutron [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:10:50 np0005465604 nova_compute[260603]: 2025-10-02 09:10:50.944 2 INFO nova.compute.manager [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Took 1.00 seconds to deallocate network for instance.#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.002 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.002 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.069 2 DEBUG oslo_concurrency.processutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:10:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:10:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244381091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.489 2 DEBUG oslo_concurrency.processutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.499 2 DEBUG nova.compute.provider_tree [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.509 2 DEBUG nova.compute.manager [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.510 2 DEBUG oslo_concurrency.lockutils [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.511 2 DEBUG oslo_concurrency.lockutils [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.511 2 DEBUG oslo_concurrency.lockutils [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.512 2 DEBUG nova.compute.manager [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.513 2 WARNING nova.compute.manager [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received unexpected event network-vif-plugged-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.513 2 DEBUG nova.compute.manager [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-deleted-2b42cc6e-a10c-4e40-9ed0-a91f2a85c33b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.513 2 DEBUG nova.compute.manager [req-77d44a77-e209-4af8-8d2e-5a8248b07569 req-b642c973-5ed9-41cf-b83d-05413093853d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-deleted-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.527 2 DEBUG nova.scheduler.client.report [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.561 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.593 2 INFO nova.scheduler.client.report [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b#033[00m
Oct  2 05:10:51 np0005465604 nova_compute[260603]: 2025-10-02 09:10:51.675 2 DEBUG oslo_concurrency.lockutils [None req-1516ea74-7057-4d34-ad32-ef008842c37e b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct  2 05:10:52 np0005465604 nova_compute[260603]: 2025-10-02 09:10:52.681 2 DEBUG nova.compute.manager [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:10:52 np0005465604 nova_compute[260603]: 2025-10-02 09:10:52.682 2 DEBUG oslo_concurrency.lockutils [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:10:52 np0005465604 nova_compute[260603]: 2025-10-02 09:10:52.682 2 DEBUG oslo_concurrency.lockutils [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:10:52 np0005465604 nova_compute[260603]: 2025-10-02 09:10:52.682 2 DEBUG oslo_concurrency.lockutils [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:10:52 np0005465604 nova_compute[260603]: 2025-10-02 09:10:52.682 2 DEBUG nova.compute.manager [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] No waiting events found dispatching network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:10:52 np0005465604 nova_compute[260603]: 2025-10-02 09:10:52.683 2 WARNING nova.compute.manager [req-1fa86fbd-5d02-4fa5-bf98-ee1cc013ccff req-6e65987d-9982-439d-ab6a-09b9d01c9346 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Received unexpected event network-vif-plugged-5bca2e0b-43ae-46f8-b3cf-7be35129b5d8 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:10:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:53 np0005465604 nova_compute[260603]: 2025-10-02 09:10:53.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:53 np0005465604 podman[419952]: 2025-10-02 09:10:53.976483814 +0000 UTC m=+0.049130168 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:10:54 np0005465604 podman[419953]: 2025-10-02 09:10:54.013637588 +0000 UTC m=+0.070192593 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct  2 05:10:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 13 KiB/s wr, 57 op/s
Oct  2 05:10:55 np0005465604 nova_compute[260603]: 2025-10-02 09:10:55.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:10:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 45 op/s
Oct  2 05:10:56 np0005465604 nova_compute[260603]: 2025-10-02 09:10:56.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:57 np0005465604 nova_compute[260603]: 2025-10-02 09:10:57.616 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396242.6146586, 770238ca-0d80-443f-943e-236e0cfb3606 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:10:57 np0005465604 nova_compute[260603]: 2025-10-02 09:10:57.617 2 INFO nova.compute.manager [-] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:10:57 np0005465604 nova_compute[260603]: 2025-10-02 09:10:57.650 2 DEBUG nova.compute.manager [None req-916ee98e-b9e2-42af-8c88-6431c1dce4b8 - - - - - -] [instance: 770238ca-0d80-443f-943e-236e0cfb3606] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:10:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:10:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 45 op/s
Oct  2 05:10:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:10:58 np0005465604 nova_compute[260603]: 2025-10-02 09:10:58.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:10:59 np0005465604 nova_compute[260603]: 2025-10-02 09:10:59.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:10:59 np0005465604 nova_compute[260603]: 2025-10-02 09:10:59.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:10:59 np0005465604 nova_compute[260603]: 2025-10-02 09:10:59.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:11:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:11:00 np0005465604 nova_compute[260603]: 2025-10-02 09:11:00.461 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:11:01 np0005465604 nova_compute[260603]: 2025-10-02 09:11:01.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:01 np0005465604 nova_compute[260603]: 2025-10-02 09:11:01.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:01 np0005465604 nova_compute[260603]: 2025-10-02 09:11:01.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:11:02 np0005465604 nova_compute[260603]: 2025-10-02 09:11:02.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:11:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:11:03 np0005465604 nova_compute[260603]: 2025-10-02 09:11:03.758 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396248.7572656, 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:11:03 np0005465604 nova_compute[260603]: 2025-10-02 09:11:03.759 2 INFO nova.compute.manager [-] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:11:03 np0005465604 nova_compute[260603]: 2025-10-02 09:11:03.784 2 DEBUG nova.compute.manager [None req-a02181e4-682c-4463-94b2-2e98a9a31076 - - - - - -] [instance: 3a8c0cc7-7d22-488d-ac96-a1fa159c6a1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:11:03 np0005465604 nova_compute[260603]: 2025-10-02 09:11:03.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:11:04 np0005465604 nova_compute[260603]: 2025-10-02 09:11:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:11:05 np0005465604 nova_compute[260603]: 2025-10-02 09:11:05.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:11:05 np0005465604 nova_compute[260603]: 2025-10-02 09:11:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:11:05 np0005465604 nova_compute[260603]: 2025-10-02 09:11:05.576 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:05 np0005465604 nova_compute[260603]: 2025-10-02 09:11:05.576 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:05 np0005465604 nova_compute[260603]: 2025-10-02 09:11:05.577 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:05 np0005465604 nova_compute[260603]: 2025-10-02 09:11:05.577 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:11:05 np0005465604 nova_compute[260603]: 2025-10-02 09:11:05.577 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:11:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:11:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/930059621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:11:06 np0005465604 nova_compute[260603]: 2025-10-02 09:11:06.058 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:11:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:06 np0005465604 nova_compute[260603]: 2025-10-02 09:11:06.256 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:11:06 np0005465604 nova_compute[260603]: 2025-10-02 09:11:06.257 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3593MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:11:06 np0005465604 nova_compute[260603]: 2025-10-02 09:11:06.258 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:06 np0005465604 nova_compute[260603]: 2025-10-02 09:11:06.258 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:06 np0005465604 nova_compute[260603]: 2025-10-02 09:11:06.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:07 np0005465604 nova_compute[260603]: 2025-10-02 09:11:07.698 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:11:07 np0005465604 nova_compute[260603]: 2025-10-02 09:11:07.699 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:11:07 np0005465604 nova_compute[260603]: 2025-10-02 09:11:07.780 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:11:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:11:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012869170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:11:08 np0005465604 nova_compute[260603]: 2025-10-02 09:11:08.259 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:11:08 np0005465604 nova_compute[260603]: 2025-10-02 09:11:08.268 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:11:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:11:08 np0005465604 nova_compute[260603]: 2025-10-02 09:11:08.301 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:11:08 np0005465604 nova_compute[260603]: 2025-10-02 09:11:08.352 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:11:08 np0005465604 nova_compute[260603]: 2025-10-02 09:11:08.353 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:08 np0005465604 nova_compute[260603]: 2025-10-02 09:11:08.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:11 np0005465604 nova_compute[260603]: 2025-10-02 09:11:11.349 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:11:11 np0005465604 nova_compute[260603]: 2025-10-02 09:11:11.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:11:13 np0005465604 nova_compute[260603]: 2025-10-02 09:11:13.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:14 np0005465604 nova_compute[260603]: 2025-10-02 09:11:14.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:11:15 np0005465604 nova_compute[260603]: 2025-10-02 09:11:15.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:11:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:16 np0005465604 nova_compute[260603]: 2025-10-02 09:11:16.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:11:18 np0005465604 nova_compute[260603]: 2025-10-02 09:11:18.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:20 np0005465604 podman[420040]: 2025-10-02 09:11:20.014885257 +0000 UTC m=+0.069927574 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 05:11:20 np0005465604 podman[420039]: 2025-10-02 09:11:20.053647881 +0000 UTC m=+0.116256953 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 05:11:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:21 np0005465604 nova_compute[260603]: 2025-10-02 09:11:21.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4269718831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4269718831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.693905) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396282693966, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1896, "num_deletes": 251, "total_data_size": 3051073, "memory_usage": 3101312, "flush_reason": "Manual Compaction"}
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396282769297, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2987451, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56855, "largest_seqno": 58750, "table_properties": {"data_size": 2978864, "index_size": 5272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17625, "raw_average_key_size": 20, "raw_value_size": 2961754, "raw_average_value_size": 3373, "num_data_blocks": 234, "num_entries": 878, "num_filter_entries": 878, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396083, "oldest_key_time": 1759396083, "file_creation_time": 1759396282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 75482 microseconds, and 12809 cpu microseconds.
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.769383) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2987451 bytes OK
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.769415) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.844637) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.844691) EVENT_LOG_v1 {"time_micros": 1759396282844681, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.844718) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3043046, prev total WAL file size 3043046, number of live WAL files 2.
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.846050) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2917KB)], [134(9903KB)]
Oct  2 05:11:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396282846123, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 13128395, "oldest_snapshot_seqno": -1}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7873 keys, 11409239 bytes, temperature: kUnknown
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283116619, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11409239, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11355850, "index_size": 32604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 205181, "raw_average_key_size": 26, "raw_value_size": 11214542, "raw_average_value_size": 1424, "num_data_blocks": 1275, "num_entries": 7873, "num_filter_entries": 7873, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.117105) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11409239 bytes
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.149868) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 48.5 rd, 42.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.7 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 8387, records dropped: 514 output_compression: NoCompression
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.149930) EVENT_LOG_v1 {"time_micros": 1759396283149907, "job": 82, "event": "compaction_finished", "compaction_time_micros": 270706, "compaction_time_cpu_micros": 52243, "output_level": 6, "num_output_files": 1, "total_output_size": 11409239, "num_input_records": 8387, "num_output_records": 7873, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283151259, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283155361, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:22.845889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.155508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.155521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.155525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.155530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.155534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.304032) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283304077, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 255, "num_deletes": 250, "total_data_size": 14332, "memory_usage": 19968, "flush_reason": "Manual Compaction"}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283321134, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 13852, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58751, "largest_seqno": 59005, "table_properties": {"data_size": 12099, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396283, "oldest_key_time": 1759396283, "file_creation_time": 1759396283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 17221 microseconds, and 1252 cpu microseconds.
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.321247) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 13852 bytes OK
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.321280) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.342820) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.342854) EVENT_LOG_v1 {"time_micros": 1759396283342843, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.342884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 12321, prev total WAL file size 12321, number of live WAL files 2.
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.343659) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323532' seq:72057594037927935, type:22 .. '6D6772737461740032353033' seq:0, type:0; will stop at (end)
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(13KB)], [137(10MB)]
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283343731, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11423091, "oldest_snapshot_seqno": -1}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7624 keys, 8127621 bytes, temperature: kUnknown
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283432146, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8127621, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8080822, "index_size": 26654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19077, "raw_key_size": 200199, "raw_average_key_size": 26, "raw_value_size": 7948729, "raw_average_value_size": 1042, "num_data_blocks": 1026, "num_entries": 7624, "num_filter_entries": 7624, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.432504) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8127621 bytes
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.467926) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.1 rd, 91.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.9 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(1411.4) write-amplify(586.7) OK, records in: 8128, records dropped: 504 output_compression: NoCompression
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.467980) EVENT_LOG_v1 {"time_micros": 1759396283467958, "job": 84, "event": "compaction_finished", "compaction_time_micros": 88499, "compaction_time_cpu_micros": 33205, "output_level": 6, "num_output_files": 1, "total_output_size": 8127621, "num_input_records": 8128, "num_output_records": 7624, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283468255, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396283471764, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.343463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.471896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.471907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.471911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.471915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:11:23.471919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:11:23 np0005465604 nova_compute[260603]: 2025-10-02 09:11:23.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:25 np0005465604 podman[420085]: 2025-10-02 09:11:25.020596924 +0000 UTC m=+0.092006560 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:11:25 np0005465604 podman[420086]: 2025-10-02 09:11:25.034424743 +0000 UTC m=+0.093361341 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:11:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:26.029 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:74:d6 2001:db8:0:1:f816:3eff:fe4b:74d6 2001:db8::f816:3eff:fe4b:74d6'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe4b:74d6/64 2001:db8::f816:3eff:fe4b:74d6/64', 'neutron:device_id': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bb22cc4-c817-4149-925c-4cb21e573102, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=be1c87e3-582f-4bbb-a5fb-4fb837b7e882) old=Port_Binding(mac=['fa:16:3e:4b:74:d6 2001:db8::f816:3eff:fe4b:74d6'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe4b:74d6/64', 'neutron:device_id': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:11:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:26.031 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port be1c87e3-582f-4bbb-a5fb-4fb837b7e882 in datapath 5d64d879-42c6-456c-a212-df00bf998997 updated#033[00m
Oct  2 05:11:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:26.032 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d64d879-42c6-456c-a212-df00bf998997, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:11:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:26.034 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd6677b-8835-4443-a650-eb58fe6e96cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:26 np0005465604 nova_compute[260603]: 2025-10-02 09:11:26.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:11:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:11:28
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['images', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups']
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:11:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:11:29 np0005465604 nova_compute[260603]: 2025-10-02 09:11:29.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:31 np0005465604 nova_compute[260603]: 2025-10-02 09:11:31.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:11:34 np0005465604 nova_compute[260603]: 2025-10-02 09:11:34.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:36 np0005465604 nova_compute[260603]: 2025-10-02 09:11:36.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:36 np0005465604 nova_compute[260603]: 2025-10-02 09:11:36.764 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:36 np0005465604 nova_compute[260603]: 2025-10-02 09:11:36.765 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:36 np0005465604 nova_compute[260603]: 2025-10-02 09:11:36.791 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:11:36 np0005465604 nova_compute[260603]: 2025-10-02 09:11:36.922 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:36 np0005465604 nova_compute[260603]: 2025-10-02 09:11:36.923 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:36 np0005465604 nova_compute[260603]: 2025-10-02 09:11:36.930 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:11:36 np0005465604 nova_compute[260603]: 2025-10-02 09:11:36.931 2 INFO nova.compute.claims [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:11:37 np0005465604 nova_compute[260603]: 2025-10-02 09:11:37.083 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:11:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:11:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1444054755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:11:37 np0005465604 nova_compute[260603]: 2025-10-02 09:11:37.622 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:11:37 np0005465604 nova_compute[260603]: 2025-10-02 09:11:37.630 2 DEBUG nova.compute.provider_tree [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:11:37 np0005465604 nova_compute[260603]: 2025-10-02 09:11:37.666 2 DEBUG nova.scheduler.client.report [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:11:37 np0005465604 nova_compute[260603]: 2025-10-02 09:11:37.728 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:37 np0005465604 nova_compute[260603]: 2025-10-02 09:11:37.730 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:11:37 np0005465604 nova_compute[260603]: 2025-10-02 09:11:37.822 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:11:37 np0005465604 nova_compute[260603]: 2025-10-02 09:11:37.823 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:11:37 np0005465604 nova_compute[260603]: 2025-10-02 09:11:37.877 2 INFO nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:11:37 np0005465604 nova_compute[260603]: 2025-10-02 09:11:37.961 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:11:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.125 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.127 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.128 2 INFO nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Creating image(s)#033[00m
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.166 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.199 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.230 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.239 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:11:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.336 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.337 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.338 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.339 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.387 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:11:38 np0005465604 nova_compute[260603]: 2025-10-02 09:11:38.391 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:11:39 np0005465604 nova_compute[260603]: 2025-10-02 09:11:39.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:11:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:11:39 np0005465604 nova_compute[260603]: 2025-10-02 09:11:39.695 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:11:39 np0005465604 nova_compute[260603]: 2025-10-02 09:11:39.789 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:11:39 np0005465604 nova_compute[260603]: 2025-10-02 09:11:39.992 2 DEBUG nova.policy [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:11:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:40 np0005465604 nova_compute[260603]: 2025-10-02 09:11:40.503 2 DEBUG nova.objects.instance [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 71c9f70f-5f86-4723-9e4f-a4aca14211cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:11:40 np0005465604 nova_compute[260603]: 2025-10-02 09:11:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:11:40 np0005465604 nova_compute[260603]: 2025-10-02 09:11:40.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:11:40 np0005465604 nova_compute[260603]: 2025-10-02 09:11:40.530 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:11:40 np0005465604 nova_compute[260603]: 2025-10-02 09:11:40.530 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Ensure instance console log exists: /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:11:40 np0005465604 nova_compute[260603]: 2025-10-02 09:11:40.531 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:40 np0005465604 nova_compute[260603]: 2025-10-02 09:11:40.532 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:40 np0005465604 nova_compute[260603]: 2025-10-02 09:11:40.533 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:41 np0005465604 nova_compute[260603]: 2025-10-02 09:11:41.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 940 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:11:42 np0005465604 nova_compute[260603]: 2025-10-02 09:11:42.751 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Successfully created port: 102411d5-80b6-47af-9293-08b07c65d541 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:11:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:11:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:44.089 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:11:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:44.090 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:11:44 np0005465604 nova_compute[260603]: 2025-10-02 09:11:44.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:11:44 np0005465604 nova_compute[260603]: 2025-10-02 09:11:44.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:44 np0005465604 nova_compute[260603]: 2025-10-02 09:11:44.177 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Successfully created port: d5b6295d-90a7-4d25-be69-ccd7a58621c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:11:45 np0005465604 nova_compute[260603]: 2025-10-02 09:11:45.039 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Successfully updated port: 102411d5-80b6-47af-9293-08b07c65d541 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:11:45 np0005465604 nova_compute[260603]: 2025-10-02 09:11:45.136 2 DEBUG nova.compute.manager [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-changed-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:11:45 np0005465604 nova_compute[260603]: 2025-10-02 09:11:45.136 2 DEBUG nova.compute.manager [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing instance network info cache due to event network-changed-102411d5-80b6-47af-9293-08b07c65d541. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:11:45 np0005465604 nova_compute[260603]: 2025-10-02 09:11:45.137 2 DEBUG oslo_concurrency.lockutils [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:11:45 np0005465604 nova_compute[260603]: 2025-10-02 09:11:45.137 2 DEBUG oslo_concurrency.lockutils [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:11:45 np0005465604 nova_compute[260603]: 2025-10-02 09:11:45.138 2 DEBUG nova.network.neutron [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing network info cache for port 102411d5-80b6-47af-9293-08b07c65d541 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:11:45 np0005465604 nova_compute[260603]: 2025-10-02 09:11:45.293 2 DEBUG nova.network.neutron [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:11:45 np0005465604 nova_compute[260603]: 2025-10-02 09:11:45.880 2 DEBUG nova.network.neutron [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:11:45 np0005465604 nova_compute[260603]: 2025-10-02 09:11:45.903 2 DEBUG oslo_concurrency.lockutils [req-757f6ede-336e-46f6-894a-f16cda08d3a9 req-048b28a0-0c34-40fc-87c3-bfa41cdf6100 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:11:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:11:46 np0005465604 nova_compute[260603]: 2025-10-02 09:11:46.255 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Successfully updated port: d5b6295d-90a7-4d25-be69-ccd7a58621c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:11:46 np0005465604 nova_compute[260603]: 2025-10-02 09:11:46.281 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:11:46 np0005465604 nova_compute[260603]: 2025-10-02 09:11:46.281 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:11:46 np0005465604 nova_compute[260603]: 2025-10-02 09:11:46.281 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:11:46 np0005465604 nova_compute[260603]: 2025-10-02 09:11:46.431 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:11:46 np0005465604 nova_compute[260603]: 2025-10-02 09:11:46.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:47 np0005465604 nova_compute[260603]: 2025-10-02 09:11:47.281 2 DEBUG nova.compute.manager [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-changed-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:11:47 np0005465604 nova_compute[260603]: 2025-10-02 09:11:47.282 2 DEBUG nova.compute.manager [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing instance network info cache due to event network-changed-d5b6295d-90a7-4d25-be69-ccd7a58621c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:11:47 np0005465604 nova_compute[260603]: 2025-10-02 09:11:47.282 2 DEBUG oslo_concurrency.lockutils [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:11:47 np0005465604 podman[420486]: 2025-10-02 09:11:47.408729436 +0000 UTC m=+0.111208047 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:11:47 np0005465604 podman[420486]: 2025-10-02 09:11:47.505130111 +0000 UTC m=+0.207608672 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:11:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:11:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:11:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:11:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:11:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:11:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:11:49 np0005465604 ovn_controller[152344]: 2025-10-02T09:11:49Z|01574|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:11:49 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9eb06afc-c243-473a-8e5b-f054bf6c2878 does not exist
Oct  2 05:11:49 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b0af0d07-b722-40ce-98d5-3b49561aeef7 does not exist
Oct  2 05:11:49 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9c9c9cf9-e465-4633-8429-51406cb77170 does not exist
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.306 2 DEBUG nova.network.neutron [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:11:49 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.640 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.641 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance network_info: |[{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.641 2 DEBUG oslo_concurrency.lockutils [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.641 2 DEBUG nova.network.neutron [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing network info cache for port d5b6295d-90a7-4d25-be69-ccd7a58621c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.644 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Start _get_guest_xml network_info=[{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.650 2 WARNING nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.658 2 DEBUG nova.virt.libvirt.host [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.659 2 DEBUG nova.virt.libvirt.host [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.662 2 DEBUG nova.virt.libvirt.host [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.663 2 DEBUG nova.virt.libvirt.host [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.663 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.664 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.664 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.664 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.665 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.665 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.665 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.665 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.666 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.666 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.666 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.666 2 DEBUG nova.virt.hardware [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:11:49 np0005465604 nova_compute[260603]: 2025-10-02 09:11:49.671 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:11:49 np0005465604 podman[420917]: 2025-10-02 09:11:49.884273415 +0000 UTC m=+0.090411420 container create 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 05:11:49 np0005465604 podman[420917]: 2025-10-02 09:11:49.814090934 +0000 UTC m=+0.020229029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:11:49 np0005465604 systemd[1]: Started libpod-conmon-19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952.scope.
Oct  2 05:11:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:11:49 np0005465604 podman[420917]: 2025-10-02 09:11:49.984394136 +0000 UTC m=+0.190532191 container init 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 05:11:49 np0005465604 podman[420917]: 2025-10-02 09:11:49.995299625 +0000 UTC m=+0.201437630 container start 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:11:49 np0005465604 podman[420917]: 2025-10-02 09:11:49.998891747 +0000 UTC m=+0.205029792 container attach 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 05:11:50 np0005465604 silly_swanson[420953]: 167 167
Oct  2 05:11:50 np0005465604 systemd[1]: libpod-19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952.scope: Deactivated successfully.
Oct  2 05:11:50 np0005465604 conmon[420953]: conmon 19f3e03c0ed665425233 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952.scope/container/memory.events
Oct  2 05:11:50 np0005465604 podman[420917]: 2025-10-02 09:11:50.007230616 +0000 UTC m=+0.213368641 container died 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:11:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-815c9ad102a4a7abe644cd4dd988d467c1322bbae353039ede8c93707ec63c30-merged.mount: Deactivated successfully.
Oct  2 05:11:50 np0005465604 podman[420917]: 2025-10-02 09:11:50.055576488 +0000 UTC m=+0.261714503 container remove 19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 05:11:50 np0005465604 systemd[1]: libpod-conmon-19f3e03c0ed665425233a08f79f7ae80c821dee8be207e9e2017541b94d77952.scope: Deactivated successfully.
Oct  2 05:11:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:11:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:11:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022964571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.162 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:11:50 np0005465604 podman[420968]: 2025-10-02 09:11:50.16695603 +0000 UTC m=+0.096677566 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.182 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:11:50 np0005465604 podman[420977]: 2025-10-02 09:11:50.188648723 +0000 UTC m=+0.091129382 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.190 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:11:50 np0005465604 podman[421040]: 2025-10-02 09:11:50.251037362 +0000 UTC m=+0.039337903 container create f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 05:11:50 np0005465604 systemd[1]: Started libpod-conmon-f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045.scope.
Oct  2 05:11:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:11:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:50 np0005465604 podman[421040]: 2025-10-02 09:11:50.234904281 +0000 UTC m=+0.023204842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:11:50 np0005465604 podman[421040]: 2025-10-02 09:11:50.334156706 +0000 UTC m=+0.122457267 container init f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:11:50 np0005465604 podman[421040]: 2025-10-02 09:11:50.345375865 +0000 UTC m=+0.133676406 container start f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:11:50 np0005465604 podman[421040]: 2025-10-02 09:11:50.348603745 +0000 UTC m=+0.136904316 container attach f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 05:11:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:11:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3360459198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.616 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.619 2 DEBUG nova.virt.libvirt.vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:11:38Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.620 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.622 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.624 2 DEBUG nova.virt.libvirt.vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:11:38Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.625 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.626 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.629 2 DEBUG nova.objects.instance [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 71c9f70f-5f86-4723-9e4f-a4aca14211cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.702 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  <uuid>71c9f70f-5f86-4723-9e4f-a4aca14211cb</uuid>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  <name>instance-0000008f</name>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-1569895686</nova:name>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:11:49</nova:creationTime>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <nova:port uuid="102411d5-80b6-47af-9293-08b07c65d541">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <nova:port uuid="d5b6295d-90a7-4d25-be69-ccd7a58621c6">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe3c:6203" ipVersion="6"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe3c:6203" ipVersion="6"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <entry name="serial">71c9f70f-5f86-4723-9e4f-a4aca14211cb</entry>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <entry name="uuid">71c9f70f-5f86-4723-9e4f-a4aca14211cb</entry>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:05:ca:02"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <target dev="tap102411d5-80"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:3c:62:03"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <target dev="tapd5b6295d-90"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/console.log" append="off"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:11:50 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:11:50 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:11:50 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:11:50 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.713 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Preparing to wait for external event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.714 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.714 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.715 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.715 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Preparing to wait for external event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.716 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.716 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.717 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.718 2 DEBUG nova.virt.libvirt.vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:11:38Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.719 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.720 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.721 2 DEBUG os_vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.728 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.729 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap102411d5-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap102411d5-80, col_values=(('external_ids', {'iface-id': '102411d5-80b6-47af-9293-08b07c65d541', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:ca:02', 'vm-uuid': '71c9f70f-5f86-4723-9e4f-a4aca14211cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:50 np0005465604 NetworkManager[45129]: <info>  [1759396310.7822] manager: (tap102411d5-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.790 2 INFO os_vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80')#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.792 2 DEBUG nova.virt.libvirt.vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:11:38Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.793 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.796 2 DEBUG nova.network.os_vif_util [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.797 2 DEBUG os_vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.798 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.799 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.803 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5b6295d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.804 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5b6295d-90, col_values=(('external_ids', {'iface-id': 'd5b6295d-90a7-4d25-be69-ccd7a58621c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:62:03', 'vm-uuid': '71c9f70f-5f86-4723-9e4f-a4aca14211cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:50 np0005465604 NetworkManager[45129]: <info>  [1759396310.8068] manager: (tapd5b6295d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.815 2 INFO os_vif [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90')#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.872 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.873 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.874 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:05:ca:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.874 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:3c:62:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.876 2 INFO nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Using config drive#033[00m
Oct  2 05:11:50 np0005465604 nova_compute[260603]: 2025-10-02 09:11:50.909 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:11:51 np0005465604 inspiring_raman[421058]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:11:51 np0005465604 inspiring_raman[421058]: --> relative data size: 1.0
Oct  2 05:11:51 np0005465604 inspiring_raman[421058]: --> All data devices are unavailable
Oct  2 05:11:51 np0005465604 nova_compute[260603]: 2025-10-02 09:11:51.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:51 np0005465604 systemd[1]: libpod-f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045.scope: Deactivated successfully.
Oct  2 05:11:51 np0005465604 systemd[1]: libpod-f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045.scope: Consumed 1.156s CPU time.
Oct  2 05:11:51 np0005465604 podman[421040]: 2025-10-02 09:11:51.562897169 +0000 UTC m=+1.351197710 container died f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 05:11:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-045b2c3a6f5ff1230eefa2196f522acc406f94a0a1bfa64360749da767b0e3db-merged.mount: Deactivated successfully.
Oct  2 05:11:51 np0005465604 podman[421040]: 2025-10-02 09:11:51.649296684 +0000 UTC m=+1.437597225 container remove f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 05:11:51 np0005465604 systemd[1]: libpod-conmon-f0eb315aa84e3c2dcaf6abae42cee2611673b2e7f82d3688a2f3b06e80b9c045.scope: Deactivated successfully.
Oct  2 05:11:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:11:52 np0005465604 podman[421283]: 2025-10-02 09:11:52.33586881 +0000 UTC m=+0.087731587 container create 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 05:11:52 np0005465604 systemd[1]: Started libpod-conmon-1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32.scope.
Oct  2 05:11:52 np0005465604 podman[421283]: 2025-10-02 09:11:52.280634074 +0000 UTC m=+0.032496881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:11:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:11:52 np0005465604 podman[421283]: 2025-10-02 09:11:52.411552493 +0000 UTC m=+0.163415280 container init 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:11:52 np0005465604 podman[421283]: 2025-10-02 09:11:52.42596015 +0000 UTC m=+0.177822937 container start 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:11:52 np0005465604 podman[421283]: 2025-10-02 09:11:52.429570152 +0000 UTC m=+0.181432939 container attach 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 05:11:52 np0005465604 dreamy_shirley[421300]: 167 167
Oct  2 05:11:52 np0005465604 systemd[1]: libpod-1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32.scope: Deactivated successfully.
Oct  2 05:11:52 np0005465604 podman[421283]: 2025-10-02 09:11:52.433967068 +0000 UTC m=+0.185829855 container died 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:11:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8d9c7d5d2978d857d3e4fe4ac7c9fad088710b817d51d09605ad9408be256897-merged.mount: Deactivated successfully.
Oct  2 05:11:52 np0005465604 podman[421283]: 2025-10-02 09:11:52.475742537 +0000 UTC m=+0.227605354 container remove 1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 05:11:52 np0005465604 systemd[1]: libpod-conmon-1c0a3404b51ddd8cae0f78efa29e30624d3777171c4356a54966c42461be5d32.scope: Deactivated successfully.
Oct  2 05:11:52 np0005465604 nova_compute[260603]: 2025-10-02 09:11:52.546 2 INFO nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Creating config drive at /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config#033[00m
Oct  2 05:11:52 np0005465604 nova_compute[260603]: 2025-10-02 09:11:52.551 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2umpw0z1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:11:52 np0005465604 nova_compute[260603]: 2025-10-02 09:11:52.601 2 DEBUG nova.network.neutron [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updated VIF entry in instance network info cache for port d5b6295d-90a7-4d25-be69-ccd7a58621c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:11:52 np0005465604 nova_compute[260603]: 2025-10-02 09:11:52.603 2 DEBUG nova.network.neutron [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:11:52 np0005465604 nova_compute[260603]: 2025-10-02 09:11:52.630 2 DEBUG oslo_concurrency.lockutils [req-dd8b76d7-956a-4050-ba81-a7e2a3c63e4a req-40ad88ee-3974-4ea1-ae95-b03993b15e61 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:11:52 np0005465604 podman[421326]: 2025-10-02 09:11:52.68240539 +0000 UTC m=+0.049141519 container create 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 05:11:52 np0005465604 nova_compute[260603]: 2025-10-02 09:11:52.692 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2umpw0z1" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:11:52 np0005465604 nova_compute[260603]: 2025-10-02 09:11:52.721 2 DEBUG nova.storage.rbd_utils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:11:52 np0005465604 nova_compute[260603]: 2025-10-02 09:11:52.725 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:11:52 np0005465604 systemd[1]: Started libpod-conmon-9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439.scope.
Oct  2 05:11:52 np0005465604 podman[421326]: 2025-10-02 09:11:52.661627083 +0000 UTC m=+0.028363232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:11:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:11:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baf0e342ee68d8880e8df6acae850236efb2b13e6da97563a8e97ae38ae513e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baf0e342ee68d8880e8df6acae850236efb2b13e6da97563a8e97ae38ae513e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baf0e342ee68d8880e8df6acae850236efb2b13e6da97563a8e97ae38ae513e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baf0e342ee68d8880e8df6acae850236efb2b13e6da97563a8e97ae38ae513e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:52 np0005465604 podman[421326]: 2025-10-02 09:11:52.788053732 +0000 UTC m=+0.154789901 container init 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:11:52 np0005465604 podman[421326]: 2025-10-02 09:11:52.794392369 +0000 UTC m=+0.161128498 container start 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 05:11:52 np0005465604 podman[421326]: 2025-10-02 09:11:52.797604769 +0000 UTC m=+0.164340908 container attach 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:11:52 np0005465604 nova_compute[260603]: 2025-10-02 09:11:52.908 2 DEBUG oslo_concurrency.processutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config 71c9f70f-5f86-4723-9e4f-a4aca14211cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:11:52 np0005465604 nova_compute[260603]: 2025-10-02 09:11:52.909 2 INFO nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Deleting local config drive /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb/disk.config because it was imported into RBD.#033[00m
Oct  2 05:11:53 np0005465604 NetworkManager[45129]: <info>  [1759396313.0016] manager: (tap102411d5-80): new Tun device (/org/freedesktop/NetworkManager/Devices/638)
Oct  2 05:11:53 np0005465604 kernel: tap102411d5-80: entered promiscuous mode
Oct  2 05:11:53 np0005465604 nova_compute[260603]: 2025-10-02 09:11:53.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:11:53Z|01575|binding|INFO|Claiming lport 102411d5-80b6-47af-9293-08b07c65d541 for this chassis.
Oct  2 05:11:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:11:53Z|01576|binding|INFO|102411d5-80b6-47af-9293-08b07c65d541: Claiming fa:16:3e:05:ca:02 10.100.0.6
Oct  2 05:11:53 np0005465604 nova_compute[260603]: 2025-10-02 09:11:53.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:53 np0005465604 kernel: tapd5b6295d-90: entered promiscuous mode
Oct  2 05:11:53 np0005465604 NetworkManager[45129]: <info>  [1759396313.0358] manager: (tapd5b6295d-90): new Tun device (/org/freedesktop/NetworkManager/Devices/639)
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.041 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:ca:02 10.100.0.6'], port_security=['fa:16:3e:05:ca:02 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '71c9f70f-5f86-4723-9e4f-a4aca14211cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-436e56fa-4885-4043-b091-8043a6f9f710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79454522-7e2a-40b8-ae72-355dd621c03a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=102411d5-80b6-47af-9293-08b07c65d541) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.043 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 102411d5-80b6-47af-9293-08b07c65d541 in datapath 436e56fa-4885-4043-b091-8043a6f9f710 bound to our chassis#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.044 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 436e56fa-4885-4043-b091-8043a6f9f710#033[00m
Oct  2 05:11:53 np0005465604 systemd-udevd[421401]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:11:53 np0005465604 systemd-udevd[421402]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.058 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3e0aed-42cf-4995-be52-bf4b5718aed7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.064 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap436e56fa-41 in ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:11:53 np0005465604 NetworkManager[45129]: <info>  [1759396313.0683] device (tapd5b6295d-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.066 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap436e56fa-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.066 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2a372b83-300e-4414-b473-a2c4c2d926ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.068 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9964f8ac-300f-42bd-beb5-d7f4b160f92b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 NetworkManager[45129]: <info>  [1759396313.0733] device (tapd5b6295d-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:11:53 np0005465604 NetworkManager[45129]: <info>  [1759396313.0748] device (tap102411d5-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:11:53 np0005465604 NetworkManager[45129]: <info>  [1759396313.0766] device (tap102411d5-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.083 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe40c21-8ec0-47bb-89ec-58cb7820cedd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.091 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:53 np0005465604 systemd-machined[214636]: New machine qemu-177-instance-0000008f.
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.115 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2cab7d73-0892-4410-8e5b-a89237f6af95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 systemd[1]: Started Virtual Machine qemu-177-instance-0000008f.
Oct  2 05:11:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:11:53Z|01577|binding|INFO|Claiming lport d5b6295d-90a7-4d25-be69-ccd7a58621c6 for this chassis.
Oct  2 05:11:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:11:53Z|01578|binding|INFO|d5b6295d-90a7-4d25-be69-ccd7a58621c6: Claiming fa:16:3e:3c:62:03 2001:db8:0:1:f816:3eff:fe3c:6203 2001:db8::f816:3eff:fe3c:6203
Oct  2 05:11:53 np0005465604 nova_compute[260603]: 2025-10-02 09:11:53.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.129 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:62:03 2001:db8:0:1:f816:3eff:fe3c:6203 2001:db8::f816:3eff:fe3c:6203'], port_security=['fa:16:3e:3c:62:03 2001:db8:0:1:f816:3eff:fe3c:6203 2001:db8::f816:3eff:fe3c:6203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3c:6203/64 2001:db8::f816:3eff:fe3c:6203/64', 'neutron:device_id': '71c9f70f-5f86-4723-9e4f-a4aca14211cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bb22cc4-c817-4149-925c-4cb21e573102, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d5b6295d-90a7-4d25-be69-ccd7a58621c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:11:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:11:53Z|01579|binding|INFO|Setting lport 102411d5-80b6-47af-9293-08b07c65d541 ovn-installed in OVS
Oct  2 05:11:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:11:53Z|01580|binding|INFO|Setting lport 102411d5-80b6-47af-9293-08b07c65d541 up in Southbound
Oct  2 05:11:53 np0005465604 nova_compute[260603]: 2025-10-02 09:11:53.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:11:53Z|01581|binding|INFO|Setting lport d5b6295d-90a7-4d25-be69-ccd7a58621c6 ovn-installed in OVS
Oct  2 05:11:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:11:53Z|01582|binding|INFO|Setting lport d5b6295d-90a7-4d25-be69-ccd7a58621c6 up in Southbound
Oct  2 05:11:53 np0005465604 nova_compute[260603]: 2025-10-02 09:11:53.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.150 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[36697106-905b-423b-a991-3782802f2488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 NetworkManager[45129]: <info>  [1759396313.1598] manager: (tap436e56fa-40): new Veth device (/org/freedesktop/NetworkManager/Devices/640)
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[601f84db-d4c5-4ecc-b549-7b5edf2060f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.210 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d24bf1d2-2ba1-496b-8a92-4d932a267b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.216 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0f578d92-bf6a-4d96-abd9-1be87108f795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 NetworkManager[45129]: <info>  [1759396313.2468] device (tap436e56fa-40): carrier: link connected
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.253 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[75ba8f3d-0e6a-4752-b00a-94eb25c69ac6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.275 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[70082af5-8a2e-44e4-be28-86cd1a6bbe02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap436e56fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:8d:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719184, 'reachable_time': 36984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421438, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.295 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ea07767f-a225-49d7-93ba-d9fd6c1f7245]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:8dfb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719184, 'tstamp': 719184}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421439, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.311 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5028b5cd-bf6c-4013-9274-915e3f435a0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap436e56fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:8d:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719184, 'reachable_time': 36984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 421440, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.341 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe07ab3-845f-4b9c-a268-a8e2a0d19df9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.401 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[247d8d6d-3340-4d70-9a7a-b0b5afee69b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.402 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap436e56fa-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.403 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.403 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap436e56fa-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:53 np0005465604 NetworkManager[45129]: <info>  [1759396313.4286] manager: (tap436e56fa-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Oct  2 05:11:53 np0005465604 kernel: tap436e56fa-40: entered promiscuous mode
Oct  2 05:11:53 np0005465604 nova_compute[260603]: 2025-10-02 09:11:53.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.434 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap436e56fa-40, col_values=(('external_ids', {'iface-id': '3c52d5f8-d941-470e-b21b-5afb7b1bf813'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:53 np0005465604 nova_compute[260603]: 2025-10-02 09:11:53.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:11:53Z|01583|binding|INFO|Releasing lport 3c52d5f8-d941-470e-b21b-5afb7b1bf813 from this chassis (sb_readonly=0)
Oct  2 05:11:53 np0005465604 nova_compute[260603]: 2025-10-02 09:11:53.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.453 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/436e56fa-4885-4043-b091-8043a6f9f710.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/436e56fa-4885-4043-b091-8043a6f9f710.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.454 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29600729-0c3d-4e7c-9373-1b5f4e62b2ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.455 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-436e56fa-4885-4043-b091-8043a6f9f710
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/436e56fa-4885-4043-b091-8043a6f9f710.pid.haproxy
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 436e56fa-4885-4043-b091-8043a6f9f710
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:11:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:53.456 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'env', 'PROCESS_TAG=haproxy-436e56fa-4885-4043-b091-8043a6f9f710', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/436e56fa-4885-4043-b091-8043a6f9f710.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:11:53 np0005465604 nice_fermat[421360]: {
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:    "0": [
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:        {
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "devices": [
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "/dev/loop3"
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            ],
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_name": "ceph_lv0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_size": "21470642176",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "name": "ceph_lv0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "tags": {
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.cluster_name": "ceph",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.crush_device_class": "",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.encrypted": "0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.osd_id": "0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.type": "block",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.vdo": "0"
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            },
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "type": "block",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "vg_name": "ceph_vg0"
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:        }
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:    ],
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:    "1": [
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:        {
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "devices": [
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "/dev/loop4"
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            ],
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_name": "ceph_lv1",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_size": "21470642176",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "name": "ceph_lv1",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "tags": {
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.cluster_name": "ceph",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.crush_device_class": "",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.encrypted": "0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.osd_id": "1",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.type": "block",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.vdo": "0"
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            },
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "type": "block",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "vg_name": "ceph_vg1"
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:        }
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:    ],
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:    "2": [
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:        {
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "devices": [
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "/dev/loop5"
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            ],
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_name": "ceph_lv2",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_size": "21470642176",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "name": "ceph_lv2",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "tags": {
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.cluster_name": "ceph",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.crush_device_class": "",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.encrypted": "0",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.osd_id": "2",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.type": "block",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:                "ceph.vdo": "0"
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            },
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "type": "block",
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:            "vg_name": "ceph_vg2"
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:        }
Oct  2 05:11:53 np0005465604 nice_fermat[421360]:    ]
Oct  2 05:11:53 np0005465604 nice_fermat[421360]: }
Oct  2 05:11:53 np0005465604 systemd[1]: libpod-9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439.scope: Deactivated successfully.
Oct  2 05:11:53 np0005465604 podman[421326]: 2025-10-02 09:11:53.602006986 +0000 UTC m=+0.968743155 container died 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:11:53 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2baf0e342ee68d8880e8df6acae850236efb2b13e6da97563a8e97ae38ae513e-merged.mount: Deactivated successfully.
Oct  2 05:11:53 np0005465604 podman[421326]: 2025-10-02 09:11:53.882366819 +0000 UTC m=+1.249102948 container remove 9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 05:11:53 np0005465604 systemd[1]: libpod-conmon-9cce3f215ce4273513741da1fd0932a1553e8c400100b996c67cec2b2be02439.scope: Deactivated successfully.
Oct  2 05:11:54 np0005465604 podman[421530]: 2025-10-02 09:11:54.038407858 +0000 UTC m=+0.079217573 container create ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:11:54 np0005465604 systemd[1]: Started libpod-conmon-ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772.scope.
Oct  2 05:11:54 np0005465604 podman[421530]: 2025-10-02 09:11:53.997666812 +0000 UTC m=+0.038476557 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:11:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:11:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ba23fbb429caa93a956634bafbbdd11b0d78f46b17b74eaebd84acb971bba0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:11:54 np0005465604 podman[421530]: 2025-10-02 09:11:54.123193383 +0000 UTC m=+0.164003128 container init ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 05:11:54 np0005465604 podman[421530]: 2025-10-02 09:11:54.130802509 +0000 UTC m=+0.171612224 container start ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 05:11:54 np0005465604 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [NOTICE]   (421604) : New worker (421624) forked
Oct  2 05:11:54 np0005465604 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [NOTICE]   (421604) : Loading success.
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.190 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d5b6295d-90a7-4d25-be69-ccd7a58621c6 in datapath 5d64d879-42c6-456c-a212-df00bf998997 unbound from our chassis#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.192 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d64d879-42c6-456c-a212-df00bf998997#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.201 2 DEBUG nova.compute.manager [req-0271a026-dcb8-4ccc-8c3f-ede19c685b0c req-806eb3c6-19ea-4305-8da7-29a41c2c3694 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.202 2 DEBUG oslo_concurrency.lockutils [req-0271a026-dcb8-4ccc-8c3f-ede19c685b0c req-806eb3c6-19ea-4305-8da7-29a41c2c3694 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.202 2 DEBUG oslo_concurrency.lockutils [req-0271a026-dcb8-4ccc-8c3f-ede19c685b0c req-806eb3c6-19ea-4305-8da7-29a41c2c3694 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.202 2 DEBUG oslo_concurrency.lockutils [req-0271a026-dcb8-4ccc-8c3f-ede19c685b0c req-806eb3c6-19ea-4305-8da7-29a41c2c3694 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.203 2 DEBUG nova.compute.manager [req-0271a026-dcb8-4ccc-8c3f-ede19c685b0c req-806eb3c6-19ea-4305-8da7-29a41c2c3694 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Processing event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.203 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9f3c5f-573e-4afe-b62e-94c1da06fccf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.204 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d64d879-41 in ovnmeta-5d64d879-42c6-456c-a212-df00bf998997 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.206 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d64d879-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.206 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf6a0e7-6800-403e-9033-3fd5d9dacf8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.207 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[db488d29-529d-4769-bf93-e36a9ebc72f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.218 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8505b06e-0e9e-4109-8059-dbd80ed0f686]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.232 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6743103c-322c-448b-b014-e73fd269dafa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.233 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396314.232942, 71c9f70f-5f86-4723-9e4f-a4aca14211cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.233 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] VM Started (Lifecycle Event)#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.257 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4760c6-3c3a-4681-b04e-f11a328ffd3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.263 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a0fa26e3-a4ea-436c-9171-f7062731c220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 NetworkManager[45129]: <info>  [1759396314.2643] manager: (tap5d64d879-40): new Veth device (/org/freedesktop/NetworkManager/Devices/642)
Oct  2 05:11:54 np0005465604 systemd-udevd[421426]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.271 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.283 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396314.2330513, 71c9f70f-5f86-4723-9e4f-a4aca14211cb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.284 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.298 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab4c00e-d5d3-48fb-abbf-ca354f79de65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.301 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a450fc24-b625-4904-ad66-dc0c3dc407c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 NetworkManager[45129]: <info>  [1759396314.3207] device (tap5d64d879-40): carrier: link connected
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.328 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4dfa7b62-5327-4ed7-8744-f758a388f990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.343 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72b4d70c-3128-469f-92d7-1bd545a2c833]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d64d879-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:74:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719292, 'reachable_time': 17062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421670, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.358 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[26d3415d-ef9d-4f09-83f7-db042debe901]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:74d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719292, 'tstamp': 719292}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421671, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.373 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[de45972a-6560-4960-a8e5-7a91d970e1e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d64d879-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:74:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719292, 'reachable_time': 17062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 421674, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.401 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[989861a6-d96b-488f-b17f-fdf9161bbdae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.429 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[376564e5-c288-4a08-b90a-46c3e3450c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.431 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d64d879-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.431 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.431 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d64d879-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:54 np0005465604 kernel: tap5d64d879-40: entered promiscuous mode
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:54 np0005465604 NetworkManager[45129]: <info>  [1759396314.4374] manager: (tap5d64d879-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/643)
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.438 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d64d879-40, col_values=(('external_ids', {'iface-id': 'be1c87e3-582f-4bbb-a5fb-4fb837b7e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:54 np0005465604 ovn_controller[152344]: 2025-10-02T09:11:54Z|01584|binding|INFO|Releasing lport be1c87e3-582f-4bbb-a5fb-4fb837b7e882 from this chassis (sb_readonly=0)
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.442 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d64d879-42c6-456c-a212-df00bf998997.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d64d879-42c6-456c-a212-df00bf998997.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.442 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9f6978-8f8c-42f5-940d-e25cd4a59ff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.443 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-5d64d879-42c6-456c-a212-df00bf998997
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/5d64d879-42c6-456c-a212-df00bf998997.pid.haproxy
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 5d64d879-42c6-456c-a212-df00bf998997
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:11:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:11:54.445 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'env', 'PROCESS_TAG=haproxy-5d64d879-42c6-456c-a212-df00bf998997', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d64d879-42c6-456c-a212-df00bf998997.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.504 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.508 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:11:54 np0005465604 nova_compute[260603]: 2025-10-02 09:11:54.526 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:11:54 np0005465604 podman[421719]: 2025-10-02 09:11:54.590798204 +0000 UTC m=+0.043858124 container create f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 05:11:54 np0005465604 systemd[1]: Started libpod-conmon-f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049.scope.
Oct  2 05:11:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:11:54 np0005465604 podman[421719]: 2025-10-02 09:11:54.573416444 +0000 UTC m=+0.026476384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:11:54 np0005465604 podman[421719]: 2025-10-02 09:11:54.680520812 +0000 UTC m=+0.133580752 container init f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:11:54 np0005465604 podman[421719]: 2025-10-02 09:11:54.687551131 +0000 UTC m=+0.140611051 container start f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 05:11:54 np0005465604 podman[421719]: 2025-10-02 09:11:54.69106467 +0000 UTC m=+0.144124610 container attach f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 05:11:54 np0005465604 stoic_shockley[421736]: 167 167
Oct  2 05:11:54 np0005465604 systemd[1]: libpod-f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049.scope: Deactivated successfully.
Oct  2 05:11:54 np0005465604 podman[421719]: 2025-10-02 09:11:54.69329503 +0000 UTC m=+0.146354950 container died f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:11:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e08e236a8c643bf7d50ac3c60e625506d22a8f19a90b193a29f8f77f2a15302e-merged.mount: Deactivated successfully.
Oct  2 05:11:54 np0005465604 podman[421719]: 2025-10-02 09:11:54.734178829 +0000 UTC m=+0.187238739 container remove f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shockley, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:11:54 np0005465604 systemd[1]: libpod-conmon-f25a50703e079c64e5350ae02ba12cb7e8d3485cc7e36091839c452b77461049.scope: Deactivated successfully.
Oct  2 05:11:54 np0005465604 podman[421776]: 2025-10-02 09:11:54.822428832 +0000 UTC m=+0.048927432 container create eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  2 05:11:54 np0005465604 systemd[1]: Started libpod-conmon-eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099.scope.
Oct  2 05:11:54 np0005465604 podman[421776]: 2025-10-02 09:11:54.796168766 +0000 UTC m=+0.022667376 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:11:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:11:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89707fb13d0c4fb2e7f83ff5487b81f8ced57c0f936b53bc230c0c45bebc01b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:54 np0005465604 podman[421796]: 2025-10-02 09:11:54.922170551 +0000 UTC m=+0.057895209 container create ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 05:11:54 np0005465604 podman[421776]: 2025-10-02 09:11:54.926655151 +0000 UTC m=+0.153153741 container init eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 05:11:54 np0005465604 podman[421776]: 2025-10-02 09:11:54.932433791 +0000 UTC m=+0.158932381 container start eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 05:11:54 np0005465604 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [NOTICE]   (421815) : New worker (421819) forked
Oct  2 05:11:54 np0005465604 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [NOTICE]   (421815) : Loading success.
Oct  2 05:11:54 np0005465604 systemd[1]: Started libpod-conmon-ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e.scope.
Oct  2 05:11:54 np0005465604 podman[421796]: 2025-10-02 09:11:54.888499185 +0000 UTC m=+0.024223933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:11:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:11:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9048349f208ba1bf57c0fcb1654e8417f54cd7552ba9176e06f6fd6d7fca2a98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9048349f208ba1bf57c0fcb1654e8417f54cd7552ba9176e06f6fd6d7fca2a98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9048349f208ba1bf57c0fcb1654e8417f54cd7552ba9176e06f6fd6d7fca2a98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9048349f208ba1bf57c0fcb1654e8417f54cd7552ba9176e06f6fd6d7fca2a98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:11:55 np0005465604 podman[421796]: 2025-10-02 09:11:55.008353059 +0000 UTC m=+0.144077747 container init ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:11:55 np0005465604 podman[421796]: 2025-10-02 09:11:55.016993038 +0000 UTC m=+0.152717706 container start ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 05:11:55 np0005465604 podman[421796]: 2025-10-02 09:11:55.021589892 +0000 UTC m=+0.157314560 container attach ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 05:11:55 np0005465604 podman[421834]: 2025-10-02 09:11:55.137090961 +0000 UTC m=+0.059096148 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:11:55 np0005465604 podman[421833]: 2025-10-02 09:11:55.137405851 +0000 UTC m=+0.056328932 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:11:55 np0005465604 nova_compute[260603]: 2025-10-02 09:11:55.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]: {
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "osd_id": 2,
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "type": "bluestore"
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:    },
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "osd_id": 1,
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "type": "bluestore"
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:    },
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "osd_id": 0,
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:        "type": "bluestore"
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]:    }
Oct  2 05:11:55 np0005465604 inspiring_haibt[421827]: }
Oct  2 05:11:55 np0005465604 systemd[1]: libpod-ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e.scope: Deactivated successfully.
Oct  2 05:11:55 np0005465604 podman[421796]: 2025-10-02 09:11:55.995831866 +0000 UTC m=+1.131556554 container died ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:11:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9048349f208ba1bf57c0fcb1654e8417f54cd7552ba9176e06f6fd6d7fca2a98-merged.mount: Deactivated successfully.
Oct  2 05:11:56 np0005465604 podman[421796]: 2025-10-02 09:11:56.044584892 +0000 UTC m=+1.180309560 container remove ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_haibt, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:11:56 np0005465604 systemd[1]: libpod-conmon-ec5863ccee70bb1edfa4debebbcc5a0695c300aab409b4f46fc513821c7fee4e.scope: Deactivated successfully.
Oct  2 05:11:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:11:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:11:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:11:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:11:56 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2c11cc22-5886-479a-90b4-6879d4cbf1e8 does not exist
Oct  2 05:11:56 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 68ed0cde-6d2a-427f-a025-af4290ea7974 does not exist
Oct  2 05:11:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.434 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.434 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.435 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.435 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.435 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No event matching network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 in dict_keys([('network-vif-plugged', 'd5b6295d-90a7-4d25-be69-ccd7a58621c6')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.435 2 WARNING nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received unexpected event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.435 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.435 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Processing event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.436 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.437 2 DEBUG oslo_concurrency.lockutils [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.437 2 DEBUG nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No waiting events found dispatching network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.437 2 WARNING nova.compute.manager [req-d57e6038-3785-4678-ada6-13681b0a7e6f req-b3b2f5c2-2eb1-4380-a124-7ce0a39e9e92 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received unexpected event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.438 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.442 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396316.4427345, 71c9f70f-5f86-4723-9e4f-a4aca14211cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.443 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.444 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.448 2 INFO nova.virt.libvirt.driver [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance spawned successfully.#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.448 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.470 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.477 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.477 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.478 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.478 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.478 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.479 2 DEBUG nova.virt.libvirt.driver [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.482 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.519 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.540 2 INFO nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Took 18.41 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.541 2 DEBUG nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.604 2 INFO nova.compute.manager [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Took 19.73 seconds to build instance.#033[00m
Oct  2 05:11:56 np0005465604 nova_compute[260603]: 2025-10-02 09:11:56.622 2 DEBUG oslo_concurrency.lockutils [None req-66229afe-c837-4ef9-8d13-59a8d7adb714 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:11:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:11:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:11:57 np0005465604 nova_compute[260603]: 2025-10-02 09:11:57.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:11:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:11:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Oct  2 05:11:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Oct  2 05:12:00 np0005465604 nova_compute[260603]: 2025-10-02 09:12:00.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:00 np0005465604 nova_compute[260603]: 2025-10-02 09:12:00.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:00 np0005465604 NetworkManager[45129]: <info>  [1759396320.8551] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Oct  2 05:12:00 np0005465604 NetworkManager[45129]: <info>  [1759396320.8561] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Oct  2 05:12:00 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:00Z|01585|binding|INFO|Releasing lport be1c87e3-582f-4bbb-a5fb-4fb837b7e882 from this chassis (sb_readonly=0)
Oct  2 05:12:00 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:00Z|01586|binding|INFO|Releasing lport 3c52d5f8-d941-470e-b21b-5afb7b1bf813 from this chassis (sb_readonly=0)
Oct  2 05:12:00 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:00Z|01587|binding|INFO|Releasing lport be1c87e3-582f-4bbb-a5fb-4fb837b7e882 from this chassis (sb_readonly=0)
Oct  2 05:12:00 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:00Z|01588|binding|INFO|Releasing lport 3c52d5f8-d941-470e-b21b-5afb7b1bf813 from this chassis (sb_readonly=0)
Oct  2 05:12:00 np0005465604 nova_compute[260603]: 2025-10-02 09:12:00.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:00 np0005465604 nova_compute[260603]: 2025-10-02 09:12:00.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:01 np0005465604 nova_compute[260603]: 2025-10-02 09:12:01.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:12:01 np0005465604 nova_compute[260603]: 2025-10-02 09:12:01.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:12:01 np0005465604 nova_compute[260603]: 2025-10-02 09:12:01.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:12:01 np0005465604 nova_compute[260603]: 2025-10-02 09:12:01.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:02 np0005465604 nova_compute[260603]: 2025-10-02 09:12:02.000 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:12:02 np0005465604 nova_compute[260603]: 2025-10-02 09:12:02.000 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:12:02 np0005465604 nova_compute[260603]: 2025-10-02 09:12:02.000 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 05:12:02 np0005465604 nova_compute[260603]: 2025-10-02 09:12:02.001 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 71c9f70f-5f86-4723-9e4f-a4aca14211cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:12:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Oct  2 05:12:03 np0005465604 nova_compute[260603]: 2025-10-02 09:12:03.080 2 DEBUG nova.compute.manager [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-changed-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:12:03 np0005465604 nova_compute[260603]: 2025-10-02 09:12:03.080 2 DEBUG nova.compute.manager [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing instance network info cache due to event network-changed-102411d5-80b6-47af-9293-08b07c65d541. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:12:03 np0005465604 nova_compute[260603]: 2025-10-02 09:12:03.080 2 DEBUG oslo_concurrency.lockutils [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:12:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:12:05 np0005465604 nova_compute[260603]: 2025-10-02 09:12:05.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.000 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.047 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.048 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.048 2 DEBUG oslo_concurrency.lockutils [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.048 2 DEBUG nova.network.neutron [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing network info cache for port 102411d5-80b6-47af-9293-08b07c65d541 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.050 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.051 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.051 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:12:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.141 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.141 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.142 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.142 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.142 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:12:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085201974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.618 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.755 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.756 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.960 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.961 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3404MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.961 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:12:06 np0005465604 nova_compute[260603]: 2025-10-02 09:12:06.961 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.065 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 71c9f70f-5f86-4723-9e4f-a4aca14211cb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.066 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.066 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.107 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:12:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:12:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3074771945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.581 2 DEBUG nova.network.neutron [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updated VIF entry in instance network info cache for port 102411d5-80b6-47af-9293-08b07c65d541. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.582 2 DEBUG nova.network.neutron [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.590 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.595 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.603 2 DEBUG oslo_concurrency.lockutils [req-257c6bf7-2eba-4e9b-b7f6-e08f200ef09f req-13b03048-3eff-4ae4-bf93-7b3eb952802b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.613 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.687 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:12:07 np0005465604 nova_compute[260603]: 2025-10-02 09:12:07.688 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:12:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 109 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 111 op/s
Oct  2 05:12:08 np0005465604 nova_compute[260603]: 2025-10-02 09:12:08.157 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:12:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:08 np0005465604 nova_compute[260603]: 2025-10-02 09:12:08.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:12:08 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:08Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:ca:02 10.100.0.6
Oct  2 05:12:08 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:08Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:ca:02 10.100.0.6
Oct  2 05:12:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 109 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Oct  2 05:12:10 np0005465604 nova_compute[260603]: 2025-10-02 09:12:10.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:11 np0005465604 nova_compute[260603]: 2025-10-02 09:12:11.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 109 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Oct  2 05:12:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 695 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct  2 05:12:15 np0005465604 nova_compute[260603]: 2025-10-02 09:12:15.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 121 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 05:12:16 np0005465604 nova_compute[260603]: 2025-10-02 09:12:16.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:17 np0005465604 nova_compute[260603]: 2025-10-02 09:12:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:12:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:12:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 107 KiB/s wr, 24 op/s
Oct  2 05:12:20 np0005465604 nova_compute[260603]: 2025-10-02 09:12:20.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:21 np0005465604 podman[422009]: 2025-10-02 09:12:21.006345478 +0000 UTC m=+0.063216876 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 05:12:21 np0005465604 podman[422008]: 2025-10-02 09:12:21.034951937 +0000 UTC m=+0.097754189 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 05:12:21 np0005465604 nova_compute[260603]: 2025-10-02 09:12:21.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:12:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2594647811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:12:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:12:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2594647811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:12:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 107 KiB/s wr, 24 op/s
Oct  2 05:12:22 np0005465604 nova_compute[260603]: 2025-10-02 09:12:22.956 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:12:22 np0005465604 nova_compute[260603]: 2025-10-02 09:12:22.957 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:12:22 np0005465604 nova_compute[260603]: 2025-10-02 09:12:22.976 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:12:23 np0005465604 nova_compute[260603]: 2025-10-02 09:12:23.089 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:12:23 np0005465604 nova_compute[260603]: 2025-10-02 09:12:23.090 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:12:23 np0005465604 nova_compute[260603]: 2025-10-02 09:12:23.102 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:12:23 np0005465604 nova_compute[260603]: 2025-10-02 09:12:23.103 2 INFO nova.compute.claims [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:12:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:23 np0005465604 nova_compute[260603]: 2025-10-02 09:12:23.511 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:12:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:12:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410144398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.003 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.008 2 DEBUG nova.compute.provider_tree [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.042 2 DEBUG nova.scheduler.client.report [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.080 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.081 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.126 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.126 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:12:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 107 KiB/s wr, 25 op/s
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.159 2 INFO nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.190 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.279 2 DEBUG nova.policy [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.304 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.306 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.307 2 INFO nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Creating image(s)#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.338 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.369 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.399 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.403 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.505 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.505 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.506 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.506 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.578 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:12:24 np0005465604 nova_compute[260603]: 2025-10-02 09:12:24.583 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 66710b2a-3c24-45a9-a500-f29978d33f4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:12:25 np0005465604 nova_compute[260603]: 2025-10-02 09:12:25.194 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Successfully created port: 244a8221-32fa-4c1b-959f-5a29d4e651f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:12:25 np0005465604 nova_compute[260603]: 2025-10-02 09:12:25.329 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 66710b2a-3c24-45a9-a500-f29978d33f4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:12:25 np0005465604 nova_compute[260603]: 2025-10-02 09:12:25.384 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:12:25 np0005465604 nova_compute[260603]: 2025-10-02 09:12:25.595 2 DEBUG nova.objects.instance [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 66710b2a-3c24-45a9-a500-f29978d33f4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:12:25 np0005465604 nova_compute[260603]: 2025-10-02 09:12:25.610 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:12:25 np0005465604 nova_compute[260603]: 2025-10-02 09:12:25.611 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Ensure instance console log exists: /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:12:25 np0005465604 nova_compute[260603]: 2025-10-02 09:12:25.611 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:12:25 np0005465604 nova_compute[260603]: 2025-10-02 09:12:25.611 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:12:25 np0005465604 nova_compute[260603]: 2025-10-02 09:12:25.612 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:12:25 np0005465604 nova_compute[260603]: 2025-10-02 09:12:25.752 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Successfully created port: 8c6c65df-3056-4b48-96d3-e6dc8744f31d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:12:25 np0005465604 nova_compute[260603]: 2025-10-02 09:12:25.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:25 np0005465604 podman[422240]: 2025-10-02 09:12:25.995517932 +0000 UTC m=+0.058336375 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 05:12:26 np0005465604 podman[422241]: 2025-10-02 09:12:26.042803201 +0000 UTC m=+0.091378821 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:12:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 121 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct  2 05:12:26 np0005465604 nova_compute[260603]: 2025-10-02 09:12:26.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:27 np0005465604 nova_compute[260603]: 2025-10-02 09:12:27.016 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Successfully updated port: 244a8221-32fa-4c1b-959f-5a29d4e651f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:12:27 np0005465604 nova_compute[260603]: 2025-10-02 09:12:27.149 2 DEBUG nova.compute.manager [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:12:27 np0005465604 nova_compute[260603]: 2025-10-02 09:12:27.149 2 DEBUG nova.compute.manager [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing instance network info cache due to event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:12:27 np0005465604 nova_compute[260603]: 2025-10-02 09:12:27.150 2 DEBUG oslo_concurrency.lockutils [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:12:27 np0005465604 nova_compute[260603]: 2025-10-02 09:12:27.150 2 DEBUG oslo_concurrency.lockutils [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:12:27 np0005465604 nova_compute[260603]: 2025-10-02 09:12:27.150 2 DEBUG nova.network.neutron [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing network info cache for port 244a8221-32fa-4c1b-959f-5a29d4e651f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:12:27 np0005465604 nova_compute[260603]: 2025-10-02 09:12:27.340 2 DEBUG nova.network.neutron [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:12:27 np0005465604 nova_compute[260603]: 2025-10-02 09:12:27.656 2 DEBUG nova.network.neutron [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:12:27 np0005465604 nova_compute[260603]: 2025-10-02 09:12:27.680 2 DEBUG oslo_concurrency.lockutils [req-47d8abe0-1c6e-426b-b959-ce1200aaca3d req-da548ee7-6796-4bab-b817-faf3ec46a11b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:12:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:12:28
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta']
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:12:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:12:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:12:29 np0005465604 nova_compute[260603]: 2025-10-02 09:12:29.006 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Successfully updated port: 8c6c65df-3056-4b48-96d3-e6dc8744f31d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:12:29 np0005465604 nova_compute[260603]: 2025-10-02 09:12:29.023 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:12:29 np0005465604 nova_compute[260603]: 2025-10-02 09:12:29.023 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:12:29 np0005465604 nova_compute[260603]: 2025-10-02 09:12:29.024 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:12:29 np0005465604 nova_compute[260603]: 2025-10-02 09:12:29.243 2 DEBUG nova.compute.manager [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-changed-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:12:29 np0005465604 nova_compute[260603]: 2025-10-02 09:12:29.243 2 DEBUG nova.compute.manager [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing instance network info cache due to event network-changed-8c6c65df-3056-4b48-96d3-e6dc8744f31d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:12:29 np0005465604 nova_compute[260603]: 2025-10-02 09:12:29.244 2 DEBUG oslo_concurrency.lockutils [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:12:30 np0005465604 nova_compute[260603]: 2025-10-02 09:12:30.004 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:12:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:12:30 np0005465604 nova_compute[260603]: 2025-10-02 09:12:30.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:31 np0005465604 nova_compute[260603]: 2025-10-02 09:12:31.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:12:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.319 2 DEBUG nova.network.neutron [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.346 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.346 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance network_info: |[{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.347 2 DEBUG oslo_concurrency.lockutils [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.347 2 DEBUG nova.network.neutron [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing network info cache for port 8c6c65df-3056-4b48-96d3-e6dc8744f31d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.354 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Start _get_guest_xml network_info=[{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.361 2 WARNING nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.368 2 DEBUG nova.virt.libvirt.host [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.369 2 DEBUG nova.virt.libvirt.host [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.373 2 DEBUG nova.virt.libvirt.host [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.373 2 DEBUG nova.virt.libvirt.host [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.374 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.374 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.375 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.375 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.375 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.376 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.376 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.376 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.377 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.377 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.377 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.378 2 DEBUG nova.virt.hardware [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.381 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:12:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:34.849 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:12:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:34.850 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:12:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:34.851 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:12:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:12:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3726409983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.923 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.950 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:12:34 np0005465604 nova_compute[260603]: 2025-10-02 09:12:34.955 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:12:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:12:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2097381826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.382 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.384 2 DEBUG nova.virt.libvirt.vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:12:24Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.384 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.385 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.386 2 DEBUG nova.virt.libvirt.vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:12:24Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.386 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.386 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.387 2 DEBUG nova.objects.instance [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66710b2a-3c24-45a9-a500-f29978d33f4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.485 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  <uuid>66710b2a-3c24-45a9-a500-f29978d33f4f</uuid>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  <name>instance-00000090</name>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-569731112</nova:name>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:12:34</nova:creationTime>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <nova:port uuid="244a8221-32fa-4c1b-959f-5a29d4e651f1">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <nova:port uuid="8c6c65df-3056-4b48-96d3-e6dc8744f31d">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe2b:b850" ipVersion="6"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2b:b850" ipVersion="6"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <entry name="serial">66710b2a-3c24-45a9-a500-f29978d33f4f</entry>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <entry name="uuid">66710b2a-3c24-45a9-a500-f29978d33f4f</entry>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/66710b2a-3c24-45a9-a500-f29978d33f4f_disk">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:10:a2:36"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <target dev="tap244a8221-32"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:2b:b8:50"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <target dev="tap8c6c65df-30"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/console.log" append="off"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:12:35 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:12:35 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:12:35 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:12:35 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.486 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Preparing to wait for external event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.486 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.487 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.487 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.487 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Preparing to wait for external event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.487 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.487 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.488 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.488 2 DEBUG nova.virt.libvirt.vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:12:24Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.488 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.489 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.489 2 DEBUG os_vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.490 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap244a8221-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap244a8221-32, col_values=(('external_ids', {'iface-id': '244a8221-32fa-4c1b-959f-5a29d4e651f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:a2:36', 'vm-uuid': '66710b2a-3c24-45a9-a500-f29978d33f4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:35 np0005465604 NetworkManager[45129]: <info>  [1759396355.4982] manager: (tap244a8221-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.507 2 INFO os_vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32')#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.508 2 DEBUG nova.virt.libvirt.vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:12:24Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.508 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.509 2 DEBUG nova.network.os_vif_util [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.509 2 DEBUG os_vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c6c65df-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c6c65df-30, col_values=(('external_ids', {'iface-id': '8c6c65df-3056-4b48-96d3-e6dc8744f31d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:b8:50', 'vm-uuid': '66710b2a-3c24-45a9-a500-f29978d33f4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:35 np0005465604 NetworkManager[45129]: <info>  [1759396355.5163] manager: (tap8c6c65df-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.522 2 INFO os_vif [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30')#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.668 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.669 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.669 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:10:a2:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.669 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:2b:b8:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.670 2 INFO nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Using config drive#033[00m
Oct  2 05:12:35 np0005465604 nova_compute[260603]: 2025-10-02 09:12:35.696 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:12:36 np0005465604 nova_compute[260603]: 2025-10-02 09:12:36.068 2 DEBUG nova.network.neutron [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updated VIF entry in instance network info cache for port 8c6c65df-3056-4b48-96d3-e6dc8744f31d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:12:36 np0005465604 nova_compute[260603]: 2025-10-02 09:12:36.068 2 DEBUG nova.network.neutron [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:12:36 np0005465604 nova_compute[260603]: 2025-10-02 09:12:36.130 2 DEBUG oslo_concurrency.lockutils [req-83ced138-2703-42e8-a887-6064b8dc9923 req-4a7aff44-41c7-4a0c-9f35-0474fa1e1355 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:12:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:12:36 np0005465604 nova_compute[260603]: 2025-10-02 09:12:36.244 2 INFO nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Creating config drive at /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config#033[00m
Oct  2 05:12:36 np0005465604 nova_compute[260603]: 2025-10-02 09:12:36.249 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpue8bzfif execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:12:36 np0005465604 nova_compute[260603]: 2025-10-02 09:12:36.422 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpue8bzfif" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:12:36 np0005465604 nova_compute[260603]: 2025-10-02 09:12:36.466 2 DEBUG nova.storage.rbd_utils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:12:36 np0005465604 nova_compute[260603]: 2025-10-02 09:12:36.471 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config 66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:12:36 np0005465604 nova_compute[260603]: 2025-10-02 09:12:36.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct  2 05:12:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:38 np0005465604 nova_compute[260603]: 2025-10-02 09:12:38.495 2 DEBUG oslo_concurrency.processutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config 66710b2a-3c24-45a9-a500-f29978d33f4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:12:38 np0005465604 nova_compute[260603]: 2025-10-02 09:12:38.496 2 INFO nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Deleting local config drive /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f/disk.config because it was imported into RBD.#033[00m
Oct  2 05:12:38 np0005465604 kernel: tap244a8221-32: entered promiscuous mode
Oct  2 05:12:38 np0005465604 NetworkManager[45129]: <info>  [1759396358.5797] manager: (tap244a8221-32): new Tun device (/org/freedesktop/NetworkManager/Devices/648)
Oct  2 05:12:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:38Z|01589|binding|INFO|Claiming lport 244a8221-32fa-4c1b-959f-5a29d4e651f1 for this chassis.
Oct  2 05:12:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:38Z|01590|binding|INFO|244a8221-32fa-4c1b-959f-5a29d4e651f1: Claiming fa:16:3e:10:a2:36 10.100.0.11
Oct  2 05:12:38 np0005465604 nova_compute[260603]: 2025-10-02 09:12:38.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:38 np0005465604 NetworkManager[45129]: <info>  [1759396358.6112] manager: (tap8c6c65df-30): new Tun device (/org/freedesktop/NetworkManager/Devices/649)
Oct  2 05:12:38 np0005465604 kernel: tap8c6c65df-30: entered promiscuous mode
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.613 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:a2:36 10.100.0.11'], port_security=['fa:16:3e:10:a2:36 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '66710b2a-3c24-45a9-a500-f29978d33f4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-436e56fa-4885-4043-b091-8043a6f9f710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79454522-7e2a-40b8-ae72-355dd621c03a, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=244a8221-32fa-4c1b-959f-5a29d4e651f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.614 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 244a8221-32fa-4c1b-959f-5a29d4e651f1 in datapath 436e56fa-4885-4043-b091-8043a6f9f710 bound to our chassis#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.615 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 436e56fa-4885-4043-b091-8043a6f9f710#033[00m
Oct  2 05:12:38 np0005465604 nova_compute[260603]: 2025-10-02 09:12:38.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:38Z|01591|binding|INFO|Setting lport 244a8221-32fa-4c1b-959f-5a29d4e651f1 ovn-installed in OVS
Oct  2 05:12:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:38Z|01592|binding|INFO|Setting lport 244a8221-32fa-4c1b-959f-5a29d4e651f1 up in Southbound
Oct  2 05:12:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:38Z|01593|if_status|INFO|Dropped 1 log messages in last 141 seconds (most recently, 141 seconds ago) due to excessive rate
Oct  2 05:12:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:38Z|01594|if_status|INFO|Not updating pb chassis for 8c6c65df-3056-4b48-96d3-e6dc8744f31d now as sb is readonly
Oct  2 05:12:38 np0005465604 systemd-udevd[422421]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:12:38 np0005465604 nova_compute[260603]: 2025-10-02 09:12:38.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:38 np0005465604 systemd-udevd[422420]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:12:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:38Z|01595|binding|INFO|Claiming lport 8c6c65df-3056-4b48-96d3-e6dc8744f31d for this chassis.
Oct  2 05:12:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:38Z|01596|binding|INFO|8c6c65df-3056-4b48-96d3-e6dc8744f31d: Claiming fa:16:3e:2b:b8:50 2001:db8:0:1:f816:3eff:fe2b:b850 2001:db8::f816:3eff:fe2b:b850
Oct  2 05:12:38 np0005465604 nova_compute[260603]: 2025-10-02 09:12:38.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:38Z|01597|binding|INFO|Setting lport 8c6c65df-3056-4b48-96d3-e6dc8744f31d ovn-installed in OVS
Oct  2 05:12:38 np0005465604 nova_compute[260603]: 2025-10-02 09:12:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:38 np0005465604 NetworkManager[45129]: <info>  [1759396358.6456] device (tap244a8221-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.642 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7203507a-31fe-4cc6-ade6-cd9db4cd3e35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:12:38 np0005465604 NetworkManager[45129]: <info>  [1759396358.6471] device (tap244a8221-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.645 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:b8:50 2001:db8:0:1:f816:3eff:fe2b:b850 2001:db8::f816:3eff:fe2b:b850'], port_security=['fa:16:3e:2b:b8:50 2001:db8:0:1:f816:3eff:fe2b:b850 2001:db8::f816:3eff:fe2b:b850'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe2b:b850/64 2001:db8::f816:3eff:fe2b:b850/64', 'neutron:device_id': '66710b2a-3c24-45a9-a500-f29978d33f4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bb22cc4-c817-4149-925c-4cb21e573102, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8c6c65df-3056-4b48-96d3-e6dc8744f31d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:12:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:38Z|01598|binding|INFO|Setting lport 8c6c65df-3056-4b48-96d3-e6dc8744f31d up in Southbound
Oct  2 05:12:38 np0005465604 NetworkManager[45129]: <info>  [1759396358.6559] device (tap8c6c65df-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:12:38 np0005465604 NetworkManager[45129]: <info>  [1759396358.6580] device (tap8c6c65df-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:12:38 np0005465604 systemd-machined[214636]: New machine qemu-178-instance-00000090.
Oct  2 05:12:38 np0005465604 systemd[1]: Started Virtual Machine qemu-178-instance-00000090.
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.686 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[87df7eff-da37-477d-b9f8-9a7ccd7d1754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.692 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[37f78e74-ad40-4757-82ee-4540d83e0995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.725 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[28f9d231-71c6-4a21-96ef-b5e0b8cfac2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.744 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6ff2d0-47da-48fb-be96-4146bacfd944]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap436e56fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:8d:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719184, 'reachable_time': 36984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422435, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.763 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3c1f6c-f814-407f-b021-e7da7d0c0566]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap436e56fa-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719196, 'tstamp': 719196}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422439, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap436e56fa-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719199, 'tstamp': 719199}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422439, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.765 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap436e56fa-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:12:38 np0005465604 nova_compute[260603]: 2025-10-02 09:12:38.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.768 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap436e56fa-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:12:38 np0005465604 nova_compute[260603]: 2025-10-02 09:12:38.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.769 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.769 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap436e56fa-40, col_values=(('external_ids', {'iface-id': '3c52d5f8-d941-470e-b21b-5afb7b1bf813'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.769 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.771 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8c6c65df-3056-4b48-96d3-e6dc8744f31d in datapath 5d64d879-42c6-456c-a212-df00bf998997 unbound from our chassis#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.772 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d64d879-42c6-456c-a212-df00bf998997#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.789 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2b31fa-2100-4e9f-9866-e6e527f7b193]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.822 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4a53de-29ea-4d5a-bf0d-ef14b8257660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.826 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[151a37a2-18d8-489c-a995-7cf96d53a1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.857 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5cd124-7a13-41d1-8406-02169541ee8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.876 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[72d8718c-5111-4b5b-aff0-36be1259d6f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d64d879-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:74:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 2146, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 2146, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719292, 'reachable_time': 17062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422446, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.893 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c72b49ab-f3a0-4972-8125-09f3d6f741fa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d64d879-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719302, 'tstamp': 719302}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422447, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.894 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d64d879-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:12:38 np0005465604 nova_compute[260603]: 2025-10-02 09:12:38.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:12:38 np0005465604 nova_compute[260603]: 2025-10-02 09:12:38.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.898 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d64d879-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.898 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.898 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d64d879-40, col_values=(('external_ids', {'iface-id': 'be1c87e3-582f-4bbb-a5fb-4fb837b7e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:12:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:38.898 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011076865395259492 of space, bias 1.0, pg target 0.33230596185778477 quantized to 32 (current 32)
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:12:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.192 2 DEBUG nova.compute.manager [req-41599a5a-03b1-4947-91eb-cfeeee921631 req-e76f93c1-a212-499c-9161-fbbaf22d8ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.193 2 DEBUG oslo_concurrency.lockutils [req-41599a5a-03b1-4947-91eb-cfeeee921631 req-e76f93c1-a212-499c-9161-fbbaf22d8ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.194 2 DEBUG oslo_concurrency.lockutils [req-41599a5a-03b1-4947-91eb-cfeeee921631 req-e76f93c1-a212-499c-9161-fbbaf22d8ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.194 2 DEBUG oslo_concurrency.lockutils [req-41599a5a-03b1-4947-91eb-cfeeee921631 req-e76f93c1-a212-499c-9161-fbbaf22d8ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.194 2 DEBUG nova.compute.manager [req-41599a5a-03b1-4947-91eb-cfeeee921631 req-e76f93c1-a212-499c-9161-fbbaf22d8ff9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Processing event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.196 2 DEBUG nova.compute.manager [req-366f0834-2606-4241-8cf4-205356e4b55e req-ccc47a28-33e1-494c-b8b4-d055901e3263 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.196 2 DEBUG oslo_concurrency.lockutils [req-366f0834-2606-4241-8cf4-205356e4b55e req-ccc47a28-33e1-494c-b8b4-d055901e3263 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.196 2 DEBUG oslo_concurrency.lockutils [req-366f0834-2606-4241-8cf4-205356e4b55e req-ccc47a28-33e1-494c-b8b4-d055901e3263 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.197 2 DEBUG oslo_concurrency.lockutils [req-366f0834-2606-4241-8cf4-205356e4b55e req-ccc47a28-33e1-494c-b8b4-d055901e3263 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.197 2 DEBUG nova.compute.manager [req-366f0834-2606-4241-8cf4-205356e4b55e req-ccc47a28-33e1-494c-b8b4-d055901e3263 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Processing event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.665 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.666 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396359.6641946, 66710b2a-3c24-45a9-a500-f29978d33f4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.667 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] VM Started (Lifecycle Event)
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.671 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.677 2 INFO nova.virt.libvirt.driver [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance spawned successfully.
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.678 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.695 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.708 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.709 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.710 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.711 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.712 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.713 2 DEBUG nova.virt.libvirt.driver [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.718 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.759 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.760 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396359.6642907, 66710b2a-3c24-45a9-a500-f29978d33f4f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.761 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] VM Paused (Lifecycle Event)
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.785 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.789 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396359.6698976, 66710b2a-3c24-45a9-a500-f29978d33f4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.790 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] VM Resumed (Lifecycle Event)
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.795 2 INFO nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Took 15.49 seconds to spawn the instance on the hypervisor.
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.796 2 DEBUG nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.808 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.814 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.846 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.873 2 INFO nova.compute.manager [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Took 16.82 seconds to build instance.
Oct  2 05:12:39 np0005465604 nova_compute[260603]: 2025-10-02 09:12:39.891 2 DEBUG oslo_concurrency.lockutils [None req-50e4b225-8607-4ac4-a0bf-624ea5ee5e99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:12:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct  2 05:12:40 np0005465604 nova_compute[260603]: 2025-10-02 09:12:40.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:12:40 np0005465604 nova_compute[260603]: 2025-10-02 09:12:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:12:40 np0005465604 nova_compute[260603]: 2025-10-02 09:12:40.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.320 2 DEBUG nova.compute.manager [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.321 2 DEBUG oslo_concurrency.lockutils [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.321 2 DEBUG oslo_concurrency.lockutils [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.321 2 DEBUG oslo_concurrency.lockutils [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.322 2 DEBUG nova.compute.manager [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.322 2 WARNING nova.compute.manager [req-1e4ea2f0-eccd-4626-9bb4-d8dd9b5ed745 req-f40ca315-c97e-485e-8276-df73efc993aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received unexpected event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 for instance with vm_state active and task_state None.
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.384 2 DEBUG nova.compute.manager [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.385 2 DEBUG oslo_concurrency.lockutils [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.385 2 DEBUG oslo_concurrency.lockutils [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.386 2 DEBUG oslo_concurrency.lockutils [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.386 2 DEBUG nova.compute.manager [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.386 2 WARNING nova.compute.manager [req-65d957e9-ab91-4756-b09e-d472bce25bfa req-15ee2fb9-73be-4ed4-840a-ebbce66d7600 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received unexpected event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d for instance with vm_state active and task_state None.
Oct  2 05:12:41 np0005465604 nova_compute[260603]: 2025-10-02 09:12:41.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:12:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct  2 05:12:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct  2 05:12:44 np0005465604 nova_compute[260603]: 2025-10-02 09:12:44.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:44.280 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:12:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:44.281 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:12:45 np0005465604 nova_compute[260603]: 2025-10-02 09:12:45.012 2 DEBUG nova.compute.manager [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:12:45 np0005465604 nova_compute[260603]: 2025-10-02 09:12:45.012 2 DEBUG nova.compute.manager [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing instance network info cache due to event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:12:45 np0005465604 nova_compute[260603]: 2025-10-02 09:12:45.012 2 DEBUG oslo_concurrency.lockutils [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:12:45 np0005465604 nova_compute[260603]: 2025-10-02 09:12:45.013 2 DEBUG oslo_concurrency.lockutils [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:12:45 np0005465604 nova_compute[260603]: 2025-10-02 09:12:45.013 2 DEBUG nova.network.neutron [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing network info cache for port 244a8221-32fa-4c1b-959f-5a29d4e651f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:12:45 np0005465604 nova_compute[260603]: 2025-10-02 09:12:45.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct  2 05:12:46 np0005465604 nova_compute[260603]: 2025-10-02 09:12:46.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:47 np0005465604 nova_compute[260603]: 2025-10-02 09:12:47.012 2 DEBUG nova.network.neutron [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updated VIF entry in instance network info cache for port 244a8221-32fa-4c1b-959f-5a29d4e651f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:12:47 np0005465604 nova_compute[260603]: 2025-10-02 09:12:47.013 2 DEBUG nova.network.neutron [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:12:47 np0005465604 nova_compute[260603]: 2025-10-02 09:12:47.133 2 DEBUG oslo_concurrency.lockutils [req-e9ef7d57-d9b0-4e28-99f3-8e003e90e2de req-26a405d5-18a7-46a9-90e3-fc6a7f616c76 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:12:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct  2 05:12:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:12:48.283 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:12:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct  2 05:12:50 np0005465604 nova_compute[260603]: 2025-10-02 09:12:50.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:51 np0005465604 nova_compute[260603]: 2025-10-02 09:12:51.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:52 np0005465604 podman[422494]: 2025-10-02 09:12:52.038395364 +0000 UTC m=+0.090925426 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 05:12:52 np0005465604 podman[422493]: 2025-10-02 09:12:52.092435703 +0000 UTC m=+0.144380528 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:12:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 167 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Oct  2 05:12:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 179 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 92 op/s
Oct  2 05:12:55 np0005465604 nova_compute[260603]: 2025-10-02 09:12:55.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 179 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.3 MiB/s wr, 22 op/s
Oct  2 05:12:56 np0005465604 podman[422561]: 2025-10-02 09:12:56.418783899 +0000 UTC m=+0.074079563 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 05:12:56 np0005465604 podman[422562]: 2025-10-02 09:12:56.435065835 +0000 UTC m=+0.080867214 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:12:56 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:56Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:a2:36 10.100.0.11
Oct  2 05:12:56 np0005465604 ovn_controller[152344]: 2025-10-02T09:12:56Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:a2:36 10.100.0.11
Oct  2 05:12:56 np0005465604 nova_compute[260603]: 2025-10-02 09:12:56.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:12:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5fffe95c-d5bf-48c2-bb6e-83c6444ffd8b does not exist
Oct  2 05:12:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f5253d04-2671-4eee-ac12-7d4be5520576 does not exist
Oct  2 05:12:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2759a9fb-36b4-4498-b7b3-966d772f8836 does not exist
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:12:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:12:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:12:58 np0005465604 podman[422851]: 2025-10-02 09:12:58.105725702 +0000 UTC m=+0.074554068 container create c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 05:12:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct  2 05:12:58 np0005465604 podman[422851]: 2025-10-02 09:12:58.067765132 +0000 UTC m=+0.036593518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:12:58 np0005465604 systemd[1]: Started libpod-conmon-c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c.scope.
Oct  2 05:12:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:12:58 np0005465604 podman[422851]: 2025-10-02 09:12:58.266803508 +0000 UTC m=+0.235631954 container init c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:12:58 np0005465604 podman[422851]: 2025-10-02 09:12:58.278598435 +0000 UTC m=+0.247426791 container start c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 05:12:58 np0005465604 cool_swirles[422867]: 167 167
Oct  2 05:12:58 np0005465604 systemd[1]: libpod-c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c.scope: Deactivated successfully.
Oct  2 05:12:58 np0005465604 podman[422851]: 2025-10-02 09:12:58.320736913 +0000 UTC m=+0.289565319 container attach c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:12:58 np0005465604 podman[422851]: 2025-10-02 09:12:58.322422956 +0000 UTC m=+0.291251342 container died c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 05:12:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:12:58 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2845016db65fc1eb9f142e76b4b6335d499a55322a8ebd2c7d93afe5ba5ded70-merged.mount: Deactivated successfully.
Oct  2 05:12:58 np0005465604 podman[422851]: 2025-10-02 09:12:58.526488138 +0000 UTC m=+0.495316494 container remove c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:12:58 np0005465604 systemd[1]: libpod-conmon-c07cee19d1661024393ad5f287f8abb03fb501798f77e7d5f1531863a023ae7c.scope: Deactivated successfully.
Oct  2 05:12:58 np0005465604 podman[422890]: 2025-10-02 09:12:58.763613317 +0000 UTC m=+0.058509650 container create 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 05:12:58 np0005465604 podman[422890]: 2025-10-02 09:12:58.731738456 +0000 UTC m=+0.026634799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:12:58 np0005465604 systemd[1]: Started libpod-conmon-661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7.scope.
Oct  2 05:12:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:12:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:12:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:12:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:12:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:12:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:12:58 np0005465604 podman[422890]: 2025-10-02 09:12:58.89984585 +0000 UTC m=+0.194742253 container init 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 05:12:58 np0005465604 podman[422890]: 2025-10-02 09:12:58.910087278 +0000 UTC m=+0.204983601 container start 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 05:12:58 np0005465604 podman[422890]: 2025-10-02 09:12:58.919906603 +0000 UTC m=+0.214802926 container attach 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:12:59 np0005465604 nova_compute[260603]: 2025-10-02 09:12:59.523 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:13:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct  2 05:13:00 np0005465604 unruffled_chandrasekhar[422906]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:13:00 np0005465604 unruffled_chandrasekhar[422906]: --> relative data size: 1.0
Oct  2 05:13:00 np0005465604 unruffled_chandrasekhar[422906]: --> All data devices are unavailable
Oct  2 05:13:00 np0005465604 systemd[1]: libpod-661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7.scope: Deactivated successfully.
Oct  2 05:13:00 np0005465604 systemd[1]: libpod-661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7.scope: Consumed 1.224s CPU time.
Oct  2 05:13:00 np0005465604 podman[422890]: 2025-10-02 09:13:00.219452538 +0000 UTC m=+1.514348861 container died 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:13:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-87dc222d2cb7e41a1df234c8cd3f7dee4289fc6724b31c13047d3ca7af6ce17c-merged.mount: Deactivated successfully.
Oct  2 05:13:00 np0005465604 podman[422890]: 2025-10-02 09:13:00.320355833 +0000 UTC m=+1.615252166 container remove 661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 05:13:00 np0005465604 systemd[1]: libpod-conmon-661e866984a7c341cd482da916b5620a1c3cc64f8e094710e08b6bd0a44328f7.scope: Deactivated successfully.
Oct  2 05:13:00 np0005465604 nova_compute[260603]: 2025-10-02 09:13:00.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:00 np0005465604 podman[423085]: 2025-10-02 09:13:00.981072685 +0000 UTC m=+0.098941895 container create 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:13:01 np0005465604 podman[423085]: 2025-10-02 09:13:00.914052913 +0000 UTC m=+0.031922143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:13:01 np0005465604 systemd[1]: Started libpod-conmon-5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a.scope.
Oct  2 05:13:01 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:13:01 np0005465604 podman[423085]: 2025-10-02 09:13:01.08899677 +0000 UTC m=+0.206866010 container init 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:13:01 np0005465604 podman[423085]: 2025-10-02 09:13:01.095739659 +0000 UTC m=+0.213608859 container start 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 05:13:01 np0005465604 cool_chatelet[423102]: 167 167
Oct  2 05:13:01 np0005465604 systemd[1]: libpod-5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a.scope: Deactivated successfully.
Oct  2 05:13:01 np0005465604 podman[423085]: 2025-10-02 09:13:01.127122024 +0000 UTC m=+0.244991234 container attach 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:13:01 np0005465604 podman[423085]: 2025-10-02 09:13:01.127410423 +0000 UTC m=+0.245279633 container died 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct  2 05:13:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay-18287e2d3558a5d5e735e600d53a7cdedf203cd007898ef7659bfbcc927e6f38-merged.mount: Deactivated successfully.
Oct  2 05:13:01 np0005465604 podman[423085]: 2025-10-02 09:13:01.324875739 +0000 UTC m=+0.442744949 container remove 5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:13:01 np0005465604 systemd[1]: libpod-conmon-5691e174b40d2b495e112b1473daa0e5a10a6fdf80b912d63ef158e5d893142a.scope: Deactivated successfully.
Oct  2 05:13:01 np0005465604 nova_compute[260603]: 2025-10-02 09:13:01.522 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:13:01 np0005465604 nova_compute[260603]: 2025-10-02 09:13:01.523 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:13:01 np0005465604 nova_compute[260603]: 2025-10-02 09:13:01.523 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:13:01 np0005465604 podman[423126]: 2025-10-02 09:13:01.486935125 +0000 UTC m=+0.021868830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:13:01 np0005465604 podman[423126]: 2025-10-02 09:13:01.585576641 +0000 UTC m=+0.120510316 container create 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:13:01 np0005465604 systemd[1]: Started libpod-conmon-6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e.scope.
Oct  2 05:13:01 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:13:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e83ff482772a262d809fa0f7b3b5896696d6a2c0d2bddfc9a4a46081cc279/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:13:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e83ff482772a262d809fa0f7b3b5896696d6a2c0d2bddfc9a4a46081cc279/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:13:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e83ff482772a262d809fa0f7b3b5896696d6a2c0d2bddfc9a4a46081cc279/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:13:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170e83ff482772a262d809fa0f7b3b5896696d6a2c0d2bddfc9a4a46081cc279/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:13:01 np0005465604 nova_compute[260603]: 2025-10-02 09:13:01.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:01 np0005465604 podman[423126]: 2025-10-02 09:13:01.760712153 +0000 UTC m=+0.295645898 container init 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:13:01 np0005465604 podman[423126]: 2025-10-02 09:13:01.768032451 +0000 UTC m=+0.302966126 container start 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:13:01 np0005465604 podman[423126]: 2025-10-02 09:13:01.823297199 +0000 UTC m=+0.358230954 container attach 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:13:02 np0005465604 nova_compute[260603]: 2025-10-02 09:13:02.004 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:13:02 np0005465604 nova_compute[260603]: 2025-10-02 09:13:02.004 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:13:02 np0005465604 nova_compute[260603]: 2025-10-02 09:13:02.004 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 05:13:02 np0005465604 nova_compute[260603]: 2025-10-02 09:13:02.004 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 71c9f70f-5f86-4723-9e4f-a4aca14211cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:13:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]: {
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:    "0": [
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:        {
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "devices": [
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "/dev/loop3"
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            ],
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_name": "ceph_lv0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_size": "21470642176",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "name": "ceph_lv0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "tags": {
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.cluster_name": "ceph",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.crush_device_class": "",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.encrypted": "0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.osd_id": "0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.type": "block",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.vdo": "0"
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            },
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "type": "block",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "vg_name": "ceph_vg0"
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:        }
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:    ],
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:    "1": [
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:        {
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "devices": [
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "/dev/loop4"
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            ],
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_name": "ceph_lv1",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_size": "21470642176",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "name": "ceph_lv1",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "tags": {
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.cluster_name": "ceph",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.crush_device_class": "",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.encrypted": "0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.osd_id": "1",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.type": "block",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.vdo": "0"
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            },
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "type": "block",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "vg_name": "ceph_vg1"
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:        }
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:    ],
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:    "2": [
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:        {
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "devices": [
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "/dev/loop5"
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            ],
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_name": "ceph_lv2",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_size": "21470642176",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "name": "ceph_lv2",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "tags": {
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.cluster_name": "ceph",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.crush_device_class": "",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.encrypted": "0",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.osd_id": "2",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.type": "block",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:                "ceph.vdo": "0"
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            },
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "type": "block",
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:            "vg_name": "ceph_vg2"
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:        }
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]:    ]
Oct  2 05:13:02 np0005465604 wizardly_raman[423143]: }
Oct  2 05:13:02 np0005465604 systemd[1]: libpod-6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e.scope: Deactivated successfully.
Oct  2 05:13:02 np0005465604 conmon[423143]: conmon 6d880ec1795572fbce46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e.scope/container/memory.events
Oct  2 05:13:02 np0005465604 podman[423152]: 2025-10-02 09:13:02.622469443 +0000 UTC m=+0.025956707 container died 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:13:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-170e83ff482772a262d809fa0f7b3b5896696d6a2c0d2bddfc9a4a46081cc279-merged.mount: Deactivated successfully.
Oct  2 05:13:02 np0005465604 podman[423152]: 2025-10-02 09:13:02.736661322 +0000 UTC m=+0.140148586 container remove 6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:13:02 np0005465604 systemd[1]: libpod-conmon-6d880ec1795572fbce464e7c81d948960fd9bab67e2f4a99b7304d699c54804e.scope: Deactivated successfully.
Oct  2 05:13:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:13:03 np0005465604 podman[423308]: 2025-10-02 09:13:03.370482718 +0000 UTC m=+0.021514209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:13:03 np0005465604 podman[423308]: 2025-10-02 09:13:03.60348764 +0000 UTC m=+0.254519111 container create 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 05:13:03 np0005465604 systemd[1]: Started libpod-conmon-7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21.scope.
Oct  2 05:13:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:13:03 np0005465604 podman[423308]: 2025-10-02 09:13:03.794548807 +0000 UTC m=+0.445580318 container init 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 05:13:03 np0005465604 podman[423308]: 2025-10-02 09:13:03.806728296 +0000 UTC m=+0.457759767 container start 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:13:03 np0005465604 youthful_heisenberg[423324]: 167 167
Oct  2 05:13:03 np0005465604 systemd[1]: libpod-7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21.scope: Deactivated successfully.
Oct  2 05:13:03 np0005465604 podman[423308]: 2025-10-02 09:13:03.812341609 +0000 UTC m=+0.463373110 container attach 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:13:03 np0005465604 podman[423308]: 2025-10-02 09:13:03.812989349 +0000 UTC m=+0.464020810 container died 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:13:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a15ec1af6dd370fbcee5a5ce84e3d16f683c902e32b33156400230131fa6226f-merged.mount: Deactivated successfully.
Oct  2 05:13:03 np0005465604 podman[423308]: 2025-10-02 09:13:03.969424021 +0000 UTC m=+0.620455492 container remove 7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_heisenberg, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:13:03 np0005465604 systemd[1]: libpod-conmon-7086ed62347484d523381ca3b3342ad8295927717b55405d5a8b1ba039a7ce21.scope: Deactivated successfully.
Oct  2 05:13:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct  2 05:13:04 np0005465604 podman[423346]: 2025-10-02 09:13:04.183646518 +0000 UTC m=+0.029905460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:13:04 np0005465604 podman[423346]: 2025-10-02 09:13:04.277886397 +0000 UTC m=+0.124145309 container create 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:13:04 np0005465604 nova_compute[260603]: 2025-10-02 09:13:04.370 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:13:04 np0005465604 nova_compute[260603]: 2025-10-02 09:13:04.387 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:13:04 np0005465604 nova_compute[260603]: 2025-10-02 09:13:04.387 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 05:13:04 np0005465604 systemd[1]: Started libpod-conmon-84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409.scope.
Oct  2 05:13:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:13:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d2fcdc3f7c8784be87085719da67c3cd261ccec4a971f072019f10d07ced95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:13:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d2fcdc3f7c8784be87085719da67c3cd261ccec4a971f072019f10d07ced95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:13:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d2fcdc3f7c8784be87085719da67c3cd261ccec4a971f072019f10d07ced95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:13:04 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d2fcdc3f7c8784be87085719da67c3cd261ccec4a971f072019f10d07ced95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:13:04 np0005465604 podman[423346]: 2025-10-02 09:13:04.481124903 +0000 UTC m=+0.327383825 container init 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 05:13:04 np0005465604 podman[423346]: 2025-10-02 09:13:04.487822131 +0000 UTC m=+0.334081043 container start 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:13:04 np0005465604 nova_compute[260603]: 2025-10-02 09:13:04.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:13:04 np0005465604 podman[423346]: 2025-10-02 09:13:04.549960822 +0000 UTC m=+0.396219764 container attach 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:13:05 np0005465604 nova_compute[260603]: 2025-10-02 09:13:05.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:13:05 np0005465604 nova_compute[260603]: 2025-10-02 09:13:05.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]: {
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "osd_id": 2,
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "type": "bluestore"
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:    },
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "osd_id": 1,
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "type": "bluestore"
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:    },
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "osd_id": 0,
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:        "type": "bluestore"
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]:    }
Oct  2 05:13:05 np0005465604 bold_gagarin[423362]: }
Oct  2 05:13:05 np0005465604 nova_compute[260603]: 2025-10-02 09:13:05.591 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:05 np0005465604 nova_compute[260603]: 2025-10-02 09:13:05.592 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:05 np0005465604 nova_compute[260603]: 2025-10-02 09:13:05.593 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:05 np0005465604 nova_compute[260603]: 2025-10-02 09:13:05.593 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:13:05 np0005465604 nova_compute[260603]: 2025-10-02 09:13:05.593 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:13:05 np0005465604 systemd[1]: libpod-84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409.scope: Deactivated successfully.
Oct  2 05:13:05 np0005465604 systemd[1]: libpod-84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409.scope: Consumed 1.123s CPU time.
Oct  2 05:13:05 np0005465604 podman[423396]: 2025-10-02 09:13:05.692578309 +0000 UTC m=+0.044206124 container died 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct  2 05:13:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-72d2fcdc3f7c8784be87085719da67c3cd261ccec4a971f072019f10d07ced95-merged.mount: Deactivated successfully.
Oct  2 05:13:05 np0005465604 podman[423396]: 2025-10-02 09:13:05.931647839 +0000 UTC m=+0.283275564 container remove 84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:13:05 np0005465604 systemd[1]: libpod-conmon-84a5f56d7a39ec13f8266e7d83f8c8e68b469cc3fb43372c1b62a0f74272c409.scope: Deactivated successfully.
Oct  2 05:13:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:13:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:13:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:13:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:13:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c46c46df-1b76-4338-8dbe-48bb021a8ad2 does not exist
Oct  2 05:13:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 3e44f3d1-e65c-4ca2-a43b-f8650d375477 does not exist
Oct  2 05:13:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:13:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1608232646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.102 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:13:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 817 KiB/s wr, 48 op/s
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.243 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.245 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.249 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.250 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.432 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.433 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3156MB free_disk=59.8972053527832GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.434 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.434 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.680 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 71c9f70f-5f86-4723-9e4f-a4aca14211cb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.681 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 66710b2a-3c24-45a9-a500-f29978d33f4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.681 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.681 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:06 np0005465604 nova_compute[260603]: 2025-10-02 09:13:06.750 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:13:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:13:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:13:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:13:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280372030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:13:07 np0005465604 nova_compute[260603]: 2025-10-02 09:13:07.285 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:13:07 np0005465604 nova_compute[260603]: 2025-10-02 09:13:07.294 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:13:07 np0005465604 nova_compute[260603]: 2025-10-02 09:13:07.317 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:13:07 np0005465604 nova_compute[260603]: 2025-10-02 09:13:07.355 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:13:07 np0005465604 nova_compute[260603]: 2025-10-02 09:13:07.356 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 817 KiB/s wr, 48 op/s
Oct  2 05:13:08 np0005465604 nova_compute[260603]: 2025-10-02 09:13:08.358 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:13:08 np0005465604 nova_compute[260603]: 2025-10-02 09:13:08.359 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:13:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:13:09 np0005465604 nova_compute[260603]: 2025-10-02 09:13:09.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:13:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct  2 05:13:10 np0005465604 nova_compute[260603]: 2025-10-02 09:13:10.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:11 np0005465604 nova_compute[260603]: 2025-10-02 09:13:11.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct  2 05:13:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:13:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 20 KiB/s wr, 1 op/s
Oct  2 05:13:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:14Z|01599|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Oct  2 05:13:15 np0005465604 nova_compute[260603]: 2025-10-02 09:13:15.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:13:15 np0005465604 nova_compute[260603]: 2025-10-02 09:13:15.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Oct  2 05:13:16 np0005465604 nova_compute[260603]: 2025-10-02 09:13:16.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 1 op/s
Oct  2 05:13:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:13:18 np0005465604 nova_compute[260603]: 2025-10-02 09:13:18.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:13:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 1 op/s
Oct  2 05:13:20 np0005465604 nova_compute[260603]: 2025-10-02 09:13:20.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.508 2 DEBUG nova.compute.manager [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.508 2 DEBUG nova.compute.manager [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing instance network info cache due to event network-changed-244a8221-32fa-4c1b-959f-5a29d4e651f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.509 2 DEBUG oslo_concurrency.lockutils [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.509 2 DEBUG oslo_concurrency.lockutils [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.509 2 DEBUG nova.network.neutron [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Refreshing network info cache for port 244a8221-32fa-4c1b-959f-5a29d4e651f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.575 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.576 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.577 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.578 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.578 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.580 2 INFO nova.compute.manager [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Terminating instance#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.583 2 DEBUG nova.compute.manager [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:13:21 np0005465604 nova_compute[260603]: 2025-10-02 09:13:21.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 1 op/s
Oct  2 05:13:22 np0005465604 kernel: tap244a8221-32 (unregistering): left promiscuous mode
Oct  2 05:13:22 np0005465604 NetworkManager[45129]: <info>  [1759396402.3218] device (tap244a8221-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:13:22 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:22Z|01600|binding|INFO|Releasing lport 244a8221-32fa-4c1b-959f-5a29d4e651f1 from this chassis (sb_readonly=0)
Oct  2 05:13:22 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:22Z|01601|binding|INFO|Setting lport 244a8221-32fa-4c1b-959f-5a29d4e651f1 down in Southbound
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:22Z|01602|binding|INFO|Removing iface tap244a8221-32 ovn-installed in OVS
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.357 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:a2:36 10.100.0.11'], port_security=['fa:16:3e:10:a2:36 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '66710b2a-3c24-45a9-a500-f29978d33f4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-436e56fa-4885-4043-b091-8043a6f9f710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79454522-7e2a-40b8-ae72-355dd621c03a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=244a8221-32fa-4c1b-959f-5a29d4e651f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.360 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 244a8221-32fa-4c1b-959f-5a29d4e651f1 in datapath 436e56fa-4885-4043-b091-8043a6f9f710 unbound from our chassis#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.363 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 436e56fa-4885-4043-b091-8043a6f9f710#033[00m
Oct  2 05:13:22 np0005465604 kernel: tap8c6c65df-30 (unregistering): left promiscuous mode
Oct  2 05:13:22 np0005465604 NetworkManager[45129]: <info>  [1759396402.3725] device (tap8c6c65df-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.396 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d12060c9-2e3f-46ee-b6ce-ff19b16f3f45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:22Z|01603|binding|INFO|Releasing lport 8c6c65df-3056-4b48-96d3-e6dc8744f31d from this chassis (sb_readonly=0)
Oct  2 05:13:22 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:22Z|01604|binding|INFO|Setting lport 8c6c65df-3056-4b48-96d3-e6dc8744f31d down in Southbound
Oct  2 05:13:22 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:22Z|01605|binding|INFO|Removing iface tap8c6c65df-30 ovn-installed in OVS
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Deactivated successfully.
Oct  2 05:13:22 np0005465604 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Consumed 14.636s CPU time.
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.436 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:b8:50 2001:db8:0:1:f816:3eff:fe2b:b850 2001:db8::f816:3eff:fe2b:b850'], port_security=['fa:16:3e:2b:b8:50 2001:db8:0:1:f816:3eff:fe2b:b850 2001:db8::f816:3eff:fe2b:b850'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe2b:b850/64 2001:db8::f816:3eff:fe2b:b850/64', 'neutron:device_id': '66710b2a-3c24-45a9-a500-f29978d33f4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bb22cc4-c817-4149-925c-4cb21e573102, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=8c6c65df-3056-4b48-96d3-e6dc8744f31d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:13:22 np0005465604 systemd-machined[214636]: Machine qemu-178-instance-00000090 terminated.
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.451 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d5cdf5-4278-43ad-b3b3-2135b1daffa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.455 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a83459-d5f8-4388-8ecf-720a526a87d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 podman[423508]: 2025-10-02 09:13:22.473054627 +0000 UTC m=+0.115764029 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.499 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[cf19d383-d239-4d08-96e1-c83046596725]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 podman[423505]: 2025-10-02 09:13:22.510616164 +0000 UTC m=+0.161552151 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.526 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa2dc92-09a7-41af-bc7b-31b9f7857eda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap436e56fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:8d:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719184, 'reachable_time': 36984, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423562, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.546 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[47e335fd-4c73-4012-bee1-35b7af5bd692]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap436e56fa-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719196, 'tstamp': 719196}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423563, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap436e56fa-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719199, 'tstamp': 719199}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423563, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.548 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap436e56fa-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.559 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap436e56fa-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.559 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.560 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap436e56fa-40, col_values=(('external_ids', {'iface-id': '3c52d5f8-d941-470e-b21b-5afb7b1bf813'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.561 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.562 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 8c6c65df-3056-4b48-96d3-e6dc8744f31d in datapath 5d64d879-42c6-456c-a212-df00bf998997 unbound from our chassis#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.565 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d64d879-42c6-456c-a212-df00bf998997#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.584 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3469a0fc-08ee-47e1-a5c4-088127e5e797]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 NetworkManager[45129]: <info>  [1759396402.6129] manager: (tap244a8221-32): new Tun device (/org/freedesktop/NetworkManager/Devices/650)
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.620 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac4c139-7b26-49a2-af7f-9444feaafe2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.624 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe8ce5b-18e0-49ef-90a6-533e6e03272b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.644 2 INFO nova.virt.libvirt.driver [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Instance destroyed successfully.#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.645 2 DEBUG nova.objects.instance [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 66710b2a-3c24-45a9-a500-f29978d33f4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.666 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4d7b8e-4dca-466b-a964-81dbb0718b0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.676 2 DEBUG nova.virt.libvirt.vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:12:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:12:39Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.678 2 DEBUG nova.network.os_vif_util [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.678 2 DEBUG nova.network.os_vif_util [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.679 2 DEBUG os_vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap244a8221-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.691 2 INFO os_vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=244a8221-32fa-4c1b-959f-5a29d4e651f1,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap244a8221-32')#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.692 2 DEBUG nova.virt.libvirt.vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569731112',display_name='tempest-TestGettingAddress-server-569731112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569731112',id=144,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:12:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-i93zg3s6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:12:39Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=66710b2a-3c24-45a9-a500-f29978d33f4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.692 2 DEBUG nova.network.os_vif_util [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.693 2 DEBUG nova.network.os_vif_util [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.693 2 DEBUG os_vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.694 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c6c65df-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.698 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9b1b93-7372-40a9-9a4d-fc252537c909]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d64d879-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:74:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3460, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 6, 'rx_bytes': 3460, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 445], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719292, 'reachable_time': 17062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423587, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.702 2 INFO os_vif [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:b8:50,bridge_name='br-int',has_traffic_filtering=True,id=8c6c65df-3056-4b48-96d3-e6dc8744f31d,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c6c65df-30')#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.724 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a9a0fb-8cf2-477a-9b9b-b17368aa567c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d64d879-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719302, 'tstamp': 719302}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423593, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.726 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d64d879-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d64d879-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.729 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.730 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d64d879-40, col_values=(('external_ids', {'iface-id': 'be1c87e3-582f-4bbb-a5fb-4fb837b7e882'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:22.730 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.787 2 DEBUG nova.compute.manager [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-unplugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.787 2 DEBUG oslo_concurrency.lockutils [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.787 2 DEBUG oslo_concurrency.lockutils [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.788 2 DEBUG oslo_concurrency.lockutils [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.788 2 DEBUG nova.compute.manager [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-unplugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:13:22 np0005465604 nova_compute[260603]: 2025-10-02 09:13:22.788 2 DEBUG nova.compute.manager [req-f9565c87-7978-4d73-a7df-2bb8fb974fed req-b4c6c881-ad4e-4f1f-93a5-31e3830faea0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-unplugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.044 2 DEBUG nova.network.neutron [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updated VIF entry in instance network info cache for port 244a8221-32fa-4c1b-959f-5a29d4e651f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.045 2 DEBUG nova.network.neutron [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [{"id": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "address": "fa:16:3e:10:a2:36", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap244a8221-32", "ovs_interfaceid": "244a8221-32fa-4c1b-959f-5a29d4e651f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "address": "fa:16:3e:2b:b8:50", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:b850", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c6c65df-30", "ovs_interfaceid": "8c6c65df-3056-4b48-96d3-e6dc8744f31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:13:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:23.073 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:13:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:23.074 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.077 2 DEBUG oslo_concurrency.lockutils [req-7dcf3e68-4686-4561-9914-8611dcf5732f req-e4e9bbec-0cf4-4d75-95a2-823189737866 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-66710b2a-3c24-45a9-a500-f29978d33f4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:13:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.495 2 INFO nova.virt.libvirt.driver [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Deleting instance files /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f_del#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.496 2 INFO nova.virt.libvirt.driver [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Deletion of /var/lib/nova/instances/66710b2a-3c24-45a9-a500-f29978d33f4f_del complete#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.553 2 INFO nova.compute.manager [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Took 1.97 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.554 2 DEBUG oslo.service.loopingcall [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.554 2 DEBUG nova.compute.manager [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.554 2 DEBUG nova.network.neutron [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.629 2 DEBUG nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-unplugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.629 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.630 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.630 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.630 2 DEBUG nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-unplugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.631 2 DEBUG nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-unplugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.631 2 DEBUG nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.632 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.632 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.632 2 DEBUG oslo_concurrency.lockutils [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.633 2 DEBUG nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:13:23 np0005465604 nova_compute[260603]: 2025-10-02 09:13:23.633 2 WARNING nova.compute.manager [req-95218bc7-d999-4bbb-b4fa-f2e5f75b22cf req-340fb86f-acf9-4622-a435-bff3ac0289ec 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received unexpected event network-vif-plugged-244a8221-32fa-4c1b-959f-5a29d4e651f1 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:13:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 9.4 KiB/s wr, 4 op/s
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.701 2 DEBUG nova.network.neutron [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.721 2 INFO nova.compute.manager [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Took 1.17 seconds to deallocate network for instance.#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.787 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.788 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.893 2 DEBUG nova.compute.manager [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.894 2 DEBUG oslo_concurrency.lockutils [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.894 2 DEBUG oslo_concurrency.lockutils [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.894 2 DEBUG oslo_concurrency.lockutils [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.894 2 DEBUG nova.compute.manager [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] No waiting events found dispatching network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.894 2 WARNING nova.compute.manager [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received unexpected event network-vif-plugged-8c6c65df-3056-4b48-96d3-e6dc8744f31d for instance with vm_state deleted and task_state None.#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.894 2 DEBUG nova.compute.manager [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-deleted-244a8221-32fa-4c1b-959f-5a29d4e651f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.895 2 DEBUG nova.compute.manager [req-49df157b-fc48-44cb-8df0-19cbe382617e req-5c74d170-2e02-4a7a-9357-4c9444446aeb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Received event network-vif-deleted-8c6c65df-3056-4b48-96d3-e6dc8744f31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:24 np0005465604 nova_compute[260603]: 2025-10-02 09:13:24.896 2 DEBUG oslo_concurrency.processutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:13:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:13:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601005303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:13:25 np0005465604 nova_compute[260603]: 2025-10-02 09:13:25.357 2 DEBUG oslo_concurrency.processutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:13:25 np0005465604 nova_compute[260603]: 2025-10-02 09:13:25.364 2 DEBUG nova.compute.provider_tree [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:13:25 np0005465604 nova_compute[260603]: 2025-10-02 09:13:25.381 2 DEBUG nova.scheduler.client.report [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:13:25 np0005465604 nova_compute[260603]: 2025-10-02 09:13:25.402 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:25 np0005465604 nova_compute[260603]: 2025-10-02 09:13:25.429 2 INFO nova.scheduler.client.report [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 66710b2a-3c24-45a9-a500-f29978d33f4f#033[00m
Oct  2 05:13:25 np0005465604 nova_compute[260603]: 2025-10-02 09:13:25.482 2 DEBUG oslo_concurrency.lockutils [None req-dfc30caf-c278-4af4-b4bd-1534aa9dce8d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "66710b2a-3c24-45a9-a500-f29978d33f4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct  2 05:13:26 np0005465604 nova_compute[260603]: 2025-10-02 09:13:26.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 podman[423631]: 2025-10-02 09:13:27.01207611 +0000 UTC m=+0.079456689 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001)
Oct  2 05:13:27 np0005465604 podman[423632]: 2025-10-02 09:13:27.030612787 +0000 UTC m=+0.093527448 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.462 2 DEBUG nova.compute.manager [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-changed-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.463 2 DEBUG nova.compute.manager [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing instance network info cache due to event network-changed-102411d5-80b6-47af-9293-08b07c65d541. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.463 2 DEBUG oslo_concurrency.lockutils [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.463 2 DEBUG oslo_concurrency.lockutils [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.463 2 DEBUG nova.network.neutron [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Refreshing network info cache for port 102411d5-80b6-47af-9293-08b07c65d541 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.524 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.524 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.524 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.524 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.525 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.526 2 INFO nova.compute.manager [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Terminating instance#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.526 2 DEBUG nova.compute.manager [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:13:27 np0005465604 kernel: tap102411d5-80 (unregistering): left promiscuous mode
Oct  2 05:13:27 np0005465604 NetworkManager[45129]: <info>  [1759396407.5801] device (tap102411d5-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:27Z|01606|binding|INFO|Releasing lport 102411d5-80b6-47af-9293-08b07c65d541 from this chassis (sb_readonly=0)
Oct  2 05:13:27 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:27Z|01607|binding|INFO|Setting lport 102411d5-80b6-47af-9293-08b07c65d541 down in Southbound
Oct  2 05:13:27 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:27Z|01608|binding|INFO|Removing iface tap102411d5-80 ovn-installed in OVS
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.607 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:ca:02 10.100.0.6'], port_security=['fa:16:3e:05:ca:02 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '71c9f70f-5f86-4723-9e4f-a4aca14211cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-436e56fa-4885-4043-b091-8043a6f9f710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79454522-7e2a-40b8-ae72-355dd621c03a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=102411d5-80b6-47af-9293-08b07c65d541) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.608 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 102411d5-80b6-47af-9293-08b07c65d541 in datapath 436e56fa-4885-4043-b091-8043a6f9f710 unbound from our chassis#033[00m
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.609 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 436e56fa-4885-4043-b091-8043a6f9f710, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.610 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4199d5-32e1-4f98-9f67-0f1fcd21eb42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.611 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710 namespace which is not needed anymore#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 kernel: tapd5b6295d-90 (unregistering): left promiscuous mode
Oct  2 05:13:27 np0005465604 NetworkManager[45129]: <info>  [1759396407.6299] device (tapd5b6295d-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:27Z|01609|binding|INFO|Releasing lport d5b6295d-90a7-4d25-be69-ccd7a58621c6 from this chassis (sb_readonly=0)
Oct  2 05:13:27 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:27Z|01610|binding|INFO|Setting lport d5b6295d-90a7-4d25-be69-ccd7a58621c6 down in Southbound
Oct  2 05:13:27 np0005465604 ovn_controller[152344]: 2025-10-02T09:13:27Z|01611|binding|INFO|Removing iface tapd5b6295d-90 ovn-installed in OVS
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.648 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:62:03 2001:db8:0:1:f816:3eff:fe3c:6203 2001:db8::f816:3eff:fe3c:6203'], port_security=['fa:16:3e:3c:62:03 2001:db8:0:1:f816:3eff:fe3c:6203 2001:db8::f816:3eff:fe3c:6203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3c:6203/64 2001:db8::f816:3eff:fe3c:6203/64', 'neutron:device_id': '71c9f70f-5f86-4723-9e4f-a4aca14211cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d64d879-42c6-456c-a212-df00bf998997', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f053077-9033-462a-8246-131d2284fb71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bb22cc4-c817-4149-925c-4cb21e573102, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=d5b6295d-90a7-4d25-be69-ccd7a58621c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Oct  2 05:13:27 np0005465604 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Consumed 16.258s CPU time.
Oct  2 05:13:27 np0005465604 systemd-machined[214636]: Machine qemu-177-instance-0000008f terminated.
Oct  2 05:13:27 np0005465604 NetworkManager[45129]: <info>  [1759396407.7608] manager: (tapd5b6295d-90): new Tun device (/org/freedesktop/NetworkManager/Devices/651)
Oct  2 05:13:27 np0005465604 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [NOTICE]   (421604) : haproxy version is 2.8.14-c23fe91
Oct  2 05:13:27 np0005465604 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [NOTICE]   (421604) : path to executable is /usr/sbin/haproxy
Oct  2 05:13:27 np0005465604 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [WARNING]  (421604) : Exiting Master process...
Oct  2 05:13:27 np0005465604 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [ALERT]    (421604) : Current worker (421624) exited with code 143 (Terminated)
Oct  2 05:13:27 np0005465604 neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710[421591]: [WARNING]  (421604) : All workers exited. Exiting... (0)
Oct  2 05:13:27 np0005465604 systemd[1]: libpod-ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772.scope: Deactivated successfully.
Oct  2 05:13:27 np0005465604 conmon[421591]: conmon ad5b94684147e8d87723 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772.scope/container/memory.events
Oct  2 05:13:27 np0005465604 podman[423701]: 2025-10-02 09:13:27.778296081 +0000 UTC m=+0.052181332 container died ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.778 2 INFO nova.virt.libvirt.driver [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Instance destroyed successfully.#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.779 2 DEBUG nova.objects.instance [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 71c9f70f-5f86-4723-9e4f-a4aca14211cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.799 2 DEBUG nova.virt.libvirt.vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:11:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:11:56Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.800 2 DEBUG nova.network.os_vif_util [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.800 2 DEBUG nova.network.os_vif_util [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.801 2 DEBUG os_vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.804 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap102411d5-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:13:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772-userdata-shm.mount: Deactivated successfully.
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b6ba23fbb429caa93a956634bafbbdd11b0d78f46b17b74eaebd84acb971bba0-merged.mount: Deactivated successfully.
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.845 2 INFO os_vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:ca:02,bridge_name='br-int',has_traffic_filtering=True,id=102411d5-80b6-47af-9293-08b07c65d541,network=Network(436e56fa-4885-4043-b091-8043a6f9f710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap102411d5-80')#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.846 2 DEBUG nova.virt.libvirt.vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:11:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1569895686',display_name='tempest-TestGettingAddress-server-1569895686',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1569895686',id=143,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDpdZyV5UQgCXdYCwwHyatOAW1fjcOYl+PKfkmrf27RdEejMmboZfFR/OQUnHUNTrqjbL6yE4rbmeJY3WNFhchni8rbQDOTF0cxHm41lo/GrsyLEHMnwRz0P7dHLtSxCow==',key_name='tempest-TestGettingAddress-1827640017',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:11:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-59lxi72k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:11:56Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=71c9f70f-5f86-4723-9e4f-a4aca14211cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.847 2 DEBUG nova.network.os_vif_util [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.847 2 DEBUG nova.network.os_vif_util [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.848 2 DEBUG os_vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5b6295d-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.854 2 INFO os_vif [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:62:03,bridge_name='br-int',has_traffic_filtering=True,id=d5b6295d-90a7-4d25-be69-ccd7a58621c6,network=Network(5d64d879-42c6-456c-a212-df00bf998997),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b6295d-90')#033[00m
Oct  2 05:13:27 np0005465604 podman[423701]: 2025-10-02 09:13:27.860113094 +0000 UTC m=+0.133998345 container cleanup ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:13:27 np0005465604 systemd[1]: libpod-conmon-ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772.scope: Deactivated successfully.
Oct  2 05:13:27 np0005465604 podman[423759]: 2025-10-02 09:13:27.934355901 +0000 UTC m=+0.049220410 container remove ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.940 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f64b78e-9380-4880-abe5-5b80cc7fc460]: (4, ('Thu Oct  2 09:13:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710 (ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772)\nad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772\nThu Oct  2 09:13:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710 (ad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772)\nad5b94684147e8d877235428e1f410f63cd0c5cbfaca371c8f55fd6a99195772\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.941 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[85641aa3-1ef6-4d82-be6d-dc0fc1343e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.943 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap436e56fa-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 kernel: tap436e56fa-40: left promiscuous mode
Oct  2 05:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:13:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 nova_compute[260603]: 2025-10-02 09:13:27.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.968 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[837d7d5a-c88f-4b23-805d-509f3f54e988]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.991 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fa554a57-3d7a-424a-a7d3-77749cc2dd7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:27.992 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[535e1f0f-083a-4dbd-b927-8a4908c75b54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.008 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[34265c4f-3c13-4c82-baba-d86392ef4152]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719174, 'reachable_time': 42147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423781, 'error': None, 'target': 'ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:28 np0005465604 systemd[1]: run-netns-ovnmeta\x2d436e56fa\x2d4885\x2d4043\x2db091\x2d8043a6f9f710.mount: Deactivated successfully.
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.010 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-436e56fa-4885-4043-b091-8043a6f9f710 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.010 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[600a5463-8eea-4827-9ed1-8b0dff857c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.014 162357 INFO neutron.agent.ovn.metadata.agent [-] Port d5b6295d-90a7-4d25-be69-ccd7a58621c6 in datapath 5d64d879-42c6-456c-a212-df00bf998997 unbound from our chassis#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.015 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d64d879-42c6-456c-a212-df00bf998997, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.016 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9b997a87-99d5-4062-b0c3-87f8a0edc0af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.016 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d64d879-42c6-456c-a212-df00bf998997 namespace which is not needed anymore#033[00m
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:13:28
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', '.rgw.root', 'vms', '.mgr', 'images', 'default.rgw.control', 'volumes']
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 119 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 6.9 KiB/s wr, 31 op/s
Oct  2 05:13:28 np0005465604 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [NOTICE]   (421815) : haproxy version is 2.8.14-c23fe91
Oct  2 05:13:28 np0005465604 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [NOTICE]   (421815) : path to executable is /usr/sbin/haproxy
Oct  2 05:13:28 np0005465604 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [WARNING]  (421815) : Exiting Master process...
Oct  2 05:13:28 np0005465604 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [ALERT]    (421815) : Current worker (421819) exited with code 143 (Terminated)
Oct  2 05:13:28 np0005465604 neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997[421803]: [WARNING]  (421815) : All workers exited. Exiting... (0)
Oct  2 05:13:28 np0005465604 systemd[1]: libpod-eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099.scope: Deactivated successfully.
Oct  2 05:13:28 np0005465604 podman[423799]: 2025-10-02 09:13:28.199416698 +0000 UTC m=+0.057733215 container died eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:13:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099-userdata-shm.mount: Deactivated successfully.
Oct  2 05:13:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f89707fb13d0c4fb2e7f83ff5487b81f8ced57c0f936b53bc230c0c45bebc01b-merged.mount: Deactivated successfully.
Oct  2 05:13:28 np0005465604 podman[423799]: 2025-10-02 09:13:28.251216358 +0000 UTC m=+0.109532845 container cleanup eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 05:13:28 np0005465604 systemd[1]: libpod-conmon-eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099.scope: Deactivated successfully.
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.299 2 DEBUG nova.compute.manager [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-unplugged-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.301 2 DEBUG oslo_concurrency.lockutils [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.301 2 DEBUG oslo_concurrency.lockutils [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.302 2 DEBUG oslo_concurrency.lockutils [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.302 2 DEBUG nova.compute.manager [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No waiting events found dispatching network-vif-unplugged-102411d5-80b6-47af-9293-08b07c65d541 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.302 2 DEBUG nova.compute.manager [req-150fa4f0-4827-429d-b756-a3dc42a3f8ab req-8afd21f5-3f25-4eb8-b725-070045d57ed5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-unplugged-102411d5-80b6-47af-9293-08b07c65d541 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:13:28 np0005465604 podman[423828]: 2025-10-02 09:13:28.338128728 +0000 UTC m=+0.056726013 container remove eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.346 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7c870865-49ec-4adc-adf2-a0567c79fa28]: (4, ('Thu Oct  2 09:13:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997 (eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099)\neaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099\nThu Oct  2 09:13:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5d64d879-42c6-456c-a212-df00bf998997 (eaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099)\neaa8adef8b933e7a7a7b2890c45befd292494893b80b9a0cc439134056d93099\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.348 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[71a1b782-90af-4d00-9cd2-09e5aba0dd70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.349 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d64d879-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:28 np0005465604 kernel: tap5d64d879-40: left promiscuous mode
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.379 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[563c5704-69f6-499e-a311-55072a100c9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.402 2 INFO nova.virt.libvirt.driver [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Deleting instance files /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb_del#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.404 2 INFO nova.virt.libvirt.driver [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Deletion of /var/lib/nova/instances/71c9f70f-5f86-4723-9e4f-a4aca14211cb_del complete#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.413 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bd4833-dbd5-416c-8400-39d9294bddfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.415 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[077af658-5bac-48c9-be60-abf4685f52fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.435 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dadc0a88-dded-49a6-8f2d-21fc55739fb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719285, 'reachable_time': 20450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423840, 'error': None, 'target': 'ovnmeta-5d64d879-42c6-456c-a212-df00bf998997', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.437 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d64d879-42c6-456c-a212-df00bf998997 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:13:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:28.437 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e86e3a26-c8f1-45b7-9457-c6a72b2bc4cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.458 2 INFO nova.compute.manager [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.460 2 DEBUG oslo.service.loopingcall [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.460 2 DEBUG nova.compute.manager [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:13:28 np0005465604 nova_compute[260603]: 2025-10-02 09:13:28.461 2 DEBUG nova.network.neutron [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:13:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:13:28 np0005465604 systemd[1]: run-netns-ovnmeta\x2d5d64d879\x2d42c6\x2d456c\x2da212\x2ddf00bf998997.mount: Deactivated successfully.
Oct  2 05:13:29 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:29.077 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:13:29 np0005465604 nova_compute[260603]: 2025-10-02 09:13:29.575 2 DEBUG nova.compute.manager [req-a60b5349-f487-4e2f-b02c-23b767701304 req-2b05f18c-b7f4-4193-8ce7-640a58a77f03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-deleted-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:29 np0005465604 nova_compute[260603]: 2025-10-02 09:13:29.576 2 INFO nova.compute.manager [req-a60b5349-f487-4e2f-b02c-23b767701304 req-2b05f18c-b7f4-4193-8ce7-640a58a77f03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Neutron deleted interface 102411d5-80b6-47af-9293-08b07c65d541; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:13:29 np0005465604 nova_compute[260603]: 2025-10-02 09:13:29.576 2 DEBUG nova.network.neutron [req-a60b5349-f487-4e2f-b02c-23b767701304 req-2b05f18c-b7f4-4193-8ce7-640a58a77f03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:13:29 np0005465604 nova_compute[260603]: 2025-10-02 09:13:29.602 2 DEBUG nova.compute.manager [req-a60b5349-f487-4e2f-b02c-23b767701304 req-2b05f18c-b7f4-4193-8ce7-640a58a77f03 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Detach interface failed, port_id=102411d5-80b6-47af-9293-08b07c65d541, reason: Instance 71c9f70f-5f86-4723-9e4f-a4aca14211cb could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.054 2 DEBUG nova.network.neutron [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updated VIF entry in instance network info cache for port 102411d5-80b6-47af-9293-08b07c65d541. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.054 2 DEBUG nova.network.neutron [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [{"id": "102411d5-80b6-47af-9293-08b07c65d541", "address": "fa:16:3e:05:ca:02", "network": {"id": "436e56fa-4885-4043-b091-8043a6f9f710", "bridge": "br-int", "label": "tempest-network-smoke--1491250558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap102411d5-80", "ovs_interfaceid": "102411d5-80b6-47af-9293-08b07c65d541", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "address": "fa:16:3e:3c:62:03", "network": {"id": "5d64d879-42c6-456c-a212-df00bf998997", "bridge": "br-int", "label": "tempest-network-smoke--1200268017", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe3c:6203", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b6295d-90", "ovs_interfaceid": "d5b6295d-90a7-4d25-be69-ccd7a58621c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.095 2 DEBUG oslo_concurrency.lockutils [req-b99a21e9-efe4-48a3-9819-a0d254a83a77 req-4f788d37-309a-44b6-b26b-ce78cfdfa6c7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-71c9f70f-5f86-4723-9e4f-a4aca14211cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:13:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 119 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 5.9 KiB/s wr, 30 op/s
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.368 2 DEBUG nova.network.neutron [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.399 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.400 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.400 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.402 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.403 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No waiting events found dispatching network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.404 2 WARNING nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received unexpected event network-vif-plugged-102411d5-80b6-47af-9293-08b07c65d541 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.404 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-unplugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.405 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.406 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.406 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.407 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No waiting events found dispatching network-vif-unplugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.407 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-unplugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.408 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.408 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.409 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.409 2 DEBUG oslo_concurrency.lockutils [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.410 2 DEBUG nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] No waiting events found dispatching network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.410 2 WARNING nova.compute.manager [req-8733cb7c-848b-4f7e-9b1c-d502382f4319 req-c64fdfb0-0c25-4ae4-b32b-fe65d95276e2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received unexpected event network-vif-plugged-d5b6295d-90a7-4d25-be69-ccd7a58621c6 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.413 2 INFO nova.compute.manager [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Took 1.95 seconds to deallocate network for instance.#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.597 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.598 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:30 np0005465604 nova_compute[260603]: 2025-10-02 09:13:30.665 2 DEBUG oslo_concurrency.processutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:13:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:13:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654881305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:13:31 np0005465604 nova_compute[260603]: 2025-10-02 09:13:31.096 2 DEBUG oslo_concurrency.processutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:13:31 np0005465604 nova_compute[260603]: 2025-10-02 09:13:31.105 2 DEBUG nova.compute.provider_tree [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:13:31 np0005465604 nova_compute[260603]: 2025-10-02 09:13:31.133 2 DEBUG nova.scheduler.client.report [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:13:31 np0005465604 nova_compute[260603]: 2025-10-02 09:13:31.179 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:31 np0005465604 nova_compute[260603]: 2025-10-02 09:13:31.205 2 INFO nova.scheduler.client.report [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 71c9f70f-5f86-4723-9e4f-a4aca14211cb#033[00m
Oct  2 05:13:31 np0005465604 nova_compute[260603]: 2025-10-02 09:13:31.270 2 DEBUG oslo_concurrency.lockutils [None req-81112a58-b805-4dc4-8fc6-a015fab73042 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "71c9f70f-5f86-4723-9e4f-a4aca14211cb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:31 np0005465604 nova_compute[260603]: 2025-10-02 09:13:31.698 2 DEBUG nova.compute.manager [req-bbe9462f-d5c6-41cf-921a-3fca2da06c3b req-44a6f2f9-ec52-4663-bcc6-6a1344ee6bcd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Received event network-vif-deleted-d5b6295d-90a7-4d25-be69-ccd7a58621c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:13:31 np0005465604 nova_compute[260603]: 2025-10-02 09:13:31.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 119 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 5.9 KiB/s wr, 30 op/s
Oct  2 05:13:32 np0005465604 nova_compute[260603]: 2025-10-02 09:13:32.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:13:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 7.1 KiB/s wr, 57 op/s
Oct  2 05:13:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:34.850 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:13:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:34.851 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:13:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:13:34.851 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:13:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 7.0 KiB/s wr, 54 op/s
Oct  2 05:13:36 np0005465604 nova_compute[260603]: 2025-10-02 09:13:36.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:37 np0005465604 nova_compute[260603]: 2025-10-02 09:13:37.642 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396402.641145, 66710b2a-3c24-45a9-a500-f29978d33f4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:13:37 np0005465604 nova_compute[260603]: 2025-10-02 09:13:37.643 2 INFO nova.compute.manager [-] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:13:37 np0005465604 nova_compute[260603]: 2025-10-02 09:13:37.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:37 np0005465604 nova_compute[260603]: 2025-10-02 09:13:37.674 2 DEBUG nova.compute.manager [None req-4f6af903-4ad1-49d5-9d8f-c59dae47f232 - - - - - -] [instance: 66710b2a-3c24-45a9-a500-f29978d33f4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:13:37 np0005465604 nova_compute[260603]: 2025-10-02 09:13:37.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:37 np0005465604 nova_compute[260603]: 2025-10-02 09:13:37.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 7.0 KiB/s wr, 54 op/s
Oct  2 05:13:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:13:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:13:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct  2 05:13:40 np0005465604 nova_compute[260603]: 2025-10-02 09:13:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:13:40 np0005465604 nova_compute[260603]: 2025-10-02 09:13:40.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:13:41 np0005465604 nova_compute[260603]: 2025-10-02 09:13:41.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct  2 05:13:42 np0005465604 nova_compute[260603]: 2025-10-02 09:13:42.776 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396407.7744737, 71c9f70f-5f86-4723-9e4f-a4aca14211cb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:13:42 np0005465604 nova_compute[260603]: 2025-10-02 09:13:42.777 2 INFO nova.compute.manager [-] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:13:42 np0005465604 nova_compute[260603]: 2025-10-02 09:13:42.803 2 DEBUG nova.compute.manager [None req-e51d24a1-ab70-4e04-bd9f-314ef7ef0139 - - - - - -] [instance: 71c9f70f-5f86-4723-9e4f-a4aca14211cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:13:42 np0005465604 nova_compute[260603]: 2025-10-02 09:13:42.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:13:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Oct  2 05:13:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:13:46 np0005465604 nova_compute[260603]: 2025-10-02 09:13:46.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:47 np0005465604 nova_compute[260603]: 2025-10-02 09:13:47.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:13:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:13:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:13:51 np0005465604 nova_compute[260603]: 2025-10-02 09:13:51.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:13:52 np0005465604 nova_compute[260603]: 2025-10-02 09:13:52.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:52 np0005465604 podman[423865]: 2025-10-02 09:13:52.992077222 +0000 UTC m=+0.059918234 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 05:13:53 np0005465604 podman[423864]: 2025-10-02 09:13:53.036043538 +0000 UTC m=+0.097766980 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:13:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:13:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:13:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:13:56 np0005465604 nova_compute[260603]: 2025-10-02 09:13:56.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:57 np0005465604 nova_compute[260603]: 2025-10-02 09:13:57.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:13:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:13:58 np0005465604 podman[423909]: 2025-10-02 09:13:58.013823544 +0000 UTC m=+0.057171217 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:13:58 np0005465604 podman[423910]: 2025-10-02 09:13:58.040728641 +0000 UTC m=+0.070880033 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:13:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:13:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:14:00 np0005465604 nova_compute[260603]: 2025-10-02 09:14:00.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:01 np0005465604 nova_compute[260603]: 2025-10-02 09:14:01.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.003 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.003 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.027 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.119 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.119 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.129 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.129 2 INFO nova.compute.claims [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:14:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.236 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:14:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982536581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.675 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.684 2 DEBUG nova.compute.provider_tree [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.703 2 DEBUG nova.scheduler.client.report [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.730 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.731 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.772 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.773 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.800 2 INFO nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.818 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.902 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.903 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.903 2 INFO nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Creating image(s)#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.931 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:02 np0005465604 nova_compute[260603]: 2025-10-02 09:14:02.970 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.006 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.010 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.063 2 DEBUG nova.policy [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.105 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.106 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.107 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.107 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.129 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.132 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7293bf39-223f-4668-bd0f-c65476fac3e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:14:03 np0005465604 nova_compute[260603]: 2025-10-02 09:14:03.544 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:14:04 np0005465604 nova_compute[260603]: 2025-10-02 09:14:04.004 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7293bf39-223f-4668-bd0f-c65476fac3e4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.872s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:04 np0005465604 nova_compute[260603]: 2025-10-02 09:14:04.051 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Successfully created port: 17c0a9ac-d61a-433a-b3f3-154a8c467f5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:14:04 np0005465604 nova_compute[260603]: 2025-10-02 09:14:04.110 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:14:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:14:04 np0005465604 nova_compute[260603]: 2025-10-02 09:14:04.237 2 DEBUG nova.objects.instance [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 7293bf39-223f-4668-bd0f-c65476fac3e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:14:04 np0005465604 nova_compute[260603]: 2025-10-02 09:14:04.254 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:14:04 np0005465604 nova_compute[260603]: 2025-10-02 09:14:04.255 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Ensure instance console log exists: /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:14:04 np0005465604 nova_compute[260603]: 2025-10-02 09:14:04.256 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:04 np0005465604 nova_compute[260603]: 2025-10-02 09:14:04.256 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:04 np0005465604 nova_compute[260603]: 2025-10-02 09:14:04.257 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:04 np0005465604 nova_compute[260603]: 2025-10-02 09:14:04.520 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Successfully created port: 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:14:05 np0005465604 nova_compute[260603]: 2025-10-02 09:14:05.258 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Successfully updated port: 17c0a9ac-d61a-433a-b3f3-154a8c467f5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:14:05 np0005465604 nova_compute[260603]: 2025-10-02 09:14:05.374 2 DEBUG nova.compute.manager [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:05 np0005465604 nova_compute[260603]: 2025-10-02 09:14:05.375 2 DEBUG nova.compute.manager [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing instance network info cache due to event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:14:05 np0005465604 nova_compute[260603]: 2025-10-02 09:14:05.376 2 DEBUG oslo_concurrency.lockutils [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:14:05 np0005465604 nova_compute[260603]: 2025-10-02 09:14:05.376 2 DEBUG oslo_concurrency.lockutils [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:14:05 np0005465604 nova_compute[260603]: 2025-10-02 09:14:05.377 2 DEBUG nova.network.neutron [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing network info cache for port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:14:05 np0005465604 nova_compute[260603]: 2025-10-02 09:14:05.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:05 np0005465604 nova_compute[260603]: 2025-10-02 09:14:05.617 2 DEBUG nova.network.neutron [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:14:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 951 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.534 2 DEBUG nova.network.neutron [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.585 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Successfully updated port: 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.588 2 DEBUG oslo_concurrency.lockutils [req-ae57c38f-f5ec-41da-aa02-61ae61ad4f3f req-29f57103-fce2-4655-be5e-6dc22f4dc2dd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.605 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.605 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.605 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.776 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:14:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330760844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:14:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:14:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:14:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:14:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:14:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:14:06 np0005465604 nova_compute[260603]: 2025-10-02 09:14:06.993 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:14:07 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 731670fc-cdc4-42e2-aa6d-3debc18dc20d does not exist
Oct  2 05:14:07 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2dcfc675-c57f-40cb-a96d-df74bfdae4f9 does not exist
Oct  2 05:14:07 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 3e6ac35a-807f-4a12-9669-c2bf9a73596b does not exist
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.169 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.170 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3602MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.170 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.170 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.274 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 7293bf39-223f-4668-bd0f-c65476fac3e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.274 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.274 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.316 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.472 2 DEBUG nova.compute.manager [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-changed-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.472 2 DEBUG nova.compute.manager [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing instance network info cache due to event network-changed-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.473 2 DEBUG oslo_concurrency.lockutils [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:14:07 np0005465604 podman[424451]: 2025-10-02 09:14:07.601798572 +0000 UTC m=+0.020373505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:14:07 np0005465604 podman[424451]: 2025-10-02 09:14:07.711246243 +0000 UTC m=+0.129821126 container create f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:14:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1158445354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.750 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.755 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:14:07 np0005465604 systemd[1]: Started libpod-conmon-f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe.scope.
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.777 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:14:07 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.808 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:14:07 np0005465604 nova_compute[260603]: 2025-10-02 09:14:07.808 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:07 np0005465604 podman[424451]: 2025-10-02 09:14:07.842110759 +0000 UTC m=+0.260685732 container init f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 05:14:07 np0005465604 podman[424451]: 2025-10-02 09:14:07.850360436 +0000 UTC m=+0.268935309 container start f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 05:14:07 np0005465604 podman[424451]: 2025-10-02 09:14:07.85402838 +0000 UTC m=+0.272603343 container attach f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:14:07 np0005465604 silly_elion[424469]: 167 167
Oct  2 05:14:07 np0005465604 systemd[1]: libpod-f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe.scope: Deactivated successfully.
Oct  2 05:14:07 np0005465604 podman[424451]: 2025-10-02 09:14:07.859006205 +0000 UTC m=+0.277581098 container died f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:14:07 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3b2da1585cddbb6f6ed44b41d89977d29bbcf1a0b236bf7698effe2f903dd4e0-merged.mount: Deactivated successfully.
Oct  2 05:14:07 np0005465604 podman[424451]: 2025-10-02 09:14:07.903514058 +0000 UTC m=+0.322088951 container remove f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:14:07 np0005465604 systemd[1]: libpod-conmon-f4f9a462c5996201896ca5fd40075fc726134227adbe5d81841f4b1cc2d63cfe.scope: Deactivated successfully.
Oct  2 05:14:08 np0005465604 nova_compute[260603]: 2025-10-02 09:14:08.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:08 np0005465604 podman[424495]: 2025-10-02 09:14:08.085350568 +0000 UTC m=+0.057390964 container create 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 05:14:08 np0005465604 systemd[1]: Started libpod-conmon-1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087.scope.
Oct  2 05:14:08 np0005465604 podman[424495]: 2025-10-02 09:14:08.056217303 +0000 UTC m=+0.028257689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:14:08 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:14:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:08 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:14:08 np0005465604 podman[424495]: 2025-10-02 09:14:08.188019669 +0000 UTC m=+0.160060045 container init 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 05:14:08 np0005465604 podman[424495]: 2025-10-02 09:14:08.199480225 +0000 UTC m=+0.171520621 container start 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:14:08 np0005465604 podman[424495]: 2025-10-02 09:14:08.21060808 +0000 UTC m=+0.182648446 container attach 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:14:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:09 np0005465604 epic_pasteur[424511]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:14:09 np0005465604 epic_pasteur[424511]: --> relative data size: 1.0
Oct  2 05:14:09 np0005465604 epic_pasteur[424511]: --> All data devices are unavailable
Oct  2 05:14:09 np0005465604 systemd[1]: libpod-1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087.scope: Deactivated successfully.
Oct  2 05:14:09 np0005465604 podman[424495]: 2025-10-02 09:14:09.233164208 +0000 UTC m=+1.205204584 container died 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 05:14:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ea387dee46837ea1c50c1d36f7be22f5855d187ad8d011ef2c1e8455cd63e288-merged.mount: Deactivated successfully.
Oct  2 05:14:09 np0005465604 podman[424495]: 2025-10-02 09:14:09.587976093 +0000 UTC m=+1.560016479 container remove 1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:14:09 np0005465604 systemd[1]: libpod-conmon-1e7d18641ab464a5384c2964808420726e636770d774ddd9f73aa297df1bc087.scope: Deactivated successfully.
Oct  2 05:14:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:14:10 np0005465604 podman[424696]: 2025-10-02 09:14:10.341093117 +0000 UTC m=+0.047824437 container create 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:14:10 np0005465604 systemd[1]: Started libpod-conmon-2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5.scope.
Oct  2 05:14:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:14:10 np0005465604 podman[424696]: 2025-10-02 09:14:10.403380513 +0000 UTC m=+0.110111813 container init 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:14:10 np0005465604 podman[424696]: 2025-10-02 09:14:10.319627781 +0000 UTC m=+0.026359091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:14:10 np0005465604 podman[424696]: 2025-10-02 09:14:10.416022926 +0000 UTC m=+0.122754246 container start 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:14:10 np0005465604 gallant_mclaren[424712]: 167 167
Oct  2 05:14:10 np0005465604 systemd[1]: libpod-2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5.scope: Deactivated successfully.
Oct  2 05:14:10 np0005465604 podman[424696]: 2025-10-02 09:14:10.423401595 +0000 UTC m=+0.130132875 container attach 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 05:14:10 np0005465604 podman[424696]: 2025-10-02 09:14:10.42388826 +0000 UTC m=+0.130619540 container died 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:14:10 np0005465604 systemd[1]: var-lib-containers-storage-overlay-26171184e808d6476d160e3618759f3295233729047ca903bea84c4bb6d00776-merged.mount: Deactivated successfully.
Oct  2 05:14:10 np0005465604 podman[424696]: 2025-10-02 09:14:10.461900441 +0000 UTC m=+0.168631721 container remove 2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:14:10 np0005465604 systemd[1]: libpod-conmon-2591be18a412ed4faf19d005e23e9139bfcdf811f2437e5be5f0d1ca2249f3e5.scope: Deactivated successfully.
Oct  2 05:14:10 np0005465604 podman[424735]: 2025-10-02 09:14:10.637561842 +0000 UTC m=+0.040940025 container create 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:14:10 np0005465604 systemd[1]: Started libpod-conmon-571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4.scope.
Oct  2 05:14:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:14:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1612bd5730d3bb210c2ebeb2cfe3a229b70552996aae9f2e3971074348f760b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1612bd5730d3bb210c2ebeb2cfe3a229b70552996aae9f2e3971074348f760b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1612bd5730d3bb210c2ebeb2cfe3a229b70552996aae9f2e3971074348f760b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1612bd5730d3bb210c2ebeb2cfe3a229b70552996aae9f2e3971074348f760b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:10 np0005465604 podman[424735]: 2025-10-02 09:14:10.619735937 +0000 UTC m=+0.023114110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:14:10 np0005465604 podman[424735]: 2025-10-02 09:14:10.726958119 +0000 UTC m=+0.130336302 container init 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 05:14:10 np0005465604 podman[424735]: 2025-10-02 09:14:10.744457334 +0000 UTC m=+0.147835537 container start 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct  2 05:14:10 np0005465604 podman[424735]: 2025-10-02 09:14:10.748676294 +0000 UTC m=+0.152054467 container attach 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:14:10 np0005465604 nova_compute[260603]: 2025-10-02 09:14:10.808 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.314 2 DEBUG nova.network.neutron [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.337 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.337 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance network_info: |[{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.337 2 DEBUG oslo_concurrency.lockutils [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.338 2 DEBUG nova.network.neutron [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing network info cache for port 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.342 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Start _get_guest_xml network_info=[{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.350 2 WARNING nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.361 2 DEBUG nova.virt.libvirt.host [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.362 2 DEBUG nova.virt.libvirt.host [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.365 2 DEBUG nova.virt.libvirt.host [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.366 2 DEBUG nova.virt.libvirt.host [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.366 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.366 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.367 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.367 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.367 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.368 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.368 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.368 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.368 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.368 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.369 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.369 2 DEBUG nova.virt.hardware [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.372 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]: {
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:    "0": [
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:        {
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "devices": [
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "/dev/loop3"
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            ],
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_name": "ceph_lv0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_size": "21470642176",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "name": "ceph_lv0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "tags": {
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.cluster_name": "ceph",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.crush_device_class": "",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.encrypted": "0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.osd_id": "0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.type": "block",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.vdo": "0"
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            },
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "type": "block",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "vg_name": "ceph_vg0"
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:        }
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:    ],
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:    "1": [
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:        {
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "devices": [
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "/dev/loop4"
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            ],
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_name": "ceph_lv1",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_size": "21470642176",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "name": "ceph_lv1",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "tags": {
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.cluster_name": "ceph",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.crush_device_class": "",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.encrypted": "0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.osd_id": "1",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.type": "block",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.vdo": "0"
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            },
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "type": "block",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "vg_name": "ceph_vg1"
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:        }
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:    ],
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:    "2": [
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:        {
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "devices": [
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "/dev/loop5"
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            ],
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_name": "ceph_lv2",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_size": "21470642176",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "name": "ceph_lv2",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "tags": {
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.cluster_name": "ceph",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.crush_device_class": "",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.encrypted": "0",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.osd_id": "2",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.type": "block",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:                "ceph.vdo": "0"
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            },
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "type": "block",
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:            "vg_name": "ceph_vg2"
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:        }
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]:    ]
Oct  2 05:14:11 np0005465604 sweet_aryabhata[424751]: }
Oct  2 05:14:11 np0005465604 systemd[1]: libpod-571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4.scope: Deactivated successfully.
Oct  2 05:14:11 np0005465604 podman[424735]: 2025-10-02 09:14:11.595941274 +0000 UTC m=+0.999319467 container died 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:14:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1612bd5730d3bb210c2ebeb2cfe3a229b70552996aae9f2e3971074348f760b9-merged.mount: Deactivated successfully.
Oct  2 05:14:11 np0005465604 podman[424735]: 2025-10-02 09:14:11.718483943 +0000 UTC m=+1.121862126 container remove 571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_aryabhata, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct  2 05:14:11 np0005465604 systemd[1]: libpod-conmon-571121fcd78b9bb6fcca3490fdcf9a7116e174bd9d07e41a8632350e7e8876a4.scope: Deactivated successfully.
Oct  2 05:14:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:14:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2150068212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.848 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.878 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:11 np0005465604 nova_compute[260603]: 2025-10-02 09:14:11.885 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:14:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:14:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/487459000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.335 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.337 2 DEBUG nova.virt.libvirt.vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.337 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.338 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.338 2 DEBUG nova.virt.libvirt.vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.339 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.339 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.340 2 DEBUG nova.objects.instance [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7293bf39-223f-4668-bd0f-c65476fac3e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:14:12 np0005465604 podman[424976]: 2025-10-02 09:14:12.358730138 +0000 UTC m=+0.056412614 container create d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.361 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  <uuid>7293bf39-223f-4668-bd0f-c65476fac3e4</uuid>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  <name>instance-00000091</name>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-1093810609</nova:name>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:14:11</nova:creationTime>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <nova:port uuid="17c0a9ac-d61a-433a-b3f3-154a8c467f5a">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <nova:port uuid="6f4bc2ea-2d5e-4fbb-95ce-ada64748d460">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fed2:6656" ipVersion="6"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <entry name="serial">7293bf39-223f-4668-bd0f-c65476fac3e4</entry>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <entry name="uuid">7293bf39-223f-4668-bd0f-c65476fac3e4</entry>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7293bf39-223f-4668-bd0f-c65476fac3e4_disk">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:65:2b:06"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <target dev="tap17c0a9ac-d6"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:d2:66:56"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <target dev="tap6f4bc2ea-2d"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/console.log" append="off"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:14:12 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:14:12 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:14:12 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:14:12 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.362 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Preparing to wait for external event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.362 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.362 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.363 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.363 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Preparing to wait for external event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.363 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.363 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.363 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.364 2 DEBUG nova.virt.libvirt.vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.365 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.365 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.365 2 DEBUG os_vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.366 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.370 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17c0a9ac-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.370 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap17c0a9ac-d6, col_values=(('external_ids', {'iface-id': '17c0a9ac-d61a-433a-b3f3-154a8c467f5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:2b:06', 'vm-uuid': '7293bf39-223f-4668-bd0f-c65476fac3e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:12 np0005465604 NetworkManager[45129]: <info>  [1759396452.3734] manager: (tap17c0a9ac-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.380 2 INFO os_vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6')#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.380 2 DEBUG nova.virt.libvirt.vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.381 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.381 2 DEBUG nova.network.os_vif_util [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.381 2 DEBUG os_vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f4bc2ea-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f4bc2ea-2d, col_values=(('external_ids', {'iface-id': '6f4bc2ea-2d5e-4fbb-95ce-ada64748d460', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:66:56', 'vm-uuid': '7293bf39-223f-4668-bd0f-c65476fac3e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:12 np0005465604 NetworkManager[45129]: <info>  [1759396452.3865] manager: (tap6f4bc2ea-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.393 2 INFO os_vif [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d')#033[00m
Oct  2 05:14:12 np0005465604 systemd[1]: Started libpod-conmon-d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e.scope.
Oct  2 05:14:12 np0005465604 podman[424976]: 2025-10-02 09:14:12.322640157 +0000 UTC m=+0.020322653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.439 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.440 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.440 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:65:2b:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.440 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:d2:66:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.441 2 INFO nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Using config drive#033[00m
Oct  2 05:14:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:14:12 np0005465604 podman[424976]: 2025-10-02 09:14:12.464615838 +0000 UTC m=+0.162298314 container init d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.468 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:12 np0005465604 podman[424976]: 2025-10-02 09:14:12.474912359 +0000 UTC m=+0.172594825 container start d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct  2 05:14:12 np0005465604 podman[424976]: 2025-10-02 09:14:12.478646385 +0000 UTC m=+0.176328861 container attach d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:14:12 np0005465604 zealous_murdock[424998]: 167 167
Oct  2 05:14:12 np0005465604 systemd[1]: libpod-d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e.scope: Deactivated successfully.
Oct  2 05:14:12 np0005465604 podman[424976]: 2025-10-02 09:14:12.482657339 +0000 UTC m=+0.180339825 container died d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 05:14:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-caa3ff46d79bd45a9137ebc282129da18c60a5bd81ca503ede3b9f5f4dd70383-merged.mount: Deactivated successfully.
Oct  2 05:14:12 np0005465604 podman[424976]: 2025-10-02 09:14:12.729444718 +0000 UTC m=+0.427127224 container remove d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_murdock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 05:14:12 np0005465604 systemd[1]: libpod-conmon-d32930a132a1edc8e84f97e1d2d0d4eccebd3f37de4ab5cfbd8fff756ba84b2e.scope: Deactivated successfully.
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.916 2 INFO nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Creating config drive at /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config#033[00m
Oct  2 05:14:12 np0005465604 nova_compute[260603]: 2025-10-02 09:14:12.923 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaicx45e_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:13 np0005465604 podman[425040]: 2025-10-02 09:14:12.912625831 +0000 UTC m=+0.027999091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:14:13 np0005465604 podman[425040]: 2025-10-02 09:14:13.017882222 +0000 UTC m=+0.133255442 container create 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:14:13 np0005465604 nova_compute[260603]: 2025-10-02 09:14:13.071 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaicx45e_" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:13 np0005465604 nova_compute[260603]: 2025-10-02 09:14:13.112 2 DEBUG nova.storage.rbd_utils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:13 np0005465604 systemd[1]: Started libpod-conmon-3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462.scope.
Oct  2 05:14:13 np0005465604 nova_compute[260603]: 2025-10-02 09:14:13.117 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config 7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:13 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:14:13 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7f03223ab3501bb8b718e3b7dadfd65e01f9a71f88c5856508be709ce7db40a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:13 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7f03223ab3501bb8b718e3b7dadfd65e01f9a71f88c5856508be709ce7db40a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:13 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7f03223ab3501bb8b718e3b7dadfd65e01f9a71f88c5856508be709ce7db40a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:13 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7f03223ab3501bb8b718e3b7dadfd65e01f9a71f88c5856508be709ce7db40a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:13 np0005465604 nova_compute[260603]: 2025-10-02 09:14:13.185 2 DEBUG nova.network.neutron [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updated VIF entry in instance network info cache for port 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:14:13 np0005465604 nova_compute[260603]: 2025-10-02 09:14:13.187 2 DEBUG nova.network.neutron [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:14:13 np0005465604 nova_compute[260603]: 2025-10-02 09:14:13.211 2 DEBUG oslo_concurrency.lockutils [req-abaf49dd-7ce8-47fd-b3ac-162382769c32 req-62e1a10d-5c68-4958-a46d-2d47a2c04db7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:14:13 np0005465604 podman[425040]: 2025-10-02 09:14:13.231097957 +0000 UTC m=+0.346471157 container init 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:14:13 np0005465604 podman[425040]: 2025-10-02 09:14:13.240685825 +0000 UTC m=+0.356059005 container start 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:14:13 np0005465604 podman[425040]: 2025-10-02 09:14:13.30165213 +0000 UTC m=+0.417025400 container attach 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 05:14:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:13 np0005465604 nova_compute[260603]: 2025-10-02 09:14:13.998 2 DEBUG oslo_concurrency.processutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config 7293bf39-223f-4668-bd0f-c65476fac3e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.880s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.000 2 INFO nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Deleting local config drive /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4/disk.config because it was imported into RBD.#033[00m
Oct  2 05:14:14 np0005465604 NetworkManager[45129]: <info>  [1759396454.0710] manager: (tap17c0a9ac-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/654)
Oct  2 05:14:14 np0005465604 kernel: tap17c0a9ac-d6: entered promiscuous mode
Oct  2 05:14:14 np0005465604 systemd-udevd[425128]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:14Z|01612|binding|INFO|Claiming lport 17c0a9ac-d61a-433a-b3f3-154a8c467f5a for this chassis.
Oct  2 05:14:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:14Z|01613|binding|INFO|17c0a9ac-d61a-433a-b3f3-154a8c467f5a: Claiming fa:16:3e:65:2b:06 10.100.0.10
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:14 np0005465604 kernel: tap6f4bc2ea-2d: entered promiscuous mode
Oct  2 05:14:14 np0005465604 NetworkManager[45129]: <info>  [1759396454.1312] manager: (tap6f4bc2ea-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Oct  2 05:14:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:14Z|01614|if_status|INFO|Dropped 1 log messages in last 95 seconds (most recently, 95 seconds ago) due to excessive rate
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:14Z|01615|if_status|INFO|Not updating pb chassis for 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 now as sb is readonly
Oct  2 05:14:14 np0005465604 systemd-udevd[425138]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:14:14 np0005465604 NetworkManager[45129]: <info>  [1759396454.1408] device (tap17c0a9ac-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:14:14 np0005465604 NetworkManager[45129]: <info>  [1759396454.1432] device (tap17c0a9ac-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:14:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:14Z|01616|binding|INFO|Claiming lport 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 for this chassis.
Oct  2 05:14:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:14Z|01617|binding|INFO|6f4bc2ea-2d5e-4fbb-95ce-ada64748d460: Claiming fa:16:3e:d2:66:56 2001:db8::f816:3eff:fed2:6656
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.143 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:2b:06 10.100.0.10'], port_security=['fa:16:3e:65:2b:06 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7293bf39-223f-4668-bd0f-c65476fac3e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6339cab-fbb4-4887-8953-252cca735cc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80b3bab0-2229-4a60-832f-071c3bc1d0ec, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=17c0a9ac-d61a-433a-b3f3-154a8c467f5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.144 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a in datapath c6339cab-fbb4-4887-8953-252cca735cc6 bound to our chassis#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.145 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6339cab-fbb4-4887-8953-252cca735cc6#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.148 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:66:56 2001:db8::f816:3eff:fed2:6656'], port_security=['fa:16:3e:d2:66:56 2001:db8::f816:3eff:fed2:6656'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fed2:6656/64', 'neutron:device_id': '7293bf39-223f-4668-bd0f-c65476fac3e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f302b50b-078a-40f3-87d8-1172d81fe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82aa0caa-5e65-4ef0-b1d6-b9e910e6cadb, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:14:14 np0005465604 NetworkManager[45129]: <info>  [1759396454.1589] device (tap6f4bc2ea-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:14:14 np0005465604 NetworkManager[45129]: <info>  [1759396454.1602] device (tap6f4bc2ea-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.162 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[29a9defd-2f40-4475-8ff4-f18853a64587]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.163 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6339cab-f1 in ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.165 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6339cab-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.165 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18bd6431-c830-4b09-aa7b-6fae4bdfba3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.167 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ceac3246-8e4d-44c8-80fc-18de5b1fb064]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 systemd-machined[214636]: New machine qemu-179-instance-00000091.
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.178 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddfddc2-7e7b-496d-bffc-c438792bf90d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 systemd[1]: Started Virtual Machine qemu-179-instance-00000091.
Oct  2 05:14:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.204 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b07a7ca-fa6b-4538-ba66-109a8073341b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:14Z|01618|binding|INFO|Setting lport 17c0a9ac-d61a-433a-b3f3-154a8c467f5a ovn-installed in OVS
Oct  2 05:14:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:14Z|01619|binding|INFO|Setting lport 17c0a9ac-d61a-433a-b3f3-154a8c467f5a up in Southbound
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:14Z|01620|binding|INFO|Setting lport 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 ovn-installed in OVS
Oct  2 05:14:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:14Z|01621|binding|INFO|Setting lport 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 up in Southbound
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.248 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[05f867ff-b520-4194-aad5-f1f823af135c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 NetworkManager[45129]: <info>  [1759396454.2572] manager: (tapc6339cab-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/656)
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.258 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[694e9f73-daa3-4a9b-96cb-5bee1aa4699a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]: {
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "osd_id": 2,
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "type": "bluestore"
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:    },
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "osd_id": 1,
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "type": "bluestore"
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:    },
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "osd_id": 0,
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:        "type": "bluestore"
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]:    }
Oct  2 05:14:14 np0005465604 kind_hodgkin[425078]: }
Oct  2 05:14:14 np0005465604 systemd[1]: libpod-3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462.scope: Deactivated successfully.
Oct  2 05:14:14 np0005465604 systemd[1]: libpod-3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462.scope: Consumed 1.032s CPU time.
Oct  2 05:14:14 np0005465604 conmon[425078]: conmon 3faab0a072f8fd5844d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462.scope/container/memory.events
Oct  2 05:14:14 np0005465604 podman[425040]: 2025-10-02 09:14:14.293472192 +0000 UTC m=+1.408845382 container died 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.307 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[78a583b3-4d49-45e6-baec-470a0a72b39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.312 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a0807b79-ea1a-406e-b6a0-86772d0e8113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 NetworkManager[45129]: <info>  [1759396454.3372] device (tapc6339cab-f0): carrier: link connected
Oct  2 05:14:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a7f03223ab3501bb8b718e3b7dadfd65e01f9a71f88c5856508be709ce7db40a-merged.mount: Deactivated successfully.
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.346 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcefc71-9c08-46c1-b99d-156996baebbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.373 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c6593d-da97-4f2a-9a96-9599e2ae4ffc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6339cab-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:e0:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733293, 'reachable_time': 28730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425195, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 podman[425040]: 2025-10-02 09:14:14.394933165 +0000 UTC m=+1.510306345 container remove 3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 05:14:14 np0005465604 systemd[1]: libpod-conmon-3faab0a072f8fd5844d8b321cbb0821fd87d54208881e8d779939657c3c27462.scope: Deactivated successfully.
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.398 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[596d46de-e6d7-4e5f-9c7d-84e31dd284c7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:e05c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733293, 'tstamp': 733293}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425203, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.424 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6e070086-3219-47bb-ba63-77dee93a969f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6339cab-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:e0:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733293, 'reachable_time': 28730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 425215, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.434 2 DEBUG nova.compute.manager [req-592b82e3-b5a1-4351-a967-d0fd83113b8b req-1adba0e2-d8a5-41fc-9e1f-8fbd19ac4afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.435 2 DEBUG oslo_concurrency.lockutils [req-592b82e3-b5a1-4351-a967-d0fd83113b8b req-1adba0e2-d8a5-41fc-9e1f-8fbd19ac4afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.435 2 DEBUG oslo_concurrency.lockutils [req-592b82e3-b5a1-4351-a967-d0fd83113b8b req-1adba0e2-d8a5-41fc-9e1f-8fbd19ac4afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.435 2 DEBUG oslo_concurrency.lockutils [req-592b82e3-b5a1-4351-a967-d0fd83113b8b req-1adba0e2-d8a5-41fc-9e1f-8fbd19ac4afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.436 2 DEBUG nova.compute.manager [req-592b82e3-b5a1-4351-a967-d0fd83113b8b req-1adba0e2-d8a5-41fc-9e1f-8fbd19ac4afb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Processing event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:14:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:14:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:14:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.454 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0b01b260-acae-4367-b921-cd60418b1ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:14:14 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9944b4df-3e1c-4cae-9a01-763118d4c2e6 does not exist
Oct  2 05:14:14 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e914bea6-52e8-465b-8615-e97bce9eb505 does not exist
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.524 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c23f9e8f-2098-4d91-9920-25e47fe32062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.525 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6339cab-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.526 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.526 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6339cab-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:14 np0005465604 NetworkManager[45129]: <info>  [1759396454.5286] manager: (tapc6339cab-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/657)
Oct  2 05:14:14 np0005465604 kernel: tapc6339cab-f0: entered promiscuous mode
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.530 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6339cab-f0, col_values=(('external_ids', {'iface-id': '698ce34e-6a9d-4a50-8426-c137ad35d6fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:14 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:14Z|01622|binding|INFO|Releasing lport 698ce34e-6a9d-4a50-8426-c137ad35d6fb from this chassis (sb_readonly=0)
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.548 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6339cab-fbb4-4887-8953-252cca735cc6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6339cab-fbb4-4887-8953-252cca735cc6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.549 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7b44590e-eee5-404c-936e-bcb308cca90d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.550 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-c6339cab-fbb4-4887-8953-252cca735cc6
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/c6339cab-fbb4-4887-8953-252cca735cc6.pid.haproxy
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID c6339cab-fbb4-4887-8953-252cca735cc6
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:14:14 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:14.551 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'env', 'PROCESS_TAG=haproxy-c6339cab-fbb4-4887-8953-252cca735cc6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6339cab-fbb4-4887-8953-252cca735cc6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.963 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396454.9626837, 7293bf39-223f-4668-bd0f-c65476fac3e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.965 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] VM Started (Lifecycle Event)#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.993 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:14:14 np0005465604 nova_compute[260603]: 2025-10-02 09:14:14.999 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396454.962983, 7293bf39-223f-4668-bd0f-c65476fac3e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:14:15 np0005465604 nova_compute[260603]: 2025-10-02 09:14:15.001 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:14:15 np0005465604 podman[425322]: 2025-10-02 09:14:15.016037936 +0000 UTC m=+0.093996082 container create f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:14:15 np0005465604 nova_compute[260603]: 2025-10-02 09:14:15.026 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:14:15 np0005465604 nova_compute[260603]: 2025-10-02 09:14:15.031 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:14:15 np0005465604 podman[425322]: 2025-10-02 09:14:14.963791922 +0000 UTC m=+0.041750108 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:14:15 np0005465604 systemd[1]: Started libpod-conmon-f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63.scope.
Oct  2 05:14:15 np0005465604 nova_compute[260603]: 2025-10-02 09:14:15.065 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:14:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:14:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa7e4bc0c2d4824c03116f4f6f6e684d6814bec0ec595371262415f74733fdb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:15 np0005465604 podman[425322]: 2025-10-02 09:14:15.117625203 +0000 UTC m=+0.195583429 container init f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:14:15 np0005465604 podman[425322]: 2025-10-02 09:14:15.122576797 +0000 UTC m=+0.200534983 container start f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 05:14:15 np0005465604 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [NOTICE]   (425341) : New worker (425343) forked
Oct  2 05:14:15 np0005465604 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [NOTICE]   (425341) : Loading success.
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.201 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 in datapath f302b50b-078a-40f3-87d8-1172d81fe604 unbound from our chassis#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.203 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f302b50b-078a-40f3-87d8-1172d81fe604#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.215 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[65045182-21e0-40f6-9f1b-e74a225cf9ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.217 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf302b50b-01 in ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.219 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf302b50b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.219 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e08e240-52cc-45d2-b84c-04ac678cab78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.220 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[08b6bd46-c286-426c-b8aa-27de1f8719e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.233 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7e7ed6-2706-4a23-bcfe-ebc36886f096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.260 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[89f302d8-e585-423b-bcc3-899da7a71e29]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.293 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[0a069f77-c573-4d65-9039-36f0672adf46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 NetworkManager[45129]: <info>  [1759396455.3026] manager: (tapf302b50b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/658)
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.302 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[47402e61-810a-494e-8d78-fe6f53e7341e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 systemd-udevd[425175]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.342 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[354d34cc-ffe9-4385-843f-32762af1eb12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.345 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[5cdc61e1-dfda-4794-99b8-320fad701a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 NetworkManager[45129]: <info>  [1759396455.3735] device (tapf302b50b-00): carrier: link connected
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.380 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[add28784-fe6e-4f65-a55b-ccb2fe66a45b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.399 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3b733b84-879a-41ac-a1ec-0ba8a000be0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf302b50b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:c1:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733397, 'reachable_time': 26509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425362, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.416 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d81cb9-e149-420d-a9ba-571cc058201e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:c132'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733397, 'tstamp': 733397}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425363, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.434 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ea66f2-48ff-4ea3-a5ed-3fcc4902ecb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf302b50b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:c1:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733397, 'reachable_time': 26509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 425364, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.476 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[689c5fb2-4421-419f-8257-6e14dc2dc3f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:14:15 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.511 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8c3cec-0430-4887-a5b8-c984ce2886cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.514 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf302b50b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.514 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.515 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf302b50b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:15 np0005465604 nova_compute[260603]: 2025-10-02 09:14:15.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:15 np0005465604 NetworkManager[45129]: <info>  [1759396455.5192] manager: (tapf302b50b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Oct  2 05:14:15 np0005465604 kernel: tapf302b50b-00: entered promiscuous mode
Oct  2 05:14:15 np0005465604 nova_compute[260603]: 2025-10-02 09:14:15.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.522 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf302b50b-00, col_values=(('external_ids', {'iface-id': '3ba778b6-61e6-4019-8a62-1bee20d3b186'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:15 np0005465604 nova_compute[260603]: 2025-10-02 09:14:15.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:15 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:15Z|01623|binding|INFO|Releasing lport 3ba778b6-61e6-4019-8a62-1bee20d3b186 from this chassis (sb_readonly=0)
Oct  2 05:14:15 np0005465604 nova_compute[260603]: 2025-10-02 09:14:15.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.553 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f302b50b-078a-40f3-87d8-1172d81fe604.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f302b50b-078a-40f3-87d8-1172d81fe604.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.554 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c0ae45-f754-4e4d-b330-2b3bc217669e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.555 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-f302b50b-078a-40f3-87d8-1172d81fe604
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/f302b50b-078a-40f3-87d8-1172d81fe604.pid.haproxy
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID f302b50b-078a-40f3-87d8-1172d81fe604
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:14:15 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:15.555 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'env', 'PROCESS_TAG=haproxy-f302b50b-078a-40f3-87d8-1172d81fe604', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f302b50b-078a-40f3-87d8-1172d81fe604.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:14:16 np0005465604 podman[425394]: 2025-10-02 09:14:15.941261078 +0000 UTC m=+0.038566729 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:14:16 np0005465604 podman[425394]: 2025-10-02 09:14:16.179936865 +0000 UTC m=+0.277242466 container create 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 05:14:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:14:16 np0005465604 systemd[1]: Started libpod-conmon-6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64.scope.
Oct  2 05:14:16 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:14:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4deff7f3e4521b9208a36798e9ac94c4d2e8879828943b12e08aea6816e2d9bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:14:16 np0005465604 podman[425394]: 2025-10-02 09:14:16.334659714 +0000 UTC m=+0.431965305 container init 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:14:16 np0005465604 podman[425394]: 2025-10-02 09:14:16.341872728 +0000 UTC m=+0.439178289 container start 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:14:16 np0005465604 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [NOTICE]   (425413) : New worker (425415) forked
Oct  2 05:14:16 np0005465604 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [NOTICE]   (425413) : Loading success.
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.531 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.532 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.532 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.533 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.533 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No event matching network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a in dict_keys([('network-vif-plugged', '6f4bc2ea-2d5e-4fbb-95ce-ada64748d460')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.534 2 WARNING nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received unexpected event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a for instance with vm_state building and task_state spawning.#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.534 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.535 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.535 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.536 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.536 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Processing event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.536 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.537 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.537 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.538 2 DEBUG oslo_concurrency.lockutils [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.538 2 DEBUG nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No waiting events found dispatching network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.539 2 WARNING nova.compute.manager [req-f0cc2aba-e624-42cc-8776-903acc5d173b req-a9f6e577-7628-4797-955b-4578d0889780 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received unexpected event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.540 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.545 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396456.545263, 7293bf39-223f-4668-bd0f-c65476fac3e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.546 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.549 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.554 2 INFO nova.virt.libvirt.driver [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance spawned successfully.#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.555 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.567 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.578 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.588 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.589 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.590 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.591 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.592 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.593 2 DEBUG nova.virt.libvirt.driver [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.616 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.675 2 INFO nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Took 13.77 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.676 2 DEBUG nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.749 2 INFO nova.compute.manager [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Took 14.66 seconds to build instance.#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.765 2 DEBUG oslo_concurrency.lockutils [None req-2b1ee4bf-9b61-48e1-a31a-81a0145d8d99 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:16 np0005465604 nova_compute[260603]: 2025-10-02 09:14:16.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:17 np0005465604 nova_compute[260603]: 2025-10-02 09:14:17.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:17 np0005465604 nova_compute[260603]: 2025-10-02 09:14:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:17 np0005465604 nova_compute[260603]: 2025-10-02 09:14:17.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 05:14:17 np0005465604 nova_compute[260603]: 2025-10-02 09:14:17.535 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 05:14:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct  2 05:14:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:19 np0005465604 nova_compute[260603]: 2025-10-02 09:14:19.537 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 59 op/s
Oct  2 05:14:21 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:21Z|01624|binding|INFO|Releasing lport 3ba778b6-61e6-4019-8a62-1bee20d3b186 from this chassis (sb_readonly=0)
Oct  2 05:14:21 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:21Z|01625|binding|INFO|Releasing lport 698ce34e-6a9d-4a50-8426-c137ad35d6fb from this chassis (sb_readonly=0)
Oct  2 05:14:21 np0005465604 NetworkManager[45129]: <info>  [1759396461.5357] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Oct  2 05:14:21 np0005465604 NetworkManager[45129]: <info>  [1759396461.5366] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Oct  2 05:14:21 np0005465604 nova_compute[260603]: 2025-10-02 09:14:21.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:21 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:21Z|01626|binding|INFO|Releasing lport 3ba778b6-61e6-4019-8a62-1bee20d3b186 from this chassis (sb_readonly=0)
Oct  2 05:14:21 np0005465604 nova_compute[260603]: 2025-10-02 09:14:21.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:21 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:21Z|01627|binding|INFO|Releasing lport 698ce34e-6a9d-4a50-8426-c137ad35d6fb from this chassis (sb_readonly=0)
Oct  2 05:14:21 np0005465604 nova_compute[260603]: 2025-10-02 09:14:21.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:21 np0005465604 nova_compute[260603]: 2025-10-02 09:14:21.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:21 np0005465604 nova_compute[260603]: 2025-10-02 09:14:21.929 2 DEBUG nova.compute.manager [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:21 np0005465604 nova_compute[260603]: 2025-10-02 09:14:21.930 2 DEBUG nova.compute.manager [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing instance network info cache due to event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:14:21 np0005465604 nova_compute[260603]: 2025-10-02 09:14:21.931 2 DEBUG oslo_concurrency.lockutils [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:14:21 np0005465604 nova_compute[260603]: 2025-10-02 09:14:21.932 2 DEBUG oslo_concurrency.lockutils [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:14:21 np0005465604 nova_compute[260603]: 2025-10-02 09:14:21.932 2 DEBUG nova.network.neutron [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing network info cache for port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:14:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:14:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/317323458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:14:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:14:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/317323458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:14:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 59 op/s
Oct  2 05:14:22 np0005465604 nova_compute[260603]: 2025-10-02 09:14:22.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:23 np0005465604 nova_compute[260603]: 2025-10-02 09:14:23.314 2 DEBUG nova.network.neutron [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updated VIF entry in instance network info cache for port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:14:23 np0005465604 nova_compute[260603]: 2025-10-02 09:14:23.315 2 DEBUG nova.network.neutron [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:14:23 np0005465604 nova_compute[260603]: 2025-10-02 09:14:23.339 2 DEBUG oslo_concurrency.lockutils [req-3ba83c1a-2543-4b12-abf8-1365fb8cb553 req-4136372a-1aa2-4281-a906-0f0a95b5cef9 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:14:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:24 np0005465604 podman[425426]: 2025-10-02 09:14:24.027574286 +0000 UTC m=+0.072484813 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 05:14:24 np0005465604 podman[425425]: 2025-10-02 09:14:24.027617438 +0000 UTC m=+0.085333883 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 05:14:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:14:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 88 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:14:26 np0005465604 nova_compute[260603]: 2025-10-02 09:14:26.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:27 np0005465604 nova_compute[260603]: 2025-10-02 09:14:27.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:14:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:14:28
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'images', 'backups', 'cephfs.cephfs.meta', '.mgr', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log']
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 93 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 696 KiB/s wr, 87 op/s
Oct  2 05:14:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:14:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:14:28 np0005465604 podman[425473]: 2025-10-02 09:14:28.996768808 +0000 UTC m=+0.066583690 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 05:14:28 np0005465604 podman[425472]: 2025-10-02 09:14:28.99681406 +0000 UTC m=+0.065806956 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 05:14:29 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:29Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:2b:06 10.100.0.10
Oct  2 05:14:29 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:29Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:2b:06 10.100.0.10
Oct  2 05:14:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 93 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 684 KiB/s wr, 28 op/s
Oct  2 05:14:31 np0005465604 nova_compute[260603]: 2025-10-02 09:14:31.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 93 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 462 KiB/s rd, 684 KiB/s wr, 28 op/s
Oct  2 05:14:32 np0005465604 nova_compute[260603]: 2025-10-02 09:14:32.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:33 np0005465604 nova_compute[260603]: 2025-10-02 09:14:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 743 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Oct  2 05:14:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:34.851 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:34.852 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:34.854 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 05:14:36 np0005465604 nova_compute[260603]: 2025-10-02 09:14:36.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:37 np0005465604 nova_compute[260603]: 2025-10-02 09:14:37.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct  2 05:14:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:14:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:14:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct  2 05:14:40 np0005465604 nova_compute[260603]: 2025-10-02 09:14:40.540 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:40 np0005465604 nova_compute[260603]: 2025-10-02 09:14:40.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:14:40 np0005465604 nova_compute[260603]: 2025-10-02 09:14:40.541 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:14:40 np0005465604 nova_compute[260603]: 2025-10-02 09:14:40.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 05:14:41 np0005465604 nova_compute[260603]: 2025-10-02 09:14:41.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct  2 05:14:42 np0005465604 nova_compute[260603]: 2025-10-02 09:14:42.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct  2 05:14:45 np0005465604 nova_compute[260603]: 2025-10-02 09:14:45.087 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:45 np0005465604 nova_compute[260603]: 2025-10-02 09:14:45.088 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:45 np0005465604 nova_compute[260603]: 2025-10-02 09:14:45.194 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:14:45 np0005465604 nova_compute[260603]: 2025-10-02 09:14:45.308 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:45 np0005465604 nova_compute[260603]: 2025-10-02 09:14:45.309 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:45 np0005465604 nova_compute[260603]: 2025-10-02 09:14:45.318 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:14:45 np0005465604 nova_compute[260603]: 2025-10-02 09:14:45.318 2 INFO nova.compute.claims [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:14:45 np0005465604 nova_compute[260603]: 2025-10-02 09:14:45.674 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:14:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4194959007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.112 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.122 2 DEBUG nova.compute.provider_tree [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.142 2 DEBUG nova.scheduler.client.report [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.173 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.174 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:14:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.241 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.241 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.277 2 INFO nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.300 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.416 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.417 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.417 2 INFO nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Creating image(s)#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.439 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.460 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.481 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.484 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.584 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.585 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.585 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.586 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.610 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.613 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.932 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:46 np0005465604 nova_compute[260603]: 2025-10-02 09:14:46.981 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:14:47 np0005465604 nova_compute[260603]: 2025-10-02 09:14:47.060 2 DEBUG nova.objects.instance [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 6acf9ec4-afe0-4ef6-b857-246ad87fe800 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:14:47 np0005465604 nova_compute[260603]: 2025-10-02 09:14:47.064 2 DEBUG nova.policy [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:14:47 np0005465604 nova_compute[260603]: 2025-10-02 09:14:47.081 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:14:47 np0005465604 nova_compute[260603]: 2025-10-02 09:14:47.081 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Ensure instance console log exists: /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:14:47 np0005465604 nova_compute[260603]: 2025-10-02 09:14:47.082 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:47 np0005465604 nova_compute[260603]: 2025-10-02 09:14:47.082 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:47 np0005465604 nova_compute[260603]: 2025-10-02 09:14:47.082 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:47 np0005465604 nova_compute[260603]: 2025-10-02 09:14:47.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:48 np0005465604 nova_compute[260603]: 2025-10-02 09:14:48.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:48.072 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:14:48 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:48.073 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:14:48 np0005465604 nova_compute[260603]: 2025-10-02 09:14:48.150 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Successfully created port: 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:14:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 167 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:14:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:49 np0005465604 nova_compute[260603]: 2025-10-02 09:14:49.090 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Successfully created port: 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:14:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:50.076 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 167 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:14:50 np0005465604 nova_compute[260603]: 2025-10-02 09:14:50.276 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Successfully updated port: 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:14:50 np0005465604 nova_compute[260603]: 2025-10-02 09:14:50.370 2 DEBUG nova.compute.manager [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:50 np0005465604 nova_compute[260603]: 2025-10-02 09:14:50.370 2 DEBUG nova.compute.manager [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing instance network info cache due to event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:14:50 np0005465604 nova_compute[260603]: 2025-10-02 09:14:50.371 2 DEBUG oslo_concurrency.lockutils [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:14:50 np0005465604 nova_compute[260603]: 2025-10-02 09:14:50.371 2 DEBUG oslo_concurrency.lockutils [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:14:50 np0005465604 nova_compute[260603]: 2025-10-02 09:14:50.371 2 DEBUG nova.network.neutron [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing network info cache for port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:14:50 np0005465604 nova_compute[260603]: 2025-10-02 09:14:50.583 2 DEBUG nova.network.neutron [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:14:51 np0005465604 nova_compute[260603]: 2025-10-02 09:14:51.337 2 DEBUG nova.network.neutron [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:14:51 np0005465604 nova_compute[260603]: 2025-10-02 09:14:51.363 2 DEBUG oslo_concurrency.lockutils [req-e6a9e51d-4024-419e-abd7-ddb63b1f0277 req-95f76b2b-f405-4457-a13d-f5633f24d87d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:14:51 np0005465604 nova_compute[260603]: 2025-10-02 09:14:51.418 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Successfully updated port: 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:14:51 np0005465604 nova_compute[260603]: 2025-10-02 09:14:51.437 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:14:51 np0005465604 nova_compute[260603]: 2025-10-02 09:14:51.438 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:14:51 np0005465604 nova_compute[260603]: 2025-10-02 09:14:51.438 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:14:51 np0005465604 nova_compute[260603]: 2025-10-02 09:14:51.573 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:14:51 np0005465604 nova_compute[260603]: 2025-10-02 09:14:51.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 167 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:14:52 np0005465604 nova_compute[260603]: 2025-10-02 09:14:52.444 2 DEBUG nova.compute.manager [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-changed-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:52 np0005465604 nova_compute[260603]: 2025-10-02 09:14:52.444 2 DEBUG nova.compute.manager [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing instance network info cache due to event network-changed-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:14:52 np0005465604 nova_compute[260603]: 2025-10-02 09:14:52.444 2 DEBUG oslo_concurrency.lockutils [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:14:52 np0005465604 nova_compute[260603]: 2025-10-02 09:14:52.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.228 2 DEBUG nova.network.neutron [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.249 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.250 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance network_info: |[{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.250 2 DEBUG oslo_concurrency.lockutils [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.251 2 DEBUG nova.network.neutron [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing network info cache for port 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.253 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Start _get_guest_xml network_info=[{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.258 2 WARNING nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.263 2 DEBUG nova.virt.libvirt.host [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.264 2 DEBUG nova.virt.libvirt.host [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.270 2 DEBUG nova.virt.libvirt.host [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.271 2 DEBUG nova.virt.libvirt.host [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.271 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.271 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.272 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.272 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.272 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.272 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.273 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.273 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.273 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.273 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.273 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.274 2 DEBUG nova.virt.hardware [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.276 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:14:53 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/440182644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.724 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.767 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:53 np0005465604 nova_compute[260603]: 2025-10-02 09:14:53.772 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:14:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:14:54 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178415046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.230 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.232 2 DEBUG nova.virt.libvirt.vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:46Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.233 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.234 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.235 2 DEBUG nova.virt.libvirt.vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:46Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.236 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.237 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.239 2 DEBUG nova.objects.instance [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6acf9ec4-afe0-4ef6-b857-246ad87fe800 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.256 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  <uuid>6acf9ec4-afe0-4ef6-b857-246ad87fe800</uuid>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  <name>instance-00000092</name>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-847780468</nova:name>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:14:53</nova:creationTime>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <nova:port uuid="7304fbbd-4ecf-4fd7-95ea-9dba30ee6456">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <nova:port uuid="3b0dd52d-fd1e-4e15-a6b5-ef4735fde479">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fef8:8ec" ipVersion="6"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <entry name="serial">6acf9ec4-afe0-4ef6-b857-246ad87fe800</entry>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <entry name="uuid">6acf9ec4-afe0-4ef6-b857-246ad87fe800</entry>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:2e:55:21"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <target dev="tap7304fbbd-4e"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:f8:08:ec"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <target dev="tap3b0dd52d-fd"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/console.log" append="off"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:14:54 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:14:54 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:14:54 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:14:54 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.258 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Preparing to wait for external event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.259 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.260 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.260 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.261 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Preparing to wait for external event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.261 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.262 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.262 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.264 2 DEBUG nova.virt.libvirt.vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:46Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.265 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.266 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.267 2 DEBUG os_vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.275 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7304fbbd-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.276 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7304fbbd-4e, col_values=(('external_ids', {'iface-id': '7304fbbd-4ecf-4fd7-95ea-9dba30ee6456', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:55:21', 'vm-uuid': '6acf9ec4-afe0-4ef6-b857-246ad87fe800'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:54 np0005465604 NetworkManager[45129]: <info>  [1759396494.2808] manager: (tap7304fbbd-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.292 2 INFO os_vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e')#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.294 2 DEBUG nova.virt.libvirt.vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:14:46Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.294 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.296 2 DEBUG nova.network.os_vif_util [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.297 2 DEBUG os_vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.299 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.302 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b0dd52d-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.303 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b0dd52d-fd, col_values=(('external_ids', {'iface-id': '3b0dd52d-fd1e-4e15-a6b5-ef4735fde479', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:08:ec', 'vm-uuid': '6acf9ec4-afe0-4ef6-b857-246ad87fe800'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:54 np0005465604 NetworkManager[45129]: <info>  [1759396494.3062] manager: (tap3b0dd52d-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/663)
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.315 2 INFO os_vif [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd')#033[00m
Oct  2 05:14:54 np0005465604 podman[425767]: 2025-10-02 09:14:54.412564375 +0000 UTC m=+0.047708374 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 05:14:54 np0005465604 podman[425765]: 2025-10-02 09:14:54.453445485 +0000 UTC m=+0.087352996 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.453 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.453 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.453 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:2e:55:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.454 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:f8:08:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.454 2 INFO nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Using config drive#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.476 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.574 2 DEBUG nova.network.neutron [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updated VIF entry in instance network info cache for port 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.575 2 DEBUG nova.network.neutron [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.599 2 DEBUG oslo_concurrency.lockutils [req-806dcc0c-b844-468c-b3e8-8d2e0b854b7c req-4a020b50-9793-44f0-bd0f-4189d6d97adb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.860 2 INFO nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Creating config drive at /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config#033[00m
Oct  2 05:14:54 np0005465604 nova_compute[260603]: 2025-10-02 09:14:54.864 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpah_07gqi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:55 np0005465604 nova_compute[260603]: 2025-10-02 09:14:55.029 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpah_07gqi" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:55 np0005465604 nova_compute[260603]: 2025-10-02 09:14:55.052 2 DEBUG nova.storage.rbd_utils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:14:55 np0005465604 nova_compute[260603]: 2025-10-02 09:14:55.055 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.046 2 DEBUG oslo_concurrency.processutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config 6acf9ec4-afe0-4ef6-b857-246ad87fe800_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.991s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.047 2 INFO nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Deleting local config drive /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800/disk.config because it was imported into RBD.#033[00m
Oct  2 05:14:56 np0005465604 NetworkManager[45129]: <info>  [1759396496.1128] manager: (tap7304fbbd-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/664)
Oct  2 05:14:56 np0005465604 kernel: tap7304fbbd-4e: entered promiscuous mode
Oct  2 05:14:56 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:56Z|01628|binding|INFO|Claiming lport 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 for this chassis.
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:56 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:56Z|01629|binding|INFO|7304fbbd-4ecf-4fd7-95ea-9dba30ee6456: Claiming fa:16:3e:2e:55:21 10.100.0.13
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.173 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:55:21 10.100.0.13'], port_security=['fa:16:3e:2e:55:21 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6acf9ec4-afe0-4ef6-b857-246ad87fe800', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6339cab-fbb4-4887-8953-252cca735cc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80b3bab0-2229-4a60-832f-071c3bc1d0ec, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.174 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 in datapath c6339cab-fbb4-4887-8953-252cca735cc6 bound to our chassis#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.175 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6339cab-fbb4-4887-8953-252cca735cc6#033[00m
Oct  2 05:14:56 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:56Z|01630|binding|INFO|Setting lport 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 ovn-installed in OVS
Oct  2 05:14:56 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:56Z|01631|binding|INFO|Setting lport 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 up in Southbound
Oct  2 05:14:56 np0005465604 NetworkManager[45129]: <info>  [1759396496.1845] manager: (tap3b0dd52d-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:56 np0005465604 systemd-udevd[425887]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:14:56 np0005465604 systemd-udevd[425889]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:14:56 np0005465604 kernel: tap3b0dd52d-fd: entered promiscuous mode
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.194 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[394f01c5-e997-433c-90d0-a2cc9089c305]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:56Z|01632|binding|INFO|Claiming lport 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 for this chassis.
Oct  2 05:14:56 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:56Z|01633|binding|INFO|3b0dd52d-fd1e-4e15-a6b5-ef4735fde479: Claiming fa:16:3e:f8:08:ec 2001:db8::f816:3eff:fef8:8ec
Oct  2 05:14:56 np0005465604 NetworkManager[45129]: <info>  [1759396496.2042] device (tap7304fbbd-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:14:56 np0005465604 NetworkManager[45129]: <info>  [1759396496.2058] device (tap7304fbbd-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:14:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.206 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:08:ec 2001:db8::f816:3eff:fef8:8ec'], port_security=['fa:16:3e:f8:08:ec 2001:db8::f816:3eff:fef8:8ec'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef8:8ec/64', 'neutron:device_id': '6acf9ec4-afe0-4ef6-b857-246ad87fe800', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f302b50b-078a-40f3-87d8-1172d81fe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82aa0caa-5e65-4ef0-b1d6-b9e910e6cadb, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:14:56 np0005465604 NetworkManager[45129]: <info>  [1759396496.2118] device (tap3b0dd52d-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:14:56 np0005465604 NetworkManager[45129]: <info>  [1759396496.2133] device (tap3b0dd52d-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:56 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:56Z|01634|binding|INFO|Setting lport 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 ovn-installed in OVS
Oct  2 05:14:56 np0005465604 ovn_controller[152344]: 2025-10-02T09:14:56Z|01635|binding|INFO|Setting lport 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 up in Southbound
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:56 np0005465604 systemd-machined[214636]: New machine qemu-180-instance-00000092.
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.235 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[44f723b7-6ea6-4cd9-85bf-9a9ad7518c54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.237 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e16b5a6c-312a-4f55-a01a-4c55d2e5c5fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 systemd[1]: Started Virtual Machine qemu-180-instance-00000092.
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.267 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7e75d86a-5b40-4627-8499-57189fd04573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.294 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[07ede62c-455d-45d7-aca1-10b12fa18d44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6339cab-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:e0:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733293, 'reachable_time': 28730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425903, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.308 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d4aa30-0528-4226-8d06-474e14305882]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6339cab-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733308, 'tstamp': 733308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425907, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6339cab-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733312, 'tstamp': 733312}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425907, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.309 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6339cab-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.312 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6339cab-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.312 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.313 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6339cab-f0, col_values=(('external_ids', {'iface-id': '698ce34e-6a9d-4a50-8426-c137ad35d6fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.313 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.314 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 in datapath f302b50b-078a-40f3-87d8-1172d81fe604 unbound from our chassis#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.315 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f302b50b-078a-40f3-87d8-1172d81fe604#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.334 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d329b8ec-2735-417f-8f71-0f4268e5ad04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.364 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7b904eb6-7891-42a9-9e94-1cfdc42de2d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.367 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[829406e2-91a7-4628-9fd1-274e913dc7f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.392 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0896d6-1d35-4807-a135-77650677cb46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.408 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2629279-d0d6-4eb1-879d-4e026af79023]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf302b50b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:c1:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1872, 'tx_bytes': 398, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733397, 'reachable_time': 26509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425913, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.421 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[816cf3f9-8185-4d5a-bbb6-b9fc20b6cb31]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf302b50b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733410, 'tstamp': 733410}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425914, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.423 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf302b50b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.426 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf302b50b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.426 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.426 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf302b50b-00, col_values=(('external_ids', {'iface-id': '3ba778b6-61e6-4019-8a62-1bee20d3b186'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:14:56 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:14:56.426 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.524 2 DEBUG nova.compute.manager [req-709f02aa-fc10-4b3c-b087-6974f830c0a1 req-66c86f34-2953-4619-91eb-5f11305d795d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.527 2 DEBUG oslo_concurrency.lockutils [req-709f02aa-fc10-4b3c-b087-6974f830c0a1 req-66c86f34-2953-4619-91eb-5f11305d795d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.527 2 DEBUG oslo_concurrency.lockutils [req-709f02aa-fc10-4b3c-b087-6974f830c0a1 req-66c86f34-2953-4619-91eb-5f11305d795d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.528 2 DEBUG oslo_concurrency.lockutils [req-709f02aa-fc10-4b3c-b087-6974f830c0a1 req-66c86f34-2953-4619-91eb-5f11305d795d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.529 2 DEBUG nova.compute.manager [req-709f02aa-fc10-4b3c-b087-6974f830c0a1 req-66c86f34-2953-4619-91eb-5f11305d795d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Processing event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:14:56 np0005465604 nova_compute[260603]: 2025-10-02 09:14:56.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:57 np0005465604 nova_compute[260603]: 2025-10-02 09:14:57.321 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396497.3203073, 6acf9ec4-afe0-4ef6-b857-246ad87fe800 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:14:57 np0005465604 nova_compute[260603]: 2025-10-02 09:14:57.321 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] VM Started (Lifecycle Event)#033[00m
Oct  2 05:14:57 np0005465604 nova_compute[260603]: 2025-10-02 09:14:57.352 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:14:57 np0005465604 nova_compute[260603]: 2025-10-02 09:14:57.357 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396497.3207018, 6acf9ec4-afe0-4ef6-b857-246ad87fe800 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:14:57 np0005465604 nova_compute[260603]: 2025-10-02 09:14:57.357 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:14:57 np0005465604 nova_compute[260603]: 2025-10-02 09:14:57.376 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:14:57 np0005465604 nova_compute[260603]: 2025-10-02 09:14:57.381 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:14:57 np0005465604 nova_compute[260603]: 2025-10-02 09:14:57.403 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:14:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:14:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct  2 05:14:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.602 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.602 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.602 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.602 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.603 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No event matching network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 in dict_keys([('network-vif-plugged', '3b0dd52d-fd1e-4e15-a6b5-ef4735fde479')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.603 2 WARNING nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received unexpected event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.603 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.603 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.604 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.604 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.604 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Processing event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.604 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.604 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.605 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.605 2 DEBUG oslo_concurrency.lockutils [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.605 2 DEBUG nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No waiting events found dispatching network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.605 2 WARNING nova.compute.manager [req-6d70c612-ef0e-4ded-9150-ee57556cf8fc req-d0d11ba9-69fd-440c-8b6e-469f1deb98bd 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received unexpected event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.606 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.609 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396498.6085684, 6acf9ec4-afe0-4ef6-b857-246ad87fe800 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.610 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.611 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.615 2 INFO nova.virt.libvirt.driver [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance spawned successfully.#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.615 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.632 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.638 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.642 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.642 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.643 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.643 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.644 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.644 2 DEBUG nova.virt.libvirt.driver [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.673 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.755 2 INFO nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Took 12.34 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.763 2 DEBUG nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.901 2 INFO nova.compute.manager [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Took 13.62 seconds to build instance.#033[00m
Oct  2 05:14:58 np0005465604 nova_compute[260603]: 2025-10-02 09:14:58.946 2 DEBUG oslo_concurrency.lockutils [None req-9f7d1c0d-c9d6-4a58-a928-c590a1fb592d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:14:59 np0005465604 nova_compute[260603]: 2025-10-02 09:14:59.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:14:59 np0005465604 podman[425959]: 2025-10-02 09:14:59.996102938 +0000 UTC m=+0.064793485 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct  2 05:15:00 np0005465604 podman[425958]: 2025-10-02 09:15:00.000547935 +0000 UTC m=+0.068600182 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 05:15:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct  2 05:15:01 np0005465604 nova_compute[260603]: 2025-10-02 09:15:01.546 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:15:01 np0005465604 nova_compute[260603]: 2025-10-02 09:15:01.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 10 op/s
Oct  2 05:15:03 np0005465604 nova_compute[260603]: 2025-10-02 09:15:03.338 2 DEBUG nova.compute.manager [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:03 np0005465604 nova_compute[260603]: 2025-10-02 09:15:03.339 2 DEBUG nova.compute.manager [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing instance network info cache due to event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:15:03 np0005465604 nova_compute[260603]: 2025-10-02 09:15:03.340 2 DEBUG oslo_concurrency.lockutils [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:15:03 np0005465604 nova_compute[260603]: 2025-10-02 09:15:03.341 2 DEBUG oslo_concurrency.lockutils [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:15:03 np0005465604 nova_compute[260603]: 2025-10-02 09:15:03.341 2 DEBUG nova.network.neutron [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing network info cache for port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:15:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:15:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Oct  2 05:15:04 np0005465604 nova_compute[260603]: 2025-10-02 09:15:04.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:04 np0005465604 nova_compute[260603]: 2025-10-02 09:15:04.664 2 DEBUG nova.network.neutron [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updated VIF entry in instance network info cache for port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:15:04 np0005465604 nova_compute[260603]: 2025-10-02 09:15:04.664 2 DEBUG nova.network.neutron [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:15:04 np0005465604 nova_compute[260603]: 2025-10-02 09:15:04.696 2 DEBUG oslo_concurrency.lockutils [req-840d3f78-286e-42cf-8c22-db974fc36c20 req-da795135-148f-47f8-ae3e-34b7d4e8ff40 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:15:05 np0005465604 nova_compute[260603]: 2025-10-02 09:15:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:15:05 np0005465604 nova_compute[260603]: 2025-10-02 09:15:05.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:15:05 np0005465604 nova_compute[260603]: 2025-10-02 09:15:05.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:15:06 np0005465604 nova_compute[260603]: 2025-10-02 09:15:06.019 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:15:06 np0005465604 nova_compute[260603]: 2025-10-02 09:15:06.020 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:15:06 np0005465604 nova_compute[260603]: 2025-10-02 09:15:06.020 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 05:15:06 np0005465604 nova_compute[260603]: 2025-10-02 09:15:06.020 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7293bf39-223f-4668-bd0f-c65476fac3e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:15:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct  2 05:15:06 np0005465604 nova_compute[260603]: 2025-10-02 09:15:06.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct  2 05:15:08 np0005465604 nova_compute[260603]: 2025-10-02 09:15:08.356 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:15:08 np0005465604 nova_compute[260603]: 2025-10-02 09:15:08.399 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:15:08 np0005465604 nova_compute[260603]: 2025-10-02 09:15:08.400 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 05:15:08 np0005465604 nova_compute[260603]: 2025-10-02 09:15:08.400 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:15:08 np0005465604 nova_compute[260603]: 2025-10-02 09:15:08.400 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:15:08 np0005465604 nova_compute[260603]: 2025-10-02 09:15:08.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:15:08 np0005465604 nova_compute[260603]: 2025-10-02 09:15:08.552 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:08 np0005465604 nova_compute[260603]: 2025-10-02 09:15:08.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:08 np0005465604 nova_compute[260603]: 2025-10-02 09:15:08.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:08 np0005465604 nova_compute[260603]: 2025-10-02 09:15:08.553 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:15:08 np0005465604 nova_compute[260603]: 2025-10-02 09:15:08.554 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:15:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3905090160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.017 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.138 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.138 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.141 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.142 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.369 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.371 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3190MB free_disk=59.921817779541016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.371 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.371 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.556454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509556516, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 3407360, "memory_usage": 3464976, "flush_reason": "Manual Compaction"}
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.608 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 7293bf39-223f-4668-bd0f-c65476fac3e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.609 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 6acf9ec4-afe0-4ef6-b857-246ad87fe800 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.610 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.610 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.631 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509647309, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3341124, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59006, "largest_seqno": 61049, "table_properties": {"data_size": 3331800, "index_size": 5881, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18702, "raw_average_key_size": 20, "raw_value_size": 3313375, "raw_average_value_size": 3562, "num_data_blocks": 262, "num_entries": 930, "num_filter_entries": 930, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396283, "oldest_key_time": 1759396283, "file_creation_time": 1759396509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 90917 microseconds, and 7676 cpu microseconds.
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.654 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.655 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.647378) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3341124 bytes OK
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.647398) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.655761) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.655792) EVENT_LOG_v1 {"time_micros": 1759396509655784, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.655814) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3398801, prev total WAL file size 3398801, number of live WAL files 2.
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.656824) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3262KB)], [140(7937KB)]
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509656873, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11468745, "oldest_snapshot_seqno": -1}
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.677 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.706 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 05:15:09 np0005465604 nova_compute[260603]: 2025-10-02 09:15:09.772 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8040 keys, 9752794 bytes, temperature: kUnknown
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509783797, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 9752794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9701576, "index_size": 29985, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20165, "raw_key_size": 209474, "raw_average_key_size": 26, "raw_value_size": 9560739, "raw_average_value_size": 1189, "num_data_blocks": 1165, "num_entries": 8040, "num_filter_entries": 8040, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.784049) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 9752794 bytes
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.892923) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.3 rd, 76.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.8 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 8554, records dropped: 514 output_compression: NoCompression
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.892971) EVENT_LOG_v1 {"time_micros": 1759396509892953, "job": 86, "event": "compaction_finished", "compaction_time_micros": 127003, "compaction_time_cpu_micros": 28620, "output_level": 6, "num_output_files": 1, "total_output_size": 9752794, "num_input_records": 8554, "num_output_records": 8040, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509894187, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396509896583, "job": 86, "event": "table_file_deletion", "file_number": 140}
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.656710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.896629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.896634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.896638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.896641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:09 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:09.896644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Oct  2 05:15:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:15:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039907010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:15:10 np0005465604 nova_compute[260603]: 2025-10-02 09:15:10.235 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:15:10 np0005465604 nova_compute[260603]: 2025-10-02 09:15:10.245 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:15:10 np0005465604 nova_compute[260603]: 2025-10-02 09:15:10.273 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:15:10 np0005465604 nova_compute[260603]: 2025-10-02 09:15:10.304 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:15:10 np0005465604 nova_compute[260603]: 2025-10-02 09:15:10.304 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:11 np0005465604 nova_compute[260603]: 2025-10-02 09:15:11.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:12 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:12Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:55:21 10.100.0.13
Oct  2 05:15:12 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:12Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:55:21 10.100.0.13
Oct  2 05:15:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Oct  2 05:15:13 np0005465604 nova_compute[260603]: 2025-10-02 09:15:13.304 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:15:13 np0005465604 nova_compute[260603]: 2025-10-02 09:15:13.305 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:15:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:15:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct  2 05:15:14 np0005465604 nova_compute[260603]: 2025-10-02 09:15:14.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:15 np0005465604 nova_compute[260603]: 2025-10-02 09:15:15.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:15:15 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a4f2322c-317e-48ff-a0e3-498caba5993d does not exist
Oct  2 05:15:15 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c62e123c-fd4d-46a7-b11f-d376c29d5ec7 does not exist
Oct  2 05:15:15 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ed5d3ddb-9040-4a47-8dd4-df1c5b53d509 does not exist
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:15:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:15:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct  2 05:15:16 np0005465604 podman[426319]: 2025-10-02 09:15:16.222194895 +0000 UTC m=+0.090014908 container create 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:15:16 np0005465604 podman[426319]: 2025-10-02 09:15:16.159907359 +0000 UTC m=+0.027727392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:15:16 np0005465604 systemd[1]: Started libpod-conmon-26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8.scope.
Oct  2 05:15:16 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:15:16 np0005465604 podman[426319]: 2025-10-02 09:15:16.353563138 +0000 UTC m=+0.221383191 container init 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 05:15:16 np0005465604 podman[426319]: 2025-10-02 09:15:16.360309747 +0000 UTC m=+0.228129750 container start 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 05:15:16 np0005465604 romantic_lederberg[426335]: 167 167
Oct  2 05:15:16 np0005465604 systemd[1]: libpod-26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8.scope: Deactivated successfully.
Oct  2 05:15:16 np0005465604 podman[426319]: 2025-10-02 09:15:16.376832181 +0000 UTC m=+0.244652204 container attach 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 05:15:16 np0005465604 podman[426319]: 2025-10-02 09:15:16.377642377 +0000 UTC m=+0.245462390 container died 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 05:15:16 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:15:16 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:15:16 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:15:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e6ac432dd7e71105e4f56ccd3b9123e817b9a6b2d6a2005b5b49154f8befe4fe-merged.mount: Deactivated successfully.
Oct  2 05:15:16 np0005465604 podman[426319]: 2025-10-02 09:15:16.45917712 +0000 UTC m=+0.326997123 container remove 26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lederberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 05:15:16 np0005465604 systemd[1]: libpod-conmon-26c3882df6696c73c60845e2ac6223a3c6258c35265ee519b4e93976dde701b8.scope: Deactivated successfully.
Oct  2 05:15:16 np0005465604 podman[426359]: 2025-10-02 09:15:16.68219324 +0000 UTC m=+0.059986645 container create e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:15:16 np0005465604 systemd[1]: Started libpod-conmon-e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e.scope.
Oct  2 05:15:16 np0005465604 podman[426359]: 2025-10-02 09:15:16.646594924 +0000 UTC m=+0.024388319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:15:16 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:15:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:16 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:16 np0005465604 podman[426359]: 2025-10-02 09:15:16.812502379 +0000 UTC m=+0.190295774 container init e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:15:16 np0005465604 podman[426359]: 2025-10-02 09:15:16.820796427 +0000 UTC m=+0.198589812 container start e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:15:16 np0005465604 podman[426359]: 2025-10-02 09:15:16.837520107 +0000 UTC m=+0.215313512 container attach e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 05:15:16 np0005465604 nova_compute[260603]: 2025-10-02 09:15:16.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:17 np0005465604 dreamy_swanson[426376]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:15:17 np0005465604 dreamy_swanson[426376]: --> relative data size: 1.0
Oct  2 05:15:17 np0005465604 dreamy_swanson[426376]: --> All data devices are unavailable
Oct  2 05:15:17 np0005465604 systemd[1]: libpod-e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e.scope: Deactivated successfully.
Oct  2 05:15:17 np0005465604 podman[426359]: 2025-10-02 09:15:17.83035135 +0000 UTC m=+1.208144715 container died e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:15:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-12917c30fe431423d775dbb7cbe23709d42806dbb204451014bf9166b523f3ff-merged.mount: Deactivated successfully.
Oct  2 05:15:17 np0005465604 podman[426359]: 2025-10-02 09:15:17.977246415 +0000 UTC m=+1.355039780 container remove e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:15:17 np0005465604 systemd[1]: libpod-conmon-e8f0fba627f5ded745ba2cdf948bb18e4cf788a83772ca4828c9a272c06e817e.scope: Deactivated successfully.
Oct  2 05:15:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 05:15:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:15:18 np0005465604 podman[426562]: 2025-10-02 09:15:18.827997683 +0000 UTC m=+0.118262877 container create cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:15:18 np0005465604 podman[426562]: 2025-10-02 09:15:18.751162355 +0000 UTC m=+0.041427529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:15:18 np0005465604 systemd[1]: Started libpod-conmon-cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55.scope.
Oct  2 05:15:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:15:18 np0005465604 podman[426562]: 2025-10-02 09:15:18.991668209 +0000 UTC m=+0.281933463 container init cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:15:19 np0005465604 podman[426562]: 2025-10-02 09:15:19.00681391 +0000 UTC m=+0.297079104 container start cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 05:15:19 np0005465604 unruffled_bardeen[426578]: 167 167
Oct  2 05:15:19 np0005465604 systemd[1]: libpod-cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55.scope: Deactivated successfully.
Oct  2 05:15:19 np0005465604 podman[426562]: 2025-10-02 09:15:19.021070653 +0000 UTC m=+0.311335847 container attach cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:15:19 np0005465604 podman[426562]: 2025-10-02 09:15:19.021533188 +0000 UTC m=+0.311798372 container died cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 05:15:19 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4921e38935e74aec9cb9e95c6f1a4fe3f09f7ef2a569528ba9a8f9ba3b1564c4-merged.mount: Deactivated successfully.
Oct  2 05:15:19 np0005465604 podman[426562]: 2025-10-02 09:15:19.169826935 +0000 UTC m=+0.460092089 container remove cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 05:15:19 np0005465604 systemd[1]: libpod-conmon-cf8d10e00c72eb9b3596b89bd9f221f9114f936d95a856121e94a78e429e2e55.scope: Deactivated successfully.
Oct  2 05:15:19 np0005465604 nova_compute[260603]: 2025-10-02 09:15:19.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:19 np0005465604 podman[426603]: 2025-10-02 09:15:19.346503496 +0000 UTC m=+0.039628013 container create 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:15:19 np0005465604 systemd[1]: Started libpod-conmon-1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a.scope.
Oct  2 05:15:19 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:15:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198cb23d87d6d97780771738a28fa735ab97573908e55ad4ec7f35d899c2f3cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198cb23d87d6d97780771738a28fa735ab97573908e55ad4ec7f35d899c2f3cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198cb23d87d6d97780771738a28fa735ab97573908e55ad4ec7f35d899c2f3cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198cb23d87d6d97780771738a28fa735ab97573908e55ad4ec7f35d899c2f3cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:19 np0005465604 podman[426603]: 2025-10-02 09:15:19.420824326 +0000 UTC m=+0.113948853 container init 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:15:19 np0005465604 podman[426603]: 2025-10-02 09:15:19.331798329 +0000 UTC m=+0.024922876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:15:19 np0005465604 podman[426603]: 2025-10-02 09:15:19.429514456 +0000 UTC m=+0.122638973 container start 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:15:19 np0005465604 podman[426603]: 2025-10-02 09:15:19.437243866 +0000 UTC m=+0.130368403 container attach 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:15:19 np0005465604 nova_compute[260603]: 2025-10-02 09:15:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]: {
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:    "0": [
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:        {
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "devices": [
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "/dev/loop3"
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            ],
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_name": "ceph_lv0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_size": "21470642176",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "name": "ceph_lv0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "tags": {
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.cluster_name": "ceph",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.crush_device_class": "",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.encrypted": "0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.osd_id": "0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.type": "block",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.vdo": "0"
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            },
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "type": "block",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "vg_name": "ceph_vg0"
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:        }
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:    ],
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:    "1": [
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:        {
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "devices": [
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "/dev/loop4"
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            ],
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_name": "ceph_lv1",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_size": "21470642176",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "name": "ceph_lv1",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "tags": {
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.cluster_name": "ceph",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.crush_device_class": "",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.encrypted": "0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.osd_id": "1",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.type": "block",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.vdo": "0"
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            },
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "type": "block",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "vg_name": "ceph_vg1"
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:        }
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:    ],
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:    "2": [
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:        {
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "devices": [
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "/dev/loop5"
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            ],
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_name": "ceph_lv2",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_size": "21470642176",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "name": "ceph_lv2",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "tags": {
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.cluster_name": "ceph",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.crush_device_class": "",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.encrypted": "0",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.osd_id": "2",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.type": "block",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:                "ceph.vdo": "0"
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            },
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "type": "block",
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:            "vg_name": "ceph_vg2"
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:        }
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]:    ]
Oct  2 05:15:20 np0005465604 pedantic_wilbur[426620]: }
Oct  2 05:15:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 05:15:20 np0005465604 systemd[1]: libpod-1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a.scope: Deactivated successfully.
Oct  2 05:15:20 np0005465604 podman[426603]: 2025-10-02 09:15:20.218768842 +0000 UTC m=+0.911893359 container died 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:15:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-198cb23d87d6d97780771738a28fa735ab97573908e55ad4ec7f35d899c2f3cc-merged.mount: Deactivated successfully.
Oct  2 05:15:20 np0005465604 podman[426603]: 2025-10-02 09:15:20.280119429 +0000 UTC m=+0.973243946 container remove 1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:15:20 np0005465604 systemd[1]: libpod-conmon-1368390c67883dd1d808bdcf86062f922c2c031d6586494c41adcd8a6ca1348a.scope: Deactivated successfully.
Oct  2 05:15:20 np0005465604 podman[426782]: 2025-10-02 09:15:20.91057263 +0000 UTC m=+0.097693277 container create da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 05:15:20 np0005465604 podman[426782]: 2025-10-02 09:15:20.838696027 +0000 UTC m=+0.025816704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:15:20 np0005465604 systemd[1]: Started libpod-conmon-da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812.scope.
Oct  2 05:15:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:15:20 np0005465604 podman[426782]: 2025-10-02 09:15:20.983102394 +0000 UTC m=+0.170223081 container init da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:15:20 np0005465604 podman[426782]: 2025-10-02 09:15:20.989553595 +0000 UTC m=+0.176674252 container start da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct  2 05:15:20 np0005465604 podman[426782]: 2025-10-02 09:15:20.992649191 +0000 UTC m=+0.179769848 container attach da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 05:15:20 np0005465604 systemd[1]: libpod-da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812.scope: Deactivated successfully.
Oct  2 05:15:20 np0005465604 relaxed_franklin[426799]: 167 167
Oct  2 05:15:20 np0005465604 conmon[426799]: conmon da50ebf4a69b4c119701 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812.scope/container/memory.events
Oct  2 05:15:20 np0005465604 podman[426782]: 2025-10-02 09:15:20.996960795 +0000 UTC m=+0.184081452 container died da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:15:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d97716b88969fbb091f9e0ad7cf424fd9219eff69595cea57b08ad46720c4a26-merged.mount: Deactivated successfully.
Oct  2 05:15:21 np0005465604 podman[426782]: 2025-10-02 09:15:21.050565831 +0000 UTC m=+0.237686508 container remove da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:15:21 np0005465604 systemd[1]: libpod-conmon-da50ebf4a69b4c1197015606ed0238bac99f244fa7508075f5ca70d32d364812.scope: Deactivated successfully.
Oct  2 05:15:21 np0005465604 podman[426823]: 2025-10-02 09:15:21.229479561 +0000 UTC m=+0.045452324 container create 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:15:21 np0005465604 systemd[1]: Started libpod-conmon-4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c.scope.
Oct  2 05:15:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:15:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be750230f7ca837efa8a37ba46dc8bf276e5d96e41b65aa0b2c482bcba8d5e6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be750230f7ca837efa8a37ba46dc8bf276e5d96e41b65aa0b2c482bcba8d5e6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be750230f7ca837efa8a37ba46dc8bf276e5d96e41b65aa0b2c482bcba8d5e6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be750230f7ca837efa8a37ba46dc8bf276e5d96e41b65aa0b2c482bcba8d5e6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:15:21 np0005465604 podman[426823]: 2025-10-02 09:15:21.212223785 +0000 UTC m=+0.028196568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:15:21 np0005465604 podman[426823]: 2025-10-02 09:15:21.312843722 +0000 UTC m=+0.128816515 container init 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 05:15:21 np0005465604 podman[426823]: 2025-10-02 09:15:21.319179908 +0000 UTC m=+0.135152691 container start 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 05:15:21 np0005465604 podman[426823]: 2025-10-02 09:15:21.322954746 +0000 UTC m=+0.138927539 container attach 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.520 2 DEBUG nova.compute.manager [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.522 2 DEBUG nova.compute.manager [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing instance network info cache due to event network-changed-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.523 2 DEBUG oslo_concurrency.lockutils [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.523 2 DEBUG oslo_concurrency.lockutils [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.523 2 DEBUG nova.network.neutron [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Refreshing network info cache for port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.638 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.639 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.639 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.640 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.640 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.642 2 INFO nova.compute.manager [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Terminating instance#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.643 2 DEBUG nova.compute.manager [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:15:21 np0005465604 kernel: tap7304fbbd-4e (unregistering): left promiscuous mode
Oct  2 05:15:21 np0005465604 NetworkManager[45129]: <info>  [1759396521.7139] device (tap7304fbbd-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:15:21 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:21Z|01636|binding|INFO|Releasing lport 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 from this chassis (sb_readonly=0)
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:21Z|01637|binding|INFO|Setting lport 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 down in Southbound
Oct  2 05:15:21 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:21Z|01638|binding|INFO|Removing iface tap7304fbbd-4e ovn-installed in OVS
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 kernel: tap3b0dd52d-fd (unregistering): left promiscuous mode
Oct  2 05:15:21 np0005465604 NetworkManager[45129]: <info>  [1759396521.7428] device (tap3b0dd52d-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.746 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:55:21 10.100.0.13'], port_security=['fa:16:3e:2e:55:21 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6acf9ec4-afe0-4ef6-b857-246ad87fe800', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6339cab-fbb4-4887-8953-252cca735cc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80b3bab0-2229-4a60-832f-071c3bc1d0ec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.748 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 in datapath c6339cab-fbb4-4887-8953-252cca735cc6 unbound from our chassis#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.755 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6339cab-fbb4-4887-8953-252cca735cc6#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:21Z|01639|binding|INFO|Releasing lport 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 from this chassis (sb_readonly=0)
Oct  2 05:15:21 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:21Z|01640|binding|INFO|Setting lport 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 down in Southbound
Oct  2 05:15:21 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:21Z|01641|binding|INFO|Removing iface tap3b0dd52d-fd ovn-installed in OVS
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.771 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[822ffd17-006f-4a58-9568-3168d698e3de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.788 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:08:ec 2001:db8::f816:3eff:fef8:8ec'], port_security=['fa:16:3e:f8:08:ec 2001:db8::f816:3eff:fef8:8ec'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef8:8ec/64', 'neutron:device_id': '6acf9ec4-afe0-4ef6-b857-246ad87fe800', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f302b50b-078a-40f3-87d8-1172d81fe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82aa0caa-5e65-4ef0-b1d6-b9e910e6cadb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:15:21 np0005465604 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct  2 05:15:21 np0005465604 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Consumed 13.271s CPU time.
Oct  2 05:15:21 np0005465604 systemd-machined[214636]: Machine qemu-180-instance-00000092 terminated.
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.804 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d42ae0d3-8b76-4029-bd31-f0302d1316d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.808 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0d35e7-8ba9-47c6-a3b3-3a69cf1497fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.835 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8e1a02-650b-4aba-8056-844269b5f829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.851 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b3696a86-ad0f-4acf-909c-0b464bda591e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6339cab-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:e0:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733293, 'reachable_time': 28730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426861, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.868 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10924ebc-50e3-444d-903c-67ec927d4ab4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6339cab-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733308, 'tstamp': 733308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426862, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6339cab-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733312, 'tstamp': 733312}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426862, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.870 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6339cab-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 NetworkManager[45129]: <info>  [1759396521.8788] manager: (tap3b0dd52d-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/666)
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.888 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6339cab-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.888 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.889 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6339cab-f0, col_values=(('external_ids', {'iface-id': '698ce34e-6a9d-4a50-8426-c137ad35d6fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.889 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.890 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 in datapath f302b50b-078a-40f3-87d8-1172d81fe604 unbound from our chassis#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.893 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f302b50b-078a-40f3-87d8-1172d81fe604#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.894 2 INFO nova.virt.libvirt.driver [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Instance destroyed successfully.#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.895 2 DEBUG nova.objects.instance [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 6acf9ec4-afe0-4ef6-b857-246ad87fe800 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.914 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[17ff02f5-7940-4232-a0e6-7880c22c1471]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.918 2 DEBUG nova.virt.libvirt.vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:14:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:14:58Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.918 2 DEBUG nova.network.os_vif_util [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.919 2 DEBUG nova.network.os_vif_util [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.919 2 DEBUG os_vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7304fbbd-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.932 2 INFO os_vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:55:21,bridge_name='br-int',has_traffic_filtering=True,id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7304fbbd-4e')#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.933 2 DEBUG nova.virt.libvirt.vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:14:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-847780468',display_name='tempest-TestGettingAddress-server-847780468',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-847780468',id=146,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:14:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-cp8cov8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:14:58Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=6acf9ec4-afe0-4ef6-b857-246ad87fe800,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.933 2 DEBUG nova.network.os_vif_util [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.934 2 DEBUG nova.network.os_vif_util [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.934 2 DEBUG os_vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b0dd52d-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:21 np0005465604 nova_compute[260603]: 2025-10-02 09:15:21.941 2 INFO os_vif [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:08:ec,bridge_name='br-int',has_traffic_filtering=True,id=3b0dd52d-fd1e-4e15-a6b5-ef4735fde479,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b0dd52d-fd')#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.962 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[962b33a2-8f02-45a8-ba86-abcb34e1ac85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.965 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[96ca5f0a-2dbb-4523-82ae-2d151e9aa108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:21.991 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[ab9117a7-61bf-47f2-8c3f-caa8e9e0bbf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.006 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[601cd2f3-6d18-422e-9b3f-b5dad5fa2ffc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf302b50b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:c1:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 6, 'rx_bytes': 2912, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733397, 'reachable_time': 26509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426909, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.021 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[041ee50f-1613-4b64-9950-a9f6a6887f20]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf302b50b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733410, 'tstamp': 733410}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426910, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.023 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf302b50b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:22 np0005465604 nova_compute[260603]: 2025-10-02 09:15:22.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.025 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf302b50b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.025 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:15:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf302b50b-00, col_values=(('external_ids', {'iface-id': '3ba778b6-61e6-4019-8a62-1bee20d3b186'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:22.026 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:15:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:15:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3880506349' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:15:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:15:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3880506349' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:15:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]: {
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "osd_id": 2,
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "type": "bluestore"
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:    },
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "osd_id": 1,
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "type": "bluestore"
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:    },
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "osd_id": 0,
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:        "type": "bluestore"
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]:    }
Oct  2 05:15:22 np0005465604 mystifying_diffie[426840]: }
Oct  2 05:15:22 np0005465604 systemd[1]: libpod-4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c.scope: Deactivated successfully.
Oct  2 05:15:22 np0005465604 systemd[1]: libpod-4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c.scope: Consumed 1.030s CPU time.
Oct  2 05:15:22 np0005465604 podman[426823]: 2025-10-02 09:15:22.398727416 +0000 UTC m=+1.214700199 container died 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:15:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-be750230f7ca837efa8a37ba46dc8bf276e5d96e41b65aa0b2c482bcba8d5e6a-merged.mount: Deactivated successfully.
Oct  2 05:15:22 np0005465604 podman[426823]: 2025-10-02 09:15:22.448800162 +0000 UTC m=+1.264772925 container remove 4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:15:22 np0005465604 systemd[1]: libpod-conmon-4f29b1f856a715b570862d93b43adfd777c6d9f3aedcd5954ce8b535eabbca2c.scope: Deactivated successfully.
Oct  2 05:15:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:15:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:15:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:15:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:15:22 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5e7fca68-fdb6-4f1d-83d7-ff01018dc97a does not exist
Oct  2 05:15:22 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0aa0fdae-64db-4676-a94c-e25b3afa04ec does not exist
Oct  2 05:15:22 np0005465604 nova_compute[260603]: 2025-10-02 09:15:22.573 2 INFO nova.virt.libvirt.driver [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Deleting instance files /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800_del#033[00m
Oct  2 05:15:22 np0005465604 nova_compute[260603]: 2025-10-02 09:15:22.574 2 INFO nova.virt.libvirt.driver [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Deletion of /var/lib/nova/instances/6acf9ec4-afe0-4ef6-b857-246ad87fe800_del complete#033[00m
Oct  2 05:15:22 np0005465604 nova_compute[260603]: 2025-10-02 09:15:22.688 2 INFO nova.compute.manager [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Took 1.04 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:15:22 np0005465604 nova_compute[260603]: 2025-10-02 09:15:22.688 2 DEBUG oslo.service.loopingcall [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:15:22 np0005465604 nova_compute[260603]: 2025-10-02 09:15:22.689 2 DEBUG nova.compute.manager [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:15:22 np0005465604 nova_compute[260603]: 2025-10-02 09:15:22.689 2 DEBUG nova.network.neutron [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.071 2 DEBUG nova.network.neutron [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updated VIF entry in instance network info cache for port 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.072 2 DEBUG nova.network.neutron [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [{"id": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "address": "fa:16:3e:2e:55:21", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7304fbbd-4e", "ovs_interfaceid": "7304fbbd-4ecf-4fd7-95ea-9dba30ee6456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.098 2 DEBUG oslo_concurrency.lockutils [req-301d8b70-595c-4609-a1a0-04b67eff1671 req-d91014af-a585-43d5-9361-6e6fc792f3ef 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-6acf9ec4-afe0-4ef6-b857-246ad87fe800" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.621 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-unplugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.622 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.623 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.624 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.624 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No waiting events found dispatching network-vif-unplugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.625 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-unplugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.625 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.626 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.626 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.627 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.627 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No waiting events found dispatching network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.628 2 WARNING nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received unexpected event network-vif-plugged-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.628 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-unplugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.628 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.629 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.629 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.630 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No waiting events found dispatching network-vif-unplugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.630 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-unplugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.631 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.632 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.632 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.633 2 DEBUG oslo_concurrency.lockutils [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.633 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] No waiting events found dispatching network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.633 2 WARNING nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received unexpected event network-vif-plugged-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.634 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-deleted-7304fbbd-4ecf-4fd7-95ea-9dba30ee6456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.634 2 INFO nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Neutron deleted interface 7304fbbd-4ecf-4fd7-95ea-9dba30ee6456; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.635 2 DEBUG nova.network.neutron [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [{"id": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "address": "fa:16:3e:f8:08:ec", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef8:8ec", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b0dd52d-fd", "ovs_interfaceid": "3b0dd52d-fd1e-4e15-a6b5-ef4735fde479", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.673 2 DEBUG nova.compute.manager [req-012e493a-41b6-4ffc-a01a-44c4216f39d2 req-03107e85-61a8-4d41-ba58-64c0aefd456e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Detach interface failed, port_id=7304fbbd-4ecf-4fd7-95ea-9dba30ee6456, reason: Instance 6acf9ec4-afe0-4ef6-b857-246ad87fe800 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.674 2 DEBUG nova.network.neutron [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.683280) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523683319, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 410, "num_deletes": 257, "total_data_size": 266842, "memory_usage": 274648, "flush_reason": "Manual Compaction"}
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523687078, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 264583, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61050, "largest_seqno": 61459, "table_properties": {"data_size": 262150, "index_size": 532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5990, "raw_average_key_size": 18, "raw_value_size": 257190, "raw_average_value_size": 786, "num_data_blocks": 23, "num_entries": 327, "num_filter_entries": 327, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396510, "oldest_key_time": 1759396510, "file_creation_time": 1759396523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 3909 microseconds, and 1535 cpu microseconds.
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.687182) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 264583 bytes OK
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.687209) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.688693) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.688714) EVENT_LOG_v1 {"time_micros": 1759396523688707, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.688734) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 264219, prev total WAL file size 264219, number of live WAL files 2.
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.689270) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353035' seq:72057594037927935, type:22 .. '6C6F676D0032373538' seq:0, type:0; will stop at (end)
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(258KB)], [143(9524KB)]
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523689320, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10017377, "oldest_snapshot_seqno": -1}
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.710 2 INFO nova.compute.manager [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Took 1.02 seconds to deallocate network for instance.#033[00m
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7842 keys, 9902936 bytes, temperature: kUnknown
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523742611, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 9902936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9852222, "index_size": 29993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 206310, "raw_average_key_size": 26, "raw_value_size": 9713985, "raw_average_value_size": 1238, "num_data_blocks": 1163, "num_entries": 7842, "num_filter_entries": 7842, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.742873) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 9902936 bytes
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.744286) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.7 rd, 185.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.3 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(75.3) write-amplify(37.4) OK, records in: 8367, records dropped: 525 output_compression: NoCompression
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.744328) EVENT_LOG_v1 {"time_micros": 1759396523744310, "job": 88, "event": "compaction_finished", "compaction_time_micros": 53361, "compaction_time_cpu_micros": 27043, "output_level": 6, "num_output_files": 1, "total_output_size": 9902936, "num_input_records": 8367, "num_output_records": 7842, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523744628, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396523746425, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.689165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.746521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.746528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.746530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.746532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:15:23.746534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.757 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.757 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:23 np0005465604 nova_compute[260603]: 2025-10-02 09:15:23.832 2 DEBUG oslo_concurrency.processutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:15:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Oct  2 05:15:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:15:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/164263658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:15:24 np0005465604 nova_compute[260603]: 2025-10-02 09:15:24.284 2 DEBUG oslo_concurrency.processutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:15:24 np0005465604 nova_compute[260603]: 2025-10-02 09:15:24.293 2 DEBUG nova.compute.provider_tree [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:15:24 np0005465604 nova_compute[260603]: 2025-10-02 09:15:24.312 2 DEBUG nova.scheduler.client.report [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:15:24 np0005465604 nova_compute[260603]: 2025-10-02 09:15:24.342 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:24 np0005465604 nova_compute[260603]: 2025-10-02 09:15:24.374 2 INFO nova.scheduler.client.report [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 6acf9ec4-afe0-4ef6-b857-246ad87fe800#033[00m
Oct  2 05:15:24 np0005465604 nova_compute[260603]: 2025-10-02 09:15:24.443 2 DEBUG oslo_concurrency.lockutils [None req-e722019b-6553-483a-82ed-e84c8a35a0b1 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "6acf9ec4-afe0-4ef6-b857-246ad87fe800" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:24 np0005465604 podman[427024]: 2025-10-02 09:15:24.993703027 +0000 UTC m=+0.054410542 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 05:15:25 np0005465604 podman[427023]: 2025-10-02 09:15:25.022846962 +0000 UTC m=+0.087807890 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:15:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.157 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.159 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.365 2 DEBUG nova.compute.manager [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.365 2 DEBUG nova.compute.manager [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing instance network info cache due to event network-changed-17c0a9ac-d61a-433a-b3f3-154a8c467f5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.366 2 DEBUG oslo_concurrency.lockutils [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.366 2 DEBUG oslo_concurrency.lockutils [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.366 2 DEBUG nova.network.neutron [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Refreshing network info cache for port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.514 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.515 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.515 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.516 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.516 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.517 2 INFO nova.compute.manager [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Terminating instance#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.518 2 DEBUG nova.compute.manager [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:15:25 np0005465604 kernel: tap17c0a9ac-d6 (unregistering): left promiscuous mode
Oct  2 05:15:25 np0005465604 NetworkManager[45129]: <info>  [1759396525.5731] device (tap17c0a9ac-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:15:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:25Z|01642|binding|INFO|Releasing lport 17c0a9ac-d61a-433a-b3f3-154a8c467f5a from this chassis (sb_readonly=0)
Oct  2 05:15:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:25Z|01643|binding|INFO|Setting lport 17c0a9ac-d61a-433a-b3f3-154a8c467f5a down in Southbound
Oct  2 05:15:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:25Z|01644|binding|INFO|Removing iface tap17c0a9ac-d6 ovn-installed in OVS
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.605 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:2b:06 10.100.0.10'], port_security=['fa:16:3e:65:2b:06 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7293bf39-223f-4668-bd0f-c65476fac3e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6339cab-fbb4-4887-8953-252cca735cc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80b3bab0-2229-4a60-832f-071c3bc1d0ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=17c0a9ac-d61a-433a-b3f3-154a8c467f5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:15:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.607 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a in datapath c6339cab-fbb4-4887-8953-252cca735cc6 unbound from our chassis#033[00m
Oct  2 05:15:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.609 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6339cab-fbb4-4887-8953-252cca735cc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:15:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.610 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2073db8b-22f1-444f-a74b-b9aaf424778f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.610 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6 namespace which is not needed anymore#033[00m
Oct  2 05:15:25 np0005465604 kernel: tap6f4bc2ea-2d (unregistering): left promiscuous mode
Oct  2 05:15:25 np0005465604 NetworkManager[45129]: <info>  [1759396525.6248] device (tap6f4bc2ea-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:15:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:25Z|01645|binding|INFO|Releasing lport 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 from this chassis (sb_readonly=0)
Oct  2 05:15:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:25Z|01646|binding|INFO|Setting lport 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 down in Southbound
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:15:25Z|01647|binding|INFO|Removing iface tap6f4bc2ea-2d ovn-installed in OVS
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Deactivated successfully.
Oct  2 05:15:25 np0005465604 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Consumed 15.096s CPU time.
Oct  2 05:15:25 np0005465604 systemd-machined[214636]: Machine qemu-179-instance-00000091 terminated.
Oct  2 05:15:25 np0005465604 NetworkManager[45129]: <info>  [1759396525.7637] manager: (tap6f4bc2ea-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/667)
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:25.767 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:66:56 2001:db8::f816:3eff:fed2:6656'], port_security=['fa:16:3e:d2:66:56 2001:db8::f816:3eff:fed2:6656'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fed2:6656/64', 'neutron:device_id': '7293bf39-223f-4668-bd0f-c65476fac3e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f302b50b-078a-40f3-87d8-1172d81fe604', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cbefbcd-b288-496f-adbe-c563f4517ed7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82aa0caa-5e65-4ef0-b1d6-b9e910e6cadb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.775 2 DEBUG nova.compute.manager [req-ce5d0e7d-0d49-433f-8e84-4af602179ec6 req-abb6ab75-d48d-4d44-978d-c4c59ab1d849 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Received event network-vif-deleted-3b0dd52d-fd1e-4e15-a6b5-ef4735fde479 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.781 2 INFO nova.virt.libvirt.driver [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Instance destroyed successfully.#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.781 2 DEBUG nova.objects.instance [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 7293bf39-223f-4668-bd0f-c65476fac3e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:15:25 np0005465604 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [NOTICE]   (425341) : haproxy version is 2.8.14-c23fe91
Oct  2 05:15:25 np0005465604 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [NOTICE]   (425341) : path to executable is /usr/sbin/haproxy
Oct  2 05:15:25 np0005465604 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [WARNING]  (425341) : Exiting Master process...
Oct  2 05:15:25 np0005465604 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [WARNING]  (425341) : Exiting Master process...
Oct  2 05:15:25 np0005465604 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [ALERT]    (425341) : Current worker (425343) exited with code 143 (Terminated)
Oct  2 05:15:25 np0005465604 neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6[425337]: [WARNING]  (425341) : All workers exited. Exiting... (0)
Oct  2 05:15:25 np0005465604 systemd[1]: libpod-f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63.scope: Deactivated successfully.
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.842 2 DEBUG nova.virt.libvirt.vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:14:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:14:16Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.842 2 DEBUG nova.network.os_vif_util [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.843 2 DEBUG nova.network.os_vif_util [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:15:25 np0005465604 podman[427100]: 2025-10-02 09:15:25.843916387 +0000 UTC m=+0.109918007 container died f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.844 2 DEBUG os_vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.847 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17c0a9ac-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.855 2 INFO os_vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:2b:06,bridge_name='br-int',has_traffic_filtering=True,id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a,network=Network(c6339cab-fbb4-4887-8953-252cca735cc6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17c0a9ac-d6')#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.856 2 DEBUG nova.virt.libvirt.vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1093810609',display_name='tempest-TestGettingAddress-server-1093810609',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1093810609',id=145,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7xWJ2/w2qzfC9u138G3UzPedc4DZSiUvWoGia9JLiCBCe7DUotbkw+oCdi7LY3FyBRC9vjEB9vxdJUn+WMWmT3OuBtcfOZyjdaXvtgf4nhJ+wsxat52y9ESU6yJlWGkg==',key_name='tempest-TestGettingAddress-1056490066',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:14:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-5uv39x2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:14:16Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7293bf39-223f-4668-bd0f-c65476fac3e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.856 2 DEBUG nova.network.os_vif_util [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.857 2 DEBUG nova.network.os_vif_util [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.857 2 DEBUG os_vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.859 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f4bc2ea-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:25 np0005465604 nova_compute[260603]: 2025-10-02 09:15:25.863 2 INFO os_vif [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:66:56,bridge_name='br-int',has_traffic_filtering=True,id=6f4bc2ea-2d5e-4fbb-95ce-ada64748d460,network=Network(f302b50b-078a-40f3-87d8-1172d81fe604),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f4bc2ea-2d')#033[00m
Oct  2 05:15:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-aa7e4bc0c2d4824c03116f4f6f6e684d6814bec0ec595371262415f74733fdb6-merged.mount: Deactivated successfully.
Oct  2 05:15:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63-userdata-shm.mount: Deactivated successfully.
Oct  2 05:15:26 np0005465604 podman[427100]: 2025-10-02 09:15:26.015000085 +0000 UTC m=+0.281001735 container cleanup f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:15:26 np0005465604 systemd[1]: libpod-conmon-f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63.scope: Deactivated successfully.
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.161 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 52 KiB/s wr, 37 op/s
Oct  2 05:15:26 np0005465604 podman[427165]: 2025-10-02 09:15:26.276012945 +0000 UTC m=+0.233902039 container remove f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.286 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[633d6943-2fc2-40bf-8063-bff87d22b922]: (4, ('Thu Oct  2 09:15:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6 (f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63)\nf3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63\nThu Oct  2 09:15:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6 (f3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63)\nf3bcb4f2c405f3143a314af698779cf999836562af63b30d1e967b45d780ea63\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.289 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac4f22e-7191-4db0-96a0-696b42abc5dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.290 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6339cab-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:26 np0005465604 nova_compute[260603]: 2025-10-02 09:15:26.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:26 np0005465604 kernel: tapc6339cab-f0: left promiscuous mode
Oct  2 05:15:26 np0005465604 nova_compute[260603]: 2025-10-02 09:15:26.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.303 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e87e9baf-dccd-4743-b96a-808fe7a83dff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:26 np0005465604 nova_compute[260603]: 2025-10-02 09:15:26.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.335 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[3f26f16f-b4f6-4f19-97dd-66d9e0fec1ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.337 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d42496e-b4cb-42dc-8bce-6eeeb50283ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.358 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5131d128-7837-4569-ab39-55bb0a0a60a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733284, 'reachable_time': 34526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427180, 'error': None, 'target': 'ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.362 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6339cab-fbb4-4887-8953-252cca735cc6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.362 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[ca508ad9-fdc8-4a71-ba6e-d7a1ae522f18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.363 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 in datapath f302b50b-078a-40f3-87d8-1172d81fe604 unbound from our chassis#033[00m
Oct  2 05:15:26 np0005465604 systemd[1]: run-netns-ovnmeta\x2dc6339cab\x2dfbb4\x2d4887\x2d8953\x2d252cca735cc6.mount: Deactivated successfully.
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.366 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f302b50b-078a-40f3-87d8-1172d81fe604, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.367 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6f497c22-4679-4c4d-b38e-485fa79839ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:26.368 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604 namespace which is not needed anymore#033[00m
Oct  2 05:15:26 np0005465604 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [NOTICE]   (425413) : haproxy version is 2.8.14-c23fe91
Oct  2 05:15:26 np0005465604 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [NOTICE]   (425413) : path to executable is /usr/sbin/haproxy
Oct  2 05:15:26 np0005465604 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [WARNING]  (425413) : Exiting Master process...
Oct  2 05:15:26 np0005465604 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [ALERT]    (425413) : Current worker (425415) exited with code 143 (Terminated)
Oct  2 05:15:26 np0005465604 neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604[425409]: [WARNING]  (425413) : All workers exited. Exiting... (0)
Oct  2 05:15:26 np0005465604 systemd[1]: libpod-6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64.scope: Deactivated successfully.
Oct  2 05:15:26 np0005465604 podman[427199]: 2025-10-02 09:15:26.586803413 +0000 UTC m=+0.105404876 container died 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:15:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64-userdata-shm.mount: Deactivated successfully.
Oct  2 05:15:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4deff7f3e4521b9208a36798e9ac94c4d2e8879828943b12e08aea6816e2d9bf-merged.mount: Deactivated successfully.
Oct  2 05:15:26 np0005465604 podman[427199]: 2025-10-02 09:15:26.765535337 +0000 UTC m=+0.284136720 container cleanup 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:15:26 np0005465604 systemd[1]: libpod-conmon-6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64.scope: Deactivated successfully.
Oct  2 05:15:26 np0005465604 nova_compute[260603]: 2025-10-02 09:15:26.794 2 DEBUG nova.network.neutron [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updated VIF entry in instance network info cache for port 17c0a9ac-d61a-433a-b3f3-154a8c467f5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:15:26 np0005465604 nova_compute[260603]: 2025-10-02 09:15:26.795 2 DEBUG nova.network.neutron [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "address": "fa:16:3e:65:2b:06", "network": {"id": "c6339cab-fbb4-4887-8953-252cca735cc6", "bridge": "br-int", "label": "tempest-network-smoke--426910230", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17c0a9ac-d6", "ovs_interfaceid": "17c0a9ac-d61a-433a-b3f3-154a8c467f5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:15:26 np0005465604 nova_compute[260603]: 2025-10-02 09:15:26.823 2 DEBUG oslo_concurrency.lockutils [req-b9c965a0-7319-431f-bf6e-d4501167d294 req-ad232009-3b4f-4d19-b437-ada45782c3aa 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7293bf39-223f-4668-bd0f-c65476fac3e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:15:26 np0005465604 nova_compute[260603]: 2025-10-02 09:15:26.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:27 np0005465604 podman[427230]: 2025-10-02 09:15:27.138092575 +0000 UTC m=+0.346269312 container remove 6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:15:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.144 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1617ebe4-95a2-433d-b40c-da9333affeb3]: (4, ('Thu Oct  2 09:15:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604 (6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64)\n6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64\nThu Oct  2 09:15:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604 (6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64)\n6f00d7a7b0ddcf03889142f246b0c6a1f1f784952138887ed0a2187410d68c64\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.145 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c46616b-fdca-4bc5-8d89-2e8073979b83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.146 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf302b50b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:15:27 np0005465604 kernel: tapf302b50b-00: left promiscuous mode
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.152 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8bd6da-1a28-484a-851b-95240340be4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.186 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[dce3d967-3830-4325-abd2-6f5d976e7574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.187 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0ef74d-f2a5-4411-9d37-dd02497b23ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.210 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[251ceaea-5882-4ab3-92d9-68cfb52951fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733388, 'reachable_time': 29619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427246, 'error': None, 'target': 'ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.214 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f302b50b-078a-40f3-87d8-1172d81fe604 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:15:27 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:27.214 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[f189f67f-aa49-4089-8712-b2106d4e9d6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:15:27 np0005465604 systemd[1]: run-netns-ovnmeta\x2df302b50b\x2d078a\x2d40f3\x2d87d8\x2d1172d81fe604.mount: Deactivated successfully.
Oct  2 05:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:15:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.967 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-unplugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.967 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.968 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.968 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.968 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No waiting events found dispatching network-vif-unplugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.968 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-unplugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.969 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.969 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.969 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.969 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.969 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No waiting events found dispatching network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.970 2 WARNING nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received unexpected event network-vif-plugged-17c0a9ac-d61a-433a-b3f3-154a8c467f5a for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.970 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-unplugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.970 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.970 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.970 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.971 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No waiting events found dispatching network-vif-unplugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.971 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-unplugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.971 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.971 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.971 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.972 2 DEBUG oslo_concurrency.lockutils [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.972 2 DEBUG nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] No waiting events found dispatching network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:15:27 np0005465604 nova_compute[260603]: 2025-10-02 09:15:27.972 2 WARNING nova.compute.manager [req-b4d6fc32-a9ff-4189-b790-4822d4a4fcb9 req-ece88aaf-9692-4f8b-9f6a-42c4d56b7f4e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received unexpected event network-vif-plugged-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:15:28
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'backups', 'vms', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.meta', 'volumes']
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 58 KiB/s wr, 57 op/s
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:15:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:15:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:15:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 44 op/s
Oct  2 05:15:30 np0005465604 nova_compute[260603]: 2025-10-02 09:15:30.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:31 np0005465604 podman[427249]: 2025-10-02 09:15:31.051936973 +0000 UTC m=+0.098125532 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:15:31 np0005465604 podman[427248]: 2025-10-02 09:15:31.078670562 +0000 UTC m=+0.128285047 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 05:15:31 np0005465604 nova_compute[260603]: 2025-10-02 09:15:31.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 44 op/s
Oct  2 05:15:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:15:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 19 KiB/s wr, 54 op/s
Oct  2 05:15:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:34.852 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:34.853 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:15:34.853 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:35 np0005465604 nova_compute[260603]: 2025-10-02 09:15:35.088 2 INFO nova.virt.libvirt.driver [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Deleting instance files /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4_del#033[00m
Oct  2 05:15:35 np0005465604 nova_compute[260603]: 2025-10-02 09:15:35.089 2 INFO nova.virt.libvirt.driver [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Deletion of /var/lib/nova/instances/7293bf39-223f-4668-bd0f-c65476fac3e4_del complete#033[00m
Oct  2 05:15:35 np0005465604 nova_compute[260603]: 2025-10-02 09:15:35.201 2 INFO nova.compute.manager [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Took 9.68 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:15:35 np0005465604 nova_compute[260603]: 2025-10-02 09:15:35.201 2 DEBUG oslo.service.loopingcall [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:15:35 np0005465604 nova_compute[260603]: 2025-10-02 09:15:35.202 2 DEBUG nova.compute.manager [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:15:35 np0005465604 nova_compute[260603]: 2025-10-02 09:15:35.202 2 DEBUG nova.network.neutron [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:15:35 np0005465604 nova_compute[260603]: 2025-10-02 09:15:35.845 2 DEBUG nova.compute.manager [req-6acfe868-b5c8-4e38-865c-67440d378dc6 req-65e617de-8704-4953-a9ec-249effbdcd45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-deleted-17c0a9ac-d61a-433a-b3f3-154a8c467f5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:35 np0005465604 nova_compute[260603]: 2025-10-02 09:15:35.845 2 INFO nova.compute.manager [req-6acfe868-b5c8-4e38-865c-67440d378dc6 req-65e617de-8704-4953-a9ec-249effbdcd45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Neutron deleted interface 17c0a9ac-d61a-433a-b3f3-154a8c467f5a; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:15:35 np0005465604 nova_compute[260603]: 2025-10-02 09:15:35.845 2 DEBUG nova.network.neutron [req-6acfe868-b5c8-4e38-865c-67440d378dc6 req-65e617de-8704-4953-a9ec-249effbdcd45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [{"id": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "address": "fa:16:3e:d2:66:56", "network": {"id": "f302b50b-078a-40f3-87d8-1172d81fe604", "bridge": "br-int", "label": "tempest-network-smoke--879498578", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed2:6656", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f4bc2ea-2d", "ovs_interfaceid": "6f4bc2ea-2d5e-4fbb-95ce-ada64748d460", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:15:35 np0005465604 nova_compute[260603]: 2025-10-02 09:15:35.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:35 np0005465604 nova_compute[260603]: 2025-10-02 09:15:35.890 2 DEBUG nova.compute.manager [req-6acfe868-b5c8-4e38-865c-67440d378dc6 req-65e617de-8704-4953-a9ec-249effbdcd45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Detach interface failed, port_id=17c0a9ac-d61a-433a-b3f3-154a8c467f5a, reason: Instance 7293bf39-223f-4668-bd0f-c65476fac3e4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:15:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.6 KiB/s wr, 29 op/s
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.293 2 DEBUG nova.network.neutron [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.327 2 INFO nova.compute.manager [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Took 1.12 seconds to deallocate network for instance.#033[00m
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.387 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.388 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.462 2 DEBUG oslo_concurrency.processutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:15:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:15:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512102911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.894 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396521.8930523, 6acf9ec4-afe0-4ef6-b857-246ad87fe800 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.895 2 INFO nova.compute.manager [-] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.914 2 DEBUG oslo_concurrency.processutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.922 2 DEBUG nova.compute.provider_tree [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.928 2 DEBUG nova.compute.manager [None req-ee945fdf-ff84-4e2c-8b26-0eff7b4764fc - - - - - -] [instance: 6acf9ec4-afe0-4ef6-b857-246ad87fe800] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.945 2 DEBUG nova.scheduler.client.report [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:36 np0005465604 nova_compute[260603]: 2025-10-02 09:15:36.974 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:37 np0005465604 nova_compute[260603]: 2025-10-02 09:15:37.004 2 INFO nova.scheduler.client.report [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 7293bf39-223f-4668-bd0f-c65476fac3e4#033[00m
Oct  2 05:15:37 np0005465604 nova_compute[260603]: 2025-10-02 09:15:37.072 2 DEBUG oslo_concurrency.lockutils [None req-9dfef624-1db4-405d-a7b1-b44fbbd4b84c b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7293bf39-223f-4668-bd0f-c65476fac3e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:15:37 np0005465604 nova_compute[260603]: 2025-10-02 09:15:37.942 2 DEBUG nova.compute.manager [req-28718e5d-ff1d-4489-8100-0aea95f02d7a req-a3f2728b-75cc-422b-b71f-61e2d0b72e70 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Received event network-vif-deleted-6f4bc2ea-2d5e-4fbb-95ce-ada64748d460 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:15:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 6.1 KiB/s wr, 32 op/s
Oct  2 05:15:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:15:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:15:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 597 B/s wr, 13 op/s
Oct  2 05:15:40 np0005465604 nova_compute[260603]: 2025-10-02 09:15:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:15:40 np0005465604 nova_compute[260603]: 2025-10-02 09:15:40.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:15:40 np0005465604 nova_compute[260603]: 2025-10-02 09:15:40.779 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396525.77701, 7293bf39-223f-4668-bd0f-c65476fac3e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:15:40 np0005465604 nova_compute[260603]: 2025-10-02 09:15:40.779 2 INFO nova.compute.manager [-] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:15:40 np0005465604 nova_compute[260603]: 2025-10-02 09:15:40.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:40 np0005465604 nova_compute[260603]: 2025-10-02 09:15:40.898 2 DEBUG nova.compute.manager [None req-380388eb-5634-4d06-9cf4-f48c87821a61 - - - - - -] [instance: 7293bf39-223f-4668-bd0f-c65476fac3e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:15:41 np0005465604 nova_compute[260603]: 2025-10-02 09:15:41.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 597 B/s wr, 13 op/s
Oct  2 05:15:42 np0005465604 nova_compute[260603]: 2025-10-02 09:15:42.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:42 np0005465604 nova_compute[260603]: 2025-10-02 09:15:42.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:15:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 597 B/s wr, 13 op/s
Oct  2 05:15:45 np0005465604 nova_compute[260603]: 2025-10-02 09:15:45.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 511 B/s wr, 3 op/s
Oct  2 05:15:46 np0005465604 nova_compute[260603]: 2025-10-02 09:15:46.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 511 B/s wr, 3 op/s
Oct  2 05:15:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:15:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:15:50 np0005465604 nova_compute[260603]: 2025-10-02 09:15:50.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:51 np0005465604 nova_compute[260603]: 2025-10-02 09:15:51.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:15:53 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:15:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:15:55 np0005465604 nova_compute[260603]: 2025-10-02 09:15:55.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:55 np0005465604 podman[427314]: 2025-10-02 09:15:55.99076411 +0000 UTC m=+0.054184565 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:15:56 np0005465604 podman[427313]: 2025-10-02 09:15:56.063620493 +0000 UTC m=+0.127388349 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:15:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:15:56 np0005465604 nova_compute[260603]: 2025-10-02 09:15:56.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:15:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:15:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:15:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:15:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:15:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:15:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:15:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:15:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:16:00 np0005465604 nova_compute[260603]: 2025-10-02 09:16:00.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:01 np0005465604 nova_compute[260603]: 2025-10-02 09:16:01.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:02 np0005465604 podman[427357]: 2025-10-02 09:16:02.000217396 +0000 UTC m=+0.064922828 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 05:16:02 np0005465604 podman[427358]: 2025-10-02 09:16:02.019669881 +0000 UTC m=+0.078952975 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:16:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:16:02 np0005465604 nova_compute[260603]: 2025-10-02 09:16:02.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:16:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:03.001 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:01:06 10.100.0.2 2001:db8::f816:3eff:fe96:106'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe96:106/64', 'neutron:device_id': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=25655298-5d81-4448-950c-289fb68606f0) old=Port_Binding(mac=['fa:16:3e:96:01:06 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:16:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:03.003 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 25655298-5d81-4448-950c-289fb68606f0 in datapath 32cae488-7671-4c05-a475-2c63f103261b updated#033[00m
Oct  2 05:16:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:03.005 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32cae488-7671-4c05-a475-2c63f103261b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:16:03 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:03.006 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2b49dda1-d28d-41f8-9f5f-68f6213e3c10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:16:05 np0005465604 nova_compute[260603]: 2025-10-02 09:16:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:16:05 np0005465604 nova_compute[260603]: 2025-10-02 09:16:05.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:16:05 np0005465604 nova_compute[260603]: 2025-10-02 09:16:05.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:16:05 np0005465604 nova_compute[260603]: 2025-10-02 09:16:05.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:16:06 np0005465604 nova_compute[260603]: 2025-10-02 09:16:06.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:07 np0005465604 nova_compute[260603]: 2025-10-02 09:16:07.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:16:07 np0005465604 nova_compute[260603]: 2025-10-02 09:16:07.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:16:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:07.610 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:01:06 10.100.0.2 2001:db8:0:1:f816:3eff:fe96:106 2001:db8::f816:3eff:fe96:106'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe96:106/64 2001:db8::f816:3eff:fe96:106/64', 'neutron:device_id': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=25655298-5d81-4448-950c-289fb68606f0) old=Port_Binding(mac=['fa:16:3e:96:01:06 10.100.0.2 2001:db8::f816:3eff:fe96:106'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe96:106/64', 'neutron:device_id': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:16:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:07.611 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 25655298-5d81-4448-950c-289fb68606f0 in datapath 32cae488-7671-4c05-a475-2c63f103261b updated#033[00m
Oct  2 05:16:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:07.612 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32cae488-7671-4c05-a475-2c63f103261b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:16:07 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:07.612 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[471f515f-09e6-4f9a-855b-64a4a0577b9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:16:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:09 np0005465604 nova_compute[260603]: 2025-10-02 09:16:09.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:16:09 np0005465604 nova_compute[260603]: 2025-10-02 09:16:09.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:16:09 np0005465604 nova_compute[260603]: 2025-10-02 09:16:09.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:16:09 np0005465604 nova_compute[260603]: 2025-10-02 09:16:09.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:16:09 np0005465604 nova_compute[260603]: 2025-10-02 09:16:09.549 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:16:09 np0005465604 nova_compute[260603]: 2025-10-02 09:16:09.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:16:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:16:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696011136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:16:09 np0005465604 nova_compute[260603]: 2025-10-02 09:16:09.990 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:16:10 np0005465604 nova_compute[260603]: 2025-10-02 09:16:10.191 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:16:10 np0005465604 nova_compute[260603]: 2025-10-02 09:16:10.193 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3625MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:16:10 np0005465604 nova_compute[260603]: 2025-10-02 09:16:10.194 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:16:10 np0005465604 nova_compute[260603]: 2025-10-02 09:16:10.194 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:16:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:16:10 np0005465604 nova_compute[260603]: 2025-10-02 09:16:10.389 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:16:10 np0005465604 nova_compute[260603]: 2025-10-02 09:16:10.390 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:16:10 np0005465604 nova_compute[260603]: 2025-10-02 09:16:10.446 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:16:10 np0005465604 nova_compute[260603]: 2025-10-02 09:16:10.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:16:10 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4063309839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:16:10 np0005465604 nova_compute[260603]: 2025-10-02 09:16:10.981 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:16:10 np0005465604 nova_compute[260603]: 2025-10-02 09:16:10.991 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:16:11 np0005465604 nova_compute[260603]: 2025-10-02 09:16:11.016 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:16:11 np0005465604 nova_compute[260603]: 2025-10-02 09:16:11.061 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:16:11 np0005465604 nova_compute[260603]: 2025-10-02 09:16:11.062 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:16:11 np0005465604 nova_compute[260603]: 2025-10-02 09:16:11.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:16:12 np0005465604 nova_compute[260603]: 2025-10-02 09:16:12.505 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:16:12 np0005465604 nova_compute[260603]: 2025-10-02 09:16:12.506 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:16:12 np0005465604 nova_compute[260603]: 2025-10-02 09:16:12.526 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:16:12 np0005465604 nova_compute[260603]: 2025-10-02 09:16:12.595 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:16:12 np0005465604 nova_compute[260603]: 2025-10-02 09:16:12.596 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:16:12 np0005465604 nova_compute[260603]: 2025-10-02 09:16:12.606 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:16:12 np0005465604 nova_compute[260603]: 2025-10-02 09:16:12.607 2 INFO nova.compute.claims [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:16:12 np0005465604 nova_compute[260603]: 2025-10-02 09:16:12.713 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:16:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:16:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1507463177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.220 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.226 2 DEBUG nova.compute.provider_tree [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.244 2 DEBUG nova.scheduler.client.report [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.267 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.268 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.327 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.327 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.349 2 INFO nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.368 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.464 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.465 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.465 2 INFO nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Creating image(s)#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.486 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.508 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.537 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.542 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.621 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.621 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.622 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.622 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.648 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:16:13 np0005465604 nova_compute[260603]: 2025-10-02 09:16:13.653 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:16:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:14 np0005465604 nova_compute[260603]: 2025-10-02 09:16:14.058 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:16:14 np0005465604 nova_compute[260603]: 2025-10-02 09:16:14.059 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:16:14 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct  2 05:16:14 np0005465604 nova_compute[260603]: 2025-10-02 09:16:14.081 2 DEBUG nova.policy [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:16:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:16:15 np0005465604 nova_compute[260603]: 2025-10-02 09:16:15.209 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Successfully created port: dad30664-f830-4b51-9cf5-8b92d95308bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:16:15 np0005465604 nova_compute[260603]: 2025-10-02 09:16:15.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:16:16 np0005465604 nova_compute[260603]: 2025-10-02 09:16:16.366 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Successfully updated port: dad30664-f830-4b51-9cf5-8b92d95308bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:16:16 np0005465604 nova_compute[260603]: 2025-10-02 09:16:16.386 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:16:16 np0005465604 nova_compute[260603]: 2025-10-02 09:16:16.386 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:16:16 np0005465604 nova_compute[260603]: 2025-10-02 09:16:16.387 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:16:16 np0005465604 nova_compute[260603]: 2025-10-02 09:16:16.452 2 DEBUG nova.compute.manager [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:16:16 np0005465604 nova_compute[260603]: 2025-10-02 09:16:16.453 2 DEBUG nova.compute.manager [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing instance network info cache due to event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:16:16 np0005465604 nova_compute[260603]: 2025-10-02 09:16:16.453 2 DEBUG oslo_concurrency.lockutils [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:16:16 np0005465604 nova_compute[260603]: 2025-10-02 09:16:16.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:17 np0005465604 nova_compute[260603]: 2025-10-02 09:16:17.041 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:16:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 43 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 94 KiB/s wr, 10 op/s
Oct  2 05:16:19 np0005465604 nova_compute[260603]: 2025-10-02 09:16:19.238 2 DEBUG nova.network.neutron [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:16:19 np0005465604 nova_compute[260603]: 2025-10-02 09:16:19.262 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:16:19 np0005465604 nova_compute[260603]: 2025-10-02 09:16:19.262 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Instance network_info: |[{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:16:19 np0005465604 nova_compute[260603]: 2025-10-02 09:16:19.263 2 DEBUG oslo_concurrency.lockutils [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:16:19 np0005465604 nova_compute[260603]: 2025-10-02 09:16:19.263 2 DEBUG nova.network.neutron [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:16:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 59 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 525 KiB/s wr, 14 op/s
Oct  2 05:16:20 np0005465604 nova_compute[260603]: 2025-10-02 09:16:20.364 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.712s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:16:20 np0005465604 nova_compute[260603]: 2025-10-02 09:16:20.442 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:16:20 np0005465604 nova_compute[260603]: 2025-10-02 09:16:20.793 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:16:20 np0005465604 nova_compute[260603]: 2025-10-02 09:16:20.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:21 np0005465604 nova_compute[260603]: 2025-10-02 09:16:21.908 2 DEBUG nova.network.neutron [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updated VIF entry in instance network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:16:21 np0005465604 nova_compute[260603]: 2025-10-02 09:16:21.909 2 DEBUG nova.network.neutron [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:16:21 np0005465604 nova_compute[260603]: 2025-10-02 09:16:21.933 2 DEBUG oslo_concurrency.lockutils [req-f375881c-aad9-4bc2-a80d-a77acc5b77fb req-a2734565-b8ca-42d8-afd9-5b849cebf510 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:16:21 np0005465604 nova_compute[260603]: 2025-10-02 09:16:21.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:16:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4019264692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:16:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:16:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4019264692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.173 2 DEBUG nova.objects.instance [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid bfb5f44c-0aeb-439f-9d64-934d5cb85c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.206 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.207 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Ensure instance console log exists: /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.207 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.208 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.208 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.210 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Start _get_guest_xml network_info=[{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.214 2 WARNING nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.219 2 DEBUG nova.virt.libvirt.host [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.220 2 DEBUG nova.virt.libvirt.host [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.223 2 DEBUG nova.virt.libvirt.host [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.223 2 DEBUG nova.virt.libvirt.host [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.224 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.224 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.224 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.225 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.225 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.225 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.225 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.226 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.226 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.226 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.226 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.227 2 DEBUG nova.virt.hardware [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.229 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:16:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 59 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 525 KiB/s wr, 14 op/s
Oct  2 05:16:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:16:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308999108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.760 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.781 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:16:22 np0005465604 nova_compute[260603]: 2025-10-02 09:16:22.785 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557702783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.208 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.212 2 DEBUG nova.virt.libvirt.vif [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1773019515',display_name='tempest-TestGettingAddress-server-1773019515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1773019515',id=147,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-9gbn0ocy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:16:13Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=bfb5f44c-0aeb-439f-9d64-934d5cb85c02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.213 2 DEBUG nova.network.os_vif_util [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.215 2 DEBUG nova.network.os_vif_util [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.218 2 DEBUG nova.objects.instance [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid bfb5f44c-0aeb-439f-9d64-934d5cb85c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.256 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  <uuid>bfb5f44c-0aeb-439f-9d64-934d5cb85c02</uuid>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  <name>instance-00000093</name>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-1773019515</nova:name>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:16:22</nova:creationTime>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <nova:port uuid="dad30664-f830-4b51-9cf5-8b92d95308bb">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe28:75e6" ipVersion="6"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe28:75e6" ipVersion="6"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <entry name="serial">bfb5f44c-0aeb-439f-9d64-934d5cb85c02</entry>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <entry name="uuid">bfb5f44c-0aeb-439f-9d64-934d5cb85c02</entry>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:28:75:e6"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <target dev="tapdad30664-f8"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/console.log" append="off"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:16:23 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:16:23 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:16:23 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:16:23 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.256 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Preparing to wait for external event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.256 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.257 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.257 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.259 2 DEBUG nova.virt.libvirt.vif [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1773019515',display_name='tempest-TestGettingAddress-server-1773019515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1773019515',id=147,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-9gbn0ocy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:16:13Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=bfb5f44c-0aeb-439f-9d64-934d5cb85c02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.261 2 DEBUG nova.network.os_vif_util [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.262 2 DEBUG nova.network.os_vif_util [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.263 2 DEBUG os_vif [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.265 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdad30664-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdad30664-f8, col_values=(('external_ids', {'iface-id': 'dad30664-f830-4b51-9cf5-8b92d95308bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:75:e6', 'vm-uuid': 'bfb5f44c-0aeb-439f-9d64-934d5cb85c02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:23 np0005465604 NetworkManager[45129]: <info>  [1759396583.2768] manager: (tapdad30664-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.285 2 INFO os_vif [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8')#033[00m
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:16:23 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 62763e4a-205a-4e49-9267-bad8ea4d990a does not exist
Oct  2 05:16:23 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 692aa8ad-7985-46ab-9854-b797b2efcfab does not exist
Oct  2 05:16:23 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 17c42f98-1d2a-4ee8-9b42-81b1cb91b8af does not exist
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.659 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.660 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.660 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:28:75:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.660 2 INFO nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Using config drive#033[00m
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:16:23 np0005465604 nova_compute[260603]: 2025-10-02 09:16:23.745 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:16:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:16:24 np0005465604 nova_compute[260603]: 2025-10-02 09:16:24.205 2 INFO nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Creating config drive at /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config#033[00m
Oct  2 05:16:24 np0005465604 nova_compute[260603]: 2025-10-02 09:16:24.215 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp29y3cr41 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:16:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 80 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.1 MiB/s wr, 16 op/s
Oct  2 05:16:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:24 np0005465604 nova_compute[260603]: 2025-10-02 09:16:24.361 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp29y3cr41" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:16:24 np0005465604 nova_compute[260603]: 2025-10-02 09:16:24.385 2 DEBUG nova.storage.rbd_utils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:16:24 np0005465604 nova_compute[260603]: 2025-10-02 09:16:24.389 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:16:24 np0005465604 podman[427988]: 2025-10-02 09:16:24.328484714 +0000 UTC m=+0.026048050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:16:24 np0005465604 podman[427988]: 2025-10-02 09:16:24.785023841 +0000 UTC m=+0.482587187 container create 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 05:16:25 np0005465604 systemd[1]: Started libpod-conmon-6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122.scope.
Oct  2 05:16:25 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:16:25 np0005465604 podman[427988]: 2025-10-02 09:16:25.581444451 +0000 UTC m=+1.279007837 container init 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:16:25 np0005465604 podman[427988]: 2025-10-02 09:16:25.595810607 +0000 UTC m=+1.293373953 container start 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:16:25 np0005465604 youthful_joliot[428039]: 167 167
Oct  2 05:16:25 np0005465604 systemd[1]: libpod-6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122.scope: Deactivated successfully.
Oct  2 05:16:25 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:16:25 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:16:25 np0005465604 podman[427988]: 2025-10-02 09:16:25.791825998 +0000 UTC m=+1.489389334 container attach 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:16:25 np0005465604 podman[427988]: 2025-10-02 09:16:25.792369575 +0000 UTC m=+1.489932881 container died 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 05:16:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 88 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:16:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay-20b9d78d8c9a603e7d4ebe968df7a964affb8d6661e734b64405a29cf20e4b7c-merged.mount: Deactivated successfully.
Oct  2 05:16:26 np0005465604 nova_compute[260603]: 2025-10-02 09:16:26.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:27 np0005465604 podman[428044]: 2025-10-02 09:16:27.476111538 +0000 UTC m=+1.848271537 container remove 6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:16:27 np0005465604 systemd[1]: libpod-conmon-6a7091faba769e9ec03db0663cf89bdecdadf6b134d5d283bf0eb3deac2a2122.scope: Deactivated successfully.
Oct  2 05:16:27 np0005465604 podman[428061]: 2025-10-02 09:16:27.518554037 +0000 UTC m=+1.189076543 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 05:16:27 np0005465604 podman[428060]: 2025-10-02 09:16:27.569543072 +0000 UTC m=+1.236319270 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct  2 05:16:27 np0005465604 podman[428113]: 2025-10-02 09:16:27.632519169 +0000 UTC m=+0.026902147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:16:27 np0005465604 podman[428113]: 2025-10-02 09:16:27.823095812 +0000 UTC m=+0.217478790 container create d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 05:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:16:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:16:27 np0005465604 systemd[1]: Started libpod-conmon-d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66.scope.
Oct  2 05:16:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:16:28 np0005465604 nova_compute[260603]: 2025-10-02 09:16:28.003 2 DEBUG oslo_concurrency.processutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config bfb5f44c-0aeb-439f-9d64-934d5cb85c02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:16:28 np0005465604 nova_compute[260603]: 2025-10-02 09:16:28.003 2 INFO nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Deleting local config drive /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02/disk.config because it was imported into RBD.#033[00m
Oct  2 05:16:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:28 np0005465604 kernel: tapdad30664-f8: entered promiscuous mode
Oct  2 05:16:28 np0005465604 NetworkManager[45129]: <info>  [1759396588.0850] manager: (tapdad30664-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/669)
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:16:28
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'vms']
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:16:28 np0005465604 podman[428113]: 2025-10-02 09:16:28.1268255 +0000 UTC m=+0.521208498 container init d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 05:16:28 np0005465604 ovn_controller[152344]: 2025-10-02T09:16:28Z|01648|binding|INFO|Claiming lport dad30664-f830-4b51-9cf5-8b92d95308bb for this chassis.
Oct  2 05:16:28 np0005465604 ovn_controller[152344]: 2025-10-02T09:16:28Z|01649|binding|INFO|dad30664-f830-4b51-9cf5-8b92d95308bb: Claiming fa:16:3e:28:75:e6 10.100.0.11 2001:db8:0:1:f816:3eff:fe28:75e6 2001:db8::f816:3eff:fe28:75e6
Oct  2 05:16:28 np0005465604 nova_compute[260603]: 2025-10-02 09:16:28.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:28 np0005465604 nova_compute[260603]: 2025-10-02 09:16:28.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:28 np0005465604 podman[428113]: 2025-10-02 09:16:28.137909005 +0000 UTC m=+0.532291973 container start d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.148 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:75:e6 10.100.0.11 2001:db8:0:1:f816:3eff:fe28:75e6 2001:db8::f816:3eff:fe28:75e6'], port_security=['fa:16:3e:28:75:e6 10.100.0.11 2001:db8:0:1:f816:3eff:fe28:75e6 2001:db8::f816:3eff:fe28:75e6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8:0:1:f816:3eff:fe28:75e6/64 2001:db8::f816:3eff:fe28:75e6/64', 'neutron:device_id': 'bfb5f44c-0aeb-439f-9d64-934d5cb85c02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6142c2da-c0c8-4842-a55d-76581298b5e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dad30664-f830-4b51-9cf5-8b92d95308bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.149 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dad30664-f830-4b51-9cf5-8b92d95308bb in datapath 32cae488-7671-4c05-a475-2c63f103261b bound to our chassis#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.150 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32cae488-7671-4c05-a475-2c63f103261b#033[00m
Oct  2 05:16:28 np0005465604 systemd-udevd[428147]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:16:28 np0005465604 podman[428113]: 2025-10-02 09:16:28.165468301 +0000 UTC m=+0.559851329 container attach d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 05:16:28 np0005465604 systemd-machined[214636]: New machine qemu-181-instance-00000093.
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.164 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad322ef-e843-45b0-b493-80ea329321ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.166 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap32cae488-71 in ovnmeta-32cae488-7671-4c05-a475-2c63f103261b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.168 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap32cae488-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:16:28 np0005465604 NetworkManager[45129]: <info>  [1759396588.1686] device (tapdad30664-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.168 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c6838a85-606e-4203-900c-e222f64223c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 NetworkManager[45129]: <info>  [1759396588.1696] device (tapdad30664-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.170 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c46533af-daa2-4ca9-b506-ebc0c97e0b28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 systemd[1]: Started Virtual Machine qemu-181-instance-00000093.
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.185 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[138b0261-b840-43c4-a7d8-58bd51fed3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.199 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aee5be3c-4aec-42df-aa31-7717c4c92f2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 ovn_controller[152344]: 2025-10-02T09:16:28Z|01650|binding|INFO|Setting lport dad30664-f830-4b51-9cf5-8b92d95308bb ovn-installed in OVS
Oct  2 05:16:28 np0005465604 ovn_controller[152344]: 2025-10-02T09:16:28Z|01651|binding|INFO|Setting lport dad30664-f830-4b51-9cf5-8b92d95308bb up in Southbound
Oct  2 05:16:28 np0005465604 nova_compute[260603]: 2025-10-02 09:16:28.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.232 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[36e12c96-a45a-485a-b90e-d0dc59f4c4ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 systemd-udevd[428150]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:16:28 np0005465604 NetworkManager[45129]: <info>  [1759396588.2417] manager: (tap32cae488-70): new Veth device (/org/freedesktop/NetworkManager/Devices/670)
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.244 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6cc491-23a5-4767-a61a-5b7b0ae47278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 nova_compute[260603]: 2025-10-02 09:16:28.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.286 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[9f48eecf-4f1d-4625-9826-bda13b40c3f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.289 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[eae3079c-518e-4709-a74a-7c0aac318cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 NetworkManager[45129]: <info>  [1759396588.3159] device (tap32cae488-70): carrier: link connected
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.322 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[22dbb3e7-5aef-4600-8999-ddf2da07fdb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.338 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab69994-1d6e-4a1c-8140-f22f30990ae5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32cae488-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746691, 'reachable_time': 42035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 428180, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.351 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1955f8c9-33ec-4a71-b4dc-b5015ba75590]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:106'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746691, 'tstamp': 746691}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428181, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.366 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[46c662a4-a66f-4c30-be03-e0a9c5c5ae84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32cae488-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746691, 'reachable_time': 42035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 428182, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.396 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d3547526-ba1b-400f-b6d1-dfe17d5e321d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.446 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[cd6b312f-2beb-4079-b208-8ff698593035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.447 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32cae488-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.448 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.448 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32cae488-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:16:28 np0005465604 nova_compute[260603]: 2025-10-02 09:16:28.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:28 np0005465604 NetworkManager[45129]: <info>  [1759396588.4506] manager: (tap32cae488-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/671)
Oct  2 05:16:28 np0005465604 kernel: tap32cae488-70: entered promiscuous mode
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.453 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32cae488-70, col_values=(('external_ids', {'iface-id': '25655298-5d81-4448-950c-289fb68606f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:16:28 np0005465604 nova_compute[260603]: 2025-10-02 09:16:28.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:28 np0005465604 ovn_controller[152344]: 2025-10-02T09:16:28Z|01652|binding|INFO|Releasing lport 25655298-5d81-4448-950c-289fb68606f0 from this chassis (sb_readonly=0)
Oct  2 05:16:28 np0005465604 nova_compute[260603]: 2025-10-02 09:16:28.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.455 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/32cae488-7671-4c05-a475-2c63f103261b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/32cae488-7671-4c05-a475-2c63f103261b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.456 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[47c9d439-38a6-4482-9e05-72e4862e0bae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.457 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-32cae488-7671-4c05-a475-2c63f103261b
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/32cae488-7671-4c05-a475-2c63f103261b.pid.haproxy
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 32cae488-7671-4c05-a475-2c63f103261b
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:16:28 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:28.459 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'env', 'PROCESS_TAG=haproxy-32cae488-7671-4c05-a475-2c63f103261b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/32cae488-7671-4c05-a475-2c63f103261b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:16:28 np0005465604 nova_compute[260603]: 2025-10-02 09:16:28.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:16:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:16:28 np0005465604 podman[428214]: 2025-10-02 09:16:28.775228639 +0000 UTC m=+0.018781504 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:16:29 np0005465604 nova_compute[260603]: 2025-10-02 09:16:29.208 2 DEBUG nova.compute.manager [req-bb91e70f-69a8-4ece-a069-21f95d624941 req-04524914-b040-478b-8773-bc659acedcb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:16:29 np0005465604 nova_compute[260603]: 2025-10-02 09:16:29.210 2 DEBUG oslo_concurrency.lockutils [req-bb91e70f-69a8-4ece-a069-21f95d624941 req-04524914-b040-478b-8773-bc659acedcb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:16:29 np0005465604 nova_compute[260603]: 2025-10-02 09:16:29.210 2 DEBUG oslo_concurrency.lockutils [req-bb91e70f-69a8-4ece-a069-21f95d624941 req-04524914-b040-478b-8773-bc659acedcb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:16:29 np0005465604 nova_compute[260603]: 2025-10-02 09:16:29.210 2 DEBUG oslo_concurrency.lockutils [req-bb91e70f-69a8-4ece-a069-21f95d624941 req-04524914-b040-478b-8773-bc659acedcb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:16:29 np0005465604 nova_compute[260603]: 2025-10-02 09:16:29.211 2 DEBUG nova.compute.manager [req-bb91e70f-69a8-4ece-a069-21f95d624941 req-04524914-b040-478b-8773-bc659acedcb1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Processing event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:16:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:29 np0005465604 festive_yonath[428129]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:16:29 np0005465604 festive_yonath[428129]: --> relative data size: 1.0
Oct  2 05:16:29 np0005465604 festive_yonath[428129]: --> All data devices are unavailable
Oct  2 05:16:29 np0005465604 systemd[1]: libpod-d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66.scope: Deactivated successfully.
Oct  2 05:16:29 np0005465604 systemd[1]: libpod-d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66.scope: Consumed 1.016s CPU time.
Oct  2 05:16:29 np0005465604 podman[428214]: 2025-10-02 09:16:29.793904426 +0000 UTC m=+1.037457321 container create bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:16:30 np0005465604 systemd[1]: Started libpod-conmon-bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104.scope.
Oct  2 05:16:30 np0005465604 podman[428285]: 2025-10-02 09:16:30.054521575 +0000 UTC m=+0.366216622 container died d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 05:16:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:16:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140339e959af83ff118cb8023b67732905454915fb70723adb2eabd214c6428e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.7 MiB/s wr, 23 op/s
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.281 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.283 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396590.2810063, bfb5f44c-0aeb-439f-9d64-934d5cb85c02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.283 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] VM Started (Lifecycle Event)#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.288 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.292 2 INFO nova.virt.libvirt.driver [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Instance spawned successfully.#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.293 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.303 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.305 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.316 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.316 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.317 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.317 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.318 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.318 2 DEBUG nova.virt.libvirt.driver [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.345 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.345 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396590.2820663, bfb5f44c-0aeb-439f-9d64-934d5cb85c02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.345 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.376 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.380 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396590.2855577, bfb5f44c-0aeb-439f-9d64-934d5cb85c02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.381 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.388 2 INFO nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Took 16.92 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.389 2 DEBUG nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.399 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.403 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.426 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.454 2 INFO nova.compute.manager [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Took 17.88 seconds to build instance.#033[00m
Oct  2 05:16:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1ce2a4cb15fa798089d2a70e5f089ac8d879228aad0f202571dd78b2fe684409-merged.mount: Deactivated successfully.
Oct  2 05:16:30 np0005465604 nova_compute[260603]: 2025-10-02 09:16:30.529 2 DEBUG oslo_concurrency.lockutils [None req-327dba1b-657e-47e4-9a72-394d39b7e52d b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:16:31 np0005465604 podman[428214]: 2025-10-02 09:16:31.110432897 +0000 UTC m=+2.353985762 container init bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:16:31 np0005465604 podman[428214]: 2025-10-02 09:16:31.117820558 +0000 UTC m=+2.361373403 container start bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 05:16:31 np0005465604 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [NOTICE]   (428314) : New worker (428316) forked
Oct  2 05:16:31 np0005465604 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [NOTICE]   (428314) : Loading success.
Oct  2 05:16:31 np0005465604 nova_compute[260603]: 2025-10-02 09:16:31.309 2 DEBUG nova.compute.manager [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:16:31 np0005465604 nova_compute[260603]: 2025-10-02 09:16:31.310 2 DEBUG oslo_concurrency.lockutils [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:16:31 np0005465604 nova_compute[260603]: 2025-10-02 09:16:31.310 2 DEBUG oslo_concurrency.lockutils [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:16:31 np0005465604 nova_compute[260603]: 2025-10-02 09:16:31.310 2 DEBUG oslo_concurrency.lockutils [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:16:31 np0005465604 nova_compute[260603]: 2025-10-02 09:16:31.310 2 DEBUG nova.compute.manager [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] No waiting events found dispatching network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:16:31 np0005465604 nova_compute[260603]: 2025-10-02 09:16:31.310 2 WARNING nova.compute.manager [req-c6b60424-d032-4ff9-9894-3f05c1e58980 req-164842ab-65c6-4718-be88-365c21d45c35 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received unexpected event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb for instance with vm_state active and task_state None.#033[00m
Oct  2 05:16:31 np0005465604 podman[428285]: 2025-10-02 09:16:31.861015513 +0000 UTC m=+2.172710580 container remove d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_yonath, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:16:31 np0005465604 systemd[1]: libpod-conmon-d244ba53818530b1ae64e8f5ae3d67545363849acaf1f6c3f47653f5ff66ce66.scope: Deactivated successfully.
Oct  2 05:16:31 np0005465604 nova_compute[260603]: 2025-10-02 09:16:31.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:32 np0005465604 podman[428375]: 2025-10-02 09:16:32.138594089 +0000 UTC m=+0.065693383 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 05:16:32 np0005465604 podman[428374]: 2025-10-02 09:16:32.14699867 +0000 UTC m=+0.073402122 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible)
Oct  2 05:16:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 19 op/s
Oct  2 05:16:32 np0005465604 podman[428505]: 2025-10-02 09:16:32.524985156 +0000 UTC m=+0.026487414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:16:32 np0005465604 podman[428505]: 2025-10-02 09:16:32.873540728 +0000 UTC m=+0.375042946 container create 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 05:16:33 np0005465604 nova_compute[260603]: 2025-10-02 09:16:33.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:33 np0005465604 systemd[1]: Started libpod-conmon-417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734.scope.
Oct  2 05:16:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:16:33 np0005465604 podman[428505]: 2025-10-02 09:16:33.839021371 +0000 UTC m=+1.340523629 container init 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 05:16:33 np0005465604 podman[428505]: 2025-10-02 09:16:33.845903715 +0000 UTC m=+1.347405933 container start 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:16:33 np0005465604 festive_hertz[428521]: 167 167
Oct  2 05:16:33 np0005465604 systemd[1]: libpod-417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734.scope: Deactivated successfully.
Oct  2 05:16:33 np0005465604 conmon[428521]: conmon 417d22c72f17ab42df0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734.scope/container/memory.events
Oct  2 05:16:34 np0005465604 podman[428505]: 2025-10-02 09:16:34.13244979 +0000 UTC m=+1.633951998 container attach 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:16:34 np0005465604 podman[428505]: 2025-10-02 09:16:34.133491252 +0000 UTC m=+1.634993460 container died 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 05:16:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 86 op/s
Oct  2 05:16:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e579e2e01c973b07b96fae702fe8be4e79d6dcfde181f5e56b6c305582d6cc78-merged.mount: Deactivated successfully.
Oct  2 05:16:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:34.853 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:16:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:34.854 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:16:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:16:34.855 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:16:35 np0005465604 podman[428505]: 2025-10-02 09:16:35.203995128 +0000 UTC m=+2.705497366 container remove 417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hertz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:16:35 np0005465604 systemd[1]: libpod-conmon-417d22c72f17ab42df0e96a927f001ba45fb8e57c6741cbb477b84f2549da734.scope: Deactivated successfully.
Oct  2 05:16:35 np0005465604 podman[428546]: 2025-10-02 09:16:35.388260405 +0000 UTC m=+0.022246333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:16:35 np0005465604 podman[428546]: 2025-10-02 09:16:35.588498897 +0000 UTC m=+0.222484835 container create dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 05:16:35 np0005465604 systemd[1]: Started libpod-conmon-dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824.scope.
Oct  2 05:16:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:16:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e58500a605c2efc53481ecba0db0d1155d09f777b96d9ef23750eea1ef957/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e58500a605c2efc53481ecba0db0d1155d09f777b96d9ef23750eea1ef957/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e58500a605c2efc53481ecba0db0d1155d09f777b96d9ef23750eea1ef957/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e58500a605c2efc53481ecba0db0d1155d09f777b96d9ef23750eea1ef957/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:36 np0005465604 podman[428546]: 2025-10-02 09:16:35.999587322 +0000 UTC m=+0.633573280 container init dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:16:36 np0005465604 podman[428546]: 2025-10-02 09:16:36.006938581 +0000 UTC m=+0.640924479 container start dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:16:36 np0005465604 podman[428546]: 2025-10-02 09:16:36.111928723 +0000 UTC m=+0.745914621 container attach dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:16:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 677 KiB/s wr, 83 op/s
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]: {
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:    "0": [
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:        {
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "devices": [
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "/dev/loop3"
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            ],
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_name": "ceph_lv0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_size": "21470642176",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "name": "ceph_lv0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "tags": {
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.cluster_name": "ceph",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.crush_device_class": "",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.encrypted": "0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.osd_id": "0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.type": "block",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.vdo": "0"
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            },
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "type": "block",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "vg_name": "ceph_vg0"
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:        }
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:    ],
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:    "1": [
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:        {
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "devices": [
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "/dev/loop4"
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            ],
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_name": "ceph_lv1",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_size": "21470642176",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "name": "ceph_lv1",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "tags": {
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.cluster_name": "ceph",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.crush_device_class": "",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.encrypted": "0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.osd_id": "1",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.type": "block",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.vdo": "0"
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            },
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "type": "block",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "vg_name": "ceph_vg1"
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:        }
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:    ],
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:    "2": [
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:        {
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "devices": [
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "/dev/loop5"
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            ],
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_name": "ceph_lv2",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_size": "21470642176",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "name": "ceph_lv2",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "tags": {
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.cluster_name": "ceph",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.crush_device_class": "",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.encrypted": "0",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.osd_id": "2",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.type": "block",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:                "ceph.vdo": "0"
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            },
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "type": "block",
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:            "vg_name": "ceph_vg2"
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:        }
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]:    ]
Oct  2 05:16:36 np0005465604 youthful_joliot[428564]: }
Oct  2 05:16:36 np0005465604 systemd[1]: libpod-dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824.scope: Deactivated successfully.
Oct  2 05:16:36 np0005465604 podman[428546]: 2025-10-02 09:16:36.738209805 +0000 UTC m=+1.372195703 container died dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:16:36 np0005465604 nova_compute[260603]: 2025-10-02 09:16:36.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:37 np0005465604 ovn_controller[152344]: 2025-10-02T09:16:37Z|01653|binding|INFO|Releasing lport 25655298-5d81-4448-950c-289fb68606f0 from this chassis (sb_readonly=0)
Oct  2 05:16:37 np0005465604 nova_compute[260603]: 2025-10-02 09:16:37.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:37 np0005465604 NetworkManager[45129]: <info>  [1759396597.0865] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/672)
Oct  2 05:16:37 np0005465604 NetworkManager[45129]: <info>  [1759396597.0881] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/673)
Oct  2 05:16:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay-714e58500a605c2efc53481ecba0db0d1155d09f777b96d9ef23750eea1ef957-merged.mount: Deactivated successfully.
Oct  2 05:16:37 np0005465604 ovn_controller[152344]: 2025-10-02T09:16:37Z|01654|binding|INFO|Releasing lport 25655298-5d81-4448-950c-289fb68606f0 from this chassis (sb_readonly=0)
Oct  2 05:16:37 np0005465604 nova_compute[260603]: 2025-10-02 09:16:37.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:37 np0005465604 nova_compute[260603]: 2025-10-02 09:16:37.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:37 np0005465604 podman[428546]: 2025-10-02 09:16:37.392944082 +0000 UTC m=+2.026930010 container remove dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_joliot, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:16:37 np0005465604 systemd[1]: libpod-conmon-dff570e22e5d63e780a5524a37d58329449a399a1137cbc3aa235c509daf6824.scope: Deactivated successfully.
Oct  2 05:16:37 np0005465604 nova_compute[260603]: 2025-10-02 09:16:37.651 2 DEBUG nova.compute.manager [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:16:37 np0005465604 nova_compute[260603]: 2025-10-02 09:16:37.652 2 DEBUG nova.compute.manager [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing instance network info cache due to event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:16:37 np0005465604 nova_compute[260603]: 2025-10-02 09:16:37.652 2 DEBUG oslo_concurrency.lockutils [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:16:37 np0005465604 nova_compute[260603]: 2025-10-02 09:16:37.653 2 DEBUG oslo_concurrency.lockutils [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:16:37 np0005465604 nova_compute[260603]: 2025-10-02 09:16:37.653 2 DEBUG nova.network.neutron [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:16:38 np0005465604 podman[428728]: 2025-10-02 09:16:38.091794289 +0000 UTC m=+0.027314899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:16:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:16:38 np0005465604 nova_compute[260603]: 2025-10-02 09:16:38.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:38 np0005465604 podman[428728]: 2025-10-02 09:16:38.325329767 +0000 UTC m=+0.260850377 container create b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct  2 05:16:38 np0005465604 systemd[1]: Started libpod-conmon-b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a.scope.
Oct  2 05:16:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:16:38 np0005465604 podman[428728]: 2025-10-02 09:16:38.883412729 +0000 UTC m=+0.818933389 container init b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 05:16:38 np0005465604 podman[428728]: 2025-10-02 09:16:38.895253697 +0000 UTC m=+0.830774297 container start b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 05:16:38 np0005465604 systemd[1]: libpod-b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a.scope: Deactivated successfully.
Oct  2 05:16:38 np0005465604 frosty_elion[428744]: 167 167
Oct  2 05:16:38 np0005465604 conmon[428744]: conmon b04faf59cd6483e5b866 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a.scope/container/memory.events
Oct  2 05:16:39 np0005465604 podman[428728]: 2025-10-02 09:16:39.0462672 +0000 UTC m=+0.981787860 container attach b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 05:16:39 np0005465604 podman[428728]: 2025-10-02 09:16:39.046825087 +0000 UTC m=+0.982345687 container died b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:16:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:16:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-90f26b115dafc1ce7542d7c456a5ab580335078293874ca5dc06110a187d958a-merged.mount: Deactivated successfully.
Oct  2 05:16:40 np0005465604 podman[428728]: 2025-10-02 09:16:40.239736898 +0000 UTC m=+2.175257508 container remove b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 05:16:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Oct  2 05:16:40 np0005465604 systemd[1]: libpod-conmon-b04faf59cd6483e5b866a26cb90ef4e80c9ce6b25a1c1a258d917749ce78ae0a.scope: Deactivated successfully.
Oct  2 05:16:40 np0005465604 nova_compute[260603]: 2025-10-02 09:16:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:16:40 np0005465604 nova_compute[260603]: 2025-10-02 09:16:40.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:16:40 np0005465604 podman[428767]: 2025-10-02 09:16:40.442262211 +0000 UTC m=+0.029152156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:16:40 np0005465604 podman[428767]: 2025-10-02 09:16:40.663378203 +0000 UTC m=+0.250268058 container create 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  2 05:16:40 np0005465604 systemd[1]: Started libpod-conmon-37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396.scope.
Oct  2 05:16:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:16:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ad9048a84469a9a5f2abb0fcaf7db941ceab533dca8a2950793a82fbec1a0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ad9048a84469a9a5f2abb0fcaf7db941ceab533dca8a2950793a82fbec1a0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ad9048a84469a9a5f2abb0fcaf7db941ceab533dca8a2950793a82fbec1a0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ad9048a84469a9a5f2abb0fcaf7db941ceab533dca8a2950793a82fbec1a0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:16:40 np0005465604 podman[428767]: 2025-10-02 09:16:40.85409487 +0000 UTC m=+0.440984745 container init 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:16:40 np0005465604 podman[428767]: 2025-10-02 09:16:40.869135257 +0000 UTC m=+0.456025142 container start 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:16:40 np0005465604 podman[428767]: 2025-10-02 09:16:40.965958046 +0000 UTC m=+0.552847901 container attach 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:16:41 np0005465604 nova_compute[260603]: 2025-10-02 09:16:41.251 2 DEBUG nova.network.neutron [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updated VIF entry in instance network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:16:41 np0005465604 nova_compute[260603]: 2025-10-02 09:16:41.253 2 DEBUG nova.network.neutron [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:16:41 np0005465604 nova_compute[260603]: 2025-10-02 09:16:41.300 2 DEBUG oslo_concurrency.lockutils [req-0d61fe8e-656e-49ae-b904-d67ead0f95ca req-8ce3a58d-8427-41da-b613-a572ddc52523 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:16:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:16:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1348 writes, 6344 keys, 1348 commit groups, 1.0 writes per commit group, ingest: 8.63 MB, 0.01 MB/s#012Interval WAL: 1348 writes, 1348 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     94.3      0.80              0.29        44    0.018       0      0       0.0       0.0#012  L6      1/0    9.44 MB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   4.8    150.1    126.9      2.85              1.29        43    0.066    274K    23K       0.0       0.0#012 Sum      1/0    9.44 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.8    117.1    119.7      3.66              1.58        87    0.042    274K    23K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8     71.8     73.5      0.90              0.25        12    0.075     49K   3088       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   0.0    150.1    126.9      2.85              1.29        43    0.066    274K    23K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     94.9      0.80              0.29        43    0.019       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.074, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.43 GB write, 0.08 MB/s write, 0.42 GB read, 0.08 MB/s read, 3.7 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 48.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000311 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3105,46.14 MB,15.1765%) FilterBlock(88,765.73 KB,0.245983%) IndexBlock(88,1.25 MB,0.409975%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]: {
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "osd_id": 2,
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "type": "bluestore"
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:    },
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "osd_id": 1,
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "type": "bluestore"
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:    },
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "osd_id": 0,
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:        "type": "bluestore"
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]:    }
Oct  2 05:16:41 np0005465604 heuristic_matsumoto[428785]: }
Oct  2 05:16:41 np0005465604 systemd[1]: libpod-37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396.scope: Deactivated successfully.
Oct  2 05:16:41 np0005465604 podman[428767]: 2025-10-02 09:16:41.937268611 +0000 UTC m=+1.524158556 container died 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 05:16:41 np0005465604 systemd[1]: libpod-37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396.scope: Consumed 1.063s CPU time.
Oct  2 05:16:42 np0005465604 nova_compute[260603]: 2025-10-02 09:16:42.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 88 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 66 op/s
Oct  2 05:16:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f2ad9048a84469a9a5f2abb0fcaf7db941ceab533dca8a2950793a82fbec1a0e-merged.mount: Deactivated successfully.
Oct  2 05:16:43 np0005465604 nova_compute[260603]: 2025-10-02 09:16:43.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:43 np0005465604 podman[428767]: 2025-10-02 09:16:43.512640506 +0000 UTC m=+3.099530411 container remove 37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:16:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:16:43 np0005465604 systemd[1]: libpod-conmon-37a6b54448693dec9788a052e0d5d8de7fe6e286afb561e8fcd32717578c5396.scope: Deactivated successfully.
Oct  2 05:16:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:16:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:16:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:16:43 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 04bc5472-b05a-49ae-a322-5426900638a3 does not exist
Oct  2 05:16:43 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2488ac31-8a07-4863-a139-e47e5e7263c2 does not exist
Oct  2 05:16:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 97 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Oct  2 05:16:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:16:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:16:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 97 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 MiB/s wr, 18 op/s
Oct  2 05:16:47 np0005465604 nova_compute[260603]: 2025-10-02 09:16:47.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 108 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.0 MiB/s wr, 26 op/s
Oct  2 05:16:48 np0005465604 nova_compute[260603]: 2025-10-02 09:16:48.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:16:48Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:75:e6 10.100.0.11
Oct  2 05:16:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:16:48Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:75:e6 10.100.0.11
Oct  2 05:16:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 109 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Oct  2 05:16:52 np0005465604 nova_compute[260603]: 2025-10-02 09:16:52.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 109 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Oct  2 05:16:53 np0005465604 nova_compute[260603]: 2025-10-02 09:16:53.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 259 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:16:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:16:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 959 KiB/s wr, 44 op/s
Oct  2 05:16:57 np0005465604 nova_compute[260603]: 2025-10-02 09:16:57.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:16:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:16:58 np0005465604 podman[428885]: 2025-10-02 09:16:58.050164863 +0000 UTC m=+0.090933197 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 05:16:58 np0005465604 podman[428884]: 2025-10-02 09:16:58.112222111 +0000 UTC m=+0.153253833 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Oct  2 05:16:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 971 KiB/s wr, 44 op/s
Oct  2 05:16:58 np0005465604 nova_compute[260603]: 2025-10-02 09:16:58.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:16:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 196 KiB/s rd, 182 KiB/s wr, 36 op/s
Oct  2 05:17:02 np0005465604 nova_compute[260603]: 2025-10-02 09:17:02.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 108 KiB/s wr, 29 op/s
Oct  2 05:17:02 np0005465604 nova_compute[260603]: 2025-10-02 09:17:02.521 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:17:03 np0005465604 podman[428927]: 2025-10-02 09:17:03.026770805 +0000 UTC m=+0.084762565 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:17:03 np0005465604 podman[428928]: 2025-10-02 09:17:03.037640262 +0000 UTC m=+0.076607301 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 05:17:03 np0005465604 nova_compute[260603]: 2025-10-02 09:17:03.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 108 KiB/s wr, 29 op/s
Oct  2 05:17:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct  2 05:17:07 np0005465604 ovn_controller[152344]: 2025-10-02T09:17:07Z|01655|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct  2 05:17:07 np0005465604 nova_compute[260603]: 2025-10-02 09:17:07.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:07 np0005465604 nova_compute[260603]: 2025-10-02 09:17:07.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:17:07 np0005465604 nova_compute[260603]: 2025-10-02 09:17:07.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:17:07 np0005465604 nova_compute[260603]: 2025-10-02 09:17:07.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:17:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct  2 05:17:08 np0005465604 nova_compute[260603]: 2025-10-02 09:17:08.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:09 np0005465604 nova_compute[260603]: 2025-10-02 09:17:09.060 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:17:09 np0005465604 nova_compute[260603]: 2025-10-02 09:17:09.061 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:17:09 np0005465604 nova_compute[260603]: 2025-10-02 09:17:09.061 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 05:17:09 np0005465604 nova_compute[260603]: 2025-10-02 09:17:09.062 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bfb5f44c-0aeb-439f-9d64-934d5cb85c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:17:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  2 05:17:12 np0005465604 nova_compute[260603]: 2025-10-02 09:17:12.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.175 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.202 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.203 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.204 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.204 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.204 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.255 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.256 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.256 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.256 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.257 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:17:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073728237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.711 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.870 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:17:13 np0005465604 nova_compute[260603]: 2025-10-02 09:17:13.871 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.069 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.071 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3388MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.071 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.071 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.164 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance bfb5f44c-0aeb-439f-9d64-934d5cb85c02 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.164 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.164 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.208 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:17:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.641 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.642 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:17:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165600605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.681 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.689 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.716 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.742 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.780 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.781 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.863 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.864 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.877 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:17:14 np0005465604 nova_compute[260603]: 2025-10-02 09:17:14.878 2 INFO nova.compute.claims [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:17:15 np0005465604 nova_compute[260603]: 2025-10-02 09:17:15.025 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:17:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:17:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3382146672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:17:15 np0005465604 nova_compute[260603]: 2025-10-02 09:17:15.515 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:17:15 np0005465604 nova_compute[260603]: 2025-10-02 09:17:15.528 2 DEBUG nova.compute.provider_tree [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:17:15 np0005465604 nova_compute[260603]: 2025-10-02 09:17:15.549 2 DEBUG nova.scheduler.client.report [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:17:15 np0005465604 nova_compute[260603]: 2025-10-02 09:17:15.628 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:15 np0005465604 nova_compute[260603]: 2025-10-02 09:17:15.630 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:17:15 np0005465604 nova_compute[260603]: 2025-10-02 09:17:15.776 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:17:15 np0005465604 nova_compute[260603]: 2025-10-02 09:17:15.777 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:17:15 np0005465604 nova_compute[260603]: 2025-10-02 09:17:15.830 2 INFO nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:17:15 np0005465604 nova_compute[260603]: 2025-10-02 09:17:15.897 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:17:15 np0005465604 nova_compute[260603]: 2025-10-02 09:17:15.988 2 DEBUG nova.policy [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.052 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.055 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.055 2 INFO nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Creating image(s)#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.094 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.127 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.157 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.162 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.209 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.209 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.239 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.239 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.240 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.240 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.269 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:17:16 np0005465604 nova_compute[260603]: 2025-10-02 09:17:16.273 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:17:17 np0005465604 nova_compute[260603]: 2025-10-02 09:17:17.031 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:17:17 np0005465604 nova_compute[260603]: 2025-10-02 09:17:17.127 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:17:17 np0005465604 nova_compute[260603]: 2025-10-02 09:17:17.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:17 np0005465604 nova_compute[260603]: 2025-10-02 09:17:17.404 2 DEBUG nova.objects.instance [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:17:17 np0005465604 nova_compute[260603]: 2025-10-02 09:17:17.423 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:17:17 np0005465604 nova_compute[260603]: 2025-10-02 09:17:17.424 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Ensure instance console log exists: /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:17:17 np0005465604 nova_compute[260603]: 2025-10-02 09:17:17.425 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:17 np0005465604 nova_compute[260603]: 2025-10-02 09:17:17.426 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:17 np0005465604 nova_compute[260603]: 2025-10-02 09:17:17.426 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:18 np0005465604 nova_compute[260603]: 2025-10-02 09:17:18.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:18.092 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:17:18 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:18.095 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:17:18 np0005465604 nova_compute[260603]: 2025-10-02 09:17:18.204 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Successfully created port: 9d78430c-570c-4b66-97e9-27790d7f2c0b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:17:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 149 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 929 KiB/s wr, 13 op/s
Oct  2 05:17:18 np0005465604 nova_compute[260603]: 2025-10-02 09:17:18.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:19 np0005465604 nova_compute[260603]: 2025-10-02 09:17:19.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:17:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:20 np0005465604 nova_compute[260603]: 2025-10-02 09:17:20.243 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Successfully updated port: 9d78430c-570c-4b66-97e9-27790d7f2c0b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:17:20 np0005465604 nova_compute[260603]: 2025-10-02 09:17:20.265 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:17:20 np0005465604 nova_compute[260603]: 2025-10-02 09:17:20.265 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:17:20 np0005465604 nova_compute[260603]: 2025-10-02 09:17:20.266 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:17:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:17:20 np0005465604 nova_compute[260603]: 2025-10-02 09:17:20.381 2 DEBUG nova.compute.manager [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:17:20 np0005465604 nova_compute[260603]: 2025-10-02 09:17:20.382 2 DEBUG nova.compute.manager [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing instance network info cache due to event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:17:20 np0005465604 nova_compute[260603]: 2025-10-02 09:17:20.382 2 DEBUG oslo_concurrency.lockutils [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:17:21 np0005465604 nova_compute[260603]: 2025-10-02 09:17:21.063 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:17:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:21.097 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:17:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:17:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3548428173' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:17:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:17:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3548428173' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:17:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:17:22 np0005465604 nova_compute[260603]: 2025-10-02 09:17:22.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:22 np0005465604 nova_compute[260603]: 2025-10-02 09:17:22.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.036 2 DEBUG nova.network.neutron [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.084 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.085 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Instance network_info: |[{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.086 2 DEBUG oslo_concurrency.lockutils [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.087 2 DEBUG nova.network.neutron [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.093 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Start _get_guest_xml network_info=[{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.099 2 WARNING nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.104 2 DEBUG nova.virt.libvirt.host [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.105 2 DEBUG nova.virt.libvirt.host [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.108 2 DEBUG nova.virt.libvirt.host [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.109 2 DEBUG nova.virt.libvirt.host [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.110 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.110 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.111 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.112 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.113 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.113 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.114 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.115 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.115 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.116 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.116 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.117 2 DEBUG nova.virt.hardware [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.124 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:17:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139598554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.622 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.642 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:17:23 np0005465604 nova_compute[260603]: 2025-10-02 09:17:23.646 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:17:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:17:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2848137534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.104 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.106 2 DEBUG nova.virt.libvirt.vif [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-650898802',display_name='tempest-TestGettingAddress-server-650898802',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-650898802',id=148,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-96j0e3zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:17:15Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.106 2 DEBUG nova.network.os_vif_util [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.108 2 DEBUG nova.network.os_vif_util [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.109 2 DEBUG nova.objects.instance [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.131 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  <uuid>7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555</uuid>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  <name>instance-00000094</name>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-650898802</nova:name>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:17:23</nova:creationTime>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <nova:port uuid="9d78430c-570c-4b66-97e9-27790d7f2c0b">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fef7:b632" ipVersion="6"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fef7:b632" ipVersion="6"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <entry name="serial">7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555</entry>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <entry name="uuid">7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555</entry>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:f7:b6:32"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <target dev="tap9d78430c-57"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/console.log" append="off"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:17:24 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:17:24 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:17:24 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:17:24 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.132 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Preparing to wait for external event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.132 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.133 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.133 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.133 2 DEBUG nova.virt.libvirt.vif [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-650898802',display_name='tempest-TestGettingAddress-server-650898802',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-650898802',id=148,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-96j0e3zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:17:15Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.134 2 DEBUG nova.network.os_vif_util [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.134 2 DEBUG nova.network.os_vif_util [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.135 2 DEBUG os_vif [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d78430c-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d78430c-57, col_values=(('external_ids', {'iface-id': '9d78430c-570c-4b66-97e9-27790d7f2c0b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:b6:32', 'vm-uuid': '7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:17:24 np0005465604 NetworkManager[45129]: <info>  [1759396644.1413] manager: (tap9d78430c-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/674)
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.148 2 INFO os_vif [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57')#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.231 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.232 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.232 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:f7:b6:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.233 2 INFO nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Using config drive#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.261 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:17:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.559 2 INFO nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Creating config drive at /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.571 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf63wnhg6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:17:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.738 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf63wnhg6" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.780 2 DEBUG nova.storage.rbd_utils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:17:24 np0005465604 nova_compute[260603]: 2025-10-02 09:17:24.785 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:17:25 np0005465604 nova_compute[260603]: 2025-10-02 09:17:25.106 2 DEBUG nova.network.neutron [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updated VIF entry in instance network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:17:25 np0005465604 nova_compute[260603]: 2025-10-02 09:17:25.107 2 DEBUG nova.network.neutron [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:17:25 np0005465604 nova_compute[260603]: 2025-10-02 09:17:25.129 2 DEBUG oslo_concurrency.lockutils [req-3c2f8944-a11f-4b99-8f93-b02274c45d34 req-73d3dace-d302-409d-bd4f-fd3f54eace81 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:17:25 np0005465604 nova_compute[260603]: 2025-10-02 09:17:25.896 2 DEBUG oslo_concurrency.processutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:17:25 np0005465604 nova_compute[260603]: 2025-10-02 09:17:25.897 2 INFO nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Deleting local config drive /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555/disk.config because it was imported into RBD.#033[00m
Oct  2 05:17:25 np0005465604 kernel: tap9d78430c-57: entered promiscuous mode
Oct  2 05:17:25 np0005465604 NetworkManager[45129]: <info>  [1759396645.9743] manager: (tap9d78430c-57): new Tun device (/org/freedesktop/NetworkManager/Devices/675)
Oct  2 05:17:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:17:25Z|01656|binding|INFO|Claiming lport 9d78430c-570c-4b66-97e9-27790d7f2c0b for this chassis.
Oct  2 05:17:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:17:25Z|01657|binding|INFO|9d78430c-570c-4b66-97e9-27790d7f2c0b: Claiming fa:16:3e:f7:b6:32 10.100.0.3 2001:db8:0:1:f816:3eff:fef7:b632 2001:db8::f816:3eff:fef7:b632
Oct  2 05:17:25 np0005465604 nova_compute[260603]: 2025-10-02 09:17:25.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:25.983 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:b6:32 10.100.0.3 2001:db8:0:1:f816:3eff:fef7:b632 2001:db8::f816:3eff:fef7:b632'], port_security=['fa:16:3e:f7:b6:32 10.100.0.3 2001:db8:0:1:f816:3eff:fef7:b632 2001:db8::f816:3eff:fef7:b632'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fef7:b632/64 2001:db8::f816:3eff:fef7:b632/64', 'neutron:device_id': '7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6142c2da-c0c8-4842-a55d-76581298b5e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9d78430c-570c-4b66-97e9-27790d7f2c0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:17:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:25.985 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78430c-570c-4b66-97e9-27790d7f2c0b in datapath 32cae488-7671-4c05-a475-2c63f103261b bound to our chassis#033[00m
Oct  2 05:17:25 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:25.987 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32cae488-7671-4c05-a475-2c63f103261b#033[00m
Oct  2 05:17:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:17:25Z|01658|binding|INFO|Setting lport 9d78430c-570c-4b66-97e9-27790d7f2c0b ovn-installed in OVS
Oct  2 05:17:25 np0005465604 ovn_controller[152344]: 2025-10-02T09:17:25Z|01659|binding|INFO|Setting lport 9d78430c-570c-4b66-97e9-27790d7f2c0b up in Southbound
Oct  2 05:17:25 np0005465604 nova_compute[260603]: 2025-10-02 09:17:25.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:26 np0005465604 nova_compute[260603]: 2025-10-02 09:17:25.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.016 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe666db-fbfd-44eb-99b8-b574db9c58d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:26 np0005465604 systemd-machined[214636]: New machine qemu-182-instance-00000094.
Oct  2 05:17:26 np0005465604 systemd[1]: Started Virtual Machine qemu-182-instance-00000094.
Oct  2 05:17:26 np0005465604 systemd-udevd[429340]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:17:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.064 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[d96cfc2e-f215-4870-8a16-4f4b06470ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.068 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[a64c58a6-6ed1-4b48-8e29-4c79f36709e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:26 np0005465604 NetworkManager[45129]: <info>  [1759396646.0804] device (tap9d78430c-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:17:26 np0005465604 NetworkManager[45129]: <info>  [1759396646.0820] device (tap9d78430c-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:17:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.103 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[307ab445-b9fd-45e8-8b7f-d3f0b8383ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.132 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[50316b04-b308-4f39-9108-7efa80fca1a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32cae488-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 6, 'rx_bytes': 2230, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 6, 'rx_bytes': 2230, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746691, 'reachable_time': 42035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 429349, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.159 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4e84fa-7f9f-42f5-afc2-d24cc6b189da]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32cae488-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746702, 'tstamp': 746702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 429351, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32cae488-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746704, 'tstamp': 746704}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 429351, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.163 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32cae488-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:17:26 np0005465604 nova_compute[260603]: 2025-10-02 09:17:26.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.169 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32cae488-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:17:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.169 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:17:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.170 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32cae488-70, col_values=(('external_ids', {'iface-id': '25655298-5d81-4448-950c-289fb68606f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:17:26 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:26.171 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:17:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:17:26 np0005465604 nova_compute[260603]: 2025-10-02 09:17:26.529 2 DEBUG nova.compute.manager [req-42058949-f386-4e4d-9143-a55e5dfa3153 req-0f70c7d8-b0a7-4270-b66b-0210fba6cd57 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:17:26 np0005465604 nova_compute[260603]: 2025-10-02 09:17:26.530 2 DEBUG oslo_concurrency.lockutils [req-42058949-f386-4e4d-9143-a55e5dfa3153 req-0f70c7d8-b0a7-4270-b66b-0210fba6cd57 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:26 np0005465604 nova_compute[260603]: 2025-10-02 09:17:26.530 2 DEBUG oslo_concurrency.lockutils [req-42058949-f386-4e4d-9143-a55e5dfa3153 req-0f70c7d8-b0a7-4270-b66b-0210fba6cd57 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:26 np0005465604 nova_compute[260603]: 2025-10-02 09:17:26.531 2 DEBUG oslo_concurrency.lockutils [req-42058949-f386-4e4d-9143-a55e5dfa3153 req-0f70c7d8-b0a7-4270-b66b-0210fba6cd57 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:26 np0005465604 nova_compute[260603]: 2025-10-02 09:17:26.531 2 DEBUG nova.compute.manager [req-42058949-f386-4e4d-9143-a55e5dfa3153 req-0f70c7d8-b0a7-4270-b66b-0210fba6cd57 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Processing event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.348 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396647.3481598, 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.349 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] VM Started (Lifecycle Event)#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.350 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.354 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.357 2 INFO nova.virt.libvirt.driver [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Instance spawned successfully.#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.357 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.375 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.378 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.385 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.385 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.386 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.386 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.386 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.387 2 DEBUG nova.virt.libvirt.driver [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.408 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.408 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396647.3482668, 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.409 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.433 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.436 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396647.3538828, 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.436 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.443 2 INFO nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Took 11.39 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.443 2 DEBUG nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.453 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.455 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.507 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.517 2 INFO nova.compute.manager [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Took 12.72 seconds to build instance.#033[00m
Oct  2 05:17:27 np0005465604 nova_compute[260603]: 2025-10-02 09:17:27.552 2 DEBUG oslo_concurrency.lockutils [None req-79277987-05f0-4868-9dd7-9cc640ddf785 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:17:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:17:28
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.control']
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:17:28 np0005465604 nova_compute[260603]: 2025-10-02 09:17:28.648 2 DEBUG nova.compute.manager [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:17:28 np0005465604 nova_compute[260603]: 2025-10-02 09:17:28.648 2 DEBUG oslo_concurrency.lockutils [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:28 np0005465604 nova_compute[260603]: 2025-10-02 09:17:28.648 2 DEBUG oslo_concurrency.lockutils [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:28 np0005465604 nova_compute[260603]: 2025-10-02 09:17:28.649 2 DEBUG oslo_concurrency.lockutils [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:28 np0005465604 nova_compute[260603]: 2025-10-02 09:17:28.649 2 DEBUG nova.compute.manager [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] No waiting events found dispatching network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:17:28 np0005465604 nova_compute[260603]: 2025-10-02 09:17:28.649 2 WARNING nova.compute.manager [req-1471add5-9cb3-41a9-be81-7482edd8b09d req-20725070-e473-454d-854e-7d7b6cb00b0e 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received unexpected event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b for instance with vm_state active and task_state None.#033[00m
Oct  2 05:17:28 np0005465604 ceph-mgr[74774]: client.0 ms_handle_reset on v2:192.168.122.100:6800/860957497
Oct  2 05:17:29 np0005465604 podman[429396]: 2025-10-02 09:17:29.037925117 +0000 UTC m=+0.093607855 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 05:17:29 np0005465604 podman[429395]: 2025-10-02 09:17:29.057694375 +0000 UTC m=+0.123448457 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:17:29 np0005465604 nova_compute[260603]: 2025-10-02 09:17:29.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 948 KiB/s rd, 903 KiB/s wr, 54 op/s
Oct  2 05:17:31 np0005465604 nova_compute[260603]: 2025-10-02 09:17:31.632 2 DEBUG nova.compute.manager [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:17:31 np0005465604 nova_compute[260603]: 2025-10-02 09:17:31.633 2 DEBUG nova.compute.manager [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing instance network info cache due to event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:17:31 np0005465604 nova_compute[260603]: 2025-10-02 09:17:31.633 2 DEBUG oslo_concurrency.lockutils [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:17:31 np0005465604 nova_compute[260603]: 2025-10-02 09:17:31.633 2 DEBUG oslo_concurrency.lockutils [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:17:31 np0005465604 nova_compute[260603]: 2025-10-02 09:17:31.633 2 DEBUG nova.network.neutron [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:17:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 938 KiB/s rd, 15 KiB/s wr, 40 op/s
Oct  2 05:17:32 np0005465604 nova_compute[260603]: 2025-10-02 09:17:32.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:33 np0005465604 nova_compute[260603]: 2025-10-02 09:17:33.856 2 DEBUG nova.network.neutron [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updated VIF entry in instance network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:17:33 np0005465604 nova_compute[260603]: 2025-10-02 09:17:33.857 2 DEBUG nova.network.neutron [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:17:33 np0005465604 nova_compute[260603]: 2025-10-02 09:17:33.881 2 DEBUG oslo_concurrency.lockutils [req-f3d66d0a-7056-4942-923a-fd5d41afb39c req-327f00b8-83ff-431a-babb-dc0ea5b609cf 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:17:34 np0005465604 podman[429442]: 2025-10-02 09:17:34.045769526 +0000 UTC m=+0.098667623 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct  2 05:17:34 np0005465604 podman[429441]: 2025-10-02 09:17:34.056655025 +0000 UTC m=+0.114737254 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 05:17:34 np0005465604 nova_compute[260603]: 2025-10-02 09:17:34.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct  2 05:17:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:34.855 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:34.855 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:34.855 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct  2 05:17:37 np0005465604 nova_compute[260603]: 2025-10-02 09:17:37.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  2 05:17:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:17:38Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:b6:32 10.100.0.3
Oct  2 05:17:38 np0005465604 ovn_controller[152344]: 2025-10-02T09:17:38Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:b6:32 10.100.0.3
Oct  2 05:17:39 np0005465604 nova_compute[260603]: 2025-10-02 09:17:39.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011077501302337695 of space, bias 1.0, pg target 0.33232503907013083 quantized to 32 (current 32)
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:17:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:17:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 173 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 859 KiB/s wr, 89 op/s
Oct  2 05:17:41 np0005465604 nova_compute[260603]: 2025-10-02 09:17:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:17:41 np0005465604 nova_compute[260603]: 2025-10-02 09:17:41.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:17:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 173 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 846 KiB/s wr, 52 op/s
Oct  2 05:17:42 np0005465604 nova_compute[260603]: 2025-10-02 09:17:42.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:44 np0005465604 nova_compute[260603]: 2025-10-02 09:17:44.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 195 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct  2 05:17:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:17:45 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0720e283-5492-4e18-a2f6-55bc26589134 does not exist
Oct  2 05:17:45 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f59b07fa-d8f6-4523-926f-9879c2f669b1 does not exist
Oct  2 05:17:45 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev bb66cf11-13d9-461b-9734-c7a85c919512 does not exist
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:17:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:17:45 np0005465604 podman[429752]: 2025-10-02 09:17:45.664270168 +0000 UTC m=+0.023692351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:17:45 np0005465604 podman[429752]: 2025-10-02 09:17:45.843045562 +0000 UTC m=+0.202467735 container create e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 05:17:46 np0005465604 systemd[1]: Started libpod-conmon-e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd.scope.
Oct  2 05:17:46 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:17:46 np0005465604 podman[429752]: 2025-10-02 09:17:46.223369711 +0000 UTC m=+0.582791944 container init e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:17:46 np0005465604 podman[429752]: 2025-10-02 09:17:46.234413377 +0000 UTC m=+0.593835530 container start e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:17:46 np0005465604 beautiful_hypatia[429769]: 167 167
Oct  2 05:17:46 np0005465604 systemd[1]: libpod-e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd.scope: Deactivated successfully.
Oct  2 05:17:46 np0005465604 conmon[429769]: conmon e6be962c7e4324563a70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd.scope/container/memory.events
Oct  2 05:17:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 195 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Oct  2 05:17:46 np0005465604 podman[429752]: 2025-10-02 09:17:46.410189006 +0000 UTC m=+0.769611249 container attach e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 05:17:46 np0005465604 podman[429752]: 2025-10-02 09:17:46.410685843 +0000 UTC m=+0.770108056 container died e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 05:17:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-60a45bebccab754c7c057c0272231327ff0102bf71ccb84183a950caa4356e19-merged.mount: Deactivated successfully.
Oct  2 05:17:47 np0005465604 nova_compute[260603]: 2025-10-02 09:17:47.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:47 np0005465604 podman[429752]: 2025-10-02 09:17:47.600691492 +0000 UTC m=+1.960113685 container remove e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hypatia, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 05:17:47 np0005465604 systemd[1]: libpod-conmon-e6be962c7e4324563a704ec196d1a2a4c9cf880c0ae0fffccc35c459abe9f6fd.scope: Deactivated successfully.
Oct  2 05:17:47 np0005465604 podman[429794]: 2025-10-02 09:17:47.819311071 +0000 UTC m=+0.028654096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:17:47 np0005465604 podman[429794]: 2025-10-02 09:17:47.979147223 +0000 UTC m=+0.188490228 container create cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 05:17:48 np0005465604 systemd[1]: Started libpod-conmon-cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261.scope.
Oct  2 05:17:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:17:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:48 np0005465604 podman[429794]: 2025-10-02 09:17:48.219739198 +0000 UTC m=+0.429082223 container init cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 05:17:48 np0005465604 podman[429794]: 2025-10-02 09:17:48.232677162 +0000 UTC m=+0.442020167 container start cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:17:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  2 05:17:48 np0005465604 podman[429794]: 2025-10-02 09:17:48.349107699 +0000 UTC m=+0.558450814 container attach cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:17:49 np0005465604 nova_compute[260603]: 2025-10-02 09:17:49.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:49 np0005465604 relaxed_chatelet[429811]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:17:49 np0005465604 relaxed_chatelet[429811]: --> relative data size: 1.0
Oct  2 05:17:49 np0005465604 relaxed_chatelet[429811]: --> All data devices are unavailable
Oct  2 05:17:49 np0005465604 systemd[1]: libpod-cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261.scope: Deactivated successfully.
Oct  2 05:17:49 np0005465604 systemd[1]: libpod-cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261.scope: Consumed 1.049s CPU time.
Oct  2 05:17:49 np0005465604 podman[429794]: 2025-10-02 09:17:49.349712142 +0000 UTC m=+1.559055187 container died cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:17:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5d924bf14f75e554ac83b8e53ed6701d411fb71134e15d0546878fe8472d7e7f-merged.mount: Deactivated successfully.
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.173 2 DEBUG nova.compute.manager [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.174 2 DEBUG nova.compute.manager [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing instance network info cache due to event network-changed-9d78430c-570c-4b66-97e9-27790d7f2c0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.174 2 DEBUG oslo_concurrency.lockutils [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.175 2 DEBUG oslo_concurrency.lockutils [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.175 2 DEBUG nova.network.neutron [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Refreshing network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:17:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.397 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.398 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.398 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.399 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.399 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.401 2 INFO nova.compute.manager [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Terminating instance#033[00m
Oct  2 05:17:50 np0005465604 nova_compute[260603]: 2025-10-02 09:17:50.403 2 DEBUG nova.compute.manager [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:17:50 np0005465604 podman[429794]: 2025-10-02 09:17:50.452346302 +0000 UTC m=+2.661689317 container remove cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:17:50 np0005465604 systemd[1]: libpod-conmon-cee7b90a6144ec394e1d7865f1e2725476998bf6f45d74ee8fdfbf74ead07261.scope: Deactivated successfully.
Oct  2 05:17:51 np0005465604 kernel: tap9d78430c-57 (unregistering): left promiscuous mode
Oct  2 05:17:51 np0005465604 podman[429992]: 2025-10-02 09:17:51.151996885 +0000 UTC m=+0.032231258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:17:51 np0005465604 NetworkManager[45129]: <info>  [1759396671.2515] device (tap9d78430c-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:17:51 np0005465604 ovn_controller[152344]: 2025-10-02T09:17:51Z|01660|binding|INFO|Releasing lport 9d78430c-570c-4b66-97e9-27790d7f2c0b from this chassis (sb_readonly=0)
Oct  2 05:17:51 np0005465604 ovn_controller[152344]: 2025-10-02T09:17:51Z|01661|binding|INFO|Setting lport 9d78430c-570c-4b66-97e9-27790d7f2c0b down in Southbound
Oct  2 05:17:51 np0005465604 ovn_controller[152344]: 2025-10-02T09:17:51Z|01662|binding|INFO|Removing iface tap9d78430c-57 ovn-installed in OVS
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.294 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:b6:32 10.100.0.3 2001:db8:0:1:f816:3eff:fef7:b632 2001:db8::f816:3eff:fef7:b632'], port_security=['fa:16:3e:f7:b6:32 10.100.0.3 2001:db8:0:1:f816:3eff:fef7:b632 2001:db8::f816:3eff:fef7:b632'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fef7:b632/64 2001:db8::f816:3eff:fef7:b632/64', 'neutron:device_id': '7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6142c2da-c0c8-4842-a55d-76581298b5e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=9d78430c-570c-4b66-97e9-27790d7f2c0b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.297 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 9d78430c-570c-4b66-97e9-27790d7f2c0b in datapath 32cae488-7671-4c05-a475-2c63f103261b unbound from our chassis#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.298 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 32cae488-7671-4c05-a475-2c63f103261b#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.317 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[973ec3d1-0627-43ec-83dc-ec55799d82a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:51 np0005465604 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct  2 05:17:51 np0005465604 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Consumed 13.379s CPU time.
Oct  2 05:17:51 np0005465604 systemd-machined[214636]: Machine qemu-182-instance-00000094 terminated.
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.351 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a05c88-ffa4-45b1-bc12-d90449e4c276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.355 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a68fb3c-1fad-40fc-b78f-23f6df47c0e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.384 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8d860f1b-2f72-4e16-a46c-635c06c0454c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.405 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7dcf948b-4d24-4614-8571-e82a2dea97fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap32cae488-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3628, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3628, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746691, 'reachable_time': 42035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430015, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.423 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed20292-b3b7-486c-9944-04302d4a85a6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap32cae488-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746702, 'tstamp': 746702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430016, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap32cae488-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746704, 'tstamp': 746704}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430016, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:17:51 np0005465604 podman[429992]: 2025-10-02 09:17:51.425652053 +0000 UTC m=+0.305886446 container create 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.426 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32cae488-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.436 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32cae488-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.437 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.437 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap32cae488-70, col_values=(('external_ids', {'iface-id': '25655298-5d81-4448-950c-289fb68606f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:17:51 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:17:51.437 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.445 2 INFO nova.virt.libvirt.driver [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Instance destroyed successfully.#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.446 2 DEBUG nova.objects.instance [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.469 2 DEBUG nova.virt.libvirt.vif [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-650898802',display_name='tempest-TestGettingAddress-server-650898802',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-650898802',id=148,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:17:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-96j0e3zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:17:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.469 2 DEBUG nova.network.os_vif_util [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.470 2 DEBUG nova.network.os_vif_util [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.471 2 DEBUG os_vif [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d78430c-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:51 np0005465604 nova_compute[260603]: 2025-10-02 09:17:51.480 2 INFO os_vif [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:b6:32,bridge_name='br-int',has_traffic_filtering=True,id=9d78430c-570c-4b66-97e9-27790d7f2c0b,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d78430c-57')#033[00m
Oct  2 05:17:51 np0005465604 systemd[1]: Started libpod-conmon-327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947.scope.
Oct  2 05:17:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:17:51 np0005465604 podman[429992]: 2025-10-02 09:17:51.916403371 +0000 UTC m=+0.796637734 container init 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:17:51 np0005465604 podman[429992]: 2025-10-02 09:17:51.925187086 +0000 UTC m=+0.805421479 container start 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:17:51 np0005465604 festive_carver[430050]: 167 167
Oct  2 05:17:51 np0005465604 systemd[1]: libpod-327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947.scope: Deactivated successfully.
Oct  2 05:17:51 np0005465604 conmon[430050]: conmon 327746a8703da799a265 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947.scope/container/memory.events
Oct  2 05:17:51 np0005465604 podman[429992]: 2025-10-02 09:17:51.996804522 +0000 UTC m=+0.877038885 container attach 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:17:51 np0005465604 podman[429992]: 2025-10-02 09:17:51.997616278 +0000 UTC m=+0.877850621 container died 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:17:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2003b70988640370bb0ec7f0d43f991805293303e6c672a52f2f3ef30d32bd8d-merged.mount: Deactivated successfully.
Oct  2 05:17:52 np0005465604 nova_compute[260603]: 2025-10-02 09:17:52.279 2 DEBUG nova.compute.manager [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-unplugged-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:17:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 193 KiB/s rd, 1.3 MiB/s wr, 45 op/s
Oct  2 05:17:52 np0005465604 nova_compute[260603]: 2025-10-02 09:17:52.280 2 DEBUG oslo_concurrency.lockutils [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:52 np0005465604 nova_compute[260603]: 2025-10-02 09:17:52.281 2 DEBUG oslo_concurrency.lockutils [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:52 np0005465604 nova_compute[260603]: 2025-10-02 09:17:52.281 2 DEBUG oslo_concurrency.lockutils [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:52 np0005465604 nova_compute[260603]: 2025-10-02 09:17:52.282 2 DEBUG nova.compute.manager [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] No waiting events found dispatching network-vif-unplugged-9d78430c-570c-4b66-97e9-27790d7f2c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:17:52 np0005465604 nova_compute[260603]: 2025-10-02 09:17:52.282 2 DEBUG nova.compute.manager [req-4d84a0e3-d091-4d1c-901e-49c0a7d0df5a req-47be0feb-cb49-42e0-97f3-aac2af6df497 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-unplugged-9d78430c-570c-4b66-97e9-27790d7f2c0b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:17:52 np0005465604 nova_compute[260603]: 2025-10-02 09:17:52.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:52 np0005465604 podman[429992]: 2025-10-02 09:17:52.471264882 +0000 UTC m=+1.351499275 container remove 327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:17:52 np0005465604 systemd[1]: libpod-conmon-327746a8703da799a2656fdf56fc0fee443661338b41f04b2089949d5c2cc947.scope: Deactivated successfully.
Oct  2 05:17:52 np0005465604 podman[430074]: 2025-10-02 09:17:52.661737281 +0000 UTC m=+0.047141164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:17:52 np0005465604 podman[430074]: 2025-10-02 09:17:52.817078744 +0000 UTC m=+0.202482647 container create bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 05:17:52 np0005465604 nova_compute[260603]: 2025-10-02 09:17:52.929 2 DEBUG nova.network.neutron [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updated VIF entry in instance network info cache for port 9d78430c-570c-4b66-97e9-27790d7f2c0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:17:52 np0005465604 nova_compute[260603]: 2025-10-02 09:17:52.930 2 DEBUG nova.network.neutron [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [{"id": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "address": "fa:16:3e:f7:b6:32", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef7:b632", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d78430c-57", "ovs_interfaceid": "9d78430c-570c-4b66-97e9-27790d7f2c0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:17:52 np0005465604 systemd[1]: Started libpod-conmon-bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9.scope.
Oct  2 05:17:53 np0005465604 nova_compute[260603]: 2025-10-02 09:17:53.010 2 DEBUG oslo_concurrency.lockutils [req-a7d537fb-1bb9-4123-a029-ef23b1bcfec4 req-9174233c-b330-4112-a076-cf63054ca860 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:17:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:17:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35b65c387a4e2f6d7b2ab11616c6957d2387463d3e12be003417273ab4bf43f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35b65c387a4e2f6d7b2ab11616c6957d2387463d3e12be003417273ab4bf43f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35b65c387a4e2f6d7b2ab11616c6957d2387463d3e12be003417273ab4bf43f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35b65c387a4e2f6d7b2ab11616c6957d2387463d3e12be003417273ab4bf43f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:53 np0005465604 podman[430074]: 2025-10-02 09:17:53.120441739 +0000 UTC m=+0.505845622 container init bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 05:17:53 np0005465604 podman[430074]: 2025-10-02 09:17:53.132736863 +0000 UTC m=+0.518140746 container start bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:17:53 np0005465604 podman[430074]: 2025-10-02 09:17:53.27639663 +0000 UTC m=+0.661800583 container attach bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]: {
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:    "0": [
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:        {
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "devices": [
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "/dev/loop3"
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            ],
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_name": "ceph_lv0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_size": "21470642176",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "name": "ceph_lv0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "tags": {
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.cluster_name": "ceph",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.crush_device_class": "",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.encrypted": "0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.osd_id": "0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.type": "block",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.vdo": "0"
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            },
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "type": "block",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "vg_name": "ceph_vg0"
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:        }
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:    ],
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:    "1": [
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:        {
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "devices": [
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "/dev/loop4"
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            ],
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_name": "ceph_lv1",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_size": "21470642176",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "name": "ceph_lv1",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "tags": {
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.cluster_name": "ceph",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.crush_device_class": "",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.encrypted": "0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.osd_id": "1",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.type": "block",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.vdo": "0"
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            },
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "type": "block",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "vg_name": "ceph_vg1"
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:        }
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:    ],
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:    "2": [
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:        {
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "devices": [
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "/dev/loop5"
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            ],
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_name": "ceph_lv2",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_size": "21470642176",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "name": "ceph_lv2",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "tags": {
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.cluster_name": "ceph",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.crush_device_class": "",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.encrypted": "0",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.osd_id": "2",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.type": "block",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:                "ceph.vdo": "0"
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            },
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "type": "block",
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:            "vg_name": "ceph_vg2"
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:        }
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]:    ]
Oct  2 05:17:53 np0005465604 exciting_mccarthy[430091]: }
Oct  2 05:17:54 np0005465604 systemd[1]: libpod-bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9.scope: Deactivated successfully.
Oct  2 05:17:54 np0005465604 podman[430074]: 2025-10-02 09:17:54.017082325 +0000 UTC m=+1.402486218 container died bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:17:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Oct  2 05:17:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a35b65c387a4e2f6d7b2ab11616c6957d2387463d3e12be003417273ab4bf43f-merged.mount: Deactivated successfully.
Oct  2 05:17:54 np0005465604 nova_compute[260603]: 2025-10-02 09:17:54.388 2 DEBUG nova.compute.manager [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:17:54 np0005465604 nova_compute[260603]: 2025-10-02 09:17:54.389 2 DEBUG oslo_concurrency.lockutils [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:17:54 np0005465604 nova_compute[260603]: 2025-10-02 09:17:54.389 2 DEBUG oslo_concurrency.lockutils [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:17:54 np0005465604 nova_compute[260603]: 2025-10-02 09:17:54.389 2 DEBUG oslo_concurrency.lockutils [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:17:54 np0005465604 nova_compute[260603]: 2025-10-02 09:17:54.389 2 DEBUG nova.compute.manager [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] No waiting events found dispatching network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:17:54 np0005465604 nova_compute[260603]: 2025-10-02 09:17:54.389 2 WARNING nova.compute.manager [req-ede535bb-5af5-499e-82fe-4f6d60287431 req-a4fc7dba-7eb1-4dab-826f-e0fe326f4189 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received unexpected event network-vif-plugged-9d78430c-570c-4b66-97e9-27790d7f2c0b for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:17:54 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct  2 05:17:54 np0005465604 podman[430074]: 2025-10-02 09:17:54.537539641 +0000 UTC m=+1.922943514 container remove bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 05:17:54 np0005465604 systemd[1]: libpod-conmon-bdb5d0610c72edd2d1bea52249a3afe8e6284916676288c3b9d895a0eddec9a9.scope: Deactivated successfully.
Oct  2 05:17:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:17:55 np0005465604 podman[430255]: 2025-10-02 09:17:55.345166826 +0000 UTC m=+0.030383030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:17:55 np0005465604 podman[430255]: 2025-10-02 09:17:55.478735598 +0000 UTC m=+0.163951792 container create 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:17:55 np0005465604 systemd[1]: Started libpod-conmon-8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a.scope.
Oct  2 05:17:55 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:17:55 np0005465604 podman[430255]: 2025-10-02 09:17:55.733491945 +0000 UTC m=+0.418708189 container init 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 05:17:55 np0005465604 podman[430255]: 2025-10-02 09:17:55.741195686 +0000 UTC m=+0.426411840 container start 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:17:55 np0005465604 sleepy_matsumoto[430273]: 167 167
Oct  2 05:17:55 np0005465604 systemd[1]: libpod-8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a.scope: Deactivated successfully.
Oct  2 05:17:55 np0005465604 podman[430255]: 2025-10-02 09:17:55.851167381 +0000 UTC m=+0.536383625 container attach 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct  2 05:17:55 np0005465604 podman[430255]: 2025-10-02 09:17:55.85207228 +0000 UTC m=+0.537288474 container died 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:17:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5c2832fb150155accf00b412ca4469803960ff2d9ac119745e05353876c87dd2-merged.mount: Deactivated successfully.
Oct  2 05:17:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 44 KiB/s wr, 23 op/s
Oct  2 05:17:56 np0005465604 podman[430255]: 2025-10-02 09:17:56.31022617 +0000 UTC m=+0.995442364 container remove 8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:17:56 np0005465604 systemd[1]: libpod-conmon-8ab47d5ce8e530d6134f682ef331da842ce2d02aa81199705b631b53ab2b449a.scope: Deactivated successfully.
Oct  2 05:17:56 np0005465604 nova_compute[260603]: 2025-10-02 09:17:56.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:56 np0005465604 podman[430297]: 2025-10-02 09:17:56.52797614 +0000 UTC m=+0.062102020 container create e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 05:17:56 np0005465604 podman[430297]: 2025-10-02 09:17:56.498571933 +0000 UTC m=+0.032697873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:17:56 np0005465604 systemd[1]: Started libpod-conmon-e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175.scope.
Oct  2 05:17:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:17:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43bf4bd9ae20bb8708f1fad1c097a44dd934742bcc7e560dffc18fe1060a689/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43bf4bd9ae20bb8708f1fad1c097a44dd934742bcc7e560dffc18fe1060a689/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43bf4bd9ae20bb8708f1fad1c097a44dd934742bcc7e560dffc18fe1060a689/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43bf4bd9ae20bb8708f1fad1c097a44dd934742bcc7e560dffc18fe1060a689/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:17:56 np0005465604 podman[430297]: 2025-10-02 09:17:56.849654928 +0000 UTC m=+0.383780818 container init e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:17:56 np0005465604 podman[430297]: 2025-10-02 09:17:56.85898498 +0000 UTC m=+0.393110880 container start e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:17:56 np0005465604 podman[430297]: 2025-10-02 09:17:56.96177362 +0000 UTC m=+0.495899480 container attach e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:17:57 np0005465604 nova_compute[260603]: 2025-10-02 09:17:57.099 2 INFO nova.virt.libvirt.driver [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Deleting instance files /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_del#033[00m
Oct  2 05:17:57 np0005465604 nova_compute[260603]: 2025-10-02 09:17:57.101 2 INFO nova.virt.libvirt.driver [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Deletion of /var/lib/nova/instances/7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555_del complete#033[00m
Oct  2 05:17:57 np0005465604 nova_compute[260603]: 2025-10-02 09:17:57.201 2 INFO nova.compute.manager [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Took 6.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:17:57 np0005465604 nova_compute[260603]: 2025-10-02 09:17:57.203 2 DEBUG oslo.service.loopingcall [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:17:57 np0005465604 nova_compute[260603]: 2025-10-02 09:17:57.204 2 DEBUG nova.compute.manager [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:17:57 np0005465604 nova_compute[260603]: 2025-10-02 09:17:57.205 2 DEBUG nova.network.neutron [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:17:57 np0005465604 nova_compute[260603]: 2025-10-02 09:17:57.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:17:57 np0005465604 elated_sammet[430313]: {
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "osd_id": 2,
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "type": "bluestore"
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:    },
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "osd_id": 1,
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "type": "bluestore"
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:    },
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "osd_id": 0,
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:        "type": "bluestore"
Oct  2 05:17:57 np0005465604 elated_sammet[430313]:    }
Oct  2 05:17:57 np0005465604 elated_sammet[430313]: }
Oct  2 05:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:17:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:17:57 np0005465604 systemd[1]: libpod-e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175.scope: Deactivated successfully.
Oct  2 05:17:57 np0005465604 podman[430297]: 2025-10-02 09:17:57.998182132 +0000 UTC m=+1.532308002 container died e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 05:17:57 np0005465604 systemd[1]: libpod-e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175.scope: Consumed 1.139s CPU time.
Oct  2 05:17:58 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a43bf4bd9ae20bb8708f1fad1c097a44dd934742bcc7e560dffc18fe1060a689-merged.mount: Deactivated successfully.
Oct  2 05:17:58 np0005465604 podman[430297]: 2025-10-02 09:17:58.264365726 +0000 UTC m=+1.798491596 container remove e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:17:58 np0005465604 systemd[1]: libpod-conmon-e0dc5c47c74623e01984d3cf5248f72d27f0f9203e98c25b2d9a783f968c9175.scope: Deactivated successfully.
Oct  2 05:17:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 125 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 48 KiB/s wr, 32 op/s
Oct  2 05:17:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:17:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:17:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:17:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:17:58 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 967754cc-5330-48d9-9939-84d079f83429 does not exist
Oct  2 05:17:58 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ff6f4044-38ca-4998-a906-4593e70b3049 does not exist
Oct  2 05:17:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:17:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:17:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:00 np0005465604 podman[430411]: 2025-10-02 09:18:00.05480104 +0000 UTC m=+0.102719719 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:18:00 np0005465604 podman[430410]: 2025-10-02 09:18:00.090103313 +0000 UTC m=+0.138115975 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:18:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 11 KiB/s wr, 30 op/s
Oct  2 05:18:00 np0005465604 nova_compute[260603]: 2025-10-02 09:18:00.299 2 DEBUG nova.network.neutron [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:18:00 np0005465604 nova_compute[260603]: 2025-10-02 09:18:00.488 2 DEBUG nova.compute.manager [req-8ccef13e-8fa3-4aa8-8436-e86a661f441a req-89a7020b-de6e-4f3e-a1db-77dffc0570d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Received event network-vif-deleted-9d78430c-570c-4b66-97e9-27790d7f2c0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:18:00 np0005465604 nova_compute[260603]: 2025-10-02 09:18:00.489 2 INFO nova.compute.manager [req-8ccef13e-8fa3-4aa8-8436-e86a661f441a req-89a7020b-de6e-4f3e-a1db-77dffc0570d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Neutron deleted interface 9d78430c-570c-4b66-97e9-27790d7f2c0b; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:18:00 np0005465604 nova_compute[260603]: 2025-10-02 09:18:00.489 2 DEBUG nova.network.neutron [req-8ccef13e-8fa3-4aa8-8436-e86a661f441a req-89a7020b-de6e-4f3e-a1db-77dffc0570d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:18:00 np0005465604 nova_compute[260603]: 2025-10-02 09:18:00.654 2 INFO nova.compute.manager [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Took 3.45 seconds to deallocate network for instance.#033[00m
Oct  2 05:18:00 np0005465604 nova_compute[260603]: 2025-10-02 09:18:00.669 2 DEBUG nova.compute.manager [req-8ccef13e-8fa3-4aa8-8436-e86a661f441a req-89a7020b-de6e-4f3e-a1db-77dffc0570d1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Detach interface failed, port_id=9d78430c-570c-4b66-97e9-27790d7f2c0b, reason: Instance 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:18:00 np0005465604 nova_compute[260603]: 2025-10-02 09:18:00.858 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:18:00 np0005465604 nova_compute[260603]: 2025-10-02 09:18:00.859 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:18:00 np0005465604 nova_compute[260603]: 2025-10-02 09:18:00.946 2 DEBUG oslo_concurrency.processutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:18:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:18:01 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300512841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:18:01 np0005465604 nova_compute[260603]: 2025-10-02 09:18:01.390 2 DEBUG oslo_concurrency.processutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:18:01 np0005465604 nova_compute[260603]: 2025-10-02 09:18:01.399 2 DEBUG nova.compute.provider_tree [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:18:01 np0005465604 nova_compute[260603]: 2025-10-02 09:18:01.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:01 np0005465604 nova_compute[260603]: 2025-10-02 09:18:01.542 2 DEBUG nova.scheduler.client.report [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:18:01 np0005465604 nova_compute[260603]: 2025-10-02 09:18:01.701 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:18:01 np0005465604 nova_compute[260603]: 2025-10-02 09:18:01.772 2 INFO nova.scheduler.client.report [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555#033[00m
Oct  2 05:18:01 np0005465604 nova_compute[260603]: 2025-10-02 09:18:01.884 2 DEBUG oslo_concurrency.lockutils [None req-73252d81-d979-4957-a316-a6d260c98bf2 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:18:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 29 op/s
Oct  2 05:18:02 np0005465604 nova_compute[260603]: 2025-10-02 09:18:02.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.531 2 DEBUG nova.compute.manager [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.532 2 DEBUG nova.compute.manager [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing instance network info cache due to event network-changed-dad30664-f830-4b51-9cf5-8b92d95308bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.532 2 DEBUG oslo_concurrency.lockutils [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.533 2 DEBUG oslo_concurrency.lockutils [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.533 2 DEBUG nova.network.neutron [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Refreshing network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.777 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.778 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.778 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.779 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.779 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.781 2 INFO nova.compute.manager [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Terminating instance#033[00m
Oct  2 05:18:03 np0005465604 nova_compute[260603]: 2025-10-02 09:18:03.783 2 DEBUG nova.compute.manager [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:18:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 29 op/s
Oct  2 05:18:04 np0005465604 kernel: tapdad30664-f8 (unregistering): left promiscuous mode
Oct  2 05:18:04 np0005465604 NetworkManager[45129]: <info>  [1759396684.4812] device (tapdad30664-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:04 np0005465604 ovn_controller[152344]: 2025-10-02T09:18:04Z|01663|binding|INFO|Releasing lport dad30664-f830-4b51-9cf5-8b92d95308bb from this chassis (sb_readonly=0)
Oct  2 05:18:04 np0005465604 ovn_controller[152344]: 2025-10-02T09:18:04Z|01664|binding|INFO|Setting lport dad30664-f830-4b51-9cf5-8b92d95308bb down in Southbound
Oct  2 05:18:04 np0005465604 ovn_controller[152344]: 2025-10-02T09:18:04Z|01665|binding|INFO|Removing iface tapdad30664-f8 ovn-installed in OVS
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:04.542 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:75:e6 10.100.0.11 2001:db8:0:1:f816:3eff:fe28:75e6 2001:db8::f816:3eff:fe28:75e6'], port_security=['fa:16:3e:28:75:e6 10.100.0.11 2001:db8:0:1:f816:3eff:fe28:75e6 2001:db8::f816:3eff:fe28:75e6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8:0:1:f816:3eff:fe28:75e6/64 2001:db8::f816:3eff:fe28:75e6/64', 'neutron:device_id': 'bfb5f44c-0aeb-439f-9d64-934d5cb85c02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-32cae488-7671-4c05-a475-2c63f103261b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6142c2da-c0c8-4842-a55d-76581298b5e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=591e6b4a-e1ca-4274-8e8d-321441edaa04, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dad30664-f830-4b51-9cf5-8b92d95308bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:18:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:04.545 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dad30664-f830-4b51-9cf5-8b92d95308bb in datapath 32cae488-7671-4c05-a475-2c63f103261b unbound from our chassis#033[00m
Oct  2 05:18:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:04.546 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 32cae488-7671-4c05-a475-2c63f103261b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:18:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:04.548 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[00f10b06-caa3-4767-b046-e30a03095d57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:18:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:04.549 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-32cae488-7671-4c05-a475-2c63f103261b namespace which is not needed anymore#033[00m
Oct  2 05:18:04 np0005465604 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct  2 05:18:04 np0005465604 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Consumed 17.226s CPU time.
Oct  2 05:18:04 np0005465604 systemd-machined[214636]: Machine qemu-181-instance-00000093 terminated.
Oct  2 05:18:04 np0005465604 podman[430480]: 2025-10-02 09:18:04.650998251 +0000 UTC m=+0.110981698 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.655 2 INFO nova.virt.libvirt.driver [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Instance destroyed successfully.#033[00m
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.656 2 DEBUG nova.objects.instance [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid bfb5f44c-0aeb-439f-9d64-934d5cb85c02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:18:04 np0005465604 podman[430482]: 2025-10-02 09:18:04.66539726 +0000 UTC m=+0.118509482 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct  2 05:18:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.757 2 DEBUG nova.virt.libvirt.vif [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1773019515',display_name='tempest-TestGettingAddress-server-1773019515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1773019515',id=147,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEGsNuW5gEn3WEc3qQ3L0JdcOZd6vD7SMJl6t9YZV9xlo9AdZ8yeIelaH48tIJjYeguHisd3f+wPKQxnBOP+bkaPT8EVTluVcKTxfA+koE1m61LBSFo3LpknbEg+9XiigA==',key_name='tempest-TestGettingAddress-1915804126',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:16:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-9gbn0ocy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:16:30Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=bfb5f44c-0aeb-439f-9d64-934d5cb85c02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.758 2 DEBUG nova.network.os_vif_util [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.759 2 DEBUG nova.network.os_vif_util [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.760 2 DEBUG os_vif [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.762 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdad30664-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:04 np0005465604 nova_compute[260603]: 2025-10-02 09:18:04.773 2 INFO os_vif [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:75:e6,bridge_name='br-int',has_traffic_filtering=True,id=dad30664-f830-4b51-9cf5-8b92d95308bb,network=Network(32cae488-7671-4c05-a475-2c63f103261b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdad30664-f8')#033[00m
Oct  2 05:18:04 np0005465604 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [NOTICE]   (428314) : haproxy version is 2.8.14-c23fe91
Oct  2 05:18:04 np0005465604 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [NOTICE]   (428314) : path to executable is /usr/sbin/haproxy
Oct  2 05:18:04 np0005465604 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [WARNING]  (428314) : Exiting Master process...
Oct  2 05:18:04 np0005465604 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [WARNING]  (428314) : Exiting Master process...
Oct  2 05:18:04 np0005465604 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [ALERT]    (428314) : Current worker (428316) exited with code 143 (Terminated)
Oct  2 05:18:04 np0005465604 neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b[428305]: [WARNING]  (428314) : All workers exited. Exiting... (0)
Oct  2 05:18:04 np0005465604 systemd[1]: libpod-bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104.scope: Deactivated successfully.
Oct  2 05:18:04 np0005465604 podman[430551]: 2025-10-02 09:18:04.879909261 +0000 UTC m=+0.187088295 container died bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:18:05 np0005465604 nova_compute[260603]: 2025-10-02 09:18:05.083 2 DEBUG nova.compute.manager [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-unplugged-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:18:05 np0005465604 nova_compute[260603]: 2025-10-02 09:18:05.083 2 DEBUG oslo_concurrency.lockutils [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:18:05 np0005465604 nova_compute[260603]: 2025-10-02 09:18:05.084 2 DEBUG oslo_concurrency.lockutils [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:18:05 np0005465604 nova_compute[260603]: 2025-10-02 09:18:05.084 2 DEBUG oslo_concurrency.lockutils [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:18:05 np0005465604 nova_compute[260603]: 2025-10-02 09:18:05.084 2 DEBUG nova.compute.manager [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] No waiting events found dispatching network-vif-unplugged-dad30664-f830-4b51-9cf5-8b92d95308bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:18:05 np0005465604 nova_compute[260603]: 2025-10-02 09:18:05.085 2 DEBUG nova.compute.manager [req-a438c283-bf1a-43ad-9a83-b2c4b3551963 req-829187b6-c9b9-4a8f-b07c-ebd15f087b01 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-unplugged-dad30664-f830-4b51-9cf5-8b92d95308bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:18:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104-userdata-shm.mount: Deactivated successfully.
Oct  2 05:18:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-140339e959af83ff118cb8023b67732905454915fb70723adb2eabd214c6428e-merged.mount: Deactivated successfully.
Oct  2 05:18:06 np0005465604 podman[430551]: 2025-10-02 09:18:06.057376559 +0000 UTC m=+1.364555573 container cleanup bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 05:18:06 np0005465604 systemd[1]: libpod-conmon-bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104.scope: Deactivated successfully.
Oct  2 05:18:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 3.8 KiB/s wr, 18 op/s
Oct  2 05:18:06 np0005465604 nova_compute[260603]: 2025-10-02 09:18:06.443 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396671.4428585, 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:18:06 np0005465604 nova_compute[260603]: 2025-10-02 09:18:06.444 2 INFO nova.compute.manager [-] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:18:06 np0005465604 nova_compute[260603]: 2025-10-02 09:18:06.494 2 DEBUG nova.compute.manager [None req-a0371a05-4525-4344-87ac-0f21bcd67f57 - - - - - -] [instance: 7d6c9cd4-13c9-4912-8ffe-c4fdaeb1d555] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:18:06 np0005465604 podman[430598]: 2025-10-02 09:18:06.842688968 +0000 UTC m=+0.759868756 container remove bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 05:18:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.851 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[75a352c3-1ee6-43fd-8901-f8040b8d4c15]: (4, ('Thu Oct  2 09:18:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b (bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104)\nbc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104\nThu Oct  2 09:18:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-32cae488-7671-4c05-a475-2c63f103261b (bc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104)\nbc150311d48554bbfbd815b17e28b7f8b2646726eadd636a743106f705978104\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:18:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.854 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[11b3435e-5986-42ec-80c0-5ac68f44379d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:18:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.855 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32cae488-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:18:06 np0005465604 kernel: tap32cae488-70: left promiscuous mode
Oct  2 05:18:06 np0005465604 nova_compute[260603]: 2025-10-02 09:18:06.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:06 np0005465604 nova_compute[260603]: 2025-10-02 09:18:06.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.889 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[18bce2a8-cbd7-4742-bf7b-aa875eed039f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:18:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.920 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b084a4-05bb-4157-a440-c857e8bd20e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:18:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.922 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a825ded2-9cbd-41eb-a361-ca23fef225a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:18:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.948 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[10b11296-51e7-4a22-9936-19a8609352cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746682, 'reachable_time': 30147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430614, 'error': None, 'target': 'ovnmeta-32cae488-7671-4c05-a475-2c63f103261b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:18:06 np0005465604 systemd[1]: run-netns-ovnmeta\x2d32cae488\x2d7671\x2d4c05\x2da475\x2d2c63f103261b.mount: Deactivated successfully.
Oct  2 05:18:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.952 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-32cae488-7671-4c05-a475-2c63f103261b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:18:06 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:06.952 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cbf257-caae-4368-976f-3df165250823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.101 2 DEBUG nova.network.neutron [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updated VIF entry in instance network info cache for port dad30664-f830-4b51-9cf5-8b92d95308bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.101 2 DEBUG nova.network.neutron [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [{"id": "dad30664-f830-4b51-9cf5-8b92d95308bb", "address": "fa:16:3e:28:75:e6", "network": {"id": "32cae488-7671-4c05-a475-2c63f103261b", "bridge": "br-int", "label": "tempest-network-smoke--1170378180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:75e6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdad30664-f8", "ovs_interfaceid": "dad30664-f830-4b51-9cf5-8b92d95308bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.123 2 DEBUG oslo_concurrency.lockutils [req-4d883165-300e-4a57-bcd0-d249d3fab18f req-26a3ce09-540b-4d5a-85c9-10f37f7e6377 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-bfb5f44c-0aeb-439f-9d64-934d5cb85c02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.169 2 DEBUG nova.compute.manager [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.170 2 DEBUG oslo_concurrency.lockutils [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.170 2 DEBUG oslo_concurrency.lockutils [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.171 2 DEBUG oslo_concurrency.lockutils [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.171 2 DEBUG nova.compute.manager [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] No waiting events found dispatching network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.172 2 WARNING nova.compute.manager [req-6444cbbc-0ca0-4d86-bcd0-55d87e7fbf20 req-34b68275-64e0-44eb-b735-ea373fa2cc45 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received unexpected event network-vif-plugged-dad30664-f830-4b51-9cf5-8b92d95308bb for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:18:07 np0005465604 nova_compute[260603]: 2025-10-02 09:18:07.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:18:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 3.9 KiB/s wr, 19 op/s
Oct  2 05:18:08 np0005465604 nova_compute[260603]: 2025-10-02 09:18:08.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:18:08 np0005465604 nova_compute[260603]: 2025-10-02 09:18:08.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:18:08 np0005465604 nova_compute[260603]: 2025-10-02 09:18:08.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:18:08 np0005465604 nova_compute[260603]: 2025-10-02 09:18:08.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 05:18:08 np0005465604 nova_compute[260603]: 2025-10-02 09:18:08.540 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:18:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:09 np0005465604 nova_compute[260603]: 2025-10-02 09:18:09.771 2 INFO nova.virt.libvirt.driver [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Deleting instance files /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02_del
Oct  2 05:18:09 np0005465604 nova_compute[260603]: 2025-10-02 09:18:09.773 2 INFO nova.virt.libvirt.driver [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Deletion of /var/lib/nova/instances/bfb5f44c-0aeb-439f-9d64-934d5cb85c02_del complete
Oct  2 05:18:09 np0005465604 nova_compute[260603]: 2025-10-02 09:18:09.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:09 np0005465604 nova_compute[260603]: 2025-10-02 09:18:09.831 2 INFO nova.compute.manager [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Took 6.05 seconds to destroy the instance on the hypervisor.
Oct  2 05:18:09 np0005465604 nova_compute[260603]: 2025-10-02 09:18:09.832 2 DEBUG oslo.service.loopingcall [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 05:18:09 np0005465604 nova_compute[260603]: 2025-10-02 09:18:09.832 2 DEBUG nova.compute.manager [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 05:18:09 np0005465604 nova_compute[260603]: 2025-10-02 09:18:09.833 2 DEBUG nova.network.neutron [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 05:18:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 82 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 85 B/s wr, 20 op/s
Oct  2 05:18:10 np0005465604 nova_compute[260603]: 2025-10-02 09:18:10.607 2 DEBUG nova.network.neutron [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 05:18:10 np0005465604 nova_compute[260603]: 2025-10-02 09:18:10.634 2 INFO nova.compute.manager [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Took 0.80 seconds to deallocate network for instance.
Oct  2 05:18:10 np0005465604 nova_compute[260603]: 2025-10-02 09:18:10.665 2 DEBUG nova.compute.manager [req-564917b4-a336-4586-83d2-d2b114c16afb req-d9c2100f-8232-41f6-9763-3c169f4defe7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Received event network-vif-deleted-dad30664-f830-4b51-9cf5-8b92d95308bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:18:10 np0005465604 nova_compute[260603]: 2025-10-02 09:18:10.684 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:18:10 np0005465604 nova_compute[260603]: 2025-10-02 09:18:10.685 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:18:10 np0005465604 nova_compute[260603]: 2025-10-02 09:18:10.746 2 DEBUG oslo_concurrency.processutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:18:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:18:11 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203521532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.212 2 DEBUG oslo_concurrency.processutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.223 2 DEBUG nova.compute.provider_tree [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.245 2 DEBUG nova.scheduler.client.report [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.267 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.297 2 INFO nova.scheduler.client.report [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance bfb5f44c-0aeb-439f-9d64-934d5cb85c02
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.363 2 DEBUG oslo_concurrency.lockutils [None req-b349eb7e-e476-46a2-8690-7d4ce2e6e874 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "bfb5f44c-0aeb-439f-9d64-934d5cb85c02" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.549 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.550 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 05:18:11 np0005465604 nova_compute[260603]: 2025-10-02 09:18:11.550 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:18:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:18:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/728948911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:18:12 np0005465604 nova_compute[260603]: 2025-10-02 09:18:12.112 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:18:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 82 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 85 B/s wr, 12 op/s
Oct  2 05:18:12 np0005465604 nova_compute[260603]: 2025-10-02 09:18:12.381 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 05:18:12 np0005465604 nova_compute[260603]: 2025-10-02 09:18:12.383 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3554MB free_disk=59.96678924560547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 05:18:12 np0005465604 nova_compute[260603]: 2025-10-02 09:18:12.383 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:18:12 np0005465604 nova_compute[260603]: 2025-10-02 09:18:12.383 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:18:12 np0005465604 nova_compute[260603]: 2025-10-02 09:18:12.500 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 05:18:12 np0005465604 nova_compute[260603]: 2025-10-02 09:18:12.501 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 05:18:12 np0005465604 nova_compute[260603]: 2025-10-02 09:18:12.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:12 np0005465604 nova_compute[260603]: 2025-10-02 09:18:12.523 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:18:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:18:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2324713024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:18:13 np0005465604 nova_compute[260603]: 2025-10-02 09:18:13.072 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:18:13 np0005465604 nova_compute[260603]: 2025-10-02 09:18:13.081 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 05:18:13 np0005465604 nova_compute[260603]: 2025-10-02 09:18:13.098 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 05:18:13 np0005465604 nova_compute[260603]: 2025-10-02 09:18:13.141 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 05:18:13 np0005465604 nova_compute[260603]: 2025-10-02 09:18:13.142 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:18:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:18:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:14 np0005465604 nova_compute[260603]: 2025-10-02 09:18:14.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:15 np0005465604 nova_compute[260603]: 2025-10-02 09:18:15.138 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:18:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:18:16 np0005465604 nova_compute[260603]: 2025-10-02 09:18:16.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:18:16 np0005465604 nova_compute[260603]: 2025-10-02 09:18:16.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:17 np0005465604 nova_compute[260603]: 2025-10-02 09:18:17.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:17 np0005465604 nova_compute[260603]: 2025-10-02 09:18:17.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:18:19 np0005465604 nova_compute[260603]: 2025-10-02 09:18:19.651 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396684.6484373, bfb5f44c-0aeb-439f-9d64-934d5cb85c02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:18:19 np0005465604 nova_compute[260603]: 2025-10-02 09:18:19.651 2 INFO nova.compute.manager [-] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] VM Stopped (Lifecycle Event)
Oct  2 05:18:19 np0005465604 nova_compute[260603]: 2025-10-02 09:18:19.678 2 DEBUG nova.compute.manager [None req-d496d68a-eaae-4e81-a50a-8a06da71ca85 - - - - - -] [instance: bfb5f44c-0aeb-439f-9d64-934d5cb85c02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:18:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:19 np0005465604 nova_compute[260603]: 2025-10-02 09:18:19.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 25 op/s
Oct  2 05:18:21 np0005465604 nova_compute[260603]: 2025-10-02 09:18:21.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:21.194 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 05:18:21 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:21.196 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 05:18:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:18:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 43K writes, 174K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 43K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2386 writes, 10K keys, 2386 commit groups, 1.0 writes per commit group, ingest: 12.81 MB, 0.02 MB/s#012Interval WAL: 2386 writes, 857 syncs, 2.78 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:18:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:18:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/205807364' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:18:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:18:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/205807364' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:18:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 15 op/s
Oct  2 05:18:22 np0005465604 nova_compute[260603]: 2025-10-02 09:18:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:23.198 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:18:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 15 op/s
Oct  2 05:18:24 np0005465604 nova_compute[260603]: 2025-10-02 09:18:24.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:18:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:24 np0005465604 nova_compute[260603]: 2025-10-02 09:18:24.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:18:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 46K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2280 writes, 9757 keys, 2280 commit groups, 1.0 writes per commit group, ingest: 10.53 MB, 0.02 MB/s#012Interval WAL: 2280 writes, 833 syncs, 2.74 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:18:27 np0005465604 nova_compute[260603]: 2025-10-02 09:18:27.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:18:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:18:28
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.rgw.root']
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:18:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:18:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:29 np0005465604 nova_compute[260603]: 2025-10-02 09:18:29.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:18:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:30 np0005465604 podman[430685]: 2025-10-02 09:18:30.991943939 +0000 UTC m=+0.059260043 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 05:18:31 np0005465604 podman[430684]: 2025-10-02 09:18:31.057802726 +0000 UTC m=+0.124773639 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct  2 05:18:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:18:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 36K writes, 140K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 36K writes, 13K syncs, 2.70 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1788 writes, 7086 keys, 1788 commit groups, 1.0 writes per commit group, ingest: 8.42 MB, 0.01 MB/s#012Interval WAL: 1788 writes, 697 syncs, 2.57 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:18:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 05:18:32 np0005465604 nova_compute[260603]: 2025-10-02 09:18:32.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:34 np0005465604 nova_compute[260603]: 2025-10-02 09:18:34.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:34.855 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:18:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:34.856 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:18:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:34.856 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:18:35 np0005465604 podman[430731]: 2025-10-02 09:18:35.006888394 +0000 UTC m=+0.064685582 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:18:35 np0005465604 podman[430730]: 2025-10-02 09:18:35.050805865 +0000 UTC m=+0.106421685 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:18:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:35.400 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:77:f2 10.100.0.2 2001:db8::f816:3eff:fe92:77f2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe92:77f2/64', 'neutron:device_id': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b940cd75-8ee4-4c6d-be34-c1fb5851ca50) old=Port_Binding(mac=['fa:16:3e:92:77:f2 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:18:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:35.401 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b940cd75-8ee4-4c6d-be34-c1fb5851ca50 in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 updated#033[00m
Oct  2 05:18:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:35.402 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:18:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:35.403 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[777ccf24-3168-4821-b203-5d710adfb890]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:18:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:37 np0005465604 nova_compute[260603]: 2025-10-02 09:18:37.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:18:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:18:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:39 np0005465604 nova_compute[260603]: 2025-10-02 09:18:39.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:41 np0005465604 nova_compute[260603]: 2025-10-02 09:18:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:18:41 np0005465604 nova_compute[260603]: 2025-10-02 09:18:41.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:18:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:42.454 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:77:f2 10.100.0.2 2001:db8:0:1:f816:3eff:fe92:77f2 2001:db8::f816:3eff:fe92:77f2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe92:77f2/64 2001:db8::f816:3eff:fe92:77f2/64', 'neutron:device_id': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b940cd75-8ee4-4c6d-be34-c1fb5851ca50) old=Port_Binding(mac=['fa:16:3e:92:77:f2 10.100.0.2 2001:db8::f816:3eff:fe92:77f2'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe92:77f2/64', 'neutron:device_id': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:18:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:42.456 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b940cd75-8ee4-4c6d-be34-c1fb5851ca50 in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 updated#033[00m
Oct  2 05:18:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:42.457 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:18:42 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:18:42.458 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c55aa2ea-c70f-466e-ae9e-0f5036c535a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:18:42 np0005465604 nova_compute[260603]: 2025-10-02 09:18:42.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:44 np0005465604 nova_compute[260603]: 2025-10-02 09:18:44.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:47 np0005465604 nova_compute[260603]: 2025-10-02 09:18:47.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:49 np0005465604 nova_compute[260603]: 2025-10-02 09:18:49.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:51 np0005465604 nova_compute[260603]: 2025-10-02 09:18:51.504 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:18:51 np0005465604 nova_compute[260603]: 2025-10-02 09:18:51.505 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:18:51 np0005465604 nova_compute[260603]: 2025-10-02 09:18:51.521 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:18:51 np0005465604 nova_compute[260603]: 2025-10-02 09:18:51.606 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:18:51 np0005465604 nova_compute[260603]: 2025-10-02 09:18:51.606 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:18:51 np0005465604 nova_compute[260603]: 2025-10-02 09:18:51.615 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:18:51 np0005465604 nova_compute[260603]: 2025-10-02 09:18:51.615 2 INFO nova.compute.claims [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:18:51 np0005465604 nova_compute[260603]: 2025-10-02 09:18:51.722 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:18:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:18:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750493922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.210 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.219 2 DEBUG nova.compute.provider_tree [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.252 2 DEBUG nova.scheduler.client.report [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.275 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.276 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:18:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.326 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.326 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.362 2 INFO nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.388 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.499 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.500 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.500 2 INFO nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Creating image(s)#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.523 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.551 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.577 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.581 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.643 2 DEBUG nova.policy [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.694 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.694 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.695 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.695 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.726 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:18:52 np0005465604 nova_compute[260603]: 2025-10-02 09:18:52.730 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d2fca23c-2574-4642-b931-f363d59bd5a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:18:53 np0005465604 nova_compute[260603]: 2025-10-02 09:18:53.407 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 d2fca23c-2574-4642-b931-f363d59bd5a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:18:53 np0005465604 nova_compute[260603]: 2025-10-02 09:18:53.480 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:18:53 np0005465604 nova_compute[260603]: 2025-10-02 09:18:53.659 2 DEBUG nova.objects.instance [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid d2fca23c-2574-4642-b931-f363d59bd5a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:18:53 np0005465604 nova_compute[260603]: 2025-10-02 09:18:53.701 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:18:53 np0005465604 nova_compute[260603]: 2025-10-02 09:18:53.702 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Ensure instance console log exists: /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:18:53 np0005465604 nova_compute[260603]: 2025-10-02 09:18:53.703 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:18:53 np0005465604 nova_compute[260603]: 2025-10-02 09:18:53.703 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:18:53 np0005465604 nova_compute[260603]: 2025-10-02 09:18:53.704 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:18:54 np0005465604 nova_compute[260603]: 2025-10-02 09:18:54.306 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Successfully created port: dac6b720-36af-4048-863c-cd259065f82e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:18:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Oct  2 05:18:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:18:54 np0005465604 nova_compute[260603]: 2025-10-02 09:18:54.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:55 np0005465604 nova_compute[260603]: 2025-10-02 09:18:55.287 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Successfully updated port: dac6b720-36af-4048-863c-cd259065f82e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:18:55 np0005465604 nova_compute[260603]: 2025-10-02 09:18:55.308 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:18:55 np0005465604 nova_compute[260603]: 2025-10-02 09:18:55.309 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:18:55 np0005465604 nova_compute[260603]: 2025-10-02 09:18:55.309 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:18:55 np0005465604 nova_compute[260603]: 2025-10-02 09:18:55.392 2 DEBUG nova.compute.manager [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-changed-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:18:55 np0005465604 nova_compute[260603]: 2025-10-02 09:18:55.393 2 DEBUG nova.compute.manager [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing instance network info cache due to event network-changed-dac6b720-36af-4048-863c-cd259065f82e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:18:55 np0005465604 nova_compute[260603]: 2025-10-02 09:18:55.393 2 DEBUG oslo_concurrency.lockutils [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:18:55 np0005465604 nova_compute[260603]: 2025-10-02 09:18:55.611 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:18:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Oct  2 05:18:56 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct  2 05:18:56 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:56.906291) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:18:56 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct  2 05:18:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396736906354, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1895, "num_deletes": 251, "total_data_size": 3139122, "memory_usage": 3193776, "flush_reason": "Manual Compaction"}
Oct  2 05:18:56 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct  2 05:18:56 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396736989118, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 3097786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61460, "largest_seqno": 63354, "table_properties": {"data_size": 3089013, "index_size": 5458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17467, "raw_average_key_size": 20, "raw_value_size": 3071783, "raw_average_value_size": 3526, "num_data_blocks": 242, "num_entries": 871, "num_filter_entries": 871, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396524, "oldest_key_time": 1759396524, "file_creation_time": 1759396736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:18:56 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 82913 microseconds, and 9138 cpu microseconds.
Oct  2 05:18:56 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:56.989195) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 3097786 bytes OK
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:56.989233) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.010009) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.010093) EVENT_LOG_v1 {"time_micros": 1759396737010076, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.010137) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 3131126, prev total WAL file size 3131126, number of live WAL files 2.
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.011718) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(3025KB)], [146(9670KB)]
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396737011787, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13000722, "oldest_snapshot_seqno": -1}
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8199 keys, 11295475 bytes, temperature: kUnknown
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396737173549, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11295475, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11240934, "index_size": 32924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 214350, "raw_average_key_size": 26, "raw_value_size": 11095063, "raw_average_value_size": 1353, "num_data_blocks": 1283, "num_entries": 8199, "num_filter_entries": 8199, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396737, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.174100) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11295475 bytes
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.218617) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.3 rd, 69.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 8713, records dropped: 514 output_compression: NoCompression
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.218682) EVENT_LOG_v1 {"time_micros": 1759396737218656, "job": 90, "event": "compaction_finished", "compaction_time_micros": 161908, "compaction_time_cpu_micros": 34482, "output_level": 6, "num_output_files": 1, "total_output_size": 11295475, "num_input_records": 8713, "num_output_records": 8199, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396737220382, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396737224809, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.011533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.224929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.224940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.224944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.224948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:18:57 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:18:57.224952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:18:57 np0005465604 nova_compute[260603]: 2025-10-02 09:18:57.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:18:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.148 2 DEBUG nova.network.neutron [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.232 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.233 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Instance network_info: |[{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.233 2 DEBUG oslo_concurrency.lockutils [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.234 2 DEBUG nova.network.neutron [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing network info cache for port dac6b720-36af-4048-863c-cd259065f82e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.238 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Start _get_guest_xml network_info=[{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.245 2 WARNING nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.252 2 DEBUG nova.virt.libvirt.host [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.253 2 DEBUG nova.virt.libvirt.host [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.257 2 DEBUG nova.virt.libvirt.host [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.258 2 DEBUG nova.virt.libvirt.host [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.259 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.259 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.260 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.260 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.260 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.260 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.261 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.261 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.261 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.261 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.262 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.262 2 DEBUG nova.virt.hardware [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.266 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:18:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:18:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:18:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404768080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.724 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.755 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:18:58 np0005465604 nova_compute[260603]: 2025-10-02 09:18:58.761 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:18:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:18:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026037847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.275 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.279 2 DEBUG nova.virt.libvirt.vif [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:18:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-39754368',display_name='tempest-TestGettingAddress-server-39754368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-39754368',id=149,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-eaeyiug0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:18:52Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=d2fca23c-2574-4642-b931-f363d59bd5a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.280 2 DEBUG nova.network.os_vif_util [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.282 2 DEBUG nova.network.os_vif_util [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.284 2 DEBUG nova.objects.instance [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid d2fca23c-2574-4642-b931-f363d59bd5a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.426 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  <uuid>d2fca23c-2574-4642-b931-f363d59bd5a7</uuid>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  <name>instance-00000095</name>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-39754368</nova:name>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:18:58</nova:creationTime>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <nova:port uuid="dac6b720-36af-4048-863c-cd259065f82e">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe6a:aa0" ipVersion="6"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe6a:aa0" ipVersion="6"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <entry name="serial">d2fca23c-2574-4642-b931-f363d59bd5a7</entry>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <entry name="uuid">d2fca23c-2574-4642-b931-f363d59bd5a7</entry>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d2fca23c-2574-4642-b931-f363d59bd5a7_disk">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:6a:0a:a0"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <target dev="tapdac6b720-36"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/console.log" append="off"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:18:59 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:18:59 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:18:59 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:18:59 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.430 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Preparing to wait for external event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.431 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.431 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.431 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.432 2 DEBUG nova.virt.libvirt.vif [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:18:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-39754368',display_name='tempest-TestGettingAddress-server-39754368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-39754368',id=149,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-eaeyiug0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:18:52Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=d2fca23c-2574-4642-b931-f363d59bd5a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.432 2 DEBUG nova.network.os_vif_util [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.434 2 DEBUG nova.network.os_vif_util [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.434 2 DEBUG os_vif [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdac6b720-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.440 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdac6b720-36, col_values=(('external_ids', {'iface-id': 'dac6b720-36af-4048-863c-cd259065f82e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:0a:a0', 'vm-uuid': 'd2fca23c-2574-4642-b931-f363d59bd5a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:59 np0005465604 NetworkManager[45129]: <info>  [1759396739.4439] manager: (tapdac6b720-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/676)
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.455 2 INFO os_vif [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36')#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.656 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.656 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.657 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:6a:0a:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.658 2 INFO nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Using config drive#033[00m
Oct  2 05:18:59 np0005465604 nova_compute[260603]: 2025-10-02 09:18:59.683 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:18:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:00 np0005465604 nova_compute[260603]: 2025-10-02 09:19:00.289 2 INFO nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Creating config drive at /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config#033[00m
Oct  2 05:19:00 np0005465604 nova_compute[260603]: 2025-10-02 09:19:00.297 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpykv7wpf1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:19:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:19:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ebff2bcf-b2f4-440e-a1f3-3864438700d8 does not exist
Oct  2 05:19:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 26fe571c-dae3-47ce-874d-8f457d10645c does not exist
Oct  2 05:19:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8bf94f34-f25f-4de1-b237-78a9072cab16 does not exist
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:19:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:19:00 np0005465604 nova_compute[260603]: 2025-10-02 09:19:00.472 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpykv7wpf1" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:19:00 np0005465604 nova_compute[260603]: 2025-10-02 09:19:00.507 2 DEBUG nova.storage.rbd_utils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:19:00 np0005465604 nova_compute[260603]: 2025-10-02 09:19:00.512 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:19:00 np0005465604 nova_compute[260603]: 2025-10-02 09:19:00.741 2 DEBUG oslo_concurrency.processutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config d2fca23c-2574-4642-b931-f363d59bd5a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:19:00 np0005465604 nova_compute[260603]: 2025-10-02 09:19:00.742 2 INFO nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Deleting local config drive /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7/disk.config because it was imported into RBD.#033[00m
Oct  2 05:19:00 np0005465604 kernel: tapdac6b720-36: entered promiscuous mode
Oct  2 05:19:00 np0005465604 NetworkManager[45129]: <info>  [1759396740.8349] manager: (tapdac6b720-36): new Tun device (/org/freedesktop/NetworkManager/Devices/677)
Oct  2 05:19:00 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:00Z|01666|binding|INFO|Claiming lport dac6b720-36af-4048-863c-cd259065f82e for this chassis.
Oct  2 05:19:00 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:00Z|01667|binding|INFO|dac6b720-36af-4048-863c-cd259065f82e: Claiming fa:16:3e:6a:0a:a0 10.100.0.12 2001:db8:0:1:f816:3eff:fe6a:aa0 2001:db8::f816:3eff:fe6a:aa0
Oct  2 05:19:00 np0005465604 nova_compute[260603]: 2025-10-02 09:19:00.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:00 np0005465604 nova_compute[260603]: 2025-10-02 09:19:00.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:00 np0005465604 systemd-udevd[431441]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:19:00 np0005465604 NetworkManager[45129]: <info>  [1759396740.8906] device (tapdac6b720-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:19:00 np0005465604 NetworkManager[45129]: <info>  [1759396740.8918] device (tapdac6b720-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:19:00 np0005465604 systemd-machined[214636]: New machine qemu-183-instance-00000095.
Oct  2 05:19:00 np0005465604 systemd[1]: Started Virtual Machine qemu-183-instance-00000095.
Oct  2 05:19:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.919 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:0a:a0 10.100.0.12 2001:db8:0:1:f816:3eff:fe6a:aa0 2001:db8::f816:3eff:fe6a:aa0'], port_security=['fa:16:3e:6a:0a:a0 10.100.0.12 2001:db8:0:1:f816:3eff:fe6a:aa0 2001:db8::f816:3eff:fe6a:aa0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe6a:aa0/64 2001:db8::f816:3eff:fe6a:aa0/64', 'neutron:device_id': 'd2fca23c-2574-4642-b931-f363d59bd5a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f167374-6ddb-4a61-a8ba-3dc4425d2006', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dac6b720-36af-4048-863c-cd259065f82e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:19:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.920 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dac6b720-36af-4048-863c-cd259065f82e in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 bound to our chassis#033[00m
Oct  2 05:19:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.921 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303#033[00m
Oct  2 05:19:00 np0005465604 nova_compute[260603]: 2025-10-02 09:19:00.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:00 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:00Z|01668|binding|INFO|Setting lport dac6b720-36af-4048-863c-cd259065f82e ovn-installed in OVS
Oct  2 05:19:00 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:00Z|01669|binding|INFO|Setting lport dac6b720-36af-4048-863c-cd259065f82e up in Southbound
Oct  2 05:19:00 np0005465604 nova_compute[260603]: 2025-10-02 09:19:00.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.945 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1273cd-b192-4413-9faf-e60e830a9406]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.946 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01f3e9aa-21 in ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:19:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.949 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01f3e9aa-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:19:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.949 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d9176275-c6c1-4d41-b71b-c96e38fa1e80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.950 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1902cbc5-e8e2-44d8-81f4-aa7ee50470a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:00 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.967 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[e45dee05-f64b-4457-85e1-b19143b97642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:00.998 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6be2a130-9e4a-45ad-8525-03e3dc663365]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.049 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4961d512-9e07-4ca0-a531-55603b4595a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 NetworkManager[45129]: <info>  [1759396741.0583] manager: (tap01f3e9aa-20): new Veth device (/org/freedesktop/NetworkManager/Devices/678)
Oct  2 05:19:01 np0005465604 systemd-udevd[431444]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.057 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1e20aea0-dbb7-4ea3-99fc-23c875f51955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.113 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[93c21c0b-e3d0-488b-b8ad-0843e60710b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.119 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[83fc131a-5c9a-4f43-9756-58ec26e5cf64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 NetworkManager[45129]: <info>  [1759396741.1510] device (tap01f3e9aa-20): carrier: link connected
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.156 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b5e1e7-0e9a-448f-b6ed-6362b1ce4af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.186 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[b62c4383-3f41-4527-bda9-b648b9a6c8e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01f3e9aa-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:77:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761975, 'reachable_time': 39646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431535, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 podman[431496]: 2025-10-02 09:19:01.194839981 +0000 UTC m=+0.073203257 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.210 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7e943a8d-34ee-4d59-a474-6d2f4e698b1d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:77f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761975, 'tstamp': 761975}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431560, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.245 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[09152b3c-d0c5-4f13-aa63-53da0b3871eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01f3e9aa-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:77:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761975, 'reachable_time': 39646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 431564, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 podman[431494]: 2025-10-02 09:19:01.250791698 +0000 UTC m=+0.131277591 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.291 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[9c3dbce8-0b41-4367-b72a-497feddcf578]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 podman[431569]: 2025-10-02 09:19:01.303100852 +0000 UTC m=+0.060951265 container create 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:19:01 np0005465604 systemd[1]: Started libpod-conmon-90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53.scope.
Oct  2 05:19:01 np0005465604 podman[431569]: 2025-10-02 09:19:01.273301121 +0000 UTC m=+0.031151554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:19:01 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.384 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2aff0472-9137-4d7f-8176-d00ca3f63bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.389 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01f3e9aa-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.389 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.390 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01f3e9aa-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:01 np0005465604 kernel: tap01f3e9aa-20: entered promiscuous mode
Oct  2 05:19:01 np0005465604 NetworkManager[45129]: <info>  [1759396741.3935] manager: (tap01f3e9aa-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/679)
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.395 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01f3e9aa-20, col_values=(('external_ids', {'iface-id': 'b940cd75-8ee4-4c6d-be34-c1fb5851ca50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:19:01 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:01Z|01670|binding|INFO|Releasing lport b940cd75-8ee4-4c6d-be34-c1fb5851ca50 from this chassis (sb_readonly=0)
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.397 2 DEBUG nova.compute.manager [req-e07fc541-af03-4c93-befb-3269ddd959df req-945619d1-2e09-4305-a63e-de1d401c1b4c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.398 2 DEBUG oslo_concurrency.lockutils [req-e07fc541-af03-4c93-befb-3269ddd959df req-945619d1-2e09-4305-a63e-de1d401c1b4c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.398 2 DEBUG oslo_concurrency.lockutils [req-e07fc541-af03-4c93-befb-3269ddd959df req-945619d1-2e09-4305-a63e-de1d401c1b4c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.398 2 DEBUG oslo_concurrency.lockutils [req-e07fc541-af03-4c93-befb-3269ddd959df req-945619d1-2e09-4305-a63e-de1d401c1b4c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.398 2 DEBUG nova.compute.manager [req-e07fc541-af03-4c93-befb-3269ddd959df req-945619d1-2e09-4305-a63e-de1d401c1b4c 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Processing event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.402 2 DEBUG nova.network.neutron [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updated VIF entry in instance network info cache for port dac6b720-36af-4048-863c-cd259065f82e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.403 2 DEBUG nova.network.neutron [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:19:01 np0005465604 podman[431569]: 2025-10-02 09:19:01.413434609 +0000 UTC m=+0.171285062 container init 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.417 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01f3e9aa-20f8-48ae-9b80-cbd0ba476303.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01f3e9aa-20f8-48ae-9b80-cbd0ba476303.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.419 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f53721f0-2cc8-4777-900f-3e5561e2d9a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.420 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-01f3e9aa-20f8-48ae-9b80-cbd0ba476303
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/01f3e9aa-20f8-48ae-9b80-cbd0ba476303.pid.haproxy
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID 01f3e9aa-20f8-48ae-9b80-cbd0ba476303
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:19:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:01.422 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'env', 'PROCESS_TAG=haproxy-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01f3e9aa-20f8-48ae-9b80-cbd0ba476303.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:19:01 np0005465604 podman[431569]: 2025-10-02 09:19:01.428631323 +0000 UTC m=+0.186481736 container start 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 05:19:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:19:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:19:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:19:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:19:01 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:19:01 np0005465604 podman[431569]: 2025-10-02 09:19:01.436356395 +0000 UTC m=+0.194206818 container attach 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:19:01 np0005465604 systemd[1]: libpod-90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53.scope: Deactivated successfully.
Oct  2 05:19:01 np0005465604 conmon[431616]: conmon 90ef7d5608a8bb6d9c59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53.scope/container/memory.events
Oct  2 05:19:01 np0005465604 elated_kare[431616]: 167 167
Oct  2 05:19:01 np0005465604 podman[431569]: 2025-10-02 09:19:01.442256479 +0000 UTC m=+0.200106902 container died 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 05:19:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7dd334571404b15a8dc9a870b85f22b9d4ad876c9a696c29a12edc0d9d313106-merged.mount: Deactivated successfully.
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.475 2 DEBUG oslo_concurrency.lockutils [req-1b9a668e-6f4a-4935-b3fa-fc7987744d2c req-bebb1cab-f046-4180-bca5-6ac82f18e522 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:19:01 np0005465604 podman[431569]: 2025-10-02 09:19:01.493791169 +0000 UTC m=+0.251641592 container remove 90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_kare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:19:01 np0005465604 systemd[1]: libpod-conmon-90ef7d5608a8bb6d9c5915008d34fb1313742f513a2918c03a2c2d2977f4fe53.scope: Deactivated successfully.
Oct  2 05:19:01 np0005465604 podman[431646]: 2025-10-02 09:19:01.709734984 +0000 UTC m=+0.071533616 container create dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:19:01 np0005465604 systemd[1]: Started libpod-conmon-dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701.scope.
Oct  2 05:19:01 np0005465604 podman[431646]: 2025-10-02 09:19:01.67281791 +0000 UTC m=+0.034616552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:19:01 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:19:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:01 np0005465604 podman[431646]: 2025-10-02 09:19:01.815369303 +0000 UTC m=+0.177167975 container init dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:19:01 np0005465604 podman[431646]: 2025-10-02 09:19:01.828514043 +0000 UTC m=+0.190312665 container start dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:19:01 np0005465604 podman[431646]: 2025-10-02 09:19:01.832184908 +0000 UTC m=+0.193983560 container attach dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.845 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.847 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396741.8446815, d2fca23c-2574-4642-b931-f363d59bd5a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.848 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] VM Started (Lifecycle Event)#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.852 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.856 2 INFO nova.virt.libvirt.driver [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Instance spawned successfully.#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.856 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.943 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.947 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.965 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.966 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.967 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.967 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.968 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.968 2 DEBUG nova.virt.libvirt.driver [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.976 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.976 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396741.8459713, d2fca23c-2574-4642-b931-f363d59bd5a7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:19:01 np0005465604 nova_compute[260603]: 2025-10-02 09:19:01.976 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:19:01 np0005465604 podman[431687]: 2025-10-02 09:19:01.886165414 +0000 UTC m=+0.036432578 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:19:02 np0005465604 nova_compute[260603]: 2025-10-02 09:19:02.035 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:19:02 np0005465604 nova_compute[260603]: 2025-10-02 09:19:02.040 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396741.8501787, d2fca23c-2574-4642-b931-f363d59bd5a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:19:02 np0005465604 nova_compute[260603]: 2025-10-02 09:19:02.041 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:19:02 np0005465604 nova_compute[260603]: 2025-10-02 09:19:02.088 2 INFO nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Took 9.59 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:19:02 np0005465604 nova_compute[260603]: 2025-10-02 09:19:02.089 2 DEBUG nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:19:02 np0005465604 nova_compute[260603]: 2025-10-02 09:19:02.091 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:19:02 np0005465604 nova_compute[260603]: 2025-10-02 09:19:02.102 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:19:02 np0005465604 nova_compute[260603]: 2025-10-02 09:19:02.151 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:19:02 np0005465604 nova_compute[260603]: 2025-10-02 09:19:02.181 2 INFO nova.compute.manager [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Took 10.62 seconds to build instance.#033[00m
Oct  2 05:19:02 np0005465604 nova_compute[260603]: 2025-10-02 09:19:02.241 2 DEBUG oslo_concurrency.lockutils [None req-51983ff6-5095-448f-b439-259a008a35dc b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:02 np0005465604 podman[431687]: 2025-10-02 09:19:02.245161717 +0000 UTC m=+0.395428831 container create 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 05:19:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  2 05:19:02 np0005465604 systemd[1]: Started libpod-conmon-77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8.scope.
Oct  2 05:19:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:19:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fda16057209f96f06d9ded64e1e5d1b0e039e5d2ee796001d3308370659566c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:02 np0005465604 podman[431687]: 2025-10-02 09:19:02.615463843 +0000 UTC m=+0.765730997 container init 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:19:02 np0005465604 podman[431687]: 2025-10-02 09:19:02.626304482 +0000 UTC m=+0.776571636 container start 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 05:19:02 np0005465604 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [NOTICE]   (431715) : New worker (431717) forked
Oct  2 05:19:02 np0005465604 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [NOTICE]   (431715) : Loading success.
Oct  2 05:19:02 np0005465604 nova_compute[260603]: 2025-10-02 09:19:02.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:02 np0005465604 recursing_neumann[431677]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:19:02 np0005465604 recursing_neumann[431677]: --> relative data size: 1.0
Oct  2 05:19:02 np0005465604 recursing_neumann[431677]: --> All data devices are unavailable
Oct  2 05:19:02 np0005465604 systemd[1]: libpod-dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701.scope: Deactivated successfully.
Oct  2 05:19:02 np0005465604 systemd[1]: libpod-dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701.scope: Consumed 1.046s CPU time.
Oct  2 05:19:02 np0005465604 podman[431646]: 2025-10-02 09:19:02.972479194 +0000 UTC m=+1.334277866 container died dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:19:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-adf18e05b0217301f45fdf26fcc6a125b3127310a9d2df5dbd91c154da5a2d8d-merged.mount: Deactivated successfully.
Oct  2 05:19:03 np0005465604 podman[431646]: 2025-10-02 09:19:03.29312679 +0000 UTC m=+1.654925452 container remove dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_neumann, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 05:19:03 np0005465604 systemd[1]: libpod-conmon-dd0d9ac13b779add145c57b660f1e768ad56d5718e36ff3cbff8dbdfc8675701.scope: Deactivated successfully.
Oct  2 05:19:03 np0005465604 nova_compute[260603]: 2025-10-02 09:19:03.821 2 DEBUG nova.compute.manager [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:19:03 np0005465604 nova_compute[260603]: 2025-10-02 09:19:03.823 2 DEBUG oslo_concurrency.lockutils [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:19:03 np0005465604 nova_compute[260603]: 2025-10-02 09:19:03.823 2 DEBUG oslo_concurrency.lockutils [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:19:03 np0005465604 nova_compute[260603]: 2025-10-02 09:19:03.823 2 DEBUG oslo_concurrency.lockutils [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:03 np0005465604 nova_compute[260603]: 2025-10-02 09:19:03.823 2 DEBUG nova.compute.manager [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] No waiting events found dispatching network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:19:03 np0005465604 nova_compute[260603]: 2025-10-02 09:19:03.823 2 WARNING nova.compute.manager [req-d794193a-fce5-44ad-8a9d-8d0445ff9152 req-598aa09f-90b4-44dc-b60d-57de6cd5cd67 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received unexpected event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e for instance with vm_state active and task_state None.#033[00m
Oct  2 05:19:04 np0005465604 podman[431895]: 2025-10-02 09:19:04.114894377 +0000 UTC m=+0.027741807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:19:04 np0005465604 podman[431895]: 2025-10-02 09:19:04.237957181 +0000 UTC m=+0.150804521 container create 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:19:04 np0005465604 systemd[1]: Started libpod-conmon-4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f.scope.
Oct  2 05:19:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct  2 05:19:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:19:04 np0005465604 podman[431895]: 2025-10-02 09:19:04.382074443 +0000 UTC m=+0.294921873 container init 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:19:04 np0005465604 podman[431895]: 2025-10-02 09:19:04.397838195 +0000 UTC m=+0.310685535 container start 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct  2 05:19:04 np0005465604 strange_babbage[431911]: 167 167
Oct  2 05:19:04 np0005465604 systemd[1]: libpod-4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f.scope: Deactivated successfully.
Oct  2 05:19:04 np0005465604 podman[431895]: 2025-10-02 09:19:04.41782862 +0000 UTC m=+0.330676080 container attach 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:19:04 np0005465604 podman[431895]: 2025-10-02 09:19:04.418737967 +0000 UTC m=+0.331585367 container died 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:19:04 np0005465604 nova_compute[260603]: 2025-10-02 09:19:04.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay-888346d2588ce1997608a947893362006185c875e684595fd38746c0dc914f0f-merged.mount: Deactivated successfully.
Oct  2 05:19:04 np0005465604 podman[431895]: 2025-10-02 09:19:04.633741553 +0000 UTC m=+0.546588893 container remove 4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_babbage, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:19:04 np0005465604 systemd[1]: libpod-conmon-4365356be42182af161bd9d9bb644bc4398975762f2854e5a486b4eedf56972f.scope: Deactivated successfully.
Oct  2 05:19:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:04 np0005465604 podman[431936]: 2025-10-02 09:19:04.897421929 +0000 UTC m=+0.097216038 container create 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:19:04 np0005465604 podman[431936]: 2025-10-02 09:19:04.825898195 +0000 UTC m=+0.025692324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:19:05 np0005465604 systemd[1]: Started libpod-conmon-4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a.scope.
Oct  2 05:19:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:19:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8db0ceeeb9420bcce1214ab79daaab5d827f7e0b10b90376ce548eb9e51444/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8db0ceeeb9420bcce1214ab79daaab5d827f7e0b10b90376ce548eb9e51444/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8db0ceeeb9420bcce1214ab79daaab5d827f7e0b10b90376ce548eb9e51444/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8db0ceeeb9420bcce1214ab79daaab5d827f7e0b10b90376ce548eb9e51444/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:05 np0005465604 podman[431936]: 2025-10-02 09:19:05.138868161 +0000 UTC m=+0.338662320 container init 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 05:19:05 np0005465604 podman[431936]: 2025-10-02 09:19:05.152193397 +0000 UTC m=+0.351987546 container start 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:19:05 np0005465604 podman[431936]: 2025-10-02 09:19:05.213337166 +0000 UTC m=+0.413131305 container attach 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:19:05 np0005465604 podman[431953]: 2025-10-02 09:19:05.220150329 +0000 UTC m=+0.155683783 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct  2 05:19:05 np0005465604 podman[431963]: 2025-10-02 09:19:05.288140503 +0000 UTC m=+0.164833789 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]: {
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:    "0": [
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:        {
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "devices": [
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "/dev/loop3"
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            ],
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_name": "ceph_lv0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_size": "21470642176",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "name": "ceph_lv0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "tags": {
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.cluster_name": "ceph",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.crush_device_class": "",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.encrypted": "0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.osd_id": "0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.type": "block",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.vdo": "0"
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            },
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "type": "block",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "vg_name": "ceph_vg0"
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:        }
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:    ],
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:    "1": [
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:        {
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "devices": [
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "/dev/loop4"
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            ],
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_name": "ceph_lv1",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_size": "21470642176",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "name": "ceph_lv1",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "tags": {
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.cluster_name": "ceph",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.crush_device_class": "",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.encrypted": "0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.osd_id": "1",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.type": "block",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.vdo": "0"
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            },
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "type": "block",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "vg_name": "ceph_vg1"
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:        }
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:    ],
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:    "2": [
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:        {
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "devices": [
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "/dev/loop5"
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            ],
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_name": "ceph_lv2",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_size": "21470642176",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "name": "ceph_lv2",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "tags": {
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.cluster_name": "ceph",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.crush_device_class": "",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.encrypted": "0",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.osd_id": "2",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.type": "block",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:                "ceph.vdo": "0"
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            },
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "type": "block",
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:            "vg_name": "ceph_vg2"
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:        }
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]:    ]
Oct  2 05:19:05 np0005465604 crazy_jepsen[431952]: }
Oct  2 05:19:06 np0005465604 systemd[1]: libpod-4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a.scope: Deactivated successfully.
Oct  2 05:19:06 np0005465604 podman[431936]: 2025-10-02 09:19:06.017489934 +0000 UTC m=+1.217284083 container died 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:19:06 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5a8db0ceeeb9420bcce1214ab79daaab5d827f7e0b10b90376ce548eb9e51444-merged.mount: Deactivated successfully.
Oct  2 05:19:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct  2 05:19:06 np0005465604 podman[431936]: 2025-10-02 09:19:06.397072571 +0000 UTC m=+1.596866680 container remove 4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 05:19:06 np0005465604 systemd[1]: libpod-conmon-4936dc4c651abebdf986247000220a22ad867a87eb4317cb7d5092fec888649a.scope: Deactivated successfully.
Oct  2 05:19:06 np0005465604 nova_compute[260603]: 2025-10-02 09:19:06.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:06 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:06Z|01671|binding|INFO|Releasing lport b940cd75-8ee4-4c6d-be34-c1fb5851ca50 from this chassis (sb_readonly=0)
Oct  2 05:19:06 np0005465604 NetworkManager[45129]: <info>  [1759396746.6004] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/680)
Oct  2 05:19:06 np0005465604 nova_compute[260603]: 2025-10-02 09:19:06.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:06 np0005465604 NetworkManager[45129]: <info>  [1759396746.6029] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/681)
Oct  2 05:19:06 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:06Z|01672|binding|INFO|Releasing lport b940cd75-8ee4-4c6d-be34-c1fb5851ca50 from this chassis (sb_readonly=0)
Oct  2 05:19:06 np0005465604 nova_compute[260603]: 2025-10-02 09:19:06.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:06 np0005465604 nova_compute[260603]: 2025-10-02 09:19:06.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:06 np0005465604 nova_compute[260603]: 2025-10-02 09:19:06.977 2 DEBUG nova.compute.manager [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-changed-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:19:06 np0005465604 nova_compute[260603]: 2025-10-02 09:19:06.980 2 DEBUG nova.compute.manager [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing instance network info cache due to event network-changed-dac6b720-36af-4048-863c-cd259065f82e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:19:06 np0005465604 nova_compute[260603]: 2025-10-02 09:19:06.980 2 DEBUG oslo_concurrency.lockutils [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:19:06 np0005465604 nova_compute[260603]: 2025-10-02 09:19:06.980 2 DEBUG oslo_concurrency.lockutils [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:19:06 np0005465604 nova_compute[260603]: 2025-10-02 09:19:06.981 2 DEBUG nova.network.neutron [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing network info cache for port dac6b720-36af-4048-863c-cd259065f82e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:19:07 np0005465604 podman[432155]: 2025-10-02 09:19:07.255493933 +0000 UTC m=+0.071894977 container create a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:19:07 np0005465604 podman[432155]: 2025-10-02 09:19:07.218311061 +0000 UTC m=+0.034712105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:19:07 np0005465604 systemd[1]: Started libpod-conmon-a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb.scope.
Oct  2 05:19:07 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:19:07 np0005465604 podman[432155]: 2025-10-02 09:19:07.379447495 +0000 UTC m=+0.195848539 container init a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 05:19:07 np0005465604 podman[432155]: 2025-10-02 09:19:07.390939543 +0000 UTC m=+0.207340567 container start a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 05:19:07 np0005465604 podman[432155]: 2025-10-02 09:19:07.395025231 +0000 UTC m=+0.211426265 container attach a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:19:07 np0005465604 frosty_carson[432172]: 167 167
Oct  2 05:19:07 np0005465604 systemd[1]: libpod-a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb.scope: Deactivated successfully.
Oct  2 05:19:07 np0005465604 podman[432155]: 2025-10-02 09:19:07.401505743 +0000 UTC m=+0.217906767 container died a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 05:19:07 np0005465604 systemd[1]: var-lib-containers-storage-overlay-416ecfbbc4c4207f885916f68b63450e2f2f26285f41ecafb01d161a285dcfef-merged.mount: Deactivated successfully.
Oct  2 05:19:07 np0005465604 podman[432155]: 2025-10-02 09:19:07.457876474 +0000 UTC m=+0.274277498 container remove a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:19:07 np0005465604 systemd[1]: libpod-conmon-a62ebe6c0f4ab7a3b2cbf349cf6f6a4f583a4a559900ab7b9767eb98e0a0cfbb.scope: Deactivated successfully.
Oct  2 05:19:07 np0005465604 podman[432195]: 2025-10-02 09:19:07.713678944 +0000 UTC m=+0.084129869 container create 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 05:19:07 np0005465604 systemd[1]: Started libpod-conmon-8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce.scope.
Oct  2 05:19:07 np0005465604 nova_compute[260603]: 2025-10-02 09:19:07.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:07 np0005465604 podman[432195]: 2025-10-02 09:19:07.683585024 +0000 UTC m=+0.054036049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:19:07 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:19:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e46d9ed111e3f3a21fd4b42df58ca21fb02d57f1bd3e75f808f1296a9342d9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e46d9ed111e3f3a21fd4b42df58ca21fb02d57f1bd3e75f808f1296a9342d9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e46d9ed111e3f3a21fd4b42df58ca21fb02d57f1bd3e75f808f1296a9342d9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e46d9ed111e3f3a21fd4b42df58ca21fb02d57f1bd3e75f808f1296a9342d9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:19:07 np0005465604 podman[432195]: 2025-10-02 09:19:07.829508642 +0000 UTC m=+0.199959577 container init 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:19:07 np0005465604 podman[432195]: 2025-10-02 09:19:07.839298028 +0000 UTC m=+0.209748953 container start 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:19:07 np0005465604 podman[432195]: 2025-10-02 09:19:07.865185266 +0000 UTC m=+0.235636231 container attach 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 05:19:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 05:19:08 np0005465604 nova_compute[260603]: 2025-10-02 09:19:08.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:08 np0005465604 nova_compute[260603]: 2025-10-02 09:19:08.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]: {
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "osd_id": 2,
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "type": "bluestore"
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:    },
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "osd_id": 1,
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "type": "bluestore"
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:    },
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "osd_id": 0,
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:        "type": "bluestore"
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]:    }
Oct  2 05:19:08 np0005465604 quizzical_chandrasekhar[432212]: }
Oct  2 05:19:08 np0005465604 systemd[1]: libpod-8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce.scope: Deactivated successfully.
Oct  2 05:19:08 np0005465604 systemd[1]: libpod-8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce.scope: Consumed 1.124s CPU time.
Oct  2 05:19:08 np0005465604 podman[432245]: 2025-10-02 09:19:08.996337547 +0000 UTC m=+0.025792746 container died 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 05:19:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6e46d9ed111e3f3a21fd4b42df58ca21fb02d57f1bd3e75f808f1296a9342d9b-merged.mount: Deactivated successfully.
Oct  2 05:19:09 np0005465604 podman[432245]: 2025-10-02 09:19:09.064158626 +0000 UTC m=+0.093613775 container remove 8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  2 05:19:09 np0005465604 systemd[1]: libpod-conmon-8122b290f665872a78d771aa92b100b61d82422dda4cf48a13bc2545d25f51ce.scope: Deactivated successfully.
Oct  2 05:19:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:19:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:19:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:19:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:19:09 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 67c3fca4-4062-42e2-a8e3-296d5163d17e does not exist
Oct  2 05:19:09 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d3f16b45-edff-453c-b103-082d268f29b1 does not exist
Oct  2 05:19:09 np0005465604 nova_compute[260603]: 2025-10-02 09:19:09.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:09 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:19:09 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:19:09 np0005465604 nova_compute[260603]: 2025-10-02 09:19:09.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:09 np0005465604 nova_compute[260603]: 2025-10-02 09:19:09.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:19:09 np0005465604 nova_compute[260603]: 2025-10-02 09:19:09.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:19:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:10 np0005465604 nova_compute[260603]: 2025-10-02 09:19:10.138 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:19:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:19:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:19:12 np0005465604 nova_compute[260603]: 2025-10-02 09:19:12.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:14 np0005465604 nova_compute[260603]: 2025-10-02 09:19:14.157 2 DEBUG nova.network.neutron [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updated VIF entry in instance network info cache for port dac6b720-36af-4048-863c-cd259065f82e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:19:14 np0005465604 nova_compute[260603]: 2025-10-02 09:19:14.158 2 DEBUG nova.network.neutron [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": 
null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:19:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 68 op/s
Oct  2 05:19:14 np0005465604 nova_compute[260603]: 2025-10-02 09:19:14.366 2 DEBUG oslo_concurrency.lockutils [req-f6106ee1-43be-4081-ba5c-bc66eb8d0e77 req-ed99c9ce-f79f-473c-8dc1-6a0372c50364 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:19:14 np0005465604 nova_compute[260603]: 2025-10-02 09:19:14.367 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:19:14 np0005465604 nova_compute[260603]: 2025-10-02 09:19:14.367 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 05:19:14 np0005465604 nova_compute[260603]: 2025-10-02 09:19:14.367 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d2fca23c-2574-4642-b931-f363d59bd5a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:19:14 np0005465604 nova_compute[260603]: 2025-10-02 09:19:14.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 56 op/s
Oct  2 05:19:16 np0005465604 nova_compute[260603]: 2025-10-02 09:19:16.698 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:19:16 np0005465604 nova_compute[260603]: 2025-10-02 09:19:16.754 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:19:16 np0005465604 nova_compute[260603]: 2025-10-02 09:19:16.754 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 05:19:16 np0005465604 nova_compute[260603]: 2025-10-02 09:19:16.754 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:16 np0005465604 nova_compute[260603]: 2025-10-02 09:19:16.800 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:19:16 np0005465604 nova_compute[260603]: 2025-10-02 09:19:16.800 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:19:16 np0005465604 nova_compute[260603]: 2025-10-02 09:19:16.800 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:16 np0005465604 nova_compute[260603]: 2025-10-02 09:19:16.801 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:19:16 np0005465604 nova_compute[260603]: 2025-10-02 09:19:16.801 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:19:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:17Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:0a:a0 10.100.0.12
Oct  2 05:19:17 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:17Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:0a:a0 10.100.0.12
Oct  2 05:19:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:19:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637019215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.223 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.397 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.397 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.552 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.553 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3386MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.554 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.676 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance d2fca23c-2574-4642-b931-f363d59bd5a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.676 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.676 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.716 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:19:17 np0005465604 nova_compute[260603]: 2025-10-02 09:19:17.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:19:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89840446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:19:18 np0005465604 nova_compute[260603]: 2025-10-02 09:19:18.142 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:19:18 np0005465604 nova_compute[260603]: 2025-10-02 09:19:18.148 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:19:18 np0005465604 nova_compute[260603]: 2025-10-02 09:19:18.196 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:19:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct  2 05:19:18 np0005465604 nova_compute[260603]: 2025-10-02 09:19:18.340 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:19:18 np0005465604 nova_compute[260603]: 2025-10-02 09:19:18.341 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:19 np0005465604 nova_compute[260603]: 2025-10-02 09:19:19.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:20 np0005465604 nova_compute[260603]: 2025-10-02 09:19:20.106 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:20 np0005465604 nova_compute[260603]: 2025-10-02 09:19:20.107 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:20 np0005465604 nova_compute[260603]: 2025-10-02 09:19:20.168 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 05:19:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:19:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3772413877' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:19:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:19:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3772413877' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:19:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 05:19:22 np0005465604 nova_compute[260603]: 2025-10-02 09:19:22.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:23 np0005465604 nova_compute[260603]: 2025-10-02 09:19:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:23 np0005465604 nova_compute[260603]: 2025-10-02 09:19:23.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 05:19:23 np0005465604 nova_compute[260603]: 2025-10-02 09:19:23.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 05:19:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 05:19:24 np0005465604 nova_compute[260603]: 2025-10-02 09:19:24.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:25 np0005465604 nova_compute[260603]: 2025-10-02 09:19:25.539 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:26 np0005465604 nova_compute[260603]: 2025-10-02 09:19:26.112 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:19:26 np0005465604 nova_compute[260603]: 2025-10-02 09:19:26.113 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:19:26 np0005465604 nova_compute[260603]: 2025-10-02 09:19:26.141 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:19:26 np0005465604 nova_compute[260603]: 2025-10-02 09:19:26.249 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:19:26 np0005465604 nova_compute[260603]: 2025-10-02 09:19:26.250 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:19:26 np0005465604 nova_compute[260603]: 2025-10-02 09:19:26.262 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:19:26 np0005465604 nova_compute[260603]: 2025-10-02 09:19:26.262 2 INFO nova.compute.claims [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:19:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  2 05:19:26 np0005465604 nova_compute[260603]: 2025-10-02 09:19:26.520 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:19:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:19:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1386203685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:19:26 np0005465604 nova_compute[260603]: 2025-10-02 09:19:26.972 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:19:26 np0005465604 nova_compute[260603]: 2025-10-02 09:19:26.982 2 DEBUG nova.compute.provider_tree [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.014 2 DEBUG nova.scheduler.client.report [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.119 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.121 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.227 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.228 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.255 2 INFO nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.274 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.358 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.360 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.360 2 INFO nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Creating image(s)#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.383 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.405 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.428 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.432 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.475 2 DEBUG nova.policy [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.521 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.522 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.523 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.523 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.547 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.551 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:19:27 np0005465604 nova_compute[260603]: 2025-10-02 09:19:27.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:19:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:19:28
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log']
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct  2 05:19:28 np0005465604 nova_compute[260603]: 2025-10-02 09:19:28.332 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Successfully created port: 24b323af-568c-4843-a858-dab3f5df627f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:19:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:19:29 np0005465604 nova_compute[260603]: 2025-10-02 09:19:29.283 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Successfully updated port: 24b323af-568c-4843-a858-dab3f5df627f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:19:29 np0005465604 nova_compute[260603]: 2025-10-02 09:19:29.311 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:19:29 np0005465604 nova_compute[260603]: 2025-10-02 09:19:29.312 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:19:29 np0005465604 nova_compute[260603]: 2025-10-02 09:19:29.312 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:19:29 np0005465604 nova_compute[260603]: 2025-10-02 09:19:29.400 2 DEBUG nova.compute.manager [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-changed-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:19:29 np0005465604 nova_compute[260603]: 2025-10-02 09:19:29.400 2 DEBUG nova.compute.manager [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing instance network info cache due to event network-changed-24b323af-568c-4843-a858-dab3f5df627f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:19:29 np0005465604 nova_compute[260603]: 2025-10-02 09:19:29.400 2 DEBUG oslo_concurrency.lockutils [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:19:29 np0005465604 nova_compute[260603]: 2025-10-02 09:19:29.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:29 np0005465604 nova_compute[260603]: 2025-10-02 09:19:29.481 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:19:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:29 np0005465604 nova_compute[260603]: 2025-10-02 09:19:29.994 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:19:30 np0005465604 nova_compute[260603]: 2025-10-02 09:19:30.063 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:19:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct  2 05:19:32 np0005465604 podman[432530]: 2025-10-02 09:19:32.052876259 +0000 UTC m=+0.102260705 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:19:32 np0005465604 podman[432529]: 2025-10-02 09:19:32.105024477 +0000 UTC m=+0.155187358 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 05:19:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 148 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 MiB/s wr, 33 op/s
Oct  2 05:19:32 np0005465604 nova_compute[260603]: 2025-10-02 09:19:32.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.136 2 DEBUG nova.network.neutron [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.424 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.425 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Instance network_info: |[{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.425 2 DEBUG oslo_concurrency.lockutils [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.426 2 DEBUG nova.network.neutron [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing network info cache for port 24b323af-568c-4843-a858-dab3f5df627f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.434 2 DEBUG nova.objects.instance [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e2e089b-fd7f-460e-ba3f-180204d961f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.760 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.761 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Ensure instance console log exists: /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.762 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.763 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.764 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.769 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Start _get_guest_xml network_info=[{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.777 2 WARNING nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.784 2 DEBUG nova.virt.libvirt.host [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.786 2 DEBUG nova.virt.libvirt.host [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.791 2 DEBUG nova.virt.libvirt.host [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.791 2 DEBUG nova.virt.libvirt.host [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.792 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.792 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.793 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.793 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.793 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.794 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.794 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.794 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.795 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.795 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.795 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.796 2 DEBUG nova.virt.hardware [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:19:33 np0005465604 nova_compute[260603]: 2025-10-02 09:19:33.800 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:19:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:19:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2210677655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:19:34 np0005465604 nova_compute[260603]: 2025-10-02 09:19:34.319 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:19:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct  2 05:19:34 np0005465604 nova_compute[260603]: 2025-10-02 09:19:34.344 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:19:34 np0005465604 nova_compute[260603]: 2025-10-02 09:19:34.348 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:19:34 np0005465604 nova_compute[260603]: 2025-10-02 09:19:34.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:34.857 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:19:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:34.857 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:19:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:34.858 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:19:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/503369711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:19:34 np0005465604 nova_compute[260603]: 2025-10-02 09:19:34.911 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:19:34 np0005465604 nova_compute[260603]: 2025-10-02 09:19:34.913 2 DEBUG nova.virt.libvirt.vif [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:19:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1224861595',display_name='tempest-TestGettingAddress-server-1224861595',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1224861595',id=150,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-s7sei7n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:19:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=8e2e089b-fd7f-460e-ba3f-180204d961f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:19:34 np0005465604 nova_compute[260603]: 2025-10-02 09:19:34.914 2 DEBUG nova.network.os_vif_util [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:19:34 np0005465604 nova_compute[260603]: 2025-10-02 09:19:34.916 2 DEBUG nova.network.os_vif_util [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:19:34 np0005465604 nova_compute[260603]: 2025-10-02 09:19:34.918 2 DEBUG nova.objects.instance [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e2e089b-fd7f-460e-ba3f-180204d961f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.034 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  <uuid>8e2e089b-fd7f-460e-ba3f-180204d961f8</uuid>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  <name>instance-00000096</name>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-1224861595</nova:name>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:19:33</nova:creationTime>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <nova:port uuid="24b323af-568c-4843-a858-dab3f5df627f">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fed6:7f89" ipVersion="6"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fed6:7f89" ipVersion="6"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <entry name="serial">8e2e089b-fd7f-460e-ba3f-180204d961f8</entry>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <entry name="uuid">8e2e089b-fd7f-460e-ba3f-180204d961f8</entry>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/8e2e089b-fd7f-460e-ba3f-180204d961f8_disk">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:d6:7f:89"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <target dev="tap24b323af-56"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/console.log" append="off"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:19:35 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:19:35 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:19:35 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:19:35 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.037 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Preparing to wait for external event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.038 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.038 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.039 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.040 2 DEBUG nova.virt.libvirt.vif [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:19:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1224861595',display_name='tempest-TestGettingAddress-server-1224861595',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1224861595',id=150,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-s7sei7n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:19:27Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=8e2e089b-fd7f-460e-ba3f-180204d961f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.041 2 DEBUG nova.network.os_vif_util [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.043 2 DEBUG nova.network.os_vif_util [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.044 2 DEBUG os_vif [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24b323af-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24b323af-56, col_values=(('external_ids', {'iface-id': '24b323af-568c-4843-a858-dab3f5df627f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:7f:89', 'vm-uuid': '8e2e089b-fd7f-460e-ba3f-180204d961f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:35 np0005465604 NetworkManager[45129]: <info>  [1759396775.0568] manager: (tap24b323af-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/682)
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.065 2 INFO os_vif [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56')#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.250 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.251 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.251 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:d6:7f:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.252 2 INFO nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Using config drive#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.287 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.690 2 DEBUG nova.network.neutron [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updated VIF entry in instance network info cache for port 24b323af-568c-4843-a858-dab3f5df627f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.691 2 DEBUG nova.network.neutron [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:19:35 np0005465604 nova_compute[260603]: 2025-10-02 09:19:35.818 2 DEBUG oslo_concurrency.lockutils [req-3a602a03-cdc1-40d2-9fb3-4d393e68d94c req-83f42f30-cd99-4bdf-92d4-2b04c1b9911f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:19:36 np0005465604 podman[432675]: 2025-10-02 09:19:36.012946491 +0000 UTC m=+0.081028212 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid)
Oct  2 05:19:36 np0005465604 podman[432674]: 2025-10-02 09:19:36.020839058 +0000 UTC m=+0.083389716 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:19:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct  2 05:19:37 np0005465604 nova_compute[260603]: 2025-10-02 09:19:37.146 2 INFO nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Creating config drive at /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config#033[00m
Oct  2 05:19:37 np0005465604 nova_compute[260603]: 2025-10-02 09:19:37.155 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ervjxir execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:19:37 np0005465604 nova_compute[260603]: 2025-10-02 09:19:37.328 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1ervjxir" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:19:37 np0005465604 nova_compute[260603]: 2025-10-02 09:19:37.368 2 DEBUG nova.storage.rbd_utils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:19:37 np0005465604 nova_compute[260603]: 2025-10-02 09:19:37.372 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:19:37 np0005465604 nova_compute[260603]: 2025-10-02 09:19:37.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Oct  2 05:19:38 np0005465604 nova_compute[260603]: 2025-10-02 09:19:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011053336833365891 of space, bias 1.0, pg target 0.33160010500097675 quantized to 32 (current 32)
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:19:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:19:39 np0005465604 nova_compute[260603]: 2025-10-02 09:19:39.389 2 DEBUG oslo_concurrency.processutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config 8e2e089b-fd7f-460e-ba3f-180204d961f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:19:39 np0005465604 nova_compute[260603]: 2025-10-02 09:19:39.390 2 INFO nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Deleting local config drive /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8/disk.config because it was imported into RBD.
Oct  2 05:19:39 np0005465604 kernel: tap24b323af-56: entered promiscuous mode
Oct  2 05:19:39 np0005465604 NetworkManager[45129]: <info>  [1759396779.4648] manager: (tap24b323af-56): new Tun device (/org/freedesktop/NetworkManager/Devices/683)
Oct  2 05:19:39 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:39Z|01673|binding|INFO|Claiming lport 24b323af-568c-4843-a858-dab3f5df627f for this chassis.
Oct  2 05:19:39 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:39Z|01674|binding|INFO|24b323af-568c-4843-a858-dab3f5df627f: Claiming fa:16:3e:d6:7f:89 10.100.0.8 2001:db8:0:1:f816:3eff:fed6:7f89 2001:db8::f816:3eff:fed6:7f89
Oct  2 05:19:39 np0005465604 nova_compute[260603]: 2025-10-02 09:19:39.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:19:39 np0005465604 nova_compute[260603]: 2025-10-02 09:19:39.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:19:39 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:39Z|01675|binding|INFO|Setting lport 24b323af-568c-4843-a858-dab3f5df627f ovn-installed in OVS
Oct  2 05:19:39 np0005465604 nova_compute[260603]: 2025-10-02 09:19:39.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:19:39 np0005465604 ovn_controller[152344]: 2025-10-02T09:19:39Z|01676|binding|INFO|Setting lport 24b323af-568c-4843-a858-dab3f5df627f up in Southbound
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.501 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:7f:89 10.100.0.8 2001:db8:0:1:f816:3eff:fed6:7f89 2001:db8::f816:3eff:fed6:7f89'], port_security=['fa:16:3e:d6:7f:89 10.100.0.8 2001:db8:0:1:f816:3eff:fed6:7f89 2001:db8::f816:3eff:fed6:7f89'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:fed6:7f89/64 2001:db8::f816:3eff:fed6:7f89/64', 'neutron:device_id': '8e2e089b-fd7f-460e-ba3f-180204d961f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f167374-6ddb-4a61-a8ba-3dc4425d2006', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=24b323af-568c-4843-a858-dab3f5df627f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.502 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 24b323af-568c-4843-a858-dab3f5df627f in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 bound to our chassis
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.504 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303
Oct  2 05:19:39 np0005465604 systemd-machined[214636]: New machine qemu-184-instance-00000096.
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.525 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2dae5a-f994-4232-9588-45fcab548795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 05:19:39 np0005465604 systemd[1]: Started Virtual Machine qemu-184-instance-00000096.
Oct  2 05:19:39 np0005465604 systemd-udevd[432770]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:19:39 np0005465604 NetworkManager[45129]: <info>  [1759396779.5619] device (tap24b323af-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:19:39 np0005465604 NetworkManager[45129]: <info>  [1759396779.5628] device (tap24b323af-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.574 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c5eb6c22-81a4-4595-9811-b8a731a17139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.580 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[94b6f98e-d021-4547-a9ce-537a8c0e8cc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.616 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bfefe207-8d7c-4073-b43a-6434f534d1f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.633 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eab3983a-6f2f-4e58-ab38-977a05888f3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01f3e9aa-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:77:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 25, 'tx_packets': 6, 'rx_bytes': 2230, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 25, 'tx_packets': 6, 'rx_bytes': 2230, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761975, 'reachable_time': 39646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 432782, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.654 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6af7f9b5-0538-4d9f-aaee-ad03a6da87b0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap01f3e9aa-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761993, 'tstamp': 761993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 432783, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01f3e9aa-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761997, 'tstamp': 761997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 432783, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.657 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01f3e9aa-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:19:39 np0005465604 nova_compute[260603]: 2025-10-02 09:19:39.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.661 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01f3e9aa-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.661 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.662 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01f3e9aa-20, col_values=(('external_ids', {'iface-id': 'b940cd75-8ee4-4c6d-be34-c1fb5851ca50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:19:39 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:39.663 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 05:19:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:19:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.532 2 DEBUG nova.compute.manager [req-462ccb42-c99b-4c38-8914-9b6c6cb898c5 req-4a4ee047-e97b-487f-a807-3265ad425c26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.534 2 DEBUG oslo_concurrency.lockutils [req-462ccb42-c99b-4c38-8914-9b6c6cb898c5 req-4a4ee047-e97b-487f-a807-3265ad425c26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.534 2 DEBUG oslo_concurrency.lockutils [req-462ccb42-c99b-4c38-8914-9b6c6cb898c5 req-4a4ee047-e97b-487f-a807-3265ad425c26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.534 2 DEBUG oslo_concurrency.lockutils [req-462ccb42-c99b-4c38-8914-9b6c6cb898c5 req-4a4ee047-e97b-487f-a807-3265ad425c26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.535 2 DEBUG nova.compute.manager [req-462ccb42-c99b-4c38-8914-9b6c6cb898c5 req-4a4ee047-e97b-487f-a807-3265ad425c26 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Processing event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.721 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.722 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  2 05:19:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:40.735 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:19:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:40.737 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 05:19:40 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:19:40.738 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.982 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.983 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396780.9814334, 8e2e089b-fd7f-460e-ba3f-180204d961f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.984 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] VM Started (Lifecycle Event)
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.988 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.994 2 INFO nova.virt.libvirt.driver [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Instance spawned successfully.
Oct  2 05:19:40 np0005465604 nova_compute[260603]: 2025-10-02 09:19:40.997 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.206 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.213 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.369 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.370 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.371 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.372 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.372 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.373 2 DEBUG nova.virt.libvirt.driver [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.454 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.455 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396780.9830508, 8e2e089b-fd7f-460e-ba3f-180204d961f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.455 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] VM Paused (Lifecycle Event)
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.622 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.628 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396780.9878767, 8e2e089b-fd7f-460e-ba3f-180204d961f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.629 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] VM Resumed (Lifecycle Event)
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.716 2 INFO nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Took 14.36 seconds to spawn the instance on the hypervisor.
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.717 2 DEBUG nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.803 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:19:41 np0005465604 nova_compute[260603]: 2025-10-02 09:19:41.809 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 05:19:42 np0005465604 nova_compute[260603]: 2025-10-02 09:19:42.196 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 05:19:42 np0005465604 nova_compute[260603]: 2025-10-02 09:19:42.283 2 INFO nova.compute.manager [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Took 16.08 seconds to build instance.
Oct  2 05:19:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 850 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct  2 05:19:42 np0005465604 nova_compute[260603]: 2025-10-02 09:19:42.559 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:19:42 np0005465604 nova_compute[260603]: 2025-10-02 09:19:42.559 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:19:42 np0005465604 nova_compute[260603]: 2025-10-02 09:19:42.726 2 DEBUG oslo_concurrency.lockutils [None req-685b97c7-e451-46c2-8964-1727084026f3 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:42 np0005465604 nova_compute[260603]: 2025-10-02 09:19:42.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:42 np0005465604 nova_compute[260603]: 2025-10-02 09:19:42.997 2 DEBUG nova.compute.manager [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:19:42 np0005465604 nova_compute[260603]: 2025-10-02 09:19:42.998 2 DEBUG oslo_concurrency.lockutils [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:19:42 np0005465604 nova_compute[260603]: 2025-10-02 09:19:42.999 2 DEBUG oslo_concurrency.lockutils [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:19:43 np0005465604 nova_compute[260603]: 2025-10-02 09:19:42.999 2 DEBUG oslo_concurrency.lockutils [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:19:43 np0005465604 nova_compute[260603]: 2025-10-02 09:19:43.000 2 DEBUG nova.compute.manager [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] No waiting events found dispatching network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:19:43 np0005465604 nova_compute[260603]: 2025-10-02 09:19:43.000 2 WARNING nova.compute.manager [req-c594792b-27f8-4357-abae-aad4b8f6872c req-a48aa0ee-01d2-4e52-863a-d05f410b5cf7 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received unexpected event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f for instance with vm_state active and task_state None.#033[00m
Oct  2 05:19:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 531 KiB/s wr, 91 op/s
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.877298) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396784877371, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 657, "num_deletes": 251, "total_data_size": 800270, "memory_usage": 812992, "flush_reason": "Manual Compaction"}
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396784923878, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 542639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63355, "largest_seqno": 64011, "table_properties": {"data_size": 539560, "index_size": 986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8209, "raw_average_key_size": 20, "raw_value_size": 533159, "raw_average_value_size": 1342, "num_data_blocks": 44, "num_entries": 397, "num_filter_entries": 397, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396738, "oldest_key_time": 1759396738, "file_creation_time": 1759396784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 46657 microseconds, and 3381 cpu microseconds.
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.923954) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 542639 bytes OK
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.923993) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.927540) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.927562) EVENT_LOG_v1 {"time_micros": 1759396784927554, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.927592) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 796764, prev total WAL file size 796764, number of live WAL files 2.
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.928449) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353032' seq:72057594037927935, type:22 .. '6D6772737461740032373534' seq:0, type:0; will stop at (end)
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(529KB)], [149(10MB)]
Oct  2 05:19:44 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396784928524, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11838114, "oldest_snapshot_seqno": -1}
Oct  2 05:19:45 np0005465604 nova_compute[260603]: 2025-10-02 09:19:45.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8101 keys, 8758236 bytes, temperature: kUnknown
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396785172946, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8758236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8708582, "index_size": 28293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 212510, "raw_average_key_size": 26, "raw_value_size": 8568562, "raw_average_value_size": 1057, "num_data_blocks": 1091, "num_entries": 8101, "num_filter_entries": 8101, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.173280) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8758236 bytes
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.215516) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 48.4 rd, 35.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.8 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(38.0) write-amplify(16.1) OK, records in: 8596, records dropped: 495 output_compression: NoCompression
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.215549) EVENT_LOG_v1 {"time_micros": 1759396785215534, "job": 92, "event": "compaction_finished", "compaction_time_micros": 244523, "compaction_time_cpu_micros": 42156, "output_level": 6, "num_output_files": 1, "total_output_size": 8758236, "num_input_records": 8596, "num_output_records": 8101, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396785215917, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396785219547, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:44.928313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.219679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.219686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.219689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.219691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:19:45 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:19:45.219693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:19:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 89 op/s
Oct  2 05:19:46 np0005465604 nova_compute[260603]: 2025-10-02 09:19:46.526 2 DEBUG nova.compute.manager [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-changed-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:19:46 np0005465604 nova_compute[260603]: 2025-10-02 09:19:46.526 2 DEBUG nova.compute.manager [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing instance network info cache due to event network-changed-24b323af-568c-4843-a858-dab3f5df627f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:19:46 np0005465604 nova_compute[260603]: 2025-10-02 09:19:46.527 2 DEBUG oslo_concurrency.lockutils [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:19:46 np0005465604 nova_compute[260603]: 2025-10-02 09:19:46.527 2 DEBUG oslo_concurrency.lockutils [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:19:46 np0005465604 nova_compute[260603]: 2025-10-02 09:19:46.527 2 DEBUG nova.network.neutron [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing network info cache for port 24b323af-568c-4843-a858-dab3f5df627f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:19:47 np0005465604 nova_compute[260603]: 2025-10-02 09:19:47.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 125 op/s
Oct  2 05:19:49 np0005465604 nova_compute[260603]: 2025-10-02 09:19:49.157 2 DEBUG nova.network.neutron [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updated VIF entry in instance network info cache for port 24b323af-568c-4843-a858-dab3f5df627f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:19:49 np0005465604 nova_compute[260603]: 2025-10-02 09:19:49.158 2 DEBUG nova.network.neutron [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:19:49 np0005465604 nova_compute[260603]: 2025-10-02 09:19:49.734 2 DEBUG oslo_concurrency.lockutils [req-7d9ce014-1b42-4e45-8b97-6301998da288 req-3202f5c1-3157-4ef8-ad8f-ccfbf73217d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:19:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:50 np0005465604 nova_compute[260603]: 2025-10-02 09:19:50.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 107 op/s
Oct  2 05:19:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 109 op/s
Oct  2 05:19:52 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  2 05:19:52 np0005465604 nova_compute[260603]: 2025-10-02 09:19:52.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 220 KiB/s wr, 73 op/s
Oct  2 05:19:54 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:19:55 np0005465604 nova_compute[260603]: 2025-10-02 09:19:55.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 556 KiB/s rd, 208 KiB/s wr, 46 op/s
Oct  2 05:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:19:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:19:57 np0005465604 nova_compute[260603]: 2025-10-02 09:19:57.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:19:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 560 KiB/s rd, 2.0 MiB/s wr, 62 op/s
Oct  2 05:19:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:00 np0005465604 nova_compute[260603]: 2025-10-02 09:20:00.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 26 op/s
Oct  2 05:20:00 np0005465604 ovn_controller[152344]: 2025-10-02T09:20:00Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:7f:89 10.100.0.8
Oct  2 05:20:00 np0005465604 ovn_controller[152344]: 2025-10-02T09:20:00Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:7f:89 10.100.0.8
Oct  2 05:20:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 2.0 MiB/s wr, 35 op/s
Oct  2 05:20:03 np0005465604 nova_compute[260603]: 2025-10-02 09:20:03.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:03 np0005465604 podman[432828]: 2025-10-02 09:20:03.076956432 +0000 UTC m=+0.124699046 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:20:03 np0005465604 podman[432827]: 2025-10-02 09:20:03.099028111 +0000 UTC m=+0.148165838 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:20:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Oct  2 05:20:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:05 np0005465604 nova_compute[260603]: 2025-10-02 09:20:05.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 1.9 MiB/s wr, 39 op/s
Oct  2 05:20:07 np0005465604 podman[432870]: 2025-10-02 09:20:07.039783629 +0000 UTC m=+0.102076369 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 05:20:07 np0005465604 podman[432871]: 2025-10-02 09:20:07.054911992 +0000 UTC m=+0.105589888 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:20:07 np0005465604 nova_compute[260603]: 2025-10-02 09:20:07.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:20:08 np0005465604 nova_compute[260603]: 2025-10-02 09:20:08.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Oct  2 05:20:09 np0005465604 nova_compute[260603]: 2025-10-02 09:20:09.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:20:10 np0005465604 nova_compute[260603]: 2025-10-02 09:20:10.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 88 KiB/s wr, 25 op/s
Oct  2 05:20:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:10 np0005465604 nova_compute[260603]: 2025-10-02 09:20:10.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:20:10 np0005465604 nova_compute[260603]: 2025-10-02 09:20:10.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:20:10 np0005465604 nova_compute[260603]: 2025-10-02 09:20:10.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:20:11 np0005465604 nova_compute[260603]: 2025-10-02 09:20:11.148 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:20:11 np0005465604 nova_compute[260603]: 2025-10-02 09:20:11.149 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:20:11 np0005465604 nova_compute[260603]: 2025-10-02 09:20:11.149 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 05:20:11 np0005465604 nova_compute[260603]: 2025-10-02 09:20:11.150 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d2fca23c-2574-4642-b931-f363d59bd5a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:20:11 np0005465604 podman[433185]: 2025-10-02 09:20:11.106051609 +0000 UTC m=+0.047453093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:20:11 np0005465604 podman[433185]: 2025-10-02 09:20:11.689288497 +0000 UTC m=+0.630689951 container create 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 05:20:12 np0005465604 systemd[1]: Started libpod-conmon-9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235.scope.
Oct  2 05:20:12 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:20:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 115 KiB/s wr, 33 op/s
Oct  2 05:20:12 np0005465604 podman[433185]: 2025-10-02 09:20:12.591111915 +0000 UTC m=+1.532513439 container init 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:20:12 np0005465604 podman[433185]: 2025-10-02 09:20:12.607037802 +0000 UTC m=+1.548439266 container start 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:20:12 np0005465604 systemd[1]: libpod-9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235.scope: Deactivated successfully.
Oct  2 05:20:12 np0005465604 loving_proskuriakova[433201]: 167 167
Oct  2 05:20:12 np0005465604 conmon[433201]: conmon 9aa8e7da49965b741110 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235.scope/container/memory.events
Oct  2 05:20:12 np0005465604 podman[433185]: 2025-10-02 09:20:12.990008754 +0000 UTC m=+1.931410278 container attach 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 05:20:12 np0005465604 podman[433185]: 2025-10-02 09:20:12.991193951 +0000 UTC m=+1.932595435 container died 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:20:13 np0005465604 nova_compute[260603]: 2025-10-02 09:20:13.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:13 np0005465604 nova_compute[260603]: 2025-10-02 09:20:13.310 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:20:13 np0005465604 nova_compute[260603]: 2025-10-02 09:20:13.397 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:20:13 np0005465604 nova_compute[260603]: 2025-10-02 09:20:13.398 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 05:20:13 np0005465604 nova_compute[260603]: 2025-10-02 09:20:13.399 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:20:13 np0005465604 nova_compute[260603]: 2025-10-02 09:20:13.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:20:13 np0005465604 nova_compute[260603]: 2025-10-02 09:20:13.597 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:13 np0005465604 nova_compute[260603]: 2025-10-02 09:20:13.599 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:13 np0005465604 nova_compute[260603]: 2025-10-02 09:20:13.599 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:13 np0005465604 nova_compute[260603]: 2025-10-02 09:20:13.599 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:20:13 np0005465604 nova_compute[260603]: 2025-10-02 09:20:13.601 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:20:13 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ca3f32bb23ce4910bbf87002928a59ecc107de40b099840cb4cf0b737146453c-merged.mount: Deactivated successfully.
Oct  2 05:20:14 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:20:14 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344404463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.054 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.194 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.194 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.201 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.201 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:20:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 93 KiB/s wr, 27 op/s
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.490 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.492 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3103MB free_disk=59.89738464355469GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.493 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.493 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:14 np0005465604 podman[433185]: 2025-10-02 09:20:14.609874749 +0000 UTC m=+3.551276233 container remove 9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.633 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance d2fca23c-2574-4642-b931-f363d59bd5a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.633 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 8e2e089b-fd7f-460e-ba3f-180204d961f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.633 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.634 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.652 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.672 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.672 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.688 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.716 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 05:20:14 np0005465604 systemd[1]: libpod-conmon-9aa8e7da49965b741110978f753722ea152ca45bb5b529b5d0a7a794bfdd3235.scope: Deactivated successfully.
Oct  2 05:20:14 np0005465604 nova_compute[260603]: 2025-10-02 09:20:14.780 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:20:14 np0005465604 podman[433249]: 2025-10-02 09:20:14.845810839 +0000 UTC m=+0.030823834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:20:15 np0005465604 nova_compute[260603]: 2025-10-02 09:20:15.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:15 np0005465604 podman[433249]: 2025-10-02 09:20:15.332536791 +0000 UTC m=+0.517549726 container create 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:20:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:20:15 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77357292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:20:15 np0005465604 nova_compute[260603]: 2025-10-02 09:20:15.400 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:20:15 np0005465604 nova_compute[260603]: 2025-10-02 09:20:15.409 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:20:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:15 np0005465604 nova_compute[260603]: 2025-10-02 09:20:15.602 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:20:15 np0005465604 systemd[1]: Started libpod-conmon-5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f.scope.
Oct  2 05:20:15 np0005465604 nova_compute[260603]: 2025-10-02 09:20:15.733 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:20:15 np0005465604 nova_compute[260603]: 2025-10-02 09:20:15.734 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:20:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d629f48d969214eb41a0b1f928279c4b21975e9acae05e35229c4105684ddeda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d629f48d969214eb41a0b1f928279c4b21975e9acae05e35229c4105684ddeda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d629f48d969214eb41a0b1f928279c4b21975e9acae05e35229c4105684ddeda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d629f48d969214eb41a0b1f928279c4b21975e9acae05e35229c4105684ddeda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:15 np0005465604 podman[433249]: 2025-10-02 09:20:15.89149867 +0000 UTC m=+1.076511665 container init 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 05:20:15 np0005465604 podman[433249]: 2025-10-02 09:20:15.905641422 +0000 UTC m=+1.090654367 container start 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 05:20:15 np0005465604 ovn_controller[152344]: 2025-10-02T09:20:15Z|01677|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Oct  2 05:20:16 np0005465604 podman[433249]: 2025-10-02 09:20:16.164299381 +0000 UTC m=+1.349312326 container attach 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 05:20:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 82 KiB/s wr, 14 op/s
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]: [
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:    {
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:        "available": false,
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:        "ceph_device": false,
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:        "lsm_data": {},
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:        "lvs": [],
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:        "path": "/dev/sr0",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:        "rejected_reasons": [
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "Has a FileSystem",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "Insufficient space (<5GB)"
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:        ],
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:        "sys_api": {
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "actuators": null,
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "device_nodes": "sr0",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "devname": "sr0",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "human_readable_size": "482.00 KB",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "id_bus": "ata",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "model": "QEMU DVD-ROM",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "nr_requests": "2",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "parent": "/dev/sr0",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "partitions": {},
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "path": "/dev/sr0",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "removable": "1",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "rev": "2.5+",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "ro": "0",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "rotational": "0",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "sas_address": "",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "sas_device_handle": "",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "scheduler_mode": "mq-deadline",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "sectors": 0,
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "sectorsize": "2048",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "size": 493568.0,
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "support_discard": "2048",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "type": "disk",
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:            "vendor": "QEMU"
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:        }
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]:    }
Oct  2 05:20:17 np0005465604 frosty_proskuriakova[433289]: ]
Oct  2 05:20:17 np0005465604 systemd[1]: libpod-5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f.scope: Deactivated successfully.
Oct  2 05:20:17 np0005465604 systemd[1]: libpod-5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f.scope: Consumed 1.890s CPU time.
Oct  2 05:20:17 np0005465604 podman[435437]: 2025-10-02 09:20:17.789058909 +0000 UTC m=+0.032420453 container died 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  2 05:20:18 np0005465604 nova_compute[260603]: 2025-10-02 09:20:18.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 82 KiB/s wr, 14 op/s
Oct  2 05:20:19 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d629f48d969214eb41a0b1f928279c4b21975e9acae05e35229c4105684ddeda-merged.mount: Deactivated successfully.
Oct  2 05:20:19 np0005465604 nova_compute[260603]: 2025-10-02 09:20:19.730 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:20:19 np0005465604 nova_compute[260603]: 2025-10-02 09:20:19.731 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:20:19 np0005465604 podman[435437]: 2025-10-02 09:20:19.83237047 +0000 UTC m=+2.075732024 container remove 5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:20:19 np0005465604 systemd[1]: libpod-conmon-5c8479b13a9c7306d49ee443b7e0778fac7ea28877a1216f24d6883558efb09f.scope: Deactivated successfully.
Oct  2 05:20:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:20:20 np0005465604 nova_compute[260603]: 2025-10-02 09:20:20.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:20:20 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev adb8a190-9b7d-482e-a31c-8be5098ea5e7 does not exist
Oct  2 05:20:20 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ccae9128-3274-43bc-bdd1-fea366fa6f32 does not exist
Oct  2 05:20:20 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9e0ce440-92f8-4884-9867-323d6f695cf7 does not exist
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:20:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 38 KiB/s wr, 12 op/s
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:20:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:20:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:20:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:20:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:20:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:20:21 np0005465604 podman[435593]: 2025-10-02 09:20:21.26859361 +0000 UTC m=+0.089867888 container create 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 05:20:21 np0005465604 podman[435593]: 2025-10-02 09:20:21.222729678 +0000 UTC m=+0.044004006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:20:21 np0005465604 systemd[1]: Started libpod-conmon-1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026.scope.
Oct  2 05:20:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:20:21 np0005465604 podman[435593]: 2025-10-02 09:20:21.493253057 +0000 UTC m=+0.314527315 container init 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 05:20:21 np0005465604 podman[435593]: 2025-10-02 09:20:21.505976585 +0000 UTC m=+0.327250853 container start 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 05:20:21 np0005465604 keen_panini[435609]: 167 167
Oct  2 05:20:21 np0005465604 systemd[1]: libpod-1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026.scope: Deactivated successfully.
Oct  2 05:20:21 np0005465604 podman[435593]: 2025-10-02 09:20:21.573634787 +0000 UTC m=+0.394909035 container attach 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:20:21 np0005465604 podman[435593]: 2025-10-02 09:20:21.574904478 +0000 UTC m=+0.396178716 container died 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 05:20:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-efc295c31643961b652803807db539db76a05625450a5ce2c957520be685bb46-merged.mount: Deactivated successfully.
Oct  2 05:20:22 np0005465604 podman[435593]: 2025-10-02 09:20:22.130397658 +0000 UTC m=+0.951671906 container remove 1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:20:22 np0005465604 systemd[1]: libpod-conmon-1236b0d3af2d92dff473f355a13f2cb77e5662063ec25e2ee81fa11ca6f6d026.scope: Deactivated successfully.
Oct  2 05:20:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:20:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1497119359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:20:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:20:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1497119359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:20:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 56 KiB/s wr, 13 op/s
Oct  2 05:20:22 np0005465604 podman[435635]: 2025-10-02 09:20:22.357495881 +0000 UTC m=+0.034464607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:20:22 np0005465604 podman[435635]: 2025-10-02 09:20:22.488629337 +0000 UTC m=+0.165598013 container create 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 05:20:22 np0005465604 systemd[1]: Started libpod-conmon-3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d.scope.
Oct  2 05:20:22 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:20:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:22 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:22 np0005465604 podman[435635]: 2025-10-02 09:20:22.890865961 +0000 UTC m=+0.567834657 container init 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 05:20:22 np0005465604 podman[435635]: 2025-10-02 09:20:22.897494938 +0000 UTC m=+0.574463624 container start 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:20:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:22.984 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:20:22 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:22.986 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:20:23 np0005465604 podman[435635]: 2025-10-02 09:20:23.00834129 +0000 UTC m=+0.685310006 container attach 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.237 2 DEBUG nova.compute.manager [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-changed-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.238 2 DEBUG nova.compute.manager [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing instance network info cache due to event network-changed-24b323af-568c-4843-a858-dab3f5df627f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.239 2 DEBUG oslo_concurrency.lockutils [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.239 2 DEBUG oslo_concurrency.lockutils [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.240 2 DEBUG nova.network.neutron [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Refreshing network info cache for port 24b323af-568c-4843-a858-dab3f5df627f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.403 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.404 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.405 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.406 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.406 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.408 2 INFO nova.compute.manager [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Terminating instance#033[00m
Oct  2 05:20:23 np0005465604 nova_compute[260603]: 2025-10-02 09:20:23.410 2 DEBUG nova.compute.manager [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:20:23 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:23.989 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:20:24 np0005465604 nice_lamport[435651]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:20:24 np0005465604 nice_lamport[435651]: --> relative data size: 1.0
Oct  2 05:20:24 np0005465604 nice_lamport[435651]: --> All data devices are unavailable
Oct  2 05:20:24 np0005465604 systemd[1]: libpod-3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d.scope: Deactivated successfully.
Oct  2 05:20:24 np0005465604 systemd[1]: libpod-3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d.scope: Consumed 1.058s CPU time.
Oct  2 05:20:24 np0005465604 podman[435635]: 2025-10-02 09:20:24.06646838 +0000 UTC m=+1.743437116 container died 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:20:24 np0005465604 kernel: tap24b323af-56 (unregistering): left promiscuous mode
Oct  2 05:20:24 np0005465604 NetworkManager[45129]: <info>  [1759396824.3228] device (tap24b323af-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:24 np0005465604 ovn_controller[152344]: 2025-10-02T09:20:24Z|01678|binding|INFO|Releasing lport 24b323af-568c-4843-a858-dab3f5df627f from this chassis (sb_readonly=0)
Oct  2 05:20:24 np0005465604 ovn_controller[152344]: 2025-10-02T09:20:24Z|01679|binding|INFO|Setting lport 24b323af-568c-4843-a858-dab3f5df627f down in Southbound
Oct  2 05:20:24 np0005465604 ovn_controller[152344]: 2025-10-02T09:20:24Z|01680|binding|INFO|Removing iface tap24b323af-56 ovn-installed in OVS
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 31 KiB/s wr, 5 op/s
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.366 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:7f:89 10.100.0.8 2001:db8:0:1:f816:3eff:fed6:7f89 2001:db8::f816:3eff:fed6:7f89'], port_security=['fa:16:3e:d6:7f:89 10.100.0.8 2001:db8:0:1:f816:3eff:fed6:7f89 2001:db8::f816:3eff:fed6:7f89'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8:0:1:f816:3eff:fed6:7f89/64 2001:db8::f816:3eff:fed6:7f89/64', 'neutron:device_id': '8e2e089b-fd7f-460e-ba3f-180204d961f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f167374-6ddb-4a61-a8ba-3dc4425d2006', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=24b323af-568c-4843-a858-dab3f5df627f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.368 162357 INFO neutron.agent.ovn.metadata.agent [-] Port 24b323af-568c-4843-a858-dab3f5df627f in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 unbound from our chassis#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.369 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.400 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[155c128d-70b5-4045-b77d-408bec0b3751]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:24 np0005465604 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Deactivated successfully.
Oct  2 05:20:24 np0005465604 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Consumed 15.694s CPU time.
Oct  2 05:20:24 np0005465604 systemd-machined[214636]: Machine qemu-184-instance-00000096 terminated.
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.449 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ecc9d3-1d4a-4d0e-9a45-4f495413c3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.453 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e276990f-e107-4e47-9364-e479e6357444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.465 2 INFO nova.virt.libvirt.driver [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Instance destroyed successfully.#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.466 2 DEBUG nova.objects.instance [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 8e2e089b-fd7f-460e-ba3f-180204d961f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.510 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[8208386b-b466-44ed-8ba5-48499c619f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.512 2 DEBUG nova.virt.libvirt.vif [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:19:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1224861595',display_name='tempest-TestGettingAddress-server-1224861595',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1224861595',id=150,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:19:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-s7sei7n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:19:41Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=8e2e089b-fd7f-460e-ba3f-180204d961f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.512 2 DEBUG nova.network.os_vif_util [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.513 2 DEBUG nova.network.os_vif_util [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.514 2 DEBUG os_vif [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.516 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24b323af-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.522 2 INFO os_vif [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:7f:89,bridge_name='br-int',has_traffic_filtering=True,id=24b323af-568c-4843-a858-dab3f5df627f,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24b323af-56')#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.537 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[df26c253-3730-4804-9608-2c5c80409808]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01f3e9aa-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:77:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3628, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 8, 'rx_bytes': 3628, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761975, 'reachable_time': 39646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 435712, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.560 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab3f836-8ae0-4b5b-906e-3a2aa51a460b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap01f3e9aa-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761993, 'tstamp': 761993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 435728, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01f3e9aa-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761997, 'tstamp': 761997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 435728, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.562 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01f3e9aa-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:24 np0005465604 nova_compute[260603]: 2025-10-02 09:20:24.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.566 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01f3e9aa-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.566 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.566 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01f3e9aa-20, col_values=(('external_ids', {'iface-id': 'b940cd75-8ee4-4c6d-be34-c1fb5851ca50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:20:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:24.567 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:20:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-cf57abdbffeb2067448601772c687c801e56d005541d9e9a41867585757310b1-merged.mount: Deactivated successfully.
Oct  2 05:20:25 np0005465604 nova_compute[260603]: 2025-10-02 09:20:25.009 2 DEBUG nova.network.neutron [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updated VIF entry in instance network info cache for port 24b323af-568c-4843-a858-dab3f5df627f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:20:25 np0005465604 nova_compute[260603]: 2025-10-02 09:20:25.010 2 DEBUG nova.network.neutron [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [{"id": "24b323af-568c-4843-a858-dab3f5df627f", "address": "fa:16:3e:d6:7f:89", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed6:7f89", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24b323af-56", "ovs_interfaceid": "24b323af-568c-4843-a858-dab3f5df627f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:20:25 np0005465604 nova_compute[260603]: 2025-10-02 09:20:25.089 2 DEBUG nova.compute.manager [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-unplugged-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:20:25 np0005465604 nova_compute[260603]: 2025-10-02 09:20:25.089 2 DEBUG oslo_concurrency.lockutils [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:25 np0005465604 nova_compute[260603]: 2025-10-02 09:20:25.090 2 DEBUG oslo_concurrency.lockutils [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:25 np0005465604 nova_compute[260603]: 2025-10-02 09:20:25.090 2 DEBUG oslo_concurrency.lockutils [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:25 np0005465604 nova_compute[260603]: 2025-10-02 09:20:25.090 2 DEBUG nova.compute.manager [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] No waiting events found dispatching network-vif-unplugged-24b323af-568c-4843-a858-dab3f5df627f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:20:25 np0005465604 nova_compute[260603]: 2025-10-02 09:20:25.090 2 DEBUG nova.compute.manager [req-1fb04ae4-fa32-416c-8646-c22a67321557 req-e85228ec-cb95-4a69-903b-d001af75ad7d 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-unplugged-24b323af-568c-4843-a858-dab3f5df627f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:20:25 np0005465604 nova_compute[260603]: 2025-10-02 09:20:25.095 2 DEBUG oslo_concurrency.lockutils [req-f83cc449-35df-40fa-80dd-e3cca444dfe2 req-115ec852-553d-4042-ac71-f779231236bb 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-8e2e089b-fd7f-460e-ba3f-180204d961f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:20:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:25 np0005465604 podman[435635]: 2025-10-02 09:20:25.779225977 +0000 UTC m=+3.456194653 container remove 3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 05:20:25 np0005465604 systemd[1]: libpod-conmon-3d128d845c2e0930eaf835ac6e8ee00552dff81c1ade66d3687cc08772bf806d.scope: Deactivated successfully.
Oct  2 05:20:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 19 KiB/s wr, 1 op/s
Oct  2 05:20:26 np0005465604 nova_compute[260603]: 2025-10-02 09:20:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:20:26 np0005465604 podman[435873]: 2025-10-02 09:20:26.482406011 +0000 UTC m=+0.025098005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:20:26 np0005465604 podman[435873]: 2025-10-02 09:20:26.702218657 +0000 UTC m=+0.244910601 container create 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 05:20:26 np0005465604 systemd[1]: Started libpod-conmon-73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362.scope.
Oct  2 05:20:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:20:27 np0005465604 nova_compute[260603]: 2025-10-02 09:20:27.183 2 DEBUG nova.compute.manager [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:20:27 np0005465604 nova_compute[260603]: 2025-10-02 09:20:27.184 2 DEBUG oslo_concurrency.lockutils [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:27 np0005465604 nova_compute[260603]: 2025-10-02 09:20:27.184 2 DEBUG oslo_concurrency.lockutils [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:27 np0005465604 nova_compute[260603]: 2025-10-02 09:20:27.185 2 DEBUG oslo_concurrency.lockutils [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:27 np0005465604 nova_compute[260603]: 2025-10-02 09:20:27.185 2 DEBUG nova.compute.manager [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] No waiting events found dispatching network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:20:27 np0005465604 nova_compute[260603]: 2025-10-02 09:20:27.185 2 WARNING nova.compute.manager [req-ab9bee87-b551-4317-a84e-0a2890acb2aa req-bb1aed23-0896-4d09-ab67-6c80f78154b1 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received unexpected event network-vif-plugged-24b323af-568c-4843-a858-dab3f5df627f for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:20:27 np0005465604 podman[435873]: 2025-10-02 09:20:27.193044237 +0000 UTC m=+0.735736271 container init 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:20:27 np0005465604 podman[435873]: 2025-10-02 09:20:27.207581491 +0000 UTC m=+0.750273435 container start 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 05:20:27 np0005465604 brave_torvalds[435889]: 167 167
Oct  2 05:20:27 np0005465604 systemd[1]: libpod-73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362.scope: Deactivated successfully.
Oct  2 05:20:27 np0005465604 podman[435873]: 2025-10-02 09:20:27.360461167 +0000 UTC m=+0.903153151 container attach 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 05:20:27 np0005465604 podman[435873]: 2025-10-02 09:20:27.363026876 +0000 UTC m=+0.905718880 container died 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:20:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3f8c318215ab2256c81362e4aa47c528e318ebc1da26c38e7aeb2cbfba238e4e-merged.mount: Deactivated successfully.
Oct  2 05:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:20:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:20:28
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.data']
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:20:28 np0005465604 nova_compute[260603]: 2025-10-02 09:20:28.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 25 KiB/s wr, 9 op/s
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:20:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:20:28 np0005465604 podman[435873]: 2025-10-02 09:20:28.813708309 +0000 UTC m=+2.356400283 container remove 73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_torvalds, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:20:28 np0005465604 systemd[1]: libpod-conmon-73a2bdb6f9e0670b78dc4c6aa68153b9c968f123cc30f50e21a9ce765390f362.scope: Deactivated successfully.
Oct  2 05:20:29 np0005465604 podman[435914]: 2025-10-02 09:20:29.032339787 +0000 UTC m=+0.027177550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:20:29 np0005465604 podman[435914]: 2025-10-02 09:20:29.259013727 +0000 UTC m=+0.253851410 container create 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 05:20:29 np0005465604 systemd[1]: Started libpod-conmon-261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3.scope.
Oct  2 05:20:29 np0005465604 nova_compute[260603]: 2025-10-02 09:20:29.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:20:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d77d736fdf91485204957be5779e4c7edf8c34bbd7aa92c1f43abf56f3925d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d77d736fdf91485204957be5779e4c7edf8c34bbd7aa92c1f43abf56f3925d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d77d736fdf91485204957be5779e4c7edf8c34bbd7aa92c1f43abf56f3925d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d77d736fdf91485204957be5779e4c7edf8c34bbd7aa92c1f43abf56f3925d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:29 np0005465604 podman[435914]: 2025-10-02 09:20:29.658077101 +0000 UTC m=+0.652914844 container init 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 05:20:29 np0005465604 podman[435914]: 2025-10-02 09:20:29.668486096 +0000 UTC m=+0.663323759 container start 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 05:20:29 np0005465604 podman[435914]: 2025-10-02 09:20:29.814879259 +0000 UTC m=+0.809716952 container attach 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:20:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 25 KiB/s wr, 9 op/s
Oct  2 05:20:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:30 np0005465604 zealous_panini[435930]: {
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:    "0": [
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:        {
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "devices": [
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "/dev/loop3"
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            ],
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_name": "ceph_lv0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_size": "21470642176",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "name": "ceph_lv0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "tags": {
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.cluster_name": "ceph",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.crush_device_class": "",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.encrypted": "0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.osd_id": "0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.type": "block",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.vdo": "0"
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            },
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "type": "block",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "vg_name": "ceph_vg0"
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:        }
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:    ],
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:    "1": [
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:        {
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "devices": [
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "/dev/loop4"
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            ],
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_name": "ceph_lv1",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_size": "21470642176",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "name": "ceph_lv1",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "tags": {
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.cluster_name": "ceph",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.crush_device_class": "",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.encrypted": "0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.osd_id": "1",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.type": "block",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.vdo": "0"
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            },
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "type": "block",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "vg_name": "ceph_vg1"
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:        }
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:    ],
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:    "2": [
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:        {
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "devices": [
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "/dev/loop5"
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            ],
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_name": "ceph_lv2",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_size": "21470642176",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "name": "ceph_lv2",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "tags": {
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.cluster_name": "ceph",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.crush_device_class": "",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.encrypted": "0",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.osd_id": "2",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.type": "block",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:                "ceph.vdo": "0"
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            },
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "type": "block",
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:            "vg_name": "ceph_vg2"
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:        }
Oct  2 05:20:30 np0005465604 zealous_panini[435930]:    ]
Oct  2 05:20:30 np0005465604 zealous_panini[435930]: }
Oct  2 05:20:30 np0005465604 systemd[1]: libpod-261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3.scope: Deactivated successfully.
Oct  2 05:20:30 np0005465604 podman[435940]: 2025-10-02 09:20:30.737339652 +0000 UTC m=+0.034134977 container died 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 05:20:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d2d77d736fdf91485204957be5779e4c7edf8c34bbd7aa92c1f43abf56f3925d-merged.mount: Deactivated successfully.
Oct  2 05:20:31 np0005465604 podman[435940]: 2025-10-02 09:20:31.13426588 +0000 UTC m=+0.431061155 container remove 261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_panini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 05:20:31 np0005465604 systemd[1]: libpod-conmon-261694cbcb60e35dd28910a654987c37c5f6209b81c997be473330e091e10ea3.scope: Deactivated successfully.
Oct  2 05:20:32 np0005465604 podman[436099]: 2025-10-02 09:20:31.909196584 +0000 UTC m=+0.029186703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:20:32 np0005465604 podman[436099]: 2025-10-02 09:20:32.036392957 +0000 UTC m=+0.156383066 container create 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 05:20:32 np0005465604 systemd[1]: Started libpod-conmon-69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133.scope.
Oct  2 05:20:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:20:32 np0005465604 podman[436099]: 2025-10-02 09:20:32.277937701 +0000 UTC m=+0.397927870 container init 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:20:32 np0005465604 podman[436099]: 2025-10-02 09:20:32.291310569 +0000 UTC m=+0.411300638 container start 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct  2 05:20:32 np0005465604 busy_ramanujan[436116]: 167 167
Oct  2 05:20:32 np0005465604 systemd[1]: libpod-69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133.scope: Deactivated successfully.
Oct  2 05:20:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 154 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 26 KiB/s wr, 27 op/s
Oct  2 05:20:32 np0005465604 podman[436099]: 2025-10-02 09:20:32.438567629 +0000 UTC m=+0.558557748 container attach 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:20:32 np0005465604 podman[436099]: 2025-10-02 09:20:32.439919662 +0000 UTC m=+0.559909741 container died 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:20:32 np0005465604 nova_compute[260603]: 2025-10-02 09:20:32.485 2 INFO nova.virt.libvirt.driver [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Deleting instance files /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8_del#033[00m
Oct  2 05:20:32 np0005465604 nova_compute[260603]: 2025-10-02 09:20:32.489 2 INFO nova.virt.libvirt.driver [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Deletion of /var/lib/nova/instances/8e2e089b-fd7f-460e-ba3f-180204d961f8_del complete#033[00m
Oct  2 05:20:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-84ea02ab45280cbfe5a151b797d108c946668ec573ebe312aa7eaad1fdb3cf22-merged.mount: Deactivated successfully.
Oct  2 05:20:32 np0005465604 nova_compute[260603]: 2025-10-02 09:20:32.588 2 INFO nova.compute.manager [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Took 9.18 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:20:32 np0005465604 nova_compute[260603]: 2025-10-02 09:20:32.588 2 DEBUG oslo.service.loopingcall [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:20:32 np0005465604 nova_compute[260603]: 2025-10-02 09:20:32.589 2 DEBUG nova.compute.manager [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:20:32 np0005465604 nova_compute[260603]: 2025-10-02 09:20:32.589 2 DEBUG nova.network.neutron [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:20:32 np0005465604 podman[436099]: 2025-10-02 09:20:32.66656347 +0000 UTC m=+0.786553549 container remove 69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ramanujan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 05:20:32 np0005465604 systemd[1]: libpod-conmon-69e2376eb414759870e575ebb996fa79b4b67bf769a41240d2ac871d8392e133.scope: Deactivated successfully.
Oct  2 05:20:32 np0005465604 podman[436141]: 2025-10-02 09:20:32.901826209 +0000 UTC m=+0.029216813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:20:33 np0005465604 podman[436141]: 2025-10-02 09:20:33.050388519 +0000 UTC m=+0.177779023 container create 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 05:20:33 np0005465604 nova_compute[260603]: 2025-10-02 09:20:33.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:33 np0005465604 systemd[1]: Started libpod-conmon-2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028.scope.
Oct  2 05:20:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:20:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b330574442d53c064390e5722c4673f2eee2acd66dc47618ac589247635f086e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b330574442d53c064390e5722c4673f2eee2acd66dc47618ac589247635f086e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b330574442d53c064390e5722c4673f2eee2acd66dc47618ac589247635f086e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b330574442d53c064390e5722c4673f2eee2acd66dc47618ac589247635f086e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:20:33 np0005465604 podman[436141]: 2025-10-02 09:20:33.273017423 +0000 UTC m=+0.400408017 container init 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 05:20:33 np0005465604 podman[436141]: 2025-10-02 09:20:33.281165408 +0000 UTC m=+0.408555912 container start 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:20:33 np0005465604 podman[436141]: 2025-10-02 09:20:33.337216208 +0000 UTC m=+0.464606812 container attach 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:20:33 np0005465604 podman[436160]: 2025-10-02 09:20:33.404599192 +0000 UTC m=+0.216475012 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 05:20:33 np0005465604 podman[436158]: 2025-10-02 09:20:33.428087436 +0000 UTC m=+0.242103342 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 05:20:33 np0005465604 nova_compute[260603]: 2025-10-02 09:20:33.451 2 DEBUG nova.network.neutron [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:20:33 np0005465604 nova_compute[260603]: 2025-10-02 09:20:33.504 2 DEBUG nova.compute.manager [req-e2030448-6bfd-4c6e-a13d-a91040060d90 req-6fbc048f-8e25-4f0a-9a54-5bcb72de0158 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Received event network-vif-deleted-24b323af-568c-4843-a858-dab3f5df627f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:20:33 np0005465604 nova_compute[260603]: 2025-10-02 09:20:33.504 2 INFO nova.compute.manager [req-e2030448-6bfd-4c6e-a13d-a91040060d90 req-6fbc048f-8e25-4f0a-9a54-5bcb72de0158 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Neutron deleted interface 24b323af-568c-4843-a858-dab3f5df627f; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:20:33 np0005465604 nova_compute[260603]: 2025-10-02 09:20:33.504 2 DEBUG nova.network.neutron [req-e2030448-6bfd-4c6e-a13d-a91040060d90 req-6fbc048f-8e25-4f0a-9a54-5bcb72de0158 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:20:33 np0005465604 nova_compute[260603]: 2025-10-02 09:20:33.547 2 INFO nova.compute.manager [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Took 0.96 seconds to deallocate network for instance.#033[00m
Oct  2 05:20:33 np0005465604 nova_compute[260603]: 2025-10-02 09:20:33.558 2 DEBUG nova.compute.manager [req-e2030448-6bfd-4c6e-a13d-a91040060d90 req-6fbc048f-8e25-4f0a-9a54-5bcb72de0158 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Detach interface failed, port_id=24b323af-568c-4843-a858-dab3f5df627f, reason: Instance 8e2e089b-fd7f-460e-ba3f-180204d961f8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:20:33 np0005465604 nova_compute[260603]: 2025-10-02 09:20:33.646 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:33 np0005465604 nova_compute[260603]: 2025-10-02 09:20:33.647 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:33 np0005465604 nova_compute[260603]: 2025-10-02 09:20:33.730 2 DEBUG oslo_concurrency.processutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:20:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:20:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/893269694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:20:34 np0005465604 nova_compute[260603]: 2025-10-02 09:20:34.160 2 DEBUG oslo_concurrency.processutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:20:34 np0005465604 nova_compute[260603]: 2025-10-02 09:20:34.168 2 DEBUG nova.compute.provider_tree [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:20:34 np0005465604 nova_compute[260603]: 2025-10-02 09:20:34.189 2 DEBUG nova.scheduler.client.report [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:20:34 np0005465604 nova_compute[260603]: 2025-10-02 09:20:34.238 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:34 np0005465604 tender_leakey[436157]: {
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "osd_id": 2,
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "type": "bluestore"
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:    },
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "osd_id": 1,
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "type": "bluestore"
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:    },
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "osd_id": 0,
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:        "type": "bluestore"
Oct  2 05:20:34 np0005465604 tender_leakey[436157]:    }
Oct  2 05:20:34 np0005465604 tender_leakey[436157]: }
Oct  2 05:20:34 np0005465604 nova_compute[260603]: 2025-10-02 09:20:34.288 2 INFO nova.scheduler.client.report [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 8e2e089b-fd7f-460e-ba3f-180204d961f8#033[00m
Oct  2 05:20:34 np0005465604 systemd[1]: libpod-2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028.scope: Deactivated successfully.
Oct  2 05:20:34 np0005465604 podman[436141]: 2025-10-02 09:20:34.323933768 +0000 UTC m=+1.451324282 container died 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct  2 05:20:34 np0005465604 systemd[1]: libpod-2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028.scope: Consumed 1.042s CPU time.
Oct  2 05:20:34 np0005465604 nova_compute[260603]: 2025-10-02 09:20:34.368 2 DEBUG oslo_concurrency.lockutils [None req-87b04adb-4be8-44e7-9a35-e5cfa3e66952 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "8e2e089b-fd7f-460e-ba3f-180204d961f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Oct  2 05:20:34 np0005465604 nova_compute[260603]: 2025-10-02 09:20:34.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b330574442d53c064390e5722c4673f2eee2acd66dc47618ac589247635f086e-merged.mount: Deactivated successfully.
Oct  2 05:20:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:34.858 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:34.859 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:34.860 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:35 np0005465604 podman[436141]: 2025-10-02 09:20:35.063070165 +0000 UTC m=+2.190460699 container remove 2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_leakey, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:20:35 np0005465604 systemd[1]: libpod-conmon-2ca6f7ce56f8fd05c5636dc3fa2d2ab2b4568d81f9df719f1183b3ab3750d028.scope: Deactivated successfully.
Oct  2 05:20:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:20:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:20:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:20:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:20:35 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a8fd93ee-7c27-40e9-9582-15b9b037e9a7 does not exist
Oct  2 05:20:35 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ce33eac2-1642-4d77-a90f-da9573586a5f does not exist
Oct  2 05:20:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:35 np0005465604 nova_compute[260603]: 2025-10-02 09:20:35.947 2 DEBUG nova.compute.manager [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-changed-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:20:35 np0005465604 nova_compute[260603]: 2025-10-02 09:20:35.949 2 DEBUG nova.compute.manager [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing instance network info cache due to event network-changed-dac6b720-36af-4048-863c-cd259065f82e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:20:35 np0005465604 nova_compute[260603]: 2025-10-02 09:20:35.949 2 DEBUG oslo_concurrency.lockutils [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:20:35 np0005465604 nova_compute[260603]: 2025-10-02 09:20:35.950 2 DEBUG oslo_concurrency.lockutils [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:20:35 np0005465604 nova_compute[260603]: 2025-10-02 09:20:35.950 2 DEBUG nova.network.neutron [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Refreshing network info cache for port dac6b720-36af-4048-863c-cd259065f82e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.089 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.089 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.090 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.090 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.090 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.092 2 INFO nova.compute.manager [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Terminating instance#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.093 2 DEBUG nova.compute.manager [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:20:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:20:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:20:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 27 op/s
Oct  2 05:20:36 np0005465604 kernel: tapdac6b720-36 (unregistering): left promiscuous mode
Oct  2 05:20:36 np0005465604 NetworkManager[45129]: <info>  [1759396836.3970] device (tapdac6b720-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:36 np0005465604 ovn_controller[152344]: 2025-10-02T09:20:36Z|01681|binding|INFO|Releasing lport dac6b720-36af-4048-863c-cd259065f82e from this chassis (sb_readonly=0)
Oct  2 05:20:36 np0005465604 ovn_controller[152344]: 2025-10-02T09:20:36Z|01682|binding|INFO|Setting lport dac6b720-36af-4048-863c-cd259065f82e down in Southbound
Oct  2 05:20:36 np0005465604 ovn_controller[152344]: 2025-10-02T09:20:36Z|01683|binding|INFO|Removing iface tapdac6b720-36 ovn-installed in OVS
Oct  2 05:20:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:36.421 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:0a:a0 10.100.0.12 2001:db8:0:1:f816:3eff:fe6a:aa0 2001:db8::f816:3eff:fe6a:aa0'], port_security=['fa:16:3e:6a:0a:a0 10.100.0.12 2001:db8:0:1:f816:3eff:fe6a:aa0 2001:db8::f816:3eff:fe6a:aa0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8:0:1:f816:3eff:fe6a:aa0/64 2001:db8::f816:3eff:fe6a:aa0/64', 'neutron:device_id': 'd2fca23c-2574-4642-b931-f363d59bd5a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f167374-6ddb-4a61-a8ba-3dc4425d2006', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d143d452-1c93-4a72-b1b5-b9ed8c58f219, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=dac6b720-36af-4048-863c-cd259065f82e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:20:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:36.422 162357 INFO neutron.agent.ovn.metadata.agent [-] Port dac6b720-36af-4048-863c-cd259065f82e in datapath 01f3e9aa-20f8-48ae-9b80-cbd0ba476303 unbound from our chassis#033[00m
Oct  2 05:20:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:36.423 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01f3e9aa-20f8-48ae-9b80-cbd0ba476303, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:20:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:36.425 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[7dea8400-3660-41c1-a631-9eadde9a8db7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:36 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:36.427 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303 namespace which is not needed anymore#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:36 np0005465604 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Deactivated successfully.
Oct  2 05:20:36 np0005465604 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Consumed 17.420s CPU time.
Oct  2 05:20:36 np0005465604 systemd-machined[214636]: Machine qemu-183-instance-00000095 terminated.
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.550 2 INFO nova.virt.libvirt.driver [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Instance destroyed successfully.#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.551 2 DEBUG nova.objects.instance [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid d2fca23c-2574-4642-b931-f363d59bd5a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.583 2 DEBUG nova.virt.libvirt.vif [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:18:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-39754368',display_name='tempest-TestGettingAddress-server-39754368',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-39754368',id=149,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcq6b1IVAJMqrCeKypbws2/0Rs5i6q90XYETpVyQMku/Sh9hYU1xPZGh64+rGmgREbgZiLmTHs8bnO5pL74gp5+lt+bQjj8c2EhhfVbuK3Dnp+gnRVW0xWshm1hg5jBSA==',key_name='tempest-TestGettingAddress-505095675',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:19:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-eaeyiug0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:19:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=d2fca23c-2574-4642-b931-f363d59bd5a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.584 2 DEBUG nova.network.os_vif_util [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.585 2 DEBUG nova.network.os_vif_util [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.586 2 DEBUG os_vif [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.588 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdac6b720-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:36 np0005465604 nova_compute[260603]: 2025-10-02 09:20:36.644 2 INFO os_vif [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:0a:a0,bridge_name='br-int',has_traffic_filtering=True,id=dac6b720-36af-4048-863c-cd259065f82e,network=Network(01f3e9aa-20f8-48ae-9b80-cbd0ba476303),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdac6b720-36')#033[00m
Oct  2 05:20:36 np0005465604 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [NOTICE]   (431715) : haproxy version is 2.8.14-c23fe91
Oct  2 05:20:36 np0005465604 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [NOTICE]   (431715) : path to executable is /usr/sbin/haproxy
Oct  2 05:20:36 np0005465604 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [WARNING]  (431715) : Exiting Master process...
Oct  2 05:20:36 np0005465604 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [ALERT]    (431715) : Current worker (431717) exited with code 143 (Terminated)
Oct  2 05:20:36 np0005465604 neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303[431703]: [WARNING]  (431715) : All workers exited. Exiting... (0)
Oct  2 05:20:36 np0005465604 systemd[1]: libpod-77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8.scope: Deactivated successfully.
Oct  2 05:20:36 np0005465604 podman[436351]: 2025-10-02 09:20:36.714850587 +0000 UTC m=+0.156993845 container died 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 05:20:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8-userdata-shm.mount: Deactivated successfully.
Oct  2 05:20:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8fda16057209f96f06d9ded64e1e5d1b0e039e5d2ee796001d3308370659566c-merged.mount: Deactivated successfully.
Oct  2 05:20:36 np0005465604 podman[436351]: 2025-10-02 09:20:36.932212596 +0000 UTC m=+0.374355834 container cleanup 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:20:36 np0005465604 systemd[1]: libpod-conmon-77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8.scope: Deactivated successfully.
Oct  2 05:20:37 np0005465604 podman[436402]: 2025-10-02 09:20:37.18654958 +0000 UTC m=+0.214698237 container remove 77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 05:20:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.200 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c06c9e2b-36bb-46aa-87fa-6bbdd76eb7d6]: (4, ('Thu Oct  2 09:20:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303 (77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8)\n77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8\nThu Oct  2 09:20:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303 (77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8)\n77a818e6b68b36e739557d18237affa6ea309f9c1ab3a4b176a6967a7359b7d8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.204 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8f9452-77e0-4505-8948-849144f66974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.205 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01f3e9aa-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:20:37 np0005465604 nova_compute[260603]: 2025-10-02 09:20:37.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:37 np0005465604 kernel: tap01f3e9aa-20: left promiscuous mode
Oct  2 05:20:37 np0005465604 nova_compute[260603]: 2025-10-02 09:20:37.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.232 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[62cfa559-6ead-4d33-a9f4-a7d8f6dd7e12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.265 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[c37ca780-e433-444c-8c86-4bf109df3f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.267 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9401b8-2beb-41ed-a0fc-46432e7e4a6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.300 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[223cebff-06e5-48f0-812f-5c124d4746cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761964, 'reachable_time': 35141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 436431, 'error': None, 'target': 'ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.304 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01f3e9aa-20f8-48ae-9b80-cbd0ba476303 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:20:37 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:20:37.305 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[33fbf0c0-d4d5-4acf-89f5-6d99f95f4d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:20:37 np0005465604 systemd[1]: run-netns-ovnmeta\x2d01f3e9aa\x2d20f8\x2d48ae\x2d9b80\x2dcbd0ba476303.mount: Deactivated successfully.
Oct  2 05:20:37 np0005465604 nova_compute[260603]: 2025-10-02 09:20:37.331 2 DEBUG nova.compute.manager [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-unplugged-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:20:37 np0005465604 nova_compute[260603]: 2025-10-02 09:20:37.332 2 DEBUG oslo_concurrency.lockutils [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:37 np0005465604 nova_compute[260603]: 2025-10-02 09:20:37.332 2 DEBUG oslo_concurrency.lockutils [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:37 np0005465604 nova_compute[260603]: 2025-10-02 09:20:37.333 2 DEBUG oslo_concurrency.lockutils [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:37 np0005465604 nova_compute[260603]: 2025-10-02 09:20:37.333 2 DEBUG nova.compute.manager [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] No waiting events found dispatching network-vif-unplugged-dac6b720-36af-4048-863c-cd259065f82e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:20:37 np0005465604 nova_compute[260603]: 2025-10-02 09:20:37.333 2 DEBUG nova.compute.manager [req-cea775ea-b184-4352-9705-2b25a6c84fa9 req-0cc894b4-75cf-4f66-943f-ca04b53e336b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-unplugged-dac6b720-36af-4048-863c-cd259065f82e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:20:37 np0005465604 podman[436415]: 2025-10-02 09:20:37.376537215 +0000 UTC m=+0.114154847 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Oct  2 05:20:37 np0005465604 podman[436416]: 2025-10-02 09:20:37.399095619 +0000 UTC m=+0.130449625 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:20:37 np0005465604 nova_compute[260603]: 2025-10-02 09:20:37.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:20:38 np0005465604 nova_compute[260603]: 2025-10-02 09:20:38.000 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Triggering sync for uuid d2fca23c-2574-4642-b931-f363d59bd5a7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 05:20:38 np0005465604 nova_compute[260603]: 2025-10-02 09:20:38.001 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:38 np0005465604 nova_compute[260603]: 2025-10-02 09:20:38.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 8.8 KiB/s wr, 36 op/s
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0004291100963729895 of space, bias 1.0, pg target 0.12873302891189686 quantized to 32 (current 32)
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:20:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:20:39 np0005465604 nova_compute[260603]: 2025-10-02 09:20:39.450 2 DEBUG nova.compute.manager [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:20:39 np0005465604 nova_compute[260603]: 2025-10-02 09:20:39.452 2 DEBUG oslo_concurrency.lockutils [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:39 np0005465604 nova_compute[260603]: 2025-10-02 09:20:39.452 2 DEBUG oslo_concurrency.lockutils [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:39 np0005465604 nova_compute[260603]: 2025-10-02 09:20:39.452 2 DEBUG oslo_concurrency.lockutils [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:39 np0005465604 nova_compute[260603]: 2025-10-02 09:20:39.453 2 DEBUG nova.compute.manager [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] No waiting events found dispatching network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:20:39 np0005465604 nova_compute[260603]: 2025-10-02 09:20:39.453 2 WARNING nova.compute.manager [req-c25bfbc8-077f-4078-9e40-150551b52fda req-92a41df7-4ba7-4940-932f-52a97b7e5466 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received unexpected event network-vif-plugged-dac6b720-36af-4048-863c-cd259065f82e for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:20:39 np0005465604 nova_compute[260603]: 2025-10-02 09:20:39.463 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396824.4625225, 8e2e089b-fd7f-460e-ba3f-180204d961f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:20:39 np0005465604 nova_compute[260603]: 2025-10-02 09:20:39.464 2 INFO nova.compute.manager [-] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:20:39 np0005465604 nova_compute[260603]: 2025-10-02 09:20:39.499 2 DEBUG nova.compute.manager [None req-1a1c1383-49eb-407e-adf7-e296f0daace9 - - - - - -] [instance: 8e2e089b-fd7f-460e-ba3f-180204d961f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:20:40 np0005465604 nova_compute[260603]: 2025-10-02 09:20:40.005 2 INFO nova.virt.libvirt.driver [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Deleting instance files /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7_del#033[00m
Oct  2 05:20:40 np0005465604 nova_compute[260603]: 2025-10-02 09:20:40.006 2 INFO nova.virt.libvirt.driver [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Deletion of /var/lib/nova/instances/d2fca23c-2574-4642-b931-f363d59bd5a7_del complete#033[00m
Oct  2 05:20:40 np0005465604 nova_compute[260603]: 2025-10-02 09:20:40.126 2 INFO nova.compute.manager [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Took 4.03 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:20:40 np0005465604 nova_compute[260603]: 2025-10-02 09:20:40.127 2 DEBUG oslo.service.loopingcall [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:20:40 np0005465604 nova_compute[260603]: 2025-10-02 09:20:40.127 2 DEBUG nova.compute.manager [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:20:40 np0005465604 nova_compute[260603]: 2025-10-02 09:20:40.127 2 DEBUG nova.network.neutron [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:20:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct  2 05:20:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:40 np0005465604 nova_compute[260603]: 2025-10-02 09:20:40.749 2 DEBUG nova.network.neutron [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updated VIF entry in instance network info cache for port dac6b720-36af-4048-863c-cd259065f82e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:20:40 np0005465604 nova_compute[260603]: 2025-10-02 09:20:40.749 2 DEBUG nova.network.neutron [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [{"id": "dac6b720-36af-4048-863c-cd259065f82e", "address": "fa:16:3e:6a:0a:a0", "network": {"id": "01f3e9aa-20f8-48ae-9b80-cbd0ba476303", "bridge": "br-int", "label": "tempest-network-smoke--465855463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe6a:aa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdac6b720-36", "ovs_interfaceid": "dac6b720-36af-4048-863c-cd259065f82e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:20:40 np0005465604 nova_compute[260603]: 2025-10-02 09:20:40.781 2 DEBUG oslo_concurrency.lockutils [req-f1b27a56-6bf6-4ed1-9272-2baacd54ac73 req-c00a9c4a-5414-4d4c-8ea1-b26a62c582a5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-d2fca23c-2574-4642-b931-f363d59bd5a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.248 2 DEBUG nova.network.neutron [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.305 2 INFO nova.compute.manager [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Took 1.18 seconds to deallocate network for instance.#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.324 2 DEBUG nova.compute.manager [req-2a161286-2816-477a-a6f7-618f3a30206c req-cac66c92-15e1-4904-82bf-9c809a82c83b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Received event network-vif-deleted-dac6b720-36af-4048-863c-cd259065f82e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.324 2 INFO nova.compute.manager [req-2a161286-2816-477a-a6f7-618f3a30206c req-cac66c92-15e1-4904-82bf-9c809a82c83b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Neutron deleted interface dac6b720-36af-4048-863c-cd259065f82e; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.324 2 DEBUG nova.network.neutron [req-2a161286-2816-477a-a6f7-618f3a30206c req-cac66c92-15e1-4904-82bf-9c809a82c83b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.374 2 DEBUG nova.compute.manager [req-2a161286-2816-477a-a6f7-618f3a30206c req-cac66c92-15e1-4904-82bf-9c809a82c83b 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Detach interface failed, port_id=dac6b720-36af-4048-863c-cd259065f82e, reason: Instance d2fca23c-2574-4642-b931-f363d59bd5a7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.414 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.415 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.466 2 DEBUG oslo_concurrency.processutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:20:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869045681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.939 2 DEBUG oslo_concurrency.processutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.945 2 DEBUG nova.compute.provider_tree [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:20:41 np0005465604 nova_compute[260603]: 2025-10-02 09:20:41.978 2 DEBUG nova.scheduler.client.report [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:20:42 np0005465604 nova_compute[260603]: 2025-10-02 09:20:42.022 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:42 np0005465604 nova_compute[260603]: 2025-10-02 09:20:42.054 2 INFO nova.scheduler.client.report [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance d2fca23c-2574-4642-b931-f363d59bd5a7#033[00m
Oct  2 05:20:42 np0005465604 nova_compute[260603]: 2025-10-02 09:20:42.194 2 DEBUG oslo_concurrency.lockutils [None req-e3b146db-135a-49a4-8c05-f125d3ad0098 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:42 np0005465604 nova_compute[260603]: 2025-10-02 09:20:42.195 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:20:42 np0005465604 nova_compute[260603]: 2025-10-02 09:20:42.195 2 INFO nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Oct  2 05:20:42 np0005465604 nova_compute[260603]: 2025-10-02 09:20:42.195 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "d2fca23c-2574-4642-b931-f363d59bd5a7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:20:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.0 KiB/s wr, 44 op/s
Oct  2 05:20:43 np0005465604 nova_compute[260603]: 2025-10-02 09:20:43.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:43 np0005465604 nova_compute[260603]: 2025-10-02 09:20:43.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:20:43 np0005465604 nova_compute[260603]: 2025-10-02 09:20:43.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:20:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Oct  2 05:20:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct  2 05:20:46 np0005465604 nova_compute[260603]: 2025-10-02 09:20:46.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:48 np0005465604 nova_compute[260603]: 2025-10-02 09:20:48.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct  2 05:20:50 np0005465604 nova_compute[260603]: 2025-10-02 09:20:50.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:50 np0005465604 nova_compute[260603]: 2025-10-02 09:20:50.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Oct  2 05:20:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:51 np0005465604 nova_compute[260603]: 2025-10-02 09:20:51.548 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396836.5464284, d2fca23c-2574-4642-b931-f363d59bd5a7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:20:51 np0005465604 nova_compute[260603]: 2025-10-02 09:20:51.548 2 INFO nova.compute.manager [-] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:20:51 np0005465604 nova_compute[260603]: 2025-10-02 09:20:51.575 2 DEBUG nova.compute.manager [None req-b0439db5-365e-4ce3-89da-5a85944fd82c - - - - - -] [instance: d2fca23c-2574-4642-b931-f363d59bd5a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:20:51 np0005465604 nova_compute[260603]: 2025-10-02 09:20:51.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Oct  2 05:20:53 np0005465604 nova_compute[260603]: 2025-10-02 09:20:53.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 341 B/s wr, 3 op/s
Oct  2 05:20:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:20:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:20:56 np0005465604 nova_compute[260603]: 2025-10-02 09:20:56.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:20:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:20:58 np0005465604 nova_compute[260603]: 2025-10-02 09:20:58.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:20:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:01 np0005465604 nova_compute[260603]: 2025-10-02 09:21:01.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:03 np0005465604 nova_compute[260603]: 2025-10-02 09:21:03.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:04 np0005465604 podman[436483]: 2025-10-02 09:21:04.034951608 +0000 UTC m=+0.095098981 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:21:04 np0005465604 podman[436482]: 2025-10-02 09:21:04.094932041 +0000 UTC m=+0.164698826 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:21:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:06 np0005465604 nova_compute[260603]: 2025-10-02 09:21:06.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:07 np0005465604 nova_compute[260603]: 2025-10-02 09:21:07.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:21:08 np0005465604 podman[436531]: 2025-10-02 09:21:08.000013585 +0000 UTC m=+0.059637734 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:21:08 np0005465604 podman[436530]: 2025-10-02 09:21:08.021691881 +0000 UTC m=+0.091161909 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 05:21:08 np0005465604 nova_compute[260603]: 2025-10-02 09:21:08.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:10 np0005465604 nova_compute[260603]: 2025-10-02 09:21:10.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:21:10 np0005465604 nova_compute[260603]: 2025-10-02 09:21:10.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:21:10 np0005465604 nova_compute[260603]: 2025-10-02 09:21:10.546 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:21:10 np0005465604 nova_compute[260603]: 2025-10-02 09:21:10.546 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:21:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:11.190 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:0e:c9 10.100.0.2 2001:db8::f816:3eff:fe52:ec9'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe52:ec9/64', 'neutron:device_id': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=886ca98a-7662-4ca0-8c8e-c35442cbbef0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f52ffbc5-b75c-4a8b-a490-21571dd7145a) old=Port_Binding(mac=['fa:16:3e:52:0e:c9 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:21:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:11.192 162357 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f52ffbc5-b75c-4a8b-a490-21571dd7145a in datapath bb79a700-778c-4189-bea4-a6e50510de5b updated#033[00m
Oct  2 05:21:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:11.193 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bb79a700-778c-4189-bea4-a6e50510de5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:21:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:11.194 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e1252e45-0c08-48b9-9619-282081dffe25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:11 np0005465604 nova_compute[260603]: 2025-10-02 09:21:11.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:12 np0005465604 nova_compute[260603]: 2025-10-02 09:21:12.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:21:13 np0005465604 nova_compute[260603]: 2025-10-02 09:21:13.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:15 np0005465604 nova_compute[260603]: 2025-10-02 09:21:15.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:21:15 np0005465604 nova_compute[260603]: 2025-10-02 09:21:15.588 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:21:15 np0005465604 nova_compute[260603]: 2025-10-02 09:21:15.588 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:21:15 np0005465604 nova_compute[260603]: 2025-10-02 09:21:15.588 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:21:15 np0005465604 nova_compute[260603]: 2025-10-02 09:21:15.588 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:21:15 np0005465604 nova_compute[260603]: 2025-10-02 09:21:15.589 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:21:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:21:16 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3377665194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:21:16 np0005465604 nova_compute[260603]: 2025-10-02 09:21:16.027 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:21:16 np0005465604 nova_compute[260603]: 2025-10-02 09:21:16.252 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:21:16 np0005465604 nova_compute[260603]: 2025-10-02 09:21:16.255 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3586MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:21:16 np0005465604 nova_compute[260603]: 2025-10-02 09:21:16.256 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:21:16 np0005465604 nova_compute[260603]: 2025-10-02 09:21:16.257 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:21:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:16 np0005465604 nova_compute[260603]: 2025-10-02 09:21:16.498 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:21:16 np0005465604 nova_compute[260603]: 2025-10-02 09:21:16.498 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:21:16 np0005465604 nova_compute[260603]: 2025-10-02 09:21:16.589 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:21:16 np0005465604 nova_compute[260603]: 2025-10-02 09:21:16.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:21:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525364514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:21:17 np0005465604 nova_compute[260603]: 2025-10-02 09:21:17.025 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:21:17 np0005465604 nova_compute[260603]: 2025-10-02 09:21:17.032 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:21:17 np0005465604 nova_compute[260603]: 2025-10-02 09:21:17.139 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:21:17 np0005465604 nova_compute[260603]: 2025-10-02 09:21:17.211 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:21:17 np0005465604 nova_compute[260603]: 2025-10-02 09:21:17.212 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:21:17 np0005465604 nova_compute[260603]: 2025-10-02 09:21:17.852 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:21:17 np0005465604 nova_compute[260603]: 2025-10-02 09:21:17.852 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:21:18 np0005465604 nova_compute[260603]: 2025-10-02 09:21:18.100 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:21:18 np0005465604 nova_compute[260603]: 2025-10-02 09:21:18.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:18 np0005465604 nova_compute[260603]: 2025-10-02 09:21:18.335 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:21:18 np0005465604 nova_compute[260603]: 2025-10-02 09:21:18.336 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:21:18 np0005465604 nova_compute[260603]: 2025-10-02 09:21:18.349 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:21:18 np0005465604 nova_compute[260603]: 2025-10-02 09:21:18.350 2 INFO nova.compute.claims [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:21:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:18 np0005465604 nova_compute[260603]: 2025-10-02 09:21:18.878 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:21:19 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:21:19 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741520235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:21:19 np0005465604 nova_compute[260603]: 2025-10-02 09:21:19.339 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:21:19 np0005465604 nova_compute[260603]: 2025-10-02 09:21:19.350 2 DEBUG nova.compute.provider_tree [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:21:19 np0005465604 nova_compute[260603]: 2025-10-02 09:21:19.611 2 DEBUG nova.scheduler.client.report [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:21:19 np0005465604 nova_compute[260603]: 2025-10-02 09:21:19.846 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:21:19 np0005465604 nova_compute[260603]: 2025-10-02 09:21:19.847 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.037 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.038 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.208 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.209 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.315 2 INFO nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.357 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:21:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.512 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.514 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.514 2 INFO nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Creating image(s)#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.545 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.571 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.607 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.612 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.667 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.711 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.712 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.713 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.713 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.741 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:21:20 np0005465604 nova_compute[260603]: 2025-10-02 09:21:20.747 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:21:21 np0005465604 nova_compute[260603]: 2025-10-02 09:21:21.084 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:21:21 np0005465604 nova_compute[260603]: 2025-10-02 09:21:21.146 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:21:21 np0005465604 nova_compute[260603]: 2025-10-02 09:21:21.205 2 DEBUG nova.policy [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:21:21 np0005465604 nova_compute[260603]: 2025-10-02 09:21:21.241 2 DEBUG nova.objects.instance [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:21:21 np0005465604 nova_compute[260603]: 2025-10-02 09:21:21.296 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:21:21 np0005465604 nova_compute[260603]: 2025-10-02 09:21:21.297 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Ensure instance console log exists: /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:21:21 np0005465604 nova_compute[260603]: 2025-10-02 09:21:21.298 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:21:21 np0005465604 nova_compute[260603]: 2025-10-02 09:21:21.298 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:21:21 np0005465604 nova_compute[260603]: 2025-10-02 09:21:21.298 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:21:21 np0005465604 nova_compute[260603]: 2025-10-02 09:21:21.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:21:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3078565076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:21:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:21:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3078565076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:21:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 62 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 706 KiB/s wr, 23 op/s
Oct  2 05:21:23 np0005465604 nova_compute[260603]: 2025-10-02 09:21:23.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:21:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:21:26 np0005465604 nova_compute[260603]: 2025-10-02 09:21:26.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:21:26 np0005465604 nova_compute[260603]: 2025-10-02 09:21:26.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:27 np0005465604 nova_compute[260603]: 2025-10-02 09:21:27.312 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Successfully created port: ce3d61f6-90a9-4869-ac95-256f110eea41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:21:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:21:28
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'volumes', '.mgr', 'default.rgw.meta', 'vms', '.rgw.root', 'images']
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:21:28 np0005465604 nova_compute[260603]: 2025-10-02 09:21:28.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:21:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:21:29 np0005465604 nova_compute[260603]: 2025-10-02 09:21:29.193 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Successfully updated port: ce3d61f6-90a9-4869-ac95-256f110eea41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:21:29 np0005465604 nova_compute[260603]: 2025-10-02 09:21:29.234 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:21:29 np0005465604 nova_compute[260603]: 2025-10-02 09:21:29.235 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:21:29 np0005465604 nova_compute[260603]: 2025-10-02 09:21:29.235 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:21:29 np0005465604 nova_compute[260603]: 2025-10-02 09:21:29.364 2 DEBUG nova.compute.manager [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 05:21:29 np0005465604 nova_compute[260603]: 2025-10-02 09:21:29.365 2 DEBUG nova.compute.manager [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing instance network info cache due to event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 05:21:29 np0005465604 nova_compute[260603]: 2025-10-02 09:21:29.366 2 DEBUG oslo_concurrency.lockutils [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 05:21:29 np0005465604 nova_compute[260603]: 2025-10-02 09:21:29.477 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 05:21:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:21:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:30 np0005465604 nova_compute[260603]: 2025-10-02 09:21:30.998 2 DEBUG nova.network.neutron [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.456 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.457 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Instance network_info: |[{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.457 2 DEBUG oslo_concurrency.lockutils [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.458 2 DEBUG nova.network.neutron [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.461 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Start _get_guest_xml network_info=[{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.468 2 WARNING nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.485 2 DEBUG nova.virt.libvirt.host [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.486 2 DEBUG nova.virt.libvirt.host [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.498 2 DEBUG nova.virt.libvirt.host [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.499 2 DEBUG nova.virt.libvirt.host [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.500 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.500 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.501 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.501 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.502 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.502 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.503 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.503 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.504 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.504 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.505 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.505 2 DEBUG nova.virt.hardware [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.510 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:21:31 np0005465604 nova_compute[260603]: 2025-10-02 09:21:31.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:21:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:21:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454536736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.043 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.069 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.074 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:21:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:21:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:21:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/871546043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.544 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.546 2 DEBUG nova.virt.libvirt.vif [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:21:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-885089396',display_name='tempest-TestGettingAddress-server-885089396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-885089396',id=151,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-mdzn0750',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:21:20Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=ccf01ee2-f5c6-4802-a9dd-f8c0423d4914,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.546 2 DEBUG nova.network.os_vif_util [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.547 2 DEBUG nova.network.os_vif_util [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.548 2 DEBUG nova.objects.instance [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.597 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  <uuid>ccf01ee2-f5c6-4802-a9dd-f8c0423d4914</uuid>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  <name>instance-00000097</name>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-885089396</nova:name>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:21:31</nova:creationTime>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <nova:port uuid="ce3d61f6-90a9-4869-ac95-256f110eea41">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fec4:e3da" ipVersion="6"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <entry name="serial">ccf01ee2-f5c6-4802-a9dd-f8c0423d4914</entry>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <entry name="uuid">ccf01ee2-f5c6-4802-a9dd-f8c0423d4914</entry>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:c4:e3:da"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <target dev="tapce3d61f6-90"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/console.log" append="off"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:21:32 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:21:32 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:21:32 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:21:32 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.599 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Preparing to wait for external event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.599 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.600 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.600 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.601 2 DEBUG nova.virt.libvirt.vif [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:21:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-885089396',display_name='tempest-TestGettingAddress-server-885089396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-885089396',id=151,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-mdzn0750',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:21:20Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=ccf01ee2-f5c6-4802-a9dd-f8c0423d4914,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.602 2 DEBUG nova.network.os_vif_util [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.603 2 DEBUG nova.network.os_vif_util [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.604 2 DEBUG os_vif [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.606 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.606 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.610 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce3d61f6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.611 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce3d61f6-90, col_values=(('external_ids', {'iface-id': 'ce3d61f6-90a9-4869-ac95-256f110eea41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:e3:da', 'vm-uuid': 'ccf01ee2-f5c6-4802-a9dd-f8c0423d4914'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:21:32 np0005465604 NetworkManager[45129]: <info>  [1759396892.6141] manager: (tapce3d61f6-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/684)
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:32 np0005465604 nova_compute[260603]: 2025-10-02 09:21:32.622 2 INFO os_vif [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90')#033[00m
Oct  2 05:21:33 np0005465604 nova_compute[260603]: 2025-10-02 09:21:33.089 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:21:33 np0005465604 nova_compute[260603]: 2025-10-02 09:21:33.090 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:21:33 np0005465604 nova_compute[260603]: 2025-10-02 09:21:33.091 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:c4:e3:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:21:33 np0005465604 nova_compute[260603]: 2025-10-02 09:21:33.092 2 INFO nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Using config drive#033[00m
Oct  2 05:21:33 np0005465604 nova_compute[260603]: 2025-10-02 09:21:33.115 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:21:33 np0005465604 nova_compute[260603]: 2025-10-02 09:21:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:33 np0005465604 nova_compute[260603]: 2025-10-02 09:21:33.636 2 INFO nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Creating config drive at /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config#033[00m
Oct  2 05:21:33 np0005465604 nova_compute[260603]: 2025-10-02 09:21:33.642 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6r3go9x_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:21:33 np0005465604 nova_compute[260603]: 2025-10-02 09:21:33.794 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6r3go9x_" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:21:33 np0005465604 nova_compute[260603]: 2025-10-02 09:21:33.817 2 DEBUG nova.storage.rbd_utils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:21:33 np0005465604 nova_compute[260603]: 2025-10-02 09:21:33.821 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.231 2 DEBUG oslo_concurrency.processutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.232 2 INFO nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Deleting local config drive /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914/disk.config because it was imported into RBD.#033[00m
Oct  2 05:21:34 np0005465604 kernel: tapce3d61f6-90: entered promiscuous mode
Oct  2 05:21:34 np0005465604 NetworkManager[45129]: <info>  [1759396894.3085] manager: (tapce3d61f6-90): new Tun device (/org/freedesktop/NetworkManager/Devices/685)
Oct  2 05:21:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:21:34Z|01684|binding|INFO|Claiming lport ce3d61f6-90a9-4869-ac95-256f110eea41 for this chassis.
Oct  2 05:21:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:21:34Z|01685|binding|INFO|ce3d61f6-90a9-4869-ac95-256f110eea41: Claiming fa:16:3e:c4:e3:da 10.100.0.4 2001:db8::f816:3eff:fec4:e3da
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.346 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:e3:da 10.100.0.4 2001:db8::f816:3eff:fec4:e3da'], port_security=['fa:16:3e:c4:e3:da 10.100.0.4 2001:db8::f816:3eff:fec4:e3da'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fec4:e3da/64', 'neutron:device_id': 'ccf01ee2-f5c6-4802-a9dd-f8c0423d4914', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5b689ca1-3c9b-4813-8474-00abea3332c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=886ca98a-7662-4ca0-8c8e-c35442cbbef0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ce3d61f6-90a9-4869-ac95-256f110eea41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.348 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ce3d61f6-90a9-4869-ac95-256f110eea41 in datapath bb79a700-778c-4189-bea4-a6e50510de5b bound to our chassis#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.350 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79a700-778c-4189-bea4-a6e50510de5b#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.365 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb8974f-fc55-43a8-80b9-68538ec88811]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.366 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbb79a700-71 in ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.369 276572 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbb79a700-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.369 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e67db031-b425-4c5b-a769-e6ff6a225f0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.370 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[958de970-4cb7-40c4-a6a7-054470a6da7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 systemd-machined[214636]: New machine qemu-185-instance-00000097.
Oct  2 05:21:34 np0005465604 systemd[1]: Started Virtual Machine qemu-185-instance-00000097.
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.379 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[4670d03c-c135-418e-afc2-12d5ba1112bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:21:34Z|01686|binding|INFO|Setting lport ce3d61f6-90a9-4869-ac95-256f110eea41 ovn-installed in OVS
Oct  2 05:21:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:21:34Z|01687|binding|INFO|Setting lport ce3d61f6-90a9-4869-ac95-256f110eea41 up in Southbound
Oct  2 05:21:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 1.1 MiB/s wr, 3 op/s
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.396 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff4ef4c-6861-47ca-9701-a7ae6722db6d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:34 np0005465604 systemd-udevd[436963]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:21:34 np0005465604 NetworkManager[45129]: <info>  [1759396894.4161] device (tapce3d61f6-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:21:34 np0005465604 NetworkManager[45129]: <info>  [1759396894.4169] device (tapce3d61f6-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.423 2 DEBUG nova.network.neutron [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updated VIF entry in instance network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.424 2 DEBUG nova.network.neutron [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.433 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[89a28890-f56d-476e-94dc-24b2780b9e0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 NetworkManager[45129]: <info>  [1759396894.4386] manager: (tapbb79a700-70): new Veth device (/org/freedesktop/NetworkManager/Devices/686)
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.438 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[4a68540c-c27f-4455-98f9-6b7a118b9403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 podman[436936]: 2025-10-02 09:21:34.442484401 +0000 UTC m=+0.081241858 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.474 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1d52d7-bc72-40c9-a83e-ea6aa8cf89bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.476 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[e26b4e39-a239-4a96-bdcf-859182896684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 podman[436935]: 2025-10-02 09:21:34.479525928 +0000 UTC m=+0.130818386 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:21:34 np0005465604 NetworkManager[45129]: <info>  [1759396894.5016] device (tapbb79a700-70): carrier: link connected
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.504 2 DEBUG oslo_concurrency.lockutils [req-aa42370f-2730-4a0f-87f8-5fa70e883fda req-c146c423-f446-418a-84fd-674c8b8b77da 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.508 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[f38a2f0c-051a-4307-b7eb-4042a3631024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.525 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[eeea37d8-ef0b-4aeb-86cf-811482c3bdc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79a700-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0e:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777310, 'reachable_time': 42986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 437012, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.540 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2c12b2-0b17-427d-ac46-1b96fb193913]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:ec9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777310, 'tstamp': 777310}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 437013, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.555 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[a0df1ea6-51bd-4035-bd9d-27d3597534dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79a700-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0e:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777310, 'reachable_time': 42986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 437014, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.587 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[79f07108-9ebe-4f4e-bbf7-a5392fc18e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.643 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d1d4f1-519c-4075-adf1-eeae39709273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.645 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79a700-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.645 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.645 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79a700-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:21:34 np0005465604 NetworkManager[45129]: <info>  [1759396894.6479] manager: (tapbb79a700-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/687)
Oct  2 05:21:34 np0005465604 kernel: tapbb79a700-70: entered promiscuous mode
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.650 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79a700-70, col_values=(('external_ids', {'iface-id': 'f52ffbc5-b75c-4a8b-a490-21571dd7145a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:21:34 np0005465604 ovn_controller[152344]: 2025-10-02T09:21:34Z|01688|binding|INFO|Releasing lport f52ffbc5-b75c-4a8b-a490-21571dd7145a from this chassis (sb_readonly=0)
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.667 162357 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bb79a700-778c-4189-bea4-a6e50510de5b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bb79a700-778c-4189-bea4-a6e50510de5b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.668 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[36f598b6-57c0-413b-85aa-fdb953cfdb64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.669 162357 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: global
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    log         /dev/log local0 debug
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    log-tag     haproxy-metadata-proxy-bb79a700-778c-4189-bea4-a6e50510de5b
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    user        root
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    group       root
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    maxconn     1024
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    pidfile     /var/lib/neutron/external/pids/bb79a700-778c-4189-bea4-a6e50510de5b.pid.haproxy
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    daemon
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: defaults
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    log global
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    mode http
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    option httplog
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    option dontlognull
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    option http-server-close
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    option forwardfor
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    retries                 3
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    timeout http-request    30s
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    timeout connect         30s
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    timeout client          32s
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    timeout server          32s
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    timeout http-keep-alive 30s
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: listen listener
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    bind 169.254.169.254:80
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]:    http-request add-header X-OVN-Network-ID bb79a700-778c-4189-bea4-a6e50510de5b
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.670 162357 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'env', 'PROCESS_TAG=haproxy-bb79a700-778c-4189-bea4-a6e50510de5b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bb79a700-778c-4189-bea4-a6e50510de5b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.811 2 DEBUG nova.compute.manager [req-49c0589b-fd14-45f6-8e38-031c29bbc7ab req-ed9c3c82-8568-4c85-bb5d-7bfe912bf311 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.812 2 DEBUG oslo_concurrency.lockutils [req-49c0589b-fd14-45f6-8e38-031c29bbc7ab req-ed9c3c82-8568-4c85-bb5d-7bfe912bf311 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.812 2 DEBUG oslo_concurrency.lockutils [req-49c0589b-fd14-45f6-8e38-031c29bbc7ab req-ed9c3c82-8568-4c85-bb5d-7bfe912bf311 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.812 2 DEBUG oslo_concurrency.lockutils [req-49c0589b-fd14-45f6-8e38-031c29bbc7ab req-ed9c3c82-8568-4c85-bb5d-7bfe912bf311 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:21:34 np0005465604 nova_compute[260603]: 2025-10-02 09:21:34.812 2 DEBUG nova.compute.manager [req-49c0589b-fd14-45f6-8e38-031c29bbc7ab req-ed9c3c82-8568-4c85-bb5d-7bfe912bf311 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Processing event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.859 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.860 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:21:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:34.860 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:21:35 np0005465604 podman[437088]: 2025-10-02 09:21:35.074015597 +0000 UTC m=+0.058989623 container create 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 05:21:35 np0005465604 systemd[1]: Started libpod-conmon-63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d.scope.
Oct  2 05:21:35 np0005465604 podman[437088]: 2025-10-02 09:21:35.045406404 +0000 UTC m=+0.030380450 image pull 269d9fde257fe51bcfc3411ed4c4c36a03b726658e91b83df1028da499438537 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e
Oct  2 05:21:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:21:35 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8ec26be00ec6c1706956eeee6591a41a079324f4042ac45aeef1cce56edc2ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:35 np0005465604 podman[437088]: 2025-10-02 09:21:35.167867189 +0000 UTC m=+0.152841215 container init 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:21:35 np0005465604 podman[437088]: 2025-10-02 09:21:35.174223017 +0000 UTC m=+0.159197033 container start 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:21:35 np0005465604 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [NOTICE]   (437108) : New worker (437110) forked
Oct  2 05:21:35 np0005465604 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [NOTICE]   (437108) : Loading success.
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:35.340 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:21:35 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:35.343 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.403 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396895.4023702, ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.403 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] VM Started (Lifecycle Event)#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.406 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.410 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.413 2 INFO nova.virt.libvirt.driver [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Instance spawned successfully.#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.414 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.437 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.440 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.454 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.454 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.455 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.455 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.455 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.456 2 DEBUG nova.virt.libvirt.driver [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:21:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.609 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.610 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396895.4044836, ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.610 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.685 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.689 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396895.4099636, ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.689 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.757 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.761 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.831 2 INFO nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Took 15.32 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.832 2 DEBUG nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:21:35 np0005465604 nova_compute[260603]: 2025-10-02 09:21:35.851 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:21:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:21:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:21:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:21:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:21:36 np0005465604 nova_compute[260603]: 2025-10-02 09:21:36.109 2 INFO nova.compute.manager [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Took 17.81 seconds to build instance.#033[00m
Oct  2 05:21:36 np0005465604 nova_compute[260603]: 2025-10-02 09:21:36.159 2 DEBUG oslo_concurrency.lockutils [None req-712fd21a-0eff-41e9-a83c-5b51861dd7c5 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:21:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:21:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 944c1748-536d-4798-a9c8-1d24bc2b5f7b does not exist
Oct  2 05:21:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c7b409fc-375a-419a-a14d-82f25cda2c89 does not exist
Oct  2 05:21:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 925cd190-fd48-4474-bfd3-cf31d5e5b01d does not exist
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:21:36 np0005465604 nova_compute[260603]: 2025-10-02 09:21:36.926 2 DEBUG nova.compute.manager [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:21:36 np0005465604 nova_compute[260603]: 2025-10-02 09:21:36.929 2 DEBUG oslo_concurrency.lockutils [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:21:36 np0005465604 nova_compute[260603]: 2025-10-02 09:21:36.929 2 DEBUG oslo_concurrency.lockutils [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:21:36 np0005465604 nova_compute[260603]: 2025-10-02 09:21:36.930 2 DEBUG oslo_concurrency.lockutils [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:21:36 np0005465604 nova_compute[260603]: 2025-10-02 09:21:36.930 2 DEBUG nova.compute.manager [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] No waiting events found dispatching network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:21:36 np0005465604 nova_compute[260603]: 2025-10-02 09:21:36.930 2 WARNING nova.compute.manager [req-bfc530fb-69d0-443c-8de5-b34bbc2269c4 req-5fecf528-3873-4955-9f4c-d38e7e5d7ab6 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received unexpected event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:21:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:21:37 np0005465604 podman[437511]: 2025-10-02 09:21:37.489296437 +0000 UTC m=+0.064287499 container create 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 05:21:37 np0005465604 systemd[1]: Started libpod-conmon-611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051.scope.
Oct  2 05:21:37 np0005465604 podman[437511]: 2025-10-02 09:21:37.46089213 +0000 UTC m=+0.035883212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:21:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:21:37 np0005465604 podman[437511]: 2025-10-02 09:21:37.586005338 +0000 UTC m=+0.160996410 container init 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:21:37 np0005465604 podman[437511]: 2025-10-02 09:21:37.595543296 +0000 UTC m=+0.170534338 container start 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:21:37 np0005465604 podman[437511]: 2025-10-02 09:21:37.598630012 +0000 UTC m=+0.173621094 container attach 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 05:21:37 np0005465604 trusting_cray[437527]: 167 167
Oct  2 05:21:37 np0005465604 systemd[1]: libpod-611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051.scope: Deactivated successfully.
Oct  2 05:21:37 np0005465604 podman[437511]: 2025-10-02 09:21:37.605058493 +0000 UTC m=+0.180049535 container died 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:21:37 np0005465604 nova_compute[260603]: 2025-10-02 09:21:37.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ab385e067fd66a928af843c231ad6880ebd4bf6209d8c437cfed91428b81af12-merged.mount: Deactivated successfully.
Oct  2 05:21:37 np0005465604 podman[437511]: 2025-10-02 09:21:37.649764009 +0000 UTC m=+0.224755051 container remove 611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_cray, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 05:21:37 np0005465604 systemd[1]: libpod-conmon-611db82557a004f3e171c1037b89545b7aa9067b4cd8e2268ee0d77cff762051.scope: Deactivated successfully.
Oct  2 05:21:37 np0005465604 podman[437552]: 2025-10-02 09:21:37.905368693 +0000 UTC m=+0.069335366 container create e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 05:21:37 np0005465604 podman[437552]: 2025-10-02 09:21:37.872870788 +0000 UTC m=+0.036837541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:21:37 np0005465604 systemd[1]: Started libpod-conmon-e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5.scope.
Oct  2 05:21:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:21:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:38 np0005465604 podman[437552]: 2025-10-02 09:21:38.051376633 +0000 UTC m=+0.215343316 container init e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 05:21:38 np0005465604 podman[437552]: 2025-10-02 09:21:38.059861639 +0000 UTC m=+0.223828302 container start e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:21:38 np0005465604 podman[437552]: 2025-10-02 09:21:38.063799512 +0000 UTC m=+0.227766175 container attach e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:21:38 np0005465604 podman[437572]: 2025-10-02 09:21:38.129981959 +0000 UTC m=+0.095862745 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 05:21:38 np0005465604 nova_compute[260603]: 2025-10-02 09:21:38.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:38 np0005465604 podman[437593]: 2025-10-02 09:21:38.270955612 +0000 UTC m=+0.098719144 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 05:21:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:21:39 np0005465604 confident_moore[437569]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:21:39 np0005465604 confident_moore[437569]: --> relative data size: 1.0
Oct  2 05:21:39 np0005465604 confident_moore[437569]: --> All data devices are unavailable
Oct  2 05:21:39 np0005465604 systemd[1]: libpod-e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5.scope: Deactivated successfully.
Oct  2 05:21:39 np0005465604 systemd[1]: libpod-e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5.scope: Consumed 1.054s CPU time.
Oct  2 05:21:39 np0005465604 podman[437552]: 2025-10-02 09:21:39.193515588 +0000 UTC m=+1.357482251 container died e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:21:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f6418719180330139519b9391cddfb8fb8f8d35f444d4b3f2a19e4a8ac13b37b-merged.mount: Deactivated successfully.
Oct  2 05:21:39 np0005465604 podman[437552]: 2025-10-02 09:21:39.271478553 +0000 UTC m=+1.435445266 container remove e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 05:21:39 np0005465604 systemd[1]: libpod-conmon-e49f96ef47f6bb7525513d5071676c234faa19eac42a08ee53a75579c4fcfcd5.scope: Deactivated successfully.
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:21:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:21:40 np0005465604 podman[437788]: 2025-10-02 09:21:40.1332498 +0000 UTC m=+0.055591988 container create af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 05:21:40 np0005465604 podman[437788]: 2025-10-02 09:21:40.114930978 +0000 UTC m=+0.037273176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:21:40 np0005465604 systemd[1]: Started libpod-conmon-af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a.scope.
Oct  2 05:21:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:21:40 np0005465604 podman[437788]: 2025-10-02 09:21:40.29462217 +0000 UTC m=+0.216964378 container init af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:21:40 np0005465604 podman[437788]: 2025-10-02 09:21:40.302893008 +0000 UTC m=+0.225235236 container start af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 05:21:40 np0005465604 elastic_gauss[437806]: 167 167
Oct  2 05:21:40 np0005465604 systemd[1]: libpod-af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a.scope: Deactivated successfully.
Oct  2 05:21:40 np0005465604 podman[437788]: 2025-10-02 09:21:40.33495107 +0000 UTC m=+0.257293258 container attach af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:21:40 np0005465604 podman[437788]: 2025-10-02 09:21:40.335737635 +0000 UTC m=+0.258079863 container died af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:21:40 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6691db60d9680e2d9dd427d0358fddcd0baba6554e718a242805d555bb3ecff6-merged.mount: Deactivated successfully.
Oct  2 05:21:40 np0005465604 podman[437788]: 2025-10-02 09:21:40.383558448 +0000 UTC m=+0.305900636 container remove af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gauss, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:21:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:21:40 np0005465604 systemd[1]: libpod-conmon-af74c0c4b8706caad5257f951b2b38ef5d436214e42041bd0769885c9404849a.scope: Deactivated successfully.
Oct  2 05:21:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:40 np0005465604 podman[437829]: 2025-10-02 09:21:40.624224805 +0000 UTC m=+0.081398913 container create 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 05:21:40 np0005465604 systemd[1]: Started libpod-conmon-1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b.scope.
Oct  2 05:21:40 np0005465604 podman[437829]: 2025-10-02 09:21:40.591679769 +0000 UTC m=+0.048853967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:21:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:21:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef71e162ab9a6af21bf826166351dfec12e88f190652ae5d8f53a35b6a08f31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef71e162ab9a6af21bf826166351dfec12e88f190652ae5d8f53a35b6a08f31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef71e162ab9a6af21bf826166351dfec12e88f190652ae5d8f53a35b6a08f31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef71e162ab9a6af21bf826166351dfec12e88f190652ae5d8f53a35b6a08f31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:40 np0005465604 podman[437829]: 2025-10-02 09:21:40.731669822 +0000 UTC m=+0.188843960 container init 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 05:21:40 np0005465604 podman[437829]: 2025-10-02 09:21:40.751594464 +0000 UTC m=+0.208768602 container start 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 05:21:40 np0005465604 podman[437829]: 2025-10-02 09:21:40.756455466 +0000 UTC m=+0.213629604 container attach 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:21:41 np0005465604 exciting_turing[437846]: {
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:    "0": [
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:        {
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "devices": [
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "/dev/loop3"
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            ],
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_name": "ceph_lv0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_size": "21470642176",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "name": "ceph_lv0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "tags": {
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.cluster_name": "ceph",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.crush_device_class": "",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.encrypted": "0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.osd_id": "0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.type": "block",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.vdo": "0"
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            },
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "type": "block",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "vg_name": "ceph_vg0"
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:        }
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:    ],
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:    "1": [
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:        {
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "devices": [
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "/dev/loop4"
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            ],
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_name": "ceph_lv1",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_size": "21470642176",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "name": "ceph_lv1",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "tags": {
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.cluster_name": "ceph",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.crush_device_class": "",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.encrypted": "0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.osd_id": "1",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.type": "block",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.vdo": "0"
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            },
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "type": "block",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "vg_name": "ceph_vg1"
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:        }
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:    ],
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:    "2": [
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:        {
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "devices": [
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "/dev/loop5"
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            ],
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_name": "ceph_lv2",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_size": "21470642176",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "name": "ceph_lv2",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "tags": {
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.cluster_name": "ceph",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.crush_device_class": "",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.encrypted": "0",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.osd_id": "2",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.type": "block",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:                "ceph.vdo": "0"
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            },
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "type": "block",
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:            "vg_name": "ceph_vg2"
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:        }
Oct  2 05:21:41 np0005465604 exciting_turing[437846]:    ]
Oct  2 05:21:41 np0005465604 exciting_turing[437846]: }
Oct  2 05:21:41 np0005465604 systemd[1]: libpod-1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b.scope: Deactivated successfully.
Oct  2 05:21:41 np0005465604 podman[437829]: 2025-10-02 09:21:41.610089168 +0000 UTC m=+1.067263286 container died 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:21:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2ef71e162ab9a6af21bf826166351dfec12e88f190652ae5d8f53a35b6a08f31-merged.mount: Deactivated successfully.
Oct  2 05:21:41 np0005465604 podman[437829]: 2025-10-02 09:21:41.721687684 +0000 UTC m=+1.178861792 container remove 1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:21:41 np0005465604 systemd[1]: libpod-conmon-1634f5d636ab41f1045c50f4ae57df7c556423187bfecc079eb4a2b2866d785b.scope: Deactivated successfully.
Oct  2 05:21:41 np0005465604 NetworkManager[45129]: <info>  [1759396901.8828] manager: (patch-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/688)
Oct  2 05:21:41 np0005465604 NetworkManager[45129]: <info>  [1759396901.8838] manager: (patch-br-int-to-provnet-84f0f649-fe41-40ad-a49a-6e4c6afbea7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/689)
Oct  2 05:21:41 np0005465604 nova_compute[260603]: 2025-10-02 09:21:41.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:41 np0005465604 ovn_controller[152344]: 2025-10-02T09:21:41Z|01689|binding|INFO|Releasing lport f52ffbc5-b75c-4a8b-a490-21571dd7145a from this chassis (sb_readonly=0)
Oct  2 05:21:41 np0005465604 ovn_controller[152344]: 2025-10-02T09:21:41Z|01690|binding|INFO|Releasing lport f52ffbc5-b75c-4a8b-a490-21571dd7145a from this chassis (sb_readonly=0)
Oct  2 05:21:41 np0005465604 nova_compute[260603]: 2025-10-02 09:21:41.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:41 np0005465604 nova_compute[260603]: 2025-10-02 09:21:41.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:21:42 np0005465604 nova_compute[260603]: 2025-10-02 09:21:42.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:42 np0005465604 podman[438010]: 2025-10-02 09:21:42.662409777 +0000 UTC m=+0.073367463 container create b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 05:21:42 np0005465604 systemd[1]: Started libpod-conmon-b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77.scope.
Oct  2 05:21:42 np0005465604 podman[438010]: 2025-10-02 09:21:42.634435963 +0000 UTC m=+0.045393679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:21:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:21:42 np0005465604 podman[438010]: 2025-10-02 09:21:42.742627622 +0000 UTC m=+0.153585328 container init b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:21:42 np0005465604 podman[438010]: 2025-10-02 09:21:42.755526646 +0000 UTC m=+0.166484332 container start b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 05:21:42 np0005465604 podman[438010]: 2025-10-02 09:21:42.75888586 +0000 UTC m=+0.169843556 container attach b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:21:42 np0005465604 sleepy_heyrovsky[438027]: 167 167
Oct  2 05:21:42 np0005465604 systemd[1]: libpod-b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77.scope: Deactivated successfully.
Oct  2 05:21:42 np0005465604 podman[438010]: 2025-10-02 09:21:42.761511782 +0000 UTC m=+0.172469488 container died b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct  2 05:21:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-dc65c22dcc1c7e0b675b69cc3cc7dfd240934dbf58acf7c57d274b7ab82b5f5b-merged.mount: Deactivated successfully.
Oct  2 05:21:42 np0005465604 podman[438010]: 2025-10-02 09:21:42.801349527 +0000 UTC m=+0.212307213 container remove b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:21:42 np0005465604 systemd[1]: libpod-conmon-b4af760077d89facea75f3dabaea86d7576d69e37e1061d47fd55aebfa67bc77.scope: Deactivated successfully.
Oct  2 05:21:42 np0005465604 nova_compute[260603]: 2025-10-02 09:21:42.938 2 DEBUG nova.compute.manager [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:21:42 np0005465604 nova_compute[260603]: 2025-10-02 09:21:42.939 2 DEBUG nova.compute.manager [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing instance network info cache due to event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:21:42 np0005465604 nova_compute[260603]: 2025-10-02 09:21:42.940 2 DEBUG oslo_concurrency.lockutils [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:21:42 np0005465604 nova_compute[260603]: 2025-10-02 09:21:42.940 2 DEBUG oslo_concurrency.lockutils [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:21:42 np0005465604 nova_compute[260603]: 2025-10-02 09:21:42.940 2 DEBUG nova.network.neutron [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:21:43 np0005465604 podman[438051]: 2025-10-02 09:21:43.004928835 +0000 UTC m=+0.073058953 container create daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:21:43 np0005465604 systemd[1]: Started libpod-conmon-daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61.scope.
Oct  2 05:21:43 np0005465604 podman[438051]: 2025-10-02 09:21:42.965884286 +0000 UTC m=+0.034014434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:21:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:21:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73860daa9e1f97c125f9be2ce152bc7d2fddb96fb24240eab51736f27d9f24ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73860daa9e1f97c125f9be2ce152bc7d2fddb96fb24240eab51736f27d9f24ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73860daa9e1f97c125f9be2ce152bc7d2fddb96fb24240eab51736f27d9f24ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73860daa9e1f97c125f9be2ce152bc7d2fddb96fb24240eab51736f27d9f24ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:21:43 np0005465604 podman[438051]: 2025-10-02 09:21:43.108325285 +0000 UTC m=+0.176455423 container init daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:21:43 np0005465604 podman[438051]: 2025-10-02 09:21:43.116554412 +0000 UTC m=+0.184684570 container start daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 05:21:43 np0005465604 podman[438051]: 2025-10-02 09:21:43.121699512 +0000 UTC m=+0.189829630 container attach daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:21:43 np0005465604 nova_compute[260603]: 2025-10-02 09:21:43.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:21:43.346 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]: {
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "osd_id": 2,
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "type": "bluestore"
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:    },
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "osd_id": 1,
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "type": "bluestore"
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:    },
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "osd_id": 0,
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:        "type": "bluestore"
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]:    }
Oct  2 05:21:44 np0005465604 eloquent_volhard[438068]: }
Oct  2 05:21:44 np0005465604 systemd[1]: libpod-daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61.scope: Deactivated successfully.
Oct  2 05:21:44 np0005465604 systemd[1]: libpod-daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61.scope: Consumed 1.183s CPU time.
Oct  2 05:21:44 np0005465604 podman[438051]: 2025-10-02 09:21:44.313982023 +0000 UTC m=+1.382112161 container died daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 05:21:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-73860daa9e1f97c125f9be2ce152bc7d2fddb96fb24240eab51736f27d9f24ce-merged.mount: Deactivated successfully.
Oct  2 05:21:44 np0005465604 podman[438051]: 2025-10-02 09:21:44.397180281 +0000 UTC m=+1.465310399 container remove daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct  2 05:21:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:21:44 np0005465604 systemd[1]: libpod-conmon-daf1bf793acd1767f2aa38beac11681043606e9b594e121dca71683339df3c61.scope: Deactivated successfully.
Oct  2 05:21:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:21:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:21:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:21:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:21:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2eb0dcf0-b82a-49bc-92c9-89f48f9d5835 does not exist
Oct  2 05:21:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 411e8268-5e60-4358-8bb5-c2158c42be87 does not exist
Oct  2 05:21:44 np0005465604 nova_compute[260603]: 2025-10-02 09:21:44.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:21:44 np0005465604 nova_compute[260603]: 2025-10-02 09:21:44.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:21:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:21:45 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:21:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:46 np0005465604 nova_compute[260603]: 2025-10-02 09:21:46.279 2 DEBUG nova.network.neutron [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updated VIF entry in instance network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:21:46 np0005465604 nova_compute[260603]: 2025-10-02 09:21:46.279 2 DEBUG nova.network.neutron [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:21:46 np0005465604 nova_compute[260603]: 2025-10-02 09:21:46.355 2 DEBUG oslo_concurrency.lockutils [req-e84f2c92-f13e-44a8-b6f5-abe909b1ce42 req-52f77873-8261-4739-992a-f7174b4b6f77 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:21:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:21:47 np0005465604 nova_compute[260603]: 2025-10-02 09:21:47.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:48 np0005465604 nova_compute[260603]: 2025-10-02 09:21:48.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:21:48Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:e3:da 10.100.0.4
Oct  2 05:21:48 np0005465604 ovn_controller[152344]: 2025-10-02T09:21:48Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:e3:da 10.100.0.4
Oct  2 05:21:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct  2 05:21:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Oct  2 05:21:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 118 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Oct  2 05:21:52 np0005465604 nova_compute[260603]: 2025-10-02 09:21:52.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:53 np0005465604 nova_compute[260603]: 2025-10-02 09:21:53.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:21:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:21:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:21:57 np0005465604 nova_compute[260603]: 2025-10-02 09:21:57.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:21:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:21:58 np0005465604 nova_compute[260603]: 2025-10-02 09:21:58.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:21:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  2 05:22:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 95 KiB/s wr, 18 op/s
Oct  2 05:22:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:01 np0005465604 nova_compute[260603]: 2025-10-02 09:22:01.627 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:01 np0005465604 nova_compute[260603]: 2025-10-02 09:22:01.627 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:01 np0005465604 nova_compute[260603]: 2025-10-02 09:22:01.653 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 05:22:01 np0005465604 nova_compute[260603]: 2025-10-02 09:22:01.745 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:01 np0005465604 nova_compute[260603]: 2025-10-02 09:22:01.745 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:01 np0005465604 nova_compute[260603]: 2025-10-02 09:22:01.755 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 05:22:01 np0005465604 nova_compute[260603]: 2025-10-02 09:22:01.756 2 INFO nova.compute.claims [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  2 05:22:01 np0005465604 nova_compute[260603]: 2025-10-02 09:22:01.881 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:22:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:22:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2786742805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.388 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.399 2 DEBUG nova.compute.provider_tree [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:22:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 95 KiB/s wr, 19 op/s
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.459 2 DEBUG nova.scheduler.client.report [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.511 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.512 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.596 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.597 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.654 2 INFO nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.713 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.812 2 DEBUG nova.policy [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7765a573b734de786f94b675c6ab654', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.876 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.877 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.878 2 INFO nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Creating image(s)#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.905 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.931 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.957 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:22:02 np0005465604 nova_compute[260603]: 2025-10-02 09:22:02.961 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:22:03 np0005465604 nova_compute[260603]: 2025-10-02 09:22:03.038 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:22:03 np0005465604 nova_compute[260603]: 2025-10-02 09:22:03.039 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:03 np0005465604 nova_compute[260603]: 2025-10-02 09:22:03.039 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:03 np0005465604 nova_compute[260603]: 2025-10-02 09:22:03.040 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:03 np0005465604 nova_compute[260603]: 2025-10-02 09:22:03.062 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:22:03 np0005465604 nova_compute[260603]: 2025-10-02 09:22:03.066 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1e3be288-5261-4a77-a127-f7bf088caf01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:22:03 np0005465604 nova_compute[260603]: 2025-10-02 09:22:03.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:03 np0005465604 nova_compute[260603]: 2025-10-02 09:22:03.900 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 1e3be288-5261-4a77-a127-f7bf088caf01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.834s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:22:03 np0005465604 nova_compute[260603]: 2025-10-02 09:22:03.990 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] resizing rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 05:22:04 np0005465604 nova_compute[260603]: 2025-10-02 09:22:04.355 2 DEBUG nova.objects.instance [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 1e3be288-5261-4a77-a127-f7bf088caf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:22:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 4 op/s
Oct  2 05:22:04 np0005465604 nova_compute[260603]: 2025-10-02 09:22:04.426 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 05:22:04 np0005465604 nova_compute[260603]: 2025-10-02 09:22:04.427 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Ensure instance console log exists: /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 05:22:04 np0005465604 nova_compute[260603]: 2025-10-02 09:22:04.428 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:04 np0005465604 nova_compute[260603]: 2025-10-02 09:22:04.428 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:04 np0005465604 nova_compute[260603]: 2025-10-02 09:22:04.428 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:04 np0005465604 nova_compute[260603]: 2025-10-02 09:22:04.556 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Successfully created port: bfc1331e-8260-43ad-b409-38c22a730429 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 05:22:05 np0005465604 podman[438354]: 2025-10-02 09:22:05.028417992 +0000 UTC m=+0.087350649 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 05:22:05 np0005465604 podman[438353]: 2025-10-02 09:22:05.03700459 +0000 UTC m=+0.098184568 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:22:05 np0005465604 nova_compute[260603]: 2025-10-02 09:22:05.384 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Successfully updated port: bfc1331e-8260-43ad-b409-38c22a730429 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 05:22:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:05 np0005465604 nova_compute[260603]: 2025-10-02 09:22:05.604 2 DEBUG nova.compute.manager [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-changed-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:22:05 np0005465604 nova_compute[260603]: 2025-10-02 09:22:05.604 2 DEBUG nova.compute.manager [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing instance network info cache due to event network-changed-bfc1331e-8260-43ad-b409-38c22a730429. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:22:05 np0005465604 nova_compute[260603]: 2025-10-02 09:22:05.605 2 DEBUG oslo_concurrency.lockutils [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:22:05 np0005465604 nova_compute[260603]: 2025-10-02 09:22:05.605 2 DEBUG oslo_concurrency.lockutils [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:22:05 np0005465604 nova_compute[260603]: 2025-10-02 09:22:05.605 2 DEBUG nova.network.neutron [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing network info cache for port bfc1331e-8260-43ad-b409-38c22a730429 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:22:05 np0005465604 nova_compute[260603]: 2025-10-02 09:22:05.608 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:22:06 np0005465604 nova_compute[260603]: 2025-10-02 09:22:06.208 2 DEBUG nova.network.neutron [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:22:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Oct  2 05:22:06 np0005465604 nova_compute[260603]: 2025-10-02 09:22:06.602 2 DEBUG nova.network.neutron [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:22:06 np0005465604 nova_compute[260603]: 2025-10-02 09:22:06.633 2 DEBUG oslo_concurrency.lockutils [req-c1342d5c-ae7b-4b5a-a74b-6e59c6a6b6f4 req-b2321f52-d952-4a7f-b706-61d7ae4b890f 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:22:06 np0005465604 nova_compute[260603]: 2025-10-02 09:22:06.633 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquired lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:22:06 np0005465604 nova_compute[260603]: 2025-10-02 09:22:06.634 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:22:06 np0005465604 nova_compute[260603]: 2025-10-02 09:22:06.827 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:22:07 np0005465604 nova_compute[260603]: 2025-10-02 09:22:07.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.229 2 DEBUG nova.network.neutron [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [{"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.259 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Releasing lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.260 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance network_info: |[{"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.265 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Start _get_guest_xml network_info=[{"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.272 2 WARNING nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.281 2 DEBUG nova.virt.libvirt.host [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.284 2 DEBUG nova.virt.libvirt.host [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.288 2 DEBUG nova.virt.libvirt.host [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.289 2 DEBUG nova.virt.libvirt.host [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.290 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.291 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.292 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.292 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.293 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.293 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.294 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.294 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.295 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.296 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.296 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.297 2 DEBUG nova.virt.hardware [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.303 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:22:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:22:08 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:22:08 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545529187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.797 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.835 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:22:08 np0005465604 nova_compute[260603]: 2025-10-02 09:22:08.840 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:22:08 np0005465604 podman[438442]: 2025-10-02 09:22:08.995623264 +0000 UTC m=+0.054495903 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct  2 05:22:08 np0005465604 podman[438441]: 2025-10-02 09:22:08.996540123 +0000 UTC m=+0.060677286 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:22:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:22:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109340288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.315 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.317 2 DEBUG nova.virt.libvirt.vif [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-421018847',display_name='tempest-TestGettingAddress-server-421018847',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-421018847',id=152,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-77p3vre3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:22:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=1e3be288-5261-4a77-a127-f7bf088caf01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.317 2 DEBUG nova.network.os_vif_util [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.318 2 DEBUG nova.network.os_vif_util [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.319 2 DEBUG nova.objects.instance [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e3be288-5261-4a77-a127-f7bf088caf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.339 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  <uuid>1e3be288-5261-4a77-a127-f7bf088caf01</uuid>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  <name>instance-00000098</name>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <nova:name>tempest-TestGettingAddress-server-421018847</nova:name>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:22:08</nova:creationTime>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <nova:user uuid="b7765a573b734de786f94b675c6ab654">tempest-TestGettingAddress-44642193-project-member</nova:user>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <nova:project uuid="674f53964f0a4a0d9e9b5ebfaf4248b4">tempest-TestGettingAddress-44642193</nova:project>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <nova:ports>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <nova:port uuid="bfc1331e-8260-43ad-b409-38c22a730429">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe5c:400b" ipVersion="6"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        </nova:port>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      </nova:ports>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <entry name="serial">1e3be288-5261-4a77-a127-f7bf088caf01</entry>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <entry name="uuid">1e3be288-5261-4a77-a127-f7bf088caf01</entry>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/1e3be288-5261-4a77-a127-f7bf088caf01_disk">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/1e3be288-5261-4a77-a127-f7bf088caf01_disk.config">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <interface type="ethernet">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <mac address="fa:16:3e:5c:40:0b"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <mtu size="1442"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <target dev="tapbfc1331e-82"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    </interface>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/console.log" append="off"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:22:09 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:22:09 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:22:09 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:22:09 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.340 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Preparing to wait for external event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.341 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.341 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.341 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.342 2 DEBUG nova.virt.libvirt.vif [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T09:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-421018847',display_name='tempest-TestGettingAddress-server-421018847',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-421018847',id=152,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-77p3vre3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T09:22:02Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=1e3be288-5261-4a77-a127-f7bf088caf01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.343 2 DEBUG nova.network.os_vif_util [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.343 2 DEBUG nova.network.os_vif_util [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.344 2 DEBUG os_vif [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.349 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfc1331e-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.350 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbfc1331e-82, col_values=(('external_ids', {'iface-id': 'bfc1331e-8260-43ad-b409-38c22a730429', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:40:0b', 'vm-uuid': '1e3be288-5261-4a77-a127-f7bf088caf01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:09 np0005465604 NetworkManager[45129]: <info>  [1759396929.4020] manager: (tapbfc1331e-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/690)
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.409 2 INFO os_vif [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82')#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.817 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.817 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.818 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] No VIF found with MAC fa:16:3e:5c:40:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.819 2 INFO nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Using config drive#033[00m
Oct  2 05:22:09 np0005465604 nova_compute[260603]: 2025-10-02 09:22:09.853 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:22:10 np0005465604 nova_compute[260603]: 2025-10-02 09:22:10.398 2 INFO nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Creating config drive at /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config#033[00m
Oct  2 05:22:10 np0005465604 nova_compute[260603]: 2025-10-02 09:22:10.404 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmxfn0gxe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:22:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  2 05:22:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:10 np0005465604 nova_compute[260603]: 2025-10-02 09:22:10.575 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmxfn0gxe" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:22:10 np0005465604 nova_compute[260603]: 2025-10-02 09:22:10.616 2 DEBUG nova.storage.rbd_utils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] rbd image 1e3be288-5261-4a77-a127-f7bf088caf01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 05:22:10 np0005465604 nova_compute[260603]: 2025-10-02 09:22:10.621 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config 1e3be288-5261-4a77-a127-f7bf088caf01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:22:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct  2 05:22:12 np0005465604 nova_compute[260603]: 2025-10-02 09:22:12.522 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:22:12 np0005465604 nova_compute[260603]: 2025-10-02 09:22:12.523 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:22:12 np0005465604 nova_compute[260603]: 2025-10-02 09:22:12.523 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:22:12 np0005465604 nova_compute[260603]: 2025-10-02 09:22:12.576 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 05:22:12 np0005465604 nova_compute[260603]: 2025-10-02 09:22:12.909 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:22:12 np0005465604 nova_compute[260603]: 2025-10-02 09:22:12.909 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquired lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:22:12 np0005465604 nova_compute[260603]: 2025-10-02 09:22:12.909 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 05:22:12 np0005465604 nova_compute[260603]: 2025-10-02 09:22:12.910 2 DEBUG nova.objects.instance [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:22:13 np0005465604 nova_compute[260603]: 2025-10-02 09:22:13.243 2 DEBUG oslo_concurrency.processutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config 1e3be288-5261-4a77-a127-f7bf088caf01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:22:13 np0005465604 nova_compute[260603]: 2025-10-02 09:22:13.244 2 INFO nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Deleting local config drive /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01/disk.config because it was imported into RBD.#033[00m
Oct  2 05:22:13 np0005465604 nova_compute[260603]: 2025-10-02 09:22:13.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:13 np0005465604 NetworkManager[45129]: <info>  [1759396933.3211] manager: (tapbfc1331e-82): new Tun device (/org/freedesktop/NetworkManager/Devices/691)
Oct  2 05:22:13 np0005465604 kernel: tapbfc1331e-82: entered promiscuous mode
Oct  2 05:22:13 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:13Z|01691|binding|INFO|Claiming lport bfc1331e-8260-43ad-b409-38c22a730429 for this chassis.
Oct  2 05:22:13 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:13Z|01692|binding|INFO|bfc1331e-8260-43ad-b409-38c22a730429: Claiming fa:16:3e:5c:40:0b 10.100.0.3 2001:db8::f816:3eff:fe5c:400b
Oct  2 05:22:13 np0005465604 nova_compute[260603]: 2025-10-02 09:22:13.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:13 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:13Z|01693|binding|INFO|Setting lport bfc1331e-8260-43ad-b409-38c22a730429 ovn-installed in OVS
Oct  2 05:22:13 np0005465604 nova_compute[260603]: 2025-10-02 09:22:13.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:13 np0005465604 nova_compute[260603]: 2025-10-02 09:22:13.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:13 np0005465604 systemd-udevd[438573]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 05:22:13 np0005465604 systemd-machined[214636]: New machine qemu-186-instance-00000098.
Oct  2 05:22:13 np0005465604 NetworkManager[45129]: <info>  [1759396933.3910] device (tapbfc1331e-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 05:22:13 np0005465604 NetworkManager[45129]: <info>  [1759396933.3924] device (tapbfc1331e-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 05:22:13 np0005465604 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Oct  2 05:22:13 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:13Z|01694|binding|INFO|Setting lport bfc1331e-8260-43ad-b409-38c22a730429 up in Southbound
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.557 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:40:0b 10.100.0.3 2001:db8::f816:3eff:fe5c:400b'], port_security=['fa:16:3e:5c:40:0b 10.100.0.3 2001:db8::f816:3eff:fe5c:400b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe5c:400b/64', 'neutron:device_id': '1e3be288-5261-4a77-a127-f7bf088caf01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5b689ca1-3c9b-4813-8474-00abea3332c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=886ca98a-7662-4ca0-8c8e-c35442cbbef0, chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bfc1331e-8260-43ad-b409-38c22a730429) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.560 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bfc1331e-8260-43ad-b409-38c22a730429 in datapath bb79a700-778c-4189-bea4-a6e50510de5b bound to our chassis#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.564 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79a700-778c-4189-bea4-a6e50510de5b#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.590 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[d84f6134-32fb-4f7b-8c84-aa5b5ea4e822]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.631 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb432c4-4b2f-4407-8fec-5893ffd23bd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.637 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae588d2-e3c6-4c4d-a388-0fb66a109322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.683 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c744d2ef-3f16-49e8-9cda-19047cc83278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.703 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[35a368ba-80b5-4601-bc97-a0f8d9bde520]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79a700-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0e:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1886, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777310, 'reachable_time': 42986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 438588, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.723 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[ec860b32-9a84-4c94-8af0-c6e59ca5a880]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb79a700-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777321, 'tstamp': 777321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438589, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb79a700-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777323, 'tstamp': 777323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438589, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.725 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79a700-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:13 np0005465604 nova_compute[260603]: 2025-10-02 09:22:13.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.730 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79a700-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.730 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.731 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79a700-70, col_values=(('external_ids', {'iface-id': 'f52ffbc5-b75c-4a8b-a490-21571dd7145a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:13 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:13.731 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:22:14 np0005465604 nova_compute[260603]: 2025-10-02 09:22:14.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct  2 05:22:14 np0005465604 nova_compute[260603]: 2025-10-02 09:22:14.544 2 DEBUG nova.compute.manager [req-c7514f2f-877e-40cc-a62d-2160d1518f33 req-91e6bef0-8804-4724-b966-0a0a6d8e71d3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:22:14 np0005465604 nova_compute[260603]: 2025-10-02 09:22:14.545 2 DEBUG oslo_concurrency.lockutils [req-c7514f2f-877e-40cc-a62d-2160d1518f33 req-91e6bef0-8804-4724-b966-0a0a6d8e71d3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:14 np0005465604 nova_compute[260603]: 2025-10-02 09:22:14.545 2 DEBUG oslo_concurrency.lockutils [req-c7514f2f-877e-40cc-a62d-2160d1518f33 req-91e6bef0-8804-4724-b966-0a0a6d8e71d3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:14 np0005465604 nova_compute[260603]: 2025-10-02 09:22:14.545 2 DEBUG oslo_concurrency.lockutils [req-c7514f2f-877e-40cc-a62d-2160d1518f33 req-91e6bef0-8804-4724-b966-0a0a6d8e71d3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:14 np0005465604 nova_compute[260603]: 2025-10-02 09:22:14.545 2 DEBUG nova.compute.manager [req-c7514f2f-877e-40cc-a62d-2160d1518f33 req-91e6bef0-8804-4724-b966-0a0a6d8e71d3 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Processing event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 05:22:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.744 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.745 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396935.742994, 1e3be288-5261-4a77-a127-f7bf088caf01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.745 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] VM Started (Lifecycle Event)#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.749 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.753 2 INFO nova.virt.libvirt.driver [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance spawned successfully.#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.753 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.900 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.905 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.926 2 DEBUG nova.network.neutron [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.960 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.960 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.961 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.961 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.961 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:22:15 np0005465604 nova_compute[260603]: 2025-10-02 09:22:15.962 2 DEBUG nova.virt.libvirt.driver [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.016 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.016 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396935.7433307, 1e3be288-5261-4a77-a127-f7bf088caf01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.016 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] VM Paused (Lifecycle Event)#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.021 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Releasing lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.022 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.022 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.022 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.187 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.193 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759396935.7487059, 1e3be288-5261-4a77-a127-f7bf088caf01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.193 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.284 2 INFO nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Took 13.41 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.285 2 DEBUG nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:22:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.518 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.524 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.629 2 INFO nova.compute.manager [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Took 14.91 seconds to build instance.#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.679 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.679 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.679 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.679 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.680 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.770 2 DEBUG nova.compute.manager [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.770 2 DEBUG oslo_concurrency.lockutils [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.771 2 DEBUG oslo_concurrency.lockutils [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.771 2 DEBUG oslo_concurrency.lockutils [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.771 2 DEBUG nova.compute.manager [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] No waiting events found dispatching network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.771 2 WARNING nova.compute.manager [req-8895f43c-3315-44be-b0d8-88d8fe76cd93 req-985fd6aa-0831-4be8-811c-7a6fbf88e778 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received unexpected event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 for instance with vm_state active and task_state None.#033[00m
Oct  2 05:22:16 np0005465604 nova_compute[260603]: 2025-10-02 09:22:16.805 2 DEBUG oslo_concurrency.lockutils [None req-dac39aa8-318f-40a5-9b2a-ae7c7bd5fd89 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:22:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/823752995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.269 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.454 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.455 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.460 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.460 2 DEBUG nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.638 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.640 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3273MB free_disk=59.92182922363281GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.640 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.641 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.872 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.872 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Instance 1e3be288-5261-4a77-a127-f7bf088caf01 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.873 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.873 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:22:17 np0005465604 nova_compute[260603]: 2025-10-02 09:22:17.936 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:22:18 np0005465604 nova_compute[260603]: 2025-10-02 09:22:18.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:22:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508554890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:22:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  2 05:22:18 np0005465604 nova_compute[260603]: 2025-10-02 09:22:18.436 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:22:18 np0005465604 nova_compute[260603]: 2025-10-02 09:22:18.443 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:22:18 np0005465604 nova_compute[260603]: 2025-10-02 09:22:18.646 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:22:18 np0005465604 nova_compute[260603]: 2025-10-02 09:22:18.757 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:22:18 np0005465604 nova_compute[260603]: 2025-10-02 09:22:18.758 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:19 np0005465604 nova_compute[260603]: 2025-10-02 09:22:19.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  2 05:22:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:22:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859768270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:22:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:22:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859768270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:22:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct  2 05:22:22 np0005465604 nova_compute[260603]: 2025-10-02 09:22:22.753 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:22:22 np0005465604 nova_compute[260603]: 2025-10-02 09:22:22.754 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:22:23 np0005465604 nova_compute[260603]: 2025-10-02 09:22:23.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct  2 05:22:24 np0005465604 nova_compute[260603]: 2025-10-02 09:22:24.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct  2 05:22:26 np0005465604 nova_compute[260603]: 2025-10-02 09:22:26.455 2 DEBUG nova.compute.manager [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-changed-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:22:26 np0005465604 nova_compute[260603]: 2025-10-02 09:22:26.456 2 DEBUG nova.compute.manager [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing instance network info cache due to event network-changed-bfc1331e-8260-43ad-b409-38c22a730429. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:22:26 np0005465604 nova_compute[260603]: 2025-10-02 09:22:26.456 2 DEBUG oslo_concurrency.lockutils [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:22:26 np0005465604 nova_compute[260603]: 2025-10-02 09:22:26.457 2 DEBUG oslo_concurrency.lockutils [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:22:26 np0005465604 nova_compute[260603]: 2025-10-02 09:22:26.458 2 DEBUG nova.network.neutron [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing network info cache for port bfc1331e-8260-43ad-b409-38c22a730429 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:22:27 np0005465604 nova_compute[260603]: 2025-10-02 09:22:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:22:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:22:28
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.log', 'vms']
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:22:28 np0005465604 nova_compute[260603]: 2025-10-02 09:22:28.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 99 op/s
Oct  2 05:22:28 np0005465604 nova_compute[260603]: 2025-10-02 09:22:28.519 2 DEBUG nova.network.neutron [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updated VIF entry in instance network info cache for port bfc1331e-8260-43ad-b409-38c22a730429. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:22:28 np0005465604 nova_compute[260603]: 2025-10-02 09:22:28.520 2 DEBUG nova.network.neutron [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [{"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:22:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:22:28 np0005465604 nova_compute[260603]: 2025-10-02 09:22:28.701 2 DEBUG oslo_concurrency.lockutils [req-c56a3be1-119a-45e1-b35f-379a96f925c4 req-24171d49-a394-41b7-babe-8c3f9dea59f5 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:22:29 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:29Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5c:40:0b 10.100.0.3
Oct  2 05:22:29 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:29Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5c:40:0b 10.100.0.3
Oct  2 05:22:29 np0005465604 nova_compute[260603]: 2025-10-02 09:22:29.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 181 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Oct  2 05:22:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct  2 05:22:33 np0005465604 nova_compute[260603]: 2025-10-02 09:22:33.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 05:22:34 np0005465604 nova_compute[260603]: 2025-10-02 09:22:34.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:34.860 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:34.861 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:34.861 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:36 np0005465604 podman[438679]: 2025-10-02 09:22:36.014685623 +0000 UTC m=+0.070335238 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 05:22:36 np0005465604 podman[438678]: 2025-10-02 09:22:36.044792914 +0000 UTC m=+0.105597630 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:22:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 05:22:38 np0005465604 nova_compute[260603]: 2025-10-02 09:22:38.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015183553306309825 of space, bias 1.0, pg target 0.45550659918929476 quantized to 32 (current 32)
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:22:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:22:39 np0005465604 nova_compute[260603]: 2025-10-02 09:22:39.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:40 np0005465604 podman[438723]: 2025-10-02 09:22:40.026718267 +0000 UTC m=+0.078243915 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 05:22:40 np0005465604 podman[438724]: 2025-10-02 09:22:40.048762386 +0000 UTC m=+0.093107069 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 05:22:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 248 KiB/s rd, 708 KiB/s wr, 36 op/s
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.526434) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960526537, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1650, "num_deletes": 251, "total_data_size": 2698688, "memory_usage": 2742632, "flush_reason": "Manual Compaction"}
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960550041, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2640225, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64012, "largest_seqno": 65661, "table_properties": {"data_size": 2632528, "index_size": 4639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15627, "raw_average_key_size": 19, "raw_value_size": 2617304, "raw_average_value_size": 3346, "num_data_blocks": 208, "num_entries": 782, "num_filter_entries": 782, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396785, "oldest_key_time": 1759396785, "file_creation_time": 1759396960, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 23640 microseconds, and 11140 cpu microseconds.
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.550088) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2640225 bytes OK
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.550112) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.555458) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.555473) EVENT_LOG_v1 {"time_micros": 1759396960555468, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.555490) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2691567, prev total WAL file size 2691567, number of live WAL files 2.
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.556475) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2578KB)], [152(8552KB)]
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960556546, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11398461, "oldest_snapshot_seqno": -1}
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8369 keys, 9666963 bytes, temperature: kUnknown
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960609181, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9666963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9614623, "index_size": 30339, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 218710, "raw_average_key_size": 26, "raw_value_size": 9469009, "raw_average_value_size": 1131, "num_data_blocks": 1174, "num_entries": 8369, "num_filter_entries": 8369, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396960, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.609499) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9666963 bytes
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.611051) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.2 rd, 183.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 8883, records dropped: 514 output_compression: NoCompression
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.611076) EVENT_LOG_v1 {"time_micros": 1759396960611064, "job": 94, "event": "compaction_finished", "compaction_time_micros": 52732, "compaction_time_cpu_micros": 24090, "output_level": 6, "num_output_files": 1, "total_output_size": 9666963, "num_input_records": 8883, "num_output_records": 8369, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960611794, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396960613464, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.556316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.613519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.613527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.613530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.613533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:22:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:22:40.613536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:22:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 708 KiB/s wr, 37 op/s
Oct  2 05:22:43 np0005465604 nova_compute[260603]: 2025-10-02 09:22:43.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:43.539 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:22:43 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:43.540 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:22:43 np0005465604 nova_compute[260603]: 2025-10-02 09:22:43.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 57 KiB/s wr, 9 op/s
Oct  2 05:22:44 np0005465604 nova_compute[260603]: 2025-10-02 09:22:44.473 2 DEBUG nova.compute.manager [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-changed-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:22:44 np0005465604 nova_compute[260603]: 2025-10-02 09:22:44.474 2 DEBUG nova.compute.manager [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing instance network info cache due to event network-changed-bfc1331e-8260-43ad-b409-38c22a730429. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:22:44 np0005465604 nova_compute[260603]: 2025-10-02 09:22:44.474 2 DEBUG oslo_concurrency.lockutils [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:22:44 np0005465604 nova_compute[260603]: 2025-10-02 09:22:44.474 2 DEBUG oslo_concurrency.lockutils [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:22:44 np0005465604 nova_compute[260603]: 2025-10-02 09:22:44.474 2 DEBUG nova.network.neutron [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Refreshing network info cache for port bfc1331e-8260-43ad-b409-38c22a730429 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:22:44 np0005465604 nova_compute[260603]: 2025-10-02 09:22:44.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:44 np0005465604 nova_compute[260603]: 2025-10-02 09:22:44.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:22:44 np0005465604 nova_compute[260603]: 2025-10-02 09:22:44.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.051 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.052 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.053 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.053 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.054 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.055 2 INFO nova.compute.manager [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Terminating instance#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.056 2 DEBUG nova.compute.manager [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:22:45 np0005465604 kernel: tapbfc1331e-82 (unregistering): left promiscuous mode
Oct  2 05:22:45 np0005465604 NetworkManager[45129]: <info>  [1759396965.1129] device (tapbfc1331e-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:45 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:45Z|01695|binding|INFO|Releasing lport bfc1331e-8260-43ad-b409-38c22a730429 from this chassis (sb_readonly=0)
Oct  2 05:22:45 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:45Z|01696|binding|INFO|Setting lport bfc1331e-8260-43ad-b409-38c22a730429 down in Southbound
Oct  2 05:22:45 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:45Z|01697|binding|INFO|Removing iface tapbfc1331e-82 ovn-installed in OVS
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.165 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:40:0b 10.100.0.3 2001:db8::f816:3eff:fe5c:400b'], port_security=['fa:16:3e:5c:40:0b 10.100.0.3 2001:db8::f816:3eff:fe5c:400b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe5c:400b/64', 'neutron:device_id': '1e3be288-5261-4a77-a127-f7bf088caf01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5b689ca1-3c9b-4813-8474-00abea3332c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=886ca98a-7662-4ca0-8c8e-c35442cbbef0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=bfc1331e-8260-43ad-b409-38c22a730429) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.166 162357 INFO neutron.agent.ovn.metadata.agent [-] Port bfc1331e-8260-43ad-b409-38c22a730429 in datapath bb79a700-778c-4189-bea4-a6e50510de5b unbound from our chassis#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.167 162357 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79a700-778c-4189-bea4-a6e50510de5b#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.186 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[63050e68-a87c-493a-8238-56a818c184ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:45 np0005465604 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Oct  2 05:22:45 np0005465604 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 14.666s CPU time.
Oct  2 05:22:45 np0005465604 systemd-machined[214636]: Machine qemu-186-instance-00000098 terminated.
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.215 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[360ce56b-2ea7-4072-b913-6136f453bb55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.219 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a97ea5-725d-4258-8dbd-9aa436e52062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.245 276965 DEBUG oslo.privsep.daemon [-] privsep: reply[95211e31-6fb8-4d57-a367-001e15e160bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.262 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[aa56d419-55a9-4bc3-acd9-861ed5141754]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79a700-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:0e:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2940, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777310, 'reachable_time': 42986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 438929, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.280 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[fcebfb21-7b0a-463a-aebd-285513d65815]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb79a700-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777321, 'tstamp': 777321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438938, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb79a700-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777323, 'tstamp': 777323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 438938, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.282 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79a700-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.291 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79a700-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.291 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.291 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79a700-70, col_values=(('external_ids', {'iface-id': 'f52ffbc5-b75c-4a8b-a490-21571dd7145a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:45 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:45.291 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.295 2 INFO nova.virt.libvirt.driver [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Instance destroyed successfully.#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.295 2 DEBUG nova.objects.instance [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid 1e3be288-5261-4a77-a127-f7bf088caf01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:22:45 np0005465604 podman[438952]: 2025-10-02 09:22:45.37472411 +0000 UTC m=+0.061127080 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.411 2 DEBUG nova.virt.libvirt.vif [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:22:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-421018847',display_name='tempest-TestGettingAddress-server-421018847',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-421018847',id=152,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:22:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-77p3vre3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:22:16Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=1e3be288-5261-4a77-a127-f7bf088caf01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.411 2 DEBUG nova.network.os_vif_util [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.412 2 DEBUG nova.network.os_vif_util [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.412 2 DEBUG os_vif [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfc1331e-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.421 2 INFO os_vif [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:40:0b,bridge_name='br-int',has_traffic_filtering=True,id=bfc1331e-8260-43ad-b409-38c22a730429,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfc1331e-82')#033[00m
Oct  2 05:22:45 np0005465604 podman[438952]: 2025-10-02 09:22:45.475504338 +0000 UTC m=+0.161907288 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 05:22:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.871 2 INFO nova.virt.libvirt.driver [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Deleting instance files /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01_del#033[00m
Oct  2 05:22:45 np0005465604 nova_compute[260603]: 2025-10-02 09:22:45.872 2 INFO nova.virt.libvirt.driver [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Deletion of /var/lib/nova/instances/1e3be288-5261-4a77-a127-f7bf088caf01_del complete#033[00m
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.135 2 INFO nova.compute.manager [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Took 1.08 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.135 2 DEBUG oslo.service.loopingcall [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.136 2 DEBUG nova.compute.manager [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.136 2 DEBUG nova.network.neutron [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:22:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:22:46 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e0250a9e-0c5f-4921-99b3-46374d18fd56 does not exist
Oct  2 05:22:46 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4cf7b506-6ace-43c2-9ca2-6347c2dc4c9a does not exist
Oct  2 05:22:46 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 31cc5b5e-69fa-42ed-ab38-7086fe52e636 does not exist
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:22:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.894 2 DEBUG nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-unplugged-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.895 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.895 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.895 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.896 2 DEBUG nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] No waiting events found dispatching network-vif-unplugged-bfc1331e-8260-43ad-b409-38c22a730429 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.896 2 DEBUG nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-unplugged-bfc1331e-8260-43ad-b409-38c22a730429 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.896 2 DEBUG nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.896 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.897 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.897 2 DEBUG oslo_concurrency.lockutils [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.897 2 DEBUG nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] No waiting events found dispatching network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:22:46 np0005465604 nova_compute[260603]: 2025-10-02 09:22:46.898 2 WARNING nova.compute.manager [req-8e9b65d5-0346-4e1b-ae25-80abc8323e89 req-fe3e6977-bd6d-4305-a617-68b7218aa7f0 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received unexpected event network-vif-plugged-bfc1331e-8260-43ad-b409-38c22a730429 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:22:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:22:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:22:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 05:22:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:22:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:22:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:22:47 np0005465604 podman[439398]: 2025-10-02 09:22:47.49065593 +0000 UTC m=+0.040086842 container create 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:22:47 np0005465604 systemd[1]: Started libpod-conmon-8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f.scope.
Oct  2 05:22:47 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:47.542 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:22:47 np0005465604 podman[439398]: 2025-10-02 09:22:47.472309648 +0000 UTC m=+0.021740580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:22:47 np0005465604 podman[439398]: 2025-10-02 09:22:47.576398999 +0000 UTC m=+0.125829921 container init 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 05:22:47 np0005465604 podman[439398]: 2025-10-02 09:22:47.58508481 +0000 UTC m=+0.134515722 container start 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 05:22:47 np0005465604 podman[439398]: 2025-10-02 09:22:47.588165927 +0000 UTC m=+0.137596859 container attach 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 05:22:47 np0005465604 affectionate_thompson[439415]: 167 167
Oct  2 05:22:47 np0005465604 podman[439398]: 2025-10-02 09:22:47.592096579 +0000 UTC m=+0.141527491 container died 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 05:22:47 np0005465604 systemd[1]: libpod-8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f.scope: Deactivated successfully.
Oct  2 05:22:47 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8245cb5c949930e21268ec1e8e73c53d0ae79ffaa5faef695a21a13b49d09f97-merged.mount: Deactivated successfully.
Oct  2 05:22:47 np0005465604 podman[439398]: 2025-10-02 09:22:47.639407387 +0000 UTC m=+0.188838299 container remove 8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 05:22:47 np0005465604 systemd[1]: libpod-conmon-8bf35823daeb0a74b4359b58118c3df3ed5a4437b74d2360ba2960241600627f.scope: Deactivated successfully.
Oct  2 05:22:47 np0005465604 podman[439438]: 2025-10-02 09:22:47.811007337 +0000 UTC m=+0.045887504 container create bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:22:47 np0005465604 systemd[1]: Started libpod-conmon-bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a.scope.
Oct  2 05:22:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:22:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:47 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:47 np0005465604 podman[439438]: 2025-10-02 09:22:47.880317372 +0000 UTC m=+0.115197549 container init bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:22:47 np0005465604 podman[439438]: 2025-10-02 09:22:47.789737652 +0000 UTC m=+0.024617839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:22:47 np0005465604 podman[439438]: 2025-10-02 09:22:47.890939484 +0000 UTC m=+0.125819651 container start bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct  2 05:22:47 np0005465604 podman[439438]: 2025-10-02 09:22:47.895458575 +0000 UTC m=+0.130338772 container attach bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:22:48 np0005465604 nova_compute[260603]: 2025-10-02 09:22:48.062 2 DEBUG nova.network.neutron [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updated VIF entry in instance network info cache for port bfc1331e-8260-43ad-b409-38c22a730429. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:22:48 np0005465604 nova_compute[260603]: 2025-10-02 09:22:48.063 2 DEBUG nova.network.neutron [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [{"id": "bfc1331e-8260-43ad-b409-38c22a730429", "address": "fa:16:3e:5c:40:0b", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:400b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfc1331e-82", "ovs_interfaceid": "bfc1331e-8260-43ad-b409-38c22a730429", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:22:48 np0005465604 nova_compute[260603]: 2025-10-02 09:22:48.281 2 DEBUG oslo_concurrency.lockutils [req-213f3bdb-8454-49df-b269-c506e243261d req-7e3f291b-7a92-48ef-b5a3-9239058985d8 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-1e3be288-5261-4a77-a127-f7bf088caf01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:22:48 np0005465604 nova_compute[260603]: 2025-10-02 09:22:48.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct  2 05:22:48 np0005465604 intelligent_albattani[439454]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:22:48 np0005465604 intelligent_albattani[439454]: --> relative data size: 1.0
Oct  2 05:22:48 np0005465604 intelligent_albattani[439454]: --> All data devices are unavailable
Oct  2 05:22:48 np0005465604 systemd[1]: libpod-bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a.scope: Deactivated successfully.
Oct  2 05:22:48 np0005465604 podman[439438]: 2025-10-02 09:22:48.926152518 +0000 UTC m=+1.161032695 container died bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 05:22:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-38d5e8f9587a2bd9677ef69d07177d6a8af233328364eabcaabb37fb7a6ff74f-merged.mount: Deactivated successfully.
Oct  2 05:22:49 np0005465604 podman[439438]: 2025-10-02 09:22:49.087008632 +0000 UTC m=+1.321888799 container remove bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_albattani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:22:49 np0005465604 systemd[1]: libpod-conmon-bdf766202a0dd50d02be62a8b09b519d414434bf6b1bf725e0d8806a816f5a1a.scope: Deactivated successfully.
Oct  2 05:22:49 np0005465604 podman[439637]: 2025-10-02 09:22:49.749152304 +0000 UTC m=+0.056873207 container create 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:22:49 np0005465604 systemd[1]: Started libpod-conmon-5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2.scope.
Oct  2 05:22:49 np0005465604 podman[439637]: 2025-10-02 09:22:49.716094361 +0000 UTC m=+0.023815284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:22:49 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:22:49 np0005465604 podman[439637]: 2025-10-02 09:22:49.903007339 +0000 UTC m=+0.210728262 container init 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 05:22:49 np0005465604 podman[439637]: 2025-10-02 09:22:49.909244774 +0000 UTC m=+0.216965677 container start 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 05:22:49 np0005465604 recursing_sinoussi[439653]: 167 167
Oct  2 05:22:49 np0005465604 systemd[1]: libpod-5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2.scope: Deactivated successfully.
Oct  2 05:22:49 np0005465604 podman[439637]: 2025-10-02 09:22:49.956569073 +0000 UTC m=+0.264289976 container attach 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 05:22:49 np0005465604 podman[439637]: 2025-10-02 09:22:49.956963065 +0000 UTC m=+0.264683968 container died 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 05:22:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6908a33c4e64b53782267465aaaa1a40cce2138c05deeec1ed918626d3aff8e4-merged.mount: Deactivated successfully.
Oct  2 05:22:50 np0005465604 podman[439637]: 2025-10-02 09:22:50.187076382 +0000 UTC m=+0.494797285 container remove 5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_sinoussi, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:22:50 np0005465604 systemd[1]: libpod-conmon-5781cb9456a5657930a57c8f21f765fcd991a99f3bd288d9cb9532e9b9a18de2.scope: Deactivated successfully.
Oct  2 05:22:50 np0005465604 nova_compute[260603]: 2025-10-02 09:22:50.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Oct  2 05:22:50 np0005465604 podman[439679]: 2025-10-02 09:22:50.441789258 +0000 UTC m=+0.059830000 container create 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 05:22:50 np0005465604 nova_compute[260603]: 2025-10-02 09:22:50.444 2 DEBUG nova.network.neutron [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:22:50 np0005465604 systemd[1]: Started libpod-conmon-3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198.scope.
Oct  2 05:22:50 np0005465604 podman[439679]: 2025-10-02 09:22:50.416428535 +0000 UTC m=+0.034469297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:22:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:22:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ab1eefe931ea7df4e54d553ba1ae486c8e71202a0c7f0d75b90a7969aa6e5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ab1eefe931ea7df4e54d553ba1ae486c8e71202a0c7f0d75b90a7969aa6e5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ab1eefe931ea7df4e54d553ba1ae486c8e71202a0c7f0d75b90a7969aa6e5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91ab1eefe931ea7df4e54d553ba1ae486c8e71202a0c7f0d75b90a7969aa6e5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:50 np0005465604 podman[439679]: 2025-10-02 09:22:50.533437821 +0000 UTC m=+0.151478553 container init 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 05:22:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:50 np0005465604 podman[439679]: 2025-10-02 09:22:50.540826721 +0000 UTC m=+0.158867433 container start 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:22:50 np0005465604 podman[439679]: 2025-10-02 09:22:50.544194857 +0000 UTC m=+0.162235569 container attach 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:22:50 np0005465604 nova_compute[260603]: 2025-10-02 09:22:50.621 2 DEBUG nova.compute.manager [req-ac09e89e-26f9-4a0a-bce4-66c0eeb16abf req-c9ab922f-b97a-42a2-90e0-490d8f0d553a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Received event network-vif-deleted-bfc1331e-8260-43ad-b409-38c22a730429 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:22:50 np0005465604 nova_compute[260603]: 2025-10-02 09:22:50.623 2 INFO nova.compute.manager [req-ac09e89e-26f9-4a0a-bce4-66c0eeb16abf req-c9ab922f-b97a-42a2-90e0-490d8f0d553a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Neutron deleted interface bfc1331e-8260-43ad-b409-38c22a730429; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:22:50 np0005465604 nova_compute[260603]: 2025-10-02 09:22:50.623 2 DEBUG nova.network.neutron [req-ac09e89e-26f9-4a0a-bce4-66c0eeb16abf req-c9ab922f-b97a-42a2-90e0-490d8f0d553a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:22:50 np0005465604 nova_compute[260603]: 2025-10-02 09:22:50.731 2 INFO nova.compute.manager [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Took 4.60 seconds to deallocate network for instance.#033[00m
Oct  2 05:22:50 np0005465604 nova_compute[260603]: 2025-10-02 09:22:50.813 2 DEBUG nova.compute.manager [req-ac09e89e-26f9-4a0a-bce4-66c0eeb16abf req-c9ab922f-b97a-42a2-90e0-490d8f0d553a 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Detach interface failed, port_id=bfc1331e-8260-43ad-b409-38c22a730429, reason: Instance 1e3be288-5261-4a77-a127-f7bf088caf01 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:22:50 np0005465604 nova_compute[260603]: 2025-10-02 09:22:50.928 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:50 np0005465604 nova_compute[260603]: 2025-10-02 09:22:50.929 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:51 np0005465604 nova_compute[260603]: 2025-10-02 09:22:51.012 2 DEBUG oslo_concurrency.processutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]: {
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:    "0": [
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:        {
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "devices": [
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "/dev/loop3"
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            ],
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_name": "ceph_lv0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_size": "21470642176",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "name": "ceph_lv0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "tags": {
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.cluster_name": "ceph",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.crush_device_class": "",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.encrypted": "0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.osd_id": "0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.type": "block",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.vdo": "0"
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            },
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "type": "block",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "vg_name": "ceph_vg0"
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:        }
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:    ],
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:    "1": [
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:        {
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "devices": [
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "/dev/loop4"
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            ],
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_name": "ceph_lv1",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_size": "21470642176",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "name": "ceph_lv1",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "tags": {
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.cluster_name": "ceph",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.crush_device_class": "",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.encrypted": "0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.osd_id": "1",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.type": "block",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.vdo": "0"
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            },
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "type": "block",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "vg_name": "ceph_vg1"
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:        }
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:    ],
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:    "2": [
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:        {
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "devices": [
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "/dev/loop5"
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            ],
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_name": "ceph_lv2",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_size": "21470642176",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "name": "ceph_lv2",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "tags": {
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.cluster_name": "ceph",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.crush_device_class": "",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.encrypted": "0",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.osd_id": "2",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.type": "block",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:                "ceph.vdo": "0"
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            },
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "type": "block",
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:            "vg_name": "ceph_vg2"
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:        }
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]:    ]
Oct  2 05:22:51 np0005465604 goofy_jemison[439696]: }
Oct  2 05:22:51 np0005465604 systemd[1]: libpod-3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198.scope: Deactivated successfully.
Oct  2 05:22:51 np0005465604 podman[439679]: 2025-10-02 09:22:51.343819272 +0000 UTC m=+0.961860004 container died 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:22:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-91ab1eefe931ea7df4e54d553ba1ae486c8e71202a0c7f0d75b90a7969aa6e5a-merged.mount: Deactivated successfully.
Oct  2 05:22:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:22:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4135541183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:22:51 np0005465604 nova_compute[260603]: 2025-10-02 09:22:51.609 2 DEBUG oslo_concurrency.processutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:22:51 np0005465604 nova_compute[260603]: 2025-10-02 09:22:51.617 2 DEBUG nova.compute.provider_tree [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:22:51 np0005465604 nova_compute[260603]: 2025-10-02 09:22:51.660 2 DEBUG nova.scheduler.client.report [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:22:51 np0005465604 nova_compute[260603]: 2025-10-02 09:22:51.721 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:51 np0005465604 nova_compute[260603]: 2025-10-02 09:22:51.806 2 INFO nova.scheduler.client.report [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance 1e3be288-5261-4a77-a127-f7bf088caf01#033[00m
Oct  2 05:22:51 np0005465604 podman[439679]: 2025-10-02 09:22:51.942293585 +0000 UTC m=+1.560334337 container remove 3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 05:22:51 np0005465604 systemd[1]: libpod-conmon-3986f384649a1d3c29e0f0d3d4b97231d0cc51a05715f3114294c8745f72b198.scope: Deactivated successfully.
Oct  2 05:22:52 np0005465604 nova_compute[260603]: 2025-10-02 09:22:52.125 2 DEBUG oslo_concurrency.lockutils [None req-23b31f34-ae36-44b6-931d-ce4828ed7a55 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "1e3be288-5261-4a77-a127-f7bf088caf01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct  2 05:22:52 np0005465604 podman[439883]: 2025-10-02 09:22:52.659028523 +0000 UTC m=+0.089607011 container create 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 05:22:52 np0005465604 podman[439883]: 2025-10-02 09:22:52.590399039 +0000 UTC m=+0.020977487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:22:52 np0005465604 systemd[1]: Started libpod-conmon-34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8.scope.
Oct  2 05:22:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:22:52 np0005465604 podman[439883]: 2025-10-02 09:22:52.827416992 +0000 UTC m=+0.257995450 container init 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:22:52 np0005465604 podman[439883]: 2025-10-02 09:22:52.838537039 +0000 UTC m=+0.269115537 container start 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:22:52 np0005465604 reverent_cartwright[439899]: 167 167
Oct  2 05:22:52 np0005465604 systemd[1]: libpod-34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8.scope: Deactivated successfully.
Oct  2 05:22:53 np0005465604 podman[439883]: 2025-10-02 09:22:53.132810051 +0000 UTC m=+0.563388609 container attach 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 05:22:53 np0005465604 podman[439883]: 2025-10-02 09:22:53.13404175 +0000 UTC m=+0.564620248 container died 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:53 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0771d8ef4a65f30c10f07dcb7b7e47b5df531d523fc9622b464034f6b59c2d9a-merged.mount: Deactivated successfully.
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.578 2 DEBUG nova.compute.manager [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.578 2 DEBUG nova.compute.manager [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing instance network info cache due to event network-changed-ce3d61f6-90a9-4869-ac95-256f110eea41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.579 2 DEBUG oslo_concurrency.lockutils [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.579 2 DEBUG oslo_concurrency.lockutils [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquired lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.579 2 DEBUG nova.network.neutron [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Refreshing network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.801 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.802 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.803 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.804 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.805 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.807 2 INFO nova.compute.manager [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Terminating instance#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.809 2 DEBUG nova.compute.manager [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:22:53 np0005465604 podman[439883]: 2025-10-02 09:22:53.811154399 +0000 UTC m=+1.241732837 container remove 34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:22:53 np0005465604 systemd[1]: libpod-conmon-34b4de9880cdd0bd1303c97a8c503f205f675e673fcc53e815fcaddc1f7759a8.scope: Deactivated successfully.
Oct  2 05:22:53 np0005465604 kernel: tapce3d61f6-90 (unregistering): left promiscuous mode
Oct  2 05:22:53 np0005465604 NetworkManager[45129]: <info>  [1759396973.8942] device (tapce3d61f6-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 05:22:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:53Z|01698|binding|INFO|Releasing lport ce3d61f6-90a9-4869-ac95-256f110eea41 from this chassis (sb_readonly=0)
Oct  2 05:22:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:53Z|01699|binding|INFO|Setting lport ce3d61f6-90a9-4869-ac95-256f110eea41 down in Southbound
Oct  2 05:22:53 np0005465604 ovn_controller[152344]: 2025-10-02T09:22:53Z|01700|binding|INFO|Removing iface tapce3d61f6-90 ovn-installed in OVS
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:53.927 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:e3:da 10.100.0.4 2001:db8::f816:3eff:fec4:e3da'], port_security=['fa:16:3e:c4:e3:da 10.100.0.4 2001:db8::f816:3eff:fec4:e3da'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fec4:e3da/64', 'neutron:device_id': 'ccf01ee2-f5c6-4802-a9dd-f8c0423d4914', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79a700-778c-4189-bea4-a6e50510de5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '674f53964f0a4a0d9e9b5ebfaf4248b4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5b689ca1-3c9b-4813-8474-00abea3332c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=886ca98a-7662-4ca0-8c8e-c35442cbbef0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>], logical_port=ce3d61f6-90a9-4869-ac95-256f110eea41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f3313a91b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:22:53 np0005465604 nova_compute[260603]: 2025-10-02 09:22:53.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:53.934 162357 INFO neutron.agent.ovn.metadata.agent [-] Port ce3d61f6-90a9-4869-ac95-256f110eea41 in datapath bb79a700-778c-4189-bea4-a6e50510de5b unbound from our chassis#033[00m
Oct  2 05:22:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:53.934 162357 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bb79a700-778c-4189-bea4-a6e50510de5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 05:22:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:53.936 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a101a3-c0d7-4f4c-8ee5-b79aaf61ae5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:53 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:53.937 162357 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b namespace which is not needed anymore#033[00m
Oct  2 05:22:53 np0005465604 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Deactivated successfully.
Oct  2 05:22:53 np0005465604 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Consumed 16.136s CPU time.
Oct  2 05:22:53 np0005465604 systemd-machined[214636]: Machine qemu-185-instance-00000097 terminated.
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.056 2 INFO nova.virt.libvirt.driver [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Instance destroyed successfully.#033[00m
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.056 2 DEBUG nova.objects.instance [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lazy-loading 'resources' on Instance uuid ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.084 2 DEBUG nova.virt.libvirt.vif [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T09:21:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-885089396',display_name='tempest-TestGettingAddress-server-885089396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-885089396',id=151,image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJ4erUkDf9pFYvis3BxPTrsgrZAeghsAW2aYbDdKvJxPUtfd2zcNxkwWc27ijo1XxIL1GH95TwtVkIOZnFQCr789wREwZXl2iwWdFQxsXXMtQjjBE9pyaOAIR5A+kumzQ==',key_name='tempest-TestGettingAddress-2084952376',keypairs=<?>,launch_index=0,launched_at=2025-10-02T09:21:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='674f53964f0a4a0d9e9b5ebfaf4248b4',ramdisk_id='',reservation_id='r-mdzn0750',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='420393e6-d62b-4055-afb9-674967e2c2b0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-44642193',owner_user_name='tempest-TestGettingAddress-44642193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T09:21:36Z,user_data=None,user_id='b7765a573b734de786f94b675c6ab654',uuid=ccf01ee2-f5c6-4802-a9dd-f8c0423d4914,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.084 2 DEBUG nova.network.os_vif_util [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converting VIF {"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.085 2 DEBUG nova.network.os_vif_util [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.085 2 DEBUG os_vif [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.087 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce3d61f6-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.092 2 INFO os_vif [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:e3:da,bridge_name='br-int',has_traffic_filtering=True,id=ce3d61f6-90a9-4869-ac95-256f110eea41,network=Network(bb79a700-778c-4189-bea4-a6e50510de5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce3d61f6-90')#033[00m
Oct  2 05:22:54 np0005465604 podman[439935]: 2025-10-02 09:22:54.016672538 +0000 UTC m=+0.024775195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:22:54 np0005465604 podman[439935]: 2025-10-02 09:22:54.20885719 +0000 UTC m=+0.216959837 container create b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 05:22:54 np0005465604 systemd[1]: Started libpod-conmon-b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7.scope.
Oct  2 05:22:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:22:54 np0005465604 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [NOTICE]   (437108) : haproxy version is 2.8.14-c23fe91
Oct  2 05:22:54 np0005465604 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [NOTICE]   (437108) : path to executable is /usr/sbin/haproxy
Oct  2 05:22:54 np0005465604 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [WARNING]  (437108) : Exiting Master process...
Oct  2 05:22:54 np0005465604 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [WARNING]  (437108) : Exiting Master process...
Oct  2 05:22:54 np0005465604 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [ALERT]    (437108) : Current worker (437110) exited with code 143 (Terminated)
Oct  2 05:22:54 np0005465604 neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b[437104]: [WARNING]  (437108) : All workers exited. Exiting... (0)
Oct  2 05:22:54 np0005465604 systemd[1]: libpod-63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d.scope: Deactivated successfully.
Oct  2 05:22:54 np0005465604 conmon[437104]: conmon 63df10a98f81980cbadb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d.scope/container/memory.events
Oct  2 05:22:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8291ff0d529a747f2016907affefa6247a3441c4f5e649cb8395a22a54f862eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:54 np0005465604 podman[439987]: 2025-10-02 09:22:54.319538867 +0000 UTC m=+0.063576897 container died 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:22:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8291ff0d529a747f2016907affefa6247a3441c4f5e649cb8395a22a54f862eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8291ff0d529a747f2016907affefa6247a3441c4f5e649cb8395a22a54f862eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8291ff0d529a747f2016907affefa6247a3441c4f5e649cb8395a22a54f862eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:22:54 np0005465604 podman[439935]: 2025-10-02 09:22:54.33815765 +0000 UTC m=+0.346260347 container init b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:22:54 np0005465604 podman[439935]: 2025-10-02 09:22:54.352742385 +0000 UTC m=+0.360845062 container start b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:22:54 np0005465604 podman[439935]: 2025-10-02 09:22:54.357172413 +0000 UTC m=+0.365275080 container attach b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:22:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d-userdata-shm.mount: Deactivated successfully.
Oct  2 05:22:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-a8ec26be00ec6c1706956eeee6591a41a079324f4042ac45aeef1cce56edc2ca-merged.mount: Deactivated successfully.
Oct  2 05:22:54 np0005465604 podman[439987]: 2025-10-02 09:22:54.395396377 +0000 UTC m=+0.139434327 container cleanup 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 05:22:54 np0005465604 systemd[1]: libpod-conmon-63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d.scope: Deactivated successfully.
Oct  2 05:22:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 12 KiB/s wr, 29 op/s
Oct  2 05:22:54 np0005465604 podman[440028]: 2025-10-02 09:22:54.683404203 +0000 UTC m=+0.265069341 container remove 63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 05:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.689 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[48ed838a-aa97-41e2-a757-151f198ecf6c]: (4, ('Thu Oct  2 09:22:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b (63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d)\n63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d\nThu Oct  2 09:22:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b (63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d)\n63df10a98f81980cbadb401b10ba7154f880d2c6bdde15cb15b9bf034655873d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.692 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f1a893-8e77-4953-8ef5-71ffe80d277e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.693 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79a700-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:22:54 np0005465604 kernel: tapbb79a700-70: left promiscuous mode
Oct  2 05:22:54 np0005465604 nova_compute[260603]: 2025-10-02 09:22:54.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.713 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[f16bb836-a7f1-4492-b26e-ec58eed5fff2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.742 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb2ae72-1807-4d94-8126-da1e36b07406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.743 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[39155c83-3807-4151-ae94-1ff5abf6370b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.757 276572 DEBUG oslo.privsep.daemon [-] privsep: reply[02eb93f3-284b-4027-8e4d-23d4bf9464fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777303, 'reachable_time': 22149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 440044, 'error': None, 'target': 'ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:54 np0005465604 systemd[1]: run-netns-ovnmeta\x2dbb79a700\x2d778c\x2d4189\x2dbea4\x2da6e50510de5b.mount: Deactivated successfully.
Oct  2 05:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.760 162690 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bb79a700-778c-4189-bea4-a6e50510de5b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 05:22:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:22:54.760 162690 DEBUG oslo.privsep.daemon [-] privsep: reply[7987dca4-92bf-43e2-9489-d9c7ef96307a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]: {
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "osd_id": 2,
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "type": "bluestore"
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:    },
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "osd_id": 1,
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "type": "bluestore"
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:    },
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "osd_id": 0,
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:        "type": "bluestore"
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]:    }
Oct  2 05:22:55 np0005465604 gifted_montalcini[440002]: }
Oct  2 05:22:55 np0005465604 systemd[1]: libpod-b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7.scope: Deactivated successfully.
Oct  2 05:22:55 np0005465604 podman[439935]: 2025-10-02 09:22:55.297876805 +0000 UTC m=+1.305979442 container died b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 05:22:55 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8291ff0d529a747f2016907affefa6247a3441c4f5e649cb8395a22a54f862eb-merged.mount: Deactivated successfully.
Oct  2 05:22:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.581 2 DEBUG nova.network.neutron [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updated VIF entry in instance network info cache for port ce3d61f6-90a9-4869-ac95-256f110eea41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.582 2 DEBUG nova.network.neutron [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [{"id": "ce3d61f6-90a9-4869-ac95-256f110eea41", "address": "fa:16:3e:c4:e3:da", "network": {"id": "bb79a700-778c-4189-bea4-a6e50510de5b", "bridge": "br-int", "label": "tempest-network-smoke--1747357083", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:e3da", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "674f53964f0a4a0d9e9b5ebfaf4248b4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce3d61f6-90", "ovs_interfaceid": "ce3d61f6-90a9-4869-ac95-256f110eea41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.613 2 DEBUG oslo_concurrency.lockutils [req-340c6717-e11e-49db-a188-96e9139845b9 req-b58200ed-6985-4e94-b070-bf04cd3512d2 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Releasing lock "refresh_cache-ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.673 2 DEBUG nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-unplugged-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.673 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.674 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.674 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.674 2 DEBUG nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] No waiting events found dispatching network-vif-unplugged-ce3d61f6-90a9-4869-ac95-256f110eea41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.675 2 DEBUG nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-unplugged-ce3d61f6-90a9-4869-ac95-256f110eea41 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.675 2 DEBUG nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.675 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Acquiring lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.676 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.676 2 DEBUG oslo_concurrency.lockutils [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.676 2 DEBUG nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] No waiting events found dispatching network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 05:22:55 np0005465604 nova_compute[260603]: 2025-10-02 09:22:55.676 2 WARNING nova.compute.manager [req-aa4eb771-bd1c-46f8-8a5d-b1e1023fbc48 req-5cf28d1f-c5c0-4e76-8f07-47cd483d3640 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received unexpected event network-vif-plugged-ce3d61f6-90a9-4869-ac95-256f110eea41 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 05:22:56 np0005465604 podman[439935]: 2025-10-02 09:22:56.063561771 +0000 UTC m=+2.071664418 container remove b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 05:22:56 np0005465604 systemd[1]: libpod-conmon-b67b2bd6aa5920818a7eff934e0fc26ae6630af60a5a4f9bf65ae2b78fd402f7.scope: Deactivated successfully.
Oct  2 05:22:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:22:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct  2 05:22:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:22:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:22:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:22:56 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 65b55690-b8fd-4633-8337-da5bb0d09b77 does not exist
Oct  2 05:22:56 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4ecfdeba-41f0-46ea-8ca4-c8767e95a744 does not exist
Oct  2 05:22:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:22:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:22:57 np0005465604 nova_compute[260603]: 2025-10-02 09:22:57.741 2 INFO nova.virt.libvirt.driver [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Deleting instance files /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_del#033[00m
Oct  2 05:22:57 np0005465604 nova_compute[260603]: 2025-10-02 09:22:57.743 2 INFO nova.virt.libvirt.driver [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Deletion of /var/lib/nova/instances/ccf01ee2-f5c6-4802-a9dd-f8c0423d4914_del complete#033[00m
Oct  2 05:22:57 np0005465604 nova_compute[260603]: 2025-10-02 09:22:57.879 2 INFO nova.compute.manager [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Took 4.07 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:22:57 np0005465604 nova_compute[260603]: 2025-10-02 09:22:57.879 2 DEBUG oslo.service.loopingcall [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:22:57 np0005465604 nova_compute[260603]: 2025-10-02 09:22:57.880 2 DEBUG nova.compute.manager [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:22:57 np0005465604 nova_compute[260603]: 2025-10-02 09:22:57.880 2 DEBUG nova.network.neutron [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:22:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:22:58 np0005465604 nova_compute[260603]: 2025-10-02 09:22:58.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:22:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 13 KiB/s wr, 53 op/s
Oct  2 05:22:59 np0005465604 nova_compute[260603]: 2025-10-02 09:22:59.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:00 np0005465604 nova_compute[260603]: 2025-10-02 09:23:00.290 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396965.2889314, 1e3be288-5261-4a77-a127-f7bf088caf01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:23:00 np0005465604 nova_compute[260603]: 2025-10-02 09:23:00.291 2 INFO nova.compute.manager [-] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:23:00 np0005465604 nova_compute[260603]: 2025-10-02 09:23:00.341 2 DEBUG nova.compute.manager [None req-36c68962-04f6-4b3b-8c85-2e7b4c562537 - - - - - -] [instance: 1e3be288-5261-4a77-a127-f7bf088caf01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:23:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 12 KiB/s wr, 26 op/s
Oct  2 05:23:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:01 np0005465604 nova_compute[260603]: 2025-10-02 09:23:01.543 2 DEBUG nova.network.neutron [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:23:01 np0005465604 nova_compute[260603]: 2025-10-02 09:23:01.652 2 DEBUG nova.compute.manager [req-c97c6734-5a50-40dc-a5bb-7fe9de5246cb req-abdc2cac-dc9d-4b80-b75a-abacb1baf664 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Received event network-vif-deleted-ce3d61f6-90a9-4869-ac95-256f110eea41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 05:23:01 np0005465604 nova_compute[260603]: 2025-10-02 09:23:01.653 2 INFO nova.compute.manager [req-c97c6734-5a50-40dc-a5bb-7fe9de5246cb req-abdc2cac-dc9d-4b80-b75a-abacb1baf664 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Neutron deleted interface ce3d61f6-90a9-4869-ac95-256f110eea41; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 05:23:01 np0005465604 nova_compute[260603]: 2025-10-02 09:23:01.653 2 DEBUG nova.network.neutron [req-c97c6734-5a50-40dc-a5bb-7fe9de5246cb req-abdc2cac-dc9d-4b80-b75a-abacb1baf664 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:23:01 np0005465604 nova_compute[260603]: 2025-10-02 09:23:01.670 2 INFO nova.compute.manager [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Took 3.79 seconds to deallocate network for instance.#033[00m
Oct  2 05:23:01 np0005465604 nova_compute[260603]: 2025-10-02 09:23:01.688 2 DEBUG nova.compute.manager [req-c97c6734-5a50-40dc-a5bb-7fe9de5246cb req-abdc2cac-dc9d-4b80-b75a-abacb1baf664 36226b06dcf74a778595d195592aa83a e2bd24e2b1194cc8ae6ec425d4f7d63d - - default default] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Detach interface failed, port_id=ce3d61f6-90a9-4869-ac95-256f110eea41, reason: Instance ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 05:23:01 np0005465604 nova_compute[260603]: 2025-10-02 09:23:01.774 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:23:01 np0005465604 nova_compute[260603]: 2025-10-02 09:23:01.775 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:23:01 np0005465604 nova_compute[260603]: 2025-10-02 09:23:01.829 2 DEBUG oslo_concurrency.processutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:23:02 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:23:02 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1776422063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:23:02 np0005465604 nova_compute[260603]: 2025-10-02 09:23:02.317 2 DEBUG oslo_concurrency.processutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:23:02 np0005465604 nova_compute[260603]: 2025-10-02 09:23:02.326 2 DEBUG nova.compute.provider_tree [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:23:02 np0005465604 nova_compute[260603]: 2025-10-02 09:23:02.352 2 DEBUG nova.scheduler.client.report [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:23:02 np0005465604 nova_compute[260603]: 2025-10-02 09:23:02.392 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:23:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Oct  2 05:23:02 np0005465604 nova_compute[260603]: 2025-10-02 09:23:02.447 2 INFO nova.scheduler.client.report [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Deleted allocations for instance ccf01ee2-f5c6-4802-a9dd-f8c0423d4914#033[00m
Oct  2 05:23:02 np0005465604 nova_compute[260603]: 2025-10-02 09:23:02.551 2 DEBUG oslo_concurrency.lockutils [None req-a8b4ff83-e41e-4804-8ad8-93297ef3a592 b7765a573b734de786f94b675c6ab654 674f53964f0a4a0d9e9b5ebfaf4248b4 - - default default] Lock "ccf01ee2-f5c6-4802-a9dd-f8c0423d4914" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:23:03 np0005465604 nova_compute[260603]: 2025-10-02 09:23:03.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:04 np0005465604 nova_compute[260603]: 2025-10-02 09:23:04.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.574643) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985574680, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 498, "num_deletes": 258, "total_data_size": 452825, "memory_usage": 462232, "flush_reason": "Manual Compaction"}
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985604594, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 438089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65662, "largest_seqno": 66159, "table_properties": {"data_size": 435241, "index_size": 819, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6664, "raw_average_key_size": 18, "raw_value_size": 429532, "raw_average_value_size": 1193, "num_data_blocks": 36, "num_entries": 360, "num_filter_entries": 360, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396961, "oldest_key_time": 1759396961, "file_creation_time": 1759396985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 30040 microseconds, and 2394 cpu microseconds.
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.604671) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 438089 bytes OK
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.604705) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.653557) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.653601) EVENT_LOG_v1 {"time_micros": 1759396985653591, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.653629) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 449873, prev total WAL file size 449873, number of live WAL files 2.
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.654722) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373537' seq:72057594037927935, type:22 .. '6C6F676D0033303131' seq:0, type:0; will stop at (end)
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(427KB)], [155(9440KB)]
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985654812, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 10105052, "oldest_snapshot_seqno": -1}
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8202 keys, 9993530 bytes, temperature: kUnknown
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985935876, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 9993530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9941265, "index_size": 30648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 216186, "raw_average_key_size": 26, "raw_value_size": 9797615, "raw_average_value_size": 1194, "num_data_blocks": 1185, "num_entries": 8202, "num_filter_entries": 8202, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759396985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.936226) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9993530 bytes
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.937929) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 35.9 rd, 35.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(45.9) write-amplify(22.8) OK, records in: 8729, records dropped: 527 output_compression: NoCompression
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.937958) EVENT_LOG_v1 {"time_micros": 1759396985937945, "job": 96, "event": "compaction_finished", "compaction_time_micros": 281175, "compaction_time_cpu_micros": 47073, "output_level": 6, "num_output_files": 1, "total_output_size": 9993530, "num_input_records": 8729, "num_output_records": 8202, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985938241, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759396985941522, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.654596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.941778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.941801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.941804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.941807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:23:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:23:05.941810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:23:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:23:06 np0005465604 podman[440161]: 2025-10-02 09:23:06.99564618 +0000 UTC m=+0.063048341 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:23:07 np0005465604 podman[440160]: 2025-10-02 09:23:07.025216313 +0000 UTC m=+0.096050321 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:23:08 np0005465604 nova_compute[260603]: 2025-10-02 09:23:08.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  2 05:23:09 np0005465604 nova_compute[260603]: 2025-10-02 09:23:09.054 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759396974.0529292, ccf01ee2-f5c6-4802-a9dd-f8c0423d4914 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:23:09 np0005465604 nova_compute[260603]: 2025-10-02 09:23:09.055 2 INFO nova.compute.manager [-] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] VM Stopped (Lifecycle Event)#033[00m
Oct  2 05:23:09 np0005465604 nova_compute[260603]: 2025-10-02 09:23:09.083 2 DEBUG nova.compute.manager [None req-9344112f-08ee-4e47-93e7-121a1f3bc9f5 - - - - - -] [instance: ccf01ee2-f5c6-4802-a9dd-f8c0423d4914] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:23:09 np0005465604 nova_compute[260603]: 2025-10-02 09:23:09.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:09 np0005465604 nova_compute[260603]: 2025-10-02 09:23:09.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:09 np0005465604 nova_compute[260603]: 2025-10-02 09:23:09.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:09 np0005465604 nova_compute[260603]: 2025-10-02 09:23:09.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:23:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 341 B/s wr, 2 op/s
Oct  2 05:23:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:11 np0005465604 podman[440207]: 2025-10-02 09:23:11.003966017 +0000 UTC m=+0.073069632 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:23:11 np0005465604 podman[440208]: 2025-10-02 09:23:11.014330691 +0000 UTC m=+0.069342587 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 05:23:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 341 B/s wr, 2 op/s
Oct  2 05:23:13 np0005465604 nova_compute[260603]: 2025-10-02 09:23:13.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:13 np0005465604 nova_compute[260603]: 2025-10-02 09:23:13.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:23:13 np0005465604 nova_compute[260603]: 2025-10-02 09:23:13.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:23:13 np0005465604 nova_compute[260603]: 2025-10-02 09:23:13.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:23:13 np0005465604 nova_compute[260603]: 2025-10-02 09:23:13.537 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:23:13 np0005465604 nova_compute[260603]: 2025-10-02 09:23:13.538 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:23:14 np0005465604 nova_compute[260603]: 2025-10-02 09:23:14.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:16 np0005465604 nova_compute[260603]: 2025-10-02 09:23:16.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:23:16 np0005465604 nova_compute[260603]: 2025-10-02 09:23:16.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:23:16 np0005465604 nova_compute[260603]: 2025-10-02 09:23:16.552 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:23:16 np0005465604 nova_compute[260603]: 2025-10-02 09:23:16.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:23:16 np0005465604 nova_compute[260603]: 2025-10-02 09:23:16.553 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:23:16 np0005465604 nova_compute[260603]: 2025-10-02 09:23:16.553 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:23:16 np0005465604 nova_compute[260603]: 2025-10-02 09:23:16.554 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:23:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:23:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1979432588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:23:17 np0005465604 nova_compute[260603]: 2025-10-02 09:23:17.059 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:23:17 np0005465604 nova_compute[260603]: 2025-10-02 09:23:17.229 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:23:17 np0005465604 nova_compute[260603]: 2025-10-02 09:23:17.230 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3558MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:23:17 np0005465604 nova_compute[260603]: 2025-10-02 09:23:17.230 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:23:17 np0005465604 nova_compute[260603]: 2025-10-02 09:23:17.230 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:23:17 np0005465604 nova_compute[260603]: 2025-10-02 09:23:17.790 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:23:17 np0005465604 nova_compute[260603]: 2025-10-02 09:23:17.791 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:23:17 np0005465604 nova_compute[260603]: 2025-10-02 09:23:17.818 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:23:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:23:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1721013755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:23:18 np0005465604 nova_compute[260603]: 2025-10-02 09:23:18.283 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:23:18 np0005465604 nova_compute[260603]: 2025-10-02 09:23:18.288 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:23:18 np0005465604 nova_compute[260603]: 2025-10-02 09:23:18.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:18 np0005465604 nova_compute[260603]: 2025-10-02 09:23:18.382 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:23:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:18 np0005465604 nova_compute[260603]: 2025-10-02 09:23:18.480 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:23:18 np0005465604 nova_compute[260603]: 2025-10-02 09:23:18.480 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:23:19 np0005465604 nova_compute[260603]: 2025-10-02 09:23:19.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:23:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2264696705' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:23:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:23:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2264696705' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:23:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:22 np0005465604 nova_compute[260603]: 2025-10-02 09:23:22.476 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:23:22 np0005465604 nova_compute[260603]: 2025-10-02 09:23:22.476 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:23:23 np0005465604 nova_compute[260603]: 2025-10-02 09:23:23.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:24 np0005465604 nova_compute[260603]: 2025-10-02 09:23:24.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:24 np0005465604 nova_compute[260603]: 2025-10-02 09:23:24.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:23:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:23:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:23:28
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'volumes', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'images', 'default.rgw.meta']
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:23:28 np0005465604 nova_compute[260603]: 2025-10-02 09:23:28.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:23:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:23:29 np0005465604 nova_compute[260603]: 2025-10-02 09:23:29.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:29 np0005465604 nova_compute[260603]: 2025-10-02 09:23:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:23:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:33 np0005465604 nova_compute[260603]: 2025-10-02 09:23:33.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:34 np0005465604 nova_compute[260603]: 2025-10-02 09:23:34.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:23:34.862 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:23:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:23:34.862 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:23:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:23:34.863 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:23:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:38 np0005465604 podman[440293]: 2025-10-02 09:23:38.023345137 +0000 UTC m=+0.089495457 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:23:38 np0005465604 podman[440292]: 2025-10-02 09:23:38.032626766 +0000 UTC m=+0.098668423 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  2 05:23:38 np0005465604 nova_compute[260603]: 2025-10-02 09:23:38.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:39 np0005465604 nova_compute[260603]: 2025-10-02 09:23:39.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:23:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:23:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:42 np0005465604 podman[440338]: 2025-10-02 09:23:42.017716958 +0000 UTC m=+0.074996823 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:23:42 np0005465604 podman[440337]: 2025-10-02 09:23:42.041732869 +0000 UTC m=+0.099652034 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:23:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:43 np0005465604 nova_compute[260603]: 2025-10-02 09:23:43.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:44 np0005465604 nova_compute[260603]: 2025-10-02 09:23:44.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:46 np0005465604 nova_compute[260603]: 2025-10-02 09:23:46.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:23:46 np0005465604 nova_compute[260603]: 2025-10-02 09:23:46.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:23:48 np0005465604 nova_compute[260603]: 2025-10-02 09:23:48.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:49 np0005465604 nova_compute[260603]: 2025-10-02 09:23:49.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:23:52.762 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:23:52 np0005465604 nova_compute[260603]: 2025-10-02 09:23:52.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:23:52.763 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:23:52 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:23:52.763 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:23:53 np0005465604 nova_compute[260603]: 2025-10-02 09:23:53.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:54 np0005465604 nova_compute[260603]: 2025-10-02 09:23:54.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:23:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:23:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 7f5624d9-1c60-4165-ad0d-2bd9629e2431 does not exist
Oct  2 05:23:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f33e64ac-25ce-4748-ae78-cb185a8c38cd does not exist
Oct  2 05:23:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6baa1be7-be67-4817-ad4c-88e13aec3646 does not exist
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:23:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:23:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:23:58 np0005465604 podman[440648]: 2025-10-02 09:23:58.078993178 +0000 UTC m=+0.044387637 container create 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:23:58 np0005465604 systemd[1]: Started libpod-conmon-5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24.scope.
Oct  2 05:23:58 np0005465604 podman[440648]: 2025-10-02 09:23:58.058557139 +0000 UTC m=+0.023951618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:23:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:23:58 np0005465604 podman[440648]: 2025-10-02 09:23:58.198896663 +0000 UTC m=+0.164291162 container init 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:23:58 np0005465604 podman[440648]: 2025-10-02 09:23:58.213237731 +0000 UTC m=+0.178632190 container start 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:23:58 np0005465604 youthful_rubin[440664]: 167 167
Oct  2 05:23:58 np0005465604 systemd[1]: libpod-5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24.scope: Deactivated successfully.
Oct  2 05:23:58 np0005465604 podman[440648]: 2025-10-02 09:23:58.23051441 +0000 UTC m=+0.195908919 container attach 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:23:58 np0005465604 podman[440648]: 2025-10-02 09:23:58.231986066 +0000 UTC m=+0.197380525 container died 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  2 05:23:58 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d7c83a9a0320756c87a9f8fc7e30989ca0662ba8a44377f33a5af4ebd3cb7f15-merged.mount: Deactivated successfully.
Oct  2 05:23:58 np0005465604 nova_compute[260603]: 2025-10-02 09:23:58.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:58 np0005465604 podman[440648]: 2025-10-02 09:23:58.446007912 +0000 UTC m=+0.411402371 container remove 5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:23:58 np0005465604 systemd[1]: libpod-conmon-5867cf93b208893b764c55d6aa8ea04cf79e7b6a9d49867de38ae6562657db24.scope: Deactivated successfully.
Oct  2 05:23:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:23:58 np0005465604 podman[440690]: 2025-10-02 09:23:58.709283044 +0000 UTC m=+0.111122701 container create 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:23:58 np0005465604 podman[440690]: 2025-10-02 09:23:58.645823163 +0000 UTC m=+0.047662840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:23:58 np0005465604 systemd[1]: Started libpod-conmon-975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3.scope.
Oct  2 05:23:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:23:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:23:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:23:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:23:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:23:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:23:58 np0005465604 podman[440690]: 2025-10-02 09:23:58.852338953 +0000 UTC m=+0.254178600 container init 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 05:23:58 np0005465604 podman[440690]: 2025-10-02 09:23:58.861568091 +0000 UTC m=+0.263407718 container start 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 05:23:58 np0005465604 podman[440690]: 2025-10-02 09:23:58.921459192 +0000 UTC m=+0.323298829 container attach 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:23:59 np0005465604 nova_compute[260603]: 2025-10-02 09:23:59.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:23:59 np0005465604 blissful_ritchie[440706]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:23:59 np0005465604 blissful_ritchie[440706]: --> relative data size: 1.0
Oct  2 05:23:59 np0005465604 blissful_ritchie[440706]: --> All data devices are unavailable
Oct  2 05:23:59 np0005465604 systemd[1]: libpod-975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3.scope: Deactivated successfully.
Oct  2 05:23:59 np0005465604 systemd[1]: libpod-975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3.scope: Consumed 1.060s CPU time.
Oct  2 05:23:59 np0005465604 podman[440690]: 2025-10-02 09:23:59.974352699 +0000 UTC m=+1.376192366 container died 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 05:24:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b9dcdbcddfe321aef264ff73a330b19322b0455464290a828664e1f7e2310d05-merged.mount: Deactivated successfully.
Oct  2 05:24:00 np0005465604 podman[440690]: 2025-10-02 09:24:00.089139454 +0000 UTC m=+1.490979091 container remove 975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:24:00 np0005465604 systemd[1]: libpod-conmon-975d20b894eb7f2c629d4156d780ab02cba347c19803279016c19fe567ee5fd3.scope: Deactivated successfully.
Oct  2 05:24:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:00 np0005465604 podman[440892]: 2025-10-02 09:24:00.835715223 +0000 UTC m=+0.048974781 container create 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  2 05:24:00 np0005465604 systemd[1]: Started libpod-conmon-8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65.scope.
Oct  2 05:24:00 np0005465604 podman[440892]: 2025-10-02 09:24:00.812930721 +0000 UTC m=+0.026190359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:24:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:24:00 np0005465604 podman[440892]: 2025-10-02 09:24:00.930294947 +0000 UTC m=+0.143554575 container init 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:24:00 np0005465604 podman[440892]: 2025-10-02 09:24:00.942176358 +0000 UTC m=+0.155435906 container start 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct  2 05:24:00 np0005465604 podman[440892]: 2025-10-02 09:24:00.945821562 +0000 UTC m=+0.159081160 container attach 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:24:00 np0005465604 systemd[1]: libpod-8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65.scope: Deactivated successfully.
Oct  2 05:24:00 np0005465604 dazzling_kare[440909]: 167 167
Oct  2 05:24:00 np0005465604 podman[440892]: 2025-10-02 09:24:00.951557101 +0000 UTC m=+0.164816659 container died 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  2 05:24:00 np0005465604 conmon[440909]: conmon 8620fb3a7333554d0d1d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65.scope/container/memory.events
Oct  2 05:24:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f7667cbd134fa136678357bdd763197c76959f8b064b4fdf42f78d12e97390a6-merged.mount: Deactivated successfully.
Oct  2 05:24:00 np0005465604 podman[440892]: 2025-10-02 09:24:00.998419345 +0000 UTC m=+0.211678923 container remove 8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 05:24:01 np0005465604 systemd[1]: libpod-conmon-8620fb3a7333554d0d1da304fa9252c07df891ff2b215c3117e75b483cfd6d65.scope: Deactivated successfully.
Oct  2 05:24:01 np0005465604 podman[440933]: 2025-10-02 09:24:01.200848508 +0000 UTC m=+0.060047627 container create bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:24:01 np0005465604 systemd[1]: Started libpod-conmon-bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842.scope.
Oct  2 05:24:01 np0005465604 podman[440933]: 2025-10-02 09:24:01.178077047 +0000 UTC m=+0.037276176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:24:01 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:24:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4424c1361ad9a7643db9e36769d7aae2788678db9d073b796e6625fe7ab2bfb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:24:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4424c1361ad9a7643db9e36769d7aae2788678db9d073b796e6625fe7ab2bfb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:24:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4424c1361ad9a7643db9e36769d7aae2788678db9d073b796e6625fe7ab2bfb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:24:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4424c1361ad9a7643db9e36769d7aae2788678db9d073b796e6625fe7ab2bfb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:24:01 np0005465604 podman[440933]: 2025-10-02 09:24:01.29823787 +0000 UTC m=+0.157437009 container init bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 05:24:01 np0005465604 podman[440933]: 2025-10-02 09:24:01.316031425 +0000 UTC m=+0.175230514 container start bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 05:24:01 np0005465604 podman[440933]: 2025-10-02 09:24:01.320074111 +0000 UTC m=+0.179273210 container attach bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]: {
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:    "0": [
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:        {
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "devices": [
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "/dev/loop3"
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            ],
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_name": "ceph_lv0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_size": "21470642176",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "name": "ceph_lv0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "tags": {
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.cluster_name": "ceph",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.crush_device_class": "",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.encrypted": "0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.osd_id": "0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.type": "block",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.vdo": "0"
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            },
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "type": "block",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "vg_name": "ceph_vg0"
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:        }
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:    ],
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:    "1": [
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:        {
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "devices": [
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "/dev/loop4"
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            ],
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_name": "ceph_lv1",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_size": "21470642176",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "name": "ceph_lv1",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "tags": {
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.cluster_name": "ceph",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.crush_device_class": "",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.encrypted": "0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.osd_id": "1",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.type": "block",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.vdo": "0"
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            },
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "type": "block",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "vg_name": "ceph_vg1"
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:        }
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:    ],
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:    "2": [
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:        {
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "devices": [
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "/dev/loop5"
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            ],
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_name": "ceph_lv2",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_size": "21470642176",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "name": "ceph_lv2",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "tags": {
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.cluster_name": "ceph",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.crush_device_class": "",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.encrypted": "0",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.osd_id": "2",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.type": "block",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:                "ceph.vdo": "0"
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            },
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "type": "block",
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:            "vg_name": "ceph_vg2"
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:        }
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]:    ]
Oct  2 05:24:02 np0005465604 sleepy_meitner[440949]: }
Oct  2 05:24:02 np0005465604 systemd[1]: libpod-bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842.scope: Deactivated successfully.
Oct  2 05:24:02 np0005465604 podman[440933]: 2025-10-02 09:24:02.14313671 +0000 UTC m=+1.002335819 container died bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:24:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4424c1361ad9a7643db9e36769d7aae2788678db9d073b796e6625fe7ab2bfb6-merged.mount: Deactivated successfully.
Oct  2 05:24:02 np0005465604 podman[440933]: 2025-10-02 09:24:02.255984604 +0000 UTC m=+1.115183693 container remove bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_meitner, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 05:24:02 np0005465604 systemd[1]: libpod-conmon-bbca98863c2fae2ef77b27c09a3e822ce6d898de4cb7bf2c9da6adc870457842.scope: Deactivated successfully.
Oct  2 05:24:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:03 np0005465604 podman[441110]: 2025-10-02 09:24:03.144406324 +0000 UTC m=+0.061423600 container create 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 05:24:03 np0005465604 systemd[1]: Started libpod-conmon-907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0.scope.
Oct  2 05:24:03 np0005465604 podman[441110]: 2025-10-02 09:24:03.120419844 +0000 UTC m=+0.037437190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:24:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:24:03 np0005465604 podman[441110]: 2025-10-02 09:24:03.258549079 +0000 UTC m=+0.175566385 container init 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 05:24:03 np0005465604 podman[441110]: 2025-10-02 09:24:03.272560616 +0000 UTC m=+0.189577892 container start 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:24:03 np0005465604 podman[441110]: 2025-10-02 09:24:03.276029475 +0000 UTC m=+0.193046751 container attach 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 05:24:03 np0005465604 upbeat_feistel[441126]: 167 167
Oct  2 05:24:03 np0005465604 systemd[1]: libpod-907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0.scope: Deactivated successfully.
Oct  2 05:24:03 np0005465604 conmon[441126]: conmon 907cf23f0bf31d0a55d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0.scope/container/memory.events
Oct  2 05:24:03 np0005465604 podman[441110]: 2025-10-02 09:24:03.282822447 +0000 UTC m=+0.199839703 container died 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 05:24:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3254f4c137e9cb420bd18445139dd050f4889513cf16ccbfc265c6edd0ab34c2-merged.mount: Deactivated successfully.
Oct  2 05:24:03 np0005465604 podman[441110]: 2025-10-02 09:24:03.328427872 +0000 UTC m=+0.245445158 container remove 907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:24:03 np0005465604 systemd[1]: libpod-conmon-907cf23f0bf31d0a55d626299e08d61ccc392486b0b17ee869b8a5896f9c09f0.scope: Deactivated successfully.
Oct  2 05:24:03 np0005465604 nova_compute[260603]: 2025-10-02 09:24:03.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:24:03 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 05:24:03 np0005465604 podman[441151]: 2025-10-02 09:24:03.53069066 +0000 UTC m=+0.055185966 container create 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:24:03 np0005465604 systemd[1]: Started libpod-conmon-9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81.scope.
Oct  2 05:24:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:24:03 np0005465604 podman[441151]: 2025-10-02 09:24:03.513484532 +0000 UTC m=+0.037979858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:24:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33178b93e44ec6d8cc2d4ed8319c08ca33c73a6226728f6468e18c45820151f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:24:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33178b93e44ec6d8cc2d4ed8319c08ca33c73a6226728f6468e18c45820151f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:24:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33178b93e44ec6d8cc2d4ed8319c08ca33c73a6226728f6468e18c45820151f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:24:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33178b93e44ec6d8cc2d4ed8319c08ca33c73a6226728f6468e18c45820151f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:24:03 np0005465604 podman[441151]: 2025-10-02 09:24:03.621876388 +0000 UTC m=+0.146371724 container init 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 05:24:03 np0005465604 podman[441151]: 2025-10-02 09:24:03.628649459 +0000 UTC m=+0.153144785 container start 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:24:03 np0005465604 podman[441151]: 2025-10-02 09:24:03.632292153 +0000 UTC m=+0.156787479 container attach 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:24:04 np0005465604 nova_compute[260603]: 2025-10-02 09:24:04.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:24:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]: {
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "osd_id": 2,
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "type": "bluestore"
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:    },
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "osd_id": 1,
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "type": "bluestore"
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:    },
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "osd_id": 0,
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:        "type": "bluestore"
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]:    }
Oct  2 05:24:04 np0005465604 stupefied_stonebraker[441168]: }
Oct  2 05:24:04 np0005465604 systemd[1]: libpod-9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81.scope: Deactivated successfully.
Oct  2 05:24:04 np0005465604 podman[441151]: 2025-10-02 09:24:04.695038667 +0000 UTC m=+1.219534063 container died 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:24:04 np0005465604 systemd[1]: libpod-9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81.scope: Consumed 1.070s CPU time.
Oct  2 05:24:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay-33178b93e44ec6d8cc2d4ed8319c08ca33c73a6226728f6468e18c45820151f3-merged.mount: Deactivated successfully.
Oct  2 05:24:04 np0005465604 podman[441151]: 2025-10-02 09:24:04.822866169 +0000 UTC m=+1.347361465 container remove 9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 05:24:04 np0005465604 systemd[1]: libpod-conmon-9e2deb177eb221290f8a602914c5f5487d9cf46ca5e4193cdb8697fb02d49f81.scope: Deactivated successfully.
Oct  2 05:24:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:24:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:24:04 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:24:04 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:24:04 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6dd329f5-f934-49c5-8595-518d02b4c16e does not exist
Oct  2 05:24:04 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 35db8778-9d33-4973-953e-f8d792e4c464 does not exist
Oct  2 05:24:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:24:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:24:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:08 np0005465604 nova_compute[260603]: 2025-10-02 09:24:08.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:24:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:09 np0005465604 podman[441266]: 2025-10-02 09:24:09.030955456 +0000 UTC m=+0.084365856 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:24:09 np0005465604 podman[441265]: 2025-10-02 09:24:09.061901883 +0000 UTC m=+0.125789720 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:24:09 np0005465604 nova_compute[260603]: 2025-10-02 09:24:09.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:24:09 np0005465604 ovn_controller[152344]: 2025-10-02T09:24:09Z|01701|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  2 05:24:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:11 np0005465604 nova_compute[260603]: 2025-10-02 09:24:11.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:24:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:13 np0005465604 podman[441310]: 2025-10-02 09:24:13.008960328 +0000 UTC m=+0.067007154 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 05:24:13 np0005465604 podman[441311]: 2025-10-02 09:24:13.01700165 +0000 UTC m=+0.066345634 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 05:24:13 np0005465604 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  2 05:24:13 np0005465604 systemd[1]: virtsecretd.service: Consumed 1.243s CPU time.
Oct  2 05:24:13 np0005465604 nova_compute[260603]: 2025-10-02 09:24:13.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:24:13 np0005465604 nova_compute[260603]: 2025-10-02 09:24:13.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:24:14 np0005465604 nova_compute[260603]: 2025-10-02 09:24:14.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:24:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:14 np0005465604 nova_compute[260603]: 2025-10-02 09:24:14.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:24:14 np0005465604 nova_compute[260603]: 2025-10-02 09:24:14.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:24:14 np0005465604 nova_compute[260603]: 2025-10-02 09:24:14.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:24:14 np0005465604 nova_compute[260603]: 2025-10-02 09:24:14.635 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:24:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:16 np0005465604 nova_compute[260603]: 2025-10-02 09:24:16.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:24:17 np0005465604 nova_compute[260603]: 2025-10-02 09:24:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:24:17 np0005465604 nova_compute[260603]: 2025-10-02 09:24:17.603 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:24:17 np0005465604 nova_compute[260603]: 2025-10-02 09:24:17.603 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:24:17 np0005465604 nova_compute[260603]: 2025-10-02 09:24:17.603 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:24:17 np0005465604 nova_compute[260603]: 2025-10-02 09:24:17.604 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:24:17 np0005465604 nova_compute[260603]: 2025-10-02 09:24:17.604 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:24:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:24:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/360997613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.043 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.222 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.223 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3543MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.223 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.224 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.431 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.431 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.451 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:24:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:24:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:24:18 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826731496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.903 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.909 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.943 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.945 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 05:24:18 np0005465604 nova_compute[260603]: 2025-10-02 09:24:18.945 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:24:19 np0005465604 nova_compute[260603]: 2025-10-02 09:24:19.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:21 np0005465604 nova_compute[260603]: 2025-10-02 09:24:21.943 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:24:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:24:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1356717144' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:24:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:24:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1356717144' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:24:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:23 np0005465604 nova_compute[260603]: 2025-10-02 09:24:23.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:23 np0005465604 nova_compute[260603]: 2025-10-02 09:24:23.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:24:24 np0005465604 nova_compute[260603]: 2025-10-02 09:24:24.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:24:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:24:28
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.control']
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:28 np0005465604 nova_compute[260603]: 2025-10-02 09:24:28.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:24:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:24:29 np0005465604 nova_compute[260603]: 2025-10-02 09:24:29.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:29 np0005465604 nova_compute[260603]: 2025-10-02 09:24:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:24:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:33 np0005465604 nova_compute[260603]: 2025-10-02 09:24:33.236 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "29e12e23-cd78-4748-a814-e030401a2d37" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:24:33 np0005465604 nova_compute[260603]: 2025-10-02 09:24:33.236 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:24:33 np0005465604 nova_compute[260603]: 2025-10-02 09:24:33.374 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 05:24:33 np0005465604 nova_compute[260603]: 2025-10-02 09:24:33.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:24:33 np0005465604 nova_compute[260603]: 2025-10-02 09:24:33.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  2 05:24:33 np0005465604 nova_compute[260603]: 2025-10-02 09:24:33.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:33 np0005465604 nova_compute[260603]: 2025-10-02 09:24:33.751 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:24:33 np0005465604 nova_compute[260603]: 2025-10-02 09:24:33.751 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:24:33 np0005465604 nova_compute[260603]: 2025-10-02 09:24:33.765 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 05:24:33 np0005465604 nova_compute[260603]: 2025-10-02 09:24:33.766 2 INFO nova.compute.claims [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Claim successful on node compute-0.ctlplane.example.com
Oct  2 05:24:33 np0005465604 nova_compute[260603]: 2025-10-02 09:24:33.833 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  2 05:24:34 np0005465604 nova_compute[260603]: 2025-10-02 09:24:34.074 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:24:34 np0005465604 nova_compute[260603]: 2025-10-02 09:24:34.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:24:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512420117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:24:34 np0005465604 nova_compute[260603]: 2025-10-02 09:24:34.543 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:24:34 np0005465604 nova_compute[260603]: 2025-10-02 09:24:34.551 2 DEBUG nova.compute.provider_tree [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 05:24:34 np0005465604 nova_compute[260603]: 2025-10-02 09:24:34.657 2 DEBUG nova.scheduler.client.report [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 05:24:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:24:34.863 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:24:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:24:34.863 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:24:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:24:34.864 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:24:34 np0005465604 nova_compute[260603]: 2025-10-02 09:24:34.908 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:24:34 np0005465604 nova_compute[260603]: 2025-10-02 09:24:34.909 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 05:24:35 np0005465604 nova_compute[260603]: 2025-10-02 09:24:35.089 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 05:24:35 np0005465604 nova_compute[260603]: 2025-10-02 09:24:35.090 2 DEBUG nova.network.neutron [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 05:24:35 np0005465604 nova_compute[260603]: 2025-10-02 09:24:35.260 2 INFO nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 05:24:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:35 np0005465604 nova_compute[260603]: 2025-10-02 09:24:35.661 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 05:24:35 np0005465604 nova_compute[260603]: 2025-10-02 09:24:35.775 2 DEBUG nova.network.neutron [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 05:24:35 np0005465604 nova_compute[260603]: 2025-10-02 09:24:35.776 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.216 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.218 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.219 2 INFO nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Creating image(s)
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.254 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.294 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.333 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.340 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.450 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.451 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "55fe19af44c773772fc736fd085017e37d622236" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.452 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.453 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "55fe19af44c773772fc736fd085017e37d622236" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.477 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:24:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:24:36 np0005465604 nova_compute[260603]: 2025-10-02 09:24:36.483 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 29e12e23-cd78-4748-a814-e030401a2d37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.269 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 29e12e23-cd78-4748-a814-e030401a2d37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.787s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.366 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] resizing rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.851 2 DEBUG nova.objects.instance [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lazy-loading 'migration_context' on Instance uuid 29e12e23-cd78-4748-a814-e030401a2d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.968 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.969 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Ensure instance console log exists: /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.970 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.970 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.971 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.973 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'encrypted': False, 'image_id': '420393e6-d62b-4055-afb9-674967e2c2b0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.981 2 WARNING nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.988 2 DEBUG nova.virt.libvirt.host [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.989 2 DEBUG nova.virt.libvirt.host [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.993 2 DEBUG nova.virt.libvirt.host [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.993 2 DEBUG nova.virt.libvirt.host [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.994 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.995 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T08:18:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8be08a9b-8bd0-4b57-a123-a3bab9250562',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T08:18:20Z,direct_url=<?>,disk_format='qcow2',id=420393e6-d62b-4055-afb9-674967e2c2b0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='db4e6e87e4854aa8b884db24f9410884',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T08:18:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.995 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.996 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.996 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.997 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.997 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.998 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.998 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.999 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 05:24:37 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.999 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 05:24:38 np0005465604 nova_compute[260603]: 2025-10-02 09:24:37.999 2 DEBUG nova.virt.hardware [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 05:24:38 np0005465604 nova_compute[260603]: 2025-10-02 09:24:38.004 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:24:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:24:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422868072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:24:38 np0005465604 nova_compute[260603]: 2025-10-02 09:24:38.449 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:24:38 np0005465604 nova_compute[260603]: 2025-10-02 09:24:38.475 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:24:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct  2 05:24:38 np0005465604 nova_compute[260603]: 2025-10-02 09:24:38.481 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:24:38 np0005465604 nova_compute[260603]: 2025-10-02 09:24:38.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 05:24:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066405845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 05:24:38 np0005465604 nova_compute[260603]: 2025-10-02 09:24:38.960 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:24:38 np0005465604 nova_compute[260603]: 2025-10-02 09:24:38.961 2 DEBUG nova.objects.instance [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29e12e23-cd78-4748-a814-e030401a2d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 05:24:39 np0005465604 nova_compute[260603]: 2025-10-02 09:24:39.107 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] End _get_guest_xml xml=<domain type="kvm">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  <uuid>29e12e23-cd78-4748-a814-e030401a2d37</uuid>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  <name>instance-00000099</name>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  <memory>131072</memory>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  <vcpu>1</vcpu>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  <metadata>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <nova:name>tempest-AggregatesAdminTestJSON-server-1040576008</nova:name>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <nova:creationTime>2025-10-02 09:24:37</nova:creationTime>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <nova:flavor name="m1.nano">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:        <nova:memory>128</nova:memory>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:        <nova:disk>1</nova:disk>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:        <nova:swap>0</nova:swap>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:        <nova:vcpus>1</nova:vcpus>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      </nova:flavor>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <nova:owner>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:        <nova:user uuid="ab1bff8a265e4dc48c8c8ab958df08cd">tempest-AggregatesAdminTestJSON-1484718099-project-member</nova:user>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:        <nova:project uuid="6d90e7dd32004462a3f1becf8eeda717">tempest-AggregatesAdminTestJSON-1484718099</nova:project>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      </nova:owner>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <nova:root type="image" uuid="420393e6-d62b-4055-afb9-674967e2c2b0"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <nova:ports/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    </nova:instance>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  </metadata>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  <sysinfo type="smbios">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <system>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <entry name="manufacturer">RDO</entry>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <entry name="product">OpenStack Compute</entry>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <entry name="serial">29e12e23-cd78-4748-a814-e030401a2d37</entry>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <entry name="uuid">29e12e23-cd78-4748-a814-e030401a2d37</entry>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <entry name="family">Virtual Machine</entry>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    </system>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  </sysinfo>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  <os>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <boot dev="hd"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <smbios mode="sysinfo"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  </os>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  <features>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <acpi/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <apic/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <vmcoreinfo/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  </features>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  <clock offset="utc">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <timer name="hpet" present="no"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  </clock>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  <cpu mode="host-model" match="exact">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  </cpu>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  <devices>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <disk type="network" device="disk">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/29e12e23-cd78-4748-a814-e030401a2d37_disk">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <target dev="vda" bus="virtio"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <disk type="network" device="cdrom">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <driver type="raw" cache="none"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <source protocol="rbd" name="vms/29e12e23-cd78-4748-a814-e030401a2d37_disk.config">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:        <host name="192.168.122.100" port="6789"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      </source>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <auth username="openstack">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:        <secret type="ceph" uuid="a52e644f-f702-594c-a648-813e3e0df2b1"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      </auth>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <target dev="sda" bus="sata"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    </disk>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <serial type="pty">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <log file="/var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/console.log" append="off"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    </serial>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <video>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <model type="virtio"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    </video>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <input type="tablet" bus="usb"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <rng model="virtio">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <backend model="random">/dev/urandom</backend>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    </rng>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <controller type="usb" index="0"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    <memballoon model="virtio">
Oct  2 05:24:39 np0005465604 nova_compute[260603]:      <stats period="10"/>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:    </memballoon>
Oct  2 05:24:39 np0005465604 nova_compute[260603]:  </devices>
Oct  2 05:24:39 np0005465604 nova_compute[260603]: </domain>
Oct  2 05:24:39 np0005465604 nova_compute[260603]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 05:24:39 np0005465604 nova_compute[260603]: 2025-10-02 09:24:39.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:24:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:24:39 np0005465604 nova_compute[260603]: 2025-10-02 09:24:39.585 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 05:24:39 np0005465604 nova_compute[260603]: 2025-10-02 09:24:39.585 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 05:24:39 np0005465604 nova_compute[260603]: 2025-10-02 09:24:39.586 2 INFO nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Using config drive
Oct  2 05:24:39 np0005465604 nova_compute[260603]: 2025-10-02 09:24:39.616 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:24:40 np0005465604 podman[441664]: 2025-10-02 09:24:40.009641932 +0000 UTC m=+0.068513201 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:24:40 np0005465604 podman[441663]: 2025-10-02 09:24:40.079988789 +0000 UTC m=+0.142837193 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:24:40 np0005465604 nova_compute[260603]: 2025-10-02 09:24:40.212 2 INFO nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Creating config drive at /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config
Oct  2 05:24:40 np0005465604 nova_compute[260603]: 2025-10-02 09:24:40.222 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvctzr9u1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:24:40 np0005465604 nova_compute[260603]: 2025-10-02 09:24:40.409 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvctzr9u1" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:24:40 np0005465604 nova_compute[260603]: 2025-10-02 09:24:40.453 2 DEBUG nova.storage.rbd_utils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] rbd image 29e12e23-cd78-4748-a814-e030401a2d37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 05:24:40 np0005465604 nova_compute[260603]: 2025-10-02 09:24:40.460 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config 29e12e23-cd78-4748-a814-e030401a2d37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 05:24:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct  2 05:24:40 np0005465604 nova_compute[260603]: 2025-10-02 09:24:40.524 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:24:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:40 np0005465604 nova_compute[260603]: 2025-10-02 09:24:40.660 2 DEBUG oslo_concurrency.processutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config 29e12e23-cd78-4748-a814-e030401a2d37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:24:40 np0005465604 nova_compute[260603]: 2025-10-02 09:24:40.661 2 INFO nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Deleting local config drive /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37/disk.config because it was imported into RBD.#033[00m
Oct  2 05:24:40 np0005465604 systemd[1]: Starting libvirt secret daemon...
Oct  2 05:24:40 np0005465604 systemd[1]: Started libvirt secret daemon.
Oct  2 05:24:40 np0005465604 systemd-machined[214636]: New machine qemu-187-instance-00000099.
Oct  2 05:24:40 np0005465604 systemd[1]: Started Virtual Machine qemu-187-instance-00000099.
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.825 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759397081.8255632, 29e12e23-cd78-4748-a814-e030401a2d37 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.828 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] VM Resumed (Lifecycle Event)#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.830 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.831 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.834 2 INFO nova.virt.libvirt.driver [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance spawned successfully.#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.835 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.909 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.914 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.994 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.994 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.995 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.995 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.995 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:24:41 np0005465604 nova_compute[260603]: 2025-10-02 09:24:41.996 2 DEBUG nova.virt.libvirt.driver [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 05:24:42 np0005465604 nova_compute[260603]: 2025-10-02 09:24:42.057 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:24:42 np0005465604 nova_compute[260603]: 2025-10-02 09:24:42.058 2 DEBUG nova.virt.driver [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] Emitting event <LifecycleEvent: 1759397081.8279276, 29e12e23-cd78-4748-a814-e030401a2d37 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 05:24:42 np0005465604 nova_compute[260603]: 2025-10-02 09:24:42.059 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] VM Started (Lifecycle Event)#033[00m
Oct  2 05:24:42 np0005465604 nova_compute[260603]: 2025-10-02 09:24:42.180 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:24:42 np0005465604 nova_compute[260603]: 2025-10-02 09:24:42.184 2 DEBUG nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 05:24:42 np0005465604 nova_compute[260603]: 2025-10-02 09:24:42.249 2 INFO nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Took 6.03 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 05:24:42 np0005465604 nova_compute[260603]: 2025-10-02 09:24:42.250 2 DEBUG nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 05:24:42 np0005465604 nova_compute[260603]: 2025-10-02 09:24:42.291 2 INFO nova.compute.manager [None req-19fb5a25-c17b-4847-9286-92ee2281d585 - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 05:24:42 np0005465604 nova_compute[260603]: 2025-10-02 09:24:42.414 2 INFO nova.compute.manager [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Took 8.71 seconds to build instance.#033[00m
Oct  2 05:24:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  2 05:24:42 np0005465604 nova_compute[260603]: 2025-10-02 09:24:42.625 2 DEBUG oslo_concurrency.lockutils [None req-4473f920-5f9f-4715-9756-4fabf3aa7680 ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:24:43 np0005465604 nova_compute[260603]: 2025-10-02 09:24:43.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:24:44 np0005465604 podman[441821]: 2025-10-02 09:24:44.004658815 +0000 UTC m=+0.066964193 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 05:24:44 np0005465604 podman[441820]: 2025-10-02 09:24:44.015934357 +0000 UTC m=+0.078907366 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 05:24:44 np0005465604 nova_compute[260603]: 2025-10-02 09:24:44.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:24:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct  2 05:24:45 np0005465604 nova_compute[260603]: 2025-10-02 09:24:45.329 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "29e12e23-cd78-4748-a814-e030401a2d37" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:24:45 np0005465604 nova_compute[260603]: 2025-10-02 09:24:45.331 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:24:45 np0005465604 nova_compute[260603]: 2025-10-02 09:24:45.332 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "29e12e23-cd78-4748-a814-e030401a2d37-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:24:45 np0005465604 nova_compute[260603]: 2025-10-02 09:24:45.332 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:24:45 np0005465604 nova_compute[260603]: 2025-10-02 09:24:45.333 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:24:45 np0005465604 nova_compute[260603]: 2025-10-02 09:24:45.335 2 INFO nova.compute.manager [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Terminating instance#033[00m
Oct  2 05:24:45 np0005465604 nova_compute[260603]: 2025-10-02 09:24:45.336 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "refresh_cache-29e12e23-cd78-4748-a814-e030401a2d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 05:24:45 np0005465604 nova_compute[260603]: 2025-10-02 09:24:45.337 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquired lock "refresh_cache-29e12e23-cd78-4748-a814-e030401a2d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 05:24:45 np0005465604 nova_compute[260603]: 2025-10-02 09:24:45.338 2 DEBUG nova.network.neutron [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 05:24:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:45 np0005465604 nova_compute[260603]: 2025-10-02 09:24:45.676 2 DEBUG nova.network.neutron [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:24:45 np0005465604 nova_compute[260603]: 2025-10-02 09:24:45.907 2 DEBUG nova.network.neutron [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:24:46 np0005465604 nova_compute[260603]: 2025-10-02 09:24:46.131 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Releasing lock "refresh_cache-29e12e23-cd78-4748-a814-e030401a2d37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 05:24:46 np0005465604 nova_compute[260603]: 2025-10-02 09:24:46.132 2 DEBUG nova.compute.manager [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 05:24:46 np0005465604 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Deactivated successfully.
Oct  2 05:24:46 np0005465604 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Consumed 5.316s CPU time.
Oct  2 05:24:46 np0005465604 systemd-machined[214636]: Machine qemu-187-instance-00000099 terminated.
Oct  2 05:24:46 np0005465604 nova_compute[260603]: 2025-10-02 09:24:46.366 2 INFO nova.virt.libvirt.driver [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance destroyed successfully.#033[00m
Oct  2 05:24:46 np0005465604 nova_compute[260603]: 2025-10-02 09:24:46.367 2 DEBUG nova.objects.instance [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lazy-loading 'resources' on Instance uuid 29e12e23-cd78-4748-a814-e030401a2d37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 05:24:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct  2 05:24:47 np0005465604 nova_compute[260603]: 2025-10-02 09:24:47.603 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:24:47 np0005465604 nova_compute[260603]: 2025-10-02 09:24:47.604 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 05:24:48 np0005465604 nova_compute[260603]: 2025-10-02 09:24:48.409 2 INFO nova.virt.libvirt.driver [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Deleting instance files /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37_del#033[00m
Oct  2 05:24:48 np0005465604 nova_compute[260603]: 2025-10-02 09:24:48.410 2 INFO nova.virt.libvirt.driver [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Deletion of /var/lib/nova/instances/29e12e23-cd78-4748-a814-e030401a2d37_del complete#033[00m
Oct  2 05:24:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 63 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Oct  2 05:24:48 np0005465604 nova_compute[260603]: 2025-10-02 09:24:48.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:24:48 np0005465604 nova_compute[260603]: 2025-10-02 09:24:48.620 2 INFO nova.compute.manager [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Took 2.49 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 05:24:48 np0005465604 nova_compute[260603]: 2025-10-02 09:24:48.620 2 DEBUG oslo.service.loopingcall [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 05:24:48 np0005465604 nova_compute[260603]: 2025-10-02 09:24:48.621 2 DEBUG nova.compute.manager [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 05:24:48 np0005465604 nova_compute[260603]: 2025-10-02 09:24:48.621 2 DEBUG nova.network.neutron [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 05:24:48 np0005465604 nova_compute[260603]: 2025-10-02 09:24:48.714 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:24:48 np0005465604 nova_compute[260603]: 2025-10-02 09:24:48.714 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:24:49 np0005465604 nova_compute[260603]: 2025-10-02 09:24:49.225 2 DEBUG nova.network.neutron [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 05:24:49 np0005465604 nova_compute[260603]: 2025-10-02 09:24:49.253 2 DEBUG nova.network.neutron [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 05:24:49 np0005465604 nova_compute[260603]: 2025-10-02 09:24:49.330 2 INFO nova.compute.manager [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Took 0.71 seconds to deallocate network for instance.#033[00m
Oct  2 05:24:49 np0005465604 nova_compute[260603]: 2025-10-02 09:24:49.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:24:49 np0005465604 nova_compute[260603]: 2025-10-02 09:24:49.451 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:24:49 np0005465604 nova_compute[260603]: 2025-10-02 09:24:49.452 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:24:49 np0005465604 nova_compute[260603]: 2025-10-02 09:24:49.517 2 DEBUG oslo_concurrency.processutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:24:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:24:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445335509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:24:50 np0005465604 nova_compute[260603]: 2025-10-02 09:24:50.054 2 DEBUG oslo_concurrency.processutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 05:24:50 np0005465604 nova_compute[260603]: 2025-10-02 09:24:50.061 2 DEBUG nova.compute.provider_tree [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 05:24:50 np0005465604 nova_compute[260603]: 2025-10-02 09:24:50.093 2 DEBUG nova.scheduler.client.report [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 05:24:50 np0005465604 nova_compute[260603]: 2025-10-02 09:24:50.267 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:24:50 np0005465604 nova_compute[260603]: 2025-10-02 09:24:50.423 2 INFO nova.scheduler.client.report [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Deleted allocations for instance 29e12e23-cd78-4748-a814-e030401a2d37
Oct  2 05:24:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 63 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Oct  2 05:24:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:50 np0005465604 nova_compute[260603]: 2025-10-02 09:24:50.634 2 DEBUG oslo_concurrency.lockutils [None req-b585bff0-95bc-4175-abdf-ee3ad951b7aa ab1bff8a265e4dc48c8c8ab958df08cd 6d90e7dd32004462a3f1becf8eeda717 - - default default] Lock "29e12e23-cd78-4748-a814-e030401a2d37" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 05:24:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct  2 05:24:53 np0005465604 nova_compute[260603]: 2025-10-02 09:24:53.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:54 np0005465604 nova_compute[260603]: 2025-10-02 09:24:54.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Oct  2 05:24:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:24:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Oct  2 05:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:24:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:24:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Oct  2 05:24:58 np0005465604 nova_compute[260603]: 2025-10-02 09:24:58.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:24:59 np0005465604 nova_compute[260603]: 2025-10-02 09:24:59.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:25:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 767 B/s wr, 13 op/s
Oct  2 05:25:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:01 np0005465604 nova_compute[260603]: 2025-10-02 09:25:01.364 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759397086.3620563, 29e12e23-cd78-4748-a814-e030401a2d37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 05:25:01 np0005465604 nova_compute[260603]: 2025-10-02 09:25:01.364 2 INFO nova.compute.manager [-] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] VM Stopped (Lifecycle Event)
Oct  2 05:25:01 np0005465604 nova_compute[260603]: 2025-10-02 09:25:01.543 2 DEBUG nova.compute.manager [None req-72f35939-edef-402d-a214-c2cfe84d200c - - - - - -] [instance: 29e12e23-cd78-4748-a814-e030401a2d37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 05:25:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 767 B/s wr, 13 op/s
Oct  2 05:25:03 np0005465604 nova_compute[260603]: 2025-10-02 09:25:03.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:25:04 np0005465604 nova_compute[260603]: 2025-10-02 09:25:04.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:25:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct  2 05:25:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:25:04.745 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 05:25:04 np0005465604 nova_compute[260603]: 2025-10-02 09:25:04.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:25:04 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:25:04.746 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:25:05 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 347523e2-ca60-4462-a977-1328a2d99beb does not exist
Oct  2 05:25:05 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 73dbe6ce-4e7b-456c-9803-ab6388baff46 does not exist
Oct  2 05:25:05 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 97df788d-0c4f-490d-a7bc-04a943daa1ad does not exist
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:25:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:25:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:06 np0005465604 podman[442178]: 2025-10-02 09:25:06.661024096 +0000 UTC m=+0.063184595 container create 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 05:25:06 np0005465604 systemd[1]: Started libpod-conmon-50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91.scope.
Oct  2 05:25:06 np0005465604 podman[442178]: 2025-10-02 09:25:06.629405228 +0000 UTC m=+0.031565777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:25:06 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:25:06 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:25:06 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:25:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:25:06 np0005465604 podman[442178]: 2025-10-02 09:25:06.752567785 +0000 UTC m=+0.154728314 container init 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:25:06 np0005465604 podman[442178]: 2025-10-02 09:25:06.766008805 +0000 UTC m=+0.168169304 container start 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:25:06 np0005465604 podman[442178]: 2025-10-02 09:25:06.769253577 +0000 UTC m=+0.171414076 container attach 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct  2 05:25:06 np0005465604 systemd[1]: libpod-50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91.scope: Deactivated successfully.
Oct  2 05:25:06 np0005465604 competent_noyce[442194]: 167 167
Oct  2 05:25:06 np0005465604 conmon[442194]: conmon 50715c37edc0ab4a22c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91.scope/container/memory.events
Oct  2 05:25:06 np0005465604 podman[442178]: 2025-10-02 09:25:06.776989558 +0000 UTC m=+0.179150067 container died 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 05:25:06 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5153923d708fe883bb935d56c9de8f13cdbee3308c41d5ceb0e9863b212ca02d-merged.mount: Deactivated successfully.
Oct  2 05:25:06 np0005465604 podman[442178]: 2025-10-02 09:25:06.818585257 +0000 UTC m=+0.220745756 container remove 50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 05:25:06 np0005465604 systemd[1]: libpod-conmon-50715c37edc0ab4a22c739baed82aa6d773875a3dfe126bd8122e424d347ee91.scope: Deactivated successfully.
Oct  2 05:25:07 np0005465604 podman[442216]: 2025-10-02 09:25:07.001645325 +0000 UTC m=+0.046210934 container create 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct  2 05:25:07 np0005465604 systemd[1]: Started libpod-conmon-14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac.scope.
Oct  2 05:25:07 np0005465604 podman[442216]: 2025-10-02 09:25:06.98259528 +0000 UTC m=+0.027160899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:25:07 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:25:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:07 np0005465604 podman[442216]: 2025-10-02 09:25:07.110506165 +0000 UTC m=+0.155071774 container init 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 05:25:07 np0005465604 podman[442216]: 2025-10-02 09:25:07.120442396 +0000 UTC m=+0.165007995 container start 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 05:25:07 np0005465604 podman[442216]: 2025-10-02 09:25:07.124165922 +0000 UTC m=+0.168731521 container attach 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 05:25:08 np0005465604 thirsty_almeida[442233]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:25:08 np0005465604 thirsty_almeida[442233]: --> relative data size: 1.0
Oct  2 05:25:08 np0005465604 thirsty_almeida[442233]: --> All data devices are unavailable
Oct  2 05:25:08 np0005465604 systemd[1]: libpod-14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac.scope: Deactivated successfully.
Oct  2 05:25:08 np0005465604 systemd[1]: libpod-14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac.scope: Consumed 1.017s CPU time.
Oct  2 05:25:08 np0005465604 podman[442216]: 2025-10-02 09:25:08.196102623 +0000 UTC m=+1.240668222 container died 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 05:25:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-10b196ebfa6bccbb85fc7c3732d20cbdda353b4662e81fc789dc4cce8c6c5e63-merged.mount: Deactivated successfully.
Oct  2 05:25:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:08 np0005465604 nova_compute[260603]: 2025-10-02 09:25:08.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:25:08 np0005465604 podman[442216]: 2025-10-02 09:25:08.593318621 +0000 UTC m=+1.637884240 container remove 14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:25:08 np0005465604 systemd[1]: libpod-conmon-14d6cd877ea4c098c52ed2224023f4ef0160937ef4eae31368a9535632632bac.scope: Deactivated successfully.
Oct  2 05:25:09 np0005465604 podman[442417]: 2025-10-02 09:25:09.198191553 +0000 UTC m=+0.039080951 container create 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:25:09 np0005465604 systemd[1]: Started libpod-conmon-3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b.scope.
Oct  2 05:25:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:25:09 np0005465604 podman[442417]: 2025-10-02 09:25:09.181234394 +0000 UTC m=+0.022123822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:25:09 np0005465604 podman[442417]: 2025-10-02 09:25:09.280097292 +0000 UTC m=+0.120986730 container init 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:25:09 np0005465604 podman[442417]: 2025-10-02 09:25:09.287927917 +0000 UTC m=+0.128817335 container start 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 05:25:09 np0005465604 podman[442417]: 2025-10-02 09:25:09.291671574 +0000 UTC m=+0.132561002 container attach 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 05:25:09 np0005465604 romantic_turing[442434]: 167 167
Oct  2 05:25:09 np0005465604 systemd[1]: libpod-3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b.scope: Deactivated successfully.
Oct  2 05:25:09 np0005465604 podman[442417]: 2025-10-02 09:25:09.295300557 +0000 UTC m=+0.136189965 container died 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 05:25:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-92452794eef8ff848e6ec735ed098783a27cef0539277850f67f21f8ee713a36-merged.mount: Deactivated successfully.
Oct  2 05:25:09 np0005465604 podman[442417]: 2025-10-02 09:25:09.328287527 +0000 UTC m=+0.169176925 container remove 3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:25:09 np0005465604 systemd[1]: libpod-conmon-3881c5d76d905716592e7e920d35656f020375abc2c3ddda4b4cf25808d2cb7b.scope: Deactivated successfully.
Oct  2 05:25:09 np0005465604 nova_compute[260603]: 2025-10-02 09:25:09.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:09 np0005465604 podman[442456]: 2025-10-02 09:25:09.494011393 +0000 UTC m=+0.039903307 container create 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:25:09 np0005465604 systemd[1]: Started libpod-conmon-499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644.scope.
Oct  2 05:25:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:25:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebd393b95da0b8cf43abf9a74b92756975708042f606e7d16ed02f7b5a8d50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebd393b95da0b8cf43abf9a74b92756975708042f606e7d16ed02f7b5a8d50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebd393b95da0b8cf43abf9a74b92756975708042f606e7d16ed02f7b5a8d50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ebd393b95da0b8cf43abf9a74b92756975708042f606e7d16ed02f7b5a8d50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:09 np0005465604 podman[442456]: 2025-10-02 09:25:09.561578203 +0000 UTC m=+0.107470127 container init 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 05:25:09 np0005465604 podman[442456]: 2025-10-02 09:25:09.567646313 +0000 UTC m=+0.113538227 container start 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:25:09 np0005465604 podman[442456]: 2025-10-02 09:25:09.570245194 +0000 UTC m=+0.116137108 container attach 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 05:25:09 np0005465604 podman[442456]: 2025-10-02 09:25:09.476719083 +0000 UTC m=+0.022610997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]: {
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:    "0": [
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:        {
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "devices": [
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "/dev/loop3"
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            ],
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_name": "ceph_lv0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_size": "21470642176",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "name": "ceph_lv0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "tags": {
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.cluster_name": "ceph",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.crush_device_class": "",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.encrypted": "0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.osd_id": "0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.type": "block",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.vdo": "0"
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            },
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "type": "block",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "vg_name": "ceph_vg0"
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:        }
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:    ],
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:    "1": [
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:        {
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "devices": [
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "/dev/loop4"
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            ],
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_name": "ceph_lv1",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_size": "21470642176",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "name": "ceph_lv1",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "tags": {
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.cluster_name": "ceph",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.crush_device_class": "",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.encrypted": "0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.osd_id": "1",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.type": "block",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.vdo": "0"
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            },
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "type": "block",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "vg_name": "ceph_vg1"
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:        }
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:    ],
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:    "2": [
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:        {
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "devices": [
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "/dev/loop5"
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            ],
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_name": "ceph_lv2",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_size": "21470642176",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "name": "ceph_lv2",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "tags": {
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.cluster_name": "ceph",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.crush_device_class": "",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.encrypted": "0",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.osd_id": "2",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.type": "block",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:                "ceph.vdo": "0"
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            },
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "type": "block",
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:            "vg_name": "ceph_vg2"
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:        }
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]:    ]
Oct  2 05:25:10 np0005465604 amazing_mendeleev[442472]: }
Oct  2 05:25:10 np0005465604 systemd[1]: libpod-499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644.scope: Deactivated successfully.
Oct  2 05:25:10 np0005465604 podman[442456]: 2025-10-02 09:25:10.354114678 +0000 UTC m=+0.900006592 container died 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:25:10 np0005465604 systemd[1]: var-lib-containers-storage-overlay-69ebd393b95da0b8cf43abf9a74b92756975708042f606e7d16ed02f7b5a8d50-merged.mount: Deactivated successfully.
Oct  2 05:25:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:10 np0005465604 podman[442456]: 2025-10-02 09:25:10.508364556 +0000 UTC m=+1.054256470 container remove 499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:25:10 np0005465604 systemd[1]: libpod-conmon-499ef3c88ed1f47a82d381aa739e77a11b1120fb19b50c92f284711eebb86644.scope: Deactivated successfully.
Oct  2 05:25:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:10 np0005465604 podman[442489]: 2025-10-02 09:25:10.704587522 +0000 UTC m=+0.310458855 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:25:10 np0005465604 podman[442482]: 2025-10-02 09:25:10.721668364 +0000 UTC m=+0.332338277 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 05:25:11 np0005465604 podman[442674]: 2025-10-02 09:25:11.28358604 +0000 UTC m=+0.061538133 container create 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:25:11 np0005465604 podman[442674]: 2025-10-02 09:25:11.246898354 +0000 UTC m=+0.024850537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:25:11 np0005465604 systemd[1]: Started libpod-conmon-5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab.scope.
Oct  2 05:25:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:25:11 np0005465604 podman[442674]: 2025-10-02 09:25:11.454344324 +0000 UTC m=+0.232296497 container init 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 05:25:11 np0005465604 podman[442674]: 2025-10-02 09:25:11.463706166 +0000 UTC m=+0.241658299 container start 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:25:11 np0005465604 tender_lewin[442691]: 167 167
Oct  2 05:25:11 np0005465604 systemd[1]: libpod-5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab.scope: Deactivated successfully.
Oct  2 05:25:11 np0005465604 podman[442674]: 2025-10-02 09:25:11.487315684 +0000 UTC m=+0.265267807 container attach 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:25:11 np0005465604 podman[442674]: 2025-10-02 09:25:11.487882412 +0000 UTC m=+0.265834535 container died 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:25:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay-111b93b37ae9e3d83d2f79b16ee4ac80c1f50310ad14f3732486ca05601272c1-merged.mount: Deactivated successfully.
Oct  2 05:25:11 np0005465604 podman[442674]: 2025-10-02 09:25:11.554002997 +0000 UTC m=+0.331955090 container remove 5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 05:25:11 np0005465604 systemd[1]: libpod-conmon-5ba4d85593919a6647e2be5a6de7b0e2a96c00377479e683d6be0e45dd69ecab.scope: Deactivated successfully.
Oct  2 05:25:11 np0005465604 podman[442716]: 2025-10-02 09:25:11.741203544 +0000 UTC m=+0.043296333 container create 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:25:11 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:25:11.749 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:25:11 np0005465604 systemd[1]: Started libpod-conmon-9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa.scope.
Oct  2 05:25:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:25:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a457c57dffd50c2cc7d345271e1ec1d2b96169864a129816ff0b015d4b69486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:11 np0005465604 podman[442716]: 2025-10-02 09:25:11.721210459 +0000 UTC m=+0.023303258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:25:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a457c57dffd50c2cc7d345271e1ec1d2b96169864a129816ff0b015d4b69486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a457c57dffd50c2cc7d345271e1ec1d2b96169864a129816ff0b015d4b69486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a457c57dffd50c2cc7d345271e1ec1d2b96169864a129816ff0b015d4b69486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:25:11 np0005465604 podman[442716]: 2025-10-02 09:25:11.836382557 +0000 UTC m=+0.138475446 container init 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 05:25:11 np0005465604 podman[442716]: 2025-10-02 09:25:11.844400467 +0000 UTC m=+0.146493256 container start 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:25:11 np0005465604 podman[442716]: 2025-10-02 09:25:11.848198136 +0000 UTC m=+0.150291015 container attach 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:25:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]: {
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "osd_id": 2,
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "type": "bluestore"
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:    },
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "osd_id": 1,
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "type": "bluestore"
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:    },
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "osd_id": 0,
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:        "type": "bluestore"
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]:    }
Oct  2 05:25:12 np0005465604 intelligent_hodgkin[442733]: }
Oct  2 05:25:12 np0005465604 systemd[1]: libpod-9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa.scope: Deactivated successfully.
Oct  2 05:25:12 np0005465604 podman[442716]: 2025-10-02 09:25:12.823128947 +0000 UTC m=+1.125221756 container died 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:25:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6a457c57dffd50c2cc7d345271e1ec1d2b96169864a129816ff0b015d4b69486-merged.mount: Deactivated successfully.
Oct  2 05:25:12 np0005465604 podman[442716]: 2025-10-02 09:25:12.870990622 +0000 UTC m=+1.173083451 container remove 9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hodgkin, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:25:12 np0005465604 systemd[1]: libpod-conmon-9a06e24f55e0b9b491d630810080fb42c5d424f84fefe7c50f0757978099caaa.scope: Deactivated successfully.
Oct  2 05:25:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:25:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:25:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:25:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:25:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 09f8f95b-56eb-451b-821c-4bb855128451 does not exist
Oct  2 05:25:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b14efd20-43b0-4e9d-8ef5-ac2cec181cc2 does not exist
Oct  2 05:25:13 np0005465604 nova_compute[260603]: 2025-10-02 09:25:13.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:25:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:25:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:25:13 np0005465604 nova_compute[260603]: 2025-10-02 09:25:13.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:14 np0005465604 nova_compute[260603]: 2025-10-02 09:25:14.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:14 np0005465604 podman[442831]: 2025-10-02 09:25:14.996308946 +0000 UTC m=+0.063092521 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 05:25:14 np0005465604 podman[442830]: 2025-10-02 09:25:14.99640699 +0000 UTC m=+0.061684178 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:25:15 np0005465604 nova_compute[260603]: 2025-10-02 09:25:15.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:25:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:16 np0005465604 nova_compute[260603]: 2025-10-02 09:25:16.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:25:16 np0005465604 nova_compute[260603]: 2025-10-02 09:25:16.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:25:16 np0005465604 nova_compute[260603]: 2025-10-02 09:25:16.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:25:16 np0005465604 nova_compute[260603]: 2025-10-02 09:25:16.764 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:25:17 np0005465604 nova_compute[260603]: 2025-10-02 09:25:17.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:25:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:18 np0005465604 nova_compute[260603]: 2025-10-02 09:25:18.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct  2 05:25:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct  2 05:25:18 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct  2 05:25:19 np0005465604 nova_compute[260603]: 2025-10-02 09:25:19.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:19 np0005465604 nova_compute[260603]: 2025-10-02 09:25:19.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:25:19 np0005465604 nova_compute[260603]: 2025-10-02 09:25:19.687 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:25:19 np0005465604 nova_compute[260603]: 2025-10-02 09:25:19.687 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:25:19 np0005465604 nova_compute[260603]: 2025-10-02 09:25:19.687 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:25:19 np0005465604 nova_compute[260603]: 2025-10-02 09:25:19.688 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:25:19 np0005465604 nova_compute[260603]: 2025-10-02 09:25:19.688 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:25:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:25:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454194714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.156 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.313 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.315 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3517MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.315 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.315 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:25:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.865 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.866 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.878 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.899 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.900 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.914 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.934 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 05:25:20 np0005465604 nova_compute[260603]: 2025-10-02 09:25:20.952 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:25:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:25:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/87705783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:25:21 np0005465604 nova_compute[260603]: 2025-10-02 09:25:21.372 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:25:21 np0005465604 nova_compute[260603]: 2025-10-02 09:25:21.377 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:25:21 np0005465604 nova_compute[260603]: 2025-10-02 09:25:21.635 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:25:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:25:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1230208095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:25:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:25:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1230208095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:25:22 np0005465604 nova_compute[260603]: 2025-10-02 09:25:22.222 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:25:22 np0005465604 nova_compute[260603]: 2025-10-02 09:25:22.223 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:25:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct  2 05:25:23 np0005465604 nova_compute[260603]: 2025-10-02 09:25:23.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:24 np0005465604 nova_compute[260603]: 2025-10-02 09:25:24.219 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:25:24 np0005465604 nova_compute[260603]: 2025-10-02 09:25:24.219 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:25:24 np0005465604 nova_compute[260603]: 2025-10-02 09:25:24.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  2 05:25:24 np0005465604 nova_compute[260603]: 2025-10-02 09:25:24.513 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:25:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct  2 05:25:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct  2 05:25:25 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct  2 05:25:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 41 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct  2 05:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:25:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:25:28
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'vms', '.mgr', 'images', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta']
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct  2 05:25:28 np0005465604 nova_compute[260603]: 2025-10-02 09:25:28.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:25:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:25:29 np0005465604 nova_compute[260603]: 2025-10-02 09:25:29.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  2 05:25:30 np0005465604 nova_compute[260603]: 2025-10-02 09:25:30.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:25:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Oct  2 05:25:33 np0005465604 nova_compute[260603]: 2025-10-02 09:25:33.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:34 np0005465604 nova_compute[260603]: 2025-10-02 09:25:34.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:25:34.864 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:25:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:25:34.864 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:25:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:25:34.865 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:25:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:38 np0005465604 nova_compute[260603]: 2025-10-02 09:25:38.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:25:39 np0005465604 nova_compute[260603]: 2025-10-02 09:25:39.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:25:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:25:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:41 np0005465604 podman[442918]: 2025-10-02 09:25:41.003333767 +0000 UTC m=+0.065121065 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 05:25:41 np0005465604 podman[442917]: 2025-10-02 09:25:41.027500362 +0000 UTC m=+0.094511373 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 05:25:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:43 np0005465604 nova_compute[260603]: 2025-10-02 09:25:43.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:44 np0005465604 nova_compute[260603]: 2025-10-02 09:25:44.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:46 np0005465604 podman[442964]: 2025-10-02 09:25:46.010643338 +0000 UTC m=+0.080436304 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:25:46 np0005465604 podman[442965]: 2025-10-02 09:25:46.054661433 +0000 UTC m=+0.108642864 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 05:25:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:48 np0005465604 nova_compute[260603]: 2025-10-02 09:25:48.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:49 np0005465604 nova_compute[260603]: 2025-10-02 09:25:49.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:49 np0005465604 nova_compute[260603]: 2025-10-02 09:25:49.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:25:49 np0005465604 nova_compute[260603]: 2025-10-02 09:25:49.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:25:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:53 np0005465604 nova_compute[260603]: 2025-10-02 09:25:53.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:54 np0005465604 nova_compute[260603]: 2025-10-02 09:25:54.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:25:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:25:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:25:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:25:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:25:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:25:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:25:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:25:58 np0005465604 nova_compute[260603]: 2025-10-02 09:25:58.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:25:59 np0005465604 nova_compute[260603]: 2025-10-02 09:25:59.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:03 np0005465604 nova_compute[260603]: 2025-10-02 09:26:03.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:04 np0005465604 nova_compute[260603]: 2025-10-02 09:26:04.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:08 np0005465604 nova_compute[260603]: 2025-10-02 09:26:08.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:09 np0005465604 nova_compute[260603]: 2025-10-02 09:26:09.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:12 np0005465604 podman[443001]: 2025-10-02 09:26:12.01947034 +0000 UTC m=+0.078828913 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 05:26:12 np0005465604 podman[443002]: 2025-10-02 09:26:12.022736102 +0000 UTC m=+0.066999614 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 05:26:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:13 np0005465604 nova_compute[260603]: 2025-10-02 09:26:13.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:26:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 05c4c6d0-e724-4615-96a5-67bc7eaa40b2 does not exist
Oct  2 05:26:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 571d8288-5313-41a1-82e1-71c46312fea9 does not exist
Oct  2 05:26:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 28bfeddb-a8b0-474a-b477-5c15b5e423f4 does not exist
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:26:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:26:14 np0005465604 nova_compute[260603]: 2025-10-02 09:26:14.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:14 np0005465604 podman[443318]: 2025-10-02 09:26:14.352830031 +0000 UTC m=+0.023047620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:26:14 np0005465604 podman[443318]: 2025-10-02 09:26:14.498291524 +0000 UTC m=+0.168509133 container create 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 05:26:14 np0005465604 nova_compute[260603]: 2025-10-02 09:26:14.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:26:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:14 np0005465604 systemd[1]: Started libpod-conmon-647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2.scope.
Oct  2 05:26:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:26:14 np0005465604 podman[443318]: 2025-10-02 09:26:14.672507196 +0000 UTC m=+0.342724785 container init 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 05:26:14 np0005465604 podman[443318]: 2025-10-02 09:26:14.682261831 +0000 UTC m=+0.352479410 container start 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:26:14 np0005465604 nervous_brown[443335]: 167 167
Oct  2 05:26:14 np0005465604 systemd[1]: libpod-647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2.scope: Deactivated successfully.
Oct  2 05:26:14 np0005465604 podman[443318]: 2025-10-02 09:26:14.712501585 +0000 UTC m=+0.382719174 container attach 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:26:14 np0005465604 podman[443318]: 2025-10-02 09:26:14.713846117 +0000 UTC m=+0.384063716 container died 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:26:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1477374622a5ffa6af3b166e5a2dacdedec7328222899ba6662c454c2a0ce1e8-merged.mount: Deactivated successfully.
Oct  2 05:26:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:26:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:26:14 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:26:14 np0005465604 podman[443318]: 2025-10-02 09:26:14.970268597 +0000 UTC m=+0.640486196 container remove 647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 05:26:14 np0005465604 systemd[1]: libpod-conmon-647bac30be6be07e3053a95f16b56812e1dc829cf8e8e9c93fe175417b4be4f2.scope: Deactivated successfully.
Oct  2 05:26:15 np0005465604 podman[443361]: 2025-10-02 09:26:15.216077605 +0000 UTC m=+0.116522721 container create 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 05:26:15 np0005465604 podman[443361]: 2025-10-02 09:26:15.133307299 +0000 UTC m=+0.033752485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:26:15 np0005465604 systemd[1]: Started libpod-conmon-9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d.scope.
Oct  2 05:26:15 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:26:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:15 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:15 np0005465604 podman[443361]: 2025-10-02 09:26:15.37795698 +0000 UTC m=+0.278402186 container init 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:26:15 np0005465604 podman[443361]: 2025-10-02 09:26:15.387124237 +0000 UTC m=+0.287569343 container start 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:26:15 np0005465604 podman[443361]: 2025-10-02 09:26:15.434612381 +0000 UTC m=+0.335057497 container attach 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 05:26:15 np0005465604 nova_compute[260603]: 2025-10-02 09:26:15.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:26:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:16 np0005465604 boring_spence[443378]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:26:16 np0005465604 boring_spence[443378]: --> relative data size: 1.0
Oct  2 05:26:16 np0005465604 boring_spence[443378]: --> All data devices are unavailable
Oct  2 05:26:16 np0005465604 systemd[1]: libpod-9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d.scope: Deactivated successfully.
Oct  2 05:26:16 np0005465604 podman[443361]: 2025-10-02 09:26:16.481029865 +0000 UTC m=+1.381474971 container died 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:26:16 np0005465604 systemd[1]: libpod-9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d.scope: Consumed 1.041s CPU time.
Oct  2 05:26:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c8f6c95ff34805f68a191366630ea2063e3554c8cf9a0702a84f1644b0364c92-merged.mount: Deactivated successfully.
Oct  2 05:26:17 np0005465604 podman[443361]: 2025-10-02 09:26:17.06583535 +0000 UTC m=+1.966280446 container remove 9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 05:26:17 np0005465604 podman[443409]: 2025-10-02 09:26:17.080940682 +0000 UTC m=+0.553426247 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 05:26:17 np0005465604 podman[443416]: 2025-10-02 09:26:17.136106135 +0000 UTC m=+0.606441052 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible)
Oct  2 05:26:17 np0005465604 systemd[1]: libpod-conmon-9b1a9f11b826f4d9a2bda88ef0a109bf29114abc90c42853198d465ac3b1730d.scope: Deactivated successfully.
Oct  2 05:26:17 np0005465604 podman[443598]: 2025-10-02 09:26:17.824720254 +0000 UTC m=+0.107502359 container create ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 05:26:17 np0005465604 podman[443598]: 2025-10-02 09:26:17.738074178 +0000 UTC m=+0.020856303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:26:17 np0005465604 systemd[1]: Started libpod-conmon-ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7.scope.
Oct  2 05:26:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:26:18 np0005465604 podman[443598]: 2025-10-02 09:26:18.204530767 +0000 UTC m=+0.487312912 container init ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:26:18 np0005465604 podman[443598]: 2025-10-02 09:26:18.217619396 +0000 UTC m=+0.500401501 container start ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:26:18 np0005465604 jovial_nobel[443615]: 167 167
Oct  2 05:26:18 np0005465604 systemd[1]: libpod-ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7.scope: Deactivated successfully.
Oct  2 05:26:18 np0005465604 podman[443598]: 2025-10-02 09:26:18.267353959 +0000 UTC m=+0.550136104 container attach ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:26:18 np0005465604 podman[443598]: 2025-10-02 09:26:18.268185706 +0000 UTC m=+0.550967871 container died ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 05:26:18 np0005465604 nova_compute[260603]: 2025-10-02 09:26:18.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:26:18 np0005465604 nova_compute[260603]: 2025-10-02 09:26:18.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:26:18 np0005465604 nova_compute[260603]: 2025-10-02 09:26:18.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:26:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:18 np0005465604 nova_compute[260603]: 2025-10-02 09:26:18.545 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:26:18 np0005465604 nova_compute[260603]: 2025-10-02 09:26:18.546 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:26:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-eacbe9107d4919bb95ddc49df1080dff17cfb32b07ff31631fc5913922a51bc2-merged.mount: Deactivated successfully.
Oct  2 05:26:18 np0005465604 nova_compute[260603]: 2025-10-02 09:26:18.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:18 np0005465604 podman[443598]: 2025-10-02 09:26:18.788646012 +0000 UTC m=+1.071428117 container remove ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 05:26:18 np0005465604 systemd[1]: libpod-conmon-ae01e449a26714c52815577b302c0842cab0adb60a9d769784f3257a6c683dd7.scope: Deactivated successfully.
Oct  2 05:26:19 np0005465604 podman[443641]: 2025-10-02 09:26:18.923703241 +0000 UTC m=+0.022798954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:26:19 np0005465604 podman[443641]: 2025-10-02 09:26:19.123025606 +0000 UTC m=+0.222121309 container create 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct  2 05:26:19 np0005465604 systemd[1]: Started libpod-conmon-1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d.scope.
Oct  2 05:26:19 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:26:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b411206e35fad3038cf87ff7e8c65f55d6ec3b29a7265f0290396b4e6f96389f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b411206e35fad3038cf87ff7e8c65f55d6ec3b29a7265f0290396b4e6f96389f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b411206e35fad3038cf87ff7e8c65f55d6ec3b29a7265f0290396b4e6f96389f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:19 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b411206e35fad3038cf87ff7e8c65f55d6ec3b29a7265f0290396b4e6f96389f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:19 np0005465604 podman[443641]: 2025-10-02 09:26:19.372951812 +0000 UTC m=+0.472047555 container init 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 05:26:19 np0005465604 podman[443641]: 2025-10-02 09:26:19.386936309 +0000 UTC m=+0.486032002 container start 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 05:26:19 np0005465604 podman[443641]: 2025-10-02 09:26:19.417804443 +0000 UTC m=+0.516900196 container attach 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:26:19 np0005465604 nova_compute[260603]: 2025-10-02 09:26:19.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:20 np0005465604 gracious_jang[443657]: {
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:    "0": [
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:        {
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "devices": [
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "/dev/loop3"
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            ],
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_name": "ceph_lv0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_size": "21470642176",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "name": "ceph_lv0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "tags": {
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.cluster_name": "ceph",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.crush_device_class": "",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.encrypted": "0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.osd_id": "0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.type": "block",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.vdo": "0"
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            },
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "type": "block",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "vg_name": "ceph_vg0"
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:        }
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:    ],
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:    "1": [
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:        {
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "devices": [
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "/dev/loop4"
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            ],
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_name": "ceph_lv1",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_size": "21470642176",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "name": "ceph_lv1",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "tags": {
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.cluster_name": "ceph",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.crush_device_class": "",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.encrypted": "0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.osd_id": "1",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.type": "block",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.vdo": "0"
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            },
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "type": "block",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "vg_name": "ceph_vg1"
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:        }
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:    ],
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:    "2": [
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:        {
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "devices": [
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "/dev/loop5"
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            ],
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_name": "ceph_lv2",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_size": "21470642176",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "name": "ceph_lv2",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "tags": {
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.cluster_name": "ceph",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.crush_device_class": "",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.encrypted": "0",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.osd_id": "2",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.type": "block",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:                "ceph.vdo": "0"
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            },
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "type": "block",
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:            "vg_name": "ceph_vg2"
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:        }
Oct  2 05:26:20 np0005465604 gracious_jang[443657]:    ]
Oct  2 05:26:20 np0005465604 gracious_jang[443657]: }
Oct  2 05:26:20 np0005465604 systemd[1]: libpod-1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d.scope: Deactivated successfully.
Oct  2 05:26:20 np0005465604 podman[443641]: 2025-10-02 09:26:20.133158717 +0000 UTC m=+1.232254450 container died 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 05:26:20 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b411206e35fad3038cf87ff7e8c65f55d6ec3b29a7265f0290396b4e6f96389f-merged.mount: Deactivated successfully.
Oct  2 05:26:20 np0005465604 podman[443641]: 2025-10-02 09:26:20.184347116 +0000 UTC m=+1.283442819 container remove 1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 05:26:20 np0005465604 systemd[1]: libpod-conmon-1fef619c4bb28f5cda9c3feea451b1b8e1eb345aa8f2e124ed3f886717f4f32d.scope: Deactivated successfully.
Oct  2 05:26:20 np0005465604 nova_compute[260603]: 2025-10-02 09:26:20.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:26:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:20 np0005465604 nova_compute[260603]: 2025-10-02 09:26:20.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:26:20 np0005465604 nova_compute[260603]: 2025-10-02 09:26:20.562 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:26:20 np0005465604 nova_compute[260603]: 2025-10-02 09:26:20.562 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:26:20 np0005465604 nova_compute[260603]: 2025-10-02 09:26:20.562 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:26:20 np0005465604 nova_compute[260603]: 2025-10-02 09:26:20.562 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:26:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:20 np0005465604 podman[443840]: 2025-10-02 09:26:20.865604674 +0000 UTC m=+0.069324885 container create 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:26:20 np0005465604 podman[443840]: 2025-10-02 09:26:20.826090041 +0000 UTC m=+0.029810282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:26:20 np0005465604 systemd[1]: Started libpod-conmon-78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe.scope.
Oct  2 05:26:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:26:20 np0005465604 podman[443840]: 2025-10-02 09:26:20.992348163 +0000 UTC m=+0.196068454 container init 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:26:21 np0005465604 podman[443840]: 2025-10-02 09:26:21.000698704 +0000 UTC m=+0.204418945 container start 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:26:21 np0005465604 kind_mayer[443856]: 167 167
Oct  2 05:26:21 np0005465604 systemd[1]: libpod-78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe.scope: Deactivated successfully.
Oct  2 05:26:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:26:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4147006343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:26:21 np0005465604 podman[443840]: 2025-10-02 09:26:21.02969569 +0000 UTC m=+0.233416241 container attach 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:26:21 np0005465604 podman[443840]: 2025-10-02 09:26:21.030539227 +0000 UTC m=+0.234259468 container died 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:26:21 np0005465604 nova_compute[260603]: 2025-10-02 09:26:21.051 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:26:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0ac2d1669624845626e9edf8abea10d7062dbd61178c3b967a481cce7ee82928-merged.mount: Deactivated successfully.
Oct  2 05:26:21 np0005465604 nova_compute[260603]: 2025-10-02 09:26:21.215 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:26:21 np0005465604 nova_compute[260603]: 2025-10-02 09:26:21.216 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3519MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:26:21 np0005465604 nova_compute[260603]: 2025-10-02 09:26:21.217 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:26:21 np0005465604 nova_compute[260603]: 2025-10-02 09:26:21.217 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:26:21 np0005465604 podman[443840]: 2025-10-02 09:26:21.269647914 +0000 UTC m=+0.473368125 container remove 78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:26:21 np0005465604 systemd[1]: libpod-conmon-78b840581e5a2ed36a0a909eb214c3726e7099e68bb64c5c9d80679f47ba41fe.scope: Deactivated successfully.
Oct  2 05:26:21 np0005465604 nova_compute[260603]: 2025-10-02 09:26:21.291 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:26:21 np0005465604 nova_compute[260603]: 2025-10-02 09:26:21.291 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:26:21 np0005465604 podman[443881]: 2025-10-02 09:26:21.453054633 +0000 UTC m=+0.060877982 container create 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:26:21 np0005465604 systemd[1]: Started libpod-conmon-396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c.scope.
Oct  2 05:26:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:26:21 np0005465604 podman[443881]: 2025-10-02 09:26:21.429419815 +0000 UTC m=+0.037243184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:26:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ceeb2d577026225d95a25b63341af937b4f85952db29b084083342c99e7f4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ceeb2d577026225d95a25b63341af937b4f85952db29b084083342c99e7f4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ceeb2d577026225d95a25b63341af937b4f85952db29b084083342c99e7f4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ceeb2d577026225d95a25b63341af937b4f85952db29b084083342c99e7f4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:26:21 np0005465604 podman[443881]: 2025-10-02 09:26:21.556603098 +0000 UTC m=+0.164426487 container init 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:26:21 np0005465604 podman[443881]: 2025-10-02 09:26:21.569471159 +0000 UTC m=+0.177294538 container start 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct  2 05:26:21 np0005465604 podman[443881]: 2025-10-02 09:26:21.575170977 +0000 UTC m=+0.182994366 container attach 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:26:21 np0005465604 nova_compute[260603]: 2025-10-02 09:26:21.591 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:26:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:26:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264831642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:26:22 np0005465604 nova_compute[260603]: 2025-10-02 09:26:22.111 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:26:22 np0005465604 nova_compute[260603]: 2025-10-02 09:26:22.118 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:26:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:26:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828276912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:26:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:26:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828276912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:26:22 np0005465604 nova_compute[260603]: 2025-10-02 09:26:22.257 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:26:22 np0005465604 nova_compute[260603]: 2025-10-02 09:26:22.259 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:26:22 np0005465604 nova_compute[260603]: 2025-10-02 09:26:22.259 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:26:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:22 np0005465604 kind_perlman[443897]: {
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "osd_id": 2,
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "type": "bluestore"
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:    },
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "osd_id": 1,
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "type": "bluestore"
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:    },
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "osd_id": 0,
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:        "type": "bluestore"
Oct  2 05:26:22 np0005465604 kind_perlman[443897]:    }
Oct  2 05:26:22 np0005465604 kind_perlman[443897]: }
Oct  2 05:26:22 np0005465604 systemd[1]: libpod-396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c.scope: Deactivated successfully.
Oct  2 05:26:22 np0005465604 systemd[1]: libpod-396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c.scope: Consumed 1.019s CPU time.
Oct  2 05:26:22 np0005465604 podman[443881]: 2025-10-02 09:26:22.581393757 +0000 UTC m=+1.189217106 container died 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:26:22 np0005465604 systemd[1]: var-lib-containers-storage-overlay-24ceeb2d577026225d95a25b63341af937b4f85952db29b084083342c99e7f4e-merged.mount: Deactivated successfully.
Oct  2 05:26:22 np0005465604 podman[443881]: 2025-10-02 09:26:22.961000374 +0000 UTC m=+1.568823763 container remove 396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_perlman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 05:26:22 np0005465604 systemd[1]: libpod-conmon-396a04396d06f5cd818076c738c3454cf5578f596e8cdb991e630c432a6a505c.scope: Deactivated successfully.
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.045203) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183045237, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1824, "num_deletes": 253, "total_data_size": 2939113, "memory_usage": 2986768, "flush_reason": "Manual Compaction"}
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183086908, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2898432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66160, "largest_seqno": 67983, "table_properties": {"data_size": 2889991, "index_size": 5190, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17255, "raw_average_key_size": 20, "raw_value_size": 2873078, "raw_average_value_size": 3368, "num_data_blocks": 230, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759396986, "oldest_key_time": 1759396986, "file_creation_time": 1759397183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 41756 microseconds, and 7899 cpu microseconds.
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.086954) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2898432 bytes OK
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.086975) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.102399) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.102422) EVENT_LOG_v1 {"time_micros": 1759397183102417, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.102442) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2931348, prev total WAL file size 2972016, number of live WAL files 2.
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.103416) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2830KB)], [158(9759KB)]
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183103445, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12891962, "oldest_snapshot_seqno": -1}
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:26:23 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 72c99e21-5bce-448b-a503-786e3c7b024d does not exist
Oct  2 05:26:23 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6e3a5479-4a9a-4910-b517-cfb710ffd019 does not exist
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8533 keys, 11168005 bytes, temperature: kUnknown
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183248514, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11168005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11112279, "index_size": 33277, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 223718, "raw_average_key_size": 26, "raw_value_size": 10961487, "raw_average_value_size": 1284, "num_data_blocks": 1292, "num_entries": 8533, "num_filter_entries": 8533, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.248808) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11168005 bytes
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.255255) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.8 rd, 76.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.5 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(8.3) write-amplify(3.9) OK, records in: 9055, records dropped: 522 output_compression: NoCompression
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.255273) EVENT_LOG_v1 {"time_micros": 1759397183255265, "job": 98, "event": "compaction_finished", "compaction_time_micros": 145180, "compaction_time_cpu_micros": 25011, "output_level": 6, "num_output_files": 1, "total_output_size": 11168005, "num_input_records": 9055, "num_output_records": 8533, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183255796, "job": 98, "event": "table_file_deletion", "file_number": 160}
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397183257279, "job": 98, "event": "table_file_deletion", "file_number": 158}
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.103361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.257413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.257419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.257421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.257423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:26:23.257424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:26:23 np0005465604 nova_compute[260603]: 2025-10-02 09:26:23.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:26:23 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:26:24 np0005465604 nova_compute[260603]: 2025-10-02 09:26:24.256 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:26:24 np0005465604 nova_compute[260603]: 2025-10-02 09:26:24.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:25 np0005465604 nova_compute[260603]: 2025-10-02 09:26:25.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:26:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:26:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:26:28
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'images', 'vms', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr']
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:26:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:26:28 np0005465604 nova_compute[260603]: 2025-10-02 09:26:28.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:29 np0005465604 nova_compute[260603]: 2025-10-02 09:26:29.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:32 np0005465604 nova_compute[260603]: 2025-10-02 09:26:32.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:26:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:33 np0005465604 nova_compute[260603]: 2025-10-02 09:26:33.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:34 np0005465604 nova_compute[260603]: 2025-10-02 09:26:34.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:26:34.865 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:26:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:26:34.866 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:26:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:26:34.866 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:26:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:38 np0005465604 nova_compute[260603]: 2025-10-02 09:26:38.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:26:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:26:39 np0005465604 nova_compute[260603]: 2025-10-02 09:26:39.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:40 np0005465604 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  2 05:26:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:26:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 14K writes, 67K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1342 writes, 6080 keys, 1342 commit groups, 1.0 writes per commit group, ingest: 8.77 MB, 0.01 MB/s#012Interval WAL: 1342 writes, 1342 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     82.6      1.03              0.32        49    0.021       0      0       0.0       0.0#012  L6      1/0   10.65 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.8    129.7    109.8      3.74              1.46        48    0.078    318K    25K       0.0       0.0#012 Sum      1/0   10.65 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.8    101.7    103.9      4.77              1.79        97    0.049    318K    25K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.3     50.9     52.0      1.11              0.21        10    0.111     43K   2572       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    129.7    109.8      3.74              1.46        48    0.078    318K    25K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     83.0      1.02              0.32        48    0.021       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.083, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.48 GB write, 0.08 MB/s write, 0.47 GB read, 0.08 MB/s read, 4.8 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.09 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 55.00 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000313 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3544,52.69 MB,17.3328%) FilterBlock(98,890.67 KB,0.286117%) IndexBlock(98,1.44 MB,0.472827%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 05:26:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:43 np0005465604 podman[444018]: 2025-10-02 09:26:43.029051783 +0000 UTC m=+0.081679753 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:26:43 np0005465604 podman[444017]: 2025-10-02 09:26:43.093274329 +0000 UTC m=+0.145675292 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 05:26:43 np0005465604 nova_compute[260603]: 2025-10-02 09:26:43.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:44 np0005465604 nova_compute[260603]: 2025-10-02 09:26:44.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:48 np0005465604 podman[444065]: 2025-10-02 09:26:48.014524232 +0000 UTC m=+0.065977372 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:26:48 np0005465604 podman[444066]: 2025-10-02 09:26:48.036772766 +0000 UTC m=+0.082721474 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:26:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:48 np0005465604 nova_compute[260603]: 2025-10-02 09:26:48.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:49 np0005465604 nova_compute[260603]: 2025-10-02 09:26:49.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:49 np0005465604 nova_compute[260603]: 2025-10-02 09:26:49.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:26:49 np0005465604 nova_compute[260603]: 2025-10-02 09:26:49.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:26:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:53 np0005465604 nova_compute[260603]: 2025-10-02 09:26:53.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:54 np0005465604 nova_compute[260603]: 2025-10-02 09:26:54.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:26:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:26:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:26:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:26:58 np0005465604 nova_compute[260603]: 2025-10-02 09:26:58.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:26:59 np0005465604 nova_compute[260603]: 2025-10-02 09:26:59.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:03 np0005465604 nova_compute[260603]: 2025-10-02 09:27:03.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:04 np0005465604 nova_compute[260603]: 2025-10-02 09:27:04.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:08 np0005465604 nova_compute[260603]: 2025-10-02 09:27:08.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:09 np0005465604 nova_compute[260603]: 2025-10-02 09:27:09.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:13 np0005465604 nova_compute[260603]: 2025-10-02 09:27:13.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:13 np0005465604 podman[444106]: 2025-10-02 09:27:13.978563636 +0000 UTC m=+0.047267037 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 05:27:14 np0005465604 podman[444105]: 2025-10-02 09:27:14.002511694 +0000 UTC m=+0.074177638 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 05:27:14 np0005465604 nova_compute[260603]: 2025-10-02 09:27:14.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:14 np0005465604 nova_compute[260603]: 2025-10-02 09:27:14.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:27:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:16 np0005465604 nova_compute[260603]: 2025-10-02 09:27:16.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:27:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:18 np0005465604 nova_compute[260603]: 2025-10-02 09:27:18.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:27:18 np0005465604 nova_compute[260603]: 2025-10-02 09:27:18.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:27:18 np0005465604 nova_compute[260603]: 2025-10-02 09:27:18.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:27:18 np0005465604 nova_compute[260603]: 2025-10-02 09:27:18.543 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:27:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:18 np0005465604 nova_compute[260603]: 2025-10-02 09:27:18.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:19 np0005465604 podman[444149]: 2025-10-02 09:27:19.009366861 +0000 UTC m=+0.067165319 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:27:19 np0005465604 podman[444150]: 2025-10-02 09:27:19.031448751 +0000 UTC m=+0.085106980 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 05:27:19 np0005465604 nova_compute[260603]: 2025-10-02 09:27:19.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:19 np0005465604 nova_compute[260603]: 2025-10-02 09:27:19.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:27:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:27:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918914276' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:27:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:27:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918914276' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:27:22 np0005465604 nova_compute[260603]: 2025-10-02 09:27:22.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:27:22 np0005465604 nova_compute[260603]: 2025-10-02 09:27:22.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:27:22 np0005465604 nova_compute[260603]: 2025-10-02 09:27:22.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:27:22 np0005465604 nova_compute[260603]: 2025-10-02 09:27:22.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:27:22 np0005465604 nova_compute[260603]: 2025-10-02 09:27:22.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:27:22 np0005465604 nova_compute[260603]: 2025-10-02 09:27:22.549 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:27:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 6 op/s
Oct  2 05:27:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:27:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001045560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:27:22 np0005465604 nova_compute[260603]: 2025-10-02 09:27:22.974 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.188 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.189 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3576MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.189 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.189 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.305 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.305 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.322 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:27:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178586253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.803 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.811 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.834 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.836 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:27:23 np0005465604 nova_compute[260603]: 2025-10-02 09:27:23.836 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:27:24 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a4cc0a1e-f33e-4fc7-aa37-5490d4fd89aa does not exist
Oct  2 05:27:24 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev f6c7187d-3390-4dea-86fe-7d98a3531df4 does not exist
Oct  2 05:27:24 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 85288a4f-6e92-4495-8850-a8aec0bc19ba does not exist
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:27:24 np0005465604 nova_compute[260603]: 2025-10-02 09:27:24.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:27:24 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:27:24 np0005465604 podman[444511]: 2025-10-02 09:27:24.766379339 +0000 UTC m=+0.041434716 container create 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 05:27:24 np0005465604 systemd[1]: Started libpod-conmon-9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851.scope.
Oct  2 05:27:24 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:27:24 np0005465604 podman[444511]: 2025-10-02 09:27:24.748041056 +0000 UTC m=+0.023096453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:27:24 np0005465604 podman[444511]: 2025-10-02 09:27:24.865099672 +0000 UTC m=+0.140155089 container init 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 05:27:24 np0005465604 podman[444511]: 2025-10-02 09:27:24.880245375 +0000 UTC m=+0.155300742 container start 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 05:27:24 np0005465604 podman[444511]: 2025-10-02 09:27:24.884728075 +0000 UTC m=+0.159783552 container attach 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:27:24 np0005465604 agitated_haibt[444528]: 167 167
Oct  2 05:27:24 np0005465604 systemd[1]: libpod-9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851.scope: Deactivated successfully.
Oct  2 05:27:24 np0005465604 podman[444511]: 2025-10-02 09:27:24.888647807 +0000 UTC m=+0.163703184 container died 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 05:27:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bc206c9ef383c99a0b6bb56e323e61c2138353c29b8ed8780e60c42148f45fcb-merged.mount: Deactivated successfully.
Oct  2 05:27:24 np0005465604 podman[444511]: 2025-10-02 09:27:24.931069352 +0000 UTC m=+0.206124759 container remove 9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 05:27:24 np0005465604 systemd[1]: libpod-conmon-9545107c295e2ae3e8392a099040b7b080f6ddee2db3103dfadf2c8469129851.scope: Deactivated successfully.
Oct  2 05:27:25 np0005465604 podman[444550]: 2025-10-02 09:27:25.115489703 +0000 UTC m=+0.052337796 container create 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 05:27:25 np0005465604 systemd[1]: Started libpod-conmon-828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73.scope.
Oct  2 05:27:25 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:27:25 np0005465604 podman[444550]: 2025-10-02 09:27:25.090231373 +0000 UTC m=+0.027079536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:27:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:25 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:25 np0005465604 podman[444550]: 2025-10-02 09:27:25.201376925 +0000 UTC m=+0.138225068 container init 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:27:25 np0005465604 podman[444550]: 2025-10-02 09:27:25.212771831 +0000 UTC m=+0.149619904 container start 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:27:25 np0005465604 podman[444550]: 2025-10-02 09:27:25.21624644 +0000 UTC m=+0.153094543 container attach 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:27:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:25 np0005465604 nova_compute[260603]: 2025-10-02 09:27:25.832 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:27:25 np0005465604 nova_compute[260603]: 2025-10-02 09:27:25.834 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:27:26 np0005465604 festive_carver[444567]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:27:26 np0005465604 festive_carver[444567]: --> relative data size: 1.0
Oct  2 05:27:26 np0005465604 festive_carver[444567]: --> All data devices are unavailable
Oct  2 05:27:26 np0005465604 systemd[1]: libpod-828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73.scope: Deactivated successfully.
Oct  2 05:27:26 np0005465604 podman[444550]: 2025-10-02 09:27:26.314941237 +0000 UTC m=+1.251789310 container died 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 05:27:26 np0005465604 systemd[1]: libpod-828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73.scope: Consumed 1.038s CPU time.
Oct  2 05:27:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e67ac39440d2688266d7135dc168bf5a663f9c542390a15b244e2b6a12643eae-merged.mount: Deactivated successfully.
Oct  2 05:27:26 np0005465604 podman[444550]: 2025-10-02 09:27:26.393529872 +0000 UTC m=+1.330377945 container remove 828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 05:27:26 np0005465604 systemd[1]: libpod-conmon-828dafe4fe01c144e17d97c0c06362b4ff7117fb0de999837d78ec6936a3ba73.scope: Deactivated successfully.
Oct  2 05:27:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  2 05:27:27 np0005465604 podman[444749]: 2025-10-02 09:27:27.098302205 +0000 UTC m=+0.035946214 container create 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:27:27 np0005465604 systemd[1]: Started libpod-conmon-4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d.scope.
Oct  2 05:27:27 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:27:27 np0005465604 podman[444749]: 2025-10-02 09:27:27.175177086 +0000 UTC m=+0.112821135 container init 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 05:27:27 np0005465604 podman[444749]: 2025-10-02 09:27:27.082078028 +0000 UTC m=+0.019722057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:27:27 np0005465604 podman[444749]: 2025-10-02 09:27:27.182549967 +0000 UTC m=+0.120193976 container start 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 05:27:27 np0005465604 podman[444749]: 2025-10-02 09:27:27.18620492 +0000 UTC m=+0.123848979 container attach 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 05:27:27 np0005465604 systemd[1]: libpod-4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d.scope: Deactivated successfully.
Oct  2 05:27:27 np0005465604 distracted_gauss[444765]: 167 167
Oct  2 05:27:27 np0005465604 conmon[444765]: conmon 4976c1837aae89b48a9b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d.scope/container/memory.events
Oct  2 05:27:27 np0005465604 podman[444749]: 2025-10-02 09:27:27.188365838 +0000 UTC m=+0.126009847 container died 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:27:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0302047885f15203920e1921295e2bab35298391515a17821b24c1cf6a9bcc33-merged.mount: Deactivated successfully.
Oct  2 05:27:27 np0005465604 podman[444749]: 2025-10-02 09:27:27.217729915 +0000 UTC m=+0.155373924 container remove 4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 05:27:27 np0005465604 systemd[1]: libpod-conmon-4976c1837aae89b48a9b102c7c072544616464dc971a1820dbde18b3ae119a6d.scope: Deactivated successfully.
Oct  2 05:27:27 np0005465604 podman[444792]: 2025-10-02 09:27:27.388667834 +0000 UTC m=+0.045359808 container create a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:27:27 np0005465604 systemd[1]: Started libpod-conmon-a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2.scope.
Oct  2 05:27:27 np0005465604 podman[444792]: 2025-10-02 09:27:27.368089881 +0000 UTC m=+0.024781835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:27:27 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:27:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d5faa1a0d816f3b727794f60429ebe684af46c06472cdcd689509d40c0a56d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d5faa1a0d816f3b727794f60429ebe684af46c06472cdcd689509d40c0a56d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d5faa1a0d816f3b727794f60429ebe684af46c06472cdcd689509d40c0a56d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:27 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d5faa1a0d816f3b727794f60429ebe684af46c06472cdcd689509d40c0a56d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:27 np0005465604 podman[444792]: 2025-10-02 09:27:27.497605887 +0000 UTC m=+0.154297901 container init a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:27:27 np0005465604 podman[444792]: 2025-10-02 09:27:27.503631676 +0000 UTC m=+0.160323650 container start a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 05:27:27 np0005465604 podman[444792]: 2025-10-02 09:27:27.507883348 +0000 UTC m=+0.164575392 container attach a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:27:27 np0005465604 nova_compute[260603]: 2025-10-02 09:27:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:27:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:27:28
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.mgr', '.rgw.root']
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]: {
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:    "0": [
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:        {
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "devices": [
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "/dev/loop3"
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            ],
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_name": "ceph_lv0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_size": "21470642176",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "name": "ceph_lv0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "tags": {
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.cluster_name": "ceph",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.crush_device_class": "",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.encrypted": "0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.osd_id": "0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.type": "block",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.vdo": "0"
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            },
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "type": "block",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "vg_name": "ceph_vg0"
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:        }
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:    ],
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:    "1": [
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:        {
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "devices": [
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "/dev/loop4"
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            ],
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_name": "ceph_lv1",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_size": "21470642176",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "name": "ceph_lv1",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "tags": {
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.cluster_name": "ceph",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.crush_device_class": "",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.encrypted": "0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.osd_id": "1",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.type": "block",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.vdo": "0"
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            },
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "type": "block",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "vg_name": "ceph_vg1"
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:        }
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:    ],
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:    "2": [
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:        {
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "devices": [
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "/dev/loop5"
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            ],
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_name": "ceph_lv2",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_size": "21470642176",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "name": "ceph_lv2",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "tags": {
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.cluster_name": "ceph",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.crush_device_class": "",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.encrypted": "0",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.osd_id": "2",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.type": "block",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:                "ceph.vdo": "0"
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            },
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "type": "block",
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:            "vg_name": "ceph_vg2"
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:        }
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]:    ]
Oct  2 05:27:28 np0005465604 compassionate_agnesi[444808]: }
Oct  2 05:27:28 np0005465604 systemd[1]: libpod-a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2.scope: Deactivated successfully.
Oct  2 05:27:28 np0005465604 podman[444792]: 2025-10-02 09:27:28.27729349 +0000 UTC m=+0.933985454 container died a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:27:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c8d5faa1a0d816f3b727794f60429ebe684af46c06472cdcd689509d40c0a56d-merged.mount: Deactivated successfully.
Oct  2 05:27:28 np0005465604 podman[444792]: 2025-10-02 09:27:28.336597733 +0000 UTC m=+0.993289667 container remove a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_agnesi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 05:27:28 np0005465604 systemd[1]: libpod-conmon-a579ca0e102982752507c74b3bf4e1b9b96479d5f114cdcc75faca22dce2b7f2.scope: Deactivated successfully.
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:27:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:27:28 np0005465604 nova_compute[260603]: 2025-10-02 09:27:28.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:27:29 np0005465604 podman[444974]: 2025-10-02 09:27:29.142697481 +0000 UTC m=+0.071560926 container create 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 05:27:29 np0005465604 systemd[1]: Started libpod-conmon-08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa.scope.
Oct  2 05:27:29 np0005465604 podman[444974]: 2025-10-02 09:27:29.117336148 +0000 UTC m=+0.046199593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:27:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:27:29 np0005465604 podman[444974]: 2025-10-02 09:27:29.261215393 +0000 UTC m=+0.190078848 container init 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:27:29 np0005465604 podman[444974]: 2025-10-02 09:27:29.275065415 +0000 UTC m=+0.203928830 container start 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 05:27:29 np0005465604 podman[444974]: 2025-10-02 09:27:29.278609476 +0000 UTC m=+0.207472891 container attach 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 05:27:29 np0005465604 ecstatic_ishizaka[444991]: 167 167
Oct  2 05:27:29 np0005465604 systemd[1]: libpod-08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa.scope: Deactivated successfully.
Oct  2 05:27:29 np0005465604 podman[444974]: 2025-10-02 09:27:29.286071189 +0000 UTC m=+0.214934644 container died 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:27:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b4036798b60e16d5d154774b32e0676b9d7fd4d6d6303c31e28d5ad269d0cd63-merged.mount: Deactivated successfully.
Oct  2 05:27:29 np0005465604 podman[444974]: 2025-10-02 09:27:29.345456964 +0000 UTC m=+0.274320409 container remove 08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 05:27:29 np0005465604 systemd[1]: libpod-conmon-08f22ae2e70d4250f5522a616b44af85730f763aabafc645e29c5ff1c0924aaa.scope: Deactivated successfully.
Oct  2 05:27:29 np0005465604 nova_compute[260603]: 2025-10-02 09:27:29.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:29 np0005465604 podman[445014]: 2025-10-02 09:27:29.618338957 +0000 UTC m=+0.076195271 container create c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:27:29 np0005465604 systemd[1]: Started libpod-conmon-c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6.scope.
Oct  2 05:27:29 np0005465604 podman[445014]: 2025-10-02 09:27:29.591352234 +0000 UTC m=+0.049208588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:27:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:27:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9801916c7f2163211c03dce83b64506fa18b86e5327a3725edb480399cb441/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9801916c7f2163211c03dce83b64506fa18b86e5327a3725edb480399cb441/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9801916c7f2163211c03dce83b64506fa18b86e5327a3725edb480399cb441/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:29 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9801916c7f2163211c03dce83b64506fa18b86e5327a3725edb480399cb441/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:27:29 np0005465604 podman[445014]: 2025-10-02 09:27:29.725640389 +0000 UTC m=+0.183496723 container init c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:27:29 np0005465604 podman[445014]: 2025-10-02 09:27:29.738589323 +0000 UTC m=+0.196445667 container start c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 05:27:29 np0005465604 podman[445014]: 2025-10-02 09:27:29.742923998 +0000 UTC m=+0.200780332 container attach c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:27:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  2 05:27:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]: {
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "osd_id": 2,
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "type": "bluestore"
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:    },
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "osd_id": 1,
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "type": "bluestore"
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:    },
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "osd_id": 0,
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:        "type": "bluestore"
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]:    }
Oct  2 05:27:30 np0005465604 cool_bhaskara[445031]: }
Oct  2 05:27:30 np0005465604 systemd[1]: libpod-c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6.scope: Deactivated successfully.
Oct  2 05:27:30 np0005465604 podman[445014]: 2025-10-02 09:27:30.950629261 +0000 UTC m=+1.408485565 container died c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:27:30 np0005465604 systemd[1]: libpod-c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6.scope: Consumed 1.223s CPU time.
Oct  2 05:27:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-dd9801916c7f2163211c03dce83b64506fa18b86e5327a3725edb480399cb441-merged.mount: Deactivated successfully.
Oct  2 05:27:31 np0005465604 podman[445014]: 2025-10-02 09:27:31.023639201 +0000 UTC m=+1.481495495 container remove c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 05:27:31 np0005465604 systemd[1]: libpod-conmon-c5d9524b9cb55157c80e66dc9e1f9136c944c9191041d63e03987b5b0adf2ae6.scope: Deactivated successfully.
Oct  2 05:27:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:27:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:27:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:27:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:27:31 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a64d73d6-2855-49a1-97df-e9e02d1dee10 does not exist
Oct  2 05:27:31 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 737a2ced-39a1-424f-8028-b68deb91c42a does not exist
Oct  2 05:27:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:27:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:27:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct  2 05:27:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct  2 05:27:32 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct  2 05:27:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 831 KiB/s rd, 614 B/s wr, 15 op/s
Oct  2 05:27:33 np0005465604 nova_compute[260603]: 2025-10-02 09:27:33.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:34 np0005465604 nova_compute[260603]: 2025-10-02 09:27:34.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:34 np0005465604 nova_compute[260603]: 2025-10-02 09:27:34.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:27:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 20 op/s
Oct  2 05:27:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:27:34.866 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:27:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:27:34.867 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:27:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:27:34.867 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:27:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct  2 05:27:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct  2 05:27:35 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct  2 05:27:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Oct  2 05:27:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct  2 05:27:38 np0005465604 nova_compute[260603]: 2025-10-02 09:27:38.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:27:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:27:39 np0005465604 nova_compute[260603]: 2025-10-02 09:27:39.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 59 op/s
Oct  2 05:27:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct  2 05:27:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct  2 05:27:40 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct  2 05:27:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Oct  2 05:27:43 np0005465604 nova_compute[260603]: 2025-10-02 09:27:43.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:44 np0005465604 nova_compute[260603]: 2025-10-02 09:27:44.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.6 KiB/s wr, 33 op/s
Oct  2 05:27:44 np0005465604 podman[445130]: 2025-10-02 09:27:44.986514875 +0000 UTC m=+0.051847790 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:27:45 np0005465604 podman[445129]: 2025-10-02 09:27:45.012454635 +0000 UTC m=+0.077719528 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Oct  2 05:27:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Oct  2 05:27:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:48 np0005465604 nova_compute[260603]: 2025-10-02 09:27:48.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct  2 05:27:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct  2 05:27:48 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct  2 05:27:49 np0005465604 nova_compute[260603]: 2025-10-02 09:27:49.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:49 np0005465604 podman[445174]: 2025-10-02 09:27:49.995708223 +0000 UTC m=+0.067822509 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct  2 05:27:50 np0005465604 podman[445175]: 2025-10-02 09:27:50.009501485 +0000 UTC m=+0.071725622 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 05:27:50 np0005465604 nova_compute[260603]: 2025-10-02 09:27:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:27:50 np0005465604 nova_compute[260603]: 2025-10-02 09:27:50.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:27:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:27:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Oct  2 05:27:53 np0005465604 nova_compute[260603]: 2025-10-02 09:27:53.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:54 np0005465604 nova_compute[260603]: 2025-10-02 09:27:54.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  2 05:27:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:27:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  2 05:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:27:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:27:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  2 05:27:58 np0005465604 nova_compute[260603]: 2025-10-02 09:27:58.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:27:59 np0005465604 nova_compute[260603]: 2025-10-02 09:27:59.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Oct  2 05:28:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct  2 05:28:03 np0005465604 nova_compute[260603]: 2025-10-02 09:28:03.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:04 np0005465604 nova_compute[260603]: 2025-10-02 09:28:04.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.758882) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285758908, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1093, "num_deletes": 253, "total_data_size": 1602403, "memory_usage": 1627800, "flush_reason": "Manual Compaction"}
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285764936, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 985433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67984, "largest_seqno": 69076, "table_properties": {"data_size": 981206, "index_size": 1814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11182, "raw_average_key_size": 20, "raw_value_size": 972006, "raw_average_value_size": 1823, "num_data_blocks": 82, "num_entries": 533, "num_filter_entries": 533, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397183, "oldest_key_time": 1759397183, "file_creation_time": 1759397285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 6107 microseconds, and 3007 cpu microseconds.
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.764991) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 985433 bytes OK
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.765022) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.766406) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.766416) EVENT_LOG_v1 {"time_micros": 1759397285766413, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.766430) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1597307, prev total WAL file size 1597307, number of live WAL files 2.
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.767034) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373533' seq:72057594037927935, type:22 .. '6D6772737461740033303034' seq:0, type:0; will stop at (end)
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(962KB)], [161(10MB)]
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285767058, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12153438, "oldest_snapshot_seqno": -1}
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8589 keys, 9440695 bytes, temperature: kUnknown
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285824345, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9440695, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9387901, "index_size": 30217, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 225017, "raw_average_key_size": 26, "raw_value_size": 9239412, "raw_average_value_size": 1075, "num_data_blocks": 1168, "num_entries": 8589, "num_filter_entries": 8589, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.824559) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9440695 bytes
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.826358) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.9 rd, 164.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(21.9) write-amplify(9.6) OK, records in: 9066, records dropped: 477 output_compression: NoCompression
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.826373) EVENT_LOG_v1 {"time_micros": 1759397285826366, "job": 100, "event": "compaction_finished", "compaction_time_micros": 57352, "compaction_time_cpu_micros": 22374, "output_level": 6, "num_output_files": 1, "total_output_size": 9440695, "num_input_records": 9066, "num_output_records": 8589, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285826652, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397285828689, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.766956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.828810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.828816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.828819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.828822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:28:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:28:05.828825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:28:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:08 np0005465604 nova_compute[260603]: 2025-10-02 09:28:08.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:09 np0005465604 nova_compute[260603]: 2025-10-02 09:28:09.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:13 np0005465604 nova_compute[260603]: 2025-10-02 09:28:13.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:14 np0005465604 nova_compute[260603]: 2025-10-02 09:28:14.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:14 np0005465604 nova_compute[260603]: 2025-10-02 09:28:14.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:28:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:16 np0005465604 podman[445215]: 2025-10-02 09:28:16.027736883 +0000 UTC m=+0.075562981 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 05:28:16 np0005465604 podman[445214]: 2025-10-02 09:28:16.06830017 +0000 UTC m=+0.130155737 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 05:28:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:17 np0005465604 nova_compute[260603]: 2025-10-02 09:28:17.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:28:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:18 np0005465604 nova_compute[260603]: 2025-10-02 09:28:18.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:19 np0005465604 nova_compute[260603]: 2025-10-02 09:28:19.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:28:19 np0005465604 nova_compute[260603]: 2025-10-02 09:28:19.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:28:19 np0005465604 nova_compute[260603]: 2025-10-02 09:28:19.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:28:19 np0005465604 nova_compute[260603]: 2025-10-02 09:28:19.534 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:28:19 np0005465604 nova_compute[260603]: 2025-10-02 09:28:19.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:21 np0005465604 podman[445260]: 2025-10-02 09:28:21.002482336 +0000 UTC m=+0.061748330 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  2 05:28:21 np0005465604 podman[445259]: 2025-10-02 09:28:21.022487581 +0000 UTC m=+0.077678568 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 05:28:21 np0005465604 nova_compute[260603]: 2025-10-02 09:28:21.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:28:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:28:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 45K writes, 180K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1442 writes, 6120 keys, 1442 commit groups, 1.0 writes per commit group, ingest: 6.56 MB, 0.01 MB/s#012Interval WAL: 1442 writes, 522 syncs, 2.76 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x555e7d2d8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Oct  2 05:28:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:28:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2302162713' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:28:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:28:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2302162713' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:28:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:23 np0005465604 nova_compute[260603]: 2025-10-02 09:28:23.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:28:23 np0005465604 nova_compute[260603]: 2025-10-02 09:28:23.544 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:28:23 np0005465604 nova_compute[260603]: 2025-10-02 09:28:23.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:28:23 np0005465604 nova_compute[260603]: 2025-10-02 09:28:23.545 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:28:23 np0005465604 nova_compute[260603]: 2025-10-02 09:28:23.545 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:28:23 np0005465604 nova_compute[260603]: 2025-10-02 09:28:23.546 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:28:23 np0005465604 nova_compute[260603]: 2025-10-02 09:28:23.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:28:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3270020275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.044 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.174 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.175 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3588MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.176 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.176 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.241 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.241 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.258 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:28:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:28:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3283747876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.690 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.695 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.713 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.715 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:28:24 np0005465604 nova_compute[260603]: 2025-10-02 09:28:24.715 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:28:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:28:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 48K writes, 187K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1591 writes, 5946 keys, 1591 commit groups, 1.0 writes per commit group, ingest: 5.99 MB, 0.01 MB/s#012Interval WAL: 1591 writes, 616 syncs, 2.58 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtabl
Oct  2 05:28:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:26 np0005465604 nova_compute[260603]: 2025-10-02 09:28:26.712 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:28:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:28:28
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'vms', 'backups', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:28:28 np0005465604 nova_compute[260603]: 2025-10-02 09:28:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:28:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:28:28 np0005465604 nova_compute[260603]: 2025-10-02 09:28:28.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:29 np0005465604 nova_compute[260603]: 2025-10-02 09:28:29.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:28:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 37K writes, 145K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 37K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1178 writes, 4161 keys, 1178 commit groups, 1.0 writes per commit group, ingest: 5.22 MB, 0.01 MB/s#012Interval WAL: 1178 writes, 469 syncs, 2.51 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:28:32 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2af39d83-cb0b-4423-a41d-461070a09e7e does not exist
Oct  2 05:28:32 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 35c2d301-022b-4444-977c-4d772eb11cd0 does not exist
Oct  2 05:28:32 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6f276a30-309f-4e12-9f03-13a95808266a does not exist
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:28:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 05:28:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:28:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:28:32 np0005465604 podman[445612]: 2025-10-02 09:28:32.860734693 +0000 UTC m=+0.048310189 container create fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:28:32 np0005465604 systemd[1]: Started libpod-conmon-fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c.scope.
Oct  2 05:28:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:28:32 np0005465604 podman[445612]: 2025-10-02 09:28:32.841161132 +0000 UTC m=+0.028736638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:28:32 np0005465604 podman[445612]: 2025-10-02 09:28:32.958942582 +0000 UTC m=+0.146518098 container init fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:28:32 np0005465604 podman[445612]: 2025-10-02 09:28:32.966564199 +0000 UTC m=+0.154139675 container start fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:28:32 np0005465604 podman[445612]: 2025-10-02 09:28:32.9697886 +0000 UTC m=+0.157364126 container attach fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 05:28:32 np0005465604 brave_galileo[445628]: 167 167
Oct  2 05:28:32 np0005465604 systemd[1]: libpod-fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c.scope: Deactivated successfully.
Oct  2 05:28:32 np0005465604 conmon[445628]: conmon fc224efb1d4af0007920 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c.scope/container/memory.events
Oct  2 05:28:32 np0005465604 podman[445612]: 2025-10-02 09:28:32.976048106 +0000 UTC m=+0.163623612 container died fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:28:33 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9dd066d5c975c60dd36ba13c696cd2d1853f5c82cc37ba0a997970b73cda15b9-merged.mount: Deactivated successfully.
Oct  2 05:28:33 np0005465604 podman[445612]: 2025-10-02 09:28:33.03926645 +0000 UTC m=+0.226841926 container remove fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_galileo, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:28:33 np0005465604 systemd[1]: libpod-conmon-fc224efb1d4af00079205406bfd7c41ed7eb08f7866b31917e66dc0e8b50893c.scope: Deactivated successfully.
Oct  2 05:28:33 np0005465604 podman[445652]: 2025-10-02 09:28:33.200797295 +0000 UTC m=+0.038763341 container create 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 05:28:33 np0005465604 systemd[1]: Started libpod-conmon-81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f.scope.
Oct  2 05:28:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:28:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:33 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:33 np0005465604 podman[445652]: 2025-10-02 09:28:33.273363042 +0000 UTC m=+0.111329098 container init 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:28:33 np0005465604 podman[445652]: 2025-10-02 09:28:33.185061634 +0000 UTC m=+0.023027700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:28:33 np0005465604 podman[445652]: 2025-10-02 09:28:33.28674352 +0000 UTC m=+0.124709566 container start 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:28:33 np0005465604 podman[445652]: 2025-10-02 09:28:33.290455876 +0000 UTC m=+0.128421952 container attach 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 05:28:33 np0005465604 nova_compute[260603]: 2025-10-02 09:28:33.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:34 np0005465604 determined_bouman[445669]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:28:34 np0005465604 determined_bouman[445669]: --> relative data size: 1.0
Oct  2 05:28:34 np0005465604 determined_bouman[445669]: --> All data devices are unavailable
Oct  2 05:28:34 np0005465604 systemd[1]: libpod-81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f.scope: Deactivated successfully.
Oct  2 05:28:34 np0005465604 systemd[1]: libpod-81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f.scope: Consumed 1.097s CPU time.
Oct  2 05:28:34 np0005465604 podman[445698]: 2025-10-02 09:28:34.504249168 +0000 UTC m=+0.025397154 container died 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 05:28:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:34 np0005465604 nova_compute[260603]: 2025-10-02 09:28:34.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:28:34.867 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:28:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:28:34.868 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:28:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:28:34.869 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:28:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6f63360c7f6aa7aa2612efa026c22149c0e081ef24d8fccc99bdc97e53fa2443-merged.mount: Deactivated successfully.
Oct  2 05:28:35 np0005465604 podman[445698]: 2025-10-02 09:28:35.196572773 +0000 UTC m=+0.717720729 container remove 81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bouman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 05:28:35 np0005465604 systemd[1]: libpod-conmon-81acadee52016411bdbf0be28051863bcabcd7670b7af026f9200c16739a047f.scope: Deactivated successfully.
Oct  2 05:28:35 np0005465604 nova_compute[260603]: 2025-10-02 09:28:35.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:28:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:35 np0005465604 podman[445854]: 2025-10-02 09:28:35.908907682 +0000 UTC m=+0.056906148 container create 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:28:35 np0005465604 systemd[1]: Started libpod-conmon-1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c.scope.
Oct  2 05:28:35 np0005465604 podman[445854]: 2025-10-02 09:28:35.879710431 +0000 UTC m=+0.027708997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:28:35 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:28:36 np0005465604 podman[445854]: 2025-10-02 09:28:36.013325734 +0000 UTC m=+0.161324220 container init 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:28:36 np0005465604 podman[445854]: 2025-10-02 09:28:36.021063395 +0000 UTC m=+0.169061901 container start 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:28:36 np0005465604 podman[445854]: 2025-10-02 09:28:36.025856005 +0000 UTC m=+0.173854501 container attach 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 05:28:36 np0005465604 systemd[1]: libpod-1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c.scope: Deactivated successfully.
Oct  2 05:28:36 np0005465604 dreamy_lumiere[445870]: 167 167
Oct  2 05:28:36 np0005465604 podman[445854]: 2025-10-02 09:28:36.02887666 +0000 UTC m=+0.176875136 container died 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:28:36 np0005465604 systemd[1]: var-lib-containers-storage-overlay-54c2537928ef1b90515b1051557ccba619648b0ae047bb4b87b867e4059d5a8a-merged.mount: Deactivated successfully.
Oct  2 05:28:36 np0005465604 podman[445854]: 2025-10-02 09:28:36.069146287 +0000 UTC m=+0.217144743 container remove 1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lumiere, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:28:36 np0005465604 systemd[1]: libpod-conmon-1037dbb8ca5607060b7e35d7be0504df81060904fc4aa75172f5bfaa539b294c.scope: Deactivated successfully.
Oct  2 05:28:36 np0005465604 podman[445894]: 2025-10-02 09:28:36.256987145 +0000 UTC m=+0.050678965 container create 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:28:36 np0005465604 systemd[1]: Started libpod-conmon-2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73.scope.
Oct  2 05:28:36 np0005465604 podman[445894]: 2025-10-02 09:28:36.236876106 +0000 UTC m=+0.030567886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:28:36 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:28:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd03c313b23b35baf6fc8fc4ca87fbf7ca9bf08437d65904fc8245a7db976e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd03c313b23b35baf6fc8fc4ca87fbf7ca9bf08437d65904fc8245a7db976e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd03c313b23b35baf6fc8fc4ca87fbf7ca9bf08437d65904fc8245a7db976e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:36 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bbd03c313b23b35baf6fc8fc4ca87fbf7ca9bf08437d65904fc8245a7db976e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:36 np0005465604 podman[445894]: 2025-10-02 09:28:36.362456729 +0000 UTC m=+0.156148539 container init 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:28:36 np0005465604 podman[445894]: 2025-10-02 09:28:36.374854156 +0000 UTC m=+0.168545976 container start 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:28:36 np0005465604 podman[445894]: 2025-10-02 09:28:36.378610173 +0000 UTC m=+0.172301973 container attach 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:28:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:37 np0005465604 zealous_ride[445911]: {
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:    "0": [
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:        {
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "devices": [
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "/dev/loop3"
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            ],
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_name": "ceph_lv0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_size": "21470642176",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "name": "ceph_lv0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "tags": {
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.cluster_name": "ceph",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.crush_device_class": "",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.encrypted": "0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.osd_id": "0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.type": "block",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.vdo": "0"
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            },
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "type": "block",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "vg_name": "ceph_vg0"
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:        }
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:    ],
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:    "1": [
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:        {
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "devices": [
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "/dev/loop4"
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            ],
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_name": "ceph_lv1",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_size": "21470642176",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "name": "ceph_lv1",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "tags": {
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.cluster_name": "ceph",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.crush_device_class": "",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.encrypted": "0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.osd_id": "1",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.type": "block",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.vdo": "0"
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            },
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "type": "block",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "vg_name": "ceph_vg1"
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:        }
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:    ],
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:    "2": [
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:        {
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "devices": [
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "/dev/loop5"
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            ],
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_name": "ceph_lv2",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_size": "21470642176",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "name": "ceph_lv2",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "tags": {
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.cluster_name": "ceph",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.crush_device_class": "",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.encrypted": "0",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.osd_id": "2",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.type": "block",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:                "ceph.vdo": "0"
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            },
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "type": "block",
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:            "vg_name": "ceph_vg2"
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:        }
Oct  2 05:28:37 np0005465604 zealous_ride[445911]:    ]
Oct  2 05:28:37 np0005465604 zealous_ride[445911]: }
Oct  2 05:28:37 np0005465604 systemd[1]: libpod-2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73.scope: Deactivated successfully.
Oct  2 05:28:37 np0005465604 podman[445894]: 2025-10-02 09:28:37.204861951 +0000 UTC m=+0.998553761 container died 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 05:28:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9bbd03c313b23b35baf6fc8fc4ca87fbf7ca9bf08437d65904fc8245a7db976e-merged.mount: Deactivated successfully.
Oct  2 05:28:37 np0005465604 podman[445894]: 2025-10-02 09:28:37.267356703 +0000 UTC m=+1.061048513 container remove 2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:28:37 np0005465604 systemd[1]: libpod-conmon-2eaf9cce937fecaa465d3215e2a2c2bc6cf33e9de20a57a96a2477f9cc7d8a73.scope: Deactivated successfully.
Oct  2 05:28:38 np0005465604 podman[446072]: 2025-10-02 09:28:37.999478781 +0000 UTC m=+0.038973319 container create 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:28:38 np0005465604 systemd[1]: Started libpod-conmon-9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18.scope.
Oct  2 05:28:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:28:38 np0005465604 podman[446072]: 2025-10-02 09:28:38.073687859 +0000 UTC m=+0.113182427 container init 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:28:38 np0005465604 podman[446072]: 2025-10-02 09:28:37.983196692 +0000 UTC m=+0.022691250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:28:38 np0005465604 podman[446072]: 2025-10-02 09:28:38.080599695 +0000 UTC m=+0.120094233 container start 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:28:38 np0005465604 podman[446072]: 2025-10-02 09:28:38.083264188 +0000 UTC m=+0.122758726 container attach 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 05:28:38 np0005465604 eager_newton[446089]: 167 167
Oct  2 05:28:38 np0005465604 systemd[1]: libpod-9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18.scope: Deactivated successfully.
Oct  2 05:28:38 np0005465604 podman[446094]: 2025-10-02 09:28:38.128309855 +0000 UTC m=+0.025723855 container died 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 05:28:38 np0005465604 systemd[1]: var-lib-containers-storage-overlay-74de3b1e6de0e899225514c1a9ef368a11400223c1614a3e77694cb1fbc07f1c-merged.mount: Deactivated successfully.
Oct  2 05:28:38 np0005465604 podman[446094]: 2025-10-02 09:28:38.160327825 +0000 UTC m=+0.057741805 container remove 9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_newton, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:28:38 np0005465604 systemd[1]: libpod-conmon-9dc4e45d27fb0c66ef927418ed6181fa133b16d338a25f51b4a8cecfba558a18.scope: Deactivated successfully.
Oct  2 05:28:38 np0005465604 podman[446116]: 2025-10-02 09:28:38.337564441 +0000 UTC m=+0.056397153 container create bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:28:38 np0005465604 systemd[1]: Started libpod-conmon-bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd.scope.
Oct  2 05:28:38 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:28:38 np0005465604 podman[446116]: 2025-10-02 09:28:38.309619748 +0000 UTC m=+0.028452510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:28:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5ce4da80d345c0c9ff2993fe758ffad7b959eb480c695e4e06f210b68c5719/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5ce4da80d345c0c9ff2993fe758ffad7b959eb480c695e4e06f210b68c5719/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5ce4da80d345c0c9ff2993fe758ffad7b959eb480c695e4e06f210b68c5719/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:38 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5ce4da80d345c0c9ff2993fe758ffad7b959eb480c695e4e06f210b68c5719/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:28:38 np0005465604 podman[446116]: 2025-10-02 09:28:38.42173126 +0000 UTC m=+0.140563972 container init bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:28:38 np0005465604 podman[446116]: 2025-10-02 09:28:38.433394723 +0000 UTC m=+0.152227405 container start bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:28:38 np0005465604 podman[446116]: 2025-10-02 09:28:38.436052547 +0000 UTC m=+0.154885259 container attach bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:28:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:38 np0005465604 nova_compute[260603]: 2025-10-02 09:28:38.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:28:39 np0005465604 focused_herschel[446132]: {
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "osd_id": 2,
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "type": "bluestore"
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:    },
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "osd_id": 1,
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "type": "bluestore"
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:    },
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "osd_id": 0,
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:        "type": "bluestore"
Oct  2 05:28:39 np0005465604 focused_herschel[446132]:    }
Oct  2 05:28:39 np0005465604 focused_herschel[446132]: }
Oct  2 05:28:39 np0005465604 systemd[1]: libpod-bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd.scope: Deactivated successfully.
Oct  2 05:28:39 np0005465604 systemd[1]: libpod-bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd.scope: Consumed 1.136s CPU time.
Oct  2 05:28:39 np0005465604 conmon[446132]: conmon bd99a7827988a2143aba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd.scope/container/memory.events
Oct  2 05:28:39 np0005465604 podman[446116]: 2025-10-02 09:28:39.567122805 +0000 UTC m=+1.285955527 container died bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 05:28:39 np0005465604 nova_compute[260603]: 2025-10-02 09:28:39.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1e5ce4da80d345c0c9ff2993fe758ffad7b959eb480c695e4e06f210b68c5719-merged.mount: Deactivated successfully.
Oct  2 05:28:39 np0005465604 podman[446116]: 2025-10-02 09:28:39.637676999 +0000 UTC m=+1.356509681 container remove bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:28:39 np0005465604 systemd[1]: libpod-conmon-bd99a7827988a2143abaa33417232a34c726db748aeab71497a30b56f9d38ffd.scope: Deactivated successfully.
Oct  2 05:28:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:28:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:28:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:28:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev de7a1190-ec75-450f-9b42-b02cd1bba31d does not exist
Oct  2 05:28:39 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 44101bc6-ea68-4613-97a0-684a24923ac6 does not exist
Oct  2 05:28:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:28:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:28:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:43 np0005465604 nova_compute[260603]: 2025-10-02 09:28:43.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.519 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.520 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.523 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.523 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.524 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.524 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.553 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.572 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.572 2 WARNING nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.573 2 WARNING nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.573 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Removable base files: /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236 /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.574 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/55fe19af44c773772fc736fd085017e37d622236#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.574 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c0fdc067b2937ea086be0c187b6d99f3c486af28#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.575 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.575 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.575 2 DEBUG nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.576 2 INFO nova.virt.libvirt.imagecache [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Oct  2 05:28:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:44 np0005465604 nova_compute[260603]: 2025-10-02 09:28:44.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:47 np0005465604 podman[446229]: 2025-10-02 09:28:47.036149537 +0000 UTC m=+0.089169996 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct  2 05:28:47 np0005465604 podman[446228]: 2025-10-02 09:28:47.099531047 +0000 UTC m=+0.150581634 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 05:28:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:48 np0005465604 nova_compute[260603]: 2025-10-02 09:28:48.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:49 np0005465604 nova_compute[260603]: 2025-10-02 09:28:49.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:50 np0005465604 nova_compute[260603]: 2025-10-02 09:28:50.577 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:28:50 np0005465604 nova_compute[260603]: 2025-10-02 09:28:50.577 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:28:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:52 np0005465604 podman[446272]: 2025-10-02 09:28:52.049198439 +0000 UTC m=+0.101958066 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:28:52 np0005465604 podman[446273]: 2025-10-02 09:28:52.061813123 +0000 UTC m=+0.113534297 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:28:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:53 np0005465604 nova_compute[260603]: 2025-10-02 09:28:53.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:54 np0005465604 nova_compute[260603]: 2025-10-02 09:28:54.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:28:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:28:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:28:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:28:58 np0005465604 nova_compute[260603]: 2025-10-02 09:28:58.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:28:59 np0005465604 nova_compute[260603]: 2025-10-02 09:28:59.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:03 np0005465604 nova_compute[260603]: 2025-10-02 09:29:03.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:04 np0005465604 nova_compute[260603]: 2025-10-02 09:29:04.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:08 np0005465604 nova_compute[260603]: 2025-10-02 09:29:08.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:09 np0005465604 nova_compute[260603]: 2025-10-02 09:29:09.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:13 np0005465604 nova_compute[260603]: 2025-10-02 09:29:13.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:14 np0005465604 nova_compute[260603]: 2025-10-02 09:29:14.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:15 np0005465604 nova_compute[260603]: 2025-10-02 09:29:15.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:18 np0005465604 podman[446315]: 2025-10-02 09:29:18.020980987 +0000 UTC m=+0.072951860 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 05:29:18 np0005465604 podman[446314]: 2025-10-02 09:29:18.046109001 +0000 UTC m=+0.104124153 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:29:18 np0005465604 nova_compute[260603]: 2025-10-02 09:29:18.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:18 np0005465604 nova_compute[260603]: 2025-10-02 09:29:18.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:19 np0005465604 nova_compute[260603]: 2025-10-02 09:29:19.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:19 np0005465604 nova_compute[260603]: 2025-10-02 09:29:19.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:29:19 np0005465604 nova_compute[260603]: 2025-10-02 09:29:19.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:29:19 np0005465604 nova_compute[260603]: 2025-10-02 09:29:19.538 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:29:19 np0005465604 nova_compute[260603]: 2025-10-02 09:29:19.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:21 np0005465604 nova_compute[260603]: 2025-10-02 09:29:21.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:29:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1236954431' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:29:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:29:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1236954431' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:29:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:23 np0005465604 podman[446356]: 2025-10-02 09:29:23.003239934 +0000 UTC m=+0.069735308 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 05:29:23 np0005465604 podman[446357]: 2025-10-02 09:29:23.033224891 +0000 UTC m=+0.100643625 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct  2 05:29:23 np0005465604 nova_compute[260603]: 2025-10-02 09:29:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:23 np0005465604 nova_compute[260603]: 2025-10-02 09:29:23.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:29:23 np0005465604 nova_compute[260603]: 2025-10-02 09:29:23.547 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:29:23 np0005465604 nova_compute[260603]: 2025-10-02 09:29:23.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:29:23 np0005465604 nova_compute[260603]: 2025-10-02 09:29:23.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:29:23 np0005465604 nova_compute[260603]: 2025-10-02 09:29:23.548 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:29:23 np0005465604 nova_compute[260603]: 2025-10-02 09:29:23.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:29:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467833117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:29:23 np0005465604 nova_compute[260603]: 2025-10-02 09:29:23.985 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.162 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.163 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3585MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.164 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.164 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.337 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.337 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.355 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:29:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:29:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23523161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.777 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.782 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.797 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.798 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:29:24 np0005465604 nova_compute[260603]: 2025-10-02 09:29:24.799 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:29:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:27 np0005465604 nova_compute[260603]: 2025-10-02 09:29:27.794 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:27 np0005465604 nova_compute[260603]: 2025-10-02 09:29:27.795 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:29:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:29:28
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', '.mgr', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta']
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:29:28 np0005465604 nova_compute[260603]: 2025-10-02 09:29:28.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:29:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:29:28 np0005465604 nova_compute[260603]: 2025-10-02 09:29:28.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:29 np0005465604 nova_compute[260603]: 2025-10-02 09:29:29.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Oct  2 05:29:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct  2 05:29:33 np0005465604 nova_compute[260603]: 2025-10-02 09:29:33.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  2 05:29:34 np0005465604 nova_compute[260603]: 2025-10-02 09:29:34.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:29:34.869 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:29:34.869 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:29:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:29:34.869 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:29:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct  2 05:29:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct  2 05:29:36 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct  2 05:29:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 458 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Oct  2 05:29:37 np0005465604 nova_compute[260603]: 2025-10-02 09:29:37.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:37 np0005465604 nova_compute[260603]: 2025-10-02 09:29:37.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:37 np0005465604 nova_compute[260603]: 2025-10-02 09:29:37.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 05:29:37 np0005465604 nova_compute[260603]: 2025-10-02 09:29:37.532 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 05:29:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct  2 05:29:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct  2 05:29:38 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct  2 05:29:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 4.1 MiB/s wr, 97 op/s
Oct  2 05:29:38 np0005465604 nova_compute[260603]: 2025-10-02 09:29:38.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0005359424855114932 of space, bias 1.0, pg target 0.16078274565344797 quantized to 32 (current 32)
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:29:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:29:39 np0005465604 nova_compute[260603]: 2025-10-02 09:29:39.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 33 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.1 MiB/s wr, 46 op/s
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:29:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5be00a4f-30b7-41bc-8440-2c2bcea18e24 does not exist
Oct  2 05:29:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4906b2a9-c516-4fa9-8e57-6cfd41fe01f9 does not exist
Oct  2 05:29:40 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 022ce5a7-38f4-49bb-9833-2de2b4c084c2 does not exist
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:29:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:41 np0005465604 podman[446711]: 2025-10-02 09:29:41.303405801 +0000 UTC m=+0.076498910 container create c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:29:41 np0005465604 podman[446711]: 2025-10-02 09:29:41.252273773 +0000 UTC m=+0.025366902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:29:41 np0005465604 systemd[1]: Started libpod-conmon-c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f.scope.
Oct  2 05:29:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:29:41 np0005465604 podman[446711]: 2025-10-02 09:29:41.428322292 +0000 UTC m=+0.201415451 container init c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 05:29:41 np0005465604 podman[446711]: 2025-10-02 09:29:41.434539657 +0000 UTC m=+0.207632766 container start c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:29:41 np0005465604 cool_shockley[446727]: 167 167
Oct  2 05:29:41 np0005465604 systemd[1]: libpod-c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f.scope: Deactivated successfully.
Oct  2 05:29:41 np0005465604 conmon[446727]: conmon c3d6746e8c050295daae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f.scope/container/memory.events
Oct  2 05:29:41 np0005465604 podman[446711]: 2025-10-02 09:29:41.462288434 +0000 UTC m=+0.235381583 container attach c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:29:41 np0005465604 podman[446711]: 2025-10-02 09:29:41.462967124 +0000 UTC m=+0.236060233 container died c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 05:29:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c0b2eb73da0f9d2ec8e02774a6d292c1e63692470b259c3c7ad07aed5fc606b4-merged.mount: Deactivated successfully.
Oct  2 05:29:41 np0005465604 podman[446711]: 2025-10-02 09:29:41.67348522 +0000 UTC m=+0.446578329 container remove c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shockley, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:29:41 np0005465604 systemd[1]: libpod-conmon-c3d6746e8c050295daaee504f7f204e869a363e8144a35795ae378679ebc9d2f.scope: Deactivated successfully.
Oct  2 05:29:41 np0005465604 podman[446751]: 2025-10-02 09:29:41.854796614 +0000 UTC m=+0.058529690 container create cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:29:41 np0005465604 systemd[1]: Started libpod-conmon-cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd.scope.
Oct  2 05:29:41 np0005465604 podman[446751]: 2025-10-02 09:29:41.818932753 +0000 UTC m=+0.022665859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:29:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:29:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:41 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:41 np0005465604 podman[446751]: 2025-10-02 09:29:41.987675534 +0000 UTC m=+0.191408620 container init cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:29:41 np0005465604 podman[446751]: 2025-10-02 09:29:41.996531541 +0000 UTC m=+0.200264617 container start cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 05:29:42 np0005465604 podman[446751]: 2025-10-02 09:29:42.02437189 +0000 UTC m=+0.228105006 container attach cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct  2 05:29:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct  2 05:29:42 np0005465604 suspicious_payne[446767]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:29:42 np0005465604 suspicious_payne[446767]: --> relative data size: 1.0
Oct  2 05:29:42 np0005465604 suspicious_payne[446767]: --> All data devices are unavailable
Oct  2 05:29:43 np0005465604 systemd[1]: libpod-cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd.scope: Deactivated successfully.
Oct  2 05:29:43 np0005465604 podman[446751]: 2025-10-02 09:29:43.016487928 +0000 UTC m=+1.220221004 container died cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:29:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2fd3dd45b98efe08a72d0e085abb02fc7f4b85ee5e055a67c4ca9d3676431d19-merged.mount: Deactivated successfully.
Oct  2 05:29:43 np0005465604 podman[446751]: 2025-10-02 09:29:43.082519041 +0000 UTC m=+1.286252117 container remove cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 05:29:43 np0005465604 systemd[1]: libpod-conmon-cb3b4404c2072c5920a420e78e9c69c590a5e48b13ad51d894c199d587c34dbd.scope: Deactivated successfully.
Oct  2 05:29:43 np0005465604 podman[446948]: 2025-10-02 09:29:43.633228462 +0000 UTC m=+0.045608246 container create 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:29:43 np0005465604 systemd[1]: Started libpod-conmon-9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9.scope.
Oct  2 05:29:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:29:43 np0005465604 podman[446948]: 2025-10-02 09:29:43.610058568 +0000 UTC m=+0.022438382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:29:43 np0005465604 podman[446948]: 2025-10-02 09:29:43.716492312 +0000 UTC m=+0.128872126 container init 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 05:29:43 np0005465604 podman[446948]: 2025-10-02 09:29:43.72247171 +0000 UTC m=+0.134851494 container start 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  2 05:29:43 np0005465604 happy_lehmann[446964]: 167 167
Oct  2 05:29:43 np0005465604 systemd[1]: libpod-9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9.scope: Deactivated successfully.
Oct  2 05:29:43 np0005465604 podman[446948]: 2025-10-02 09:29:43.726380591 +0000 UTC m=+0.138760375 container attach 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:29:43 np0005465604 conmon[446964]: conmon 9740726a5e6f0c2af15f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9.scope/container/memory.events
Oct  2 05:29:43 np0005465604 podman[446948]: 2025-10-02 09:29:43.727509677 +0000 UTC m=+0.139889461 container died 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:29:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-51dac1c3751c2a648049a8783b15878fb394ae34a01f90c88776214f9d525580-merged.mount: Deactivated successfully.
Oct  2 05:29:43 np0005465604 podman[446948]: 2025-10-02 09:29:43.766606418 +0000 UTC m=+0.178986202 container remove 9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  2 05:29:43 np0005465604 systemd[1]: libpod-conmon-9740726a5e6f0c2af15f69f506f2ea3343e8d2095214da3cb51b16df7d2fc8f9.scope: Deactivated successfully.
Oct  2 05:29:43 np0005465604 nova_compute[260603]: 2025-10-02 09:29:43.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:43 np0005465604 podman[446986]: 2025-10-02 09:29:43.909945325 +0000 UTC m=+0.039149904 container create fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 05:29:43 np0005465604 systemd[1]: Started libpod-conmon-fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6.scope.
Oct  2 05:29:43 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:29:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09dd5ef850258bea38f112048a19260e9649366c72d3ec4893b779cabe44f5c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09dd5ef850258bea38f112048a19260e9649366c72d3ec4893b779cabe44f5c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09dd5ef850258bea38f112048a19260e9649366c72d3ec4893b779cabe44f5c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:43 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09dd5ef850258bea38f112048a19260e9649366c72d3ec4893b779cabe44f5c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:43 np0005465604 podman[446986]: 2025-10-02 09:29:43.893344997 +0000 UTC m=+0.022549596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:29:43 np0005465604 podman[446986]: 2025-10-02 09:29:43.989695836 +0000 UTC m=+0.118900425 container init fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:29:43 np0005465604 podman[446986]: 2025-10-02 09:29:43.994820826 +0000 UTC m=+0.124025405 container start fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 05:29:43 np0005465604 podman[446986]: 2025-10-02 09:29:43.998677697 +0000 UTC m=+0.127882296 container attach fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 05:29:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]: {
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:    "0": [
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:        {
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "devices": [
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "/dev/loop3"
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            ],
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_name": "ceph_lv0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_size": "21470642176",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "name": "ceph_lv0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "tags": {
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.cluster_name": "ceph",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.crush_device_class": "",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.encrypted": "0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.osd_id": "0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.type": "block",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.vdo": "0"
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            },
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "type": "block",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "vg_name": "ceph_vg0"
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:        }
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:    ],
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:    "1": [
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:        {
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "devices": [
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "/dev/loop4"
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            ],
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_name": "ceph_lv1",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_size": "21470642176",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "name": "ceph_lv1",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "tags": {
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.cluster_name": "ceph",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.crush_device_class": "",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.encrypted": "0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.osd_id": "1",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.type": "block",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.vdo": "0"
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            },
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "type": "block",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "vg_name": "ceph_vg1"
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:        }
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:    ],
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:    "2": [
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:        {
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "devices": [
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "/dev/loop5"
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            ],
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_name": "ceph_lv2",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_size": "21470642176",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "name": "ceph_lv2",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "tags": {
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.cluster_name": "ceph",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.crush_device_class": "",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.encrypted": "0",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.osd_id": "2",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.type": "block",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:                "ceph.vdo": "0"
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            },
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "type": "block",
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:            "vg_name": "ceph_vg2"
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:        }
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]:    ]
Oct  2 05:29:44 np0005465604 crazy_hoover[447002]: }
Oct  2 05:29:44 np0005465604 systemd[1]: libpod-fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6.scope: Deactivated successfully.
Oct  2 05:29:44 np0005465604 podman[446986]: 2025-10-02 09:29:44.735200482 +0000 UTC m=+0.864405061 container died fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:29:44 np0005465604 nova_compute[260603]: 2025-10-02 09:29:44.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-09dd5ef850258bea38f112048a19260e9649366c72d3ec4893b779cabe44f5c0-merged.mount: Deactivated successfully.
Oct  2 05:29:44 np0005465604 podman[446986]: 2025-10-02 09:29:44.806060504 +0000 UTC m=+0.935265083 container remove fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 05:29:44 np0005465604 systemd[1]: libpod-conmon-fb3ca277639ff801f6bf0e5ab1d91ca95cca3f894b86310cb156dc88279715c6.scope: Deactivated successfully.
Oct  2 05:29:45 np0005465604 podman[447166]: 2025-10-02 09:29:45.424643486 +0000 UTC m=+0.066630002 container create 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:29:45 np0005465604 systemd[1]: Started libpod-conmon-28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a.scope.
Oct  2 05:29:45 np0005465604 podman[447166]: 2025-10-02 09:29:45.377381319 +0000 UTC m=+0.019367875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:29:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:29:45 np0005465604 podman[447166]: 2025-10-02 09:29:45.520600423 +0000 UTC m=+0.162586969 container init 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 05:29:45 np0005465604 podman[447166]: 2025-10-02 09:29:45.527343443 +0000 UTC m=+0.169329959 container start 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 05:29:45 np0005465604 great_merkle[447182]: 167 167
Oct  2 05:29:45 np0005465604 systemd[1]: libpod-28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a.scope: Deactivated successfully.
Oct  2 05:29:45 np0005465604 podman[447166]: 2025-10-02 09:29:45.534139016 +0000 UTC m=+0.176125532 container attach 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 05:29:45 np0005465604 podman[447166]: 2025-10-02 09:29:45.534426515 +0000 UTC m=+0.176413031 container died 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 05:29:45 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ef984ec635f5393b438c7c8d2648e644be953990fedebcfafe3ec1feaa8e5f02-merged.mount: Deactivated successfully.
Oct  2 05:29:45 np0005465604 podman[447166]: 2025-10-02 09:29:45.614355321 +0000 UTC m=+0.256341837 container remove 28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:29:45 np0005465604 systemd[1]: libpod-conmon-28a7e0f04288e6b3cecaaa6a67a9310c9b9f7e0148acf56de26630486fc6793a.scope: Deactivated successfully.
Oct  2 05:29:45 np0005465604 podman[447207]: 2025-10-02 09:29:45.772129789 +0000 UTC m=+0.044187531 container create 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:29:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:45 np0005465604 systemd[1]: Started libpod-conmon-0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0.scope.
Oct  2 05:29:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:29:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1cf4230c8e24e94a2fdb0205530bdaff41c362f75b857d65b3766c05060012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1cf4230c8e24e94a2fdb0205530bdaff41c362f75b857d65b3766c05060012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:45 np0005465604 podman[447207]: 2025-10-02 09:29:45.75165057 +0000 UTC m=+0.023708322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:29:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1cf4230c8e24e94a2fdb0205530bdaff41c362f75b857d65b3766c05060012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e1cf4230c8e24e94a2fdb0205530bdaff41c362f75b857d65b3766c05060012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:29:45 np0005465604 podman[447207]: 2025-10-02 09:29:45.869832751 +0000 UTC m=+0.141890493 container init 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 05:29:45 np0005465604 podman[447207]: 2025-10-02 09:29:45.876030384 +0000 UTC m=+0.148088106 container start 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:29:45 np0005465604 podman[447207]: 2025-10-02 09:29:45.882780296 +0000 UTC m=+0.154838038 container attach 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 05:29:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Oct  2 05:29:46 np0005465604 zen_feistel[447224]: {
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "osd_id": 2,
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "type": "bluestore"
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:    },
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "osd_id": 1,
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "type": "bluestore"
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:    },
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "osd_id": 0,
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:        "type": "bluestore"
Oct  2 05:29:46 np0005465604 zen_feistel[447224]:    }
Oct  2 05:29:46 np0005465604 zen_feistel[447224]: }
Oct  2 05:29:46 np0005465604 systemd[1]: libpod-0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0.scope: Deactivated successfully.
Oct  2 05:29:46 np0005465604 podman[447207]: 2025-10-02 09:29:46.797109654 +0000 UTC m=+1.069167396 container died 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:29:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8e1cf4230c8e24e94a2fdb0205530bdaff41c362f75b857d65b3766c05060012-merged.mount: Deactivated successfully.
Oct  2 05:29:46 np0005465604 podman[447207]: 2025-10-02 09:29:46.851712759 +0000 UTC m=+1.123770481 container remove 0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 05:29:46 np0005465604 systemd[1]: libpod-conmon-0e5d505c3927a1b708fb01aca7ce0e6da75d125451483f8cb13d387c37a3b3a0.scope: Deactivated successfully.
Oct  2 05:29:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:29:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:29:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:29:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:29:46 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ecf2e04a-88b1-4116-b10f-b3c22df2ebc6 does not exist
Oct  2 05:29:46 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8ba57206-0e89-48d9-9c3e-c4f78d2e6ae2 does not exist
Oct  2 05:29:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:29:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:29:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 817 KiB/s wr, 12 op/s
Oct  2 05:29:48 np0005465604 nova_compute[260603]: 2025-10-02 09:29:48.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:48 np0005465604 podman[447319]: 2025-10-02 09:29:48.9955388 +0000 UTC m=+0.058694034 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 05:29:49 np0005465604 podman[447318]: 2025-10-02 09:29:49.027612563 +0000 UTC m=+0.090494589 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 05:29:49 np0005465604 nova_compute[260603]: 2025-10-02 09:29:49.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:50 np0005465604 nova_compute[260603]: 2025-10-02 09:29:50.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:50 np0005465604 nova_compute[260603]: 2025-10-02 09:29:50.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:29:50 np0005465604 nova_compute[260603]: 2025-10-02 09:29:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 684 KiB/s wr, 10 op/s
Oct  2 05:29:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 684 KiB/s wr, 10 op/s
Oct  2 05:29:53 np0005465604 nova_compute[260603]: 2025-10-02 09:29:53.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:53 np0005465604 podman[447358]: 2025-10-02 09:29:53.991984183 +0000 UTC m=+0.052836302 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 05:29:54 np0005465604 podman[447357]: 2025-10-02 09:29:54.001947154 +0000 UTC m=+0.065582510 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  2 05:29:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Oct  2 05:29:54 np0005465604 nova_compute[260603]: 2025-10-02 09:29:54.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:29:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:29:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:29:58 np0005465604 nova_compute[260603]: 2025-10-02 09:29:58.532 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:29:58 np0005465604 nova_compute[260603]: 2025-10-02 09:29:58.533 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 05:29:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:29:58 np0005465604 nova_compute[260603]: 2025-10-02 09:29:58.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:29:59 np0005465604 nova_compute[260603]: 2025-10-02 09:29:59.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:30:01.478 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:30:01 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:30:01.479 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:30:01 np0005465604 nova_compute[260603]: 2025-10-02 09:30:01.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:03 np0005465604 nova_compute[260603]: 2025-10-02 09:30:03.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:04 np0005465604 nova_compute[260603]: 2025-10-02 09:30:04.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.127861) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405127908, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1228, "num_deletes": 251, "total_data_size": 1830030, "memory_usage": 1863560, "flush_reason": "Manual Compaction"}
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405169236, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1801236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69077, "largest_seqno": 70304, "table_properties": {"data_size": 1795363, "index_size": 3203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12438, "raw_average_key_size": 19, "raw_value_size": 1783530, "raw_average_value_size": 2844, "num_data_blocks": 143, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397286, "oldest_key_time": 1759397286, "file_creation_time": 1759397405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 41426 microseconds, and 4746 cpu microseconds.
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.169288) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1801236 bytes OK
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.169311) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.231296) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.231346) EVENT_LOG_v1 {"time_micros": 1759397405231335, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.231372) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1824471, prev total WAL file size 1824471, number of live WAL files 2.
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.232313) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1759KB)], [164(9219KB)]
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405232378, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11241931, "oldest_snapshot_seqno": -1}
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8698 keys, 9471514 bytes, temperature: kUnknown
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405397978, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9471514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9417898, "index_size": 30783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227920, "raw_average_key_size": 26, "raw_value_size": 9267273, "raw_average_value_size": 1065, "num_data_blocks": 1185, "num_entries": 8698, "num_filter_entries": 8698, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.398189) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9471514 bytes
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.434277) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 67.9 rd, 57.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.0 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(11.5) write-amplify(5.3) OK, records in: 9216, records dropped: 518 output_compression: NoCompression
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.434321) EVENT_LOG_v1 {"time_micros": 1759397405434302, "job": 102, "event": "compaction_finished", "compaction_time_micros": 165486, "compaction_time_cpu_micros": 49692, "output_level": 6, "num_output_files": 1, "total_output_size": 9471514, "num_input_records": 9216, "num_output_records": 8698, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405435126, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397405438286, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.232223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.438415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.438424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.438427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.438429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:05.438432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:08 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:30:08.482 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:30:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:08 np0005465604 nova_compute[260603]: 2025-10-02 09:30:08.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:09 np0005465604 nova_compute[260603]: 2025-10-02 09:30:09.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 511 B/s wr, 7 op/s
Oct  2 05:30:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct  2 05:30:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct  2 05:30:12 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct  2 05:30:13 np0005465604 nova_compute[260603]: 2025-10-02 09:30:13.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct  2 05:30:14 np0005465604 nova_compute[260603]: 2025-10-02 09:30:14.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct  2 05:30:17 np0005465604 nova_compute[260603]: 2025-10-02 09:30:17.539 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:30:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  2 05:30:18 np0005465604 nova_compute[260603]: 2025-10-02 09:30:18.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:19 np0005465604 nova_compute[260603]: 2025-10-02 09:30:19.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:19 np0005465604 podman[447398]: 2025-10-02 09:30:19.999464054 +0000 UTC m=+0.050528330 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:30:20 np0005465604 podman[447397]: 2025-10-02 09:30:20.029553454 +0000 UTC m=+0.087068531 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 05:30:20 np0005465604 nova_compute[260603]: 2025-10-02 09:30:20.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:30:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  2 05:30:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct  2 05:30:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct  2 05:30:20 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct  2 05:30:21 np0005465604 nova_compute[260603]: 2025-10-02 09:30:21.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:30:21 np0005465604 nova_compute[260603]: 2025-10-02 09:30:21.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:30:21 np0005465604 nova_compute[260603]: 2025-10-02 09:30:21.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:30:21 np0005465604 nova_compute[260603]: 2025-10-02 09:30:21.531 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:30:21 np0005465604 nova_compute[260603]: 2025-10-02 09:30:21.532 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:30:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:30:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2433323668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:30:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:30:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2433323668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:30:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 846 B/s wr, 17 op/s
Oct  2 05:30:23 np0005465604 nova_compute[260603]: 2025-10-02 09:30:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:30:23 np0005465604 nova_compute[260603]: 2025-10-02 09:30:23.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:30:23 np0005465604 nova_compute[260603]: 2025-10-02 09:30:23.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:30:23 np0005465604 nova_compute[260603]: 2025-10-02 09:30:23.558 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:30:23 np0005465604 nova_compute[260603]: 2025-10-02 09:30:23.558 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:30:23 np0005465604 nova_compute[260603]: 2025-10-02 09:30:23.558 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:30:23 np0005465604 nova_compute[260603]: 2025-10-02 09:30:23.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:30:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738599554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.041 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.198 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.199 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3565MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.199 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.200 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.283 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.284 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.302 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.417 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.417 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.533 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.562 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.580 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:30:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct  2 05:30:24 np0005465604 nova_compute[260603]: 2025-10-02 09:30:24.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:25 np0005465604 podman[447485]: 2025-10-02 09:30:25.008805799 +0000 UTC m=+0.079998891 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 05:30:25 np0005465604 podman[447486]: 2025-10-02 09:30:25.016663814 +0000 UTC m=+0.081346012 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 05:30:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:30:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2583034819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:30:25 np0005465604 nova_compute[260603]: 2025-10-02 09:30:25.039 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:30:25 np0005465604 nova_compute[260603]: 2025-10-02 09:30:25.043 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:30:25 np0005465604 nova_compute[260603]: 2025-10-02 09:30:25.065 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:30:25 np0005465604 nova_compute[260603]: 2025-10-02 09:30:25.066 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:30:25 np0005465604 nova_compute[260603]: 2025-10-02 09:30:25.067 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:30:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct  2 05:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:30:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:30:28
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', 'vms', 'volumes', 'images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:30:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:30:28 np0005465604 nova_compute[260603]: 2025-10-02 09:30:28.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:29 np0005465604 nova_compute[260603]: 2025-10-02 09:30:29.062 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:30:29 np0005465604 nova_compute[260603]: 2025-10-02 09:30:29.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:30 np0005465604 nova_compute[260603]: 2025-10-02 09:30:30.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:30:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:33 np0005465604 nova_compute[260603]: 2025-10-02 09:30:33.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:34 np0005465604 nova_compute[260603]: 2025-10-02 09:30:34.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:30:34.870 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:30:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:30:34.870 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:30:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:30:34.870 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:30:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:38 np0005465604 nova_compute[260603]: 2025-10-02 09:30:38.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:30:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:30:39 np0005465604 nova_compute[260603]: 2025-10-02 09:30:39.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:30:39 np0005465604 nova_compute[260603]: 2025-10-02 09:30:39.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.801241) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440801291, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 542, "num_deletes": 255, "total_data_size": 548791, "memory_usage": 559720, "flush_reason": "Manual Compaction"}
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440809079, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 543898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70305, "largest_seqno": 70846, "table_properties": {"data_size": 540893, "index_size": 976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6933, "raw_average_key_size": 18, "raw_value_size": 534844, "raw_average_value_size": 1437, "num_data_blocks": 44, "num_entries": 372, "num_filter_entries": 372, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397406, "oldest_key_time": 1759397406, "file_creation_time": 1759397440, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 7888 microseconds, and 4342 cpu microseconds.
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.809128) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 543898 bytes OK
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.809151) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.811032) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.811053) EVENT_LOG_v1 {"time_micros": 1759397440811046, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.811075) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 545702, prev total WAL file size 545702, number of live WAL files 2.
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.811673) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303130' seq:72057594037927935, type:22 .. '6C6F676D0033323631' seq:0, type:0; will stop at (end)
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(531KB)], [167(9249KB)]
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440811715, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10015412, "oldest_snapshot_seqno": -1}
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8549 keys, 9916520 bytes, temperature: kUnknown
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440896051, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 9916520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9862803, "index_size": 31268, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 225793, "raw_average_key_size": 26, "raw_value_size": 9713605, "raw_average_value_size": 1136, "num_data_blocks": 1205, "num_entries": 8549, "num_filter_entries": 8549, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397440, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.896334) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 9916520 bytes
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.898026) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.6 rd, 117.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.0 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(36.6) write-amplify(18.2) OK, records in: 9070, records dropped: 521 output_compression: NoCompression
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.898055) EVENT_LOG_v1 {"time_micros": 1759397440898043, "job": 104, "event": "compaction_finished", "compaction_time_micros": 84437, "compaction_time_cpu_micros": 52088, "output_level": 6, "num_output_files": 1, "total_output_size": 9916520, "num_input_records": 9070, "num_output_records": 8549, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440898275, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397440900454, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.811580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.900589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.900594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.900596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.900599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:40 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:30:40.900601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:30:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:43 np0005465604 nova_compute[260603]: 2025-10-02 09:30:43.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:44 np0005465604 nova_compute[260603]: 2025-10-02 09:30:44.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:30:47 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 47d8df75-182c-43d1-afd1-d89fb78c5612 does not exist
Oct  2 05:30:47 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 167e5c71-fd02-47fb-a933-e7d36d981d1a does not exist
Oct  2 05:30:47 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 705c6b4b-c575-4c28-b9c6-564b7b332da9 does not exist
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:30:47 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:30:48 np0005465604 podman[447800]: 2025-10-02 09:30:48.32888923 +0000 UTC m=+0.051041845 container create 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 05:30:48 np0005465604 systemd[1]: Started libpod-conmon-6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b.scope.
Oct  2 05:30:48 np0005465604 podman[447800]: 2025-10-02 09:30:48.30198585 +0000 UTC m=+0.024138485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:30:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:30:48 np0005465604 podman[447800]: 2025-10-02 09:30:48.439063861 +0000 UTC m=+0.161216496 container init 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:30:48 np0005465604 podman[447800]: 2025-10-02 09:30:48.447705291 +0000 UTC m=+0.169857906 container start 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:30:48 np0005465604 podman[447800]: 2025-10-02 09:30:48.454163373 +0000 UTC m=+0.176315988 container attach 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 05:30:48 np0005465604 interesting_borg[447816]: 167 167
Oct  2 05:30:48 np0005465604 systemd[1]: libpod-6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b.scope: Deactivated successfully.
Oct  2 05:30:48 np0005465604 podman[447800]: 2025-10-02 09:30:48.455999581 +0000 UTC m=+0.178152196 container died 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:30:48 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f7bbda7d2d4d4141faa19ac409dacf2e969e5bd0d058ed6451a5d13420024a93-merged.mount: Deactivated successfully.
Oct  2 05:30:48 np0005465604 podman[447800]: 2025-10-02 09:30:48.547271991 +0000 UTC m=+0.269424606 container remove 6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:30:48 np0005465604 systemd[1]: libpod-conmon-6af360b4723f723d5d343b868c98a9b45c392e3fd6e97367ee0a4e6e7fa9817b.scope: Deactivated successfully.
Oct  2 05:30:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:48 np0005465604 podman[447840]: 2025-10-02 09:30:48.728335186 +0000 UTC m=+0.056940379 container create e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:30:48 np0005465604 podman[447840]: 2025-10-02 09:30:48.693827759 +0000 UTC m=+0.022432982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:30:48 np0005465604 systemd[1]: Started libpod-conmon-e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84.scope.
Oct  2 05:30:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:30:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:48 np0005465604 podman[447840]: 2025-10-02 09:30:48.835982889 +0000 UTC m=+0.164588112 container init e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 05:30:48 np0005465604 podman[447840]: 2025-10-02 09:30:48.843201905 +0000 UTC m=+0.171807108 container start e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 05:30:48 np0005465604 podman[447840]: 2025-10-02 09:30:48.850380599 +0000 UTC m=+0.178985782 container attach e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:30:48 np0005465604 nova_compute[260603]: 2025-10-02 09:30:48.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:49 np0005465604 sleepy_pike[447856]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:30:49 np0005465604 sleepy_pike[447856]: --> relative data size: 1.0
Oct  2 05:30:49 np0005465604 sleepy_pike[447856]: --> All data devices are unavailable
Oct  2 05:30:49 np0005465604 nova_compute[260603]: 2025-10-02 09:30:49.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:49 np0005465604 systemd[1]: libpod-e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84.scope: Deactivated successfully.
Oct  2 05:30:49 np0005465604 podman[447840]: 2025-10-02 09:30:49.887667948 +0000 UTC m=+1.216273151 container died e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:30:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6871bfa19ccf24c6a52cbf8f51a94a86f907c6dd272e3cbe0e12558d82ea4733-merged.mount: Deactivated successfully.
Oct  2 05:30:50 np0005465604 podman[447840]: 2025-10-02 09:30:50.034222195 +0000 UTC m=+1.362827428 container remove e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 05:30:50 np0005465604 systemd[1]: libpod-conmon-e731c3a10ee742d6b957af92d974066c64bc88bf54a6b38cf1e3df426aa44b84.scope: Deactivated successfully.
Oct  2 05:30:50 np0005465604 podman[447900]: 2025-10-02 09:30:50.138596016 +0000 UTC m=+0.056063872 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:30:50 np0005465604 podman[447899]: 2025-10-02 09:30:50.179801703 +0000 UTC m=+0.104066462 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:30:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:50 np0005465604 podman[448085]: 2025-10-02 09:30:50.675078623 +0000 UTC m=+0.024679582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:30:50 np0005465604 podman[448085]: 2025-10-02 09:30:50.772512226 +0000 UTC m=+0.122113175 container create eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:30:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:50 np0005465604 systemd[1]: Started libpod-conmon-eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7.scope.
Oct  2 05:30:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:30:50 np0005465604 podman[448085]: 2025-10-02 09:30:50.956982187 +0000 UTC m=+0.306583166 container init eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:30:50 np0005465604 podman[448085]: 2025-10-02 09:30:50.966313119 +0000 UTC m=+0.315914068 container start eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 05:30:50 np0005465604 dazzling_leavitt[448101]: 167 167
Oct  2 05:30:50 np0005465604 systemd[1]: libpod-eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7.scope: Deactivated successfully.
Oct  2 05:30:50 np0005465604 conmon[448101]: conmon eea79f9b02a764eac238 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7.scope/container/memory.events
Oct  2 05:30:50 np0005465604 podman[448085]: 2025-10-02 09:30:50.991591788 +0000 UTC m=+0.341192767 container attach eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 05:30:50 np0005465604 podman[448085]: 2025-10-02 09:30:50.993087985 +0000 UTC m=+0.342688944 container died eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:30:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1dfb8fbf1a2ea90ec817ed240354d0d00348ef98b127a6719ce468c017ea8858-merged.mount: Deactivated successfully.
Oct  2 05:30:51 np0005465604 podman[448085]: 2025-10-02 09:30:51.188656964 +0000 UTC m=+0.538257953 container remove eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_leavitt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:30:51 np0005465604 systemd[1]: libpod-conmon-eea79f9b02a764eac238dabf86d3b7575661b7ad87e78dd6b2fabbeeb73213d7.scope: Deactivated successfully.
Oct  2 05:30:51 np0005465604 podman[448124]: 2025-10-02 09:30:51.435314898 +0000 UTC m=+0.090856499 container create c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:30:51 np0005465604 podman[448124]: 2025-10-02 09:30:51.379524115 +0000 UTC m=+0.035065766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:30:51 np0005465604 systemd[1]: Started libpod-conmon-c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b.scope.
Oct  2 05:30:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:30:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8538371e7b504f9d5deaa52ea0164c076718bbc2740085bfe7eb76cb421f09c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8538371e7b504f9d5deaa52ea0164c076718bbc2740085bfe7eb76cb421f09c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8538371e7b504f9d5deaa52ea0164c076718bbc2740085bfe7eb76cb421f09c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:51 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8538371e7b504f9d5deaa52ea0164c076718bbc2740085bfe7eb76cb421f09c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:51 np0005465604 podman[448124]: 2025-10-02 09:30:51.592944292 +0000 UTC m=+0.248485923 container init c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 05:30:51 np0005465604 podman[448124]: 2025-10-02 09:30:51.608179538 +0000 UTC m=+0.263721129 container start c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 05:30:51 np0005465604 podman[448124]: 2025-10-02 09:30:51.630584947 +0000 UTC m=+0.286126578 container attach c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]: {
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:    "0": [
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:        {
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "devices": [
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "/dev/loop3"
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            ],
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_name": "ceph_lv0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_size": "21470642176",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "name": "ceph_lv0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "tags": {
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.cluster_name": "ceph",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.crush_device_class": "",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.encrypted": "0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.osd_id": "0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.type": "block",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.vdo": "0"
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            },
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "type": "block",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "vg_name": "ceph_vg0"
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:        }
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:    ],
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:    "1": [
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:        {
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "devices": [
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "/dev/loop4"
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            ],
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_name": "ceph_lv1",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_size": "21470642176",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "name": "ceph_lv1",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "tags": {
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.cluster_name": "ceph",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.crush_device_class": "",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.encrypted": "0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.osd_id": "1",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.type": "block",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.vdo": "0"
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            },
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "type": "block",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "vg_name": "ceph_vg1"
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:        }
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:    ],
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:    "2": [
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:        {
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "devices": [
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "/dev/loop5"
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            ],
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_name": "ceph_lv2",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_size": "21470642176",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "name": "ceph_lv2",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "tags": {
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.cluster_name": "ceph",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.crush_device_class": "",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.encrypted": "0",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.osd_id": "2",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.type": "block",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:                "ceph.vdo": "0"
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            },
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "type": "block",
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:            "vg_name": "ceph_vg2"
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:        }
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]:    ]
Oct  2 05:30:52 np0005465604 sharp_bhaskara[448140]: }
Oct  2 05:30:52 np0005465604 systemd[1]: libpod-c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b.scope: Deactivated successfully.
Oct  2 05:30:52 np0005465604 podman[448149]: 2025-10-02 09:30:52.452902012 +0000 UTC m=+0.028684827 container died c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:30:52 np0005465604 nova_compute[260603]: 2025-10-02 09:30:52.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:30:52 np0005465604 nova_compute[260603]: 2025-10-02 09:30:52.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:30:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:52 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f8538371e7b504f9d5deaa52ea0164c076718bbc2740085bfe7eb76cb421f09c-merged.mount: Deactivated successfully.
Oct  2 05:30:53 np0005465604 podman[448149]: 2025-10-02 09:30:53.216460302 +0000 UTC m=+0.792243107 container remove c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bhaskara, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:30:53 np0005465604 systemd[1]: libpod-conmon-c30590f9eaffaf7497d42f59ef91efe564d86504218185b4d6f303bf15608b6b.scope: Deactivated successfully.
Oct  2 05:30:53 np0005465604 nova_compute[260603]: 2025-10-02 09:30:53.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:54 np0005465604 podman[448306]: 2025-10-02 09:30:54.010249915 +0000 UTC m=+0.101446759 container create 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:30:54 np0005465604 podman[448306]: 2025-10-02 09:30:53.946346579 +0000 UTC m=+0.037543463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:30:54 np0005465604 systemd[1]: Started libpod-conmon-85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f.scope.
Oct  2 05:30:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:30:54 np0005465604 podman[448306]: 2025-10-02 09:30:54.254597187 +0000 UTC m=+0.345794091 container init 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 05:30:54 np0005465604 podman[448306]: 2025-10-02 09:30:54.263683681 +0000 UTC m=+0.354880525 container start 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:30:54 np0005465604 youthful_galileo[448322]: 167 167
Oct  2 05:30:54 np0005465604 systemd[1]: libpod-85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f.scope: Deactivated successfully.
Oct  2 05:30:54 np0005465604 podman[448306]: 2025-10-02 09:30:54.331444657 +0000 UTC m=+0.422641471 container attach 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:30:54 np0005465604 podman[448306]: 2025-10-02 09:30:54.332454239 +0000 UTC m=+0.423651053 container died 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 05:30:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8590b6b36f19ad5e399c47b90ca06255701049fe2f4c3cecce3e256ec96821a5-merged.mount: Deactivated successfully.
Oct  2 05:30:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:54 np0005465604 nova_compute[260603]: 2025-10-02 09:30:54.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:54 np0005465604 podman[448306]: 2025-10-02 09:30:54.879482236 +0000 UTC m=+0.970679040 container remove 85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:30:54 np0005465604 systemd[1]: libpod-conmon-85f097565aecd7378ba2211ad094d3f3e77170811a8de3d852996158eb72fb9f.scope: Deactivated successfully.
Oct  2 05:30:55 np0005465604 podman[448347]: 2025-10-02 09:30:55.093976725 +0000 UTC m=+0.076747028 container create 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 05:30:55 np0005465604 podman[448347]: 2025-10-02 09:30:55.043567851 +0000 UTC m=+0.026338244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:30:55 np0005465604 systemd[1]: Started libpod-conmon-7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14.scope.
Oct  2 05:30:55 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:30:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfae5424f15ce851447eb4b1819bf46d158f770bbbedf6534b81c66026cf865a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfae5424f15ce851447eb4b1819bf46d158f770bbbedf6534b81c66026cf865a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfae5424f15ce851447eb4b1819bf46d158f770bbbedf6534b81c66026cf865a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfae5424f15ce851447eb4b1819bf46d158f770bbbedf6534b81c66026cf865a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:30:55 np0005465604 podman[448347]: 2025-10-02 09:30:55.291806505 +0000 UTC m=+0.274576808 container init 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:30:55 np0005465604 podman[448347]: 2025-10-02 09:30:55.300422054 +0000 UTC m=+0.283192357 container start 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 05:30:55 np0005465604 podman[448347]: 2025-10-02 09:30:55.31312121 +0000 UTC m=+0.295891533 container attach 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct  2 05:30:55 np0005465604 podman[448362]: 2025-10-02 09:30:55.328996776 +0000 UTC m=+0.193302018 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 05:30:55 np0005465604 podman[448363]: 2025-10-02 09:30:55.329037947 +0000 UTC m=+0.187601511 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:30:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]: {
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "osd_id": 2,
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "type": "bluestore"
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:    },
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "osd_id": 1,
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "type": "bluestore"
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:    },
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "osd_id": 0,
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:        "type": "bluestore"
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]:    }
Oct  2 05:30:56 np0005465604 nostalgic_brattain[448384]: }
Oct  2 05:30:56 np0005465604 systemd[1]: libpod-7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14.scope: Deactivated successfully.
Oct  2 05:30:56 np0005465604 podman[448347]: 2025-10-02 09:30:56.312165905 +0000 UTC m=+1.294936208 container died 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 05:30:56 np0005465604 systemd[1]: libpod-7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14.scope: Consumed 1.017s CPU time.
Oct  2 05:30:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-cfae5424f15ce851447eb4b1819bf46d158f770bbbedf6534b81c66026cf865a-merged.mount: Deactivated successfully.
Oct  2 05:30:56 np0005465604 podman[448347]: 2025-10-02 09:30:56.485153718 +0000 UTC m=+1.467924021 container remove 7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 05:30:56 np0005465604 systemd[1]: libpod-conmon-7157b2d1db15ee0e152c640428d9316df02175afd9a8da2b70acabaff31d1d14.scope: Deactivated successfully.
Oct  2 05:30:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:30:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:30:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:30:56 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:30:56 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0345baff-f42a-4282-80d6-2f3e05016b64 does not exist
Oct  2 05:30:56 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4219c4e5-4fcf-4adb-9203-f42b05013cbd does not exist
Oct  2 05:30:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:30:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:30:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:30:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:30:58 np0005465604 nova_compute[260603]: 2025-10-02 09:30:58.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:30:59 np0005465604 nova_compute[260603]: 2025-10-02 09:30:59.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:03 np0005465604 nova_compute[260603]: 2025-10-02 09:31:03.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:04 np0005465604 nova_compute[260603]: 2025-10-02 09:31:04.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:06 np0005465604 nova_compute[260603]: 2025-10-02 09:31:06.940 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:31:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:08 np0005465604 nova_compute[260603]: 2025-10-02 09:31:08.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:09 np0005465604 nova_compute[260603]: 2025-10-02 09:31:09.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:13 np0005465604 nova_compute[260603]: 2025-10-02 09:31:13.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:14 np0005465604 nova_compute[260603]: 2025-10-02 09:31:14.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:18 np0005465604 nova_compute[260603]: 2025-10-02 09:31:18.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:19 np0005465604 nova_compute[260603]: 2025-10-02 09:31:19.539 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:31:20 np0005465604 nova_compute[260603]: 2025-10-02 09:31:20.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:21 np0005465604 podman[448503]: 2025-10-02 09:31:21.045386156 +0000 UTC m=+0.080956770 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 05:31:21 np0005465604 podman[448502]: 2025-10-02 09:31:21.12205078 +0000 UTC m=+0.164774377 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 05:31:21 np0005465604 nova_compute[260603]: 2025-10-02 09:31:21.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:31:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:31:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/575152297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:31:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:31:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/575152297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:31:22 np0005465604 nova_compute[260603]: 2025-10-02 09:31:22.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:31:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:23 np0005465604 nova_compute[260603]: 2025-10-02 09:31:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:31:23 np0005465604 nova_compute[260603]: 2025-10-02 09:31:23.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:31:23 np0005465604 nova_compute[260603]: 2025-10-02 09:31:23.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:31:23 np0005465604 nova_compute[260603]: 2025-10-02 09:31:23.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:31:23 np0005465604 nova_compute[260603]: 2025-10-02 09:31:23.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:24 np0005465604 nova_compute[260603]: 2025-10-02 09:31:24.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:31:24 np0005465604 nova_compute[260603]: 2025-10-02 09:31:24.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:31:24 np0005465604 nova_compute[260603]: 2025-10-02 09:31:24.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:31:24 np0005465604 nova_compute[260603]: 2025-10-02 09:31:24.548 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:31:24 np0005465604 nova_compute[260603]: 2025-10-02 09:31:24.548 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:31:24 np0005465604 nova_compute[260603]: 2025-10-02 09:31:24.548 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:31:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:31:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031284939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:31:24 np0005465604 nova_compute[260603]: 2025-10-02 09:31:24.993 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:31:25 np0005465604 nova_compute[260603]: 2025-10-02 09:31:25.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:25 np0005465604 nova_compute[260603]: 2025-10-02 09:31:25.151 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:31:25 np0005465604 nova_compute[260603]: 2025-10-02 09:31:25.152 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3572MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:31:25 np0005465604 nova_compute[260603]: 2025-10-02 09:31:25.152 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:31:25 np0005465604 nova_compute[260603]: 2025-10-02 09:31:25.153 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:31:25 np0005465604 nova_compute[260603]: 2025-10-02 09:31:25.578 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:31:25 np0005465604 nova_compute[260603]: 2025-10-02 09:31:25.578 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:31:25 np0005465604 nova_compute[260603]: 2025-10-02 09:31:25.599 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:31:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:26 np0005465604 podman[448590]: 2025-10-02 09:31:26.010618413 +0000 UTC m=+0.067547060 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 05:31:26 np0005465604 podman[448589]: 2025-10-02 09:31:26.029360518 +0000 UTC m=+0.085033626 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:31:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:31:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888347941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:31:26 np0005465604 nova_compute[260603]: 2025-10-02 09:31:26.099 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:31:26 np0005465604 nova_compute[260603]: 2025-10-02 09:31:26.105 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:31:26 np0005465604 nova_compute[260603]: 2025-10-02 09:31:26.130 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:31:26 np0005465604 nova_compute[260603]: 2025-10-02 09:31:26.131 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:31:26 np0005465604 nova_compute[260603]: 2025-10-02 09:31:26.132 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:31:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:31:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:31:28
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.meta']
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:31:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:31:28 np0005465604 nova_compute[260603]: 2025-10-02 09:31:28.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:30 np0005465604 nova_compute[260603]: 2025-10-02 09:31:30.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:31 np0005465604 nova_compute[260603]: 2025-10-02 09:31:31.128 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:31:31 np0005465604 nova_compute[260603]: 2025-10-02 09:31:31.128 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:31:32 np0005465604 nova_compute[260603]: 2025-10-02 09:31:32.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:31:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:33 np0005465604 nova_compute[260603]: 2025-10-02 09:31:33.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:31:34.870 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:31:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:31:34.871 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:31:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:31:34.871 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:31:35 np0005465604 nova_compute[260603]: 2025-10-02 09:31:35.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:38 np0005465604 nova_compute[260603]: 2025-10-02 09:31:38.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:31:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:31:39 np0005465604 nova_compute[260603]: 2025-10-02 09:31:39.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:31:40 np0005465604 nova_compute[260603]: 2025-10-02 09:31:40.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:43 np0005465604 nova_compute[260603]: 2025-10-02 09:31:43.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:45 np0005465604 nova_compute[260603]: 2025-10-02 09:31:45.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:48 np0005465604 nova_compute[260603]: 2025-10-02 09:31:48.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:50 np0005465604 nova_compute[260603]: 2025-10-02 09:31:50.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:52 np0005465604 podman[448628]: 2025-10-02 09:31:52.050889648 +0000 UTC m=+0.097055412 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 05:31:52 np0005465604 podman[448627]: 2025-10-02 09:31:52.070741948 +0000 UTC m=+0.131289611 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:31:52 np0005465604 nova_compute[260603]: 2025-10-02 09:31:52.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:31:52 np0005465604 nova_compute[260603]: 2025-10-02 09:31:52.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:31:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:53 np0005465604 nova_compute[260603]: 2025-10-02 09:31:53.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:55 np0005465604 nova_compute[260603]: 2025-10-02 09:31:55.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:31:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:56 np0005465604 podman[448693]: 2025-10-02 09:31:56.822025211 +0000 UTC m=+0.046940687 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:31:56 np0005465604 podman[448694]: 2025-10-02 09:31:56.857639383 +0000 UTC m=+0.078372209 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:31:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 4664fca8-489b-4a1c-925a-ca83df12e830 does not exist
Oct  2 05:31:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 1b31f815-7123-40e4-b2e0-23ee782f58e0 does not exist
Oct  2 05:31:57 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 92d32189-6ef2-4a09-ac62-ff989416f6b1 does not exist
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:31:57 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:31:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:31:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:31:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:31:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:31:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:31:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:31:58 np0005465604 podman[449097]: 2025-10-02 09:31:58.426209527 +0000 UTC m=+0.036465010 container create e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:31:58 np0005465604 systemd[1]: Started libpod-conmon-e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9.scope.
Oct  2 05:31:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:31:58 np0005465604 podman[449097]: 2025-10-02 09:31:58.496975536 +0000 UTC m=+0.107231029 container init e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:31:58 np0005465604 podman[449097]: 2025-10-02 09:31:58.503468349 +0000 UTC m=+0.113723822 container start e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 05:31:58 np0005465604 podman[449097]: 2025-10-02 09:31:58.410800446 +0000 UTC m=+0.021055959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:31:58 np0005465604 romantic_swartz[449114]: 167 167
Oct  2 05:31:58 np0005465604 systemd[1]: libpod-e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9.scope: Deactivated successfully.
Oct  2 05:31:58 np0005465604 podman[449097]: 2025-10-02 09:31:58.510311134 +0000 UTC m=+0.120566617 container attach e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 05:31:58 np0005465604 podman[449097]: 2025-10-02 09:31:58.510597372 +0000 UTC m=+0.120852865 container died e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 05:31:58 np0005465604 systemd[1]: var-lib-containers-storage-overlay-24b8e8a8c753af238d85e46035dfd91b85fc80bb24a0d191367a4d34148e3bed-merged.mount: Deactivated successfully.
Oct  2 05:31:58 np0005465604 podman[449097]: 2025-10-02 09:31:58.555550236 +0000 UTC m=+0.165805719 container remove e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_swartz, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:31:58 np0005465604 systemd[1]: libpod-conmon-e640269aea31f16076453a333e6b47ef0f8a1dc757283650b3b1c69b615657d9.scope: Deactivated successfully.
Oct  2 05:31:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:31:58 np0005465604 podman[449136]: 2025-10-02 09:31:58.699640717 +0000 UTC m=+0.036344436 container create df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:31:58 np0005465604 systemd[1]: Started libpod-conmon-df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3.scope.
Oct  2 05:31:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:31:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:31:58 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:31:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:31:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:31:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:31:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:31:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:31:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:31:58 np0005465604 podman[449136]: 2025-10-02 09:31:58.683253665 +0000 UTC m=+0.019957384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:31:58 np0005465604 podman[449136]: 2025-10-02 09:31:58.782107543 +0000 UTC m=+0.118811282 container init df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:31:58 np0005465604 podman[449136]: 2025-10-02 09:31:58.79162359 +0000 UTC m=+0.128327299 container start df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:31:58 np0005465604 podman[449136]: 2025-10-02 09:31:58.795543933 +0000 UTC m=+0.132247652 container attach df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:31:59 np0005465604 nova_compute[260603]: 2025-10-02 09:31:58.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:31:59 np0005465604 vibrant_saha[449153]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:31:59 np0005465604 vibrant_saha[449153]: --> relative data size: 1.0
Oct  2 05:31:59 np0005465604 vibrant_saha[449153]: --> All data devices are unavailable
Oct  2 05:31:59 np0005465604 systemd[1]: libpod-df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3.scope: Deactivated successfully.
Oct  2 05:31:59 np0005465604 podman[449136]: 2025-10-02 09:31:59.878387914 +0000 UTC m=+1.215091613 container died df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 05:31:59 np0005465604 systemd[1]: libpod-df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3.scope: Consumed 1.031s CPU time.
Oct  2 05:31:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay-249da64f6f3c756cc747b53196307ceb8ab546c7c7699688586d3d2ff6c7d866-merged.mount: Deactivated successfully.
Oct  2 05:31:59 np0005465604 podman[449136]: 2025-10-02 09:31:59.945552992 +0000 UTC m=+1.282256691 container remove df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 05:31:59 np0005465604 systemd[1]: libpod-conmon-df61fd8912dc7db34a14a739d65b0e57571e605dd314b8e2c8aa549f27a57bb3.scope: Deactivated successfully.
Oct  2 05:32:00 np0005465604 nova_compute[260603]: 2025-10-02 09:32:00.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:00 np0005465604 podman[449338]: 2025-10-02 09:32:00.52592868 +0000 UTC m=+0.041759485 container create 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:32:00 np0005465604 systemd[1]: Started libpod-conmon-942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32.scope.
Oct  2 05:32:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:32:00 np0005465604 podman[449338]: 2025-10-02 09:32:00.504991456 +0000 UTC m=+0.020822291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:32:00 np0005465604 podman[449338]: 2025-10-02 09:32:00.610167301 +0000 UTC m=+0.125998146 container init 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:32:00 np0005465604 podman[449338]: 2025-10-02 09:32:00.617661325 +0000 UTC m=+0.133492130 container start 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:32:00 np0005465604 podman[449338]: 2025-10-02 09:32:00.620490904 +0000 UTC m=+0.136321709 container attach 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:32:00 np0005465604 hungry_jackson[449354]: 167 167
Oct  2 05:32:00 np0005465604 podman[449338]: 2025-10-02 09:32:00.624789528 +0000 UTC m=+0.140620343 container died 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 05:32:00 np0005465604 systemd[1]: libpod-942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32.scope: Deactivated successfully.
Oct  2 05:32:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ba0b658e876c1ab77bd38e08cb7b6f75f939a01364e92cd53baee529b4c9d194-merged.mount: Deactivated successfully.
Oct  2 05:32:00 np0005465604 podman[449338]: 2025-10-02 09:32:00.665561362 +0000 UTC m=+0.181392187 container remove 942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jackson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:32:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:00 np0005465604 systemd[1]: libpod-conmon-942dcec687f8a11f756aecbde731949c2367f63e6e4b907e6c91f90c7f01cf32.scope: Deactivated successfully.
Oct  2 05:32:00 np0005465604 podman[449379]: 2025-10-02 09:32:00.844453059 +0000 UTC m=+0.048127684 container create 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:32:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:00 np0005465604 systemd[1]: Started libpod-conmon-82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255.scope.
Oct  2 05:32:00 np0005465604 podman[449379]: 2025-10-02 09:32:00.828371087 +0000 UTC m=+0.032045702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:32:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:32:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5221f1465fa383fb321288e71c11891ff92f8f607cf337e390373cd78738efb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:32:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5221f1465fa383fb321288e71c11891ff92f8f607cf337e390373cd78738efb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:32:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5221f1465fa383fb321288e71c11891ff92f8f607cf337e390373cd78738efb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:32:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5221f1465fa383fb321288e71c11891ff92f8f607cf337e390373cd78738efb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:32:00 np0005465604 podman[449379]: 2025-10-02 09:32:00.970386262 +0000 UTC m=+0.174060937 container init 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:32:00 np0005465604 podman[449379]: 2025-10-02 09:32:00.978404183 +0000 UTC m=+0.182078808 container start 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:32:00 np0005465604 podman[449379]: 2025-10-02 09:32:00.982020906 +0000 UTC m=+0.185695581 container attach 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]: {
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:    "0": [
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:        {
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "devices": [
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "/dev/loop3"
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            ],
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_name": "ceph_lv0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_size": "21470642176",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "name": "ceph_lv0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "tags": {
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.cluster_name": "ceph",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.crush_device_class": "",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.encrypted": "0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.osd_id": "0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.type": "block",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.vdo": "0"
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            },
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "type": "block",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "vg_name": "ceph_vg0"
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:        }
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:    ],
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:    "1": [
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:        {
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "devices": [
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "/dev/loop4"
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            ],
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_name": "ceph_lv1",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_size": "21470642176",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "name": "ceph_lv1",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "tags": {
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.cluster_name": "ceph",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.crush_device_class": "",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.encrypted": "0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.osd_id": "1",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.type": "block",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.vdo": "0"
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            },
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "type": "block",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "vg_name": "ceph_vg1"
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:        }
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:    ],
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:    "2": [
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:        {
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "devices": [
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "/dev/loop5"
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            ],
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_name": "ceph_lv2",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_size": "21470642176",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "name": "ceph_lv2",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "tags": {
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.cluster_name": "ceph",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.crush_device_class": "",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.encrypted": "0",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.osd_id": "2",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.type": "block",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:                "ceph.vdo": "0"
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            },
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "type": "block",
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:            "vg_name": "ceph_vg2"
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:        }
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]:    ]
Oct  2 05:32:01 np0005465604 busy_ptolemy[449395]: }
Oct  2 05:32:01 np0005465604 systemd[1]: libpod-82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255.scope: Deactivated successfully.
Oct  2 05:32:01 np0005465604 podman[449379]: 2025-10-02 09:32:01.730127943 +0000 UTC m=+0.933802538 container died 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:32:01 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5221f1465fa383fb321288e71c11891ff92f8f607cf337e390373cd78738efb8-merged.mount: Deactivated successfully.
Oct  2 05:32:01 np0005465604 podman[449379]: 2025-10-02 09:32:01.791240882 +0000 UTC m=+0.994915487 container remove 82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:32:01 np0005465604 systemd[1]: libpod-conmon-82baccbc877824374a08521e8968f5020c0dd00147205842741bcf5a06e6b255.scope: Deactivated successfully.
Oct  2 05:32:02 np0005465604 podman[449555]: 2025-10-02 09:32:02.416726709 +0000 UTC m=+0.037668938 container create cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:32:02 np0005465604 systemd[1]: Started libpod-conmon-cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a.scope.
Oct  2 05:32:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:32:02 np0005465604 podman[449555]: 2025-10-02 09:32:02.488929944 +0000 UTC m=+0.109872193 container init cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:32:02 np0005465604 podman[449555]: 2025-10-02 09:32:02.496015365 +0000 UTC m=+0.116957584 container start cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  2 05:32:02 np0005465604 podman[449555]: 2025-10-02 09:32:02.402097481 +0000 UTC m=+0.023039720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:32:02 np0005465604 podman[449555]: 2025-10-02 09:32:02.498923556 +0000 UTC m=+0.119865785 container attach cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:32:02 np0005465604 funny_mirzakhani[449571]: 167 167
Oct  2 05:32:02 np0005465604 systemd[1]: libpod-cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a.scope: Deactivated successfully.
Oct  2 05:32:02 np0005465604 podman[449555]: 2025-10-02 09:32:02.502029572 +0000 UTC m=+0.122971801 container died cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 05:32:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c5384a1214e79141930b06481cfc4f2460bdc605884a25e8d691f5a08eb95ad8-merged.mount: Deactivated successfully.
Oct  2 05:32:02 np0005465604 podman[449555]: 2025-10-02 09:32:02.534503357 +0000 UTC m=+0.155445576 container remove cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:32:02 np0005465604 systemd[1]: libpod-conmon-cfd2024192beef8112d39b7f87bf994127a207cc4ebd6f12f0e4bab16a10433a.scope: Deactivated successfully.
Oct  2 05:32:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:02 np0005465604 podman[449596]: 2025-10-02 09:32:02.689562331 +0000 UTC m=+0.039312210 container create bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 05:32:02 np0005465604 systemd[1]: Started libpod-conmon-bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a.scope.
Oct  2 05:32:02 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:32:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e56f7ae743b2421a84cdb494c60a789a28fdb439fe23b5481eae2011bf636e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:32:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e56f7ae743b2421a84cdb494c60a789a28fdb439fe23b5481eae2011bf636e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:32:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e56f7ae743b2421a84cdb494c60a789a28fdb439fe23b5481eae2011bf636e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:32:02 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06e56f7ae743b2421a84cdb494c60a789a28fdb439fe23b5481eae2011bf636e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:32:02 np0005465604 podman[449596]: 2025-10-02 09:32:02.674168849 +0000 UTC m=+0.023918748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:32:02 np0005465604 podman[449596]: 2025-10-02 09:32:02.774126721 +0000 UTC m=+0.123876640 container init bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  2 05:32:02 np0005465604 podman[449596]: 2025-10-02 09:32:02.78271023 +0000 UTC m=+0.132460109 container start bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:32:02 np0005465604 podman[449596]: 2025-10-02 09:32:02.786197379 +0000 UTC m=+0.135947338 container attach bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]: {
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "osd_id": 2,
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "type": "bluestore"
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:    },
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "osd_id": 1,
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "type": "bluestore"
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:    },
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "osd_id": 0,
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:        "type": "bluestore"
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]:    }
Oct  2 05:32:03 np0005465604 flamboyant_poincare[449612]: }
Oct  2 05:32:03 np0005465604 systemd[1]: libpod-bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a.scope: Deactivated successfully.
Oct  2 05:32:03 np0005465604 conmon[449612]: conmon bffcb5fc646ab0db91e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a.scope/container/memory.events
Oct  2 05:32:03 np0005465604 podman[449596]: 2025-10-02 09:32:03.764472565 +0000 UTC m=+1.114222454 container died bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 05:32:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-06e56f7ae743b2421a84cdb494c60a789a28fdb439fe23b5481eae2011bf636e-merged.mount: Deactivated successfully.
Oct  2 05:32:03 np0005465604 podman[449596]: 2025-10-02 09:32:03.816278803 +0000 UTC m=+1.166028682 container remove bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:32:03 np0005465604 systemd[1]: libpod-conmon-bffcb5fc646ab0db91e0bf8abe75ca3964c199767038e46f917bcd880da6654a.scope: Deactivated successfully.
Oct  2 05:32:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:32:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:32:03 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:32:03 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:32:03 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev aa25b502-1085-4ece-9179-ea30e7234369 does not exist
Oct  2 05:32:03 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b0118f18-8659-41f6-90fe-bb94a9936844 does not exist
Oct  2 05:32:04 np0005465604 nova_compute[260603]: 2025-10-02 09:32:03.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:04 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:32:04 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:32:05 np0005465604 nova_compute[260603]: 2025-10-02 09:32:05.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:09 np0005465604 nova_compute[260603]: 2025-10-02 09:32:09.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:10 np0005465604 nova_compute[260603]: 2025-10-02 09:32:10.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:14 np0005465604 nova_compute[260603]: 2025-10-02 09:32:14.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:15 np0005465604 nova_compute[260603]: 2025-10-02 09:32:15.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:19 np0005465604 nova_compute[260603]: 2025-10-02 09:32:19.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:20 np0005465604 nova_compute[260603]: 2025-10-02 09:32:20.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:21 np0005465604 nova_compute[260603]: 2025-10-02 09:32:21.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:32:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:32:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/936917552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:32:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:32:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/936917552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:32:22 np0005465604 nova_compute[260603]: 2025-10-02 09:32:22.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:32:22 np0005465604 nova_compute[260603]: 2025-10-02 09:32:22.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:32:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:22 np0005465604 podman[449711]: 2025-10-02 09:32:22.988574189 +0000 UTC m=+0.055807994 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:32:23 np0005465604 podman[449710]: 2025-10-02 09:32:23.018720601 +0000 UTC m=+0.085937135 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 05:32:24 np0005465604 nova_compute[260603]: 2025-10-02 09:32:24.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:25 np0005465604 nova_compute[260603]: 2025-10-02 09:32:25.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:25 np0005465604 nova_compute[260603]: 2025-10-02 09:32:25.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:32:25 np0005465604 nova_compute[260603]: 2025-10-02 09:32:25.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:32:25 np0005465604 nova_compute[260603]: 2025-10-02 09:32:25.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:32:25 np0005465604 nova_compute[260603]: 2025-10-02 09:32:25.645 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:32:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:26 np0005465604 nova_compute[260603]: 2025-10-02 09:32:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:32:26 np0005465604 nova_compute[260603]: 2025-10-02 09:32:26.608 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:32:26 np0005465604 nova_compute[260603]: 2025-10-02 09:32:26.608 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:32:26 np0005465604 nova_compute[260603]: 2025-10-02 09:32:26.609 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:32:26 np0005465604 nova_compute[260603]: 2025-10-02 09:32:26.609 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:32:26 np0005465604 nova_compute[260603]: 2025-10-02 09:32:26.609 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:32:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:26 np0005465604 podman[449775]: 2025-10-02 09:32:26.992114997 +0000 UTC m=+0.060145600 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:32:27 np0005465604 podman[449774]: 2025-10-02 09:32:27.0139813 +0000 UTC m=+0.084048726 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:32:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:32:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2492896351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.037 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.180 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.181 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3568MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.182 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.182 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.390 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.391 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.413 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:32:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:32:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3435835629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.845 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.849 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.877 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.879 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:32:27 np0005465604 nova_compute[260603]: 2025-10-02 09:32:27.880 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:32:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:32:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:32:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:32:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:32:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:32:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:32:28
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', 'volumes']
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:32:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:32:29 np0005465604 nova_compute[260603]: 2025-10-02 09:32:29.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:30 np0005465604 nova_compute[260603]: 2025-10-02 09:32:30.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:30 np0005465604 nova_compute[260603]: 2025-10-02 09:32:30.877 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:32:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:32 np0005465604 nova_compute[260603]: 2025-10-02 09:32:32.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:32:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:34 np0005465604 nova_compute[260603]: 2025-10-02 09:32:34.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:32:34.871 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:32:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:32:34.872 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:32:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:32:34.872 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:32:35 np0005465604 nova_compute[260603]: 2025-10-02 09:32:35.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:39 np0005465604 nova_compute[260603]: 2025-10-02 09:32:39.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:32:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:32:40 np0005465604 nova_compute[260603]: 2025-10-02 09:32:40.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:32:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:41 np0005465604 nova_compute[260603]: 2025-10-02 09:32:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:32:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:44 np0005465604 nova_compute[260603]: 2025-10-02 09:32:44.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:32:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:45 np0005465604 nova_compute[260603]: 2025-10-02 09:32:45.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:32:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:48 np0005465604 systemd[1]: Starting dnf makecache...
Oct  2 05:32:49 np0005465604 nova_compute[260603]: 2025-10-02 09:32:49.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:32:49 np0005465604 dnf[449838]: Metadata cache refreshed recently.
Oct  2 05:32:49 np0005465604 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  2 05:32:49 np0005465604 systemd[1]: Finished dnf makecache.
Oct  2 05:32:50 np0005465604 nova_compute[260603]: 2025-10-02 09:32:50.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:32:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:54 np0005465604 podman[449840]: 2025-10-02 09:32:54.017697507 +0000 UTC m=+0.067281432 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 05:32:54 np0005465604 nova_compute[260603]: 2025-10-02 09:32:54.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:32:54 np0005465604 podman[449839]: 2025-10-02 09:32:54.112874469 +0000 UTC m=+0.165031205 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 05:32:54 np0005465604 nova_compute[260603]: 2025-10-02 09:32:54.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:32:54 np0005465604 nova_compute[260603]: 2025-10-02 09:32:54.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 05:32:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:55 np0005465604 nova_compute[260603]: 2025-10-02 09:32:55.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:32:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:32:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:32:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:32:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:32:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:32:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:32:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:32:57 np0005465604 podman[449883]: 2025-10-02 09:32:57.990739683 +0000 UTC m=+0.054892925 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 05:32:57 np0005465604 podman[449882]: 2025-10-02 09:32:57.996367799 +0000 UTC m=+0.063970220 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 05:32:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:32:59 np0005465604 nova_compute[260603]: 2025-10-02 09:32:59.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:33:00 np0005465604 nova_compute[260603]: 2025-10-02 09:33:00.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:33:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:04 np0005465604 nova_compute[260603]: 2025-10-02 09:33:04.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:33:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:04 np0005465604 podman[450093]: 2025-10-02 09:33:04.722352192 +0000 UTC m=+0.063209056 container exec 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:33:04 np0005465604 podman[450093]: 2025-10-02 09:33:04.830412227 +0000 UTC m=+0.171269081 container exec_died 6c3e23d2ca6ac20502c2581f7b3cd8acc51ed0bbd29d0af9cc014a7631736104 (image=quay.io/ceph/ceph:v18, name=ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 05:33:05 np0005465604 nova_compute[260603]: 2025-10-02 09:33:05.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:33:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:33:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:33:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:33:05 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:33:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:33:05 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:33:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:33:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 473cd2f8-84fd-4f08-a352-0307668c1e5a does not exist
Oct  2 05:33:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 94143f1a-ebce-4d09-bff0-55ba09d715b2 does not exist
Oct  2 05:33:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev cef387da-af16-44b5-9258-27c3cc35fc52 does not exist
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:33:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:06 np0005465604 podman[450522]: 2025-10-02 09:33:06.825958137 +0000 UTC m=+0.044999106 container create b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:33:06 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:33:06 np0005465604 systemd[1]: Started libpod-conmon-b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367.scope.
Oct  2 05:33:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:33:06 np0005465604 podman[450522]: 2025-10-02 09:33:06.805678414 +0000 UTC m=+0.024719463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:33:06 np0005465604 podman[450522]: 2025-10-02 09:33:06.910666923 +0000 UTC m=+0.129707902 container init b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:33:06 np0005465604 podman[450522]: 2025-10-02 09:33:06.918993963 +0000 UTC m=+0.138034932 container start b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 05:33:06 np0005465604 podman[450522]: 2025-10-02 09:33:06.922535504 +0000 UTC m=+0.141576493 container attach b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:33:06 np0005465604 cranky_einstein[450539]: 167 167
Oct  2 05:33:06 np0005465604 systemd[1]: libpod-b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367.scope: Deactivated successfully.
Oct  2 05:33:06 np0005465604 podman[450522]: 2025-10-02 09:33:06.92659259 +0000 UTC m=+0.145633579 container died b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:33:06 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ae09a9f6c9e7943c5555e4e07157c780b7877831473a0da33a8b1901803c3ba6-merged.mount: Deactivated successfully.
Oct  2 05:33:06 np0005465604 podman[450522]: 2025-10-02 09:33:06.974701773 +0000 UTC m=+0.193742752 container remove b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:33:07 np0005465604 systemd[1]: libpod-conmon-b9e9b1c935ee0835406ea8abfbf0166d70cdef0f784f4e95445db8a44f4b1367.scope: Deactivated successfully.
Oct  2 05:33:07 np0005465604 podman[450563]: 2025-10-02 09:33:07.151251538 +0000 UTC m=+0.057914580 container create d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 05:33:07 np0005465604 systemd[1]: Started libpod-conmon-d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae.scope.
Oct  2 05:33:07 np0005465604 podman[450563]: 2025-10-02 09:33:07.121017523 +0000 UTC m=+0.027680625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:33:07 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:33:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:07 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:07 np0005465604 podman[450563]: 2025-10-02 09:33:07.254839553 +0000 UTC m=+0.161502575 container init d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:33:07 np0005465604 podman[450563]: 2025-10-02 09:33:07.263939268 +0000 UTC m=+0.170602290 container start d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 05:33:07 np0005465604 podman[450563]: 2025-10-02 09:33:07.267398925 +0000 UTC m=+0.174061967 container attach d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:33:08 np0005465604 awesome_sutherland[450579]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:33:08 np0005465604 awesome_sutherland[450579]: --> relative data size: 1.0
Oct  2 05:33:08 np0005465604 awesome_sutherland[450579]: --> All data devices are unavailable
Oct  2 05:33:08 np0005465604 systemd[1]: libpod-d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae.scope: Deactivated successfully.
Oct  2 05:33:08 np0005465604 podman[450563]: 2025-10-02 09:33:08.28823459 +0000 UTC m=+1.194897622 container died d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 05:33:08 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e2f4c0892d7f37fb79b84284cb2a3cc124dfee5c8421a9b15f5715d38a822320-merged.mount: Deactivated successfully.
Oct  2 05:33:08 np0005465604 podman[450563]: 2025-10-02 09:33:08.347680717 +0000 UTC m=+1.254343729 container remove d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 05:33:08 np0005465604 systemd[1]: libpod-conmon-d86968faa1bf4fd1063f83033413386609d58de7644468fee276302bfe5c96ae.scope: Deactivated successfully.
Oct  2 05:33:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:08 np0005465604 podman[450764]: 2025-10-02 09:33:08.951394435 +0000 UTC m=+0.036401288 container create 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:33:08 np0005465604 systemd[1]: Started libpod-conmon-75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d.scope.
Oct  2 05:33:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:33:09 np0005465604 podman[450764]: 2025-10-02 09:33:09.016929211 +0000 UTC m=+0.101936064 container init 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 05:33:09 np0005465604 podman[450764]: 2025-10-02 09:33:09.025807069 +0000 UTC m=+0.110813922 container start 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:33:09 np0005465604 strange_jemison[450780]: 167 167
Oct  2 05:33:09 np0005465604 podman[450764]: 2025-10-02 09:33:09.028761871 +0000 UTC m=+0.113768754 container attach 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:33:09 np0005465604 podman[450764]: 2025-10-02 09:33:09.029014868 +0000 UTC m=+0.114021721 container died 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:33:09 np0005465604 systemd[1]: libpod-75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d.scope: Deactivated successfully.
Oct  2 05:33:09 np0005465604 podman[450764]: 2025-10-02 09:33:08.935615192 +0000 UTC m=+0.020622065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:33:09 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c0c68dc87df7f35a78391ec517eaeeb69759aed3ef4c2ec5317fd8b4dbfe03b7-merged.mount: Deactivated successfully.
Oct  2 05:33:09 np0005465604 podman[450764]: 2025-10-02 09:33:09.058357985 +0000 UTC m=+0.143364838 container remove 75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 05:33:09 np0005465604 systemd[1]: libpod-conmon-75e0b32a7ae52605e6d990352d9c9788475656c736edd30ba93008f6f7e21f6d.scope: Deactivated successfully.
Oct  2 05:33:09 np0005465604 nova_compute[260603]: 2025-10-02 09:33:09.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:09 np0005465604 podman[450807]: 2025-10-02 09:33:09.222304376 +0000 UTC m=+0.045243254 container create 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 05:33:09 np0005465604 systemd[1]: Started libpod-conmon-8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4.scope.
Oct  2 05:33:09 np0005465604 podman[450807]: 2025-10-02 09:33:09.199576966 +0000 UTC m=+0.022515824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:33:09 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:33:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b353217d7d895fc18250a1bf5f974032d5784f51945349b07a37214be15fd875/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b353217d7d895fc18250a1bf5f974032d5784f51945349b07a37214be15fd875/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b353217d7d895fc18250a1bf5f974032d5784f51945349b07a37214be15fd875/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:09 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b353217d7d895fc18250a1bf5f974032d5784f51945349b07a37214be15fd875/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:09 np0005465604 podman[450807]: 2025-10-02 09:33:09.321922397 +0000 UTC m=+0.144861315 container init 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:33:09 np0005465604 podman[450807]: 2025-10-02 09:33:09.334106888 +0000 UTC m=+0.157045726 container start 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 05:33:09 np0005465604 podman[450807]: 2025-10-02 09:33:09.337405321 +0000 UTC m=+0.160344219 container attach 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]: {
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:    "0": [
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:        {
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "devices": [
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "/dev/loop3"
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            ],
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_name": "ceph_lv0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_size": "21470642176",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "name": "ceph_lv0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "tags": {
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.cluster_name": "ceph",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.crush_device_class": "",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.encrypted": "0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.osd_id": "0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.type": "block",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.vdo": "0"
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            },
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "type": "block",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "vg_name": "ceph_vg0"
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:        }
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:    ],
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:    "1": [
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:        {
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "devices": [
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "/dev/loop4"
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            ],
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_name": "ceph_lv1",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_size": "21470642176",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "name": "ceph_lv1",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "tags": {
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.cluster_name": "ceph",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.crush_device_class": "",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.encrypted": "0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.osd_id": "1",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.type": "block",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.vdo": "0"
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            },
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "type": "block",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "vg_name": "ceph_vg1"
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:        }
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:    ],
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:    "2": [
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:        {
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "devices": [
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "/dev/loop5"
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            ],
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_name": "ceph_lv2",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_size": "21470642176",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "name": "ceph_lv2",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "tags": {
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.cluster_name": "ceph",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.crush_device_class": "",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.encrypted": "0",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.osd_id": "2",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.type": "block",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:                "ceph.vdo": "0"
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            },
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "type": "block",
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:            "vg_name": "ceph_vg2"
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:        }
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]:    ]
Oct  2 05:33:10 np0005465604 dreamy_hamilton[450824]: }
Oct  2 05:33:10 np0005465604 systemd[1]: libpod-8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4.scope: Deactivated successfully.
Oct  2 05:33:10 np0005465604 podman[450807]: 2025-10-02 09:33:10.089708629 +0000 UTC m=+0.912647467 container died 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:33:10 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b353217d7d895fc18250a1bf5f974032d5784f51945349b07a37214be15fd875-merged.mount: Deactivated successfully.
Oct  2 05:33:10 np0005465604 podman[450807]: 2025-10-02 09:33:10.146520294 +0000 UTC m=+0.969459132 container remove 8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hamilton, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 05:33:10 np0005465604 systemd[1]: libpod-conmon-8b57c1ae5f56206a9085664ea913f32751ea28841d8a6e88ec88abf41731b5c4.scope: Deactivated successfully.
Oct  2 05:33:10 np0005465604 nova_compute[260603]: 2025-10-02 09:33:10.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:10 np0005465604 podman[450988]: 2025-10-02 09:33:10.912872481 +0000 UTC m=+0.048133985 container create 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 05:33:10 np0005465604 systemd[1]: Started libpod-conmon-244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2.scope.
Oct  2 05:33:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:33:10 np0005465604 podman[450988]: 2025-10-02 09:33:10.89300743 +0000 UTC m=+0.028268954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:33:10 np0005465604 podman[450988]: 2025-10-02 09:33:10.993467057 +0000 UTC m=+0.128728591 container init 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:33:11 np0005465604 podman[450988]: 2025-10-02 09:33:11.000226389 +0000 UTC m=+0.135487903 container start 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:33:11 np0005465604 podman[450988]: 2025-10-02 09:33:11.003232362 +0000 UTC m=+0.138493896 container attach 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 05:33:11 np0005465604 thirsty_herschel[451004]: 167 167
Oct  2 05:33:11 np0005465604 systemd[1]: libpod-244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2.scope: Deactivated successfully.
Oct  2 05:33:11 np0005465604 podman[451009]: 2025-10-02 09:33:11.053596366 +0000 UTC m=+0.031198915 container died 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:33:11 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1db4d3e65b4f7132f29bddf27404bcbaeb589603b8a837e04a78081c194c5618-merged.mount: Deactivated successfully.
Oct  2 05:33:11 np0005465604 podman[451009]: 2025-10-02 09:33:11.094193874 +0000 UTC m=+0.071796413 container remove 244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:33:11 np0005465604 systemd[1]: libpod-conmon-244960188e98b38f58d82f18b593f67ac34c8587cbe48fe250fb490a3bc439f2.scope: Deactivated successfully.
Oct  2 05:33:11 np0005465604 podman[451029]: 2025-10-02 09:33:11.307052293 +0000 UTC m=+0.063398282 container create 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:33:11 np0005465604 systemd[1]: Started libpod-conmon-32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c.scope.
Oct  2 05:33:11 np0005465604 podman[451029]: 2025-10-02 09:33:11.274188206 +0000 UTC m=+0.030534295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:33:11 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:33:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3db29172cb49752b549c7b52c8f83de944a19e28d2211aea5e22e43dfb25e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3db29172cb49752b549c7b52c8f83de944a19e28d2211aea5e22e43dfb25e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3db29172cb49752b549c7b52c8f83de944a19e28d2211aea5e22e43dfb25e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:11 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3db29172cb49752b549c7b52c8f83de944a19e28d2211aea5e22e43dfb25e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:33:11 np0005465604 podman[451029]: 2025-10-02 09:33:11.389999513 +0000 UTC m=+0.146345502 container init 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 05:33:11 np0005465604 podman[451029]: 2025-10-02 09:33:11.39599832 +0000 UTC m=+0.152344299 container start 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:33:11 np0005465604 podman[451029]: 2025-10-02 09:33:11.399481139 +0000 UTC m=+0.155827108 container attach 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]: {
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "osd_id": 2,
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "type": "bluestore"
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:    },
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "osd_id": 1,
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "type": "bluestore"
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:    },
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "osd_id": 0,
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:        "type": "bluestore"
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]:    }
Oct  2 05:33:12 np0005465604 dazzling_lamarr[451046]: }
Oct  2 05:33:12 np0005465604 systemd[1]: libpod-32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c.scope: Deactivated successfully.
Oct  2 05:33:12 np0005465604 systemd[1]: libpod-32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c.scope: Consumed 1.028s CPU time.
Oct  2 05:33:12 np0005465604 podman[451029]: 2025-10-02 09:33:12.420898572 +0000 UTC m=+1.177244581 container died 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 05:33:12 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4f3db29172cb49752b549c7b52c8f83de944a19e28d2211aea5e22e43dfb25e0-merged.mount: Deactivated successfully.
Oct  2 05:33:12 np0005465604 podman[451029]: 2025-10-02 09:33:12.480107352 +0000 UTC m=+1.236453331 container remove 32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamarr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:33:12 np0005465604 systemd[1]: libpod-conmon-32de83f940a4d18842d5fbd42f102961fc8056ef7254bc13ae300adbd42ab02c.scope: Deactivated successfully.
Oct  2 05:33:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:33:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:33:12 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:33:12 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:33:12 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d516ae73-b727-435d-b808-00001310e754 does not exist
Oct  2 05:33:12 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 72088a30-55df-4415-b04a-df572faa848d does not exist
Oct  2 05:33:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:33:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:33:14 np0005465604 nova_compute[260603]: 2025-10-02 09:33:14.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:15 np0005465604 nova_compute[260603]: 2025-10-02 09:33:15.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:19 np0005465604 nova_compute[260603]: 2025-10-02 09:33:19.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:20 np0005465604 nova_compute[260603]: 2025-10-02 09:33:20.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:21 np0005465604 nova_compute[260603]: 2025-10-02 09:33:21.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:33:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:33:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3253008149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:33:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:33:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3253008149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:33:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:23 np0005465604 nova_compute[260603]: 2025-10-02 09:33:23.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:33:24 np0005465604 nova_compute[260603]: 2025-10-02 09:33:24.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:24 np0005465604 nova_compute[260603]: 2025-10-02 09:33:24.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:33:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:25 np0005465604 podman[451142]: 2025-10-02 09:33:25.00948936 +0000 UTC m=+0.067583602 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct  2 05:33:25 np0005465604 podman[451141]: 2025-10-02 09:33:25.06549737 +0000 UTC m=+0.127691700 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 05:33:25 np0005465604 nova_compute[260603]: 2025-10-02 09:33:25.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:25 np0005465604 nova_compute[260603]: 2025-10-02 09:33:25.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:33:25 np0005465604 nova_compute[260603]: 2025-10-02 09:33:25.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:33:25 np0005465604 nova_compute[260603]: 2025-10-02 09:33:25.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:33:25 np0005465604 nova_compute[260603]: 2025-10-02 09:33:25.589 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:33:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:27 np0005465604 nova_compute[260603]: 2025-10-02 09:33:27.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:33:27 np0005465604 nova_compute[260603]: 2025-10-02 09:33:27.563 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:33:27 np0005465604 nova_compute[260603]: 2025-10-02 09:33:27.563 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:33:27 np0005465604 nova_compute[260603]: 2025-10-02 09:33:27.563 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:33:27 np0005465604 nova_compute[260603]: 2025-10-02 09:33:27.563 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:33:27 np0005465604 nova_compute[260603]: 2025-10-02 09:33:27.563 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:33:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:33:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:33:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:33:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:33:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:33:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/56989600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:33:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:33:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:33:27 np0005465604 nova_compute[260603]: 2025-10-02 09:33:27.995 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.142 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.144 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3551MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.145 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.145 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:33:28
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'backups']
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.224 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.225 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.354 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:33:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:33:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3094738245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.769 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.774 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.812 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.814 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:33:28 np0005465604 nova_compute[260603]: 2025-10-02 09:33:28.814 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:33:28 np0005465604 podman[451230]: 2025-10-02 09:33:28.988597095 +0000 UTC m=+0.050090696 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct  2 05:33:28 np0005465604 podman[451229]: 2025-10-02 09:33:28.990124603 +0000 UTC m=+0.057965642 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:33:29 np0005465604 nova_compute[260603]: 2025-10-02 09:33:29.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:30 np0005465604 nova_compute[260603]: 2025-10-02 09:33:30.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:30 np0005465604 nova_compute[260603]: 2025-10-02 09:33:30.811 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:33:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:31 np0005465604 nova_compute[260603]: 2025-10-02 09:33:31.632 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:33:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:33 np0005465604 nova_compute[260603]: 2025-10-02 09:33:33.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:33:34 np0005465604 nova_compute[260603]: 2025-10-02 09:33:34.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:33:34.872 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:33:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:33:34.873 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:33:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:33:34.873 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:33:35 np0005465604 nova_compute[260603]: 2025-10-02 09:33:35.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:35 np0005465604 systemd-logind[787]: New session 54 of user zuul.
Oct  2 05:33:35 np0005465604 systemd[1]: Started Session 54 of User zuul.
Oct  2 05:33:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:38 np0005465604 nova_compute[260603]: 2025-10-02 09:33:38.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:33:38.580 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:33:38 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:33:38.582 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:33:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:39 np0005465604 nova_compute[260603]: 2025-10-02 09:33:39.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:33:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:33:40 np0005465604 nova_compute[260603]: 2025-10-02 09:33:40.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:41 np0005465604 nova_compute[260603]: 2025-10-02 09:33:41.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:33:42 np0005465604 systemd[1]: session-54.scope: Deactivated successfully.
Oct  2 05:33:42 np0005465604 systemd-logind[787]: Session 54 logged out. Waiting for processes to exit.
Oct  2 05:33:42 np0005465604 systemd-logind[787]: Removed session 54.
Oct  2 05:33:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:44 np0005465604 nova_compute[260603]: 2025-10-02 09:33:44.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:44 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:33:44.584 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:33:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:45 np0005465604 nova_compute[260603]: 2025-10-02 09:33:45.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:49 np0005465604 nova_compute[260603]: 2025-10-02 09:33:49.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.254913) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629254966, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1769, "num_deletes": 251, "total_data_size": 2894977, "memory_usage": 2937568, "flush_reason": "Manual Compaction"}
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629417353, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2823286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70847, "largest_seqno": 72615, "table_properties": {"data_size": 2815151, "index_size": 4948, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16461, "raw_average_key_size": 19, "raw_value_size": 2798965, "raw_average_value_size": 3396, "num_data_blocks": 221, "num_entries": 824, "num_filter_entries": 824, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397441, "oldest_key_time": 1759397441, "file_creation_time": 1759397629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 162484 microseconds, and 11048 cpu microseconds.
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.417397) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2823286 bytes OK
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.417419) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.523043) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.523093) EVENT_LOG_v1 {"time_micros": 1759397629523082, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.523123) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2887431, prev total WAL file size 2887431, number of live WAL files 2.
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.524727) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2757KB)], [170(9684KB)]
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629524803, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12739806, "oldest_snapshot_seqno": -1}
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 8859 keys, 11023086 bytes, temperature: kUnknown
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629654961, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11023086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10966180, "index_size": 33625, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 232827, "raw_average_key_size": 26, "raw_value_size": 10810505, "raw_average_value_size": 1220, "num_data_blocks": 1301, "num_entries": 8859, "num_filter_entries": 8859, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.655236) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11023086 bytes
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.656733) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.8 rd, 84.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.5 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 9373, records dropped: 514 output_compression: NoCompression
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.656770) EVENT_LOG_v1 {"time_micros": 1759397629656760, "job": 106, "event": "compaction_finished", "compaction_time_micros": 130261, "compaction_time_cpu_micros": 50189, "output_level": 6, "num_output_files": 1, "total_output_size": 11023086, "num_input_records": 9373, "num_output_records": 8859, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629657641, "job": 106, "event": "table_file_deletion", "file_number": 172}
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397629660005, "job": 106, "event": "table_file_deletion", "file_number": 170}
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.524654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.660066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.660071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.660073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.660075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:33:49 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:33:49.660077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:33:50 np0005465604 nova_compute[260603]: 2025-10-02 09:33:50.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:54 np0005465604 nova_compute[260603]: 2025-10-02 09:33:54.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:54 np0005465604 nova_compute[260603]: 2025-10-02 09:33:54.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:33:54 np0005465604 nova_compute[260603]: 2025-10-02 09:33:54.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:33:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:55 np0005465604 nova_compute[260603]: 2025-10-02 09:33:55.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:33:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:33:55 np0005465604 podman[451527]: 2025-10-02 09:33:55.990575433 +0000 UTC m=+0.051292293 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:33:56 np0005465604 podman[451526]: 2025-10-02 09:33:56.013776837 +0000 UTC m=+0.077117180 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 05:33:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:33:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:33:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:33:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:33:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:33:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:33:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:33:59 np0005465604 nova_compute[260603]: 2025-10-02 09:33:59.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:00 np0005465604 podman[451569]: 2025-10-02 09:34:00.005462185 +0000 UTC m=+0.063044960 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 05:34:00 np0005465604 podman[451570]: 2025-10-02 09:34:00.018068029 +0000 UTC m=+0.078470992 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 05:34:00 np0005465604 nova_compute[260603]: 2025-10-02 09:34:00.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:04 np0005465604 nova_compute[260603]: 2025-10-02 09:34:04.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:05 np0005465604 nova_compute[260603]: 2025-10-02 09:34:05.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:09 np0005465604 nova_compute[260603]: 2025-10-02 09:34:09.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:10 np0005465604 nova_compute[260603]: 2025-10-02 09:34:10.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:34:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2d215fff-7329-4955-9e0c-10627c5a07a0 does not exist
Oct  2 05:34:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a4c1bfa7-ab4e-47c1-b414-4b605c236a09 does not exist
Oct  2 05:34:13 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev bbacac02-c5f8-4e59-b2a6-6354c00e0117 does not exist
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:34:13 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:34:14 np0005465604 podman[451884]: 2025-10-02 09:34:14.092151956 +0000 UTC m=+0.115121757 container create 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 05:34:14 np0005465604 podman[451884]: 2025-10-02 09:34:14.002239288 +0000 UTC m=+0.025209109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:34:14 np0005465604 nova_compute[260603]: 2025-10-02 09:34:14.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:14 np0005465604 systemd[1]: Started libpod-conmon-9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5.scope.
Oct  2 05:34:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:34:14 np0005465604 podman[451884]: 2025-10-02 09:34:14.237098744 +0000 UTC m=+0.260068555 container init 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:34:14 np0005465604 podman[451884]: 2025-10-02 09:34:14.24595004 +0000 UTC m=+0.268919831 container start 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 05:34:14 np0005465604 tender_jang[451901]: 167 167
Oct  2 05:34:14 np0005465604 systemd[1]: libpod-9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5.scope: Deactivated successfully.
Oct  2 05:34:14 np0005465604 podman[451884]: 2025-10-02 09:34:14.260203255 +0000 UTC m=+0.283173066 container attach 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  2 05:34:14 np0005465604 podman[451884]: 2025-10-02 09:34:14.260706191 +0000 UTC m=+0.283675982 container died 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:34:14 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b94b8253487b55c725807ecaef1cbeb9890902276c796638ad1418de45e39cea-merged.mount: Deactivated successfully.
Oct  2 05:34:14 np0005465604 podman[451884]: 2025-10-02 09:34:14.462218525 +0000 UTC m=+0.485188316 container remove 9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:34:14 np0005465604 systemd[1]: libpod-conmon-9135e107a767fb83669328ebc4ed62e0001f61f014d22cc6bc680b3ba8d824b5.scope: Deactivated successfully.
Oct  2 05:34:14 np0005465604 podman[451927]: 2025-10-02 09:34:14.625594028 +0000 UTC m=+0.023074232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:34:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:14 np0005465604 podman[451927]: 2025-10-02 09:34:14.727239693 +0000 UTC m=+0.124719897 container create 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:34:14 np0005465604 systemd[1]: Started libpod-conmon-1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e.scope.
Oct  2 05:34:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:34:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:14 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:14 np0005465604 podman[451927]: 2025-10-02 09:34:14.94231168 +0000 UTC m=+0.339791904 container init 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 05:34:14 np0005465604 podman[451927]: 2025-10-02 09:34:14.950391273 +0000 UTC m=+0.347871467 container start 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 05:34:15 np0005465604 podman[451927]: 2025-10-02 09:34:15.04699636 +0000 UTC m=+0.444476574 container attach 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:34:15 np0005465604 nova_compute[260603]: 2025-10-02 09:34:15.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:15 np0005465604 focused_cerf[451944]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:34:15 np0005465604 focused_cerf[451944]: --> relative data size: 1.0
Oct  2 05:34:15 np0005465604 focused_cerf[451944]: --> All data devices are unavailable
Oct  2 05:34:15 np0005465604 systemd[1]: libpod-1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e.scope: Deactivated successfully.
Oct  2 05:34:15 np0005465604 podman[451927]: 2025-10-02 09:34:15.950731198 +0000 UTC m=+1.348211402 container died 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:34:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4d4bae5057299a20c68268d58fea4284714015ffbb7f65dc026319f3124c08c4-merged.mount: Deactivated successfully.
Oct  2 05:34:16 np0005465604 podman[451927]: 2025-10-02 09:34:16.053979933 +0000 UTC m=+1.451460147 container remove 1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_cerf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 05:34:16 np0005465604 systemd[1]: libpod-conmon-1829d36acd90d2eba4437685396937f2547085e3e858f96dc0171a3f2c718a1e.scope: Deactivated successfully.
Oct  2 05:34:16 np0005465604 podman[452128]: 2025-10-02 09:34:16.708245019 +0000 UTC m=+0.043132798 container create 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 05:34:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:16 np0005465604 systemd[1]: Started libpod-conmon-2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137.scope.
Oct  2 05:34:16 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:34:16 np0005465604 podman[452128]: 2025-10-02 09:34:16.784922424 +0000 UTC m=+0.119810203 container init 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:34:16 np0005465604 podman[452128]: 2025-10-02 09:34:16.691434203 +0000 UTC m=+0.026322002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:34:16 np0005465604 podman[452128]: 2025-10-02 09:34:16.791599392 +0000 UTC m=+0.126487171 container start 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 05:34:16 np0005465604 wonderful_pascal[452145]: 167 167
Oct  2 05:34:16 np0005465604 podman[452128]: 2025-10-02 09:34:16.79474136 +0000 UTC m=+0.129629159 container attach 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:34:16 np0005465604 systemd[1]: libpod-2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137.scope: Deactivated successfully.
Oct  2 05:34:16 np0005465604 podman[452128]: 2025-10-02 09:34:16.797053523 +0000 UTC m=+0.131941302 container died 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:34:16 np0005465604 systemd[1]: var-lib-containers-storage-overlay-66e1b03d6eac417d121196692221215d785c4ae72a9c0a5d9a4a02772b510fd0-merged.mount: Deactivated successfully.
Oct  2 05:34:16 np0005465604 podman[452128]: 2025-10-02 09:34:16.836473783 +0000 UTC m=+0.171361562 container remove 2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_pascal, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:34:16 np0005465604 systemd[1]: libpod-conmon-2f05e9174679ecf309da843733ed730ba3937ae0e6c7e5853e2af2ef75141137.scope: Deactivated successfully.
Oct  2 05:34:17 np0005465604 podman[452169]: 2025-10-02 09:34:17.008152836 +0000 UTC m=+0.043815499 container create 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 05:34:17 np0005465604 systemd[1]: Started libpod-conmon-8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62.scope.
Oct  2 05:34:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:34:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40de3e6787684dff2f59205eda2767da813fa0359a346ef3581c1a28840d1ea1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40de3e6787684dff2f59205eda2767da813fa0359a346ef3581c1a28840d1ea1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40de3e6787684dff2f59205eda2767da813fa0359a346ef3581c1a28840d1ea1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40de3e6787684dff2f59205eda2767da813fa0359a346ef3581c1a28840d1ea1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:17 np0005465604 podman[452169]: 2025-10-02 09:34:16.98968808 +0000 UTC m=+0.025350763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:34:17 np0005465604 podman[452169]: 2025-10-02 09:34:17.091133858 +0000 UTC m=+0.126796541 container init 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:34:17 np0005465604 podman[452169]: 2025-10-02 09:34:17.098491858 +0000 UTC m=+0.134154521 container start 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:34:17 np0005465604 podman[452169]: 2025-10-02 09:34:17.101848192 +0000 UTC m=+0.137510885 container attach 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]: {
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:    "0": [
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:        {
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "devices": [
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "/dev/loop3"
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            ],
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_name": "ceph_lv0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_size": "21470642176",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "name": "ceph_lv0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "tags": {
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.cluster_name": "ceph",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.crush_device_class": "",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.encrypted": "0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.osd_id": "0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.type": "block",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.vdo": "0"
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            },
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "type": "block",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "vg_name": "ceph_vg0"
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:        }
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:    ],
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:    "1": [
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:        {
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "devices": [
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "/dev/loop4"
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            ],
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_name": "ceph_lv1",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_size": "21470642176",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "name": "ceph_lv1",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "tags": {
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.cluster_name": "ceph",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.crush_device_class": "",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.encrypted": "0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.osd_id": "1",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.type": "block",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.vdo": "0"
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            },
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "type": "block",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "vg_name": "ceph_vg1"
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:        }
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:    ],
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:    "2": [
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:        {
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "devices": [
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "/dev/loop5"
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            ],
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_name": "ceph_lv2",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_size": "21470642176",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "name": "ceph_lv2",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "tags": {
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.cluster_name": "ceph",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.crush_device_class": "",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.encrypted": "0",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.osd_id": "2",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.type": "block",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:                "ceph.vdo": "0"
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            },
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "type": "block",
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:            "vg_name": "ceph_vg2"
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:        }
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]:    ]
Oct  2 05:34:17 np0005465604 awesome_ellis[452185]: }
Oct  2 05:34:17 np0005465604 systemd[1]: libpod-8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62.scope: Deactivated successfully.
Oct  2 05:34:17 np0005465604 podman[452169]: 2025-10-02 09:34:17.862573663 +0000 UTC m=+0.898236336 container died 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  2 05:34:17 np0005465604 systemd[1]: var-lib-containers-storage-overlay-40de3e6787684dff2f59205eda2767da813fa0359a346ef3581c1a28840d1ea1-merged.mount: Deactivated successfully.
Oct  2 05:34:17 np0005465604 podman[452169]: 2025-10-02 09:34:17.916158728 +0000 UTC m=+0.951821391 container remove 8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:34:17 np0005465604 systemd[1]: libpod-conmon-8ed06b0e40dbb13b187b8c3acf0849b4ab60f5920a715b072bcda776824a9c62.scope: Deactivated successfully.
Oct  2 05:34:18 np0005465604 podman[452346]: 2025-10-02 09:34:18.509112868 +0000 UTC m=+0.042629333 container create 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 05:34:18 np0005465604 systemd[1]: Started libpod-conmon-503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb.scope.
Oct  2 05:34:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:34:18 np0005465604 podman[452346]: 2025-10-02 09:34:18.491964993 +0000 UTC m=+0.025481478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:34:18 np0005465604 podman[452346]: 2025-10-02 09:34:18.595516737 +0000 UTC m=+0.129033212 container init 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 05:34:18 np0005465604 podman[452346]: 2025-10-02 09:34:18.602699921 +0000 UTC m=+0.136216386 container start 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 05:34:18 np0005465604 ecstatic_wilson[452362]: 167 167
Oct  2 05:34:18 np0005465604 podman[452346]: 2025-10-02 09:34:18.609799133 +0000 UTC m=+0.143315598 container attach 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct  2 05:34:18 np0005465604 systemd[1]: libpod-503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb.scope: Deactivated successfully.
Oct  2 05:34:18 np0005465604 podman[452346]: 2025-10-02 09:34:18.610270248 +0000 UTC m=+0.143786723 container died 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 05:34:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-908585e7c5114e8ec5f58af0386fabea4ef3301af6b083c5e8b92182cea0cb7c-merged.mount: Deactivated successfully.
Oct  2 05:34:18 np0005465604 podman[452346]: 2025-10-02 09:34:18.652524947 +0000 UTC m=+0.186041412 container remove 503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wilson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 05:34:18 np0005465604 systemd[1]: libpod-conmon-503748d889c6c9e14e7ad654542df1eb9cac7f41fd9f0dfc475db119111320eb.scope: Deactivated successfully.
Oct  2 05:34:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:18 np0005465604 podman[452384]: 2025-10-02 09:34:18.808821819 +0000 UTC m=+0.037815522 container create 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 05:34:18 np0005465604 systemd[1]: Started libpod-conmon-482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da.scope.
Oct  2 05:34:18 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:34:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c344155613f30c8e008d97b46898832af8de288d450450ca4f0b2e8cdf981cf8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c344155613f30c8e008d97b46898832af8de288d450450ca4f0b2e8cdf981cf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c344155613f30c8e008d97b46898832af8de288d450450ca4f0b2e8cdf981cf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:18 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c344155613f30c8e008d97b46898832af8de288d450450ca4f0b2e8cdf981cf8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:34:18 np0005465604 podman[452384]: 2025-10-02 09:34:18.888121176 +0000 UTC m=+0.117114899 container init 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:34:18 np0005465604 podman[452384]: 2025-10-02 09:34:18.793684036 +0000 UTC m=+0.022677759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:34:18 np0005465604 podman[452384]: 2025-10-02 09:34:18.896706884 +0000 UTC m=+0.125700587 container start 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:34:18 np0005465604 podman[452384]: 2025-10-02 09:34:18.899959106 +0000 UTC m=+0.128952809 container attach 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:34:19 np0005465604 nova_compute[260603]: 2025-10-02 09:34:19.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]: {
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "osd_id": 2,
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "type": "bluestore"
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:    },
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "osd_id": 1,
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "type": "bluestore"
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:    },
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "osd_id": 0,
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:        "type": "bluestore"
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]:    }
Oct  2 05:34:19 np0005465604 quirky_kepler[452400]: }
Oct  2 05:34:19 np0005465604 systemd[1]: libpod-482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da.scope: Deactivated successfully.
Oct  2 05:34:19 np0005465604 podman[452433]: 2025-10-02 09:34:19.917607051 +0000 UTC m=+0.022351719 container died 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:34:19 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c344155613f30c8e008d97b46898832af8de288d450450ca4f0b2e8cdf981cf8-merged.mount: Deactivated successfully.
Oct  2 05:34:19 np0005465604 podman[452433]: 2025-10-02 09:34:19.962834734 +0000 UTC m=+0.067579382 container remove 482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kepler, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:34:19 np0005465604 systemd[1]: libpod-conmon-482efa5ca3d4301326c9afd41ec61068af7d9d4f50ac585280662dedb35d28da.scope: Deactivated successfully.
Oct  2 05:34:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:34:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:34:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:34:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:34:20 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c71643c5-2d88-442e-b92e-9ad9cf6f8aec does not exist
Oct  2 05:34:20 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c33790ac-f923-41ba-b7bf-cecb07385bba does not exist
Oct  2 05:34:20 np0005465604 nova_compute[260603]: 2025-10-02 09:34:20.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:34:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:34:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:34:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141865596' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:34:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:34:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141865596' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:34:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:23 np0005465604 nova_compute[260603]: 2025-10-02 09:34:23.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:34:24 np0005465604 nova_compute[260603]: 2025-10-02 09:34:24.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:24 np0005465604 nova_compute[260603]: 2025-10-02 09:34:24.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:34:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:25 np0005465604 nova_compute[260603]: 2025-10-02 09:34:25.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:26 np0005465604 nova_compute[260603]: 2025-10-02 09:34:26.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:34:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:27 np0005465604 podman[452499]: 2025-10-02 09:34:27.014702445 +0000 UTC m=+0.072318450 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:34:27 np0005465604 podman[452498]: 2025-10-02 09:34:27.054650083 +0000 UTC m=+0.111692590 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct  2 05:34:27 np0005465604 nova_compute[260603]: 2025-10-02 09:34:27.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:34:27 np0005465604 nova_compute[260603]: 2025-10-02 09:34:27.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:34:27 np0005465604 nova_compute[260603]: 2025-10-02 09:34:27.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:34:27 np0005465604 nova_compute[260603]: 2025-10-02 09:34:27.542 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:34:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:34:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:34:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:34:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:34:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:34:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:34:28
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'backups', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:34:28 np0005465604 nova_compute[260603]: 2025-10-02 09:34:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:34:28 np0005465604 nova_compute[260603]: 2025-10-02 09:34:28.567 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:34:28 np0005465604 nova_compute[260603]: 2025-10-02 09:34:28.568 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:34:28 np0005465604 nova_compute[260603]: 2025-10-02 09:34:28.569 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:34:28 np0005465604 nova_compute[260603]: 2025-10-02 09:34:28.569 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:34:28 np0005465604 nova_compute[260603]: 2025-10-02 09:34:28.570 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:34:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:34:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2411532253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:34:29 np0005465604 nova_compute[260603]: 2025-10-02 09:34:29.070 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:34:29 np0005465604 nova_compute[260603]: 2025-10-02 09:34:29.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:29 np0005465604 nova_compute[260603]: 2025-10-02 09:34:29.223 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:34:29 np0005465604 nova_compute[260603]: 2025-10-02 09:34:29.224 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3543MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:34:29 np0005465604 nova_compute[260603]: 2025-10-02 09:34:29.224 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:34:29 np0005465604 nova_compute[260603]: 2025-10-02 09:34:29.224 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:34:29 np0005465604 nova_compute[260603]: 2025-10-02 09:34:29.571 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:34:29 np0005465604 nova_compute[260603]: 2025-10-02 09:34:29.572 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:34:29 np0005465604 nova_compute[260603]: 2025-10-02 09:34:29.620 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:34:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:34:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601945178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:34:30 np0005465604 nova_compute[260603]: 2025-10-02 09:34:30.044 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:34:30 np0005465604 nova_compute[260603]: 2025-10-02 09:34:30.065 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:34:30 np0005465604 nova_compute[260603]: 2025-10-02 09:34:30.097 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:34:30 np0005465604 nova_compute[260603]: 2025-10-02 09:34:30.099 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:34:30 np0005465604 nova_compute[260603]: 2025-10-02 09:34:30.099 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:34:30 np0005465604 nova_compute[260603]: 2025-10-02 09:34:30.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:31 np0005465604 podman[452586]: 2025-10-02 09:34:31.001780598 +0000 UTC m=+0.053579574 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:34:31 np0005465604 podman[452585]: 2025-10-02 09:34:31.003320006 +0000 UTC m=+0.058178128 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 05:34:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:34 np0005465604 nova_compute[260603]: 2025-10-02 09:34:34.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:34:34.873 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:34:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:34:34.873 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:34:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:34:34.874 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:34:35 np0005465604 nova_compute[260603]: 2025-10-02 09:34:35.096 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:34:35 np0005465604 nova_compute[260603]: 2025-10-02 09:34:35.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:35 np0005465604 nova_compute[260603]: 2025-10-02 09:34:35.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:34:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:37 np0005465604 nova_compute[260603]: 2025-10-02 09:34:37.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:34:37 np0005465604 nova_compute[260603]: 2025-10-02 09:34:37.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 05:34:37 np0005465604 nova_compute[260603]: 2025-10-02 09:34:37.546 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 05:34:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:39 np0005465604 nova_compute[260603]: 2025-10-02 09:34:39.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:34:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:34:40 np0005465604 nova_compute[260603]: 2025-10-02 09:34:40.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:41 np0005465604 nova_compute[260603]: 2025-10-02 09:34:41.547 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:34:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:44 np0005465604 nova_compute[260603]: 2025-10-02 09:34:44.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:45 np0005465604 nova_compute[260603]: 2025-10-02 09:34:45.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:49 np0005465604 nova_compute[260603]: 2025-10-02 09:34:49.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:50 np0005465604 nova_compute[260603]: 2025-10-02 09:34:50.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:54 np0005465604 nova_compute[260603]: 2025-10-02 09:34:54.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:55 np0005465604 nova_compute[260603]: 2025-10-02 09:34:55.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:34:55 np0005465604 nova_compute[260603]: 2025-10-02 09:34:55.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:34:55 np0005465604 nova_compute[260603]: 2025-10-02 09:34:55.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:34:55 np0005465604 nova_compute[260603]: 2025-10-02 09:34:55.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:34:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:34:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:34:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:34:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:34:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:34:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:34:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:34:57 np0005465604 podman[452623]: 2025-10-02 09:34:57.990411561 +0000 UTC m=+0.059100096 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:34:58 np0005465604 podman[452622]: 2025-10-02 09:34:58.04509482 +0000 UTC m=+0.109360327 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:34:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:34:59 np0005465604 nova_compute[260603]: 2025-10-02 09:34:59.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:00 np0005465604 nova_compute[260603]: 2025-10-02 09:35:00.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:01 np0005465604 podman[452668]: 2025-10-02 09:35:01.986177958 +0000 UTC m=+0.054595827 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct  2 05:35:01 np0005465604 podman[452667]: 2025-10-02 09:35:01.998898235 +0000 UTC m=+0.067154998 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true)
Oct  2 05:35:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:04 np0005465604 nova_compute[260603]: 2025-10-02 09:35:04.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:05 np0005465604 nova_compute[260603]: 2025-10-02 09:35:05.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:06 np0005465604 nova_compute[260603]: 2025-10-02 09:35:06.991 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:35:06 np0005465604 nova_compute[260603]: 2025-10-02 09:35:06.992 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 05:35:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:09 np0005465604 nova_compute[260603]: 2025-10-02 09:35:09.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:10 np0005465604 nova_compute[260603]: 2025-10-02 09:35:10.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:14 np0005465604 nova_compute[260603]: 2025-10-02 09:35:14.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:15 np0005465604 nova_compute[260603]: 2025-10-02 09:35:15.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:19 np0005465604 nova_compute[260603]: 2025-10-02 09:35:19.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:20 np0005465604 nova_compute[260603]: 2025-10-02 09:35:20.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:35:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:35:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:35:20 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:35:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:35:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:35:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e76e2339-0dfb-449e-a0cb-e01ce31433c8 does not exist
Oct  2 05:35:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 85749207-8b0e-45e9-8bf6-67df3a6326d4 does not exist
Oct  2 05:35:21 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b201a201-87a1-4a65-832a-d1ce62870df2 does not exist
Oct  2 05:35:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:35:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:35:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:35:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:35:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:35:21 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:35:21 np0005465604 podman[452980]: 2025-10-02 09:35:21.574557453 +0000 UTC m=+0.037206453 container create ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 05:35:21 np0005465604 systemd[1]: Started libpod-conmon-ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a.scope.
Oct  2 05:35:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:35:21 np0005465604 podman[452980]: 2025-10-02 09:35:21.645940723 +0000 UTC m=+0.108589813 container init ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  2 05:35:21 np0005465604 podman[452980]: 2025-10-02 09:35:21.653726066 +0000 UTC m=+0.116375066 container start ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 05:35:21 np0005465604 podman[452980]: 2025-10-02 09:35:21.559316347 +0000 UTC m=+0.021965377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:35:21 np0005465604 podman[452980]: 2025-10-02 09:35:21.6570289 +0000 UTC m=+0.119677920 container attach ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 05:35:21 np0005465604 vigilant_morse[452997]: 167 167
Oct  2 05:35:21 np0005465604 systemd[1]: libpod-ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a.scope: Deactivated successfully.
Oct  2 05:35:21 np0005465604 podman[452980]: 2025-10-02 09:35:21.658347091 +0000 UTC m=+0.120996101 container died ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  2 05:35:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2e3de6dc1ccc7838e91d482230c66278116cb48bb4088bac92b34ae8db20b6e3-merged.mount: Deactivated successfully.
Oct  2 05:35:21 np0005465604 podman[452980]: 2025-10-02 09:35:21.69611673 +0000 UTC m=+0.158765730 container remove ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct  2 05:35:21 np0005465604 systemd[1]: libpod-conmon-ccf8b33e9c69f5aff43820524265ea2d4426ed1a38c598c1ada68ffad5b1816a.scope: Deactivated successfully.
Oct  2 05:35:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:35:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:35:21 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:35:21 np0005465604 podman[453022]: 2025-10-02 09:35:21.839716005 +0000 UTC m=+0.039218525 container create 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:35:21 np0005465604 systemd[1]: Started libpod-conmon-80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765.scope.
Oct  2 05:35:21 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:35:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:21 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:21 np0005465604 podman[453022]: 2025-10-02 09:35:21.823759817 +0000 UTC m=+0.023262357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:35:21 np0005465604 podman[453022]: 2025-10-02 09:35:21.927455356 +0000 UTC m=+0.126957876 container init 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 05:35:21 np0005465604 podman[453022]: 2025-10-02 09:35:21.933362011 +0000 UTC m=+0.132864531 container start 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:35:21 np0005465604 podman[453022]: 2025-10-02 09:35:21.942285519 +0000 UTC m=+0.141788059 container attach 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 05:35:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:35:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/324418563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:35:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:35:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/324418563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:35:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:22 np0005465604 infallible_dubinsky[453039]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:35:22 np0005465604 infallible_dubinsky[453039]: --> relative data size: 1.0
Oct  2 05:35:22 np0005465604 infallible_dubinsky[453039]: --> All data devices are unavailable
Oct  2 05:35:22 np0005465604 systemd[1]: libpod-80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765.scope: Deactivated successfully.
Oct  2 05:35:22 np0005465604 podman[453068]: 2025-10-02 09:35:22.980043173 +0000 UTC m=+0.024593849 container died 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:35:23 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7dcb4d1c999a31c4f6ab814a777e209bad3947310677d88ef83cd30981b67ec3-merged.mount: Deactivated successfully.
Oct  2 05:35:23 np0005465604 podman[453068]: 2025-10-02 09:35:23.067152354 +0000 UTC m=+0.111703010 container remove 80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_dubinsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 05:35:23 np0005465604 systemd[1]: libpod-conmon-80c36bb00906babc19f34ec883aa83425b571fb6c148c277511fa1db5faa1765.scope: Deactivated successfully.
Oct  2 05:35:23 np0005465604 podman[453224]: 2025-10-02 09:35:23.792306943 +0000 UTC m=+0.075047984 container create 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 05:35:23 np0005465604 podman[453224]: 2025-10-02 09:35:23.746280006 +0000 UTC m=+0.029021067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:35:23 np0005465604 systemd[1]: Started libpod-conmon-7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6.scope.
Oct  2 05:35:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:35:24 np0005465604 podman[453224]: 2025-10-02 09:35:24.003912743 +0000 UTC m=+0.286653814 container init 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 05:35:24 np0005465604 podman[453224]: 2025-10-02 09:35:24.012494552 +0000 UTC m=+0.295235593 container start 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 05:35:24 np0005465604 hungry_carson[453241]: 167 167
Oct  2 05:35:24 np0005465604 systemd[1]: libpod-7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6.scope: Deactivated successfully.
Oct  2 05:35:24 np0005465604 podman[453224]: 2025-10-02 09:35:24.088361441 +0000 UTC m=+0.371102502 container attach 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:35:24 np0005465604 podman[453224]: 2025-10-02 09:35:24.0902337 +0000 UTC m=+0.372974781 container died 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:35:24 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d047a76148bd1661be28a6077230d891a68a8c5e5f319229e3acc546d5c5e291-merged.mount: Deactivated successfully.
Oct  2 05:35:24 np0005465604 podman[453224]: 2025-10-02 09:35:24.167571015 +0000 UTC m=+0.450312066 container remove 7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 05:35:24 np0005465604 systemd[1]: libpod-conmon-7de6ee7a15c6477f2f5e6562874373815c3512b5f1ff5dceb91b0d0efa9e85b6.scope: Deactivated successfully.
Oct  2 05:35:24 np0005465604 nova_compute[260603]: 2025-10-02 09:35:24.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:24 np0005465604 podman[453267]: 2025-10-02 09:35:24.368054187 +0000 UTC m=+0.050102696 container create 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:35:24 np0005465604 systemd[1]: Started libpod-conmon-70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09.scope.
Oct  2 05:35:24 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:35:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912a3acbf25b788b494844aef9e336c8a45c7f32473ef1be82cf53a811c7eef0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912a3acbf25b788b494844aef9e336c8a45c7f32473ef1be82cf53a811c7eef0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912a3acbf25b788b494844aef9e336c8a45c7f32473ef1be82cf53a811c7eef0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:24 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912a3acbf25b788b494844aef9e336c8a45c7f32473ef1be82cf53a811c7eef0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:24 np0005465604 podman[453267]: 2025-10-02 09:35:24.44050014 +0000 UTC m=+0.122548689 container init 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:35:24 np0005465604 podman[453267]: 2025-10-02 09:35:24.349139237 +0000 UTC m=+0.031187796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:35:24 np0005465604 podman[453267]: 2025-10-02 09:35:24.448945424 +0000 UTC m=+0.130993933 container start 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 05:35:24 np0005465604 podman[453267]: 2025-10-02 09:35:24.451588426 +0000 UTC m=+0.133636965 container attach 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 05:35:24 np0005465604 nova_compute[260603]: 2025-10-02 09:35:24.542 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:35:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:35:24.937 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:35:24 np0005465604 nova_compute[260603]: 2025-10-02 09:35:24.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:24 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:35:24.941 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:35:25 np0005465604 systemd-logind[787]: New session 55 of user zuul.
Oct  2 05:35:25 np0005465604 systemd[1]: Started Session 55 of User zuul.
Oct  2 05:35:25 np0005465604 cool_bartik[453283]: {
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:    "0": [
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:        {
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "devices": [
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "/dev/loop3"
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            ],
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_name": "ceph_lv0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_size": "21470642176",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "name": "ceph_lv0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "tags": {
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.cluster_name": "ceph",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.crush_device_class": "",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.encrypted": "0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.osd_id": "0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.type": "block",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.vdo": "0"
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            },
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "type": "block",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "vg_name": "ceph_vg0"
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:        }
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:    ],
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:    "1": [
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:        {
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "devices": [
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "/dev/loop4"
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            ],
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_name": "ceph_lv1",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_size": "21470642176",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "name": "ceph_lv1",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "tags": {
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.cluster_name": "ceph",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.crush_device_class": "",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.encrypted": "0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.osd_id": "1",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.type": "block",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.vdo": "0"
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            },
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "type": "block",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "vg_name": "ceph_vg1"
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:        }
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:    ],
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:    "2": [
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:        {
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "devices": [
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "/dev/loop5"
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            ],
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_name": "ceph_lv2",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_size": "21470642176",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "name": "ceph_lv2",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "tags": {
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.cluster_name": "ceph",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.crush_device_class": "",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.encrypted": "0",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.osd_id": "2",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.type": "block",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:                "ceph.vdo": "0"
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            },
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "type": "block",
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:            "vg_name": "ceph_vg2"
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:        }
Oct  2 05:35:25 np0005465604 cool_bartik[453283]:    ]
Oct  2 05:35:25 np0005465604 cool_bartik[453283]: }
Oct  2 05:35:25 np0005465604 systemd[1]: libpod-70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09.scope: Deactivated successfully.
Oct  2 05:35:25 np0005465604 podman[453267]: 2025-10-02 09:35:25.222432343 +0000 UTC m=+0.904480872 container died 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:35:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-912a3acbf25b788b494844aef9e336c8a45c7f32473ef1be82cf53a811c7eef0-merged.mount: Deactivated successfully.
Oct  2 05:35:25 np0005465604 podman[453267]: 2025-10-02 09:35:25.308620216 +0000 UTC m=+0.990668725 container remove 70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:35:25 np0005465604 systemd[1]: libpod-conmon-70980f8f1e2ccf99bb47032098fdb8a4062cb5a86f51da5d9d5fa396fbcd3f09.scope: Deactivated successfully.
Oct  2 05:35:25 np0005465604 nova_compute[260603]: 2025-10-02 09:35:25.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:35:25 np0005465604 nova_compute[260603]: 2025-10-02 09:35:25.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:25 np0005465604 systemd[1]: Reloading.
Oct  2 05:35:25 np0005465604 podman[453577]: 2025-10-02 09:35:25.910461833 +0000 UTC m=+0.041372972 container create 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 05:35:25 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 05:35:25 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 05:35:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:25 np0005465604 podman[453577]: 2025-10-02 09:35:25.895373492 +0000 UTC m=+0.026284661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:35:26 np0005465604 systemd[1]: Started libpod-conmon-8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0.scope.
Oct  2 05:35:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:35:26 np0005465604 podman[453577]: 2025-10-02 09:35:26.282739381 +0000 UTC m=+0.413650570 container init 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 05:35:26 np0005465604 podman[453577]: 2025-10-02 09:35:26.294673484 +0000 UTC m=+0.425584623 container start 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:35:26 np0005465604 podman[453577]: 2025-10-02 09:35:26.297386009 +0000 UTC m=+0.428297168 container attach 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct  2 05:35:26 np0005465604 eager_cerf[453628]: 167 167
Oct  2 05:35:26 np0005465604 systemd[1]: libpod-8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0.scope: Deactivated successfully.
Oct  2 05:35:26 np0005465604 podman[453577]: 2025-10-02 09:35:26.300724403 +0000 UTC m=+0.431635542 container died 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 05:35:26 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2e9349e0f8777708b5ad3d61bfcd99cb21312072bbc01ce249904e1f4936db1b-merged.mount: Deactivated successfully.
Oct  2 05:35:26 np0005465604 podman[453577]: 2025-10-02 09:35:26.338967318 +0000 UTC m=+0.469878457 container remove 8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:35:26 np0005465604 systemd[1]: libpod-conmon-8cf3298ddacd21764e12086e710582a9752fd477727b5d1f1aca82b0a12ac5d0.scope: Deactivated successfully.
Oct  2 05:35:26 np0005465604 systemd[1]: Reloading.
Oct  2 05:35:26 np0005465604 podman[453655]: 2025-10-02 09:35:26.508340858 +0000 UTC m=+0.038904756 container create c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:35:26 np0005465604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 05:35:26 np0005465604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 05:35:26 np0005465604 podman[453655]: 2025-10-02 09:35:26.492250676 +0000 UTC m=+0.022814574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:35:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:26 np0005465604 systemd[1]: Started libpod-conmon-c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502.scope.
Oct  2 05:35:26 np0005465604 systemd[1]: Starting Podman API Socket...
Oct  2 05:35:26 np0005465604 systemd[1]: Listening on Podman API Socket.
Oct  2 05:35:26 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:35:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ff0404dd1820922daaec999bda2d115bcbf1c934d9bcf9e9e459dd879651d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ff0404dd1820922daaec999bda2d115bcbf1c934d9bcf9e9e459dd879651d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ff0404dd1820922daaec999bda2d115bcbf1c934d9bcf9e9e459dd879651d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:26 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ff0404dd1820922daaec999bda2d115bcbf1c934d9bcf9e9e459dd879651d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:35:26 np0005465604 podman[453655]: 2025-10-02 09:35:26.847612425 +0000 UTC m=+0.378176323 container init c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:35:26 np0005465604 podman[453655]: 2025-10-02 09:35:26.858137784 +0000 UTC m=+0.388701662 container start c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:35:26 np0005465604 podman[453655]: 2025-10-02 09:35:26.860990493 +0000 UTC m=+0.391554421 container attach c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 05:35:26 np0005465604 dbus-broker-launch[772]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Oct  2 05:35:26 np0005465604 systemd[1]: podman.socket: Deactivated successfully.
Oct  2 05:35:26 np0005465604 systemd[1]: Closed Podman API Socket.
Oct  2 05:35:26 np0005465604 systemd[1]: Stopping Podman API Socket...
Oct  2 05:35:26 np0005465604 systemd[1]: Starting Podman API Socket...
Oct  2 05:35:27 np0005465604 systemd[1]: Listening on Podman API Socket.
Oct  2 05:35:27 np0005465604 systemd-logind[787]: New session 56 of user zuul.
Oct  2 05:35:27 np0005465604 systemd[1]: Started Session 56 of User zuul.
Oct  2 05:35:27 np0005465604 systemd[1]: Starting Podman API Service...
Oct  2 05:35:27 np0005465604 systemd[1]: Started Podman API Service.
Oct  2 05:35:27 np0005465604 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct  2 05:35:27 np0005465604 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="Setting parallel job count to 25"
Oct  2 05:35:27 np0005465604 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="Using sqlite as database backend"
Oct  2 05:35:27 np0005465604 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct  2 05:35:27 np0005465604 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct  2 05:35:27 np0005465604 podman[453737]: time="2025-10-02T09:35:27Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct  2 05:35:27 np0005465604 podman[453737]: @ - - [02/Oct/2025:09:35:27 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct  2 05:35:27 np0005465604 podman[453737]: @ - - [02/Oct/2025:09:35:27 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 29027 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct  2 05:35:27 np0005465604 nova_compute[260603]: 2025-10-02 09:35:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:35:27 np0005465604 nova_compute[260603]: 2025-10-02 09:35:27.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:35:27 np0005465604 nova_compute[260603]: 2025-10-02 09:35:27.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:35:27 np0005465604 nova_compute[260603]: 2025-10-02 09:35:27.596 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:35:27 np0005465604 busy_lalande[453706]: {
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "osd_id": 2,
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "type": "bluestore"
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:    },
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "osd_id": 1,
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "type": "bluestore"
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:    },
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "osd_id": 0,
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:        "type": "bluestore"
Oct  2 05:35:27 np0005465604 busy_lalande[453706]:    }
Oct  2 05:35:27 np0005465604 busy_lalande[453706]: }
Oct  2 05:35:27 np0005465604 systemd[1]: libpod-c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502.scope: Deactivated successfully.
Oct  2 05:35:27 np0005465604 systemd[1]: libpod-c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502.scope: Consumed 1.028s CPU time.
Oct  2 05:35:27 np0005465604 podman[453655]: 2025-10-02 09:35:27.895274418 +0000 UTC m=+1.425838356 container died c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 05:35:27 np0005465604 systemd[1]: var-lib-containers-storage-overlay-95ff0404dd1820922daaec999bda2d115bcbf1c934d9bcf9e9e459dd879651d2-merged.mount: Deactivated successfully.
Oct  2 05:35:27 np0005465604 podman[453655]: 2025-10-02 09:35:27.961098324 +0000 UTC m=+1.491662192 container remove c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:35:27 np0005465604 systemd[1]: libpod-conmon-c99b4dd63427d753abb702838db3206de99c67b08f77d344ef8d545db66c0502.scope: Deactivated successfully.
Oct  2 05:35:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:35:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:35:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:35:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:35:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:35:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:35:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:35:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:35:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:35:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d3513ca8-b3b6-4f7f-9508-e0268f6b418a does not exist
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 27d81dc5-0665-4523-ae30-8ea9e84e4bb4 does not exist
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:35:28
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'volumes', 'backups', 'cephfs.cephfs.meta']
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:35:28 np0005465604 podman[453817]: 2025-10-02 09:35:28.165473217 +0000 UTC m=+0.061037037 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 05:35:28 np0005465604 podman[453816]: 2025-10-02 09:35:28.183642056 +0000 UTC m=+0.097085074 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct  2 05:35:28 np0005465604 nova_compute[260603]: 2025-10-02 09:35:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:35:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:35:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:35:29 np0005465604 nova_compute[260603]: 2025-10-02 09:35:29.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:29 np0005465604 nova_compute[260603]: 2025-10-02 09:35:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:35:29 np0005465604 nova_compute[260603]: 2025-10-02 09:35:29.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:35:29 np0005465604 nova_compute[260603]: 2025-10-02 09:35:29.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:35:29 np0005465604 nova_compute[260603]: 2025-10-02 09:35:29.557 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:35:29 np0005465604 nova_compute[260603]: 2025-10-02 09:35:29.558 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:35:29 np0005465604 nova_compute[260603]: 2025-10-02 09:35:29.558 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:35:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:35:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639851267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.094 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.302 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.303 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3536MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.303 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.304 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.483 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.485 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.539 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.579 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.580 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.608 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.644 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 05:35:30 np0005465604 nova_compute[260603]: 2025-10-02 09:35:30.677 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:35:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:35:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797151839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:35:31 np0005465604 nova_compute[260603]: 2025-10-02 09:35:31.207 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:35:31 np0005465604 nova_compute[260603]: 2025-10-02 09:35:31.213 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:35:31 np0005465604 nova_compute[260603]: 2025-10-02 09:35:31.253 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:35:31 np0005465604 nova_compute[260603]: 2025-10-02 09:35:31.254 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:35:31 np0005465604 nova_compute[260603]: 2025-10-02 09:35:31.254 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:35:32 np0005465604 nova_compute[260603]: 2025-10-02 09:35:32.252 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:35:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:32 np0005465604 podman[453954]: 2025-10-02 09:35:32.996988079 +0000 UTC m=+0.058209419 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 05:35:33 np0005465604 podman[453953]: 2025-10-02 09:35:33.005697241 +0000 UTC m=+0.071478633 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:35:34 np0005465604 nova_compute[260603]: 2025-10-02 09:35:34.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:34 np0005465604 nova_compute[260603]: 2025-10-02 09:35:34.625 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:35:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:35:34.874 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:35:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:35:34.874 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:35:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:35:34.874 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:35:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:35:34.943 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:35:35 np0005465604 nova_compute[260603]: 2025-10-02 09:35:35.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:36 np0005465604 nova_compute[260603]: 2025-10-02 09:35:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:35:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:39 np0005465604 nova_compute[260603]: 2025-10-02 09:35:39.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:35:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:35:40 np0005465604 nova_compute[260603]: 2025-10-02 09:35:40.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:40 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:42 np0005465604 podman[453737]: time="2025-10-02T09:35:42Z" level=info msg="Received shutdown.Stop(), terminating!" PID=453737
Oct  2 05:35:42 np0005465604 systemd[1]: podman.service: Deactivated successfully.
Oct  2 05:35:42 np0005465604 nova_compute[260603]: 2025-10-02 09:35:42.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:35:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:44 np0005465604 nova_compute[260603]: 2025-10-02 09:35:44.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:45 np0005465604 nova_compute[260603]: 2025-10-02 09:35:45.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:49 np0005465604 nova_compute[260603]: 2025-10-02 09:35:49.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:50 np0005465604 nova_compute[260603]: 2025-10-02 09:35:50.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:54 np0005465604 nova_compute[260603]: 2025-10-02 09:35:54.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:55 np0005465604 nova_compute[260603]: 2025-10-02 09:35:55.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:35:55 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:35:56 np0005465604 nova_compute[260603]: 2025-10-02 09:35:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:35:56 np0005465604 nova_compute[260603]: 2025-10-02 09:35:56.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:35:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:35:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:35:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:35:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:35:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:35:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:35:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:35:59 np0005465604 podman[454042]: 2025-10-02 09:35:59.048628549 +0000 UTC m=+0.104343390 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct  2 05:35:59 np0005465604 podman[454041]: 2025-10-02 09:35:59.122507157 +0000 UTC m=+0.176504055 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 05:35:59 np0005465604 nova_compute[260603]: 2025-10-02 09:35:59.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:00 np0005465604 nova_compute[260603]: 2025-10-02 09:36:00.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:02 np0005465604 systemd[1]: session-55.scope: Deactivated successfully.
Oct  2 05:36:02 np0005465604 systemd[1]: session-55.scope: Consumed 1.173s CPU time.
Oct  2 05:36:02 np0005465604 systemd-logind[787]: Session 55 logged out. Waiting for processes to exit.
Oct  2 05:36:02 np0005465604 systemd-logind[787]: Removed session 55.
Oct  2 05:36:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:03 np0005465604 systemd[1]: session-56.scope: Deactivated successfully.
Oct  2 05:36:03 np0005465604 systemd-logind[787]: Session 56 logged out. Waiting for processes to exit.
Oct  2 05:36:03 np0005465604 systemd-logind[787]: Removed session 56.
Oct  2 05:36:03 np0005465604 podman[454083]: 2025-10-02 09:36:03.992116867 +0000 UTC m=+0.056523806 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 05:36:03 np0005465604 podman[454084]: 2025-10-02 09:36:03.993270584 +0000 UTC m=+0.054943867 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 05:36:04 np0005465604 nova_compute[260603]: 2025-10-02 09:36:04.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:05 np0005465604 nova_compute[260603]: 2025-10-02 09:36:05.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:05 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:09 np0005465604 nova_compute[260603]: 2025-10-02 09:36:09.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:10 np0005465604 nova_compute[260603]: 2025-10-02 09:36:10.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:10 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:14 np0005465604 nova_compute[260603]: 2025-10-02 09:36:14.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:15 np0005465604 nova_compute[260603]: 2025-10-02 09:36:15.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:15 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:19 np0005465604 nova_compute[260603]: 2025-10-02 09:36:19.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:20 np0005465604 nova_compute[260603]: 2025-10-02 09:36:20.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:36:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078246163' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:36:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:36:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078246163' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:36:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:24 np0005465604 nova_compute[260603]: 2025-10-02 09:36:24.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:25 np0005465604 nova_compute[260603]: 2025-10-02 09:36:25.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.010511) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786010551, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1449, "num_deletes": 250, "total_data_size": 2307191, "memory_usage": 2343536, "flush_reason": "Manual Compaction"}
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786026315, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1325255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72616, "largest_seqno": 74064, "table_properties": {"data_size": 1320273, "index_size": 2313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13040, "raw_average_key_size": 20, "raw_value_size": 1309310, "raw_average_value_size": 2074, "num_data_blocks": 106, "num_entries": 631, "num_filter_entries": 631, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397630, "oldest_key_time": 1759397630, "file_creation_time": 1759397786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 15848 microseconds, and 3945 cpu microseconds.
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.026356) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1325255 bytes OK
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.026375) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.028026) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.028038) EVENT_LOG_v1 {"time_micros": 1759397786028034, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.028056) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 2300830, prev total WAL file size 2300830, number of live WAL files 2.
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.028714) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303033' seq:72057594037927935, type:22 .. '6D6772737461740033323534' seq:0, type:0; will stop at (end)
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1294KB)], [173(10MB)]
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786028778, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12348341, "oldest_snapshot_seqno": -1}
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9050 keys, 9994484 bytes, temperature: kUnknown
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786085082, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 9994484, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9939054, "index_size": 31701, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22661, "raw_key_size": 236914, "raw_average_key_size": 26, "raw_value_size": 9782658, "raw_average_value_size": 1080, "num_data_blocks": 1226, "num_entries": 9050, "num_filter_entries": 9050, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397786, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.085390) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 9994484 bytes
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.086538) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.9 rd, 177.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.5 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(16.9) write-amplify(7.5) OK, records in: 9490, records dropped: 440 output_compression: NoCompression
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.086566) EVENT_LOG_v1 {"time_micros": 1759397786086553, "job": 108, "event": "compaction_finished", "compaction_time_micros": 56398, "compaction_time_cpu_micros": 26101, "output_level": 6, "num_output_files": 1, "total_output_size": 9994484, "num_input_records": 9490, "num_output_records": 9050, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786087166, "job": 108, "event": "table_file_deletion", "file_number": 175}
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397786090807, "job": 108, "event": "table_file_deletion", "file_number": 173}
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.028657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.090896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.090902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.090904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.090906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:36:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:36:26.090908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:36:26 np0005465604 nova_compute[260603]: 2025-10-02 09:36:26.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:36:26 np0005465604 nova_compute[260603]: 2025-10-02 09:36:26.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:36:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:36:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:36:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:36:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:36:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:36:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:36:28
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', '.mgr', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log']
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:36:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:36:29 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d9cdc942-8dd5-45a3-877b-d41652b29fc8 does not exist
Oct  2 05:36:29 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 9baea1fc-0a59-4b88-8495-efc78f0f9115 does not exist
Oct  2 05:36:29 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b0f0a5af-613b-456d-9e9c-d6f4ad404315 does not exist
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:36:29 np0005465604 podman[454280]: 2025-10-02 09:36:29.188932739 +0000 UTC m=+0.065444735 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct  2 05:36:29 np0005465604 podman[454325]: 2025-10-02 09:36:29.334030331 +0000 UTC m=+0.119023099 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.541 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.543 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.544 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.570 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.570 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.570 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.571 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:36:29 np0005465604 nova_compute[260603]: 2025-10-02 09:36:29.571 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:36:29 np0005465604 podman[454454]: 2025-10-02 09:36:29.763572788 +0000 UTC m=+0.044124600 container create ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:36:29 np0005465604 systemd[1]: Started libpod-conmon-ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744.scope.
Oct  2 05:36:29 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:36:29 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:36:29 np0005465604 podman[454454]: 2025-10-02 09:36:29.746859376 +0000 UTC m=+0.027411208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:36:29 np0005465604 podman[454454]: 2025-10-02 09:36:29.852034781 +0000 UTC m=+0.132586673 container init ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:36:29 np0005465604 podman[454454]: 2025-10-02 09:36:29.859860265 +0000 UTC m=+0.140412077 container start ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 05:36:29 np0005465604 podman[454454]: 2025-10-02 09:36:29.863058475 +0000 UTC m=+0.143610327 container attach ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 05:36:29 np0005465604 festive_edison[454479]: 167 167
Oct  2 05:36:29 np0005465604 systemd[1]: libpod-ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744.scope: Deactivated successfully.
Oct  2 05:36:29 np0005465604 podman[454454]: 2025-10-02 09:36:29.866176563 +0000 UTC m=+0.146728365 container died ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:36:29 np0005465604 systemd[1]: var-lib-containers-storage-overlay-387e5eae8d3a0ce780c8df3a7bcea16d867333014bed0287208f957e081bb936-merged.mount: Deactivated successfully.
Oct  2 05:36:29 np0005465604 podman[454454]: 2025-10-02 09:36:29.905344726 +0000 UTC m=+0.185896538 container remove ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  2 05:36:29 np0005465604 systemd[1]: libpod-conmon-ded83a569866dde2c80576f042fa947c42e5e2330077b56d82fe00de3edb9744.scope: Deactivated successfully.
Oct  2 05:36:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:36:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289469839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.047 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:36:30 np0005465604 podman[454506]: 2025-10-02 09:36:30.141576934 +0000 UTC m=+0.068316715 container create d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:36:30 np0005465604 systemd[1]: Started libpod-conmon-d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580.scope.
Oct  2 05:36:30 np0005465604 podman[454506]: 2025-10-02 09:36:30.112904959 +0000 UTC m=+0.039644820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.217 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.219 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.219 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.219 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:36:30 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:36:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:30 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:30 np0005465604 podman[454506]: 2025-10-02 09:36:30.245260063 +0000 UTC m=+0.171999844 container init d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:36:30 np0005465604 podman[454506]: 2025-10-02 09:36:30.252702505 +0000 UTC m=+0.179442286 container start d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:36:30 np0005465604 podman[454506]: 2025-10-02 09:36:30.256129812 +0000 UTC m=+0.182869593 container attach d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.290 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.291 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.310 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:36:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2342216201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.756 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.761 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:36:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.782 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.786 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:36:30 np0005465604 nova_compute[260603]: 2025-10-02 09:36:30.787 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:36:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:31 np0005465604 hungry_poincare[454523]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:36:31 np0005465604 hungry_poincare[454523]: --> relative data size: 1.0
Oct  2 05:36:31 np0005465604 hungry_poincare[454523]: --> All data devices are unavailable
Oct  2 05:36:31 np0005465604 systemd[1]: libpod-d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580.scope: Deactivated successfully.
Oct  2 05:36:31 np0005465604 podman[454574]: 2025-10-02 09:36:31.301869235 +0000 UTC m=+0.024453514 container died d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:36:31 np0005465604 systemd[1]: var-lib-containers-storage-overlay-7f6b72941278d7c1b545ca4a74a3ec98e197af35303000d889b298b0b0598b3a-merged.mount: Deactivated successfully.
Oct  2 05:36:31 np0005465604 podman[454574]: 2025-10-02 09:36:31.38393837 +0000 UTC m=+0.106522639 container remove d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:36:31 np0005465604 systemd[1]: libpod-conmon-d7957a6218ef6186b64ae7d8f2e2c2487a3d3b3c852f8e007e505372be9c7580.scope: Deactivated successfully.
Oct  2 05:36:32 np0005465604 podman[454725]: 2025-10-02 09:36:32.002284773 +0000 UTC m=+0.045574415 container create 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 05:36:32 np0005465604 systemd[1]: Started libpod-conmon-5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d.scope.
Oct  2 05:36:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:36:32 np0005465604 podman[454725]: 2025-10-02 09:36:31.979415718 +0000 UTC m=+0.022705410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:36:32 np0005465604 podman[454725]: 2025-10-02 09:36:32.091557721 +0000 UTC m=+0.134847433 container init 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:36:32 np0005465604 podman[454725]: 2025-10-02 09:36:32.09920822 +0000 UTC m=+0.142497862 container start 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:36:32 np0005465604 compassionate_elgamal[454741]: 167 167
Oct  2 05:36:32 np0005465604 podman[454725]: 2025-10-02 09:36:32.104247718 +0000 UTC m=+0.147537380 container attach 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:36:32 np0005465604 systemd[1]: libpod-5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d.scope: Deactivated successfully.
Oct  2 05:36:32 np0005465604 podman[454725]: 2025-10-02 09:36:32.105356682 +0000 UTC m=+0.148646344 container died 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 05:36:32 np0005465604 systemd[1]: var-lib-containers-storage-overlay-bac952b325ce062a7232c736001858a2b6620db5be1f8574137ca31c14aa908f-merged.mount: Deactivated successfully.
Oct  2 05:36:32 np0005465604 podman[454725]: 2025-10-02 09:36:32.149673797 +0000 UTC m=+0.192963439 container remove 5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_elgamal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:36:32 np0005465604 systemd[1]: libpod-conmon-5c783a9b650b40bace42581af253324f0d74cf939ff953ce2ab7b0d3367bcc5d.scope: Deactivated successfully.
Oct  2 05:36:32 np0005465604 podman[454765]: 2025-10-02 09:36:32.34444192 +0000 UTC m=+0.043972905 container create c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:36:32 np0005465604 systemd[1]: Started libpod-conmon-c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b.scope.
Oct  2 05:36:32 np0005465604 podman[454765]: 2025-10-02 09:36:32.328515913 +0000 UTC m=+0.028046918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:36:32 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:36:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97add8675f6f5eec1661afc57075d6db5fecc08dfe2882f72dc64b1b0244f8c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97add8675f6f5eec1661afc57075d6db5fecc08dfe2882f72dc64b1b0244f8c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97add8675f6f5eec1661afc57075d6db5fecc08dfe2882f72dc64b1b0244f8c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:32 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97add8675f6f5eec1661afc57075d6db5fecc08dfe2882f72dc64b1b0244f8c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:32 np0005465604 podman[454765]: 2025-10-02 09:36:32.460191265 +0000 UTC m=+0.159722340 container init c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 05:36:32 np0005465604 podman[454765]: 2025-10-02 09:36:32.473553332 +0000 UTC m=+0.173084357 container start c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 05:36:32 np0005465604 podman[454765]: 2025-10-02 09:36:32.478536178 +0000 UTC m=+0.178067173 container attach c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 05:36:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:33 np0005465604 adoring_curie[454781]: {
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:    "0": [
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:        {
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "devices": [
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "/dev/loop3"
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            ],
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_name": "ceph_lv0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_size": "21470642176",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "name": "ceph_lv0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "tags": {
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.cluster_name": "ceph",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.crush_device_class": "",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.encrypted": "0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.osd_id": "0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.type": "block",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.vdo": "0"
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            },
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "type": "block",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "vg_name": "ceph_vg0"
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:        }
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:    ],
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:    "1": [
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:        {
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "devices": [
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "/dev/loop4"
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            ],
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_name": "ceph_lv1",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_size": "21470642176",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "name": "ceph_lv1",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "tags": {
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.cluster_name": "ceph",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.crush_device_class": "",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.encrypted": "0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.osd_id": "1",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.type": "block",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.vdo": "0"
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            },
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "type": "block",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "vg_name": "ceph_vg1"
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:        }
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:    ],
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:    "2": [
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:        {
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "devices": [
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "/dev/loop5"
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            ],
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_name": "ceph_lv2",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_size": "21470642176",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "name": "ceph_lv2",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "tags": {
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.cluster_name": "ceph",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.crush_device_class": "",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.encrypted": "0",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.osd_id": "2",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.type": "block",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:                "ceph.vdo": "0"
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            },
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "type": "block",
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:            "vg_name": "ceph_vg2"
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:        }
Oct  2 05:36:33 np0005465604 adoring_curie[454781]:    ]
Oct  2 05:36:33 np0005465604 adoring_curie[454781]: }
Oct  2 05:36:33 np0005465604 systemd[1]: libpod-c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b.scope: Deactivated successfully.
Oct  2 05:36:33 np0005465604 podman[454765]: 2025-10-02 09:36:33.206405593 +0000 UTC m=+0.905936608 container died c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct  2 05:36:33 np0005465604 systemd[1]: var-lib-containers-storage-overlay-97add8675f6f5eec1661afc57075d6db5fecc08dfe2882f72dc64b1b0244f8c4-merged.mount: Deactivated successfully.
Oct  2 05:36:33 np0005465604 podman[454765]: 2025-10-02 09:36:33.317904296 +0000 UTC m=+1.017435281 container remove c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 05:36:33 np0005465604 systemd[1]: libpod-conmon-c71718ca92f86180f067e368e03e98c991727a5c33e9610d861c94f3a63c568b.scope: Deactivated successfully.
Oct  2 05:36:34 np0005465604 podman[454943]: 2025-10-02 09:36:34.010910772 +0000 UTC m=+0.050286773 container create 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct  2 05:36:34 np0005465604 systemd[1]: Started libpod-conmon-79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474.scope.
Oct  2 05:36:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:36:34 np0005465604 podman[454943]: 2025-10-02 09:36:33.985342273 +0000 UTC m=+0.024718264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:36:34 np0005465604 podman[454943]: 2025-10-02 09:36:34.0973177 +0000 UTC m=+0.136693681 container init 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:36:34 np0005465604 podman[454943]: 2025-10-02 09:36:34.106787626 +0000 UTC m=+0.146163587 container start 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 05:36:34 np0005465604 youthful_burnell[454961]: 167 167
Oct  2 05:36:34 np0005465604 systemd[1]: libpod-79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474.scope: Deactivated successfully.
Oct  2 05:36:34 np0005465604 conmon[454961]: conmon 79499508fd7981656b0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474.scope/container/memory.events
Oct  2 05:36:34 np0005465604 podman[454943]: 2025-10-02 09:36:34.12610998 +0000 UTC m=+0.165485991 container attach 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct  2 05:36:34 np0005465604 podman[454943]: 2025-10-02 09:36:34.128064191 +0000 UTC m=+0.167440172 container died 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 05:36:34 np0005465604 podman[454957]: 2025-10-02 09:36:34.144064201 +0000 UTC m=+0.088506776 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 05:36:34 np0005465604 podman[454960]: 2025-10-02 09:36:34.15524826 +0000 UTC m=+0.099682765 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 05:36:34 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1f4f841618327cec25e04f596acd32e814a6fccea54e72e134f4ad5507b76753-merged.mount: Deactivated successfully.
Oct  2 05:36:34 np0005465604 podman[454943]: 2025-10-02 09:36:34.177509045 +0000 UTC m=+0.216885006 container remove 79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_burnell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 05:36:34 np0005465604 systemd[1]: libpod-conmon-79499508fd7981656b0e3c40ce4c6cc6716445ee2655cb121a1376fbee5d5474.scope: Deactivated successfully.
Oct  2 05:36:34 np0005465604 podman[455024]: 2025-10-02 09:36:34.387656879 +0000 UTC m=+0.065349132 container create 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:36:34 np0005465604 nova_compute[260603]: 2025-10-02 09:36:34.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:34 np0005465604 systemd[1]: Started libpod-conmon-241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec.scope.
Oct  2 05:36:34 np0005465604 podman[455024]: 2025-10-02 09:36:34.356861598 +0000 UTC m=+0.034553901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:36:34 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:36:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c4e113c999d36ea65e7d0ca79b35daaa4f159ed4f693ede380d91b1a5e0d6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c4e113c999d36ea65e7d0ca79b35daaa4f159ed4f693ede380d91b1a5e0d6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c4e113c999d36ea65e7d0ca79b35daaa4f159ed4f693ede380d91b1a5e0d6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:34 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49c4e113c999d36ea65e7d0ca79b35daaa4f159ed4f693ede380d91b1a5e0d6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:36:34 np0005465604 podman[455024]: 2025-10-02 09:36:34.48951512 +0000 UTC m=+0.167207343 container init 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:36:34 np0005465604 podman[455024]: 2025-10-02 09:36:34.498249604 +0000 UTC m=+0.175941827 container start 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:36:34 np0005465604 podman[455024]: 2025-10-02 09:36:34.503413475 +0000 UTC m=+0.181105738 container attach 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 05:36:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:36:34.876 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:36:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:36:34.878 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:36:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:36:34.879 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]: {
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "osd_id": 2,
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "type": "bluestore"
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:    },
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "osd_id": 1,
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "type": "bluestore"
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:    },
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "osd_id": 0,
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:        "type": "bluestore"
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]:    }
Oct  2 05:36:35 np0005465604 beautiful_hugle[455042]: }
Oct  2 05:36:35 np0005465604 nova_compute[260603]: 2025-10-02 09:36:35.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:35 np0005465604 systemd[1]: libpod-241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec.scope: Deactivated successfully.
Oct  2 05:36:35 np0005465604 podman[455024]: 2025-10-02 09:36:35.615143879 +0000 UTC m=+1.292836102 container died 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 05:36:35 np0005465604 systemd[1]: libpod-241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec.scope: Consumed 1.127s CPU time.
Oct  2 05:36:35 np0005465604 systemd[1]: var-lib-containers-storage-overlay-49c4e113c999d36ea65e7d0ca79b35daaa4f159ed4f693ede380d91b1a5e0d6b-merged.mount: Deactivated successfully.
Oct  2 05:36:35 np0005465604 podman[455024]: 2025-10-02 09:36:35.685314801 +0000 UTC m=+1.363007034 container remove 241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:36:35 np0005465604 systemd[1]: libpod-conmon-241b02a4637be2167bd349ab8dc5b839b1be5c07e9a3491e4ded5d62515d11ec.scope: Deactivated successfully.
Oct  2 05:36:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:36:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:36:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:36:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:36:35 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 2abc1c95-8a04-4b7c-ab9d-7f34caf6423d does not exist
Oct  2 05:36:35 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 74202bd4-0ee3-48ae-83d1-ff14e0e30b9a does not exist
Oct  2 05:36:35 np0005465604 nova_compute[260603]: 2025-10-02 09:36:35.784 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:36:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:36:35 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:36:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:36 np0005465604 nova_compute[260603]: 2025-10-02 09:36:36.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:36:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:39 np0005465604 nova_compute[260603]: 2025-10-02 09:36:39.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:36:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:36:40 np0005465604 nova_compute[260603]: 2025-10-02 09:36:40.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:36:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 16K writes, 74K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1346 writes, 6079 keys, 1346 commit groups, 1.0 writes per commit group, ingest: 8.71 MB, 0.01 MB/s#012Interval WAL: 1346 writes, 1346 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     72.9      1.26              0.35        54    0.023       0      0       0.0       0.0#012  L6      1/0    9.53 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    127.7    108.2      4.23              1.66        53    0.080    364K    28K       0.0       0.0#012 Sum      1/0    9.53 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     98.4    100.1      5.49              2.01       107    0.051    364K    28K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.7     76.7     75.1      0.73              0.23        10    0.073     46K   2470       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    127.7    108.2      4.23              1.66        53    0.080    364K    28K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     73.2      1.26              0.35        53    0.024       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.090, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.54 GB write, 0.08 MB/s write, 0.53 GB read, 0.08 MB/s read, 5.5 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 60.11 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000513 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3863,57.50 MB,18.913%) FilterBlock(108,1019.11 KB,0.327376%) IndexBlock(108,1.62 MB,0.531463%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 05:36:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:44 np0005465604 nova_compute[260603]: 2025-10-02 09:36:44.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:44 np0005465604 nova_compute[260603]: 2025-10-02 09:36:44.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:36:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:45 np0005465604 nova_compute[260603]: 2025-10-02 09:36:45.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:49 np0005465604 nova_compute[260603]: 2025-10-02 09:36:49.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:50 np0005465604 nova_compute[260603]: 2025-10-02 09:36:50.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:54 np0005465604 nova_compute[260603]: 2025-10-02 09:36:54.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:55 np0005465604 nova_compute[260603]: 2025-10-02 09:36:55.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:36:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:36:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:57 np0005465604 nova_compute[260603]: 2025-10-02 09:36:57.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:36:57 np0005465604 nova_compute[260603]: 2025-10-02 09:36:57.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:36:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:36:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:36:59 np0005465604 nova_compute[260603]: 2025-10-02 09:36:59.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:00 np0005465604 podman[455138]: 2025-10-02 09:37:00.008389691 +0000 UTC m=+0.073827446 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 05:37:00 np0005465604 podman[455137]: 2025-10-02 09:37:00.088144713 +0000 UTC m=+0.147441226 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  2 05:37:00 np0005465604 nova_compute[260603]: 2025-10-02 09:37:00.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:04 np0005465604 nova_compute[260603]: 2025-10-02 09:37:04.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:04 np0005465604 podman[455180]: 2025-10-02 09:37:04.999391112 +0000 UTC m=+0.069307295 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Oct  2 05:37:05 np0005465604 podman[455181]: 2025-10-02 09:37:05.006550956 +0000 UTC m=+0.075745137 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct  2 05:37:05 np0005465604 nova_compute[260603]: 2025-10-02 09:37:05.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:09 np0005465604 nova_compute[260603]: 2025-10-02 09:37:09.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:10 np0005465604 nova_compute[260603]: 2025-10-02 09:37:10.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:14 np0005465604 nova_compute[260603]: 2025-10-02 09:37:14.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:15 np0005465604 nova_compute[260603]: 2025-10-02 09:37:15.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:19 np0005465604 nova_compute[260603]: 2025-10-02 09:37:19.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:20 np0005465604 nova_compute[260603]: 2025-10-02 09:37:20.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:37:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2099761710' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:37:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:37:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2099761710' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:37:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:24 np0005465604 nova_compute[260603]: 2025-10-02 09:37:24.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:25 np0005465604 nova_compute[260603]: 2025-10-02 09:37:25.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:27 np0005465604 nova_compute[260603]: 2025-10-02 09:37:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:37:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:37:28
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'images', 'volumes', 'default.rgw.log']
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:37:28 np0005465604 nova_compute[260603]: 2025-10-02 09:37:28.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:37:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:29 np0005465604 nova_compute[260603]: 2025-10-02 09:37:29.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:29 np0005465604 nova_compute[260603]: 2025-10-02 09:37:29.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:37:29 np0005465604 nova_compute[260603]: 2025-10-02 09:37:29.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:37:29 np0005465604 nova_compute[260603]: 2025-10-02 09:37:29.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:37:29 np0005465604 nova_compute[260603]: 2025-10-02 09:37:29.534 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:37:29 np0005465604 nova_compute[260603]: 2025-10-02 09:37:29.534 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:37:29 np0005465604 nova_compute[260603]: 2025-10-02 09:37:29.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:37:29 np0005465604 nova_compute[260603]: 2025-10-02 09:37:29.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:37:29 np0005465604 nova_compute[260603]: 2025-10-02 09:37:29.561 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:37:29 np0005465604 nova_compute[260603]: 2025-10-02 09:37:29.561 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:37:29 np0005465604 nova_compute[260603]: 2025-10-02 09:37:29.562 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:37:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:37:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1781629694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:37:30 np0005465604 nova_compute[260603]: 2025-10-02 09:37:30.475 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:37:30 np0005465604 nova_compute[260603]: 2025-10-02 09:37:30.615 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:37:30 np0005465604 nova_compute[260603]: 2025-10-02 09:37:30.616 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3598MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:37:30 np0005465604 nova_compute[260603]: 2025-10-02 09:37:30.616 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:37:30 np0005465604 nova_compute[260603]: 2025-10-02 09:37:30.617 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:37:30 np0005465604 nova_compute[260603]: 2025-10-02 09:37:30.674 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:37:30 np0005465604 nova_compute[260603]: 2025-10-02 09:37:30.674 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:37:30 np0005465604 nova_compute[260603]: 2025-10-02 09:37:30.691 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:37:30 np0005465604 nova_compute[260603]: 2025-10-02 09:37:30.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:30 np0005465604 podman[455261]: 2025-10-02 09:37:30.981375973 +0000 UTC m=+0.049071524 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 05:37:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:31 np0005465604 podman[455260]: 2025-10-02 09:37:31.036373291 +0000 UTC m=+0.104934299 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 05:37:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:37:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/103037187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:37:31 np0005465604 nova_compute[260603]: 2025-10-02 09:37:31.125 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:37:31 np0005465604 nova_compute[260603]: 2025-10-02 09:37:31.131 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:37:31 np0005465604 nova_compute[260603]: 2025-10-02 09:37:31.147 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:37:31 np0005465604 nova_compute[260603]: 2025-10-02 09:37:31.149 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:37:31 np0005465604 nova_compute[260603]: 2025-10-02 09:37:31.149 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:37:32 np0005465604 nova_compute[260603]: 2025-10-02 09:37:32.134 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:37:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:33 np0005465604 nova_compute[260603]: 2025-10-02 09:37:33.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:37:34 np0005465604 nova_compute[260603]: 2025-10-02 09:37:34.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:34 np0005465604 nova_compute[260603]: 2025-10-02 09:37:34.531 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:37:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:37:34.877 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:37:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:37:34.878 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:37:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:37:34.878 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:37:35 np0005465604 nova_compute[260603]: 2025-10-02 09:37:35.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:35 np0005465604 podman[455307]: 2025-10-02 09:37:35.99371851 +0000 UTC m=+0.057536008 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:36 np0005465604 podman[455316]: 2025-10-02 09:37:36.023466579 +0000 UTC m=+0.082217079 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:37:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev bc4956c2-c904-42b5-ae0e-a09308514c6c does not exist
Oct  2 05:37:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 46b726d6-ddac-42a3-803e-797e6322c8cc does not exist
Oct  2 05:37:36 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev fa791a51-c999-42d8-b854-cf9d8a87fa66 does not exist
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:37:36 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:37:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:37 np0005465604 podman[455618]: 2025-10-02 09:37:37.181482369 +0000 UTC m=+0.051657065 container create 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:37:37 np0005465604 systemd[1]: Started libpod-conmon-366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b.scope.
Oct  2 05:37:37 np0005465604 podman[455618]: 2025-10-02 09:37:37.157454199 +0000 UTC m=+0.027628915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:37:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:37:37 np0005465604 podman[455618]: 2025-10-02 09:37:37.272699398 +0000 UTC m=+0.142874114 container init 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 05:37:37 np0005465604 podman[455618]: 2025-10-02 09:37:37.282623509 +0000 UTC m=+0.152798205 container start 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:37:37 np0005465604 blissful_cartwright[455635]: 167 167
Oct  2 05:37:37 np0005465604 systemd[1]: libpod-366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b.scope: Deactivated successfully.
Oct  2 05:37:37 np0005465604 podman[455618]: 2025-10-02 09:37:37.290114462 +0000 UTC m=+0.160289148 container attach 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:37:37 np0005465604 podman[455618]: 2025-10-02 09:37:37.292592589 +0000 UTC m=+0.162767275 container died 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:37:37 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b0061ddd83d41609e3f87091a5ce4b520b27420ed84c3818469a522cf4cc9eb6-merged.mount: Deactivated successfully.
Oct  2 05:37:37 np0005465604 podman[455618]: 2025-10-02 09:37:37.328279915 +0000 UTC m=+0.198454601 container remove 366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 05:37:37 np0005465604 systemd[1]: libpod-conmon-366650e4806777fe425430575e68900dd71b59325b8b25ca3ce06d8c390bbc0b.scope: Deactivated successfully.
Oct  2 05:37:37 np0005465604 podman[455661]: 2025-10-02 09:37:37.479121536 +0000 UTC m=+0.041044013 container create 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:37:37 np0005465604 systemd[1]: Started libpod-conmon-02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6.scope.
Oct  2 05:37:37 np0005465604 nova_compute[260603]: 2025-10-02 09:37:37.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:37:37 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:37:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:37 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:37 np0005465604 podman[455661]: 2025-10-02 09:37:37.46294567 +0000 UTC m=+0.024868167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:37:37 np0005465604 podman[455661]: 2025-10-02 09:37:37.565475233 +0000 UTC m=+0.127397710 container init 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:37:37 np0005465604 podman[455661]: 2025-10-02 09:37:37.572765001 +0000 UTC m=+0.134687478 container start 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:37:37 np0005465604 podman[455661]: 2025-10-02 09:37:37.58202429 +0000 UTC m=+0.143946787 container attach 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 05:37:38 np0005465604 charming_bose[455677]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:37:38 np0005465604 charming_bose[455677]: --> relative data size: 1.0
Oct  2 05:37:38 np0005465604 charming_bose[455677]: --> All data devices are unavailable
Oct  2 05:37:38 np0005465604 systemd[1]: libpod-02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6.scope: Deactivated successfully.
Oct  2 05:37:38 np0005465604 conmon[455677]: conmon 02768c8f3d08d9aeb2cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6.scope/container/memory.events
Oct  2 05:37:38 np0005465604 podman[455661]: 2025-10-02 09:37:38.588849028 +0000 UTC m=+1.150771505 container died 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 05:37:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-63feb6ba1f1bf8086060765c0599ab833021c989d05e14d70b52ea2542777ba5-merged.mount: Deactivated successfully.
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.082934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859083015, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 848, "num_deletes": 251, "total_data_size": 1109973, "memory_usage": 1125648, "flush_reason": "Manual Compaction"}
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859125724, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 1099206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74065, "largest_seqno": 74912, "table_properties": {"data_size": 1094940, "index_size": 1981, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9531, "raw_average_key_size": 19, "raw_value_size": 1086332, "raw_average_value_size": 2235, "num_data_blocks": 88, "num_entries": 486, "num_filter_entries": 486, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397787, "oldest_key_time": 1759397787, "file_creation_time": 1759397859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 42841 microseconds, and 4290 cpu microseconds.
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:37:39 np0005465604 podman[455661]: 2025-10-02 09:37:39.125929184 +0000 UTC m=+1.687851661 container remove 02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_bose, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.125787) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 1099206 bytes OK
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.125809) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.130912) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.130937) EVENT_LOG_v1 {"time_micros": 1759397859130929, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.130956) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 1105778, prev total WAL file size 1105778, number of live WAL files 2.
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.131463) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(1073KB)], [176(9760KB)]
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859131497, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 11093690, "oldest_snapshot_seqno": -1}
Oct  2 05:37:39 np0005465604 systemd[1]: libpod-conmon-02768c8f3d08d9aeb2cdc57b2582d0a04e5b7a601eedf4154382c09450c44cd6.scope: Deactivated successfully.
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9022 keys, 9356716 bytes, temperature: kUnknown
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859213913, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 9356716, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9302092, "index_size": 30959, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22597, "raw_key_size": 237018, "raw_average_key_size": 26, "raw_value_size": 9146773, "raw_average_value_size": 1013, "num_data_blocks": 1187, "num_entries": 9022, "num_filter_entries": 9022, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.214350) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9356716 bytes
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.217191) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.3 rd, 113.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.5 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(18.6) write-amplify(8.5) OK, records in: 9536, records dropped: 514 output_compression: NoCompression
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.217212) EVENT_LOG_v1 {"time_micros": 1759397859217202, "job": 110, "event": "compaction_finished", "compaction_time_micros": 82631, "compaction_time_cpu_micros": 26846, "output_level": 6, "num_output_files": 1, "total_output_size": 9356716, "num_input_records": 9536, "num_output_records": 9022, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859217537, "job": 110, "event": "table_file_deletion", "file_number": 178}
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397859219818, "job": 110, "event": "table_file_deletion", "file_number": 176}
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.131388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.219944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.219951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.219956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.219958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:37:39 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:37:39.219961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:37:39 np0005465604 nova_compute[260603]: 2025-10-02 09:37:39.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:37:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:37:39 np0005465604 podman[455859]: 2025-10-02 09:37:39.719853944 +0000 UTC m=+0.045549503 container create 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct  2 05:37:39 np0005465604 podman[455859]: 2025-10-02 09:37:39.698434665 +0000 UTC m=+0.024130234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:37:39 np0005465604 systemd[1]: Started libpod-conmon-70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f.scope.
Oct  2 05:37:39 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:37:39 np0005465604 podman[455859]: 2025-10-02 09:37:39.856069949 +0000 UTC m=+0.181765528 container init 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct  2 05:37:39 np0005465604 podman[455859]: 2025-10-02 09:37:39.86667529 +0000 UTC m=+0.192370849 container start 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  2 05:37:39 np0005465604 podman[455859]: 2025-10-02 09:37:39.869693754 +0000 UTC m=+0.195389373 container attach 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:37:39 np0005465604 nostalgic_swanson[455875]: 167 167
Oct  2 05:37:39 np0005465604 systemd[1]: libpod-70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f.scope: Deactivated successfully.
Oct  2 05:37:39 np0005465604 podman[455859]: 2025-10-02 09:37:39.871954425 +0000 UTC m=+0.197649984 container died 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:37:39 np0005465604 systemd[1]: var-lib-containers-storage-overlay-dc91c4092c604054701d6cd8f4f7ed461c7046899de3bbd97e8c53f11ba048f8-merged.mount: Deactivated successfully.
Oct  2 05:37:39 np0005465604 podman[455859]: 2025-10-02 09:37:39.908536048 +0000 UTC m=+0.234231607 container remove 70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 05:37:39 np0005465604 systemd[1]: libpod-conmon-70a30d042ab0801f8ff63ddd08f1b667233339bfa57b1d19ca9385414d8f928f.scope: Deactivated successfully.
Oct  2 05:37:40 np0005465604 podman[455898]: 2025-10-02 09:37:40.093810975 +0000 UTC m=+0.072127594 container create 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:37:40 np0005465604 systemd[1]: Started libpod-conmon-74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d.scope.
Oct  2 05:37:40 np0005465604 podman[455898]: 2025-10-02 09:37:40.06325159 +0000 UTC m=+0.041568269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:37:40 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:37:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8d11282fc6ab1b9e52138e58830f0048f73a1199f3f12afbd32e9d3373dfdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8d11282fc6ab1b9e52138e58830f0048f73a1199f3f12afbd32e9d3373dfdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8d11282fc6ab1b9e52138e58830f0048f73a1199f3f12afbd32e9d3373dfdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:40 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8d11282fc6ab1b9e52138e58830f0048f73a1199f3f12afbd32e9d3373dfdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:40 np0005465604 podman[455898]: 2025-10-02 09:37:40.177431156 +0000 UTC m=+0.155747765 container init 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:37:40 np0005465604 podman[455898]: 2025-10-02 09:37:40.191771124 +0000 UTC m=+0.170087713 container start 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:37:40 np0005465604 podman[455898]: 2025-10-02 09:37:40.195028766 +0000 UTC m=+0.173345355 container attach 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:37:40 np0005465604 nova_compute[260603]: 2025-10-02 09:37:40.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]: {
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:    "0": [
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:        {
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "devices": [
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "/dev/loop3"
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            ],
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_name": "ceph_lv0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_size": "21470642176",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "name": "ceph_lv0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "tags": {
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.cluster_name": "ceph",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.crush_device_class": "",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.encrypted": "0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.osd_id": "0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.type": "block",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.vdo": "0"
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            },
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "type": "block",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "vg_name": "ceph_vg0"
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:        }
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:    ],
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:    "1": [
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:        {
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "devices": [
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "/dev/loop4"
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            ],
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_name": "ceph_lv1",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_size": "21470642176",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "name": "ceph_lv1",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "tags": {
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.cluster_name": "ceph",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.crush_device_class": "",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.encrypted": "0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.osd_id": "1",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.type": "block",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.vdo": "0"
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            },
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "type": "block",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "vg_name": "ceph_vg1"
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:        }
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:    ],
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:    "2": [
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:        {
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "devices": [
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "/dev/loop5"
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            ],
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_name": "ceph_lv2",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_size": "21470642176",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "name": "ceph_lv2",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "tags": {
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.cluster_name": "ceph",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.crush_device_class": "",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.encrypted": "0",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.osd_id": "2",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.type": "block",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:                "ceph.vdo": "0"
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            },
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "type": "block",
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:            "vg_name": "ceph_vg2"
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:        }
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]:    ]
Oct  2 05:37:40 np0005465604 mystifying_khorana[455914]: }
Oct  2 05:37:40 np0005465604 systemd[1]: libpod-74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d.scope: Deactivated successfully.
Oct  2 05:37:40 np0005465604 podman[455898]: 2025-10-02 09:37:40.955821249 +0000 UTC m=+0.934137868 container died 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  2 05:37:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:41 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8d8d11282fc6ab1b9e52138e58830f0048f73a1199f3f12afbd32e9d3373dfdd-merged.mount: Deactivated successfully.
Oct  2 05:37:41 np0005465604 podman[455898]: 2025-10-02 09:37:41.202586037 +0000 UTC m=+1.180902626 container remove 74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:37:41 np0005465604 systemd[1]: libpod-conmon-74976c20248a790414c61c8c97b2e453fb1008a61800e3d3754f7a8dbf10d16d.scope: Deactivated successfully.
Oct  2 05:37:41 np0005465604 podman[456073]: 2025-10-02 09:37:41.813514919 +0000 UTC m=+0.056943140 container create dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 05:37:41 np0005465604 systemd[1]: Started libpod-conmon-dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62.scope.
Oct  2 05:37:41 np0005465604 podman[456073]: 2025-10-02 09:37:41.777839364 +0000 UTC m=+0.021267605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:37:41 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:37:41 np0005465604 podman[456073]: 2025-10-02 09:37:41.948721212 +0000 UTC m=+0.192149443 container init dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 05:37:41 np0005465604 podman[456073]: 2025-10-02 09:37:41.955642698 +0000 UTC m=+0.199070929 container start dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:37:41 np0005465604 podman[456073]: 2025-10-02 09:37:41.960345205 +0000 UTC m=+0.203773446 container attach dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  2 05:37:41 np0005465604 awesome_cerf[456089]: 167 167
Oct  2 05:37:41 np0005465604 systemd[1]: libpod-dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62.scope: Deactivated successfully.
Oct  2 05:37:41 np0005465604 conmon[456089]: conmon dbd8e49d6b8ccd7732b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62.scope/container/memory.events
Oct  2 05:37:41 np0005465604 podman[456073]: 2025-10-02 09:37:41.962400119 +0000 UTC m=+0.205828350 container died dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 05:37:42 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e1d09a04cbc87023e8acdad9e79ac530923eaf75a566e7b11e293066e4bb93ba-merged.mount: Deactivated successfully.
Oct  2 05:37:42 np0005465604 podman[456073]: 2025-10-02 09:37:42.075280695 +0000 UTC m=+0.318708906 container remove dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:37:42 np0005465604 systemd[1]: libpod-conmon-dbd8e49d6b8ccd7732b112dc1f6dfc05c938b93e612535489223884d949cab62.scope: Deactivated successfully.
Oct  2 05:37:42 np0005465604 podman[456112]: 2025-10-02 09:37:42.229938226 +0000 UTC m=+0.036778730 container create faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 05:37:42 np0005465604 systemd[1]: Started libpod-conmon-faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d.scope.
Oct  2 05:37:42 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:37:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6000cb6dbea9d37d460578e91d7d69129b36f522cc69cf6abd2db004782822/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6000cb6dbea9d37d460578e91d7d69129b36f522cc69cf6abd2db004782822/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6000cb6dbea9d37d460578e91d7d69129b36f522cc69cf6abd2db004782822/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:42 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6000cb6dbea9d37d460578e91d7d69129b36f522cc69cf6abd2db004782822/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:37:42 np0005465604 podman[456112]: 2025-10-02 09:37:42.292875971 +0000 UTC m=+0.099716505 container init faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:37:42 np0005465604 podman[456112]: 2025-10-02 09:37:42.298809226 +0000 UTC m=+0.105649740 container start faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:37:42 np0005465604 podman[456112]: 2025-10-02 09:37:42.301686006 +0000 UTC m=+0.108526540 container attach faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 05:37:42 np0005465604 podman[456112]: 2025-10-02 09:37:42.213280006 +0000 UTC m=+0.020120550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:37:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]: {
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "osd_id": 2,
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "type": "bluestore"
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:    },
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "osd_id": 1,
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "type": "bluestore"
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:    },
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "osd_id": 0,
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:        "type": "bluestore"
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]:    }
Oct  2 05:37:43 np0005465604 eloquent_blackwell[456128]: }
Oct  2 05:37:43 np0005465604 systemd[1]: libpod-faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d.scope: Deactivated successfully.
Oct  2 05:37:43 np0005465604 podman[456112]: 2025-10-02 09:37:43.21937988 +0000 UTC m=+1.026220394 container died faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 05:37:43 np0005465604 systemd[1]: var-lib-containers-storage-overlay-cf6000cb6dbea9d37d460578e91d7d69129b36f522cc69cf6abd2db004782822-merged.mount: Deactivated successfully.
Oct  2 05:37:43 np0005465604 podman[456112]: 2025-10-02 09:37:43.26483608 +0000 UTC m=+1.071676594 container remove faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:37:43 np0005465604 systemd[1]: libpod-conmon-faa187530ac9e99622aae9b43bd4c0d1dcdfd857fd66d03c194d8ab30eeab27d.scope: Deactivated successfully.
Oct  2 05:37:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:37:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:37:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:37:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:37:43 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 673050f3-1de8-4476-a834-0d7e5681b345 does not exist
Oct  2 05:37:43 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 5077c134-ed9a-44d4-b172-a0f39ff50eef does not exist
Oct  2 05:37:43 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:37:43 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:37:44 np0005465604 nova_compute[260603]: 2025-10-02 09:37:44.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:45 np0005465604 nova_compute[260603]: 2025-10-02 09:37:45.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:46 np0005465604 nova_compute[260603]: 2025-10-02 09:37:46.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:37:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:49 np0005465604 nova_compute[260603]: 2025-10-02 09:37:49.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:50 np0005465604 nova_compute[260603]: 2025-10-02 09:37:50.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:54 np0005465604 nova_compute[260603]: 2025-10-02 09:37:54.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:55 np0005465604 nova_compute[260603]: 2025-10-02 09:37:55.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:37:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:37:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:57 np0005465604 nova_compute[260603]: 2025-10-02 09:37:57.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:37:57 np0005465604 nova_compute[260603]: 2025-10-02 09:37:57.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:37:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:37:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:37:59 np0005465604 nova_compute[260603]: 2025-10-02 09:37:59.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:00 np0005465604 nova_compute[260603]: 2025-10-02 09:38:00.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:02 np0005465604 podman[456226]: 2025-10-02 09:38:02.000595998 +0000 UTC m=+0.061083689 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 05:38:02 np0005465604 podman[456225]: 2025-10-02 09:38:02.029835581 +0000 UTC m=+0.090936691 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 05:38:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:04 np0005465604 nova_compute[260603]: 2025-10-02 09:38:04.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:05 np0005465604 nova_compute[260603]: 2025-10-02 09:38:05.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:06 np0005465604 podman[456269]: 2025-10-02 09:38:06.994346984 +0000 UTC m=+0.061020817 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible)
Oct  2 05:38:06 np0005465604 podman[456270]: 2025-10-02 09:38:06.996106169 +0000 UTC m=+0.062985498 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:38:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:09 np0005465604 nova_compute[260603]: 2025-10-02 09:38:09.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:10 np0005465604 nova_compute[260603]: 2025-10-02 09:38:10.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:14 np0005465604 nova_compute[260603]: 2025-10-02 09:38:14.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:15 np0005465604 nova_compute[260603]: 2025-10-02 09:38:15.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:19 np0005465604 nova_compute[260603]: 2025-10-02 09:38:19.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:20 np0005465604 nova_compute[260603]: 2025-10-02 09:38:20.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:38:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 264 writes, 597 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s#012Interval WAL: 264 writes, 125 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:38:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:38:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334300868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:38:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:38:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334300868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:38:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:24 np0005465604 nova_compute[260603]: 2025-10-02 09:38:24.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:26 np0005465604 nova_compute[260603]: 2025-10-02 09:38:26.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.033317) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906033352, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 608, "num_deletes": 257, "total_data_size": 699141, "memory_usage": 711832, "flush_reason": "Manual Compaction"}
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906040100, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 693135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74913, "largest_seqno": 75520, "table_properties": {"data_size": 689830, "index_size": 1212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7280, "raw_average_key_size": 18, "raw_value_size": 683287, "raw_average_value_size": 1738, "num_data_blocks": 54, "num_entries": 393, "num_filter_entries": 393, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397860, "oldest_key_time": 1759397860, "file_creation_time": 1759397906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 6819 microseconds, and 3725 cpu microseconds.
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.040135) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 693135 bytes OK
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.040152) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.041582) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.041596) EVENT_LOG_v1 {"time_micros": 1759397906041591, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.041611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 695828, prev total WAL file size 695828, number of live WAL files 2.
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.042141) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323630' seq:72057594037927935, type:22 .. '6C6F676D0033353133' seq:0, type:0; will stop at (end)
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(676KB)], [179(9137KB)]
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906042222, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 10049851, "oldest_snapshot_seqno": -1}
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 8891 keys, 9938348 bytes, temperature: kUnknown
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906134998, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 9938348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9883341, "index_size": 31662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22277, "raw_key_size": 235195, "raw_average_key_size": 26, "raw_value_size": 9728963, "raw_average_value_size": 1094, "num_data_blocks": 1216, "num_entries": 8891, "num_filter_entries": 8891, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759397906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.135378) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 9938348 bytes
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.137116) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.2 rd, 107.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(28.8) write-amplify(14.3) OK, records in: 9415, records dropped: 524 output_compression: NoCompression
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.137139) EVENT_LOG_v1 {"time_micros": 1759397906137129, "job": 112, "event": "compaction_finished", "compaction_time_micros": 92863, "compaction_time_cpu_micros": 34468, "output_level": 6, "num_output_files": 1, "total_output_size": 9938348, "num_input_records": 9415, "num_output_records": 8891, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906137491, "job": 112, "event": "table_file_deletion", "file_number": 181}
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759397906139860, "job": 112, "event": "table_file_deletion", "file_number": 179}
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.041974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.139917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.139922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.139924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.139926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:38:26 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:38:26.139928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:38:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:38:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 48K writes, 17K syncs, 2.71 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 278 writes, 624 keys, 278 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s#012Interval WAL: 278 writes, 128 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:38:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:27 np0005465604 nova_compute[260603]: 2025-10-02 09:38:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:38:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:38:28
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'backups', 'vms', 'volumes', 'default.rgw.log']
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:38:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:29 np0005465604 nova_compute[260603]: 2025-10-02 09:38:29.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:38:29 np0005465604 nova_compute[260603]: 2025-10-02 09:38:29.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:38:29 np0005465604 nova_compute[260603]: 2025-10-02 09:38:29.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:38:29 np0005465604 nova_compute[260603]: 2025-10-02 09:38:29.554 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:38:29 np0005465604 nova_compute[260603]: 2025-10-02 09:38:29.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:30 np0005465604 nova_compute[260603]: 2025-10-02 09:38:30.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:38:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:31 np0005465604 nova_compute[260603]: 2025-10-02 09:38:31.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:38:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 38K writes, 145K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 338 writes, 741 keys, 338 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
Interval WAL: 338 writes, 158 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:38:31 np0005465604 nova_compute[260603]: 2025-10-02 09:38:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:38:31 np0005465604 nova_compute[260603]: 2025-10-02 09:38:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:38:31 np0005465604 nova_compute[260603]: 2025-10-02 09:38:31.712 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:38:31 np0005465604 nova_compute[260603]: 2025-10-02 09:38:31.713 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:38:31 np0005465604 nova_compute[260603]: 2025-10-02 09:38:31.713 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:38:31 np0005465604 nova_compute[260603]: 2025-10-02 09:38:31.713 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:38:31 np0005465604 nova_compute[260603]: 2025-10-02 09:38:31.714 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:38:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:38:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768344955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:38:32 np0005465604 nova_compute[260603]: 2025-10-02 09:38:32.149 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:38:32 np0005465604 nova_compute[260603]: 2025-10-02 09:38:32.344 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:38:32 np0005465604 nova_compute[260603]: 2025-10-02 09:38:32.345 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:38:32 np0005465604 nova_compute[260603]: 2025-10-02 09:38:32.345 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:38:32 np0005465604 nova_compute[260603]: 2025-10-02 09:38:32.346 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:38:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 05:38:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:32 np0005465604 nova_compute[260603]: 2025-10-02 09:38:32.925 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:38:32 np0005465604 nova_compute[260603]: 2025-10-02 09:38:32.926 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:38:33 np0005465604 nova_compute[260603]: 2025-10-02 09:38:33.010 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:38:33 np0005465604 podman[456328]: 2025-10-02 09:38:33.012058427 +0000 UTC m=+0.073018502 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller)
Oct  2 05:38:33 np0005465604 podman[456329]: 2025-10-02 09:38:33.039803923 +0000 UTC m=+0.084978755 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:38:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:38:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1913189035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:38:33 np0005465604 nova_compute[260603]: 2025-10-02 09:38:33.443 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:38:33 np0005465604 nova_compute[260603]: 2025-10-02 09:38:33.447 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:38:33 np0005465604 nova_compute[260603]: 2025-10-02 09:38:33.592 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:38:33 np0005465604 nova_compute[260603]: 2025-10-02 09:38:33.594 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:38:33 np0005465604 nova_compute[260603]: 2025-10-02 09:38:33.594 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:38:34 np0005465604 nova_compute[260603]: 2025-10-02 09:38:34.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:38:34.878 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:38:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:38:34.879 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:38:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:38:34.879 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:38:36 np0005465604 nova_compute[260603]: 2025-10-02 09:38:36.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:37 np0005465604 nova_compute[260603]: 2025-10-02 09:38:37.590 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:38:38 np0005465604 podman[456396]: 2025-10-02 09:38:38.009661485 +0000 UTC m=+0.069736749 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 05:38:38 np0005465604 podman[456395]: 2025-10-02 09:38:38.011328637 +0000 UTC m=+0.081331491 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, container_name=multipathd)
Oct  2 05:38:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:39 np0005465604 nova_compute[260603]: 2025-10-02 09:38:39.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:38:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:38:39 np0005465604 nova_compute[260603]: 2025-10-02 09:38:39.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:41 np0005465604 nova_compute[260603]: 2025-10-02 09:38:41.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:38:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev d428e1e7-b3b7-4b32-aa7f-d1ae8877e488 does not exist
Oct  2 05:38:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ed41cc08-eeda-461c-8745-178dcf9fc3de does not exist
Oct  2 05:38:44 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 31bfa002-d9ad-4d4e-b734-5f83fa37c39c does not exist
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:38:44 np0005465604 nova_compute[260603]: 2025-10-02 09:38:44.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:44 np0005465604 podman[456708]: 2025-10-02 09:38:44.805099677 +0000 UTC m=+0.035988015 container create 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:38:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:44 np0005465604 systemd[1]: Started libpod-conmon-3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4.scope.
Oct  2 05:38:44 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:38:44 np0005465604 podman[456708]: 2025-10-02 09:38:44.879667485 +0000 UTC m=+0.110555853 container init 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:38:44 np0005465604 podman[456708]: 2025-10-02 09:38:44.789630264 +0000 UTC m=+0.020518622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:38:44 np0005465604 podman[456708]: 2025-10-02 09:38:44.888527503 +0000 UTC m=+0.119415851 container start 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:38:44 np0005465604 podman[456708]: 2025-10-02 09:38:44.891360311 +0000 UTC m=+0.122248649 container attach 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct  2 05:38:44 np0005465604 silly_galois[456725]: 167 167
Oct  2 05:38:44 np0005465604 systemd[1]: libpod-3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4.scope: Deactivated successfully.
Oct  2 05:38:44 np0005465604 podman[456708]: 2025-10-02 09:38:44.894469638 +0000 UTC m=+0.125357976 container died 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:38:44 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1edcec2658f371448fcbfadf4b8a67a58585492f11dcb7dfcad02beddc837dbe-merged.mount: Deactivated successfully.
Oct  2 05:38:44 np0005465604 podman[456708]: 2025-10-02 09:38:44.933349773 +0000 UTC m=+0.164238131 container remove 3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:38:44 np0005465604 systemd[1]: libpod-conmon-3a3a0b3f3b2df26bf4c8a45490e35f2579a62aaf2450cd4a9a032e1f01f141b4.scope: Deactivated successfully.
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:38:44 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:38:45 np0005465604 podman[456748]: 2025-10-02 09:38:45.097481359 +0000 UTC m=+0.036519361 container create 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 05:38:45 np0005465604 systemd[1]: Started libpod-conmon-573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e.scope.
Oct  2 05:38:45 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:38:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:45 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:45 np0005465604 podman[456748]: 2025-10-02 09:38:45.160204258 +0000 UTC m=+0.099242280 container init 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 05:38:45 np0005465604 podman[456748]: 2025-10-02 09:38:45.167026712 +0000 UTC m=+0.106064714 container start 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  2 05:38:45 np0005465604 podman[456748]: 2025-10-02 09:38:45.171030966 +0000 UTC m=+0.110068968 container attach 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  2 05:38:45 np0005465604 podman[456748]: 2025-10-02 09:38:45.081734677 +0000 UTC m=+0.020772699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:38:46 np0005465604 nova_compute[260603]: 2025-10-02 09:38:46.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:46 np0005465604 charming_ride[456765]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:38:46 np0005465604 charming_ride[456765]: --> relative data size: 1.0
Oct  2 05:38:46 np0005465604 charming_ride[456765]: --> All data devices are unavailable
Oct  2 05:38:46 np0005465604 systemd[1]: libpod-573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e.scope: Deactivated successfully.
Oct  2 05:38:46 np0005465604 podman[456794]: 2025-10-02 09:38:46.254980453 +0000 UTC m=+0.046002537 container died 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:38:46 np0005465604 systemd[1]: var-lib-containers-storage-overlay-e2e0be225597c18a44629d179110158188554cc61b3119a06caccc909268757a-merged.mount: Deactivated successfully.
Oct  2 05:38:46 np0005465604 podman[456794]: 2025-10-02 09:38:46.316938459 +0000 UTC m=+0.107960563 container remove 573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 05:38:46 np0005465604 systemd[1]: libpod-conmon-573884b6e696c8f87bcce65bd330a2154a2c284be8e49e222a6abc0c33e05a6e.scope: Deactivated successfully.
Oct  2 05:38:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:47 np0005465604 podman[456949]: 2025-10-02 09:38:47.057781398 +0000 UTC m=+0.026722405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:38:47 np0005465604 podman[456949]: 2025-10-02 09:38:47.160062083 +0000 UTC m=+0.129003070 container create d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:38:47 np0005465604 systemd[1]: Started libpod-conmon-d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362.scope.
Oct  2 05:38:47 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:38:47 np0005465604 podman[456949]: 2025-10-02 09:38:47.314734094 +0000 UTC m=+0.283675101 container init d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:38:47 np0005465604 podman[456949]: 2025-10-02 09:38:47.320930177 +0000 UTC m=+0.289871154 container start d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 05:38:47 np0005465604 flamboyant_antonelli[456966]: 167 167
Oct  2 05:38:47 np0005465604 systemd[1]: libpod-d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362.scope: Deactivated successfully.
Oct  2 05:38:47 np0005465604 podman[456949]: 2025-10-02 09:38:47.379240749 +0000 UTC m=+0.348181746 container attach d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 05:38:47 np0005465604 podman[456949]: 2025-10-02 09:38:47.379909859 +0000 UTC m=+0.348850856 container died d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 05:38:47 np0005465604 systemd[1]: var-lib-containers-storage-overlay-d2dc370b4ecbf2587b83985f4387666a76efff99d332f92019c5ce1b4d3752ad-merged.mount: Deactivated successfully.
Oct  2 05:38:47 np0005465604 podman[456949]: 2025-10-02 09:38:47.649257042 +0000 UTC m=+0.618198029 container remove d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 05:38:47 np0005465604 systemd[1]: libpod-conmon-d05ee1f76e3b79d49c34ff45e8c30d18ce174aa192149e7833c946d80ccbf362.scope: Deactivated successfully.
Oct  2 05:38:47 np0005465604 podman[456991]: 2025-10-02 09:38:47.827382366 +0000 UTC m=+0.029847592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:38:47 np0005465604 podman[456991]: 2025-10-02 09:38:47.960952049 +0000 UTC m=+0.163417225 container create 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 05:38:48 np0005465604 systemd[1]: Started libpod-conmon-88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84.scope.
Oct  2 05:38:48 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:38:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaf873931f8cb44a6625cc9f0aa09d01c856e7dc38e3434c5bec4dd15a20a19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaf873931f8cb44a6625cc9f0aa09d01c856e7dc38e3434c5bec4dd15a20a19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaf873931f8cb44a6625cc9f0aa09d01c856e7dc38e3434c5bec4dd15a20a19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:48 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaf873931f8cb44a6625cc9f0aa09d01c856e7dc38e3434c5bec4dd15a20a19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:48 np0005465604 podman[456991]: 2025-10-02 09:38:48.376364073 +0000 UTC m=+0.578829289 container init 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:38:48 np0005465604 podman[456991]: 2025-10-02 09:38:48.388373759 +0000 UTC m=+0.590838955 container start 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:38:48 np0005465604 podman[456991]: 2025-10-02 09:38:48.495732152 +0000 UTC m=+0.698197328 container attach 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 05:38:48 np0005465604 nova_compute[260603]: 2025-10-02 09:38:48.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:38:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:49 np0005465604 lucid_black[457008]: {
Oct  2 05:38:49 np0005465604 lucid_black[457008]:    "0": [
Oct  2 05:38:49 np0005465604 lucid_black[457008]:        {
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "devices": [
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "/dev/loop3"
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            ],
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_name": "ceph_lv0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_size": "21470642176",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "name": "ceph_lv0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "tags": {
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.cluster_name": "ceph",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.crush_device_class": "",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.encrypted": "0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.osd_id": "0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.type": "block",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.vdo": "0"
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            },
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "type": "block",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "vg_name": "ceph_vg0"
Oct  2 05:38:49 np0005465604 lucid_black[457008]:        }
Oct  2 05:38:49 np0005465604 lucid_black[457008]:    ],
Oct  2 05:38:49 np0005465604 lucid_black[457008]:    "1": [
Oct  2 05:38:49 np0005465604 lucid_black[457008]:        {
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "devices": [
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "/dev/loop4"
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            ],
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_name": "ceph_lv1",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_size": "21470642176",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "name": "ceph_lv1",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "tags": {
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.cluster_name": "ceph",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.crush_device_class": "",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.encrypted": "0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.osd_id": "1",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.type": "block",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.vdo": "0"
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            },
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "type": "block",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "vg_name": "ceph_vg1"
Oct  2 05:38:49 np0005465604 lucid_black[457008]:        }
Oct  2 05:38:49 np0005465604 lucid_black[457008]:    ],
Oct  2 05:38:49 np0005465604 lucid_black[457008]:    "2": [
Oct  2 05:38:49 np0005465604 lucid_black[457008]:        {
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "devices": [
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "/dev/loop5"
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            ],
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_name": "ceph_lv2",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_size": "21470642176",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "name": "ceph_lv2",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "tags": {
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.cluster_name": "ceph",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.crush_device_class": "",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.encrypted": "0",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.osd_id": "2",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.type": "block",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:                "ceph.vdo": "0"
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            },
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "type": "block",
Oct  2 05:38:49 np0005465604 lucid_black[457008]:            "vg_name": "ceph_vg2"
Oct  2 05:38:49 np0005465604 lucid_black[457008]:        }
Oct  2 05:38:49 np0005465604 lucid_black[457008]:    ]
Oct  2 05:38:49 np0005465604 lucid_black[457008]: }
Oct  2 05:38:49 np0005465604 systemd[1]: libpod-88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84.scope: Deactivated successfully.
Oct  2 05:38:49 np0005465604 podman[456991]: 2025-10-02 09:38:49.177094494 +0000 UTC m=+1.379559700 container died 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:38:49 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3aaf873931f8cb44a6625cc9f0aa09d01c856e7dc38e3434c5bec4dd15a20a19-merged.mount: Deactivated successfully.
Oct  2 05:38:49 np0005465604 podman[456991]: 2025-10-02 09:38:49.376825632 +0000 UTC m=+1.579290808 container remove 88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_black, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 05:38:49 np0005465604 systemd[1]: libpod-conmon-88a4b7cb2e3c02433c707db6a44114c7cf3b0af7adceaed542576e8e0dcc6a84.scope: Deactivated successfully.
Oct  2 05:38:49 np0005465604 nova_compute[260603]: 2025-10-02 09:38:49.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:50 np0005465604 podman[457171]: 2025-10-02 09:38:50.010822275 +0000 UTC m=+0.045488772 container create ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:38:50 np0005465604 systemd[1]: Started libpod-conmon-ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455.scope.
Oct  2 05:38:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:38:50 np0005465604 podman[457171]: 2025-10-02 09:38:49.995412964 +0000 UTC m=+0.030079491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:38:50 np0005465604 podman[457171]: 2025-10-02 09:38:50.096625224 +0000 UTC m=+0.131291751 container init ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:38:50 np0005465604 podman[457171]: 2025-10-02 09:38:50.104933355 +0000 UTC m=+0.139599862 container start ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 05:38:50 np0005465604 podman[457171]: 2025-10-02 09:38:50.109000532 +0000 UTC m=+0.143667059 container attach ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  2 05:38:50 np0005465604 trusting_mendeleev[457187]: 167 167
Oct  2 05:38:50 np0005465604 systemd[1]: libpod-ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455.scope: Deactivated successfully.
Oct  2 05:38:50 np0005465604 podman[457171]: 2025-10-02 09:38:50.111895712 +0000 UTC m=+0.146562219 container died ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 05:38:50 np0005465604 systemd[1]: var-lib-containers-storage-overlay-cc32a0800df2293f42aa62cf28b3900145c6bb7a2702bdfaedf39c73b215e442-merged.mount: Deactivated successfully.
Oct  2 05:38:50 np0005465604 podman[457171]: 2025-10-02 09:38:50.144927134 +0000 UTC m=+0.179593641 container remove ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:38:50 np0005465604 systemd[1]: libpod-conmon-ebc804e3a0d63be1d943f87d07c062a873f555f5511a20fe04bc8c05d25e4455.scope: Deactivated successfully.
Oct  2 05:38:50 np0005465604 podman[457211]: 2025-10-02 09:38:50.315860402 +0000 UTC m=+0.050300952 container create 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  2 05:38:50 np0005465604 systemd[1]: Started libpod-conmon-990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a.scope.
Oct  2 05:38:50 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:38:50 np0005465604 podman[457211]: 2025-10-02 09:38:50.297936223 +0000 UTC m=+0.032376823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:38:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ffff26f0a0514173f06239ef4bb27c41e8dbcff558025f49aebfea3f04e9cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ffff26f0a0514173f06239ef4bb27c41e8dbcff558025f49aebfea3f04e9cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ffff26f0a0514173f06239ef4bb27c41e8dbcff558025f49aebfea3f04e9cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:50 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ffff26f0a0514173f06239ef4bb27c41e8dbcff558025f49aebfea3f04e9cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:38:50 np0005465604 podman[457211]: 2025-10-02 09:38:50.408967841 +0000 UTC m=+0.143408421 container init 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:38:50 np0005465604 podman[457211]: 2025-10-02 09:38:50.423587868 +0000 UTC m=+0.158028418 container start 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 05:38:50 np0005465604 podman[457211]: 2025-10-02 09:38:50.426416946 +0000 UTC m=+0.160857606 container attach 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 05:38:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:51 np0005465604 nova_compute[260603]: 2025-10-02 09:38:51.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:51 np0005465604 blissful_germain[457228]: {
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "osd_id": 2,
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "type": "bluestore"
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:    },
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "osd_id": 1,
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "type": "bluestore"
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:    },
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "osd_id": 0,
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:        "type": "bluestore"
Oct  2 05:38:51 np0005465604 blissful_germain[457228]:    }
Oct  2 05:38:51 np0005465604 blissful_germain[457228]: }
Oct  2 05:38:51 np0005465604 systemd[1]: libpod-990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a.scope: Deactivated successfully.
Oct  2 05:38:51 np0005465604 podman[457211]: 2025-10-02 09:38:51.40206969 +0000 UTC m=+1.136510280 container died 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:38:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-45ffff26f0a0514173f06239ef4bb27c41e8dbcff558025f49aebfea3f04e9cd-merged.mount: Deactivated successfully.
Oct  2 05:38:51 np0005465604 podman[457211]: 2025-10-02 09:38:51.57302169 +0000 UTC m=+1.307462240 container remove 990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:38:51 np0005465604 systemd[1]: libpod-conmon-990a4a3ca0cda03a58862b393ec95096c5cdee982afa2c5eef81424f0aea1d9a.scope: Deactivated successfully.
Oct  2 05:38:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:38:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:38:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:38:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:38:51 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a2b8a4bc-0058-44a4-9483-b6976efb2b6f does not exist
Oct  2 05:38:51 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 0edac61b-7317-4a3b-8463-2ee476d47451 does not exist
Oct  2 05:38:51 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:38:51 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:38:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:54 np0005465604 nova_compute[260603]: 2025-10-02 09:38:54.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:38:56 np0005465604 nova_compute[260603]: 2025-10-02 09:38:56.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:38:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:57 np0005465604 nova_compute[260603]: 2025-10-02 09:38:57.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:38:57 np0005465604 nova_compute[260603]: 2025-10-02 09:38:57.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:38:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:38:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:38:59 np0005465604 nova_compute[260603]: 2025-10-02 09:38:59.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:39:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:01 np0005465604 nova_compute[260603]: 2025-10-02 09:39:01.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:39:04 np0005465604 podman[457324]: 2025-10-02 09:39:04.02585946 +0000 UTC m=+0.079826185 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:39:04 np0005465604 podman[457323]: 2025-10-02 09:39:04.114337604 +0000 UTC m=+0.168017729 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:39:04 np0005465604 nova_compute[260603]: 2025-10-02 09:39:04.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:39:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:06 np0005465604 nova_compute[260603]: 2025-10-02 09:39:06.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:39:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:39:08 np0005465604 podman[457367]: 2025-10-02 09:39:08.997102754 +0000 UTC m=+0.057400434 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:39:09 np0005465604 podman[457366]: 2025-10-02 09:39:09.005555679 +0000 UTC m=+0.068852332 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 05:39:09 np0005465604 nova_compute[260603]: 2025-10-02 09:39:09.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:39:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:11 np0005465604 nova_compute[260603]: 2025-10-02 09:39:11.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:39:14 np0005465604 nova_compute[260603]: 2025-10-02 09:39:14.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:39:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:16 np0005465604 nova_compute[260603]: 2025-10-02 09:39:16.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:39:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct  2 05:39:18 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct  2 05:39:18 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct  2 05:39:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 3 op/s
Oct  2 05:39:19 np0005465604 nova_compute[260603]: 2025-10-02 09:39:19.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct  2 05:39:20 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct  2 05:39:20 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct  2 05:39:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.4 KiB/s rd, 3 op/s
Oct  2 05:39:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:21 np0005465604 nova_compute[260603]: 2025-10-02 09:39:21.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:39:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1108771135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:39:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:39:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1108771135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:39:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 59 op/s
Oct  2 05:39:24 np0005465604 nova_compute[260603]: 2025-10-02 09:39:24.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Oct  2 05:39:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct  2 05:39:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct  2 05:39:26 np0005465604 nova_compute[260603]: 2025-10-02 09:39:26.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:26 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct  2 05:39:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 457 KiB data, 983 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.5 KiB/s wr, 59 op/s
Oct  2 05:39:27 np0005465604 nova_compute[260603]: 2025-10-02 09:39:27.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:39:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:39:28
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes']
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:39:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct  2 05:39:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct  2 05:39:28 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:39:28 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct  2 05:39:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 16 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 2.0 MiB/s wr, 73 op/s
Oct  2 05:39:29 np0005465604 nova_compute[260603]: 2025-10-02 09:39:29.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:30 np0005465604 nova_compute[260603]: 2025-10-02 09:39:30.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:39:30 np0005465604 nova_compute[260603]: 2025-10-02 09:39:30.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:39:30 np0005465604 nova_compute[260603]: 2025-10-02 09:39:30.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:39:30 np0005465604 nova_compute[260603]: 2025-10-02 09:39:30.563 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:39:30 np0005465604 nova_compute[260603]: 2025-10-02 09:39:30.564 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:39:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 16 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.0 MiB/s wr, 17 op/s
Oct  2 05:39:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:31 np0005465604 nova_compute[260603]: 2025-10-02 09:39:31.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:31 np0005465604 nova_compute[260603]: 2025-10-02 09:39:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:39:31 np0005465604 nova_compute[260603]: 2025-10-02 09:39:31.578 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:39:31 np0005465604 nova_compute[260603]: 2025-10-02 09:39:31.579 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:39:31 np0005465604 nova_compute[260603]: 2025-10-02 09:39:31.579 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:39:31 np0005465604 nova_compute[260603]: 2025-10-02 09:39:31.579 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:39:31 np0005465604 nova_compute[260603]: 2025-10-02 09:39:31.579 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:39:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:39:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2453467086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:39:32 np0005465604 nova_compute[260603]: 2025-10-02 09:39:32.037 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:39:32 np0005465604 nova_compute[260603]: 2025-10-02 09:39:32.218 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:39:32 np0005465604 nova_compute[260603]: 2025-10-02 09:39:32.219 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3567MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:39:32 np0005465604 nova_compute[260603]: 2025-10-02 09:39:32.220 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:39:32 np0005465604 nova_compute[260603]: 2025-10-02 09:39:32.220 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:39:32 np0005465604 nova_compute[260603]: 2025-10-02 09:39:32.372 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:39:32 np0005465604 nova_compute[260603]: 2025-10-02 09:39:32.372 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:39:32 np0005465604 nova_compute[260603]: 2025-10-02 09:39:32.392 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:39:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 21 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 2.6 MiB/s wr, 74 op/s
Oct  2 05:39:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:39:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622228759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:39:32 np0005465604 nova_compute[260603]: 2025-10-02 09:39:32.871 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:39:32 np0005465604 nova_compute[260603]: 2025-10-02 09:39:32.877 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:39:33 np0005465604 nova_compute[260603]: 2025-10-02 09:39:33.051 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:39:33 np0005465604 nova_compute[260603]: 2025-10-02 09:39:33.053 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:39:33 np0005465604 nova_compute[260603]: 2025-10-02 09:39:33.053 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:39:34 np0005465604 nova_compute[260603]: 2025-10-02 09:39:34.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.3 MiB/s wr, 95 op/s
Oct  2 05:39:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:39:34.880 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:39:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:39:34.880 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:39:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:39:34.880 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:39:34 np0005465604 podman[457453]: 2025-10-02 09:39:34.987899025 +0000 UTC m=+0.055763363 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  2 05:39:35 np0005465604 podman[457452]: 2025-10-02 09:39:35.013126183 +0000 UTC m=+0.085734999 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:39:35 np0005465604 nova_compute[260603]: 2025-10-02 09:39:35.053 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:39:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:36 np0005465604 nova_compute[260603]: 2025-10-02 09:39:36.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:36 np0005465604 nova_compute[260603]: 2025-10-02 09:39:36.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:39:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 2.0 MiB/s wr, 83 op/s
Oct  2 05:39:37 np0005465604 nova_compute[260603]: 2025-10-02 09:39:37.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:39:38 np0005465604 nova_compute[260603]: 2025-10-02 09:39:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:39:38 np0005465604 nova_compute[260603]: 2025-10-02 09:39:38.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 05:39:38 np0005465604 nova_compute[260603]: 2025-10-02 09:39:38.650 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 05:39:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1.9 MiB/s wr, 74 op/s
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033308812756397733 of space, bias 1.0, pg target 0.0999264382691932 quantized to 32 (current 32)
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:39:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:39:39 np0005465604 nova_compute[260603]: 2025-10-02 09:39:39.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:39 np0005465604 podman[457498]: 2025-10-02 09:39:39.998889652 +0000 UTC m=+0.053554914 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 05:39:40 np0005465604 podman[457497]: 2025-10-02 09:39:40.035728992 +0000 UTC m=+0.086118950 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 05:39:40 np0005465604 nova_compute[260603]: 2025-10-02 09:39:40.649 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:39:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 379 KiB/s wr, 59 op/s
Oct  2 05:39:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:41 np0005465604 nova_compute[260603]: 2025-10-02 09:39:41.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 379 KiB/s wr, 59 op/s
Oct  2 05:39:44 np0005465604 nova_compute[260603]: 2025-10-02 09:39:44.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 85 B/s wr, 19 op/s
Oct  2 05:39:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:46 np0005465604 nova_compute[260603]: 2025-10-02 09:39:46.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:39:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  2 05:39:49 np0005465604 nova_compute[260603]: 2025-10-02 09:39:49.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:50 np0005465604 nova_compute[260603]: 2025-10-02 09:39:50.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:39:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  2 05:39:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:51 np0005465604 nova_compute[260603]: 2025-10-02 09:39:51.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:39:52 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev c9e6f6f9-60b0-48c2-94fc-33648c5fb986 does not exist
Oct  2 05:39:52 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev a7a3bac3-33ee-498f-9a20-2a8707f01bf8 does not exist
Oct  2 05:39:52 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6b172b12-f6ea-4706-8f6a-b7fb3f4807e4 does not exist
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:39:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:39:52 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:39:53 np0005465604 podman[457812]: 2025-10-02 09:39:53.203811863 +0000 UTC m=+0.047356770 container create 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:39:53 np0005465604 systemd[1]: Started libpod-conmon-66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f.scope.
Oct  2 05:39:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:39:53 np0005465604 podman[457812]: 2025-10-02 09:39:53.178591875 +0000 UTC m=+0.022136832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:39:53 np0005465604 podman[457812]: 2025-10-02 09:39:53.284572875 +0000 UTC m=+0.128117792 container init 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct  2 05:39:53 np0005465604 podman[457812]: 2025-10-02 09:39:53.291058688 +0000 UTC m=+0.134603595 container start 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 05:39:53 np0005465604 podman[457812]: 2025-10-02 09:39:53.294805525 +0000 UTC m=+0.138350482 container attach 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:39:53 np0005465604 quizzical_shtern[457829]: 167 167
Oct  2 05:39:53 np0005465604 systemd[1]: libpod-66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f.scope: Deactivated successfully.
Oct  2 05:39:53 np0005465604 podman[457812]: 2025-10-02 09:39:53.297690255 +0000 UTC m=+0.141235162 container died 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 05:39:53 np0005465604 systemd[1]: var-lib-containers-storage-overlay-ddbb11813b32c562f7f4f0976fb3bd93e7708ec3ea25463bcd32ce9b10e5c226-merged.mount: Deactivated successfully.
Oct  2 05:39:53 np0005465604 podman[457812]: 2025-10-02 09:39:53.34202417 +0000 UTC m=+0.185569077 container remove 66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  2 05:39:53 np0005465604 systemd[1]: libpod-conmon-66b35f68f61fb77467b0f5fa9fccdf51640238ccde0e51f68affe2c1c8bcd25f.scope: Deactivated successfully.
Oct  2 05:39:53 np0005465604 podman[457854]: 2025-10-02 09:39:53.529159225 +0000 UTC m=+0.040626570 container create edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct  2 05:39:53 np0005465604 systemd[1]: Started libpod-conmon-edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625.scope.
Oct  2 05:39:53 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:39:53 np0005465604 podman[457854]: 2025-10-02 09:39:53.512278678 +0000 UTC m=+0.023746043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:39:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:53 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:53 np0005465604 podman[457854]: 2025-10-02 09:39:53.624552635 +0000 UTC m=+0.136020000 container init edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 05:39:53 np0005465604 podman[457854]: 2025-10-02 09:39:53.633448703 +0000 UTC m=+0.144916058 container start edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct  2 05:39:53 np0005465604 podman[457854]: 2025-10-02 09:39:53.637027664 +0000 UTC m=+0.148495019 container attach edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 05:39:54 np0005465604 competent_margulis[457871]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:39:54 np0005465604 competent_margulis[457871]: --> relative data size: 1.0
Oct  2 05:39:54 np0005465604 competent_margulis[457871]: --> All data devices are unavailable
Oct  2 05:39:54 np0005465604 systemd[1]: libpod-edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625.scope: Deactivated successfully.
Oct  2 05:39:54 np0005465604 podman[457854]: 2025-10-02 09:39:54.653682328 +0000 UTC m=+1.165149693 container died edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:39:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-38077d29e507c11ac9d3436944a3c7bd109b21ae6ec5ebfb274d384b221c5fcc-merged.mount: Deactivated successfully.
Oct  2 05:39:54 np0005465604 podman[457854]: 2025-10-02 09:39:54.725609045 +0000 UTC m=+1.237076400 container remove edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:39:54 np0005465604 systemd[1]: libpod-conmon-edc4dba6f205878497286cd797793e3d9ccc1a3173c84f543572315758495625.scope: Deactivated successfully.
Oct  2 05:39:54 np0005465604 nova_compute[260603]: 2025-10-02 09:39:54.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  2 05:39:55 np0005465604 podman[458054]: 2025-10-02 09:39:55.369073164 +0000 UTC m=+0.077029457 container create cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:39:55 np0005465604 podman[458054]: 2025-10-02 09:39:55.31484199 +0000 UTC m=+0.022798343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:39:55 np0005465604 systemd[1]: Started libpod-conmon-cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1.scope.
Oct  2 05:39:55 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:39:55 np0005465604 podman[458054]: 2025-10-02 09:39:55.507145776 +0000 UTC m=+0.215102089 container init cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:39:55 np0005465604 podman[458054]: 2025-10-02 09:39:55.515379803 +0000 UTC m=+0.223336096 container start cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:39:55 np0005465604 podman[458054]: 2025-10-02 09:39:55.51848351 +0000 UTC m=+0.226439803 container attach cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:39:55 np0005465604 admiring_tharp[458070]: 167 167
Oct  2 05:39:55 np0005465604 systemd[1]: libpod-cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1.scope: Deactivated successfully.
Oct  2 05:39:55 np0005465604 conmon[458070]: conmon cd50fa1e0ca5cab949d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1.scope/container/memory.events
Oct  2 05:39:55 np0005465604 podman[458054]: 2025-10-02 09:39:55.521925418 +0000 UTC m=+0.229881751 container died cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  2 05:39:55 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f6749435fea4f7bf247fc3193571c3ec8ae43b033aad5700cc09e6a5fe94ff87-merged.mount: Deactivated successfully.
Oct  2 05:39:55 np0005465604 podman[458054]: 2025-10-02 09:39:55.573876091 +0000 UTC m=+0.281832394 container remove cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 05:39:55 np0005465604 systemd[1]: libpod-conmon-cd50fa1e0ca5cab949d1017938853e21343ff1db69ad574e3cd51774c525d1a1.scope: Deactivated successfully.
Oct  2 05:39:55 np0005465604 podman[458094]: 2025-10-02 09:39:55.743228571 +0000 UTC m=+0.046735322 container create 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 05:39:55 np0005465604 systemd[1]: Started libpod-conmon-7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb.scope.
Oct  2 05:39:55 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:39:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ed63ffb8e7bed81c510c05ab0f51ff3d24af3b241a62bb17790309372cab0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ed63ffb8e7bed81c510c05ab0f51ff3d24af3b241a62bb17790309372cab0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ed63ffb8e7bed81c510c05ab0f51ff3d24af3b241a62bb17790309372cab0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:55 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ed63ffb8e7bed81c510c05ab0f51ff3d24af3b241a62bb17790309372cab0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:55 np0005465604 podman[458094]: 2025-10-02 09:39:55.817604333 +0000 UTC m=+0.121111184 container init 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 05:39:55 np0005465604 podman[458094]: 2025-10-02 09:39:55.722317937 +0000 UTC m=+0.025824718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:39:55 np0005465604 podman[458094]: 2025-10-02 09:39:55.829163885 +0000 UTC m=+0.132670636 container start 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:39:55 np0005465604 podman[458094]: 2025-10-02 09:39:55.833959784 +0000 UTC m=+0.137466595 container attach 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:39:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:39:56 np0005465604 nova_compute[260603]: 2025-10-02 09:39:56.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:56 np0005465604 angry_noether[458110]: {
Oct  2 05:39:56 np0005465604 angry_noether[458110]:    "0": [
Oct  2 05:39:56 np0005465604 angry_noether[458110]:        {
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "devices": [
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "/dev/loop3"
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            ],
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_name": "ceph_lv0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_size": "21470642176",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "name": "ceph_lv0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "tags": {
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.cluster_name": "ceph",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.crush_device_class": "",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.encrypted": "0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.osd_id": "0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.type": "block",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.vdo": "0"
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            },
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "type": "block",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "vg_name": "ceph_vg0"
Oct  2 05:39:56 np0005465604 angry_noether[458110]:        }
Oct  2 05:39:56 np0005465604 angry_noether[458110]:    ],
Oct  2 05:39:56 np0005465604 angry_noether[458110]:    "1": [
Oct  2 05:39:56 np0005465604 angry_noether[458110]:        {
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "devices": [
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "/dev/loop4"
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            ],
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_name": "ceph_lv1",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_size": "21470642176",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "name": "ceph_lv1",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "tags": {
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.cluster_name": "ceph",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.crush_device_class": "",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.encrypted": "0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.osd_id": "1",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.type": "block",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.vdo": "0"
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            },
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "type": "block",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "vg_name": "ceph_vg1"
Oct  2 05:39:56 np0005465604 angry_noether[458110]:        }
Oct  2 05:39:56 np0005465604 angry_noether[458110]:    ],
Oct  2 05:39:56 np0005465604 angry_noether[458110]:    "2": [
Oct  2 05:39:56 np0005465604 angry_noether[458110]:        {
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "devices": [
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "/dev/loop5"
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            ],
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_name": "ceph_lv2",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_size": "21470642176",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "name": "ceph_lv2",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "tags": {
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.cluster_name": "ceph",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.crush_device_class": "",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.encrypted": "0",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.osd_id": "2",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.type": "block",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:                "ceph.vdo": "0"
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            },
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "type": "block",
Oct  2 05:39:56 np0005465604 angry_noether[458110]:            "vg_name": "ceph_vg2"
Oct  2 05:39:56 np0005465604 angry_noether[458110]:        }
Oct  2 05:39:56 np0005465604 angry_noether[458110]:    ]
Oct  2 05:39:56 np0005465604 angry_noether[458110]: }
Oct  2 05:39:56 np0005465604 systemd[1]: libpod-7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb.scope: Deactivated successfully.
Oct  2 05:39:56 np0005465604 podman[458094]: 2025-10-02 09:39:56.602449227 +0000 UTC m=+0.905955978 container died 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:39:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-935ed63ffb8e7bed81c510c05ab0f51ff3d24af3b241a62bb17790309372cab0-merged.mount: Deactivated successfully.
Oct  2 05:39:56 np0005465604 podman[458094]: 2025-10-02 09:39:56.68608207 +0000 UTC m=+0.989588821 container remove 7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:39:56 np0005465604 systemd[1]: libpod-conmon-7522544e6dc1ef75726a87ad8dc01ed6c453ca03d836b7e8bbff8a59f25a4edb.scope: Deactivated successfully.
Oct  2 05:39:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  2 05:39:57 np0005465604 podman[458271]: 2025-10-02 09:39:57.424082201 +0000 UTC m=+0.049047203 container create 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 05:39:57 np0005465604 systemd[1]: Started libpod-conmon-69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980.scope.
Oct  2 05:39:57 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:39:57 np0005465604 podman[458271]: 2025-10-02 09:39:57.489098312 +0000 UTC m=+0.114063334 container init 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 05:39:57 np0005465604 podman[458271]: 2025-10-02 09:39:57.496649458 +0000 UTC m=+0.121614440 container start 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:39:57 np0005465604 podman[458271]: 2025-10-02 09:39:57.500225849 +0000 UTC m=+0.125190881 container attach 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:39:57 np0005465604 ecstatic_swartz[458286]: 167 167
Oct  2 05:39:57 np0005465604 systemd[1]: libpod-69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980.scope: Deactivated successfully.
Oct  2 05:39:57 np0005465604 podman[458271]: 2025-10-02 09:39:57.407949097 +0000 UTC m=+0.032914089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:39:57 np0005465604 podman[458271]: 2025-10-02 09:39:57.504136231 +0000 UTC m=+0.129101223 container died 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:39:57 np0005465604 nova_compute[260603]: 2025-10-02 09:39:57.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:39:57 np0005465604 nova_compute[260603]: 2025-10-02 09:39:57.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:39:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-59c60ba4590a1c04566513d243c11741d2f28dd0f08750bd792c98eb88d72026-merged.mount: Deactivated successfully.
Oct  2 05:39:57 np0005465604 podman[458271]: 2025-10-02 09:39:57.805318219 +0000 UTC m=+0.430283211 container remove 69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:39:57 np0005465604 systemd[1]: libpod-conmon-69bdb494451e2f58a2b0dbfc929cc23cf7c366fab5aadb7691b83fd34df5d980.scope: Deactivated successfully.
Oct  2 05:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:39:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:39:58 np0005465604 podman[458311]: 2025-10-02 09:39:58.00705196 +0000 UTC m=+0.047694911 container create 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct  2 05:39:58 np0005465604 systemd[1]: Started libpod-conmon-4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d.scope.
Oct  2 05:39:58 np0005465604 podman[458311]: 2025-10-02 09:39:57.989220873 +0000 UTC m=+0.029863854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:39:58 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:39:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d6608643fe287ec4e643a54d7282d22117864ebeff6afd9d365648699a2b85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d6608643fe287ec4e643a54d7282d22117864ebeff6afd9d365648699a2b85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d6608643fe287ec4e643a54d7282d22117864ebeff6afd9d365648699a2b85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:58 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d6608643fe287ec4e643a54d7282d22117864ebeff6afd9d365648699a2b85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:39:58 np0005465604 podman[458311]: 2025-10-02 09:39:58.124944602 +0000 UTC m=+0.165587573 container init 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 05:39:58 np0005465604 podman[458311]: 2025-10-02 09:39:58.137274448 +0000 UTC m=+0.177917389 container start 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 05:39:58 np0005465604 podman[458311]: 2025-10-02 09:39:58.14118947 +0000 UTC m=+0.181832511 container attach 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 05:39:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]: {
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "osd_id": 2,
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "type": "bluestore"
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:    },
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "osd_id": 1,
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "type": "bluestore"
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:    },
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "osd_id": 0,
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:        "type": "bluestore"
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]:    }
Oct  2 05:39:59 np0005465604 xenodochial_boyd[458327]: }
Oct  2 05:39:59 np0005465604 systemd[1]: libpod-4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d.scope: Deactivated successfully.
Oct  2 05:39:59 np0005465604 podman[458311]: 2025-10-02 09:39:59.081868271 +0000 UTC m=+1.122511212 container died 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:39:59 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c3d6608643fe287ec4e643a54d7282d22117864ebeff6afd9d365648699a2b85-merged.mount: Deactivated successfully.
Oct  2 05:39:59 np0005465604 podman[458311]: 2025-10-02 09:39:59.133467113 +0000 UTC m=+1.174110054 container remove 4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:39:59 np0005465604 systemd[1]: libpod-conmon-4cf3ae9872ea2ea67749b1e4fe0ea3111f1247b5e2c9679392a77d74b9d4346d.scope: Deactivated successfully.
Oct  2 05:39:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:39:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:39:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:39:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:39:59 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 92759472-e73a-48dd-b1a6-5407d4aeaa46 does not exist
Oct  2 05:39:59 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 99a78ea7-e179-46bb-a30b-c3302844269d does not exist
Oct  2 05:39:59 np0005465604 nova_compute[260603]: 2025-10-02 09:39:59.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:39:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:39:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:40:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:01 np0005465604 nova_compute[260603]: 2025-10-02 09:40:01.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:04 np0005465604 nova_compute[260603]: 2025-10-02 09:40:04.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:05 np0005465604 nova_compute[260603]: 2025-10-02 09:40:05.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:06 np0005465604 podman[458426]: 2025-10-02 09:40:06.025623198 +0000 UTC m=+0.080337851 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 05:40:06 np0005465604 podman[458425]: 2025-10-02 09:40:06.046107107 +0000 UTC m=+0.106794877 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 05:40:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:06 np0005465604 nova_compute[260603]: 2025-10-02 09:40:06.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:09 np0005465604 nova_compute[260603]: 2025-10-02 09:40:09.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:10 np0005465604 podman[458472]: 2025-10-02 09:40:10.99811313 +0000 UTC m=+0.066483027 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:40:11 np0005465604 podman[458471]: 2025-10-02 09:40:11.000884837 +0000 UTC m=+0.071091021 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:40:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:11 np0005465604 nova_compute[260603]: 2025-10-02 09:40:11.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:12 np0005465604 nova_compute[260603]: 2025-10-02 09:40:12.553 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:12 np0005465604 nova_compute[260603]: 2025-10-02 09:40:12.554 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 05:40:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:14 np0005465604 nova_compute[260603]: 2025-10-02 09:40:14.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:16 np0005465604 nova_compute[260603]: 2025-10-02 09:40:16.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:20 np0005465604 nova_compute[260603]: 2025-10-02 09:40:20.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:21 np0005465604 nova_compute[260603]: 2025-10-02 09:40:21.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:40:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564754252' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:40:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:40:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564754252' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:40:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:25 np0005465604 nova_compute[260603]: 2025-10-02 09:40:25.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:26 np0005465604 nova_compute[260603]: 2025-10-02 09:40:26.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct  2 05:40:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct  2 05:40:26 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct  2 05:40:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:40:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:40:28
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:40:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 457 KiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct  2 05:40:29 np0005465604 nova_compute[260603]: 2025-10-02 09:40:29.549 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:30 np0005465604 nova_compute[260603]: 2025-10-02 09:40:30.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:30 np0005465604 nova_compute[260603]: 2025-10-02 09:40:30.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 457 KiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct  2 05:40:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct  2 05:40:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct  2 05:40:31 np0005465604 ceph-mon[74477]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct  2 05:40:31 np0005465604 nova_compute[260603]: 2025-10-02 09:40:31.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:31 np0005465604 nova_compute[260603]: 2025-10-02 09:40:31.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:31 np0005465604 nova_compute[260603]: 2025-10-02 09:40:31.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:40:31 np0005465604 nova_compute[260603]: 2025-10-02 09:40:31.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:40:31 np0005465604 nova_compute[260603]: 2025-10-02 09:40:31.870 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:40:31 np0005465604 nova_compute[260603]: 2025-10-02 09:40:31.871 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:32 np0005465604 nova_compute[260603]: 2025-10-02 09:40:32.119 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:40:32 np0005465604 nova_compute[260603]: 2025-10-02 09:40:32.119 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:40:32 np0005465604 nova_compute[260603]: 2025-10-02 09:40:32.119 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:40:32 np0005465604 nova_compute[260603]: 2025-10-02 09:40:32.120 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:40:32 np0005465604 nova_compute[260603]: 2025-10-02 09:40:32.120 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:40:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:40:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3912608870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:40:32 np0005465604 nova_compute[260603]: 2025-10-02 09:40:32.578 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:40:32 np0005465604 nova_compute[260603]: 2025-10-02 09:40:32.745 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:40:32 np0005465604 nova_compute[260603]: 2025-10-02 09:40:32.746 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:40:32 np0005465604 nova_compute[260603]: 2025-10-02 09:40:32.747 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:40:32 np0005465604 nova_compute[260603]: 2025-10-02 09:40:32.747 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:40:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct  2 05:40:33 np0005465604 nova_compute[260603]: 2025-10-02 09:40:33.748 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:40:33 np0005465604 nova_compute[260603]: 2025-10-02 09:40:33.748 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:40:33 np0005465604 nova_compute[260603]: 2025-10-02 09:40:33.762 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing inventories for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 05:40:33 np0005465604 nova_compute[260603]: 2025-10-02 09:40:33.778 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating ProviderTree inventory for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 05:40:33 np0005465604 nova_compute[260603]: 2025-10-02 09:40:33.778 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Updating inventory in ProviderTree for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 05:40:33 np0005465604 nova_compute[260603]: 2025-10-02 09:40:33.792 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing aggregate associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 05:40:33 np0005465604 nova_compute[260603]: 2025-10-02 09:40:33.820 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Refreshing trait associations for resource provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27, traits: HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 05:40:33 np0005465604 nova_compute[260603]: 2025-10-02 09:40:33.837 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:40:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:40:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984032977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:40:34 np0005465604 nova_compute[260603]: 2025-10-02 09:40:34.309 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:40:34 np0005465604 nova_compute[260603]: 2025-10-02 09:40:34.315 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:40:34 np0005465604 nova_compute[260603]: 2025-10-02 09:40:34.517 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:40:34 np0005465604 nova_compute[260603]: 2025-10-02 09:40:34.519 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:40:34 np0005465604 nova_compute[260603]: 2025-10-02 09:40:34.520 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:40:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct  2 05:40:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:40:34.881 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:40:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:40:34.881 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:40:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:40:34.881 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:40:35 np0005465604 nova_compute[260603]: 2025-10-02 09:40:35.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:36 np0005465604 nova_compute[260603]: 2025-10-02 09:40:36.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct  2 05:40:37 np0005465604 podman[458555]: 2025-10-02 09:40:37.039572458 +0000 UTC m=+0.097345752 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:40:37 np0005465604 podman[458554]: 2025-10-02 09:40:37.039601229 +0000 UTC m=+0.102796273 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Oct  2 05:40:37 np0005465604 nova_compute[260603]: 2025-10-02 09:40:37.170 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:37 np0005465604 nova_compute[260603]: 2025-10-02 09:40:37.170 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:40:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:40:40 np0005465604 nova_compute[260603]: 2025-10-02 09:40:40.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:41 np0005465604 nova_compute[260603]: 2025-10-02 09:40:41.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:41 np0005465604 nova_compute[260603]: 2025-10-02 09:40:41.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:41 np0005465604 podman[458599]: 2025-10-02 09:40:41.998989523 +0000 UTC m=+0.072881058 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 05:40:42 np0005465604 podman[458600]: 2025-10-02 09:40:42.012391351 +0000 UTC m=+0.071846285 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 05:40:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:45 np0005465604 nova_compute[260603]: 2025-10-02 09:40:45.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:46 np0005465604 nova_compute[260603]: 2025-10-02 09:40:46.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:46 np0005465604 nova_compute[260603]: 2025-10-02 09:40:46.972 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:50 np0005465604 nova_compute[260603]: 2025-10-02 09:40:50.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:50 np0005465604 nova_compute[260603]: 2025-10-02 09:40:50.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:51 np0005465604 nova_compute[260603]: 2025-10-02 09:40:51.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:52 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:54 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:55 np0005465604 nova_compute[260603]: 2025-10-02 09:40:55.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:40:56 np0005465604 nova_compute[260603]: 2025-10-02 09:40:56.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:40:56 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:40:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:40:58 np0005465604 nova_compute[260603]: 2025-10-02 09:40:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:40:58 np0005465604 nova_compute[260603]: 2025-10-02 09:40:58.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:40:58 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:00 np0005465604 nova_compute[260603]: 2025-10-02 09:41:00.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:41:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6a4c2b3b-7293-4e4b-950c-8269007cb4f5 does not exist
Oct  2 05:41:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6c4a6cc6-f305-4579-9d7d-4c7d42820818 does not exist
Oct  2 05:41:00 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 700ba972-b5db-47f6-a4b1-47e9d8dbcc9e does not exist
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:41:00 np0005465604 podman[458909]: 2025-10-02 09:41:00.80025178 +0000 UTC m=+0.036183651 container create b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 05:41:00 np0005465604 systemd[1]: Started libpod-conmon-b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c.scope.
Oct  2 05:41:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:41:00 np0005465604 podman[458909]: 2025-10-02 09:41:00.854934608 +0000 UTC m=+0.090866479 container init b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  2 05:41:00 np0005465604 podman[458909]: 2025-10-02 09:41:00.864123855 +0000 UTC m=+0.100055726 container start b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:41:00 np0005465604 hungry_kirch[458925]: 167 167
Oct  2 05:41:00 np0005465604 systemd[1]: libpod-b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c.scope: Deactivated successfully.
Oct  2 05:41:00 np0005465604 podman[458909]: 2025-10-02 09:41:00.870140523 +0000 UTC m=+0.106072394 container attach b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 05:41:00 np0005465604 podman[458909]: 2025-10-02 09:41:00.870459883 +0000 UTC m=+0.106391744 container died b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  2 05:41:00 np0005465604 podman[458909]: 2025-10-02 09:41:00.784533829 +0000 UTC m=+0.020465700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:41:00 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8709e6475ea883e631ffa976460ef56923c8c086b7a7a87c4ea1a6a44b9ef345-merged.mount: Deactivated successfully.
Oct  2 05:41:00 np0005465604 podman[458909]: 2025-10-02 09:41:00.908961625 +0000 UTC m=+0.144893496 container remove b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kirch, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:41:00 np0005465604 systemd[1]: libpod-conmon-b3c8f06aa8fe39a7a18d8673d72773714cc824153bc93f03bec25b03aac4e73c.scope: Deactivated successfully.
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:41:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:41:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:41:01 np0005465604 podman[458947]: 2025-10-02 09:41:01.104664458 +0000 UTC m=+0.058358624 container create 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:41:01 np0005465604 systemd[1]: Started libpod-conmon-6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a.scope.
Oct  2 05:41:01 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:41:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:01 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:01 np0005465604 podman[458947]: 2025-10-02 09:41:01.086413649 +0000 UTC m=+0.040107805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:41:01 np0005465604 podman[458947]: 2025-10-02 09:41:01.194420951 +0000 UTC m=+0.148115087 container init 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:41:01 np0005465604 podman[458947]: 2025-10-02 09:41:01.201509954 +0000 UTC m=+0.155204090 container start 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  2 05:41:01 np0005465604 podman[458947]: 2025-10-02 09:41:01.204234358 +0000 UTC m=+0.157928544 container attach 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 05:41:01 np0005465604 nova_compute[260603]: 2025-10-02 09:41:01.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:02 np0005465604 wonderful_rosalind[458963]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:41:02 np0005465604 wonderful_rosalind[458963]: --> relative data size: 1.0
Oct  2 05:41:02 np0005465604 wonderful_rosalind[458963]: --> All data devices are unavailable
Oct  2 05:41:02 np0005465604 systemd[1]: libpod-6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a.scope: Deactivated successfully.
Oct  2 05:41:02 np0005465604 systemd[1]: libpod-6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a.scope: Consumed 1.096s CPU time.
Oct  2 05:41:02 np0005465604 podman[458947]: 2025-10-02 09:41:02.342509282 +0000 UTC m=+1.296203418 container died 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 05:41:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-f5f821aa36c430f4bc378429178f89174e212259e59b8554609919ebcfe40f99-merged.mount: Deactivated successfully.
Oct  2 05:41:02 np0005465604 podman[458947]: 2025-10-02 09:41:02.396490998 +0000 UTC m=+1.350185144 container remove 6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:41:02 np0005465604 systemd[1]: libpod-conmon-6d61fa26d2b8ea0085735b0b23d372ba8503518b41604b962d8c7221dee9fa2a.scope: Deactivated successfully.
Oct  2 05:41:02 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:02 np0005465604 podman[459143]: 2025-10-02 09:41:02.985532137 +0000 UTC m=+0.035169721 container create 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct  2 05:41:03 np0005465604 systemd[1]: Started libpod-conmon-8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2.scope.
Oct  2 05:41:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:41:03 np0005465604 podman[459143]: 2025-10-02 09:41:03.045900262 +0000 UTC m=+0.095537866 container init 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 05:41:03 np0005465604 podman[459143]: 2025-10-02 09:41:03.053022234 +0000 UTC m=+0.102659818 container start 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  2 05:41:03 np0005465604 podman[459143]: 2025-10-02 09:41:03.056682749 +0000 UTC m=+0.106320353 container attach 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 05:41:03 np0005465604 wizardly_hermann[459159]: 167 167
Oct  2 05:41:03 np0005465604 systemd[1]: libpod-8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2.scope: Deactivated successfully.
Oct  2 05:41:03 np0005465604 podman[459143]: 2025-10-02 09:41:03.05799392 +0000 UTC m=+0.107631514 container died 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:41:03 np0005465604 podman[459143]: 2025-10-02 09:41:02.971021323 +0000 UTC m=+0.020658927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:41:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-3c79659304bf1242f4ec1813c1a94576864c918c0ce5f286ca94443790ea37f7-merged.mount: Deactivated successfully.
Oct  2 05:41:03 np0005465604 podman[459143]: 2025-10-02 09:41:03.092057184 +0000 UTC m=+0.141694768 container remove 8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:41:03 np0005465604 systemd[1]: libpod-conmon-8d1613cd8988628f62d7d94bdf6d23966c34f30c1fe3a0cc77e0f15962df52d2.scope: Deactivated successfully.
Oct  2 05:41:03 np0005465604 podman[459182]: 2025-10-02 09:41:03.255990664 +0000 UTC m=+0.056980701 container create 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  2 05:41:03 np0005465604 podman[459182]: 2025-10-02 09:41:03.226239825 +0000 UTC m=+0.027229942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:41:03 np0005465604 systemd[1]: Started libpod-conmon-74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48.scope.
Oct  2 05:41:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:41:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2d8970043b09718b35c3835ab4d0a6aa53703eeefa8c998ab0871c932cc777/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2d8970043b09718b35c3835ab4d0a6aa53703eeefa8c998ab0871c932cc777/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2d8970043b09718b35c3835ab4d0a6aa53703eeefa8c998ab0871c932cc777/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2d8970043b09718b35c3835ab4d0a6aa53703eeefa8c998ab0871c932cc777/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:03 np0005465604 podman[459182]: 2025-10-02 09:41:03.370250203 +0000 UTC m=+0.171240260 container init 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 05:41:03 np0005465604 podman[459182]: 2025-10-02 09:41:03.382046051 +0000 UTC m=+0.183036078 container start 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:41:03 np0005465604 podman[459182]: 2025-10-02 09:41:03.385856681 +0000 UTC m=+0.186846728 container attach 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]: {
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:    "0": [
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:        {
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "devices": [
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "/dev/loop3"
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            ],
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_name": "ceph_lv0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_size": "21470642176",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "name": "ceph_lv0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "tags": {
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.cluster_name": "ceph",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.crush_device_class": "",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.encrypted": "0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.osd_id": "0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.type": "block",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.vdo": "0"
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            },
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "type": "block",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "vg_name": "ceph_vg0"
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:        }
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:    ],
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:    "1": [
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:        {
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "devices": [
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "/dev/loop4"
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            ],
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_name": "ceph_lv1",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_size": "21470642176",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "name": "ceph_lv1",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "tags": {
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.cluster_name": "ceph",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.crush_device_class": "",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.encrypted": "0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.osd_id": "1",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.type": "block",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.vdo": "0"
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            },
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "type": "block",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "vg_name": "ceph_vg1"
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:        }
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:    ],
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:    "2": [
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:        {
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "devices": [
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "/dev/loop5"
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            ],
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_name": "ceph_lv2",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_size": "21470642176",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "name": "ceph_lv2",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "tags": {
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.cluster_name": "ceph",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.crush_device_class": "",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.encrypted": "0",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.osd_id": "2",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.type": "block",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:                "ceph.vdo": "0"
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            },
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "type": "block",
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:            "vg_name": "ceph_vg2"
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:        }
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]:    ]
Oct  2 05:41:04 np0005465604 infallible_darwin[459198]: }
Oct  2 05:41:04 np0005465604 systemd[1]: libpod-74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48.scope: Deactivated successfully.
Oct  2 05:41:04 np0005465604 podman[459182]: 2025-10-02 09:41:04.155684176 +0000 UTC m=+0.956674223 container died 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:41:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay-4d2d8970043b09718b35c3835ab4d0a6aa53703eeefa8c998ab0871c932cc777-merged.mount: Deactivated successfully.
Oct  2 05:41:04 np0005465604 podman[459182]: 2025-10-02 09:41:04.202191948 +0000 UTC m=+1.003181975 container remove 74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_darwin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 05:41:04 np0005465604 systemd[1]: libpod-conmon-74ae0b9b28aa0a1117291213fa7ba14658882bc62c354b5dda91f8e8eaa80d48.scope: Deactivated successfully.
Oct  2 05:41:04 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:04 np0005465604 podman[459363]: 2025-10-02 09:41:04.908304223 +0000 UTC m=+0.044948925 container create 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 05:41:04 np0005465604 systemd[1]: Started libpod-conmon-04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88.scope.
Oct  2 05:41:04 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:41:04 np0005465604 podman[459363]: 2025-10-02 09:41:04.889976821 +0000 UTC m=+0.026621533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:41:04 np0005465604 podman[459363]: 2025-10-02 09:41:04.996471487 +0000 UTC m=+0.133116179 container init 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:41:05 np0005465604 podman[459363]: 2025-10-02 09:41:05.00842498 +0000 UTC m=+0.145069692 container start 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  2 05:41:05 np0005465604 podman[459363]: 2025-10-02 09:41:05.012114576 +0000 UTC m=+0.148759288 container attach 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:41:05 np0005465604 nervous_sutherland[459379]: 167 167
Oct  2 05:41:05 np0005465604 systemd[1]: libpod-04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88.scope: Deactivated successfully.
Oct  2 05:41:05 np0005465604 podman[459363]: 2025-10-02 09:41:05.014400367 +0000 UTC m=+0.151045069 container died 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 05:41:05 np0005465604 nova_compute[260603]: 2025-10-02 09:41:05.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-35f3aad7bea8b1ec61897cb00f8360311f9eca2a6ed1df99f2feb808abead92f-merged.mount: Deactivated successfully.
Oct  2 05:41:05 np0005465604 podman[459363]: 2025-10-02 09:41:05.094698135 +0000 UTC m=+0.231342847 container remove 04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:41:05 np0005465604 systemd[1]: libpod-conmon-04ceead992b80c76e036bc077696b45f4cd7eb74be594504e36118a1eeeecf88.scope: Deactivated successfully.
Oct  2 05:41:05 np0005465604 podman[459404]: 2025-10-02 09:41:05.282668127 +0000 UTC m=+0.038833874 container create 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  2 05:41:05 np0005465604 systemd[1]: Started libpod-conmon-36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d.scope.
Oct  2 05:41:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:41:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc519a3db1073b3527b7a64a1d1deeb0245a3a35cd820704a9b6e03d64db145/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc519a3db1073b3527b7a64a1d1deeb0245a3a35cd820704a9b6e03d64db145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc519a3db1073b3527b7a64a1d1deeb0245a3a35cd820704a9b6e03d64db145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:05 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc519a3db1073b3527b7a64a1d1deeb0245a3a35cd820704a9b6e03d64db145/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:41:05 np0005465604 podman[459404]: 2025-10-02 09:41:05.267820643 +0000 UTC m=+0.023986420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:41:05 np0005465604 podman[459404]: 2025-10-02 09:41:05.368643071 +0000 UTC m=+0.124808818 container init 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  2 05:41:05 np0005465604 podman[459404]: 2025-10-02 09:41:05.376993702 +0000 UTC m=+0.133159449 container start 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:41:05 np0005465604 podman[459404]: 2025-10-02 09:41:05.379508991 +0000 UTC m=+0.135674728 container attach 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 05:41:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:41:06 np0005465604 nova_compute[260603]: 2025-10-02 09:41:06.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]: {
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "osd_id": 2,
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "type": "bluestore"
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:    },
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "osd_id": 1,
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "type": "bluestore"
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:    },
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "osd_id": 0,
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:        "type": "bluestore"
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]:    }
Oct  2 05:41:06 np0005465604 zealous_chaum[459421]: }
Oct  2 05:41:06 np0005465604 systemd[1]: libpod-36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d.scope: Deactivated successfully.
Oct  2 05:41:06 np0005465604 systemd[1]: libpod-36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d.scope: Consumed 1.051s CPU time.
Oct  2 05:41:06 np0005465604 podman[459404]: 2025-10-02 09:41:06.423706456 +0000 UTC m=+1.179872203 container died 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 05:41:06 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1bc519a3db1073b3527b7a64a1d1deeb0245a3a35cd820704a9b6e03d64db145-merged.mount: Deactivated successfully.
Oct  2 05:41:06 np0005465604 podman[459404]: 2025-10-02 09:41:06.507017478 +0000 UTC m=+1.263183225 container remove 36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:41:06 np0005465604 systemd[1]: libpod-conmon-36f6a826c49db5e4d22dc376584a785cdbd058c42aace1fa15eac5933c72f50d.scope: Deactivated successfully.
Oct  2 05:41:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:41:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:41:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:41:06 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:41:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 7ba347a6-dff5-4711-8ec8-25fb6d9164d9 does not exist
Oct  2 05:41:06 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 6d66d829-c9ef-4dc3-966a-d5889f0c0db7 does not exist
Oct  2 05:41:06 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:41:07 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:41:08 np0005465604 podman[459519]: 2025-10-02 09:41:08.011718138 +0000 UTC m=+0.073753665 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:41:08 np0005465604 podman[459518]: 2025-10-02 09:41:08.061617796 +0000 UTC m=+0.123421055 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:41:08 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:10 np0005465604 nova_compute[260603]: 2025-10-02 09:41:10.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:10 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:41:11 np0005465604 nova_compute[260603]: 2025-10-02 09:41:11.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:12 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:12 np0005465604 podman[459564]: 2025-10-02 09:41:12.995378127 +0000 UTC m=+0.061954496 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:41:13 np0005465604 podman[459565]: 2025-10-02 09:41:13.014835965 +0000 UTC m=+0.082671854 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct  2 05:41:14 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:15 np0005465604 nova_compute[260603]: 2025-10-02 09:41:15.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:41:16 np0005465604 nova_compute[260603]: 2025-10-02 09:41:16.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:16 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:18 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:20 np0005465604 nova_compute[260603]: 2025-10-02 09:41:20.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:20 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:41:21 np0005465604 nova_compute[260603]: 2025-10-02 09:41:21.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2708946234' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2708946234' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:41:22 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.975189) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398082975252, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1725, "num_deletes": 255, "total_data_size": 2758381, "memory_usage": 2788640, "flush_reason": "Manual Compaction"}
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398082990200, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2708960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75521, "largest_seqno": 77245, "table_properties": {"data_size": 2700883, "index_size": 4954, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16489, "raw_average_key_size": 20, "raw_value_size": 2684736, "raw_average_value_size": 3298, "num_data_blocks": 220, "num_entries": 814, "num_filter_entries": 814, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759397907, "oldest_key_time": 1759397907, "file_creation_time": 1759398082, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 15085 microseconds, and 5983 cpu microseconds.
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.990276) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2708960 bytes OK
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.990308) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.992711) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.992741) EVENT_LOG_v1 {"time_micros": 1759398082992730, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.992822) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2750947, prev total WAL file size 2750947, number of live WAL files 2.
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.994499) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2645KB)], [182(9705KB)]
Oct  2 05:41:22 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398082994572, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 12647308, "oldest_snapshot_seqno": -1}
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9181 keys, 10904264 bytes, temperature: kUnknown
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398083058522, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 10904264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10846201, "index_size": 34014, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241867, "raw_average_key_size": 26, "raw_value_size": 10685607, "raw_average_value_size": 1163, "num_data_blocks": 1310, "num_entries": 9181, "num_filter_entries": 9181, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759398082, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.058816) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 10904264 bytes
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.060195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.5 rd, 170.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 9705, records dropped: 524 output_compression: NoCompression
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.060213) EVENT_LOG_v1 {"time_micros": 1759398083060205, "job": 114, "event": "compaction_finished", "compaction_time_micros": 64022, "compaction_time_cpu_micros": 28103, "output_level": 6, "num_output_files": 1, "total_output_size": 10904264, "num_input_records": 9705, "num_output_records": 9181, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398083060887, "job": 114, "event": "table_file_deletion", "file_number": 184}
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398083062869, "job": 114, "event": "table_file_deletion", "file_number": 182}
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:22.994331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.062972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.062978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.062986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.062988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:41:23 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:41:23.062990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:41:24 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:25 np0005465604 nova_compute[260603]: 2025-10-02 09:41:25.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:41:26 np0005465604 nova_compute[260603]: 2025-10-02 09:41:26.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:26 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:41:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:41:28
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms']
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:41:28 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:29 np0005465604 nova_compute[260603]: 2025-10-02 09:41:29.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:41:30 np0005465604 nova_compute[260603]: 2025-10-02 09:41:30.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:30 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:41:31 np0005465604 nova_compute[260603]: 2025-10-02 09:41:31.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:31 np0005465604 systemd-logind[787]: New session 57 of user zuul.
Oct  2 05:41:31 np0005465604 systemd[1]: Started Session 57 of User zuul.
Oct  2 05:41:31 np0005465604 nova_compute[260603]: 2025-10-02 09:41:31.935 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:41:32 np0005465604 nova_compute[260603]: 2025-10-02 09:41:32.579 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:41:32 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:33 np0005465604 nova_compute[260603]: 2025-10-02 09:41:33.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:41:33 np0005465604 nova_compute[260603]: 2025-10-02 09:41:33.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:41:33 np0005465604 nova_compute[260603]: 2025-10-02 09:41:33.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:41:33 np0005465604 nova_compute[260603]: 2025-10-02 09:41:33.561 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:41:33 np0005465604 nova_compute[260603]: 2025-10-02 09:41:33.562 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:41:33 np0005465604 nova_compute[260603]: 2025-10-02 09:41:33.597 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:41:33 np0005465604 nova_compute[260603]: 2025-10-02 09:41:33.597 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:41:33 np0005465604 nova_compute[260603]: 2025-10-02 09:41:33.598 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:41:33 np0005465604 nova_compute[260603]: 2025-10-02 09:41:33.598 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:41:33 np0005465604 nova_compute[260603]: 2025-10-02 09:41:33.598 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:41:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:41:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190454191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.008 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.179 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.181 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3540MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.181 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.181 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.282 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.283 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.324 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:41:34 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23115 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:34 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:41:34 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175116722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.751 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.756 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.785 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.787 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:41:34 np0005465604 nova_compute[260603]: 2025-10-02 09:41:34.787 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:41:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:41:34.882 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:41:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:41:34.882 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:41:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:41:34.882 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:41:34 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:35 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23119 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:35 np0005465604 nova_compute[260603]: 2025-10-02 09:41:35.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:35 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 05:41:35 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3387588497' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 05:41:35 np0005465604 nova_compute[260603]: 2025-10-02 09:41:35.744 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:41:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:41:36 np0005465604 nova_compute[260603]: 2025-10-02 09:41:36.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:36 np0005465604 nova_compute[260603]: 2025-10-02 09:41:36.513 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:41:36 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:38 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:38 np0005465604 ovs-vsctl[459953]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  2 05:41:38 np0005465604 podman[459934]: 2025-10-02 09:41:38.998566052 +0000 UTC m=+0.054147732 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 05:41:39 np0005465604 podman[459930]: 2025-10-02 09:41:39.039586614 +0000 UTC m=+0.095402632 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:41:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:41:39 np0005465604 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  2 05:41:39 np0005465604 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  2 05:41:39 np0005465604 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  2 05:41:40 np0005465604 nova_compute[260603]: 2025-10-02 09:41:40.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:40 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: cache status {prefix=cache status} (starting...)
Oct  2 05:41:40 np0005465604 nova_compute[260603]: 2025-10-02 09:41:40.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:41:40 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: client ls {prefix=client ls} (starting...)
Oct  2 05:41:40 np0005465604 lvm[460325]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  2 05:41:40 np0005465604 lvm[460325]: VG ceph_vg2 finished
Oct  2 05:41:40 np0005465604 lvm[460330]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 05:41:40 np0005465604 lvm[460330]: VG ceph_vg0 finished
Oct  2 05:41:40 np0005465604 lvm[460343]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  2 05:41:40 np0005465604 lvm[460343]: VG ceph_vg1 finished
Oct  2 05:41:40 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:40 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23123 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:41:41 np0005465604 kernel: block dm-0: the capability attribute has been deprecated.
Oct  2 05:41:41 np0005465604 nova_compute[260603]: 2025-10-02 09:41:41.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:41 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: damage ls {prefix=damage ls} (starting...)
Oct  2 05:41:41 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23125 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:41 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump loads {prefix=dump loads} (starting...)
Oct  2 05:41:41 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  2 05:41:41 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  2 05:41:41 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  2 05:41:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  2 05:41:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670286013' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  2 05:41:41 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  2 05:41:42 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23131 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:42 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:41:42.179+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  2 05:41:42 np0005465604 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  2 05:41:42 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  2 05:41:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:41:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828878450' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:41:42 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  2 05:41:42 np0005465604 nova_compute[260603]: 2025-10-02 09:41:42.447 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:41:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  2 05:41:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3240732150' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  2 05:41:42 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  2 05:41:42 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125070202' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  2 05:41:42 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: ops {prefix=ops} (starting...)
Oct  2 05:41:42 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 05:41:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135961721' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 05:41:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  2 05:41:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1369396267' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  2 05:41:43 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: session ls {prefix=session ls} (starting...)
Oct  2 05:41:43 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: status {prefix=status} (starting...)
Oct  2 05:41:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 05:41:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/923827585' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 05:41:43 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23145 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:43 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23147 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:43 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 05:41:43 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452661649' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 05:41:43 np0005465604 podman[460777]: 2025-10-02 09:41:43.998114401 +0000 UTC m=+0.062641738 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 05:41:43 np0005465604 podman[460780]: 2025-10-02 09:41:43.999357129 +0000 UTC m=+0.063636678 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:41:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  2 05:41:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3996156377' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  2 05:41:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 05:41:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193128793' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 05:41:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  2 05:41:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/752878372' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  2 05:41:44 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  2 05:41:44 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989067092' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  2 05:41:44 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:45 np0005465604 nova_compute[260603]: 2025-10-02 09:41:45.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:45 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23159 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:45 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:41:45.176+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  2 05:41:45 np0005465604 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  2 05:41:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 05:41:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1768599386' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 05:41:45 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  2 05:41:45 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724206446' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  2 05:41:45 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23165 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  2 05:41:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/574475291' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  2 05:41:46 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23169 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:41:46 np0005465604 nova_compute[260603]: 2025-10-02 09:41:46.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:46 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23172 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 05:41:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925258120' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 05:41:46 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23175 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 44.403163910s of 45.503894806s, submitted: 67
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266199040 unmapped: 48218112 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2883895 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266199040 unmapped: 48218112 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649f0000 session 0x562c646f94a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c624745a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64991400 session 0x562c6467ab40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266199040 unmapped: 48218112 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65472400 session 0x562c61e4fc20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,14])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266354688 unmapped: 48062464 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64992400 session 0x562c627490e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dc400 session 0x562c6274ba40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c6551be00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64991400 session 0x562c62732780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64992400 session 0x562c6551b0e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266354688 unmapped: 48062464 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d9000/0x0/0x4ffc00000, data 0x10b8faf/0x1245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266362880 unmapped: 48054272 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910535 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266387456 unmapped: 48029696 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910535 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d9000/0x0/0x4ffc00000, data 0x10b8faf/0x1245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266403840 unmapped: 48013312 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266412032 unmapped: 48005120 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64999c00 session 0x562c6471a780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266412032 unmapped: 48005120 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62b25400 session 0x562c6367cd20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d9000/0x0/0x4ffc00000, data 0x10b8faf/0x1245000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c61e4e1e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266412032 unmapped: 48005120 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.368391991s of 15.228948593s, submitted: 103
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64991400 session 0x562c6274a960
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915699 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266420224 unmapped: 47996928 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d7000/0x0/0x4ffc00000, data 0x10b8fe2/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266428416 unmapped: 47988736 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2926231 data_alloc: 218103808 data_used: 1384448
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d7000/0x0/0x4ffc00000, data 0x10b8fe2/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d7000/0x0/0x4ffc00000, data 0x10b8fe2/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2926231 data_alloc: 218103808 data_used: 1384448
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266756096 unmapped: 47661056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7d7000/0x0/0x4ffc00000, data 0x10b8fe2/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.722763062s of 12.753391266s, submitted: 11
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268009472 unmapped: 46407680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 264888320 unmapped: 49528832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 264806400 unmapped: 49610752 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x1972fe2/0x1b01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,2])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006123 data_alloc: 218103808 data_used: 2637824
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265863168 unmapped: 48553984 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265863168 unmapped: 48553984 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265863168 unmapped: 48553984 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265863168 unmapped: 48553984 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef0a000/0x0/0x4ffc00000, data 0x197cfe2/0x1b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265863168 unmapped: 48553984 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3007883 data_alloc: 218103808 data_used: 2797568
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef0a000/0x0/0x4ffc00000, data 0x197cfe2/0x1b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3007883 data_alloc: 218103808 data_used: 2797568
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 265871360 unmapped: 48545792 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef0a000/0x0/0x4ffc00000, data 0x197cfe2/0x1b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef0a000/0x0/0x4ffc00000, data 0x197cfe2/0x1b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.115167618s of 15.105305672s, submitted: 79
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266182656 unmapped: 48234496 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e800 session 0x562c64eff680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c647e6c00 session 0x562c6279b680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f40400 session 0x562c64ef6000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f40400 session 0x562c6363e3c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c645694a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266182656 unmapped: 48234496 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee956000/0x0/0x4ffc00000, data 0x1f39fe2/0x20c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053655 data_alloc: 218103808 data_used: 2797568
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee956000/0x0/0x4ffc00000, data 0x1f39fe2/0x20c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053655 data_alloc: 218103808 data_used: 2797568
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266190848 unmapped: 48226304 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c64569860
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266084352 unmapped: 48332800 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee955000/0x0/0x4ffc00000, data 0x1f3a005/0x20c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 266084352 unmapped: 48332800 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee955000/0x0/0x4ffc00000, data 0x1f3a005/0x20c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267239424 unmapped: 47177728 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee955000/0x0/0x4ffc00000, data 0x1f3a005/0x20c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096060 data_alloc: 218103808 data_used: 8437760
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee955000/0x0/0x4ffc00000, data 0x1f3a005/0x20c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096060 data_alloc: 218103808 data_used: 8437760
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.969276428s of 18.160720825s, submitted: 17
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 267567104 unmapped: 46850048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271720448 unmapped: 42696704 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273137664 unmapped: 41279488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee1a7000/0x0/0x4ffc00000, data 0x26e0005/0x286f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3165242 data_alloc: 234881024 data_used: 9703424
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3165730 data_alloc: 234881024 data_used: 9703424
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee110000/0x0/0x4ffc00000, data 0x277f005/0x290e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee0ef000/0x0/0x4ffc00000, data 0x27a0005/0x292f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272687104 unmapped: 41730048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3166530 data_alloc: 234881024 data_used: 9723904
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272695296 unmapped: 41721856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.562002182s of 15.969758034s, submitted: 113
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67f41000 session 0x562c6475f2c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd5800 session 0x562c62748f00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee0dc000/0x0/0x4ffc00000, data 0x27b3005/0x2942000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272695296 unmapped: 41721856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6363e5a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee0dc000/0x0/0x4ffc00000, data 0x27b3005/0x2942000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271130624 unmapped: 43286528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271130624 unmapped: 43286528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271130624 unmapped: 43286528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef12000/0x0/0x4ffc00000, data 0x197cfe2/0x1b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010920 data_alloc: 218103808 data_used: 2801664
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271130624 unmapped: 43286528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271130624 unmapped: 43286528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64992400 session 0x562c61e56000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64999c00 session 0x562c627485a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271138816 unmapped: 43278336 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268460032 unmapped: 45957120 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64380400 session 0x562c6475f0e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef37c000/0x0/0x4ffc00000, data 0xe53fe2/0xfe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269516800 unmapped: 44900352 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269524992 unmapped: 44892160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269533184 unmapped: 44883968 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904235 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269541376 unmapped: 44875776 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269549568 unmapped: 44867584 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547a000 session 0x562c62a0a960
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c643a74a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64380400 session 0x562c6b3063c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64992400 session 0x562c64ef6960
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 55.805976868s of 56.174449921s, submitted: 53
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268500992 unmapped: 45916160 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64999c00 session 0x562c645692c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547a000 session 0x562c62733a40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c645674a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef723000/0x0/0x4ffc00000, data 0x116dfbf/0x12fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64380400 session 0x562c646d9680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64992400 session 0x562c6274a780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930959 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef723000/0x0/0x4ffc00000, data 0x116dfbf/0x12fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930959 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef723000/0x0/0x4ffc00000, data 0x116dfbf/0x12fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268591104 unmapped: 45826048 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef723000/0x0/0x4ffc00000, data 0x116dfbf/0x12fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268599296 unmapped: 45817856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930959 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268599296 unmapped: 45817856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268599296 unmapped: 45817856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.797731400s of 14.340232849s, submitted: 7
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902c00 session 0x562c62749860
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268599296 unmapped: 45817856 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268607488 unmapped: 45809664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6ff000/0x0/0x4ffc00000, data 0x1191fbf/0x131f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268607488 unmapped: 45809664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2933491 data_alloc: 218103808 data_used: 53248
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268607488 unmapped: 45809664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6ff000/0x0/0x4ffc00000, data 0x1191fbf/0x131f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949011 data_alloc: 218103808 data_used: 2220032
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6ff000/0x0/0x4ffc00000, data 0x1191fbf/0x131f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6ff000/0x0/0x4ffc00000, data 0x1191fbf/0x131f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268623872 unmapped: 45793280 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949011 data_alloc: 218103808 data_used: 2220032
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.914422989s of 13.919497490s, submitted: 1
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 268918784 unmapped: 45498368 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990425 data_alloc: 218103808 data_used: 2490368
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990425 data_alloc: 218103808 data_used: 2490368
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269205504 unmapped: 45211648 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990425 data_alloc: 218103808 data_used: 2490368
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.337739944s of 17.901878357s, submitted: 21
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269213696 unmapped: 45203456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862c00 session 0x562c6274b860
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862c00 session 0x562c61e570e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c643a63c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64380400 session 0x562c6551a780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902c00 session 0x562c6456bc20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019370 data_alloc: 218103808 data_used: 2490368
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019370 data_alloc: 218103808 data_used: 2490368
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269320192 unmapped: 45096960 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef16000/0x0/0x4ffc00000, data 0x197afbf/0x1b08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477000 session 0x562c64d20b40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477000 session 0x562c646bb0e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862c00 session 0x562c64eff2c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019370 data_alloc: 218103808 data_used: 2490368
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.536311150s of 11.944916725s, submitted: 13
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6367c000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269328384 unmapped: 45088768 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051281 data_alloc: 218103808 data_used: 6467584
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3051281 data_alloc: 218103808 data_used: 6467584
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269336576 unmapped: 45080576 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269344768 unmapped: 45072384 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.889685631s of 13.700659752s, submitted: 6
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef14000/0x0/0x4ffc00000, data 0x197aff2/0x1b0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 269344768 unmapped: 45072384 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3086317 data_alloc: 218103808 data_used: 6467584
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272089088 unmapped: 42328064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272498688 unmapped: 41918464 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272097280 unmapped: 42319872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272097280 unmapped: 42319872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91f000/0x0/0x4ffc00000, data 0x1f6fff2/0x20ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272097280 unmapped: 42319872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272097280 unmapped: 42319872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272105472 unmapped: 42311680 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272113664 unmapped: 42303488 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272121856 unmapped: 42295296 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272121856 unmapped: 42295296 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272121856 unmapped: 42295296 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272121856 unmapped: 42295296 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272130048 unmapped: 42287104 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272130048 unmapped: 42287104 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272130048 unmapped: 42287104 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee91b000/0x0/0x4ffc00000, data 0x1f73ff2/0x2103000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272130048 unmapped: 42287104 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099259 data_alloc: 218103808 data_used: 6471680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.167901993s of 31.580886841s, submitted: 39
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64686000 session 0x562c647034a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64903400 session 0x562c6456a960
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273178624 unmapped: 41238528 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271876096 unmapped: 42541056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862000 session 0x562c628f12c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271876096 unmapped: 42541056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271876096 unmapped: 42541056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271876096 unmapped: 42541056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998448 data_alloc: 218103808 data_used: 2490368
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271876096 unmapped: 42541056 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547d000 session 0x562c64d20d20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e800 session 0x562c62749e00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef2df000/0x0/0x4ffc00000, data 0x15b0fbf/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547e400 session 0x562c62a0b860
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271892480 unmapped: 42524672 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271900672 unmapped: 42516480 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271900672 unmapped: 42516480 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271900672 unmapped: 42516480 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271900672 unmapped: 42516480 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271917056 unmapped: 42500096 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271925248 unmapped: 42491904 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271933440 unmapped: 42483712 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa48000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2919524 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271941632 unmapped: 42475520 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.886993408s of 41.488769531s, submitted: 38
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271941632 unmapped: 42475520 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65472400 session 0x562c62474d20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862000 session 0x562c6456a1e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67f40800 session 0x562c64ef70e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65472400 session 0x562c62732f00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547d000 session 0x562c628ee780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7cb000/0x0/0x4ffc00000, data 0x10c6faf/0x1253000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951343 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7cb000/0x0/0x4ffc00000, data 0x10c6faf/0x1253000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273080320 unmapped: 41336832 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7cb000/0x0/0x4ffc00000, data 0x10c6faf/0x1253000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273088512 unmapped: 41328640 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273088512 unmapped: 41328640 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951343 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273088512 unmapped: 41328640 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.799205780s of 10.012652397s, submitted: 24
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273096704 unmapped: 41320448 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7cb000/0x0/0x4ffc00000, data 0x10c6faf/0x1253000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f70000 session 0x562c646e2b40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273104896 unmapped: 41312256 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273104896 unmapped: 41312256 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273104896 unmapped: 41312256 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952056 data_alloc: 218103808 data_used: 49152
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7ca000/0x0/0x4ffc00000, data 0x10c6fd2/0x1254000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2960536 data_alloc: 218103808 data_used: 1224704
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef7ca000/0x0/0x4ffc00000, data 0x10c6fd2/0x1254000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273113088 unmapped: 41304064 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273121280 unmapped: 41295872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273121280 unmapped: 41295872 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.232060432s of 13.526559830s, submitted: 5
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2962708 data_alloc: 218103808 data_used: 1277952
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274333696 unmapped: 40083456 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273162240 unmapped: 41254912 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272490496 unmapped: 41926656 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef039000/0x0/0x4ffc00000, data 0x1856fd2/0x19e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029688 data_alloc: 218103808 data_used: 1294336
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eefb3000/0x0/0x4ffc00000, data 0x18dcfd2/0x1a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef93000/0x0/0x4ffc00000, data 0x18fdfd2/0x1a8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3027644 data_alloc: 218103808 data_used: 1294336
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef93000/0x0/0x4ffc00000, data 0x18fdfd2/0x1a8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3027644 data_alloc: 218103808 data_used: 1294336
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.022167206s of 15.504188538s, submitted: 83
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef93000/0x0/0x4ffc00000, data 0x18fdfd2/0x1a8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272703488 unmapped: 41713664 heap: 314417152 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c648ff800 session 0x562c628e30e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64e73800 session 0x562c64566b40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65473400 session 0x562c6551a3c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f70800 session 0x562c627330e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69211000 session 0x562c61e4e000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103486 data_alloc: 218103808 data_used: 1294336
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272867328 unmapped: 43655168 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5ae000/0x0/0x4ffc00000, data 0x22e2fd2/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272867328 unmapped: 43655168 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272875520 unmapped: 43646976 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5ab000/0x0/0x4ffc00000, data 0x22e5fd2/0x2473000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272875520 unmapped: 43646976 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272883712 unmapped: 43638784 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103914 data_alloc: 218103808 data_used: 1294336
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272883712 unmapped: 43638784 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5ab000/0x0/0x4ffc00000, data 0x22e5fd2/0x2473000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272883712 unmapped: 43638784 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64990800 session 0x562c62a0a1e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547f000 session 0x562c646d83c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272883712 unmapped: 43638784 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649f1800 session 0x562c628eeb40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.639535904s of 12.789156914s, submitted: 15
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272891904 unmapped: 43630592 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c4000 session 0x562c62a46000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272891904 unmapped: 43630592 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3105752 data_alloc: 218103808 data_used: 1294336
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272891904 unmapped: 43630592 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272891904 unmapped: 43630592 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5aa000/0x0/0x4ffc00000, data 0x22e5fe2/0x2474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272900096 unmapped: 43622400 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5aa000/0x0/0x4ffc00000, data 0x22e5fe2/0x2474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3179164 data_alloc: 234881024 data_used: 11558912
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3179164 data_alloc: 234881024 data_used: 11558912
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273203200 unmapped: 43319296 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee5a8000/0x0/0x4ffc00000, data 0x22e6fe2/0x2475000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.232625008s of 13.830818176s, submitted: 4
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274964480 unmapped: 41558016 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edef8000/0x0/0x4ffc00000, data 0x2997fe2/0x2b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275292160 unmapped: 41230336 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275300352 unmapped: 41222144 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236902 data_alloc: 234881024 data_used: 13082624
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edeeb000/0x0/0x4ffc00000, data 0x29a4fe2/0x2b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3242978 data_alloc: 234881024 data_used: 13316096
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edeeb000/0x0/0x4ffc00000, data 0x29a4fe2/0x2b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.206348419s of 10.187524796s, submitted: 61
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd5800 session 0x562c6467b680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c668c4c00 session 0x562c628ee3c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275308544 unmapped: 41213952 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edeeb000/0x0/0x4ffc00000, data 0x29a4fe2/0x2b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3036586 data_alloc: 218103808 data_used: 1302528
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd5800 session 0x562c628e3e00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270229504 unmapped: 46292992 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270229504 unmapped: 46292992 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef89000/0x0/0x4ffc00000, data 0x1907fd2/0x1a95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270229504 unmapped: 46292992 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef89000/0x0/0x4ffc00000, data 0x1907fd2/0x1a95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862000 session 0x562c627323c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65472400 session 0x562c628dd4a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270229504 unmapped: 46292992 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2940567 data_alloc: 218103808 data_used: 49152
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.968927860s of 10.047794342s, submitted: 39
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49fd2/0xfd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939752 data_alloc: 218103808 data_used: 49152
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270245888 unmapped: 46276608 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49fd2/0xfd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f40400 session 0x562c64ef6780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49fd2/0xfd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939056 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477000 session 0x562c628ee780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69856000 session 0x562c62732f00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64684800 session 0x562c64ef70e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547ec00 session 0x562c6456a1e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa47000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.979553223s of 45.496376038s, submitted: 7
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270254080 unmapped: 46268416 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4efa46000/0x0/0x4ffc00000, data 0xe49fd8/0xfd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270262272 unmapped: 46260224 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273637376 unmapped: 42885120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2977829 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 273637376 unmapped: 42885120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270491648 unmapped: 46030848 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270491648 unmapped: 46030848 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c63620800 session 0x562c62474d20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64684800 session 0x562c647034a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477000 session 0x562c6367c000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270491648 unmapped: 46030848 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6f9000/0x0/0x4ffc00000, data 0x1197fd8/0x1325000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270426112 unmapped: 46096384 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547ec00 session 0x562c64eff2c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69856000 session 0x562c6456bc20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2968269 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c63621000 session 0x562c6551a780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270426112 unmapped: 46096384 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64684800 session 0x562c643a63c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270426112 unmapped: 46096384 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477000 session 0x562c646bb860
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.162470818s of 10.054964066s, submitted: 29
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270426112 unmapped: 46096384 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270434304 unmapped: 46088192 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6f7000/0x0/0x4ffc00000, data 0x1198044/0x1327000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270737408 unmapped: 45785088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2974484 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6547ec00 session 0x562c6b3074a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270737408 unmapped: 45785088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270737408 unmapped: 45785088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270737408 unmapped: 45785088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270737408 unmapped: 45785088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270868480 unmapped: 45654016 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998324 data_alloc: 218103808 data_used: 3317760
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270868480 unmapped: 45654016 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270868480 unmapped: 45654016 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270868480 unmapped: 45654016 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 45645824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 45645824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2998324 data_alloc: 218103808 data_used: 3317760
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 45645824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef6d3000/0x0/0x4ffc00000, data 0x11bc044/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 45645824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 270876672 unmapped: 45645824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.463221550s of 15.829542160s, submitted: 6
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 271310848 unmapped: 45211648 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ef629000/0x0/0x4ffc00000, data 0x1266044/0x13f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,1,13,0,0,6])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272531456 unmapped: 43991040 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014386 data_alloc: 218103808 data_used: 3338240
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 272875520 unmapped: 43646976 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274784256 unmapped: 41738240 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274972672 unmapped: 41549824 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275210240 unmapped: 41312256 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275218432 unmapped: 41304064 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3054432 data_alloc: 218103808 data_used: 3563520
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef7c000/0x0/0x4ffc00000, data 0x190d044/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275218432 unmapped: 41304064 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275152896 unmapped: 41369600 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eef65000/0x0/0x4ffc00000, data 0x192a044/0x1ab9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3071318 data_alloc: 218103808 data_used: 3534848
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeed3000/0x0/0x4ffc00000, data 0x19b4044/0x1b43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275685376 unmapped: 40837120 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.730766296s of 15.541202545s, submitted: 109
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275619840 unmapped: 40902656 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3066714 data_alloc: 218103808 data_used: 3538944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb7000/0x0/0x4ffc00000, data 0x19d8044/0x1b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3066714 data_alloc: 218103808 data_used: 3538944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275628032 unmapped: 40894464 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb7000/0x0/0x4ffc00000, data 0x19d8044/0x1b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275636224 unmapped: 40886272 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275636224 unmapped: 40886272 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb7000/0x0/0x4ffc00000, data 0x19d8044/0x1b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275636224 unmapped: 40886272 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3066714 data_alloc: 218103808 data_used: 3538944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb7000/0x0/0x4ffc00000, data 0x19d8044/0x1b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb7000/0x0/0x4ffc00000, data 0x19d8044/0x1b67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3066714 data_alloc: 218103808 data_used: 3538944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 40878080 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.826000214s of 18.922349930s, submitted: 5
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 40869888 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eeeb1000/0x0/0x4ffc00000, data 0x19de044/0x1b6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 40869888 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f62400 session 0x562c6551b0e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c61e4e1e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c6274a960
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64684800 session 0x562c645694a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64900400 session 0x562c61e56000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedcb000/0x0/0x4ffc00000, data 0x1ac4044/0x1c53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080850 data_alloc: 218103808 data_used: 3538944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedcb000/0x0/0x4ffc00000, data 0x1ac4044/0x1c53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276881408 unmapped: 39641088 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080322 data_alloc: 218103808 data_used: 3538944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276889600 unmapped: 39632896 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902c00 session 0x562c62a0a960
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62b25400 session 0x562c643a74a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276889600 unmapped: 39632896 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64684800 session 0x562c64ef6960
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.592034340s of 10.031236649s, submitted: 11
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276889600 unmapped: 39632896 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64900400 session 0x562c62733a40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081975 data_alloc: 218103808 data_used: 3538944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087095 data_alloc: 218103808 data_used: 4190208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 39608320 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087095 data_alloc: 218103808 data_used: 4190208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.632834435s of 12.789543152s, submitted: 4
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed507000/0x0/0x4ffc00000, data 0x21e7067/0x2377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277700608 unmapped: 38821888 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3138289 data_alloc: 218103808 data_used: 4243456
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277708800 unmapped: 38813696 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4cf000/0x0/0x4ffc00000, data 0x221f067/0x23af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3145465 data_alloc: 218103808 data_used: 4243456
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4c7000/0x0/0x4ffc00000, data 0x2227067/0x23b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.898307800s of 12.210510254s, submitted: 59
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902c00 session 0x562c628ede00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c61e56f00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 38780928 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4c7000/0x0/0x4ffc00000, data 0x2227067/0x23b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 38780928 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3146537 data_alloc: 218103808 data_used: 4231168
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4bf000/0x0/0x4ffc00000, data 0x222f067/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f70800 session 0x562c646baf00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6067/0x1b76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074399 data_alloc: 218103808 data_used: 3526656
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074399 data_alloc: 218103808 data_used: 3526656
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508622169s of 13.601557732s, submitted: 32
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c6475e780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dcc00 session 0x562c628f0000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074223 data_alloc: 218103808 data_used: 3526656
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd09000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c6367cd20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.1 total, 600.0 interval
Cumulative writes: 36K writes, 140K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
Cumulative WAL: 36K writes, 13K syncs, 2.70 writes per sync, written: 0.14 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1788 writes, 7086 keys, 1788 commit groups, 1.0 writes per commit group, ingest: 8.42 MB, 0.01 MB/s
Interval WAL: 1788 writes, 697 syncs, 2.57 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: mgrc ms_handle_reset ms_handle_reset con 0x562c64994400
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: mgrc handle_mgr_configure stats_period=5
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.216686249s of 49.199211121s, submitted: 27
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2958270 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c6b307a40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6963f000 session 0x562c6363ef00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c646d6780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dcc00 session 0x562c628dc1e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c646d7860
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3003011 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6315f0e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274382848 unmapped: 46342144 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274391040 unmapped: 46333952 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274399232 unmapped: 46325760 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026672 data_alloc: 218103808 data_used: 3313664
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045552 data_alloc: 218103808 data_used: 5947392
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.794466019s of 20.170822144s, submitted: 32
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045624 data_alloc: 218103808 data_used: 5947392
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276774912 unmapped: 43950080 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,20])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275988480 unmapped: 44736512 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276111360 unmapped: 44613632 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092030 data_alloc: 218103808 data_used: 6160384
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd83000/0x0/0x4ffc00000, data 0x196e011/0x1afb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd62000/0x0/0x4ffc00000, data 0x198f011/0x1b1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3090850 data_alloc: 218103808 data_used: 6164480
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.045855522s of 12.648342133s, submitted: 63
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd62000/0x0/0x4ffc00000, data 0x198f011/0x1b1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276332544 unmapped: 44392448 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276340736 unmapped: 44384256 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092395 data_alloc: 218103808 data_used: 6164480
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 40468480 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e400 session 0x562c64ef6960
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e400 session 0x562c643a61e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6551b2c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151015 data_alloc: 218103808 data_used: 6164480
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c6b3061e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f71400 session 0x562c6274ab40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.307779789s of 10.068083763s, submitted: 53
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 05:41:46 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180055983' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3152489 data_alloc: 218103808 data_used: 6164480
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276692992 unmapped: 48234496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874c00 session 0x562c646d9680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279396352 unmapped: 45531136 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3210696 data_alloc: 234881024 data_used: 14290944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279412736 unmapped: 45514752 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.747380257s of 11.106559753s, submitted: 63
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3211048 data_alloc: 234881024 data_used: 14290944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279437312 unmapped: 45490176 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279437312 unmapped: 45490176 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282304512 unmapped: 42622976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228632 data_alloc: 234881024 data_used: 14290944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282304512 unmapped: 42622976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 284680192 unmapped: 40247296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebf33000/0x0/0x4ffc00000, data 0x261e011/0x27ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.735648155s of 10.060455322s, submitted: 31
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247268 data_alloc: 234881024 data_used: 14290944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebebb000/0x0/0x4ffc00000, data 0x2696011/0x2823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeb6000/0x0/0x4ffc00000, data 0x269b011/0x2828000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3249180 data_alloc: 234881024 data_used: 14315520
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeb0000/0x0/0x4ffc00000, data 0x26a1011/0x282e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3249180 data_alloc: 234881024 data_used: 14315520
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.374366760s of 11.633337975s, submitted: 16
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.346201897s of 15.059863091s, submitted: 4
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874c00 session 0x562c64efe960
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c628dda40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebea5000/0x0/0x4ffc00000, data 0x26ac011/0x2839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253376 data_alloc: 234881024 data_used: 14413824
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282861568 unmapped: 42065920 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x199c011/0x1b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282861568 unmapped: 42065920 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c64ef7860
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099936 data_alloc: 218103808 data_used: 6164480
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x199c011/0x1b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.364779472s of 12.359013557s, submitted: 40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c6b307a40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41400 session 0x562c6551bc20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276013056 unmapped: 48914432 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2973676 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed307000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c64effc20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c62a46f00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dd800 session 0x562c6274a780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6471a3c0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64999000 session 0x562c646d94a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.197753906s of 44.837074280s, submitted: 31
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6475f4a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6b3065a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c6467bc20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dd800 session 0x562c6363f0e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477c00 session 0x562c6315f860
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c646e2b40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c645674a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c64566780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.026555061s of 13.081507683s, submitted: 4
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c68fe9000 session 0x562c628ed4a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276045824 unmapped: 48881664 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030199 data_alloc: 218103808 data_used: 3846144
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030199 data_alloc: 218103808 data_used: 3846144
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.270869255s of 12.386258125s, submitted: 6
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 48275456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278601728 unmapped: 46325760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278700032 unmapped: 46227456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3067793 data_alloc: 218103808 data_used: 4562944
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3071175 data_alloc: 218103808 data_used: 4702208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3071495 data_alloc: 218103808 data_used: 4710400
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c6279b0e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c646ba000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c61e57a40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c64effe00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.001466751s of 16.817237854s, submitted: 48
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279355392 unmapped: 45572096 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c68fe9000 session 0x562c628ec000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3088659 data_alloc: 218103808 data_used: 4710400
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c62748000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c6363e1e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c64566d20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c61e57a40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3088819 data_alloc: 218103808 data_used: 4714496
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.341195107s of 10.009945869s, submitted: 15
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862c00 session 0x562c6279b0e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092880 data_alloc: 218103808 data_used: 4714496
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279543808 unmapped: 45383680 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103412 data_alloc: 218103808 data_used: 6053888
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103412 data_alloc: 218103808 data_used: 6053888
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.366211891s of 13.564207077s, submitted: 1
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42450944 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x1da3fbf/0x1f31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172036 data_alloc: 218103808 data_used: 7139328
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec790000/0x0/0x4ffc00000, data 0x1da7fbf/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec790000/0x0/0x4ffc00000, data 0x1da7fbf/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172664 data_alloc: 218103808 data_used: 7143424
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec7a6000/0x0/0x4ffc00000, data 0x1daafbf/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3166132 data_alloc: 218103808 data_used: 7143424
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec7a6000/0x0/0x4ffc00000, data 0x1daafbf/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3166772 data_alloc: 218103808 data_used: 7204864
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902400 session 0x562c6363f0e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.399305344s of 18.292930603s, submitted: 109
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f71800 session 0x562c6551a780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 42246144 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902400 session 0x562c645665a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080618 data_alloc: 218103808 data_used: 4771840
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874400 session 0x562c62a46000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64e73c00 session 0x562c64efe1e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3082950 data_alloc: 218103808 data_used: 4759552
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280166400 unmapped: 44761088 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.822414398s of 10.087261200s, submitted: 43
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280166400 unmapped: 44761088 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c643a4c00 session 0x562c64eff680
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280182784 unmapped: 44744704 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.534408569s of 101.709854126s, submitted: 14
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c6551af00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c6456ad20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c6475e780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41000 session 0x562c628e2d20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f63800 session 0x562c646f8f00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c64ef61e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017460 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfaa000/0x0/0x4ffc00000, data 0x1197faf/0x1324000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c646ba960
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41000 session 0x562c61e4eb40
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c62732000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 42319872 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282615808 unmapped: 42311680 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfa8000/0x0/0x4ffc00000, data 0x1197fe2/0x1326000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3043767 data_alloc: 218103808 data_used: 3309568
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfa8000/0x0/0x4ffc00000, data 0x1197fe2/0x1326000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c627330e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f0000 session 0x562c646d94a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.375305176s of 10.585134506s, submitted: 14
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c62474d20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.151807785s of 30.341138840s, submitted: 13
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 42262528 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 291 ms_handle_reset con 0x562c68fe8800 session 0x562c62475e00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997150 data_alloc: 218103808 data_used: 53248
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 42262528 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997150 data_alloc: 218103808 data_used: 53248
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282796032 unmapped: 42131456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282796032 unmapped: 42131456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 290488320 unmapped: 34439168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 ms_handle_reset con 0x562c64687c00 session 0x562c6315e1e0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012284 data_alloc: 218103808 data_used: 6860800
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012284 data_alloc: 218103808 data_used: 6860800
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 134.109100342s of 134.325363159s, submitted: 30
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 293 ms_handle_reset con 0x562c64900400 session 0x562c6315ed20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 293 heartbeat osd_stat(store_statfs(0x4edaef000/0x0/0x4ffc00000, data 0x64f191/0x7de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941203 data_alloc: 218103808 data_used: 45056
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 294 ms_handle_reset con 0x562c630f0000 session 0x562c64efe000
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 294 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 294 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902681 data_alloc: 218103808 data_used: 53248
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 294 ms_handle_reset con 0x562c64bd9c00 session 0x562c6315e780
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.945940018s of 11.156617165s, submitted: 55
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 295 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905655 data_alloc: 218103808 data_used: 53248
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf5a000/0x0/0x4ffc00000, data 0x1e27c5/0x374000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 ms_handle_reset con 0x562c64687400 session 0x562c6456af00
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 37K writes, 145K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 37K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1178 writes, 4161 keys, 1178 commit groups, 1.0 writes per commit group, ingest: 5.22 MB, 0.01 MB/s#012Interval WAL: 1178 writes, 469 syncs, 2.51 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286146560 unmapped: 38780928 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 102.574424744s of 102.733085632s, submitted: 24
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286203904 unmapped: 38723584 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2909566 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 38666240 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 38674432 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2968052 data_alloc: 218103808 data_used: 61440
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 47063040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.070027351s of 10.197218895s, submitted: 103
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 297 ms_handle_reset con 0x562c6547c800 session 0x562c61dd25a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286277632 unmapped: 47046656 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286285824 unmapped: 47038464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74f000/0x0/0x4ffc00000, data 0x9e5f38/0xb7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 ms_handle_reset con 0x562c6c54e800 session 0x562c646f8d20
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286343168 unmapped: 46981120 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 34.268466949s of 34.596256256s, submitted: 25
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286351360 unmapped: 46972928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74d000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286359552 unmapped: 46964736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286384128 unmapped: 46940160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed749000/0x0/0x4ffc00000, data 0x9e9686/0xb84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286384128 unmapped: 46940160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287432704 unmapped: 45891584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982217 data_alloc: 218103808 data_used: 77824
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 299 ms_handle_reset con 0x562c62874800 session 0x562c624745a0
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed74a000/0x0/0x4ffc00000, data 0x9e9686/0xb84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981401 data_alloc: 218103808 data_used: 77824
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.584912777s of 10.009610176s, submitted: 37
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985575 data_alloc: 218103808 data_used: 86016
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 46866432 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: mgrc ms_handle_reset ms_handle_reset con 0x562c62b25400
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: mgrc handle_mgr_configure stats_period=5
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:46 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 46522368 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 46514176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 46514176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286851072 unmapped: 46473216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286851072 unmapped: 46473216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286900224 unmapped: 46424064 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286908416 unmapped: 46415872 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286908416 unmapped: 46415872 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286957568 unmapped: 46366720 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 46211072 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 46211072 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 46145536 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 46145536 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 46129152 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 46080000 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 46080000 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 46071808 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 38K writes, 145K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 338 writes, 741 keys, 338 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s#012Interval WAL: 338 writes, 158 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287375360 unmapped: 45948928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287375360 unmapped: 45948928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 45924352 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 536.308471680s of 536.377807617s, submitted: 15
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 45899776 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985147 data_alloc: 218103808 data_used: 94208
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 301 ms_handle_reset con 0x562c64900800 session 0x562c64d21c20
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 301 heartbeat osd_stat(store_statfs(0x4edf44000/0x0/0x4ffc00000, data 0x1ecc74/0x388000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 302 ms_handle_reset con 0x562c647e6800 session 0x562c6279be00
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935656 data_alloc: 218103808 data_used: 110592
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 302 heartbeat osd_stat(store_statfs(0x4edf43000/0x0/0x4ffc00000, data 0x1ee812/0x389000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 45817856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.243688583s of 10.019772530s, submitted: 100
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 45809664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942373 data_alloc: 218103808 data_used: 114688
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287563776 unmapped: 45760512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 ms_handle_reset con 0x562c649f1800 session 0x562c628ec3c0
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 45719552 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 45686784 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 45686784 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 45629440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 45629440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.478626251s of 59.437236786s, submitted: 96
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 304 handle_osd_map epochs [305,305], i have 305, src has [1,305]
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 305 ms_handle_reset con 0x562c6bb59400 session 0x562c628e30e0
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949134 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf3b000/0x0/0x4ffc00000, data 0x1f39c3/0x392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf3b000/0x0/0x4ffc00000, data 0x1f39c3/0x392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 45555712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 45555712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 45457408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 45457408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 45449216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 45449216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 45293568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}'
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: do_command 'config show' '{prefix=config show}'
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 45817856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 45850624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:47 np0005465604 ceph-osd[90385]: do_command 'log dump' '{prefix=log dump}'
Oct  2 05:41:47 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23179 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 05:41:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2696113627' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 05:41:47 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23183 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:47 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 05:41:47 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751981710' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 05:41:47 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23187 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:41:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 05:41:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/915543584' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 05:41:48 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23191 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  2 05:41:48 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  2 05:41:48 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3965773376' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  2 05:41:48 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23195 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  2 05:41:48 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:49 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23201 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  2 05:41:49 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:41:49.343+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  2 05:41:49 np0005465604 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  2 05:41:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  2 05:41:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719431498' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  2 05:41:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  2 05:41:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/474808673' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  2 05:41:49 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  2 05:41:49 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4252180943' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  2 05:41:50 np0005465604 nova_compute[260603]: 2025-10-02 09:41:50.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  2 05:41:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130492162' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  2 05:41:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  2 05:41:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233334211' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  2 05:41:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  2 05:41:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/400909382' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  2 05:41:50 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  2 05:41:50 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270087015' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  2 05:41:50 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:41:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  2 05:41:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1347777016' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  2 05:41:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct  2 05:41:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2976900227' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  2 05:41:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 329441280 unmapped: 54337536 heap: 383778816 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.584579468s of 45.537818909s, submitted: 71
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 329449472 unmapped: 54329344 heap: 383778816 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eb75f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3319974 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 329449472 unmapped: 54329344 heap: 383778816 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da193b680
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da060ab40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f911c20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da2309400 session 0x562d9f4af4a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338706432 unmapped: 48750592 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f948800 session 0x562da17014a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da1982f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da16f4b40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da202b860
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328ec00 session 0x562da14b4f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327884800 unmapped: 59572224 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae3f000/0x0/0x4ffc00000, data 0x17d8816/0x196f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327884800 unmapped: 59572224 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327819264 unmapped: 59637760 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395744 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae3f000/0x0/0x4ffc00000, data 0x17d884f/0x196f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395744 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae3f000/0x0/0x4ffc00000, data 0x17d884f/0x196f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327852032 unmapped: 59604992 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.190498352s of 15.229944229s, submitted: 132
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f948800 session 0x562da1e9a960
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327999488 unmapped: 59457536 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398952 data_alloc: 218103808 data_used: 8122368
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327999488 unmapped: 59457536 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 327999488 unmapped: 59457536 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465992 data_alloc: 218103808 data_used: 17485824
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4eae1b000/0x0/0x4ffc00000, data 0x17fc84f/0x1993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465992 data_alloc: 218103808 data_used: 17485824
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 328859648 unmapped: 58597376 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.618291855s of 12.628442764s, submitted: 2
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98c3000/0x0/0x4ffc00000, data 0x1bb484f/0x1d4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334258176 unmapped: 53198848 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334258176 unmapped: 53198848 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 52977664 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521134 data_alloc: 234881024 data_used: 18034688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 52969472 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 52969472 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 52969472 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e96c6000/0x0/0x4ffc00000, data 0x1db184f/0x1f48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 52969472 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 52969472 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519938 data_alloc: 234881024 data_used: 18042880
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e96a5000/0x0/0x4ffc00000, data 0x1dd284f/0x1f69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3519938 data_alloc: 234881024 data_used: 18042880
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da193a5a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949800 session 0x562d9f4b0d20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562d9f8d83c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335536128 unmapped: 51920896 heap: 387457024 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562d9f8d9680
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.862783432s of 15.095202446s, submitted: 84
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d82000/0x0/0x4ffc00000, data 0x26f3888/0x288c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562d9f8d9c20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f948800 session 0x562da1c8ad20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949800 session 0x562da068d2c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562da1e9bc20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f8d8d20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3593837 data_alloc: 234881024 data_used: 18042880
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da202bc20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f948800 session 0x562da2063e00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3593837 data_alloc: 234881024 data_used: 18042880
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334602240 unmapped: 54476800 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949800 session 0x562da2882780
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562d9f911860
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 54468608 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334610432 unmapped: 54468608 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3654474 data_alloc: 234881024 data_used: 26480640
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.848459244s of 18.252662659s, submitted: 42
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3654826 data_alloc: 234881024 data_used: 26480640
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d7c000/0x0/0x4ffc00000, data 0x26f98c1/0x2892000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 51576832 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338001920 unmapped: 51077120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8350000/0x0/0x4ffc00000, data 0x31258c1/0x32be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338354176 unmapped: 50724864 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e834b000/0x0/0x4ffc00000, data 0x31298c1/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e834b000/0x0/0x4ffc00000, data 0x31298c1/0x32c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3750220 data_alloc: 234881024 data_used: 27578368
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8341000/0x0/0x4ffc00000, data 0x31338c1/0x32cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8341000/0x0/0x4ffc00000, data 0x31338c1/0x32cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3750220 data_alloc: 234881024 data_used: 27578368
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.315657616s of 12.759304047s, submitted: 82
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 50987008 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8341000/0x0/0x4ffc00000, data 0x31338c1/0x32cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 50970624 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3752140 data_alloc: 234881024 data_used: 27672576
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 50970624 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562d9f4b1c20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da060a000
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562da03d0960
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533874 data_alloc: 234881024 data_used: 18051072
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e95c7000/0x0/0x4ffc00000, data 0x1de484f/0x1f7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 50929664 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da1f85860
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da16f41e0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338173952 unmapped: 50905088 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.686920166s of 10.025838852s, submitted: 74
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f948800 session 0x562da16f54a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334897152 unmapped: 54181888 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334905344 unmapped: 54173696 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 54165504 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 54165504 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 54165504 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334913536 unmapped: 54165504 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334921728 unmapped: 54157312 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea5be000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334929920 unmapped: 54149120 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344494 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334938112 unmapped: 54140928 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334938112 unmapped: 54140928 heap: 389079040 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.946887970s of 49.407531738s, submitted: 22
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343089152 unmapped: 49668096 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd3000/0x0/0x4ffc00000, data 0x16a5806/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,4,0,5])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da068d2c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da14b4f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da16f4b40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 57794560 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562d9f4af4a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949800 session 0x562da1f19c20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 57794560 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411752 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 57794560 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 57794560 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd3000/0x0/0x4ffc00000, data 0x16a583f/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334962688 unmapped: 57794560 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334970880 unmapped: 57786368 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334970880 unmapped: 57786368 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411752 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334970880 unmapped: 57786368 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334970880 unmapped: 57786368 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334970880 unmapped: 57786368 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd3000/0x0/0x4ffc00000, data 0x16a583f/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da1709a40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 57778176 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562d9f9885a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 57778176 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd3000/0x0/0x4ffc00000, data 0x16a583f/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411752 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334979072 unmapped: 57778176 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da17081e0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.625880241s of 14.252739906s, submitted: 39
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562d9edd54a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 334995456 unmapped: 57761792 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd2000/0x0/0x4ffc00000, data 0x16a5862/0x183c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414638 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd2000/0x0/0x4ffc00000, data 0x16a5862/0x183c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472558 data_alloc: 218103808 data_used: 14942208
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd2000/0x0/0x4ffc00000, data 0x16a5862/0x183c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd2000/0x0/0x4ffc00000, data 0x16a5862/0x183c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9dd2000/0x0/0x4ffc00000, data 0x16a5862/0x183c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 335003648 unmapped: 57753600 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.789023399s of 13.874046326s, submitted: 11
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498930 data_alloc: 218103808 data_used: 14954496
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336822272 unmapped: 55934976 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9804000/0x0/0x4ffc00000, data 0x1c6b862/0x1e02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336822272 unmapped: 55934976 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e97d9000/0x0/0x4ffc00000, data 0x1c96862/0x1e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3530596 data_alloc: 218103808 data_used: 14950400
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e97d9000/0x0/0x4ffc00000, data 0x1c96862/0x1e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 54878208 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 56598528 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 56598528 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525844 data_alloc: 218103808 data_used: 14954496
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e97de000/0x0/0x4ffc00000, data 0x1c99862/0x1e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525844 data_alloc: 218103808 data_used: 14954496
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e97de000/0x0/0x4ffc00000, data 0x1c99862/0x1e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da03d0d20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da1701e00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da18b4780
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da202ad20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 56590336 heap: 392757248 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.834115982s of 18.167894363s, submitted: 75
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da17094a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3645462 data_alloc: 218103808 data_used: 14954496
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8921000/0x0/0x4ffc00000, data 0x2b56862/0x2ced000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3645462 data_alloc: 218103808 data_used: 14954496
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 63586304 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 63578112 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8921000/0x0/0x4ffc00000, data 0x2b56862/0x2ced000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 63578112 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3171c00 session 0x562da19063c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 63578112 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562d9f8d61e0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da02fc000
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 63578112 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3645462 data_alloc: 218103808 data_used: 14954496
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 63578112 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da1771680
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337584128 unmapped: 63569920 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8921000/0x0/0x4ffc00000, data 0x2b56862/0x2ced000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.322172165s of 14.704947472s, submitted: 23
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 337592320 unmapped: 63561728 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 338812928 unmapped: 62341120 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3745354 data_alloc: 234881024 data_used: 28442624
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e891f000/0x0/0x4ffc00000, data 0x2b57862/0x2cee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3745354 data_alloc: 234881024 data_used: 28442624
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 344219648 unmapped: 56934400 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e891f000/0x0/0x4ffc00000, data 0x2b57862/0x2cee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.767458916s of 11.075243950s, submitted: 3
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 54853632 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349028352 unmapped: 52125696 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e81dc000/0x0/0x4ffc00000, data 0x329b862/0x3432000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1,19])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3820578 data_alloc: 234881024 data_used: 28479488
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 52232192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348921856 unmapped: 52232192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7ef2000/0x0/0x4ffc00000, data 0x3585862/0x371c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349822976 unmapped: 51331072 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350044160 unmapped: 51109888 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7ea3000/0x0/0x4ffc00000, data 0x35d3862/0x376a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [0,0,0,0,2])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350265344 unmapped: 50888704 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3842816 data_alloc: 234881024 data_used: 31006720
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 50855936 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 50855936 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e6d000/0x0/0x4ffc00000, data 0x3609862/0x37a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350298112 unmapped: 50855936 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e6d000/0x0/0x4ffc00000, data 0x3609862/0x37a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.121047020s of 11.344943047s, submitted: 137
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3841648 data_alloc: 234881024 data_used: 31023104
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4e000/0x0/0x4ffc00000, data 0x3629862/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4e000/0x0/0x4ffc00000, data 0x3629862/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3841648 data_alloc: 234881024 data_used: 31023104
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4e000/0x0/0x4ffc00000, data 0x3629862/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350330880 unmapped: 50823168 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3841648 data_alloc: 234881024 data_used: 31023104
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4e000/0x0/0x4ffc00000, data 0x3629862/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.616966248s of 10.726793289s, submitted: 2
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350339072 unmapped: 50814976 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 50806784 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3842188 data_alloc: 234881024 data_used: 31031296
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 50806784 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4b000/0x0/0x4ffc00000, data 0x362c862/0x37c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 50806784 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4b000/0x0/0x4ffc00000, data 0x362c862/0x37c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 50806784 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350347264 unmapped: 50806784 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 50798592 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da20630e0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3844268 data_alloc: 234881024 data_used: 31129600
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7e4b000/0x0/0x4ffc00000, data 0x362c862/0x37c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350355456 unmapped: 50798592 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.174716949s of 10.465250015s, submitted: 3
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da17010e0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 54419456 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e97db000/0x0/0x4ffc00000, data 0x1c9c862/0x1e33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 54419456 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 54419456 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 54419456 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3537040 data_alloc: 218103808 data_used: 14401536
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562da02c12c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da14b4b40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 54411264 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da19825a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba800/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369418 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9acb000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da14b4960
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340800 session 0x562da1e5c5a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 61743104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da1906f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562d9f8d92c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.645301819s of 40.069389343s, submitted: 91
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da1e9be00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 61734912 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da285cd20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da0340c00 session 0x562da16f52c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562d9f8d81e0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562da285cd20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea133000/0x0/0x4ffc00000, data 0x134484e/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405689 data_alloc: 218103808 data_used: 8122368
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea133000/0x0/0x4ffc00000, data 0x134484e/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f8d92c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3405689 data_alloc: 218103808 data_used: 8122368
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1906f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da14b4960
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.455097198s of 10.046999931s, submitted: 23
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea132000/0x0/0x4ffc00000, data 0x134485e/0x14dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da19825a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea132000/0x0/0x4ffc00000, data 0x134485e/0x14dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea132000/0x0/0x4ffc00000, data 0x134485e/0x14dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429607 data_alloc: 218103808 data_used: 11268096
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea132000/0x0/0x4ffc00000, data 0x134485e/0x14dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440967 data_alloc: 218103808 data_used: 12881920
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea132000/0x0/0x4ffc00000, data 0x134485e/0x14dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.729443550s of 13.749686241s, submitted: 1
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 61726720 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 nova_compute[260603]: 2025-10-02 09:41:51.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473237 data_alloc: 218103808 data_used: 12881920
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343187456 unmapped: 57966592 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343236608 unmapped: 57917440 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98fb000/0x0/0x4ffc00000, data 0x1b7b85e/0x1d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98fb000/0x0/0x4ffc00000, data 0x1b7b85e/0x1d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520797 data_alloc: 218103808 data_used: 13578240
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520013 data_alloc: 218103808 data_used: 13590528
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98f8000/0x0/0x4ffc00000, data 0x1b7e85e/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98f8000/0x0/0x4ffc00000, data 0x1b7e85e/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343244800 unmapped: 57909248 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98f8000/0x0/0x4ffc00000, data 0x1b7e85e/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 57901056 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520013 data_alloc: 218103808 data_used: 13590528
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 57901056 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 57901056 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e98f8000/0x0/0x4ffc00000, data 0x1b7e85e/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 57901056 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 343252992 unmapped: 57901056 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da1f85c20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562d9f8d63c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da1ad4d20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562d9f9103c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.446899414s of 19.630510330s, submitted: 91
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da18b4b40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da170ba40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562d9f988f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da1a1b2c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562d9f8d6780
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 55746560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577041 data_alloc: 218103808 data_used: 13590528
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 55746560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345407488 unmapped: 55746560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e924b000/0x0/0x4ffc00000, data 0x222a86e/0x23c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577041 data_alloc: 218103808 data_used: 13590528
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e924b000/0x0/0x4ffc00000, data 0x222a86e/0x23c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345415680 unmapped: 55738368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da27ce5a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 55697408 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345456640 unmapped: 55697408 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9226000/0x0/0x4ffc00000, data 0x224e891/0x23e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580874 data_alloc: 218103808 data_used: 13598720
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345464832 unmapped: 55689216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345423872 unmapped: 55730176 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.688966751s of 13.939682961s, submitted: 17
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9226000/0x0/0x4ffc00000, data 0x224e891/0x23e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623130 data_alloc: 234881024 data_used: 19468288
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9226000/0x0/0x4ffc00000, data 0x224e891/0x23e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9226000/0x0/0x4ffc00000, data 0x224e891/0x23e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623130 data_alloc: 234881024 data_used: 19468288
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345808896 unmapped: 55345152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348127232 unmapped: 53026816 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348667904 unmapped: 52486144 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8066000/0x0/0x4ffc00000, data 0x2ffe891/0x3198000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.595734596s of 10.033220291s, submitted: 70
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 52412416 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348741632 unmapped: 52412416 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3736804 data_alloc: 234881024 data_used: 20041728
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e801c000/0x0/0x4ffc00000, data 0x3048891/0x31e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739248 data_alloc: 234881024 data_used: 20037632
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e801c000/0x0/0x4ffc00000, data 0x3048891/0x31e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348880896 unmapped: 52273152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da202a5a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da1e9a5a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.884395599s of 11.480986595s, submitted: 23
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 55320576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da25825a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3534844 data_alloc: 218103808 data_used: 13668352
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 55320576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 55320576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e94c3000/0x0/0x4ffc00000, data 0x1b7e85e/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345833472 unmapped: 55320576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9facd000 session 0x562da02c12c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f4b14a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345849856 unmapped: 55304192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54247424 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9895000/0x0/0x4ffc00000, data 0x114f84f/0x12e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3393446 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54247424 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.958738804s of 10.038612366s, submitted: 58
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da1ae43c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 54296576 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 54288384 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 54280192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 54280192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 54280192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 54280192 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 54272000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 54263808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 54255616 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 54255616 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 54255616 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 54255616 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3392545 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 54255616 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54247424 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 54247424 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.313434601s of 44.072120667s, submitted: 4
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 53207040 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4ea1af000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,5])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,6,5])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 48996352 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3451037 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350060544 unmapped: 51093504 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54239232 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da03d1e00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54239232 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54239232 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 54239232 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3437597 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54231040 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54231040 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54231040 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.686579227s of 10.136670113s, submitted: 14
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 54231040 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 54222848 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3438038 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da1708b40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 54222848 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 54222848 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467930 data_alloc: 218103808 data_used: 12324864
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3467930 data_alloc: 218103808 data_used: 12324864
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9c96000/0x0/0x4ffc00000, data 0x13d37dd/0x1568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 53174272 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.153639793s of 14.558044434s, submitted: 3
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347774976 unmapped: 53379072 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492032 data_alloc: 218103808 data_used: 12374016
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9758000/0x0/0x4ffc00000, data 0x19117dd/0x1aa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e9758000/0x0/0x4ffc00000, data 0x19117dd/0x1aa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509594 data_alloc: 218103808 data_used: 12570624
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347447296 unmapped: 53706752 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514002 data_alloc: 218103808 data_used: 12632064
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347455488 unmapped: 53698560 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514002 data_alloc: 218103808 data_used: 12632064
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514002 data_alloc: 218103808 data_used: 12632064
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 53690368 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 53682176 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 53682176 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 53682176 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514002 data_alloc: 218103808 data_used: 12632064
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347471872 unmapped: 53682176 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e974c000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514002 data_alloc: 218103808 data_used: 12632064
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1f185a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1d35680
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da03d0000
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562da03d0d20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.753906250s of 35.480411530s, submitted: 49
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 43393024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da202ad20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d806/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da1907680
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 53673984 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3593951 data_alloc: 218103808 data_used: 12632064
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 53665792 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 53665792 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347488256 unmapped: 53665792 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3593951 data_alloc: 218103808 data_used: 12632064
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347496448 unmapped: 53657600 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3605151 data_alloc: 218103808 data_used: 14209024
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659071 data_alloc: 234881024 data_used: 21536768
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.032077789s of 21.478837967s, submitted: 37
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690449 data_alloc: 234881024 data_used: 21581824
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,18])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 50298880 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85d6000/0x0/0x4ffc00000, data 0x2a8c83f/0x2c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85b2000/0x0/0x4ffc00000, data 0x2aad83f/0x2c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,4])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50233344 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733393 data_alloc: 234881024 data_used: 22216704
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740617 data_alloc: 234881024 data_used: 22421504
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.158976555s of 12.613157272s, submitted: 94
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da170be00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3738385 data_alloc: 234881024 data_used: 22425600
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,3])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8a9b000/0x0/0x4ffc00000, data 0x192083f/0x1ab6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da060d680
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525679 data_alloc: 218103808 data_used: 12595200
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525679 data_alloc: 218103808 data_used: 12595200
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562d9fa010e0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.587030411s of 13.759822845s, submitted: 32
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562d9f4b05a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413519 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x11d77dd/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349446144 unmapped: 51707904 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f910f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.1 total, 600.0 interval
Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
Cumulative WAL: 46K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2280 writes, 9757 keys, 2280 commit groups, 1.0 writes per commit group, ingest: 10.53 MB, 0.02 MB/s
Interval WAL: 2280 writes, 833 syncs, 2.74 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1cb7800 session 0x562da1ad5680
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: mgrc ms_handle_reset ms_handle_reset con 0x562da328f400
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: mgrc handle_mgr_configure stats_period=5
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da40e3400 session 0x562da1f84960
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da02c0d20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1a1a000
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1700f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da02fc960
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562d9f8d9c20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.360366821s of 48.620044708s, submitted: 32
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 43769856 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da1e9be00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da03d0000
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1f185a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473234 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1ae43c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1e9a5a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8904000/0x0/0x4ffc00000, data 0x15c57dd/0x175a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473010 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3298400 session 0x562d9f8d6780
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8904000/0x0/0x4ffc00000, data 0x15c57dd/0x175a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1a1b2c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562d9f988f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da18b4b40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 50257920 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 50257920 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529512 data_alloc: 218103808 data_used: 15495168
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529512 data_alloc: 218103808 data_used: 15495168
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.683063507s of 20.067323685s, submitted: 24
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565370 data_alloc: 218103808 data_used: 15523840
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 49455104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350699520 unmapped: 50454528 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600800 data_alloc: 218103808 data_used: 15843328
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600800 data_alloc: 218103808 data_used: 15843328
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.367254257s of 13.001447678s, submitted: 92
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 49397760 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 49397760 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 49389568 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3677723 data_alloc: 218103808 data_used: 15843328
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562d9f8d7c20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x2514816/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da170b2c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3656683 data_alloc: 218103808 data_used: 15843328
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.334874630s of 10.473359108s, submitted: 68
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3658552 data_alloc: 218103808 data_used: 15843328
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da2063860
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 52977664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 52936704 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 52928512 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703584 data_alloc: 234881024 data_used: 22114304
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703584 data_alloc: 234881024 data_used: 22114304
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.022311211s of 15.682323456s, submitted: 60
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,2,2,2])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 52387840 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 51822592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744446 data_alloc: 234881024 data_used: 22147072
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74cd000/0x0/0x4ffc00000, data 0x29f9872/0x2b91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,2])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 51077120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 51740672 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3750872 data_alloc: 234881024 data_used: 22351872
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 51740672 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e739e000/0x0/0x4ffc00000, data 0x2b28872/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e739e000/0x0/0x4ffc00000, data 0x2b28872/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.567046165s of 10.198050499s, submitted: 62
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7359000/0x0/0x4ffc00000, data 0x2b6d872/0x2d05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754768 data_alloc: 234881024 data_used: 22589440
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7320000/0x0/0x4ffc00000, data 0x2ba6872/0x2d3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757992 data_alloc: 234881024 data_used: 22646784
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e731b000/0x0/0x4ffc00000, data 0x2bab872/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e731b000/0x0/0x4ffc00000, data 0x2bab872/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762988 data_alloc: 234881024 data_used: 22687744
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7317000/0x0/0x4ffc00000, data 0x2baf872/0x2d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7317000/0x0/0x4ffc00000, data 0x2baf872/0x2d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.851724625s of 15.161529541s, submitted: 25
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 50421760 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3764976 data_alloc: 234881024 data_used: 22679552
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72fa000/0x0/0x4ffc00000, data 0x2bcc872/0x2d64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72fa000/0x0/0x4ffc00000, data 0x2bcc872/0x2d64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 50356224 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762328 data_alloc: 234881024 data_used: 22679552
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da060ab40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 50356224 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72f2000/0x0/0x4ffc00000, data 0x2bd4872/0x2d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.166407108s of 10.001169205s, submitted: 24
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72f2000/0x0/0x4ffc00000, data 0x2bd4872/0x2d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762224 data_alloc: 234881024 data_used: 22679552
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 50339840 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8106000/0x0/0x4ffc00000, data 0x1dc0872/0x1f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 50331648 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 50323456 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da2582d20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 50323456 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8106000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 50315264 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3606314 data_alloc: 218103808 data_used: 15831040
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1ad4d20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da068cb40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.448636055s of 10.010467529s, submitted: 54
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8108000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 52862976 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8fea000/0x0/0x4ffc00000, data 0xede7ed/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 52854784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440388 data_alloc: 218103808 data_used: 8228864
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 52854784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da03d1e00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 52813824 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.401294708s of 43.381404877s, submitted: 17
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562d9f989a40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562d9f4b14a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562d9f989680
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1e9b0e0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da16f41e0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479417 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6f000/0x0/0x4ffc00000, data 0x135983f/0x14ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479417 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6f000/0x0/0x4ffc00000, data 0x135983f/0x14ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da202be00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da16f52c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da170be00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.937417030s of 13.075335503s, submitted: 42
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1f85c20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481740 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515152 data_alloc: 218103808 data_used: 12677120
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515152 data_alloc: 218103808 data_used: 12677120
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 52584448 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.298130989s of 12.432414055s, submitted: 6
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8330000/0x0/0x4ffc00000, data 0x1b96872/0x1d2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585360 data_alloc: 218103808 data_used: 13922304
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82ea000/0x0/0x4ffc00000, data 0x1bdc872/0x1d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585680 data_alloc: 218103808 data_used: 13930496
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583492 data_alloc: 218103808 data_used: 13934592
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.958930016s of 17.081628799s, submitted: 105
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da14b4f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642870 data_alloc: 218103808 data_used: 13934592
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642870 data_alloc: 218103808 data_used: 13934592
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562d9f8d9a40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643467 data_alloc: 218103808 data_used: 13934592
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.500934601s of 15.522595406s, submitted: 17
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692123 data_alloc: 234881024 data_used: 20611072
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692123 data_alloc: 234881024 data_used: 20611072
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 45514752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358465536 unmapped: 46366720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.707262039s of 23.078323364s, submitted: 83
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562da16f4f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562da14b4b40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c2000/0x0/0x4ffc00000, data 0x1c03872/0x1d9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595381 data_alloc: 218103808 data_used: 13934592
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82bb000/0x0/0x4ffc00000, data 0x1c0a872/0x1da2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.594208717s of 11.749836922s, submitted: 32
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da202a780
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da18b5680
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 51036160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594709 data_alloc: 218103808 data_used: 13930496
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da1f18b40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 49831936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 49823744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 49823744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1904f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562d9f8da5a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da202a3c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da1a1bc20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 100.757339478s of 102.810211182s, submitted: 53
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 49815552 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1708960
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1770b40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562da25825a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da14b4780
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da03d0f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495520 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8be5000/0x0/0x4ffc00000, data 0x12e37ed/0x1479000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da02fc000
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3522193 data_alloc: 218103808 data_used: 11350016
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8bc1000/0x0/0x4ffc00000, data 0x13077ed/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da068c1e0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562d9f9885a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.136798859s of 10.462966919s, submitted: 20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 52199424 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562da2583860
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 52166656 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.090339661s of 30.667470932s, submitted: 30
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 52142080 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 290 handle_osd_map epochs [291,291], i have 291, src has [1,291]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 291 ms_handle_reset con 0x562da18b8800 session 0x562d9f9112c0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462964 data_alloc: 218103808 data_used: 8126464
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462964 data_alloc: 218103808 data_used: 8126464
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 52068352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 52068352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 52035584 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 52035584 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 52019200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 52019200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 51978240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 51978240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 51961856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 51945472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352960512 unmapped: 51871744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 123.357223511s of 123.813163757s, submitted: 62
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 ms_handle_reset con 0x562da192d800 session 0x562da068c960
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 ms_handle_reset con 0x562da1d55400 session 0x562da1ebbc20
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 51855360 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466082 data_alloc: 218103808 data_used: 9117696
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466082 data_alloc: 218103808 data_used: 9117696
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 51830784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.466549873s of 10.502127647s, submitted: 11
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 293 ms_handle_reset con 0x562da230f800 session 0x562da060cb40
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9479000/0x0/0x4ffc00000, data 0xa4f97d/0xbe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429452 data_alloc: 218103808 data_used: 4472832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 294 ms_handle_reset con 0x562da192d000 session 0x562d9f910f00
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9c77000/0x0/0x4ffc00000, data 0x251508/0x3e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3358060 data_alloc: 218103808 data_used: 1130496
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9c77000/0x0/0x4ffc00000, data 0x251508/0x3e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360842 data_alloc: 218103808 data_used: 1130496
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e9c75000/0x0/0x4ffc00000, data 0x252f6b/0x3e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.849654198s of 16.112354279s, submitted: 77
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 ms_handle_reset con 0x562da18b8800 session 0x562da193a5a0
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 48K writes, 187K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1591 writes, 5946 keys, 1591 commit groups, 1.0 writes per commit group, ingest: 5.99 MB, 0.01 MB/s
Interval WAL: 1591 writes, 616 syncs, 2.58 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtabl
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346030080 unmapped: 58802176 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 58744832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 58703872 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 58695680 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.635398865s of 97.845252991s, submitted: 11
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 58662912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 58638336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct  2 05:41:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/382984171' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368383 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370197 data_alloc: 218103808 data_used: 1138688
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c3c/0x3ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.818475723s of 10.272746086s, submitted: 91
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 297 ms_handle_reset con 0x562da192d800 session 0x562da202a960
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 58564608 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 297 handle_osd_map epochs [298,298], i have 298, src has [1,298]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 ms_handle_reset con 0x562da1d55400 session 0x562da2063860
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464650 data_alloc: 218103808 data_used: 1150976
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.875930786s of 35.106796265s, submitted: 17
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xec9f07/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464947 data_alloc: 218103808 data_used: 1155072
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 299 ms_handle_reset con 0x562da230f800 session 0x562d9f4b0780
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec9dc3/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346578944 unmapped: 66650112 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 66600960 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 66600960 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346710016 unmapped: 66519040 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346800128 unmapped: 66428928 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346800128 unmapped: 66428928 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346808320 unmapped: 66420736 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346808320 unmapped: 66420736 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346841088 unmapped: 66387968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 66347008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 66347008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 66338816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:41:51 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 66338816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:46:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:46:40 np0005465604 rsyslogd[1004]: imjournal: 15066 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  2 05:46:40 np0005465604 nova_compute[260603]: 2025-10-02 09:46:40.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:46:40 np0005465604 nova_compute[260603]: 2025-10-02 09:46:40.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:46:40 np0005465604 nova_compute[260603]: 2025-10-02 09:46:40.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:46:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:46:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:46:41 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.0 total, 600.0 interval#012Cumulative writes: 17K writes, 80K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s#012Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1370 writes, 6517 keys, 1370 commit groups, 1.0 writes per commit group, ingest: 8.76 MB, 0.01 MB/s#012Interval WAL: 1370 writes, 1370 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     67.3      1.47              0.38        60    0.024       0      0       0.0       0.0#012  L6      1/0    9.21 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2    123.3    105.0      4.90              1.84        59    0.083    423K    31K       0.0       0.0#012 Sum      1/0    9.21 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     94.9     96.4      6.37              2.22       119    0.054    423K    31K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.4     73.1     72.8      0.88              0.20        12    0.073     58K   3005       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    123.3    105.0      4.90              1.84        59    0.083    423K    31K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     67.5      1.46              0.38        59    0.025       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.1      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.0 total, 600.0 interval#012Flush(GB): cumulative 0.096, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.60 GB write, 0.09 MB/s write, 0.59 GB read, 0.08 MB/s read, 6.4 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a653c11f0#2 capacity: 304.00 MB usage: 69.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000524 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4753,66.15 MB,21.76%) FilterBlock(120,1.15 MB,0.376646%) IndexBlock(120,1.83 MB,0.602928%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 05:46:41 np0005465604 nova_compute[260603]: 2025-10-02 09:46:41.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:46:42 np0005465604 nova_compute[260603]: 2025-10-02 09:46:42.284 2 DEBUG oslo_concurrency.processutils [None req-a6a311c5-f6fe-4054-b703-3b4765cc45ab bc7f869da0c24323b627eb876eb78945 db4e6e87e4854aa8b884db24f9410884 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:46:42 np0005465604 nova_compute[260603]: 2025-10-02 09:46:42.323 2 DEBUG oslo_concurrency.processutils [None req-a6a311c5-f6fe-4054-b703-3b4765cc45ab bc7f869da0c24323b627eb876eb78945 db4e6e87e4854aa8b884db24f9410884 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:46:42 np0005465604 nova_compute[260603]: 2025-10-02 09:46:42.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:46:42 np0005465604 nova_compute[260603]: 2025-10-02 09:46:42.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:46:42 np0005465604 nova_compute[260603]: 2025-10-02 09:46:42.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:46:42 np0005465604 nova_compute[260603]: 2025-10-02 09:46:42.648 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:46:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:46:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:46:45 np0005465604 nova_compute[260603]: 2025-10-02 09:46:45.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:46:45 np0005465604 nova_compute[260603]: 2025-10-02 09:46:45.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:46:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:46:46 np0005465604 nova_compute[260603]: 2025-10-02 09:46:46.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:46:46 np0005465604 podman[471297]: 2025-10-02 09:46:46.993086149 +0000 UTC m=+0.053846743 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:46:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:46:47 np0005465604 podman[471296]: 2025-10-02 09:46:47.033490011 +0000 UTC m=+0.095509524 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 05:46:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:46:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:46:50.374 162357 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '5e:32:d0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ea:f0:cb:d0:80:37'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 05:46:50 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:46:50.374 162357 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 05:46:50 np0005465604 nova_compute[260603]: 2025-10-02 09:46:50.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:46:50 np0005465604 nova_compute[260603]: 2025-10-02 09:46:50.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:46:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:46:51 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 627f986b-76e2-41d2-8256-c4069866b0c0 does not exist
Oct  2 05:46:51 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 98aee177-b757-49b3-9496-03bbef2db6b5 does not exist
Oct  2 05:46:51 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 90c80edb-b1c4-4633-bcd2-786e7959b46e does not exist
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:46:51 np0005465604 podman[471499]: 2025-10-02 09:46:51.244155879 +0000 UTC m=+0.065639102 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid)
Oct  2 05:46:51 np0005465604 podman[471498]: 2025-10-02 09:46:51.266646921 +0000 UTC m=+0.088277998 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  2 05:46:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:46:51 np0005465604 nova_compute[260603]: 2025-10-02 09:46:51.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:46:51 np0005465604 podman[471653]: 2025-10-02 09:46:51.782141103 +0000 UTC m=+0.049287161 container create 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 05:46:51 np0005465604 systemd[1]: Started libpod-conmon-77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb.scope.
Oct  2 05:46:51 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:46:51 np0005465604 podman[471653]: 2025-10-02 09:46:51.759304469 +0000 UTC m=+0.026450567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:46:51 np0005465604 podman[471653]: 2025-10-02 09:46:51.869930825 +0000 UTC m=+0.137076913 container init 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 05:46:51 np0005465604 podman[471653]: 2025-10-02 09:46:51.878919776 +0000 UTC m=+0.146065824 container start 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 05:46:51 np0005465604 podman[471653]: 2025-10-02 09:46:51.881815606 +0000 UTC m=+0.148961684 container attach 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:46:51 np0005465604 optimistic_gauss[471669]: 167 167
Oct  2 05:46:51 np0005465604 systemd[1]: libpod-77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb.scope: Deactivated successfully.
Oct  2 05:46:51 np0005465604 conmon[471669]: conmon 77a8a5415d4ca880c4bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb.scope/container/memory.events
Oct  2 05:46:51 np0005465604 podman[471653]: 2025-10-02 09:46:51.887280187 +0000 UTC m=+0.154426245 container died 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct  2 05:46:51 np0005465604 systemd[1]: var-lib-containers-storage-overlay-046cb8a83cd8dedf312c70306076d70717da068836e54d2c071984e3b5da90c9-merged.mount: Deactivated successfully.
Oct  2 05:46:51 np0005465604 podman[471653]: 2025-10-02 09:46:51.922729934 +0000 UTC m=+0.189875992 container remove 77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gauss, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:46:51 np0005465604 systemd[1]: libpod-conmon-77a8a5415d4ca880c4bd82a616580206e986468d70640ca0094c2fdf4150e8bb.scope: Deactivated successfully.
Oct  2 05:46:52 np0005465604 podman[471694]: 2025-10-02 09:46:52.072467731 +0000 UTC m=+0.039514045 container create 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 05:46:52 np0005465604 systemd[1]: Started libpod-conmon-30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455.scope.
Oct  2 05:46:52 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:46:52 np0005465604 podman[471694]: 2025-10-02 09:46:52.055955365 +0000 UTC m=+0.023001699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:46:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:52 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:52 np0005465604 podman[471694]: 2025-10-02 09:46:52.164505166 +0000 UTC m=+0.131551500 container init 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  2 05:46:52 np0005465604 podman[471694]: 2025-10-02 09:46:52.172840416 +0000 UTC m=+0.139886730 container start 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:46:52 np0005465604 podman[471694]: 2025-10-02 09:46:52.176048026 +0000 UTC m=+0.143094340 container attach 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:46:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:46:53 np0005465604 epic_gould[471710]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:46:53 np0005465604 epic_gould[471710]: --> relative data size: 1.0
Oct  2 05:46:53 np0005465604 epic_gould[471710]: --> All data devices are unavailable
Oct  2 05:46:53 np0005465604 systemd[1]: libpod-30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455.scope: Deactivated successfully.
Oct  2 05:46:53 np0005465604 systemd[1]: libpod-30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455.scope: Consumed 1.048s CPU time.
Oct  2 05:46:53 np0005465604 podman[471694]: 2025-10-02 09:46:53.273742432 +0000 UTC m=+1.240788796 container died 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  2 05:46:53 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5a157fdaff2c0e0fa536138cef511e1952993b6fd1b838fd1b86d7c0dfa6e377-merged.mount: Deactivated successfully.
Oct  2 05:46:53 np0005465604 podman[471694]: 2025-10-02 09:46:53.341087406 +0000 UTC m=+1.308133740 container remove 30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:46:53 np0005465604 systemd[1]: libpod-conmon-30fa8646f2870da0c30818c92a4351e15597d3b93a187103b47468ac82193455.scope: Deactivated successfully.
Oct  2 05:46:54 np0005465604 podman[471896]: 2025-10-02 09:46:54.100544117 +0000 UTC m=+0.037973947 container create 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 05:46:54 np0005465604 systemd[1]: Started libpod-conmon-9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27.scope.
Oct  2 05:46:54 np0005465604 podman[471896]: 2025-10-02 09:46:54.083836755 +0000 UTC m=+0.021266585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:46:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:46:54 np0005465604 podman[471896]: 2025-10-02 09:46:54.206202907 +0000 UTC m=+0.143632747 container init 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 05:46:54 np0005465604 podman[471896]: 2025-10-02 09:46:54.218647016 +0000 UTC m=+0.156076826 container start 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 05:46:54 np0005465604 podman[471896]: 2025-10-02 09:46:54.222152015 +0000 UTC m=+0.159581825 container attach 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  2 05:46:54 np0005465604 reverent_darwin[471912]: 167 167
Oct  2 05:46:54 np0005465604 systemd[1]: libpod-9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27.scope: Deactivated successfully.
Oct  2 05:46:54 np0005465604 podman[471896]: 2025-10-02 09:46:54.225003844 +0000 UTC m=+0.162433654 container died 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:46:54 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b17dfa2ce81cd694a479585a569ba7c62f074412efc0100bef3821039b5b8208-merged.mount: Deactivated successfully.
Oct  2 05:46:54 np0005465604 podman[471896]: 2025-10-02 09:46:54.259558354 +0000 UTC m=+0.196988174 container remove 9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  2 05:46:54 np0005465604 systemd[1]: libpod-conmon-9d0bf1b05b1d06b8309edbe3a9de3bae054e459386101f5c189fdc135f012d27.scope: Deactivated successfully.
Oct  2 05:46:54 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:46:54.377 162357 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=45c6349c-a870-4e27-8117-4ccd02005c80, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 05:46:54 np0005465604 podman[471936]: 2025-10-02 09:46:54.44480584 +0000 UTC m=+0.053189633 container create 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 05:46:54 np0005465604 systemd[1]: Started libpod-conmon-22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8.scope.
Oct  2 05:46:54 np0005465604 podman[471936]: 2025-10-02 09:46:54.423590508 +0000 UTC m=+0.031974331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:46:54 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:46:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989dfb4a662bbcf3070ae5fae1c698266e66539c21074496caf4d5c1c5f5b61c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989dfb4a662bbcf3070ae5fae1c698266e66539c21074496caf4d5c1c5f5b61c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989dfb4a662bbcf3070ae5fae1c698266e66539c21074496caf4d5c1c5f5b61c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:54 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989dfb4a662bbcf3070ae5fae1c698266e66539c21074496caf4d5c1c5f5b61c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:54 np0005465604 podman[471936]: 2025-10-02 09:46:54.558648775 +0000 UTC m=+0.167032668 container init 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:46:54 np0005465604 podman[471936]: 2025-10-02 09:46:54.568781452 +0000 UTC m=+0.177165235 container start 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:46:54 np0005465604 podman[471936]: 2025-10-02 09:46:54.572296142 +0000 UTC m=+0.180680025 container attach 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:46:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:46:55 np0005465604 cool_leakey[471953]: {
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:    "0": [
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:        {
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "devices": [
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "/dev/loop3"
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            ],
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_name": "ceph_lv0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_size": "21470642176",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "name": "ceph_lv0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "tags": {
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.cluster_name": "ceph",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.crush_device_class": "",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.encrypted": "0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.osd_id": "0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.type": "block",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.vdo": "0"
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            },
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "type": "block",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "vg_name": "ceph_vg0"
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:        }
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:    ],
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:    "1": [
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:        {
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "devices": [
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "/dev/loop4"
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            ],
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_name": "ceph_lv1",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_size": "21470642176",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "name": "ceph_lv1",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "tags": {
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.cluster_name": "ceph",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.crush_device_class": "",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.encrypted": "0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.osd_id": "1",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.type": "block",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.vdo": "0"
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            },
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "type": "block",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "vg_name": "ceph_vg1"
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:        }
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:    ],
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:    "2": [
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:        {
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "devices": [
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "/dev/loop5"
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            ],
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_name": "ceph_lv2",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_size": "21470642176",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "name": "ceph_lv2",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "tags": {
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.cluster_name": "ceph",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.crush_device_class": "",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.encrypted": "0",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.osd_id": "2",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.type": "block",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:                "ceph.vdo": "0"
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            },
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "type": "block",
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:            "vg_name": "ceph_vg2"
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:        }
Oct  2 05:46:55 np0005465604 cool_leakey[471953]:    ]
Oct  2 05:46:55 np0005465604 cool_leakey[471953]: }
Oct  2 05:46:55 np0005465604 systemd[1]: libpod-22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8.scope: Deactivated successfully.
Oct  2 05:46:55 np0005465604 podman[471936]: 2025-10-02 09:46:55.342602602 +0000 UTC m=+0.950986395 container died 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:46:55 np0005465604 systemd[1]: var-lib-containers-storage-overlay-989dfb4a662bbcf3070ae5fae1c698266e66539c21074496caf4d5c1c5f5b61c-merged.mount: Deactivated successfully.
Oct  2 05:46:55 np0005465604 podman[471936]: 2025-10-02 09:46:55.403635358 +0000 UTC m=+1.012019151 container remove 22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 05:46:55 np0005465604 systemd[1]: libpod-conmon-22110607f99a60e2a24da8aec8b2bd56c08b4f6bfdb1641aa08d6a81df7cd3a8.scope: Deactivated successfully.
Oct  2 05:46:55 np0005465604 nova_compute[260603]: 2025-10-02 09:46:55.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:46:55 np0005465604 podman[472115]: 2025-10-02 09:46:55.982992485 +0000 UTC m=+0.037253945 container create 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 05:46:56 np0005465604 systemd[1]: Started libpod-conmon-33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b.scope.
Oct  2 05:46:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:46:56 np0005465604 podman[472115]: 2025-10-02 09:46:55.968112989 +0000 UTC m=+0.022374469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:46:56 np0005465604 podman[472115]: 2025-10-02 09:46:56.117691322 +0000 UTC m=+0.171952792 container init 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:46:56 np0005465604 podman[472115]: 2025-10-02 09:46:56.125455494 +0000 UTC m=+0.179716954 container start 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:46:56 np0005465604 modest_pasteur[472131]: 167 167
Oct  2 05:46:56 np0005465604 systemd[1]: libpod-33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b.scope: Deactivated successfully.
Oct  2 05:46:56 np0005465604 conmon[472131]: conmon 33571b9efd492843dfae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b.scope/container/memory.events
Oct  2 05:46:56 np0005465604 podman[472115]: 2025-10-02 09:46:56.171204523 +0000 UTC m=+0.225465983 container attach 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 05:46:56 np0005465604 podman[472115]: 2025-10-02 09:46:56.171827503 +0000 UTC m=+0.226088983 container died 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:46:56 np0005465604 systemd[1]: var-lib-containers-storage-overlay-5fceb72004a65632ab61849edbaa4ae04ca26642dbb71e5079b694d05f6849c6-merged.mount: Deactivated successfully.
Oct  2 05:46:56 np0005465604 podman[472115]: 2025-10-02 09:46:56.251605454 +0000 UTC m=+0.305866914 container remove 33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_pasteur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:46:56 np0005465604 systemd[1]: libpod-conmon-33571b9efd492843dfaeaf51d56ec980114be68a82f207798f92f493bd12b16b.scope: Deactivated successfully.
Oct  2 05:46:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:46:56 np0005465604 podman[472157]: 2025-10-02 09:46:56.400173605 +0000 UTC m=+0.027980325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:46:56 np0005465604 nova_compute[260603]: 2025-10-02 09:46:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:46:56 np0005465604 nova_compute[260603]: 2025-10-02 09:46:56.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:46:56 np0005465604 podman[472157]: 2025-10-02 09:46:56.58637104 +0000 UTC m=+0.214177770 container create 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:46:56 np0005465604 systemd[1]: Started libpod-conmon-142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c.scope.
Oct  2 05:46:56 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:46:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee3e8e11e61ec0e19f96f4d3d1fe205899e7f5b620a6e66f4f78cd10122e146/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee3e8e11e61ec0e19f96f4d3d1fe205899e7f5b620a6e66f4f78cd10122e146/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee3e8e11e61ec0e19f96f4d3d1fe205899e7f5b620a6e66f4f78cd10122e146/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:56 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee3e8e11e61ec0e19f96f4d3d1fe205899e7f5b620a6e66f4f78cd10122e146/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:46:56 np0005465604 podman[472157]: 2025-10-02 09:46:56.742717383 +0000 UTC m=+0.370524103 container init 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 05:46:56 np0005465604 podman[472157]: 2025-10-02 09:46:56.754044137 +0000 UTC m=+0.381850837 container start 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:46:56 np0005465604 podman[472157]: 2025-10-02 09:46:56.844212673 +0000 UTC m=+0.472019403 container attach 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:46:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]: {
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "osd_id": 2,
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "type": "bluestore"
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:    },
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "osd_id": 1,
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "type": "bluestore"
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:    },
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "osd_id": 0,
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:        "type": "bluestore"
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]:    }
Oct  2 05:46:57 np0005465604 fervent_khorana[472174]: }
Oct  2 05:46:57 np0005465604 systemd[1]: libpod-142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c.scope: Deactivated successfully.
Oct  2 05:46:57 np0005465604 conmon[472174]: conmon 142787f91b4ddf2cbda7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c.scope/container/memory.events
Oct  2 05:46:57 np0005465604 podman[472157]: 2025-10-02 09:46:57.717111718 +0000 UTC m=+1.344918418 container died 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:46:57 np0005465604 systemd[1]: var-lib-containers-storage-overlay-6ee3e8e11e61ec0e19f96f4d3d1fe205899e7f5b620a6e66f4f78cd10122e146-merged.mount: Deactivated successfully.
Oct  2 05:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:46:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:46:58 np0005465604 podman[472157]: 2025-10-02 09:46:58.005843966 +0000 UTC m=+1.633650676 container remove 142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khorana, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:46:58 np0005465604 systemd[1]: libpod-conmon-142787f91b4ddf2cbda7d99d0375ebc3b7c840c57509b895d80a8a8b366fea5c.scope: Deactivated successfully.
Oct  2 05:46:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:46:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:46:58 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:46:58 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:46:58 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 972db762-3493-4804-b9bf-996eeb01c2be does not exist
Oct  2 05:46:58 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev efcb4192-6ccd-4183-afc8-fff0e1c2ff99 does not exist
Oct  2 05:46:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:46:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:46:59 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:47:00 np0005465604 nova_compute[260603]: 2025-10-02 09:47:00.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:01 np0005465604 nova_compute[260603]: 2025-10-02 09:47:01.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:03 np0005465604 nova_compute[260603]: 2025-10-02 09:47:03.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:47:03 np0005465604 nova_compute[260603]: 2025-10-02 09:47:03.518 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:47:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:05 np0005465604 nova_compute[260603]: 2025-10-02 09:47:05.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:06 np0005465604 nova_compute[260603]: 2025-10-02 09:47:06.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:10 np0005465604 nova_compute[260603]: 2025-10-02 09:47:10.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:11 np0005465604 nova_compute[260603]: 2025-10-02 09:47:11.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:15 np0005465604 nova_compute[260603]: 2025-10-02 09:47:15.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:16 np0005465604 nova_compute[260603]: 2025-10-02 09:47:16.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:17 np0005465604 podman[472271]: 2025-10-02 09:47:17.99052003 +0000 UTC m=+0.055143974 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:47:18 np0005465604 podman[472270]: 2025-10-02 09:47:18.0235115 +0000 UTC m=+0.088177676 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 05:47:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:20 np0005465604 nova_compute[260603]: 2025-10-02 09:47:20.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:21 np0005465604 nova_compute[260603]: 2025-10-02 09:47:21.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:22 np0005465604 podman[472313]: 2025-10-02 09:47:22.02547856 +0000 UTC m=+0.097314240 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 05:47:22 np0005465604 podman[472314]: 2025-10-02 09:47:22.025771199 +0000 UTC m=+0.090069334 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:47:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:47:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4011591992' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:47:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:47:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4011591992' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:47:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:25 np0005465604 nova_compute[260603]: 2025-10-02 09:47:25.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:26 np0005465604 nova_compute[260603]: 2025-10-02 09:47:26.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:47:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:47:28
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'volumes', 'default.rgw.log', '.mgr', 'backups', 'vms', 'cephfs.cephfs.meta']
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:47:28 np0005465604 ceph-mgr[74774]: client.0 ms_handle_reset on v2:192.168.122.100:6800/860957497
Oct  2 05:47:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:30 np0005465604 nova_compute[260603]: 2025-10-02 09:47:30.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:31 np0005465604 nova_compute[260603]: 2025-10-02 09:47:31.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:34 np0005465604 nova_compute[260603]: 2025-10-02 09:47:34.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:47:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:47:34.887 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:47:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:47:34.887 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:47:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:47:34.887 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:47:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:35 np0005465604 nova_compute[260603]: 2025-10-02 09:47:35.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:36 np0005465604 nova_compute[260603]: 2025-10-02 09:47:36.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:37 np0005465604 nova_compute[260603]: 2025-10-02 09:47:37.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:47:37 np0005465604 nova_compute[260603]: 2025-10-02 09:47:37.569 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:47:37 np0005465604 nova_compute[260603]: 2025-10-02 09:47:37.569 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:47:37 np0005465604 nova_compute[260603]: 2025-10-02 09:47:37.570 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:47:37 np0005465604 nova_compute[260603]: 2025-10-02 09:47:37.570 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:47:37 np0005465604 nova_compute[260603]: 2025-10-02 09:47:37.570 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:47:37 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:47:37 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2129779092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:47:37 np0005465604 nova_compute[260603]: 2025-10-02 09:47:37.991 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.140 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.142 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3508MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.142 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.142 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.241 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.241 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.261 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:47:38 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:47:38 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817562172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.699 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.704 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.729 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.731 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:47:38 np0005465604 nova_compute[260603]: 2025-10-02 09:47:38.731 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:47:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:47:40 np0005465604 nova_compute[260603]: 2025-10-02 09:47:40.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:40 np0005465604 nova_compute[260603]: 2025-10-02 09:47:40.731 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:47:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:41 np0005465604 nova_compute[260603]: 2025-10-02 09:47:41.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:47:41 np0005465604 nova_compute[260603]: 2025-10-02 09:47:41.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:42 np0005465604 nova_compute[260603]: 2025-10-02 09:47:42.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:47:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:44 np0005465604 nova_compute[260603]: 2025-10-02 09:47:44.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:47:44 np0005465604 nova_compute[260603]: 2025-10-02 09:47:44.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:47:44 np0005465604 nova_compute[260603]: 2025-10-02 09:47:44.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:47:44 np0005465604 nova_compute[260603]: 2025-10-02 09:47:44.536 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:47:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:45 np0005465604 nova_compute[260603]: 2025-10-02 09:47:45.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:45 np0005465604 nova_compute[260603]: 2025-10-02 09:47:45.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:47:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:46 np0005465604 nova_compute[260603]: 2025-10-02 09:47:46.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:49 np0005465604 podman[472403]: 2025-10-02 09:47:49.052779558 +0000 UTC m=+0.102345837 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 05:47:49 np0005465604 podman[472402]: 2025-10-02 09:47:49.062565594 +0000 UTC m=+0.112022910 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:47:49 np0005465604 nova_compute[260603]: 2025-10-02 09:47:49.515 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:47:50 np0005465604 nova_compute[260603]: 2025-10-02 09:47:50.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:51 np0005465604 nova_compute[260603]: 2025-10-02 09:47:51.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:47:53 np0005465604 podman[472442]: 2025-10-02 09:47:53.013799819 +0000 UTC m=+0.072636599 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 05:47:53 np0005465604 podman[472443]: 2025-10-02 09:47:53.013922733 +0000 UTC m=+0.059657375 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:47:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:55 np0005465604 nova_compute[260603]: 2025-10-02 09:47:55.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:47:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:47:56 np0005465604 nova_compute[260603]: 2025-10-02 09:47:56.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:47:56 np0005465604 nova_compute[260603]: 2025-10-02 09:47:56.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:47:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:47:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:47:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:47:59 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev bef146d0-6eab-4838-906f-f27b1975e443 does not exist
Oct  2 05:47:59 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 3d4301ac-6346-400f-ad9a-95ab869bb585 does not exist
Oct  2 05:47:59 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev ea6c337e-54ad-4dbe-bac2-4623ad2147c4 does not exist
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:47:59 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:48:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:48:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:48:00 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:48:00 np0005465604 podman[472751]: 2025-10-02 09:48:00.152267646 +0000 UTC m=+0.083970964 container create 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:48:00 np0005465604 podman[472751]: 2025-10-02 09:48:00.101686326 +0000 UTC m=+0.033389674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:48:00 np0005465604 systemd[1]: Started libpod-conmon-1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c.scope.
Oct  2 05:48:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:48:00 np0005465604 podman[472751]: 2025-10-02 09:48:00.353689137 +0000 UTC m=+0.285392485 container init 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 05:48:00 np0005465604 podman[472751]: 2025-10-02 09:48:00.369395348 +0000 UTC m=+0.301098696 container start 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  2 05:48:00 np0005465604 great_margulis[472767]: 167 167
Oct  2 05:48:00 np0005465604 systemd[1]: libpod-1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c.scope: Deactivated successfully.
Oct  2 05:48:00 np0005465604 podman[472751]: 2025-10-02 09:48:00.399783427 +0000 UTC m=+0.331486775 container attach 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:48:00 np0005465604 podman[472751]: 2025-10-02 09:48:00.400302763 +0000 UTC m=+0.332006081 container died 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 05:48:00 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2188d06bbc4b548809aafbd373b2931a0aa57a13a2b841206b6aa2f109abcc43-merged.mount: Deactivated successfully.
Oct  2 05:48:00 np0005465604 nova_compute[260603]: 2025-10-02 09:48:00.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:48:00 np0005465604 podman[472751]: 2025-10-02 09:48:00.5996782 +0000 UTC m=+0.531381578 container remove 1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_margulis, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:48:00 np0005465604 systemd[1]: libpod-conmon-1d815c783139454d0fd403d905e39526030bb21a10f8d2ab91a2e36c1351259c.scope: Deactivated successfully.
Oct  2 05:48:00 np0005465604 podman[472791]: 2025-10-02 09:48:00.88047876 +0000 UTC m=+0.076179750 container create 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:48:00 np0005465604 systemd[1]: Started libpod-conmon-7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113.scope.
Oct  2 05:48:00 np0005465604 podman[472791]: 2025-10-02 09:48:00.85003416 +0000 UTC m=+0.045735160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:48:00 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:48:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:00 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:00 np0005465604 podman[472791]: 2025-10-02 09:48:00.985284744 +0000 UTC m=+0.180985774 container init 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:48:01 np0005465604 podman[472791]: 2025-10-02 09:48:01.004167134 +0000 UTC m=+0.199868104 container start 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:48:01 np0005465604 podman[472791]: 2025-10-02 09:48:01.008416047 +0000 UTC m=+0.204117077 container attach 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:48:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:01 np0005465604 nova_compute[260603]: 2025-10-02 09:48:01.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:48:02 np0005465604 brave_agnesi[472808]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:48:02 np0005465604 brave_agnesi[472808]: --> relative data size: 1.0
Oct  2 05:48:02 np0005465604 brave_agnesi[472808]: --> All data devices are unavailable
Oct  2 05:48:02 np0005465604 systemd[1]: libpod-7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113.scope: Deactivated successfully.
Oct  2 05:48:02 np0005465604 systemd[1]: libpod-7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113.scope: Consumed 1.290s CPU time.
Oct  2 05:48:02 np0005465604 podman[472791]: 2025-10-02 09:48:02.343714195 +0000 UTC m=+1.539415175 container died 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct  2 05:48:02 np0005465604 systemd[1]: var-lib-containers-storage-overlay-c123250583e46a789e5a1aae9e29692a4d19c584d1d0097e8f1cb2b5f94b53ca-merged.mount: Deactivated successfully.
Oct  2 05:48:02 np0005465604 podman[472791]: 2025-10-02 09:48:02.484212863 +0000 UTC m=+1.679913843 container remove 7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:48:02 np0005465604 systemd[1]: libpod-conmon-7758e2214e95a0ebf6ccae810e86ae22a9a868df4cbe394c83f5ca0afdb84113.scope: Deactivated successfully.
Oct  2 05:48:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:03 np0005465604 podman[472992]: 2025-10-02 09:48:03.368484592 +0000 UTC m=+0.064244398 container create 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:48:03 np0005465604 systemd[1]: Started libpod-conmon-77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9.scope.
Oct  2 05:48:03 np0005465604 podman[472992]: 2025-10-02 09:48:03.347950131 +0000 UTC m=+0.043709976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:48:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:48:03 np0005465604 podman[472992]: 2025-10-02 09:48:03.483693221 +0000 UTC m=+0.179453076 container init 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 05:48:03 np0005465604 podman[472992]: 2025-10-02 09:48:03.496438809 +0000 UTC m=+0.192198614 container start 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:48:03 np0005465604 podman[472992]: 2025-10-02 09:48:03.501087244 +0000 UTC m=+0.196847079 container attach 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 05:48:03 np0005465604 distracted_feynman[473008]: 167 167
Oct  2 05:48:03 np0005465604 systemd[1]: libpod-77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9.scope: Deactivated successfully.
Oct  2 05:48:03 np0005465604 podman[472992]: 2025-10-02 09:48:03.503712536 +0000 UTC m=+0.199472401 container died 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:48:03 np0005465604 systemd[1]: var-lib-containers-storage-overlay-0a576d1dc5ece41089f34d106f4a33c4957222de04fa77aac44278e2b8f4eaf9-merged.mount: Deactivated successfully.
Oct  2 05:48:03 np0005465604 podman[472992]: 2025-10-02 09:48:03.549259889 +0000 UTC m=+0.245019694 container remove 77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_feynman, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:48:03 np0005465604 systemd[1]: libpod-conmon-77c32e09dcae3d19db428ccf45669492ef6d08d8e5dc61f49b86da3d7bb2cbd9.scope: Deactivated successfully.
Oct  2 05:48:03 np0005465604 podman[473031]: 2025-10-02 09:48:03.755083818 +0000 UTC m=+0.066980654 container create c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:48:03 np0005465604 systemd[1]: Started libpod-conmon-c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb.scope.
Oct  2 05:48:03 np0005465604 podman[473031]: 2025-10-02 09:48:03.724372548 +0000 UTC m=+0.036269454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:48:03 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:48:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4cd58a91756f401b220cfe99aea28ca7445621b4c650bfb005eb591ef13c8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4cd58a91756f401b220cfe99aea28ca7445621b4c650bfb005eb591ef13c8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4cd58a91756f401b220cfe99aea28ca7445621b4c650bfb005eb591ef13c8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:03 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4cd58a91756f401b220cfe99aea28ca7445621b4c650bfb005eb591ef13c8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:03 np0005465604 podman[473031]: 2025-10-02 09:48:03.866366774 +0000 UTC m=+0.178263580 container init c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:48:03 np0005465604 podman[473031]: 2025-10-02 09:48:03.877392518 +0000 UTC m=+0.189289314 container start c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:48:03 np0005465604 podman[473031]: 2025-10-02 09:48:03.881369182 +0000 UTC m=+0.193266008 container attach c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  2 05:48:04 np0005465604 nova_compute[260603]: 2025-10-02 09:48:04.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:48:04 np0005465604 nova_compute[260603]: 2025-10-02 09:48:04.522 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:48:04 np0005465604 crazy_buck[473047]: {
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:    "0": [
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:        {
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "devices": [
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "/dev/loop3"
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            ],
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_name": "ceph_lv0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_size": "21470642176",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "name": "ceph_lv0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "tags": {
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.cluster_name": "ceph",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.crush_device_class": "",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.encrypted": "0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.osd_id": "0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.type": "block",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.vdo": "0"
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            },
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "type": "block",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "vg_name": "ceph_vg0"
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:        }
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:    ],
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:    "1": [
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:        {
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "devices": [
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "/dev/loop4"
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            ],
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_name": "ceph_lv1",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_size": "21470642176",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "name": "ceph_lv1",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "tags": {
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.cluster_name": "ceph",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.crush_device_class": "",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.encrypted": "0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.osd_id": "1",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.type": "block",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.vdo": "0"
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            },
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "type": "block",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "vg_name": "ceph_vg1"
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:        }
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:    ],
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:    "2": [
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:        {
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "devices": [
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "/dev/loop5"
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            ],
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_name": "ceph_lv2",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_size": "21470642176",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "name": "ceph_lv2",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "tags": {
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.cluster_name": "ceph",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.crush_device_class": "",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.encrypted": "0",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.osd_id": "2",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.type": "block",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:                "ceph.vdo": "0"
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            },
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "type": "block",
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:            "vg_name": "ceph_vg2"
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:        }
Oct  2 05:48:04 np0005465604 crazy_buck[473047]:    ]
Oct  2 05:48:04 np0005465604 crazy_buck[473047]: }
Oct  2 05:48:04 np0005465604 systemd[1]: libpod-c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb.scope: Deactivated successfully.
Oct  2 05:48:04 np0005465604 podman[473031]: 2025-10-02 09:48:04.675854437 +0000 UTC m=+0.987751263 container died c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  2 05:48:04 np0005465604 systemd[1]: var-lib-containers-storage-overlay-9b4cd58a91756f401b220cfe99aea28ca7445621b4c650bfb005eb591ef13c8c-merged.mount: Deactivated successfully.
Oct  2 05:48:04 np0005465604 podman[473031]: 2025-10-02 09:48:04.755923349 +0000 UTC m=+1.067820155 container remove c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:48:04 np0005465604 systemd[1]: libpod-conmon-c2640bc6f73c9f3ae0ee5e4aa9504d74fd7a9b842746ac5e813cdd72b4a308eb.scope: Deactivated successfully.
Oct  2 05:48:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:05 np0005465604 nova_compute[260603]: 2025-10-02 09:48:05.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:05 np0005465604 podman[473211]: 2025-10-02 09:48:05.577538971 +0000 UTC m=+0.070921206 container create 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  2 05:48:05 np0005465604 podman[473211]: 2025-10-02 09:48:05.53556162 +0000 UTC m=+0.028943895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:48:05 np0005465604 systemd[1]: Started libpod-conmon-37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d.scope.
Oct  2 05:48:05 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:48:05 np0005465604 podman[473211]: 2025-10-02 09:48:05.731409478 +0000 UTC m=+0.224791753 container init 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  2 05:48:05 np0005465604 podman[473211]: 2025-10-02 09:48:05.741223854 +0000 UTC m=+0.234606079 container start 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 05:48:05 np0005465604 competent_raman[473228]: 167 167
Oct  2 05:48:05 np0005465604 systemd[1]: libpod-37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d.scope: Deactivated successfully.
Oct  2 05:48:05 np0005465604 conmon[473228]: conmon 37824e61506cd773132a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d.scope/container/memory.events
Oct  2 05:48:05 np0005465604 podman[473211]: 2025-10-02 09:48:05.79585216 +0000 UTC m=+0.289234395 container attach 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 05:48:05 np0005465604 podman[473211]: 2025-10-02 09:48:05.797223354 +0000 UTC m=+0.290605579 container died 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:48:05 np0005465604 systemd[1]: var-lib-containers-storage-overlay-b33f29ea783cd0dd4a660ab659b09bbe250b9220120fe3f695051cbd39d34a7a-merged.mount: Deactivated successfully.
Oct  2 05:48:05 np0005465604 podman[473211]: 2025-10-02 09:48:05.981929452 +0000 UTC m=+0.475311677 container remove 37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:48:05 np0005465604 systemd[1]: libpod-conmon-37824e61506cd773132a7cb82fa6406c173549ddff925b10b3fffed20252752d.scope: Deactivated successfully.
Oct  2 05:48:06 np0005465604 podman[473253]: 2025-10-02 09:48:06.165104873 +0000 UTC m=+0.027920462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:48:06 np0005465604 podman[473253]: 2025-10-02 09:48:06.260461082 +0000 UTC m=+0.123276631 container create 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:48:06 np0005465604 systemd[1]: Started libpod-conmon-69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0.scope.
Oct  2 05:48:06 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:48:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb54c7baa339ad651352d834e942d7ed98fd67015d8886972d63b624976c3bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb54c7baa339ad651352d834e942d7ed98fd67015d8886972d63b624976c3bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb54c7baa339ad651352d834e942d7ed98fd67015d8886972d63b624976c3bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:06 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bb54c7baa339ad651352d834e942d7ed98fd67015d8886972d63b624976c3bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:48:06 np0005465604 podman[473253]: 2025-10-02 09:48:06.445348407 +0000 UTC m=+0.308163966 container init 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 05:48:06 np0005465604 podman[473253]: 2025-10-02 09:48:06.459498679 +0000 UTC m=+0.322314258 container start 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:48:06 np0005465604 podman[473253]: 2025-10-02 09:48:06.476187861 +0000 UTC m=+0.339003440 container attach 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 05:48:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:06 np0005465604 nova_compute[260603]: 2025-10-02 09:48:06.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]: {
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "osd_id": 2,
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "type": "bluestore"
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:    },
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "osd_id": 1,
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "type": "bluestore"
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:    },
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "osd_id": 0,
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:        "type": "bluestore"
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]:    }
Oct  2 05:48:07 np0005465604 stupefied_tu[473269]: }
Oct  2 05:48:07 np0005465604 systemd[1]: libpod-69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0.scope: Deactivated successfully.
Oct  2 05:48:07 np0005465604 systemd[1]: libpod-69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0.scope: Consumed 1.037s CPU time.
Oct  2 05:48:07 np0005465604 podman[473253]: 2025-10-02 09:48:07.488907472 +0000 UTC m=+1.351723081 container died 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:48:07 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2bb54c7baa339ad651352d834e942d7ed98fd67015d8886972d63b624976c3bd-merged.mount: Deactivated successfully.
Oct  2 05:48:07 np0005465604 podman[473253]: 2025-10-02 09:48:07.850557679 +0000 UTC m=+1.713373248 container remove 69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_tu, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:48:07 np0005465604 systemd[1]: libpod-conmon-69ed3f95ccb81de1e5aa65be19885ee651f5027cf303e8c40462e3798d53aff0.scope: Deactivated successfully.
Oct  2 05:48:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:48:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:48:07 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:48:07 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:48:07 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev b3fbb71e-f003-42bc-bea6-9ac0847fb52b does not exist
Oct  2 05:48:07 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev edc28543-ccb6-4466-95d1-2cb9814ad39c does not exist
Oct  2 05:48:08 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:48:08 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:48:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:10 np0005465604 nova_compute[260603]: 2025-10-02 09:48:10.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3917: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:11 np0005465604 nova_compute[260603]: 2025-10-02 09:48:11.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3919: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:15 np0005465604 nova_compute[260603]: 2025-10-02 09:48:15.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:16 np0005465604 nova_compute[260603]: 2025-10-02 09:48:16.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3921: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:20 np0005465604 podman[473367]: 2025-10-02 09:48:20.011275503 +0000 UTC m=+0.068695297 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Oct  2 05:48:20 np0005465604 podman[473366]: 2025-10-02 09:48:20.059134658 +0000 UTC m=+0.117063607 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:48:20 np0005465604 nova_compute[260603]: 2025-10-02 09:48:20.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:21 np0005465604 nova_compute[260603]: 2025-10-02 09:48:21.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:48:22 np0005465604 ceph-osd[88314]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 46K writes, 16K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 372 writes, 991 keys, 372 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s#012Interval WAL: 372 writes, 177 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:48:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1970290044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:48:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:48:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1970290044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:48:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3923: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:23 np0005465604 podman[473409]: 2025-10-02 09:48:23.997053778 +0000 UTC m=+0.063631659 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 05:48:24 np0005465604 podman[473410]: 2025-10-02 09:48:24.025665902 +0000 UTC m=+0.091387915 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 05:48:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:25 np0005465604 nova_compute[260603]: 2025-10-02 09:48:25.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:48:26 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 48K writes, 189K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 48K writes, 18K syncs, 2.71 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 334 writes, 766 keys, 334 commit groups, 1.0 writes per commit group, ingest: 0.33 MB, 0.00 MB/s#012Interval WAL: 334 writes, 148 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:48:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:26 np0005465604 nova_compute[260603]: 2025-10-02 09:48:26.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:48:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:48:28
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'backups', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr']
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:48:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:48:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3926: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:30 np0005465604 nova_compute[260603]: 2025-10-02 09:48:30.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:48:31 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 38K writes, 146K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 38K writes, 14K syncs, 2.68 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 458 writes, 1050 keys, 458 commit groups, 1.0 writes per commit group, ingest: 0.46 MB, 0.00 MB/s#012Interval WAL: 458 writes, 208 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:48:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:31 np0005465604 nova_compute[260603]: 2025-10-02 09:48:31.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:32 np0005465604 ceph-mgr[74774]: [devicehealth INFO root] Check health
Oct  2 05:48:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3928: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:48:34.889 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:48:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:48:34.890 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:48:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:48:34.890 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:48:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:35 np0005465604 nova_compute[260603]: 2025-10-02 09:48:35.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:36 np0005465604 nova_compute[260603]: 2025-10-02 09:48:36.522 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:48:36 np0005465604 nova_compute[260603]: 2025-10-02 09:48:36.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3930: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:38 np0005465604 nova_compute[260603]: 2025-10-02 09:48:38.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:48:38 np0005465604 nova_compute[260603]: 2025-10-02 09:48:38.578 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:48:38 np0005465604 nova_compute[260603]: 2025-10-02 09:48:38.579 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:48:38 np0005465604 nova_compute[260603]: 2025-10-02 09:48:38.579 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:48:38 np0005465604 nova_compute[260603]: 2025-10-02 09:48:38.579 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:48:38 np0005465604 nova_compute[260603]: 2025-10-02 09:48:38.580 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:48:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:48:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1752737819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:48:39 np0005465604 nova_compute[260603]: 2025-10-02 09:48:39.051 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:39 np0005465604 nova_compute[260603]: 2025-10-02 09:48:39.207 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:48:39 np0005465604 nova_compute[260603]: 2025-10-02 09:48:39.208 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3498MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:48:39 np0005465604 nova_compute[260603]: 2025-10-02 09:48:39.208 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:48:39 np0005465604 nova_compute[260603]: 2025-10-02 09:48:39.209 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:48:39 np0005465604 nova_compute[260603]: 2025-10-02 09:48:39.305 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:48:39 np0005465604 nova_compute[260603]: 2025-10-02 09:48:39.305 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:48:39 np0005465604 nova_compute[260603]: 2025-10-02 09:48:39.496 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:48:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:48:39 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:48:39 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171908827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:48:39 np0005465604 nova_compute[260603]: 2025-10-02 09:48:39.913 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:48:39 np0005465604 nova_compute[260603]: 2025-10-02 09:48:39.917 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:48:40 np0005465604 nova_compute[260603]: 2025-10-02 09:48:40.003 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:48:40 np0005465604 nova_compute[260603]: 2025-10-02 09:48:40.005 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:48:40 np0005465604 nova_compute[260603]: 2025-10-02 09:48:40.005 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:48:40 np0005465604 nova_compute[260603]: 2025-10-02 09:48:40.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:41 np0005465604 nova_compute[260603]: 2025-10-02 09:48:41.006 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:48:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:41 np0005465604 nova_compute[260603]: 2025-10-02 09:48:41.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:42 np0005465604 nova_compute[260603]: 2025-10-02 09:48:42.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:48:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.313561) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523313630, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1736, "num_deletes": 251, "total_data_size": 2803502, "memory_usage": 2848080, "flush_reason": "Manual Compaction"}
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523342180, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 2754286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80201, "largest_seqno": 81936, "table_properties": {"data_size": 2746302, "index_size": 4862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16211, "raw_average_key_size": 19, "raw_value_size": 2730342, "raw_average_value_size": 3362, "num_data_blocks": 217, "num_entries": 812, "num_filter_entries": 812, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759398337, "oldest_key_time": 1759398337, "file_creation_time": 1759398523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 28653 microseconds, and 6072 cpu microseconds.
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.342220) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 2754286 bytes OK
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.342242) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.353604) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.353626) EVENT_LOG_v1 {"time_micros": 1759398523353620, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.353646) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 2796091, prev total WAL file size 2796091, number of live WAL files 2.
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.354430) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(2689KB)], [194(9435KB)]
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523354482, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 12416336, "oldest_snapshot_seqno": -1}
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 9600 keys, 10666603 bytes, temperature: kUnknown
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523424848, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 10666603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10607232, "index_size": 34298, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24005, "raw_key_size": 253472, "raw_average_key_size": 26, "raw_value_size": 10440613, "raw_average_value_size": 1087, "num_data_blocks": 1315, "num_entries": 9600, "num_filter_entries": 9600, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759391198, "oldest_key_time": 0, "file_creation_time": 1759398523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b07c9b1-a6c7-45d0-9522-b015946345f4", "db_session_id": "E5Q3H049J9TEXP7LLR7P", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.425201) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10666603 bytes
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.438831) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.2 rd, 151.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 10117, records dropped: 517 output_compression: NoCompression
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.438876) EVENT_LOG_v1 {"time_micros": 1759398523438859, "job": 122, "event": "compaction_finished", "compaction_time_micros": 70464, "compaction_time_cpu_micros": 24412, "output_level": 6, "num_output_files": 1, "total_output_size": 10666603, "num_input_records": 10117, "num_output_records": 9600, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523439950, "job": 122, "event": "table_file_deletion", "file_number": 196}
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759398523443392, "job": 122, "event": "table_file_deletion", "file_number": 194}
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.354323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.443444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.443450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.443452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.443454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:48:43 np0005465604 ceph-mon[74477]: rocksdb: (Original Log Time 2025/10/02-09:48:43.443457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 05:48:43 np0005465604 nova_compute[260603]: 2025-10-02 09:48:43.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:48:44 np0005465604 nova_compute[260603]: 2025-10-02 09:48:44.520 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:48:44 np0005465604 nova_compute[260603]: 2025-10-02 09:48:44.520 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:48:44 np0005465604 nova_compute[260603]: 2025-10-02 09:48:44.521 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:48:44 np0005465604 nova_compute[260603]: 2025-10-02 09:48:44.598 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:48:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3934: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:45 np0005465604 nova_compute[260603]: 2025-10-02 09:48:45.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:48:45 np0005465604 nova_compute[260603]: 2025-10-02 09:48:45.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:46 np0005465604 nova_compute[260603]: 2025-10-02 09:48:46.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:50 np0005465604 nova_compute[260603]: 2025-10-02 09:48:50.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:51 np0005465604 podman[473496]: 2025-10-02 09:48:51.039338408 +0000 UTC m=+0.078155472 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 05:48:51 np0005465604 podman[473495]: 2025-10-02 09:48:51.058028482 +0000 UTC m=+0.108175589 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 05:48:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:51 np0005465604 nova_compute[260603]: 2025-10-02 09:48:51.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:54 np0005465604 podman[473540]: 2025-10-02 09:48:54.991962048 +0000 UTC m=+0.060236223 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 05:48:54 np0005465604 podman[473541]: 2025-10-02 09:48:54.99777353 +0000 UTC m=+0.062271126 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  2 05:48:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:55 np0005465604 nova_compute[260603]: 2025-10-02 09:48:55.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:48:56 np0005465604 nova_compute[260603]: 2025-10-02 09:48:56.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:48:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:48:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:48:58 np0005465604 nova_compute[260603]: 2025-10-02 09:48:58.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:48:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:00 np0005465604 nova_compute[260603]: 2025-10-02 09:49:00.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:01 np0005465604 nova_compute[260603]: 2025-10-02 09:49:01.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:05 np0005465604 nova_compute[260603]: 2025-10-02 09:49:05.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:06 np0005465604 nova_compute[260603]: 2025-10-02 09:49:06.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 05:49:06 np0005465604 nova_compute[260603]: 2025-10-02 09:49:06.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 05:49:06 np0005465604 nova_compute[260603]: 2025-10-02 09:49:06.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:49:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:49:09 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 7fb107ee-b12b-4b79-903a-a64dcf5eb7b6 does not exist
Oct  2 05:49:09 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev e44f36cb-d42d-43e9-8841-98a6917eb105 does not exist
Oct  2 05:49:09 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8bdc2f25-1820-4233-9e0a-1e1a85ebb01e does not exist
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:49:09 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:49:09 np0005465604 podman[473967]: 2025-10-02 09:49:09.953094492 +0000 UTC m=+0.085390168 container create c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  2 05:49:09 np0005465604 podman[473967]: 2025-10-02 09:49:09.893348566 +0000 UTC m=+0.025644272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:49:10 np0005465604 systemd[1]: Started libpod-conmon-c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457.scope.
Oct  2 05:49:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:49:10 np0005465604 podman[473967]: 2025-10-02 09:49:10.095546661 +0000 UTC m=+0.227842437 container init c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:49:10 np0005465604 podman[473967]: 2025-10-02 09:49:10.104819601 +0000 UTC m=+0.237115307 container start c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 05:49:10 np0005465604 mystifying_spence[473983]: 167 167
Oct  2 05:49:10 np0005465604 systemd[1]: libpod-c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457.scope: Deactivated successfully.
Oct  2 05:49:10 np0005465604 podman[473967]: 2025-10-02 09:49:10.14771262 +0000 UTC m=+0.280008306 container attach c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:49:10 np0005465604 podman[473967]: 2025-10-02 09:49:10.149208708 +0000 UTC m=+0.281504384 container died c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 05:49:10 np0005465604 systemd[1]: var-lib-containers-storage-overlay-8eb2623a03257046f1d280d4b69f4a9d8deb521e2985043e6543c23ae6225916-merged.mount: Deactivated successfully.
Oct  2 05:49:10 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:49:10 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:49:10 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:49:10 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:49:10 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:49:10 np0005465604 podman[473967]: 2025-10-02 09:49:10.411850071 +0000 UTC m=+0.544145777 container remove c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_spence, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:49:10 np0005465604 systemd[1]: libpod-conmon-c48df15ef50fade011b9502f0c1e46f6bfe750b247aa9696867e7bfce8dba457.scope: Deactivated successfully.
Oct  2 05:49:10 np0005465604 nova_compute[260603]: 2025-10-02 09:49:10.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:49:10 np0005465604 podman[474008]: 2025-10-02 09:49:10.675574578 +0000 UTC m=+0.106937850 container create 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:49:10 np0005465604 podman[474008]: 2025-10-02 09:49:10.597309544 +0000 UTC m=+0.028672826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:49:10 np0005465604 systemd[1]: Started libpod-conmon-6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6.scope.
Oct  2 05:49:10 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:49:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:10 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:10 np0005465604 podman[474008]: 2025-10-02 09:49:10.865060017 +0000 UTC m=+0.296423339 container init 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 05:49:10 np0005465604 podman[474008]: 2025-10-02 09:49:10.879670383 +0000 UTC m=+0.311033665 container start 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  2 05:49:11 np0005465604 podman[474008]: 2025-10-02 09:49:11.006194535 +0000 UTC m=+0.437557807 container attach 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 05:49:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:11 np0005465604 nova_compute[260603]: 2025-10-02 09:49:11.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:49:12 np0005465604 trusting_golick[474025]: --> passed data devices: 0 physical, 3 LVM
Oct  2 05:49:12 np0005465604 trusting_golick[474025]: --> relative data size: 1.0
Oct  2 05:49:12 np0005465604 trusting_golick[474025]: --> All data devices are unavailable
Oct  2 05:49:12 np0005465604 systemd[1]: libpod-6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6.scope: Deactivated successfully.
Oct  2 05:49:12 np0005465604 systemd[1]: libpod-6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6.scope: Consumed 1.265s CPU time.
Oct  2 05:49:12 np0005465604 podman[474054]: 2025-10-02 09:49:12.233707016 +0000 UTC m=+0.031708051 container died 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 05:49:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:13 np0005465604 systemd[1]: var-lib-containers-storage-overlay-be57a2f7de865981bac0172d8e0dc40eae27061656c5f487dd3469b02db85ccb-merged.mount: Deactivated successfully.
Oct  2 05:49:13 np0005465604 podman[474054]: 2025-10-02 09:49:13.641369454 +0000 UTC m=+1.439370519 container remove 6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:49:13 np0005465604 systemd[1]: libpod-conmon-6ca301bebc83ad350287544b10bb43ba7aeff879b4b26da57fec48ef41b45de6.scope: Deactivated successfully.
Oct  2 05:49:14 np0005465604 podman[474210]: 2025-10-02 09:49:14.412314284 +0000 UTC m=+0.026456187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:49:14 np0005465604 podman[474210]: 2025-10-02 09:49:14.557824149 +0000 UTC m=+0.171966062 container create 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 05:49:14 np0005465604 systemd[1]: Started libpod-conmon-50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90.scope.
Oct  2 05:49:14 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:49:14 np0005465604 podman[474210]: 2025-10-02 09:49:14.948594425 +0000 UTC m=+0.562736388 container init 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 05:49:14 np0005465604 podman[474210]: 2025-10-02 09:49:14.96061684 +0000 UTC m=+0.574758713 container start 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:49:14 np0005465604 hardcore_feistel[474226]: 167 167
Oct  2 05:49:14 np0005465604 systemd[1]: libpod-50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90.scope: Deactivated successfully.
Oct  2 05:49:15 np0005465604 podman[474210]: 2025-10-02 09:49:15.056238427 +0000 UTC m=+0.670380290 container attach 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:49:15 np0005465604 podman[474210]: 2025-10-02 09:49:15.057136805 +0000 UTC m=+0.671278678 container died 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  2 05:49:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:15 np0005465604 nova_compute[260603]: 2025-10-02 09:49:15.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:49:15 np0005465604 systemd[1]: var-lib-containers-storage-overlay-82a441db44e138c0509b84c9ce5689a80b06a5c703a06cceedb37eac6d7c6a60-merged.mount: Deactivated successfully.
Oct  2 05:49:16 np0005465604 podman[474210]: 2025-10-02 09:49:16.479948186 +0000 UTC m=+2.094090099 container remove 50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  2 05:49:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:16 np0005465604 systemd[1]: libpod-conmon-50ed064f7e5e7479810872419094b09e0c5e673eb7131cf90ad677de47b44b90.scope: Deactivated successfully.
Oct  2 05:49:16 np0005465604 nova_compute[260603]: 2025-10-02 09:49:16.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 05:49:16 np0005465604 podman[474252]: 2025-10-02 09:49:16.70129841 +0000 UTC m=+0.028983187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:49:16 np0005465604 podman[474252]: 2025-10-02 09:49:16.831124874 +0000 UTC m=+0.158809621 container create 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:49:17 np0005465604 systemd[1]: Started libpod-conmon-25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30.scope.
Oct  2 05:49:17 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:49:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c45f564b7e331489a88f000f378562dd23d2ce619fc649a78cf5d1a6e014d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c45f564b7e331489a88f000f378562dd23d2ce619fc649a78cf5d1a6e014d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c45f564b7e331489a88f000f378562dd23d2ce619fc649a78cf5d1a6e014d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:17 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c45f564b7e331489a88f000f378562dd23d2ce619fc649a78cf5d1a6e014d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:17 np0005465604 podman[474252]: 2025-10-02 09:49:17.436939257 +0000 UTC m=+0.764623974 container init 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 05:49:17 np0005465604 podman[474252]: 2025-10-02 09:49:17.444655548 +0000 UTC m=+0.772340265 container start 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:49:17 np0005465604 podman[474252]: 2025-10-02 09:49:17.693061277 +0000 UTC m=+1.020746034 container attach 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 05:49:18 np0005465604 kind_feynman[474268]: {
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:    "0": [
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:        {
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "devices": [
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "/dev/loop3"
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            ],
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_name": "ceph_lv0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_size": "21470642176",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f14c1e76-9e34-46aa-9e3c-f0bae5378cc0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "name": "ceph_lv0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "tags": {
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.block_uuid": "1QoGnJ-kEhj-UcTX-vXLl-WqFG-5xsg-2sitGv",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.cluster_name": "ceph",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.crush_device_class": "",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.encrypted": "0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.osd_fsid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.osd_id": "0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.type": "block",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.vdo": "0"
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            },
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "type": "block",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "vg_name": "ceph_vg0"
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:        }
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:    ],
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:    "1": [
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:        {
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "devices": [
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "/dev/loop4"
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            ],
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_name": "ceph_lv1",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_size": "21470642176",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8ecdfa53-c8d8-401c-b12f-ba8d091f39fe,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "name": "ceph_lv1",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "tags": {
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.block_uuid": "F8hcVm-sKG6-0jax-jdCa-ziO5-4kDF-GqivDL",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.cluster_name": "ceph",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.crush_device_class": "",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.encrypted": "0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.osd_fsid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.osd_id": "1",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.type": "block",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.vdo": "0"
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            },
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "type": "block",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "vg_name": "ceph_vg1"
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:        }
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:    ],
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:    "2": [
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:        {
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "devices": [
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "/dev/loop5"
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            ],
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_name": "ceph_lv2",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_size": "21470642176",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a52e644f-f702-594c-a648-813e3e0df2b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8e617210-aec3-4316-bc5c-58c501c21dd7,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "lv_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "name": "ceph_lv2",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "tags": {
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.block_uuid": "JeLBPc-e5Vt-bWyx-LOs4-uWig-GFsT-F5RA9F",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.cephx_lockbox_secret": "",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.cluster_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.cluster_name": "ceph",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.crush_device_class": "",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.encrypted": "0",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.osd_fsid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.osd_id": "2",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.type": "block",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:                "ceph.vdo": "0"
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            },
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "type": "block",
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:            "vg_name": "ceph_vg2"
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:        }
Oct  2 05:49:18 np0005465604 kind_feynman[474268]:    ]
Oct  2 05:49:18 np0005465604 kind_feynman[474268]: }
Oct  2 05:49:18 np0005465604 systemd[1]: libpod-25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30.scope: Deactivated successfully.
Oct  2 05:49:18 np0005465604 podman[474252]: 2025-10-02 09:49:18.274078185 +0000 UTC m=+1.601762952 container died 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 05:49:18 np0005465604 systemd[1]: var-lib-containers-storage-overlay-45c45f564b7e331489a88f000f378562dd23d2ce619fc649a78cf5d1a6e014d3-merged.mount: Deactivated successfully.
Oct  2 05:49:18 np0005465604 podman[474252]: 2025-10-02 09:49:18.890838119 +0000 UTC m=+2.218522866 container remove 25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_feynman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 05:49:18 np0005465604 systemd[1]: libpod-conmon-25d81dc5e68fc836b174b3724ca1657fce8b41ea98d6ab901481b9901b709b30.scope: Deactivated successfully.
Oct  2 05:49:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:19 np0005465604 podman[474429]: 2025-10-02 09:49:19.72551149 +0000 UTC m=+0.042357785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:49:19 np0005465604 podman[474429]: 2025-10-02 09:49:19.858795553 +0000 UTC m=+0.175641828 container create 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 05:49:19 np0005465604 systemd[1]: Started libpod-conmon-2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7.scope.
Oct  2 05:49:20 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:49:20 np0005465604 podman[474429]: 2025-10-02 09:49:20.158962169 +0000 UTC m=+0.475808494 container init 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 05:49:20 np0005465604 podman[474429]: 2025-10-02 09:49:20.166268436 +0000 UTC m=+0.483114751 container start 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:49:20 np0005465604 systemd[1]: libpod-2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7.scope: Deactivated successfully.
Oct  2 05:49:20 np0005465604 happy_mendeleev[474446]: 167 167
Oct  2 05:49:20 np0005465604 conmon[474446]: conmon 2fd480bb586bba482809 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7.scope/container/memory.events
Oct  2 05:49:20 np0005465604 podman[474429]: 2025-10-02 09:49:20.645525076 +0000 UTC m=+0.962371361 container attach 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:49:20 np0005465604 podman[474429]: 2025-10-02 09:49:20.646802505 +0000 UTC m=+0.963648820 container died 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 05:49:20 np0005465604 nova_compute[260603]: 2025-10-02 09:49:20.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:21 np0005465604 nova_compute[260603]: 2025-10-02 09:49:21.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:21 np0005465604 systemd[1]: var-lib-containers-storage-overlay-1e1a7946a768281e447ca96f98c7ad79900033e60e9f3996f3a417dd73c9c001-merged.mount: Deactivated successfully.
Oct  2 05:49:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:49:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1775118086' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:49:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:49:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1775118086' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:49:22 np0005465604 podman[474429]: 2025-10-02 09:49:22.305076681 +0000 UTC m=+2.621922996 container remove 2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct  2 05:49:22 np0005465604 systemd[1]: libpod-conmon-2fd480bb586bba482809cf9720d9c384b706bdc98d8a2fd34c935d4c729b7cf7.scope: Deactivated successfully.
Oct  2 05:49:22 np0005465604 podman[474463]: 2025-10-02 09:49:22.402203115 +0000 UTC m=+0.820834460 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 05:49:22 np0005465604 podman[474462]: 2025-10-02 09:49:22.429522378 +0000 UTC m=+0.852647923 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 05:49:22 np0005465604 podman[474517]: 2025-10-02 09:49:22.528817979 +0000 UTC m=+0.033650822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:49:23 np0005465604 podman[474517]: 2025-10-02 09:49:23.043028889 +0000 UTC m=+0.547861712 container create 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 05:49:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:23 np0005465604 systemd[1]: Started libpod-conmon-69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c.scope.
Oct  2 05:49:23 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:49:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187f31de43cab2229b62fc045b8b80ad1a188eb253fcec6042cc8f6326039f5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187f31de43cab2229b62fc045b8b80ad1a188eb253fcec6042cc8f6326039f5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187f31de43cab2229b62fc045b8b80ad1a188eb253fcec6042cc8f6326039f5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:23 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187f31de43cab2229b62fc045b8b80ad1a188eb253fcec6042cc8f6326039f5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:49:24 np0005465604 podman[474517]: 2025-10-02 09:49:24.026673295 +0000 UTC m=+1.531506168 container init 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 05:49:24 np0005465604 podman[474517]: 2025-10-02 09:49:24.043108858 +0000 UTC m=+1.547941631 container start 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 05:49:24 np0005465604 podman[474517]: 2025-10-02 09:49:24.229630463 +0000 UTC m=+1.734463276 container attach 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 05:49:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]: {
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:    "8e617210-aec3-4316-bc5c-58c501c21dd7": {
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "osd_id": 2,
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "osd_uuid": "8e617210-aec3-4316-bc5c-58c501c21dd7",
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "type": "bluestore"
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:    },
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:    "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe": {
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "osd_id": 1,
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "osd_uuid": "8ecdfa53-c8d8-401c-b12f-ba8d091f39fe",
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "type": "bluestore"
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:    },
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:    "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0": {
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "ceph_fsid": "a52e644f-f702-594c-a648-813e3e0df2b1",
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "osd_id": 0,
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "osd_uuid": "f14c1e76-9e34-46aa-9e3c-f0bae5378cc0",
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:        "type": "bluestore"
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]:    }
Oct  2 05:49:25 np0005465604 vigilant_brattain[474534]: }
Oct  2 05:49:25 np0005465604 systemd[1]: libpod-69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c.scope: Deactivated successfully.
Oct  2 05:49:25 np0005465604 podman[474517]: 2025-10-02 09:49:25.151628352 +0000 UTC m=+2.656461175 container died 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  2 05:49:25 np0005465604 systemd[1]: libpod-69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c.scope: Consumed 1.116s CPU time.
Oct  2 05:49:25 np0005465604 systemd[1]: var-lib-containers-storage-overlay-187f31de43cab2229b62fc045b8b80ad1a188eb253fcec6042cc8f6326039f5e-merged.mount: Deactivated successfully.
Oct  2 05:49:25 np0005465604 nova_compute[260603]: 2025-10-02 09:49:25.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:26 np0005465604 podman[474517]: 2025-10-02 09:49:25.999728473 +0000 UTC m=+3.504561286 container remove 69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  2 05:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:49:26 np0005465604 systemd[1]: libpod-conmon-69384fce5272b660e80c278c1856c700e4c750af338b54793cbdcbc082f8d02c.scope: Deactivated successfully.
Oct  2 05:49:26 np0005465604 podman[474574]: 2025-10-02 09:49:26.070923656 +0000 UTC m=+0.858996512 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 05:49:26 np0005465604 podman[474568]: 2025-10-02 09:49:26.109264403 +0000 UTC m=+0.910745828 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 05:49:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:49:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:49:26 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 53500d9a-d2fd-4c35-9da8-5929d455698c does not exist
Oct  2 05:49:26 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 02feecb4-032a-4c99-9aa2-7317029eac32 does not exist
Oct  2 05:49:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:26 np0005465604 nova_compute[260603]: 2025-10-02 09:49:26.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:26 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:49:26 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:49:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:49:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:49:28
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'backups', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data']
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:49:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:49:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Oct  2 05:49:30 np0005465604 nova_compute[260603]: 2025-10-02 09:49:30.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 305 active+clean; 457 KiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Oct  2 05:49:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:31 np0005465604 nova_compute[260603]: 2025-10-02 09:49:31.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct  2 05:49:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:49:34.890 162357 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:49:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:49:34.891 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:49:34 np0005465604 ovn_metadata_agent[162328]: 2025-10-02 09:49:34.891 162357 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:49:35 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct  2 05:49:35 np0005465604 nova_compute[260603]: 2025-10-02 09:49:35.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:36 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:36 np0005465604 nova_compute[260603]: 2025-10-02 09:49:36.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:49:36 np0005465604 nova_compute[260603]: 2025-10-02 09:49:36.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:37 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3961: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] _maybe_adjust
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  2 05:49:39 np0005465604 ceph-mgr[74774]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  2 05:49:40 np0005465604 nova_compute[260603]: 2025-10-02 09:49:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:49:40 np0005465604 nova_compute[260603]: 2025-10-02 09:49:40.519 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:49:40 np0005465604 nova_compute[260603]: 2025-10-02 09:49:40.591 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:49:40 np0005465604 nova_compute[260603]: 2025-10-02 09:49:40.592 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:49:40 np0005465604 nova_compute[260603]: 2025-10-02 09:49:40.592 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:49:40 np0005465604 nova_compute[260603]: 2025-10-02 09:49:40.592 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 05:49:40 np0005465604 nova_compute[260603]: 2025-10-02 09:49:40.592 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:49:40 np0005465604 nova_compute[260603]: 2025-10-02 09:49:40.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:49:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456584274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.082 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:49:41 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.241 2 WARNING nova.virt.libvirt.driver [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.242 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3493MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.242 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.243 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.313 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.313 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.335 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 05:49:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:41 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 05:49:41 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778361680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.755 2 DEBUG oslo_concurrency.processutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.760 2 DEBUG nova.compute.provider_tree [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed in ProviderTree for provider: 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.783 2 DEBUG nova.scheduler.client.report [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Inventory has not changed for provider 1a696fbb-1cdc-4e1f-9b1a-1353b42d4f27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.785 2 DEBUG nova.compute.resource_tracker [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 05:49:41 np0005465604 nova_compute[260603]: 2025-10-02 09:49:41.785 2 DEBUG oslo_concurrency.lockutils [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 05:49:43 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Oct  2 05:49:44 np0005465604 nova_compute[260603]: 2025-10-02 09:49:44.781 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:49:44 np0005465604 nova_compute[260603]: 2025-10-02 09:49:44.781 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:49:45 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Oct  2 05:49:45 np0005465604 nova_compute[260603]: 2025-10-02 09:49:45.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:49:45 np0005465604 nova_compute[260603]: 2025-10-02 09:49:45.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 05:49:45 np0005465604 nova_compute[260603]: 2025-10-02 09:49:45.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 05:49:45 np0005465604 nova_compute[260603]: 2025-10-02 09:49:45.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 05:49:45 np0005465604 nova_compute[260603]: 2025-10-02 09:49:45.539 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:49:45 np0005465604 nova_compute[260603]: 2025-10-02 09:49:45.539 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 05:49:45 np0005465604 nova_compute[260603]: 2025-10-02 09:49:45.561 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 05:49:45 np0005465604 nova_compute[260603]: 2025-10-02 09:49:45.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:46 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:46 np0005465604 nova_compute[260603]: 2025-10-02 09:49:46.540 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:49:46 np0005465604 nova_compute[260603]: 2025-10-02 09:49:46.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:47 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Oct  2 05:49:49 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Oct  2 05:49:49 np0005465604 nova_compute[260603]: 2025-10-02 09:49:49.514 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:49:50 np0005465604 nova_compute[260603]: 2025-10-02 09:49:50.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:51 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3967: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:51 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:51 np0005465604 nova_compute[260603]: 2025-10-02 09:49:51.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:53 np0005465604 podman[474718]: 2025-10-02 09:49:53.0230004 +0000 UTC m=+0.078017267 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 05:49:53 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:53 np0005465604 podman[474717]: 2025-10-02 09:49:53.117995177 +0000 UTC m=+0.176603807 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 05:49:55 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3969: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:55 np0005465604 nova_compute[260603]: 2025-10-02 09:49:55.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:56 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:49:56 np0005465604 nova_compute[260603]: 2025-10-02 09:49:56.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:49:57 np0005465604 podman[474760]: 2025-10-02 09:49:57.011595562 +0000 UTC m=+0.075196390 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  2 05:49:57 np0005465604 podman[474761]: 2025-10-02 09:49:57.029738648 +0000 UTC m=+0.079187414 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid)
Oct  2 05:49:57 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:49:57 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:49:59 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:00 np0005465604 nova_compute[260603]: 2025-10-02 09:50:00.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:50:00 np0005465604 nova_compute[260603]: 2025-10-02 09:50:00.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:01 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:01 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:50:01 np0005465604 nova_compute[260603]: 2025-10-02 09:50:01.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:03 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3973: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:05 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:05 np0005465604 nova_compute[260603]: 2025-10-02 09:50:05.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:06 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:50:06 np0005465604 nova_compute[260603]: 2025-10-02 09:50:06.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:07 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:08 np0005465604 nova_compute[260603]: 2025-10-02 09:50:08.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:50:08 np0005465604 nova_compute[260603]: 2025-10-02 09:50:08.519 2 DEBUG nova.compute.manager [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 05:50:09 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:10 np0005465604 nova_compute[260603]: 2025-10-02 09:50:10.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:11 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:11 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:50:11 np0005465604 nova_compute[260603]: 2025-10-02 09:50:11.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:12 np0005465604 nova_compute[260603]: 2025-10-02 09:50:12.518 2 DEBUG oslo_service.periodic_task [None req-d99081fa-37ca-43c8-8b87-6377be5aec93 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 05:50:13 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:13 np0005465604 systemd-logind[787]: New session 60 of user zuul.
Oct  2 05:50:13 np0005465604 systemd[1]: Started Session 60 of User zuul.
Oct  2 05:50:15 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:15 np0005465604 nova_compute[260603]: 2025-10-02 09:50:15.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:16 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23423 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:16 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:50:16 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23425 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:16 np0005465604 nova_compute[260603]: 2025-10-02 09:50:16.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:17 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 05:50:17 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2224406694' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 05:50:17 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:19 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:19 np0005465604 ovs-vsctl[475086]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  2 05:50:20 np0005465604 nova_compute[260603]: 2025-10-02 09:50:20.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:20 np0005465604 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  2 05:50:20 np0005465604 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  2 05:50:20 np0005465604 virtqemud[260328]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  2 05:50:21 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:21 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:50:21 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: cache status {prefix=cache status} (starting...)
Oct  2 05:50:21 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: client ls {prefix=client ls} (starting...)
Oct  2 05:50:21 np0005465604 nova_compute[260603]: 2025-10-02 09:50:21.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:21 np0005465604 lvm[475426]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  2 05:50:21 np0005465604 lvm[475425]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 05:50:21 np0005465604 lvm[475425]: VG ceph_vg0 finished
Oct  2 05:50:21 np0005465604 lvm[475426]: VG ceph_vg1 finished
Oct  2 05:50:22 np0005465604 lvm[475466]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  2 05:50:22 np0005465604 lvm[475466]: VG ceph_vg2 finished
Oct  2 05:50:22 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23429 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 05:50:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/227274942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 05:50:22 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 05:50:22 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/227274942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 05:50:22 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: damage ls {prefix=damage ls} (starting...)
Oct  2 05:50:22 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump loads {prefix=dump loads} (starting...)
Oct  2 05:50:22 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23435 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:22 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  2 05:50:22 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  2 05:50:22 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  2 05:50:23 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  2 05:50:23 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  2 05:50:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1711645371' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  2 05:50:23 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  2 05:50:23 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23441 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:23 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:50:23.445+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  2 05:50:23 np0005465604 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  2 05:50:23 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  2 05:50:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:50:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2517926739' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:50:23 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: ops {prefix=ops} (starting...)
Oct  2 05:50:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  2 05:50:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541629165' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  2 05:50:23 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  2 05:50:23 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268134731' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  2 05:50:23 np0005465604 podman[475714]: 2025-10-02 09:50:23.998043323 +0000 UTC m=+0.058766317 container health_status fa2f0ee9a66d5344f4b0782ae6dd6018c15fd96f8a4484adc5bee1bde6833838 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c669f661c7b636aa2018e5e98d826141cf1626e32230a8985e4cfb6a55e6cb3e', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:50:24 np0005465604 podman[475713]: 2025-10-02 09:50:24.036539386 +0000 UTC m=+0.097494527 container health_status 034cf4171a5d3cb2dc2e5f641814374af9817e13cc2566d39754f444024212fa (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:a2ac7724a8631b0e30a36b14bfd96711af16439a740cea5fde1405253e9798a0', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 05:50:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  2 05:50:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3906598908' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  2 05:50:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 05:50:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604482718' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 05:50:24 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: session ls {prefix=session ls} (starting...)
Oct  2 05:50:24 np0005465604 ceph-mds[100739]: mds.cephfs.compute-0.mjmqka asok_command: status {prefix=status} (starting...)
Oct  2 05:50:24 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23453 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:24 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 05:50:24 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1092721279' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 05:50:25 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:25 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23457 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 05:50:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704548908' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 05:50:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  2 05:50:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2254031404' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  2 05:50:25 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 05:50:25 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3214909565' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 05:50:25 np0005465604 nova_compute[260603]: 2025-10-02 09:50:25.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  2 05:50:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191865689' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  2 05:50:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  2 05:50:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925953199' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  2 05:50:26 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23469 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:26 np0005465604 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  2 05:50:26 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:50:26.498+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  2 05:50:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:50:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 05:50:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1730364769' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 05:50:26 np0005465604 nova_compute[260603]: 2025-10-02 09:50:26.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:26 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  2 05:50:26 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968457732' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  2 05:50:27 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23475 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:27 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:27 np0005465604 podman[476351]: 2025-10-02 09:50:27.408208538 +0000 UTC m=+0.070715930 container health_status f6465b908c5c7b148e0f2e928593cd24c5bebd286f03838a5b680841a43f48b9 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:ccd28e86a082f716c528dd19339b20b089babe706b9221d03a483bf5e38eaa85', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 05:50:27 np0005465604 podman[476346]: 2025-10-02 09:50:27.408716143 +0000 UTC m=+0.073074083 container health_status 08d230f76cdb1fab7318076237c0c8d46feb25f4759f2447b623d1ed5a6e5480 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:e2165a561d61825f8c0fb2c0514e8e828f53fa25fe921e79bb0e4b440b10b136', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct  2 05:50:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  2 05:50:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606793835' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  2 05:50:27 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23479 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:27 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23481 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:27 np0005465604 podman[476654]: 2025-10-02 09:50:27.903011093 +0000 UTC m=+0.042987084 container create e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:50:27 np0005465604 systemd[1]: Started libpod-conmon-e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0.scope.
Oct  2 05:50:27 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 05:50:27 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2581317241' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 05:50:27 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:50:27 np0005465604 podman[476654]: 2025-10-02 09:50:27.880937753 +0000 UTC m=+0.020913744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:50:27 np0005465604 podman[476654]: 2025-10-02 09:50:27.98840024 +0000 UTC m=+0.128376241 container init e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  2 05:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:50:27 np0005465604 podman[476654]: 2025-10-02 09:50:27.996235895 +0000 UTC m=+0.136211886 container start e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 05:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] scanning for idle connections..
Oct  2 05:50:27 np0005465604 ceph-mgr[74774]: [volumes INFO mgr_util] cleaning up connections: []
Oct  2 05:50:28 np0005465604 podman[476654]: 2025-10-02 09:50:28.000367224 +0000 UTC m=+0.140343235 container attach e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 05:50:28 np0005465604 reverent_noyce[476693]: 167 167
Oct  2 05:50:28 np0005465604 systemd[1]: libpod-e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0.scope: Deactivated successfully.
Oct  2 05:50:28 np0005465604 podman[476654]: 2025-10-02 09:50:28.003010116 +0000 UTC m=+0.142986107 container died e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 05:50:28 np0005465604 systemd[1]: var-lib-containers-storage-overlay-cd21132eff86ccb978386cd9ffe2e95c40ba3c77cff38fb09b2c1b3b765603ad-merged.mount: Deactivated successfully.
Oct  2 05:50:28 np0005465604 podman[476654]: 2025-10-02 09:50:28.046288588 +0000 UTC m=+0.186264579 container remove e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  2 05:50:28 np0005465604 systemd[1]: libpod-conmon-e7f0a53717edf7a6ca940a536c9e3fc6721a70bb6199e45312d75a09d005c0a0.scope: Deactivated successfully.
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64900400 session 0x562c62733a40
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Optimize plan auto_2025-10-02_09:50:28
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] do_upmap
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.control', 'vms', 'default.rgw.meta']
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [balancer INFO root] prepared 0/10 changes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081975 data_alloc: 218103808 data_used: 3538944
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276897792 unmapped: 39624704 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087095 data_alloc: 218103808 data_used: 4190208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276905984 unmapped: 39616512 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276914176 unmapped: 39608320 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087095 data_alloc: 218103808 data_used: 4190208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eedca000/0x0/0x4ffc00000, data 0x1ac4067/0x1c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xf1df9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.632834435s of 12.789543152s, submitted: 4
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278503424 unmapped: 38019072 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed507000/0x0/0x4ffc00000, data 0x21e7067/0x2377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277700608 unmapped: 38821888 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3138289 data_alloc: 218103808 data_used: 4243456
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277708800 unmapped: 38813696 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4cf000/0x0/0x4ffc00000, data 0x221f067/0x23af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3145465 data_alloc: 218103808 data_used: 4243456
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4c7000/0x0/0x4ffc00000, data 0x2227067/0x23b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.898307800s of 12.210510254s, submitted: 59
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 38797312 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902c00 session 0x562c628ede00
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c61e56f00
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 38780928 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4c7000/0x0/0x4ffc00000, data 0x2227067/0x23b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 38780928 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3146537 data_alloc: 218103808 data_used: 4231168
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed4bf000/0x0/0x4ffc00000, data 0x222f067/0x23bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277749760 unmapped: 38772736 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f70800 session 0x562c646baf00
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6067/0x1b76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074399 data_alloc: 218103808 data_used: 3526656
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277757952 unmapped: 38764544 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd08000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074399 data_alloc: 218103808 data_used: 3526656
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508622169s of 13.601557732s, submitted: 32
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c6475e780
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dcc00 session 0x562c628f0000
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 277766144 unmapped: 38756352 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074223 data_alloc: 218103808 data_used: 3526656
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd09000/0x0/0x4ffc00000, data 0x19e6044/0x1b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c6367cd20
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 36K writes, 140K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 36K writes, 13K syncs, 2.70 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1788 writes, 7086 keys, 1788 commit groups, 1.0 writes per commit group, ingest: 8.42 MB, 0.01 MB/s#012Interval WAL: 1788 writes, 697 syncs, 2.57 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: mgrc ms_handle_reset ms_handle_reset con 0x562c64994400
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: mgrc handle_mgr_configure stats_period=5
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2956488 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 42098688 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee882000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274432000 unmapped: 42090496 heap: 316522496 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.216686249s of 49.199211121s, submitted: 27
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2958270 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c6b307a40
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6963f000 session 0x562c6363ef00
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c646d6780
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dcc00 session 0x562c628dc1e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c646d7860
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3003011 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274448384 unmapped: 46276608 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6315f0e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274382848 unmapped: 46342144 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274391040 unmapped: 46333952 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274399232 unmapped: 46325760 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026672 data_alloc: 218103808 data_used: 3313664
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045552 data_alloc: 218103808 data_used: 5947392
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 274423808 unmapped: 46301184 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.794466019s of 20.170822144s, submitted: 32
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3045624 data_alloc: 218103808 data_used: 5947392
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276774912 unmapped: 43950080 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ee2d5000/0x0/0x4ffc00000, data 0x141c011/0x15a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,20])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 275988480 unmapped: 44736512 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276111360 unmapped: 44613632 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092030 data_alloc: 218103808 data_used: 6160384
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd83000/0x0/0x4ffc00000, data 0x196e011/0x1afb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd62000/0x0/0x4ffc00000, data 0x198f011/0x1b1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3090850 data_alloc: 218103808 data_used: 6164480
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.045855522s of 12.648342133s, submitted: 63
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276324352 unmapped: 44400640 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4edd62000/0x0/0x4ffc00000, data 0x198f011/0x1b1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276332544 unmapped: 44392448 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276340736 unmapped: 44384256 heap: 320724992 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092395 data_alloc: 218103808 data_used: 6164480
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 40468480 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e400 session 0x562c64ef6960
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54e400 session 0x562c643a61e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6551b2c0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 48259072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151015 data_alloc: 218103808 data_used: 6164480
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c649c5000 session 0x562c6b3061e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f71400 session 0x562c6274ab40
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.307779789s of 10.068083763s, submitted: 53
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 48250880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3152489 data_alloc: 218103808 data_used: 6164480
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276692992 unmapped: 48234496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874c00 session 0x562c646d9680
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276701184 unmapped: 48226304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279396352 unmapped: 45531136 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3210696 data_alloc: 234881024 data_used: 14290944
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279412736 unmapped: 45514752 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.747380257s of 11.106559753s, submitted: 63
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3211048 data_alloc: 234881024 data_used: 14290944
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279429120 unmapped: 45498368 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279437312 unmapped: 45490176 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed587000/0x0/0x4ffc00000, data 0x216a011/0x22f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1037f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279437312 unmapped: 45490176 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282304512 unmapped: 42622976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228632 data_alloc: 234881024 data_used: 14290944
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282304512 unmapped: 42622976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 284680192 unmapped: 40247296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebf33000/0x0/0x4ffc00000, data 0x261e011/0x27ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.735648155s of 10.060455322s, submitted: 31
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247268 data_alloc: 234881024 data_used: 14290944
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebebb000/0x0/0x4ffc00000, data 0x2696011/0x2823000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeb6000/0x0/0x4ffc00000, data 0x269b011/0x2828000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3249180 data_alloc: 234881024 data_used: 14315520
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeb0000/0x0/0x4ffc00000, data 0x26a1011/0x282e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3249180 data_alloc: 234881024 data_used: 14315520
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.374366760s of 11.633337975s, submitted: 16
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253572 data_alloc: 234881024 data_used: 14413824
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.346201897s of 15.059863091s, submitted: 4
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebeac000/0x0/0x4ffc00000, data 0x26a5011/0x2832000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874c00 session 0x562c64efe960
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c628dda40
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ebea5000/0x0/0x4ffc00000, data 0x26ac011/0x2839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253376 data_alloc: 234881024 data_used: 14413824
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282861568 unmapped: 42065920 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x199c011/0x1b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282861568 unmapped: 42065920 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c69f3f800 session 0x562c64ef7860
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099936 data_alloc: 218103808 data_used: 6164480
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbb5000/0x0/0x4ffc00000, data 0x199c011/0x1b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.364779472s of 12.359013557s, submitted: 40
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c6b307a40
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41400 session 0x562c6551bc20
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276013056 unmapped: 48914432 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2973676 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed307000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c64effc20
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972776 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c62a46f00
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dd800 session 0x562c6274a780
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6471a3c0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64999000 session 0x562c646d94a0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.197753906s of 44.837074280s, submitted: 31
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed708000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c627dfc00 session 0x562c6475f4a0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f1400 session 0x562c6b3065a0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c6467bc20
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661dd800 session 0x562c6363f0e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c65477c00 session 0x562c6315f860
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c646e2b40
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c645674a0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276029440 unmapped: 48898048 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3001882 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c64566780
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.026555061s of 13.081507683s, submitted: 4
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c68fe9000 session 0x562c628ed4a0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276045824 unmapped: 48881664 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030199 data_alloc: 218103808 data_used: 3846144
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030199 data_alloc: 218103808 data_used: 3846144
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276054016 unmapped: 48873472 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed368000/0x0/0x4ffc00000, data 0x11e9faf/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.270869255s of 12.386258125s, submitted: 6
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 276652032 unmapped: 48275456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278601728 unmapped: 46325760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278700032 unmapped: 46227456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3067793 data_alloc: 218103808 data_used: 4562944
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3071175 data_alloc: 218103808 data_used: 4702208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3071495 data_alloc: 218103808 data_used: 4710400
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed001000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278740992 unmapped: 46186496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c6279b0e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c646ba000
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c61e57a40
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c67dd4c00 session 0x562c64effe00
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.001466751s of 16.817237854s, submitted: 48
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279355392 unmapped: 45572096 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c68fe9000 session 0x562c628ec000
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3088659 data_alloc: 218103808 data_used: 4710400
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c62748000
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c6363e1e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62863400 session 0x562c64566d20
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c661ddc00 session 0x562c61e57a40
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3088819 data_alloc: 218103808 data_used: 4714496
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4eceab000/0x0/0x4ffc00000, data 0x16a5fbf/0x1833000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 278896640 unmapped: 46030848 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.341195107s of 10.009945869s, submitted: 15
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62862c00 session 0x562c6279b0e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092880 data_alloc: 218103808 data_used: 4714496
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279044096 unmapped: 45883392 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279543808 unmapped: 45383680 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103412 data_alloc: 218103808 data_used: 6053888
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103412 data_alloc: 218103808 data_used: 6053888
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ece87000/0x0/0x4ffc00000, data 0x16c9fbf/0x1857000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 279699456 unmapped: 45228032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.366211891s of 13.564207077s, submitted: 1
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 42450944 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec794000/0x0/0x4ffc00000, data 0x1da3fbf/0x1f31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172036 data_alloc: 218103808 data_used: 7139328
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec790000/0x0/0x4ffc00000, data 0x1da7fbf/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec790000/0x0/0x4ffc00000, data 0x1da7fbf/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3172664 data_alloc: 218103808 data_used: 7143424
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec7a6000/0x0/0x4ffc00000, data 0x1daafbf/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3166132 data_alloc: 218103808 data_used: 7143424
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ec7a6000/0x0/0x4ffc00000, data 0x1daafbf/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1151f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3166772 data_alloc: 218103808 data_used: 7204864
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 42270720 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902400 session 0x562c6363f0e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.399305344s of 18.292930603s, submitted: 109
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f71800 session 0x562c6551a780
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282681344 unmapped: 42246144 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64902400 session 0x562c645665a0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 podman[476794]: 2025-10-02 09:50:28.260401766 +0000 UTC m=+0.088258458 container create eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080618 data_alloc: 218103808 data_used: 4771840
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecbf7000/0x0/0x4ffc00000, data 0x154afaf/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c62874400 session 0x562c62a46000
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64e73c00 session 0x562c64efe1e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3082950 data_alloc: 218103808 data_used: 4759552
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280166400 unmapped: 44761088 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.822414398s of 10.087261200s, submitted: 43
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280166400 unmapped: 44761088 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c643a4c00 session 0x562c64eff680
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280174592 unmapped: 44752896 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280182784 unmapped: 44744704 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280190976 unmapped: 44736512 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280199168 unmapped: 44728320 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280207360 unmapped: 44720128 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280215552 unmapped: 44711936 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280223744 unmapped: 44703744 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280231936 unmapped: 44695552 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280240128 unmapped: 44687360 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280248320 unmapped: 44679168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280256512 unmapped: 44670976 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280264704 unmapped: 44662784 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280272896 unmapped: 44654592 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280281088 unmapped: 44646400 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989004 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f8000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 280289280 unmapped: 44638208 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.534408569s of 101.709854126s, submitted: 14
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c6551af00
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c6456ad20
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c6475e780
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41000 session 0x562c628e2d20
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f63800 session 0x562c646f8f00
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c64ef61e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017460 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfaa000/0x0/0x4ffc00000, data 0x1197faf/0x1324000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64901800 session 0x562c646ba960
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282599424 unmapped: 42328064 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64f41000 session 0x562c61e4eb40
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c6c54fc00 session 0x562c62732000
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 42319872 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282615808 unmapped: 42311680 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfa8000/0x0/0x4ffc00000, data 0x1197fe2/0x1326000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3043767 data_alloc: 218103808 data_used: 3309568
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ecfa8000/0x0/0x4ffc00000, data 0x1197fe2/0x1326000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64998000 session 0x562c627330e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c630f0000 session 0x562c646d94a0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.375305176s of 10.585134506s, submitted: 14
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 ms_handle_reset con 0x562c64381c00 session 0x562c62474d20
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 42303488 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 42295296 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282640384 unmapped: 42287104 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993866 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 heartbeat osd_stat(store_statfs(0x4ed2f7000/0x0/0x4ffc00000, data 0xe49faf/0xfd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 30.151807785s of 30.341138840s, submitted: 13
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 42278912 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 42262528 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 291 ms_handle_reset con 0x562c68fe8800 session 0x562c62475e00
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997150 data_alloc: 218103808 data_used: 53248
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 42262528 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2997150 data_alloc: 218103808 data_used: 53248
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 291 heartbeat osd_stat(store_statfs(0x4ed2f5000/0x0/0x4ffc00000, data 0xe4bb5d/0xfd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282673152 unmapped: 42254336 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282689536 unmapped: 42237952 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282697728 unmapped: 42229760 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282705920 unmapped: 42221568 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282714112 unmapped: 42213376 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282730496 unmapped: 42196992 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282738688 unmapped: 42188800 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 42180608 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282755072 unmapped: 42172416 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282771456 unmapped: 42156032 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282779648 unmapped: 42147840 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282787840 unmapped: 42139648 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282796032 unmapped: 42131456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282796032 unmapped: 42131456 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282804224 unmapped: 42123264 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282812416 unmapped: 42115072 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 42106880 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 42098688 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282836992 unmapped: 42090496 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282845184 unmapped: 42082304 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 42074112 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 42057728 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000284 data_alloc: 218103808 data_used: 57344
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 42049536 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 290488320 unmapped: 34439168 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 ms_handle_reset con 0x562c64687c00 session 0x562c6315e1e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012284 data_alloc: 218103808 data_used: 6860800
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289521664 unmapped: 35405824 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 heartbeat osd_stat(store_statfs(0x4ed2f2000/0x0/0x4ffc00000, data 0xe4d5c0/0xfdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012284 data_alloc: 218103808 data_used: 6860800
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 35389440 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 134.109100342s of 134.325363159s, submitted: 30
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 293 ms_handle_reset con 0x562c64900400 session 0x562c6315ed20
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 293 heartbeat osd_stat(store_statfs(0x4edaef000/0x0/0x4ffc00000, data 0x64f191/0x7de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941203 data_alloc: 218103808 data_used: 45056
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 294 ms_handle_reset con 0x562c630f0000 session 0x562c64efe000
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 294 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 294 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902681 data_alloc: 218103808 data_used: 53248
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 294 ms_handle_reset con 0x562c64bd9c00 session 0x562c6315e780
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.945940018s of 11.156617165s, submitted: 55
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 295 heartbeat osd_stat(store_statfs(0x4edf5c000/0x0/0x4ffc00000, data 0x1e0d62/0x371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905655 data_alloc: 218103808 data_used: 53248
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286040064 unmapped: 38887424 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf5a000/0x0/0x4ffc00000, data 0x1e27c5/0x374000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 ms_handle_reset con 0x562c64687400 session 0x562c6456af00
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286048256 unmapped: 38879232 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23485 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 37K writes, 145K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
Cumulative WAL: 37K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1178 writes, 4161 keys, 1178 commit groups, 1.0 writes per commit group, ingest: 5.22 MB, 0.01 MB/s
Interval WAL: 1178 writes, 469 syncs, 2.51 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562c60ff8dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286056448 unmapped: 38871040 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286072832 unmapped: 38854656 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286081024 unmapped: 38846464 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286097408 unmapped: 38830080 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286105600 unmapped: 38821888 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286113792 unmapped: 38813696 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286130176 unmapped: 38797312 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286138368 unmapped: 38789120 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf56000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286146560 unmapped: 38780928 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 systemd[1]: Started libpod-conmon-eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc.scope.
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910446 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 102.574424744s of 102.733085632s, submitted: 24
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286154752 unmapped: 38772736 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286203904 unmapped: 38723584 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2909566 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286236672 unmapped: 38690816 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 heartbeat osd_stat(store_statfs(0x4edf57000/0x0/0x4ffc00000, data 0x1e4342/0x377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 38666240 heap: 324927488 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 294649856 unmapped: 38674432 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2968052 data_alloc: 218103808 data_used: 61440
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286261248 unmapped: 47063040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.070027351s of 10.197218895s, submitted: 103
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 297 ms_handle_reset con 0x562c6547c800 session 0x562c61dd25a0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286277632 unmapped: 47046656 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286285824 unmapped: 47038464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74f000/0x0/0x4ffc00000, data 0x9e5f38/0xb7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 ms_handle_reset con 0x562c6c54e800 session 0x562c646f8d20
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 podman[476794]: 2025-10-02 09:50:28.225200696 +0000 UTC m=+0.053057418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286302208 unmapped: 47022080 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286310400 unmapped: 47013888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286318592 unmapped: 47005696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74c000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286326784 unmapped: 46997504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286343168 unmapped: 46981120 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979755 data_alloc: 218103808 data_used: 69632
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 34.268466949s of 34.596256256s, submitted: 25
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286351360 unmapped: 46972928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 heartbeat osd_stat(store_statfs(0x4ed74d000/0x0/0x4ffc00000, data 0x9e7ab5/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286359552 unmapped: 46964736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286384128 unmapped: 46940160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed749000/0x0/0x4ffc00000, data 0x9e9686/0xb84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286384128 unmapped: 46940160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287432704 unmapped: 45891584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982217 data_alloc: 218103808 data_used: 77824
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 299 ms_handle_reset con 0x562c62874800 session 0x562c624745a0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed74a000/0x0/0x4ffc00000, data 0x9e9686/0xb84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286392320 unmapped: 46931968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981401 data_alloc: 218103808 data_used: 77824
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.584912777s of 10.009610176s, submitted: 37
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286408704 unmapped: 46915584 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985575 data_alloc: 218103808 data_used: 86016
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286416896 unmapped: 46907392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286425088 unmapped: 46899200 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286441472 unmapped: 46882816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 46874624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 46866432 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 46858240 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 46850048 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286482432 unmapped: 46841856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 46833664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 46825472 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 46809088 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 46800896 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 46792704 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 46784512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 46768128 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 46759936 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 46751744 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 46735360 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286597120 unmapped: 46727168 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 46718976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286621696 unmapped: 46702592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286629888 unmapped: 46694400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286638080 unmapped: 46686208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286646272 unmapped: 46678016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286654464 unmapped: 46669824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286662656 unmapped: 46661632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2040b99ff67525125f524b889ffae00099dd6f7e45d95dbbaf735db57cc4d04b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 46653440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286679040 unmapped: 46645248 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286687232 unmapped: 46637056 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286703616 unmapped: 46620672 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286711808 unmapped: 46612480 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2040b99ff67525125f524b889ffae00099dd6f7e45d95dbbaf735db57cc4d04b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 05:50:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2040b99ff67525125f524b889ffae00099dd6f7e45d95dbbaf735db57cc4d04b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286720000 unmapped: 46604288 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286728192 unmapped: 46596096 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2040b99ff67525125f524b889ffae00099dd6f7e45d95dbbaf735db57cc4d04b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286736384 unmapped: 46587904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: mgrc ms_handle_reset ms_handle_reset con 0x562c62b25400
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: mgrc handle_mgr_configure stats_period=5
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286744576 unmapped: 46579712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286752768 unmapped: 46571520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286760960 unmapped: 46563328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 46546944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 46538752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 46530560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 46522368 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 46514176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 46514176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286818304 unmapped: 46505984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286826496 unmapped: 46497792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286834688 unmapped: 46489600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286842880 unmapped: 46481408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286851072 unmapped: 46473216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286851072 unmapped: 46473216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286859264 unmapped: 46465024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286875648 unmapped: 46448640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286883840 unmapped: 46440448 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286892032 unmapped: 46432256 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286900224 unmapped: 46424064 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286908416 unmapped: 46415872 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286908416 unmapped: 46415872 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286924800 unmapped: 46399488 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286932992 unmapped: 46391296 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286941184 unmapped: 46383104 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286949376 unmapped: 46374912 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286957568 unmapped: 46366720 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286973952 unmapped: 46350336 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286982144 unmapped: 46342144 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286990336 unmapped: 46333952 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 286998528 unmapped: 46325760 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287006720 unmapped: 46317568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287023104 unmapped: 46301184 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287031296 unmapped: 46292992 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 05:50:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1299664923' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 05:50:28 np0005465604 podman[476794]: 2025-10-02 09:50:28.371203677 +0000 UTC m=+0.199060379 container init eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287047680 unmapped: 46276608 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 46260224 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 46252032 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 46243840 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287088640 unmapped: 46235648 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 46219264 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 46211072 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 46211072 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 46202880 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 46194688 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 46178304 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 46170112 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 46161920 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 46145536 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 46145536 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 46137344 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 46129152 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 46120960 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 46104576 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287227904 unmapped: 46096384 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 46088192 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 46080000 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 46080000 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 46071808 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 46063616 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 46055424 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 46047232 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 podman[476794]: 2025-10-02 09:50:28.380318702 +0000 UTC m=+0.208175394 container start eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 05:50:28 np0005465604 podman[476794]: 2025-10-02 09:50:28.384194782 +0000 UTC m=+0.212051504 container attach eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 46039040 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 46030848 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 46014464 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 38K writes, 145K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 38K writes, 14K syncs, 2.69 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 338 writes, 741 keys, 338 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s#012Interval WAL: 338 writes, 158 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 45989888 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 45981696 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 45973504 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 45965312 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287375360 unmapped: 45948928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287375360 unmapped: 45948928 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 45940736 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 45932544 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 45924352 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 45916160 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985895 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ed746000/0x0/0x4ffc00000, data 0x9eb0e9/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 45907968 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 536.308471680s of 536.377807617s, submitted: 15
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 45899776 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985147 data_alloc: 218103808 data_used: 94208
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 301 ms_handle_reset con 0x562c64900800 session 0x562c64d21c20
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 301 heartbeat osd_stat(store_statfs(0x4edf44000/0x0/0x4ffc00000, data 0x1ecc74/0x388000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 45883392 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 302 ms_handle_reset con 0x562c647e6800 session 0x562c6279be00
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935656 data_alloc: 218103808 data_used: 110592
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 302 heartbeat osd_stat(store_statfs(0x4edf43000/0x0/0x4ffc00000, data 0x1ee812/0x389000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287465472 unmapped: 45858816 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 45817856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.243688583s of 10.019772530s, submitted: 100
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 45809664 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942373 data_alloc: 218103808 data_used: 114688
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287563776 unmapped: 45760512 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 ms_handle_reset con 0x562c649f1800 session 0x562c628ec3c0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 45719552 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 45694976 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 45686784 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 45686784 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 45678592 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 45670400 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287662080 unmapped: 45662208 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 45654016 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 45645824 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976843 data_alloc: 218103808 data_used: 118784
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 45637632 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 45629440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 45629440 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.478626251s of 59.437236786s, submitted: 96
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 304 handle_osd_map epochs [305,305], i have 305, src has [1,305]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edacd000/0x0/0x4ffc00000, data 0x661e25/0x801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 305 ms_handle_reset con 0x562c6bb59400 session 0x562c628e30e0
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949134 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf3b000/0x0/0x4ffc00000, data 0x1f39c3/0x392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edf3b000/0x0/0x4ffc00000, data 0x1f39c3/0x392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 45563904 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 45555712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 45555712 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 45547520 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 45539328 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 45531136 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 45522944 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 45514752 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287817728 unmapped: 45506560 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 45490176 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 45481984 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 45473792 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 45465600 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 45457408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 45457408 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 45449216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 45449216 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287883264 unmapped: 45441024 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287899648 unmapped: 45424640 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 45293568 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'config show' '{prefix=config show}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 45817856 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 45850624 heap: 333324288 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'log dump' '{prefix=log dump}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 298065920 unmapped: 46301184 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'perf dump' '{prefix=perf dump}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'perf schema' '{prefix=perf schema}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287055872 unmapped: 57311232 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 57303040 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287064064 unmapped: 57303040 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287072256 unmapped: 57294848 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287080448 unmapped: 57286656 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287096832 unmapped: 57270272 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287096832 unmapped: 57270272 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287096832 unmapped: 57270272 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 57262080 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 57262080 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 57262080 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 57262080 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287105024 unmapped: 57262080 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 57253888 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 57245696 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 57245696 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287121408 unmapped: 57245696 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 57237504 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 57237504 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 57237504 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 57237504 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287129600 unmapped: 57237504 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287137792 unmapped: 57229312 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287137792 unmapped: 57229312 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287137792 unmapped: 57229312 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287137792 unmapped: 57229312 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287137792 unmapped: 57229312 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 57221120 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 57221120 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 57221120 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287145984 unmapped: 57221120 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 57212928 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 57212928 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 57212928 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 57212928 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287154176 unmapped: 57212928 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 57204736 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287162368 unmapped: 57204736 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287170560 unmapped: 57196544 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287178752 unmapped: 57188352 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287186944 unmapped: 57180160 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287195136 unmapped: 57171968 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 57163776 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 57163776 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287203328 unmapped: 57163776 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287211520 unmapped: 57155584 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287219712 unmapped: 57147392 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57131008 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57122816 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57122816 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57122816 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57122816 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57114624 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57114624 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57114624 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57114624 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57114624 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57098240 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57090048 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57090048 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57090048 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57090048 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57081856 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57081856 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57081856 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57081856 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 57073664 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 57073664 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287301632 unmapped: 57065472 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287309824 unmapped: 57057280 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 57049088 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 57032704 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 57032704 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287334400 unmapped: 57032704 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287342592 unmapped: 57024512 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 57016320 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287350784 unmapped: 57016320 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 57008128 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 57008128 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287358976 unmapped: 57008128 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287367168 unmapped: 56999936 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287367168 unmapped: 56999936 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287367168 unmapped: 56999936 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287383552 unmapped: 56983552 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 56975360 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 56975360 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287391744 unmapped: 56975360 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 56967168 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 56967168 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 56967168 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 56967168 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 56967168 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287408128 unmapped: 56958976 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 56950784 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287416320 unmapped: 56950784 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287424512 unmapped: 56942592 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 56926208 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 56926208 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287440896 unmapped: 56926208 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287449088 unmapped: 56918016 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287457280 unmapped: 56909824 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287457280 unmapped: 56909824 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287457280 unmapped: 56909824 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287457280 unmapped: 56909824 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287457280 unmapped: 56909824 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287473664 unmapped: 56893440 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287498240 unmapped: 56868864 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 56860672 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 56860672 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287506432 unmapped: 56860672 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 56852480 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 56852480 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 56852480 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 56852480 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287514624 unmapped: 56852480 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287522816 unmapped: 56844288 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287522816 unmapped: 56844288 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287522816 unmapped: 56844288 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287531008 unmapped: 56836096 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287531008 unmapped: 56836096 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287531008 unmapped: 56836096 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287531008 unmapped: 56836096 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287531008 unmapped: 56836096 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287539200 unmapped: 56827904 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287539200 unmapped: 56827904 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287539200 unmapped: 56827904 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287547392 unmapped: 56819712 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287547392 unmapped: 56819712 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287547392 unmapped: 56819712 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287547392 unmapped: 56819712 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287547392 unmapped: 56819712 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287555584 unmapped: 56811520 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287563776 unmapped: 56803328 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287563776 unmapped: 56803328 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287563776 unmapped: 56803328 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287571968 unmapped: 56795136 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287571968 unmapped: 56795136 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287571968 unmapped: 56795136 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287571968 unmapped: 56795136 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287571968 unmapped: 56795136 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287580160 unmapped: 56786944 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287580160 unmapped: 56786944 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287580160 unmapped: 56786944 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287588352 unmapped: 56778752 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287588352 unmapped: 56778752 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287588352 unmapped: 56778752 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287588352 unmapped: 56778752 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287588352 unmapped: 56778752 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287596544 unmapped: 56770560 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56762368 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56754176 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56737792 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56729600 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 56721408 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287670272 unmapped: 56696832 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 56688640 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287678464 unmapped: 56688640 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287686656 unmapped: 56680448 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287694848 unmapped: 56672256 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287711232 unmapped: 56655872 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287719424 unmapped: 56647680 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287719424 unmapped: 56647680 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287727616 unmapped: 56639488 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 38K writes, 146K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 38K writes, 14K syncs, 2.68 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 458 writes, 1050 keys, 458 commit groups, 1.0 writes per commit group, ingest: 0.46 MB, 0.00 MB/s#012Interval WAL: 458 writes, 208 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56623104 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287768576 unmapped: 56598528 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 56590336 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 56590336 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287776768 unmapped: 56590336 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287784960 unmapped: 56582144 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287793152 unmapped: 56573952 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287801344 unmapped: 56565760 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2952108 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 538.921264648s of 539.136047363s, submitted: 54
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,0,1])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287809536 unmapped: 56557568 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287825920 unmapped: 56541184 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 56532992 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951300 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,1,0,1])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 56532992 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287834112 unmapped: 56532992 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287842304 unmapped: 56524800 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287850496 unmapped: 56516608 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287858688 unmapped: 56508416 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 56500224 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287875072 unmapped: 56492032 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: bluestore.MempoolThread(0x562c610d7b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951228 data_alloc: 218103808 data_used: 126976
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'config show' '{prefix=config show}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0x1f5426/0x395000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1192f9c6), peers [0,1] op hist [])
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 56336384 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: prioritycache tune_memory target: 4294967296 mapped: 287760384 unmapped: 56606720 heap: 344367104 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:28 np0005465604 ceph-osd[90385]: do_command 'log dump' '{prefix=log dump}'
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23489 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:50:28 np0005465604 ceph-mgr[74774]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  2 05:50:28 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 05:50:28 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/250851339' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 05:50:29 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:29 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23493 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 05:50:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3852882195' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 05:50:29 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23497 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  2 05:50:29 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 05:50:29 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1843826777' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]: [
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:    {
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:        "available": false,
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:        "ceph_device": false,
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:        "lsm_data": {},
Oct  2 05:50:29 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:        "lvs": [],
Oct  2 05:50:29 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:        "path": "/dev/sr0",
Oct  2 05:50:29 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:        "rejected_reasons": [
Oct  2 05:50:29 np0005465604 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "Has a FileSystem",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "Insufficient space (<5GB)"
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:        ],
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:        "sys_api": {
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "actuators": null,
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "device_nodes": "sr0",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "devname": "sr0",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "human_readable_size": "482.00 KB",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "id_bus": "ata",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "model": "QEMU DVD-ROM",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "nr_requests": "2",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "parent": "/dev/sr0",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "partitions": {},
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "path": "/dev/sr0",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "removable": "1",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "rev": "2.5+",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "ro": "0",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "rotational": "0",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "sas_address": "",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "sas_device_handle": "",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "scheduler_mode": "mq-deadline",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "sectors": 0,
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "sectorsize": "2048",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "size": 493568.0,
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "support_discard": "2048",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "type": "disk",
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:            "vendor": "QEMU"
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:        }
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]:    }
Oct  2 05:50:29 np0005465604 charming_keldysh[476830]: ]
Oct  2 05:50:29 np0005465604 systemd[1]: libpod-eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc.scope: Deactivated successfully.
Oct  2 05:50:29 np0005465604 podman[476794]: 2025-10-02 09:50:29.91638238 +0000 UTC m=+1.744239072 container died eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  2 05:50:29 np0005465604 systemd[1]: libpod-eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc.scope: Consumed 1.566s CPU time.
Oct  2 05:50:29 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23501 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  2 05:50:30 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  2 05:50:30 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069847584' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  2 05:50:30 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23505 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  2 05:50:30 np0005465604 systemd[1]: var-lib-containers-storage-overlay-2040b99ff67525125f524b889ffae00099dd6f7e45d95dbbaf735db57cc4d04b-merged.mount: Deactivated successfully.
Oct  2 05:50:30 np0005465604 nova_compute[260603]: 2025-10-02 09:50:30.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:31 np0005465604 podman[476794]: 2025-10-02 09:50:31.079247881 +0000 UTC m=+2.907104603 container remove eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keldysh, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 05:50:31 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  2 05:50:31 np0005465604 systemd[1]: libpod-conmon-eb208ac0740e4242a3cdfebf5fd2fcbd0d5eb6c4d44d3dc7e87965d0cf43d0cc.scope: Deactivated successfully.
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064757745' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:50:31 np0005465604 ceph-mgr[74774]: log_channel(audit) log [DBG] : from='client.23513 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  2 05:50:31 np0005465604 ceph-mgr[74774]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  2 05:50:31 np0005465604 ceph-a52e644f-f702-594c-a648-813e3e0df2b1-mgr-compute-0-qlmhsi[74770]: 2025-10-02T09:50:31.538+0000 7f67e8e61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2803204799' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:50:31 np0005465604 nova_compute[260603]: 2025-10-02 09:50:31.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 05:50:31 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:50:31 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 8ad3e822-a13a-4470-920d-644c3a6f059b does not exist
Oct  2 05:50:31 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 3e900b88-0d3b-4103-93c2-00efff6a8a23 does not exist
Oct  2 05:50:31 np0005465604 ceph-mgr[74774]: [progress WARNING root] complete: ev 43575c45-6f8e-4858-a3c8-d2531eb1efd3 does not exist
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3556137666' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717055950' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3178428276' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219174880' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' 
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: from='mgr.14132 192.168.122.100:0/3005504038' entity='mgr.compute-0.qlmhsi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 05:50:32 np0005465604 podman[479855]: 2025-10-02 09:50:32.704818864 +0000 UTC m=+0.029847282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 05:50:32 np0005465604 podman[479855]: 2025-10-02 09:50:32.833098301 +0000 UTC m=+0.158126719 container create b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gould, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  2 05:50:32 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2972735957' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  2 05:50:33 np0005465604 systemd[1]: Started libpod-conmon-b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b.scope.
Oct  2 05:50:33 np0005465604 systemd[1]: Started libcrun container.
Oct  2 05:50:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  2 05:50:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3675472947' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3659071 data_alloc: 234881024 data_used: 21536768
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 53297152 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.032077789s of 21.478837967s, submitted: 37
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690449 data_alloc: 234881024 data_used: 21581824
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8d5b000/0x0/0x4ffc00000, data 0x230d83f/0x24a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,18])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350855168 unmapped: 50298880 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85d6000/0x0/0x4ffc00000, data 0x2a8c83f/0x2c22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350863360 unmapped: 50290688 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85b2000/0x0/0x4ffc00000, data 0x2aad83f/0x2c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,4])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350920704 unmapped: 50233344 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733393 data_alloc: 234881024 data_used: 22216704
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740617 data_alloc: 234881024 data_used: 22421504
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.158976555s of 12.613157272s, submitted: 94
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da328e800 session 0x562da170be00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350937088 unmapped: 50216960 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3738385 data_alloc: 234881024 data_used: 22425600
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350945280 unmapped: 50208768 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85a5000/0x0/0x4ffc00000, data 0x2ac383f/0x2c59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,3])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8a9b000/0x0/0x4ffc00000, data 0x192083f/0x1ab6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562d9f949c00 session 0x562da060d680
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525679 data_alloc: 218103808 data_used: 12595200
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 49152000 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x191c7dd/0x1ab1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3525679 data_alloc: 218103808 data_used: 12595200
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562d9fa010e0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.587030411s of 13.759822845s, submitted: 32
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562d9f4b05a0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 49143808 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413519 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e85ad000/0x0/0x4ffc00000, data 0x11d77dd/0x136c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349446144 unmapped: 51707904 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da06cf000 session 0x562d9f910f00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 46K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2280 writes, 9757 keys, 2280 commit groups, 1.0 writes per commit group, ingest: 10.53 MB, 0.02 MB/s#012Interval WAL: 2280 writes, 833 syncs, 2.74 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349470720 unmapped: 51683328 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1cb7800 session 0x562da1ad5680
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: mgrc ms_handle_reset ms_handle_reset con 0x562da328f400
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: mgrc handle_mgr_configure stats_period=5
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da40e3400 session 0x562da1f84960
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da02c0d20
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349478912 unmapped: 51675136 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412927 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 51666944 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1a1a000
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1700f00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da02fc960
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562d9f8d9c20
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.360366821s of 48.620044708s, submitted: 32
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 357384192 unmapped: 43769856 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1964000 session 0x562da1e9be00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da03d0000
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1f185a0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473234 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1ae43c0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1e9a5a0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8904000/0x0/0x4ffc00000, data 0x15c57dd/0x175a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350584832 unmapped: 50569216 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3473010 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3298400 session 0x562d9f8d6780
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8904000/0x0/0x4ffc00000, data 0x15c57dd/0x175a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1a1b2c0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350593024 unmapped: 50561024 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562d9f988f00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da18b4b40
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 50257920 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350896128 unmapped: 50257920 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529512 data_alloc: 218103808 data_used: 15495168
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3529512 data_alloc: 218103808 data_used: 15495168
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.683063507s of 20.067323685s, submitted: 24
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e88df000/0x0/0x4ffc00000, data 0x15e97ed/0x177f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349265920 unmapped: 51888128 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565370 data_alloc: 218103808 data_used: 15523840
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351698944 unmapped: 49455104 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 350699520 unmapped: 50454528 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600800 data_alloc: 218103808 data_used: 15843328
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600800 data_alloc: 218103808 data_used: 15843328
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 49405952 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.367254257s of 13.001447678s, submitted: 92
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 49397760 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e80fa000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 49397760 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 49389568 heap: 401154048 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3677723 data_alloc: 218103808 data_used: 15843328
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562d9f8d7c20
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x2514816/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da170b2c0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3656683 data_alloc: 218103808 data_used: 15843328
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 52994048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.334874630s of 10.473359108s, submitted: 68
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b3000/0x0/0x4ffc00000, data 0x251484f/0x26ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3658552 data_alloc: 218103808 data_used: 15843328
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da2063860
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 52985856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 52977664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 52936704 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 52928512 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703584 data_alloc: 234881024 data_used: 22114304
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703584 data_alloc: 234881024 data_used: 22114304
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 52920320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.022311211s of 15.682323456s, submitted: 60
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e79b2000/0x0/0x4ffc00000, data 0x2514872/0x26ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,2,2,2])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 52387840 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 51822592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744446 data_alloc: 234881024 data_used: 22147072
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e74cd000/0x0/0x4ffc00000, data 0x29f9872/0x2b91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,2])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 51150848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 51077120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 51740672 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3750872 data_alloc: 234881024 data_used: 22351872
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 51740672 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e739e000/0x0/0x4ffc00000, data 0x2b28872/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e739e000/0x0/0x4ffc00000, data 0x2b28872/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.567046165s of 10.198050499s, submitted: 62
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7359000/0x0/0x4ffc00000, data 0x2b6d872/0x2d05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754768 data_alloc: 234881024 data_used: 22589440
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 50692096 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7320000/0x0/0x4ffc00000, data 0x2ba6872/0x2d3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 50446336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3757992 data_alloc: 234881024 data_used: 22646784
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e731b000/0x0/0x4ffc00000, data 0x2bab872/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e731b000/0x0/0x4ffc00000, data 0x2bab872/0x2d43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762988 data_alloc: 234881024 data_used: 22687744
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7317000/0x0/0x4ffc00000, data 0x2baf872/0x2d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7317000/0x0/0x4ffc00000, data 0x2baf872/0x2d47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 50438144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.851724625s of 15.161529541s, submitted: 25
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 50421760 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3764976 data_alloc: 234881024 data_used: 22679552
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72fa000/0x0/0x4ffc00000, data 0x2bcc872/0x2d64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72fa000/0x0/0x4ffc00000, data 0x2bcc872/0x2d64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 50397184 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 50356224 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762328 data_alloc: 234881024 data_used: 22679552
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da060ab40
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 50356224 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72f2000/0x0/0x4ffc00000, data 0x2bd4872/0x2d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.166407108s of 10.001169205s, submitted: 24
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e72f2000/0x0/0x4ffc00000, data 0x2bd4872/0x2d6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 50348032 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762224 data_alloc: 234881024 data_used: 22679552
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 50339840 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8106000/0x0/0x4ffc00000, data 0x1dc0872/0x1f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 50331648 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 50323456 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da2582d20
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354508800 unmapped: 50323456 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8106000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 50315264 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3606314 data_alloc: 218103808 data_used: 15831040
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1ad4d20
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da068cb40
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 50307072 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.448636055s of 10.010467529s, submitted: 54
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8108000/0x0/0x4ffc00000, data 0x1dc07ed/0x1f56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 52862976 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8fea000/0x0/0x4ffc00000, data 0xede7ed/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 52854784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440388 data_alloc: 218103808 data_used: 8228864
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 52854784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da03d1e00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 52846592 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 52838400 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 52830208 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 52822016 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 52813824 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 52805632 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433347 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 52797440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.401294708s of 43.381404877s, submitted: 17
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562d9f989a40
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562d9f4b14a0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562d9f989680
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da329a400 session 0x562da1e9b0e0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da16f41e0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-mgr[74774]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 305 active+clean; 457 KiB data, 1009 MiB used, 59 GiB / 60 GiB avail
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479417 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 52609024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6f000/0x0/0x4ffc00000, data 0x135983f/0x14ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 52600832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479417 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6f000/0x0/0x4ffc00000, data 0x135983f/0x14ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da202be00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da16f52c0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da170be00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.937417030s of 13.075335503s, submitted: 42
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3291000 session 0x562da1f85c20
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481740 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515152 data_alloc: 218103808 data_used: 12677120
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515152 data_alloc: 218103808 data_used: 12677120
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 52592640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352247808 unmapped: 52584448 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8b6d000/0x0/0x4ffc00000, data 0x1359872/0x14f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.298130989s of 12.432414055s, submitted: 6
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8330000/0x0/0x4ffc00000, data 0x1b96872/0x1d2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585360 data_alloc: 218103808 data_used: 13922304
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82ea000/0x0/0x4ffc00000, data 0x1bdc872/0x1d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3585680 data_alloc: 218103808 data_used: 13930496
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583492 data_alloc: 218103808 data_used: 13934592
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355377152 unmapped: 49455104 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.958930016s of 17.081628799s, submitted: 105
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c9000/0x0/0x4ffc00000, data 0x1bfd872/0x1d95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da14b4f00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642870 data_alloc: 218103808 data_used: 13934592
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 49446912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355393536 unmapped: 49438720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3642870 data_alloc: 218103808 data_used: 13934592
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b53000/0x0/0x4ffc00000, data 0x2373872/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562d9f8d9a40
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3643467 data_alloc: 218103808 data_used: 13934592
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 49430528 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.500934601s of 15.522595406s, submitted: 17
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692123 data_alloc: 234881024 data_used: 20611072
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692123 data_alloc: 234881024 data_used: 20611072
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e7b52000/0x0/0x4ffc00000, data 0x2373895/0x250c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 48971776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 45514752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358465536 unmapped: 46366720 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 46350336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 46342144 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e6e9f000/0x0/0x4ffc00000, data 0x3025895/0x31be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.707262039s of 23.078323364s, submitted: 83
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793705 data_alloc: 234881024 data_used: 21045248
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562da16f4f00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 46333952 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562da14b4b40
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82c2000/0x0/0x4ffc00000, data 0x1c03872/0x1d9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3595381 data_alloc: 218103808 data_used: 13934592
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e82bb000/0x0/0x4ffc00000, data 0x1c0a872/0x1da2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 51044352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.594208717s of 11.749836922s, submitted: 32
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da202a780
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da18b5680
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 51036160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3594709 data_alloc: 218103808 data_used: 13930496
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 49971200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da1f18b40
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 49963008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354877440 unmapped: 49954816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 49946624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 49938432 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 49930240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354910208 unmapped: 49922048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 49905664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 49897472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 49889280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 49881088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354959360 unmapped: 49872896 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354983936 unmapped: 49848320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 354992128 unmapped: 49840128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355000320 unmapped: 49831936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900e000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 49823744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3455092 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355008512 unmapped: 49823744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1904f00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562d9f8da5a0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da202a3c0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da1a1bc20
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 100.757339478s of 102.810211182s, submitted: 53
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355016704 unmapped: 49815552 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da1708960
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da1770b40
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562da25825a0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da3297c00 session 0x562da14b4780
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da18b8800 session 0x562da03d0f00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355106816 unmapped: 49725440 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495520 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8be5000/0x0/0x4ffc00000, data 0x12e37ed/0x1479000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da192d800 session 0x562da02fc000
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3522193 data_alloc: 218103808 data_used: 11350016
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e8bc1000/0x0/0x4ffc00000, data 0x13077ed/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da1d55400 session 0x562da068c1e0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da230f800 session 0x562d9f9885a0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 355262464 unmapped: 49569792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.136798859s of 10.462966919s, submitted: 20
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 52199424 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 ms_handle_reset con 0x562da5c8fc00 session 0x562da2583860
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 52191232 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 52183040 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 52174848 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 52166656 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 52158464 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3461511 data_alloc: 218103808 data_used: 8118272
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 52150272 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 heartbeat osd_stat(store_statfs(0x4e900f000/0x0/0x4ffc00000, data 0xeba7dd/0x104f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.090339661s of 30.667470932s, submitted: 30
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 52142080 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 290 handle_osd_map epochs [291,291], i have 291, src has [1,291]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 291 ms_handle_reset con 0x562da18b8800 session 0x562d9f9112c0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462964 data_alloc: 218103808 data_used: 8126464
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 52125696 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462964 data_alloc: 218103808 data_used: 8126464
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e900d000/0x0/0x4ffc00000, data 0xebc37b/0x1050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 52109312 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 52101120 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 52084736 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 52076544 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 52068352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352763904 unmapped: 52068352 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 52060160 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 52051968 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 52043776 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 52035584 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 52035584 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 52027392 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 52019200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352813056 unmapped: 52019200 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 52011008 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 52002816 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 51994624 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 51978240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 51978240 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 51970048 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 51961856 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352878592 unmapped: 51953664 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 51945472 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 51937280 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 51929088 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 51896320 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 51888128 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900a000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 51879936 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466962 data_alloc: 218103808 data_used: 8134656
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352960512 unmapped: 51871744 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 123.357223511s of 123.813163757s, submitted: 62
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 ms_handle_reset con 0x562da192d800 session 0x562da068c960
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 ms_handle_reset con 0x562da1d55400 session 0x562da1ebbc20
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 51855360 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466082 data_alloc: 218103808 data_used: 9117696
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 51847168 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3466082 data_alloc: 218103808 data_used: 9117696
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 51830784 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e900b000/0x0/0x4ffc00000, data 0xebddde/0x1053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.466549873s of 10.502127647s, submitted: 11
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 293 ms_handle_reset con 0x562da230f800 session 0x562da060cb40
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 349487104 unmapped: 55345152 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e9479000/0x0/0x4ffc00000, data 0xa4f97d/0xbe4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429452 data_alloc: 218103808 data_used: 4472832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 294 ms_handle_reset con 0x562da192d000 session 0x562d9f910f00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9c77000/0x0/0x4ffc00000, data 0x251508/0x3e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3358060 data_alloc: 218103808 data_used: 1130496
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e9c77000/0x0/0x4ffc00000, data 0x251508/0x3e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360842 data_alloc: 218103808 data_used: 1130496
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e9c75000/0x0/0x4ffc00000, data 0x252f6b/0x3e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.849654198s of 16.112354279s, submitted: 77
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 ms_handle_reset con 0x562da18b8800 session 0x562da193a5a0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345989120 unmapped: 58843136 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 345997312 unmapped: 58834944 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 48K writes, 187K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1591 writes, 5946 keys, 1591 commit groups, 1.0 writes per commit group, ingest: 5.99 MB, 0.01 MB/s
Interval WAL: 1591 writes, 616 syncs, 2.58 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562d9dffb4b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtabl
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346005504 unmapped: 58826752 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346013696 unmapped: 58818560 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346021888 unmapped: 58810368 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346030080 unmapped: 58802176 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346038272 unmapped: 58793984 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346046464 unmapped: 58785792 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346054656 unmapped: 58777600 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346062848 unmapped: 58769408 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346071040 unmapped: 58761216 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346079232 unmapped: 58753024 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346087424 unmapped: 58744832 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346095616 unmapped: 58736640 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346112000 unmapped: 58720256 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346120192 unmapped: 58712064 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346128384 unmapped: 58703872 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346136576 unmapped: 58695680 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369871 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.635398865s of 97.845252991s, submitted: 11
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346169344 unmapped: 58662912 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346193920 unmapped: 58638336 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 58580992 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c70000/0x0/0x4ffc00000, data 0x254c2c/0x3ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3368383 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 58572800 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370197 data_alloc: 218103808 data_used: 1138688
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e9c6f000/0x0/0x4ffc00000, data 0x254c3c/0x3ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.818475723s of 10.272746086s, submitted: 91
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 297 ms_handle_reset con 0x562da192d800 session 0x562da202a960
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 58564608 heap: 404832256 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 297 handle_osd_map epochs [298,298], i have 298, src has [1,298]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 66936832 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 ms_handle_reset con 0x562da1d55400 session 0x562da2063860
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346333184 unmapped: 66895872 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464490 data_alloc: 218103808 data_used: 1146880
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464650 data_alloc: 218103808 data_used: 1150976
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.875930786s of 35.106796265s, submitted: 17
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec8336/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xec9f07/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464947 data_alloc: 218103808 data_used: 1155072
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 299 ms_handle_reset con 0x562da230f800 session 0x562d9f4b0780
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e8ff8000/0x0/0x4ffc00000, data 0xec9dc3/0x1065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 66805760 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346456064 unmapped: 66772992 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346464256 unmapped: 66764800 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346578944 unmapped: 66650112 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 66600960 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346628096 unmapped: 66600960 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346660864 unmapped: 66568192 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 66543616 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346710016 unmapped: 66519040 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346734592 unmapped: 66494464 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 66486272 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346775552 unmapped: 66453504 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346783744 unmapped: 66445312 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346791936 unmapped: 66437120 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346800128 unmapped: 66428928 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346800128 unmapped: 66428928 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346808320 unmapped: 66420736 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346808320 unmapped: 66420736 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346816512 unmapped: 66412544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346824704 unmapped: 66404352 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346841088 unmapped: 66387968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 66379776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 66371584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 66363392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 66347008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 66347008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 66338816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 66338816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 66330624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 ms_handle_reset con 0x562d9f949c00 session 0x562da1a1ad20
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: mgrc ms_handle_reset ms_handle_reset con 0x562da3171c00
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/860957497
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/860957497,v1:192.168.122.100:6801/860957497]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: mgrc handle_mgr_configure stats_period=5
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 ms_handle_reset con 0x562da06d4400 session 0x562da1907c20
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 ms_handle_reset con 0x562da1965c00 session 0x562da060c5a0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346906624 unmapped: 66322432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 66314240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 66314240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346914816 unmapped: 66314240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 66306048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 66297856 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 66297856 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346939392 unmapped: 66289664 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346947584 unmapped: 66281472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346955776 unmapped: 66273280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 66256896 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346972160 unmapped: 66256896 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346980352 unmapped: 66248704 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 66232320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346996736 unmapped: 66232320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347004928 unmapped: 66224128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347004928 unmapped: 66224128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 66215936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 66199552 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347127808 unmapped: 66101248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 66035712 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346218496 unmapped: 67010560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346226688 unmapped: 67002368 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346243072 unmapped: 66985984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346251264 unmapped: 66977792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346259456 unmapped: 66969600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 66961408 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346267648 unmapped: 66961408 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346275840 unmapped: 66953216 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346284032 unmapped: 66945024 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346300416 unmapped: 66928640 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346308608 unmapped: 66920448 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346316800 unmapped: 66912256 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 66904064 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 66887680 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 66879488 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346357760 unmapped: 66871296 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 66863104 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346374144 unmapped: 66854912 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 66846720 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 66838528 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346398720 unmapped: 66830336 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 66813952 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346431488 unmapped: 66797568 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 66789376 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 66781184 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346472448 unmapped: 66756608 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346480640 unmapped: 66748416 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346488832 unmapped: 66740224 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346497024 unmapped: 66732032 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346505216 unmapped: 66723840 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346513408 unmapped: 66715648 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346521600 unmapped: 66707456 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346529792 unmapped: 66699264 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346537984 unmapped: 66691072 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346546176 unmapped: 66682880 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346554368 unmapped: 66674688 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
Cumulative WAL: 48K writes, 17K syncs, 2.71 writes per sync, written: 0.18 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 278 writes, 624 keys, 278 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
Interval WAL: 278 writes, 128 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346562560 unmapped: 66666496 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-mon[74477]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct  2 05:50:33 np0005465604 ceph-mon[74477]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/145436114' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346570752 unmapped: 66658304 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346587136 unmapped: 66641920 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346595328 unmapped: 66633728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346603520 unmapped: 66625536 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346611712 unmapped: 66617344 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346619904 unmapped: 66609152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346636288 unmapped: 66592768 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 66584576 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 66576384 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3468945 data_alloc: 218103808 data_used: 1163264
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e8ff5000/0x0/0x4ffc00000, data 0xecb826/0x1068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346669056 unmapped: 66560000 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346677248 unmapped: 66551808 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 545.098144531s of 546.776916504s, submitted: 42
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472079 data_alloc: 218103808 data_used: 1167360
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 301 ms_handle_reset con 0x562da1965c00 session 0x562da18b4780
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346693632 unmapped: 66535424 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9463000/0x0/0x4ffc00000, data 0xa5d3e7/0xbfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9463000/0x0/0x4ffc00000, data 0xa5d3e7/0xbfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346701824 unmapped: 66527232 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 301 heartbeat osd_stat(store_statfs(0x4e9463000/0x0/0x4ffc00000, data 0xa5d3e7/0xbfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346710016 unmapped: 66519040 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 302 ms_handle_reset con 0x562da1d55400 session 0x562da202ab40
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 66510848 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390468 data_alloc: 218103808 data_used: 1171456
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9c60000/0x0/0x4ffc00000, data 0x25efb8/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e9c60000/0x0/0x4ffc00000, data 0x25efb8/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346726400 unmapped: 66502656 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346750976 unmapped: 66478080 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 346759168 unmapped: 66469888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.232305527s of 10.012306213s, submitted: 80
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e9c5d000/0x0/0x4ffc00000, data 0x260a1b/0x400000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 ms_handle_reset con 0x562da230f800 session 0x562da02fc5a0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347840512 unmapped: 65388544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347840512 unmapped: 65388544 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 podman[479855]: 2025-10-02 09:50:33.301730319 +0000 UTC m=+0.626758767 container init b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gould, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 65363968 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 65355776 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 65347584 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 65339392 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 65331200 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 65323008 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 65314816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 65314816 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 65306624 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396736 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 65298432 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 65290240 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347947008 unmapped: 65282048 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e984a000/0x0/0x4ffc00000, data 0x262598/0x403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.771060944s of 58.697120667s, submitted: 80
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 305 ms_handle_reset con 0x562da1a59c00 session 0x562da17092c0
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 65257472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e9847000/0x0/0x4ffc00000, data 0x264169/0x406000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 65257472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399710 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347971584 unmapped: 65257472 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 65249280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 65249280 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 305 ms_handle_reset con 0x562da1eaf000 session 0x562da18b4960
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 65241088 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 65241088 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348012544 unmapped: 65216512 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348020736 unmapped: 65208320 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348028928 unmapped: 65200128 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348037120 unmapped: 65191936 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 conmon[479915]: conmon b3b402a26b5e1354a08c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b.scope/container/memory.events
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 65183744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 podman[479855]: 2025-10-02 09:50:33.314612961 +0000 UTC m=+0.639641379 container start b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_gould, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 65183744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348045312 unmapped: 65183744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348061696 unmapped: 65167360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348078080 unmapped: 65150976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348094464 unmapped: 65134592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 65126400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 65126400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348102656 unmapped: 65126400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 65118208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348110848 unmapped: 65118208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 65110016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 65110016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348119040 unmapped: 65110016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 wonderful_gould[479915]: 167 167
Oct  2 05:50:33 np0005465604 systemd[1]: libpod-b3b402a26b5e1354a08ce00b725cd8c5e3c6b217acdcae1d6879736b67a4519b.scope: Deactivated successfully.
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348135424 unmapped: 65093632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348143616 unmapped: 65085440 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348151808 unmapped: 65077248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 65069056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 65069056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348160000 unmapped: 65069056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 65052672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 65052672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348176384 unmapped: 65052672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 348069888 unmapped: 65159168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'config diff' '{prefix=config diff}'
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'config show' '{prefix=config show}'
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347783168 unmapped: 65445888 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 65609728 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'log dump' '{prefix=log dump}'
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347643904 unmapped: 65585152 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'perf dump' '{prefix=perf dump}'
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'perf schema' '{prefix=perf schema}'
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347021312 unmapped: 66207744 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 66191360 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347045888 unmapped: 66183168 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347054080 unmapped: 66174976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347054080 unmapped: 66174976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347054080 unmapped: 66174976 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347062272 unmapped: 66166784 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347070464 unmapped: 66158592 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347078656 unmapped: 66150400 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347086848 unmapped: 66142208 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347095040 unmapped: 66134016 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 66125824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347103232 unmapped: 66125824 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347111424 unmapped: 66117632 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347127808 unmapped: 66101248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347127808 unmapped: 66101248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347127808 unmapped: 66101248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347127808 unmapped: 66101248 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347136000 unmapped: 66093056 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347144192 unmapped: 66084864 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 66076672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 66076672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347152384 unmapped: 66076672 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347160576 unmapped: 66068480 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347168768 unmapped: 66060288 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347176960 unmapped: 66052096 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347185152 unmapped: 66043904 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 66035712 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347193344 unmapped: 66035712 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347201536 unmapped: 66027520 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 66011136 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 66002944 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347234304 unmapped: 65994752 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 65986560 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 65970176 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347267072 unmapped: 65961984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347267072 unmapped: 65961984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347267072 unmapped: 65961984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347267072 unmapped: 65961984 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347275264 unmapped: 65953792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347275264 unmapped: 65953792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347275264 unmapped: 65953792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347275264 unmapped: 65953792 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347283456 unmapped: 65945600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347283456 unmapped: 65945600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e9844000/0x0/0x4ffc00000, data 0x265bcc/0x409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347283456 unmapped: 65945600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: bluestore.MempoolThread(0x562d9e0d9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402684 data_alloc: 218103808 data_used: 1179648
Oct  2 05:50:33 np0005465604 ceph-osd[89321]: prioritycache tune_memory target: 4294967296 mapped: 347283456 unmapped: 65945600 heap: 413229056 old mem: 2845415832 new mem: 2845415832
